A total of 28 healthcare providers and payers, including Geisinger, CVS Health, and Curai Health, have signed voluntary commitments to use artificial intelligence (AI) in a safe and secure manner.

Emory Healthcare, Endeavor Health, Fairview Health Systems, Boston Children’s Hospital, UC San Diego Health, John Muir Health, Mass General Brigham, and UC Davis Health were among the other organisations committed to the cause.

The 28 companies have pledged to develop AI solutions responsibly, minimising the risks posed by the technology.

The aim is to make healthcare more affordable, expand its access, lower clinician burnout, and offer more coordinated care.


In addition, the companies will ensure that AI-based healthcare outcomes are in line with fair, appropriate, valid, effective, and safe (FAVES) AI principles.

These principles require the companies to inform users when they receive content that is largely generated by AI and has not been reviewed by a person.

The companies will also have to comply with a risk management framework for applications driven by foundation models, and to monitor and address any potential harm.

The latest move builds on the commitments of 15 leading AI companies on responsible AI development. These companies include OpenAI, Microsoft, Google, Amazon, Meta, Nvidia, and Salesforce, among others.

In a statement, the White House said: “We must remain vigilant to realise the promise of AI for improving health outcomes. Healthcare is an essential service for all Americans, and quality care sometimes makes the difference between life and death.

“Without appropriate testing, risk mitigations, and human oversight, AI-enabled tools used for clinical decisions can make errors that are costly at best, and dangerous at worst. Absent proper oversight, diagnoses by AI can be biased by gender or race, especially when AI is not trained on data representing the population it is being used to treat.

“Additionally, AI’s ability to collect large volumes of data, and infer new information from disparate datapoints, could create privacy risks for patients. All these risks are vital to address.”