Twenty-eight healthcare companies, including CVS Health, are joining U.S. President Joe Biden's efforts to ensure the safe development of artificial intelligence (AI). This follows commitments from 15 leading AI companies, such as Google, OpenAI, and Microsoft, to develop AI models responsibly.
The Executive Order on Safe, Secure, and Trustworthy AI
President Biden emphasizes the potential and risks of artificial intelligence (AI), especially in healthcare. Through an Executive Order released on October 30, the Biden-Harris Administration committed to urgently and responsibly oversee AI development to enhance health outcomes for Americans, ensuring security and privacy.
"The administration is pulling every lever it has to advance responsible AI in health-related fields," the White House official said.
In response to the Administration's guidance, 28 prominent healthcare providers and payers, including CVS Health, Duke Health, and Mass General Brigham, have declared voluntary commitments that build on the ongoing efforts of the Department of Health and Human Services (HHS), the AI Executive Order, and the earlier commitments from 15 leading AI companies.
Committed to Enhanced Patient Experience
The commitments aim to align industry actions on AI with the "FAVES" principles: Fair, Appropriate, Valid, Effective, and Safe healthcare outcomes. Under these principles, the companies pledge to notify users when content is predominantly AI-generated and has not been human-reviewed. They also commit to a risk management framework for applications built on foundation models, monitoring and addressing potential harms. Additionally, the companies promise to explore and develop AI uses responsibly in ways that contribute to health equity, expand access to care, improve affordability, coordination of care, and outcomes, reduce clinician burnout, and enhance patient experiences.
The White House official also stressed that remaining watchful is crucial to realizing AI's potential to improve health outcomes. Healthcare is a vital service, and the quality of care can be a matter of life and death for Americans. Without rigorous testing, risk management, and human supervision, AI tools used in clinical decisions may make costly or dangerous errors.
A lack of proper oversight, especially if the AI is not trained on representative data, can lead to diagnoses biased by gender or race. Addressing these risks is essential, as AI's capacity to gather extensive data and derive new insights also poses privacy risks for patients.
Participation in initiatives like these directly shapes the Administration's approach. In the President's October AI Executive Order, HHS was assigned a range of tasks to promote safe, secure, and trustworthy AI, including creating frameworks, policies, and potential regulations for responsible AI use. The Order also instructs HHS to establish a program documenting AI-related safety incidents, offer grants for innovation in underserved communities, and ensure that AI deployers in healthcare comply with nondiscrimination laws. These actions complement existing efforts at HHS, such as recent transparency rules for AI in electronic health records and the FDA's approval of nearly 700 AI-enabled medical devices.
The commitments made by the twenty-eight private-sector entities are a crucial part of the collective effort to advance AI for the health and well-being of Americans. These providers and payers have taken a significant step, and the Administration is encouraging more organizations to join in the coming weeks.