Highlights of President Biden’s Executive Order on AI and Its Impact on Health Care

Member Advisory

November 21, 2023

EO includes directives to the HHS Secretary on the use of AI in health care

President Biden October 30 issued an Executive Order (EO) on the use of artificial intelligence (AI) that attempts to strike a balance between managing the risks and encouraging innovation.

The EO is a road map intended to ensure U.S. leadership in developing and deploying safe, secure and trustworthy AI systems. It outlines several actions, such as sharing safety test results, developing standards and tools, protecting against AI risks, addressing algorithmic discrimination, catalyzing AI research, expanding AI talent, collaborating on AI, accelerating AI standards, promoting responsible AI abroad, and issuing guidance for agencies' use of AI.

The EO aims to ensure that the U.S. leads in AI innovation and competitiveness, while addressing the social, economic and ethical challenges posed by the adoption of AI. The order also emphasizes the need to support American workers, protect the interests and rights of Americans who use or interact with AI, manage the risks from the government's own use of AI, and promote global progress and cooperation on AI issues.

Key Highlights

The Executive Order:

  • Is comprehensive and attempts to address a broad range of technical, security, safety and data management considerations.
  • Relies on agencies with sector specific oversight and expertise to recommend guidance for those sectors, such as the Department of Health and Human Services (HHS) for health care.
  • Establishes a set of eight guiding principles and priorities for the federal government to advance and govern the development and use of AI.
  • Calls on the HHS Secretary to create an AI Task Force by the end of January 2024.

What You Can Do

  • Share this advisory with leaders in your organization.
  • Share your feedback and concerns with AHA.

AHA Take

AI has been used in health care for years, but the emergence late last year of generative AI tools, such as ChatGPT, pushed all forms of AI into the public spotlight and ignited a debate over how to safely harness the full potential of these tools. In hospitals and health systems, AI has already demonstrated benefits, such as more accurate diagnostics, improved operations and reduced administrative tasks, while risks related to data integrity, privacy and algorithmic bias still remain. AI is a complex set of technologies, and its regulation requires careful consideration to understand its many nuances, diverse applications and fluid definitions. The AHA appreciates the Administration’s willingness to take a leadership role in the broader conversation about the need for regulations that ensure AI is developed and used safely and securely without stifling innovation.

The EO attempts to address a broad range of technical, security, safety and data management considerations of AI use across multiple industries and the impact of AI on aspects of everyday life. We appreciate the thoughtful consideration included in this EO of the different challenges inherent in generative AI and large language models (LLMs) compared with established uses of machine learning technology and predictive (also known as "discriminative") AI models. Notably, the technical capabilities described in the EO are well beyond the computing capabilities of any commercially available computer processor or any computing platform technology expected to emerge in the next several years. The EO does not address the foundational work already done by the Food and Drug Administration (FDA) to create a regulatory framework for AI. Existing FDA guidance, including its Software as a Medical Device (SaMD) guidance, although written before the advent of commercially viable generative AI, is already in use and should provide a reasonable baseline from which new guidance, more precisely suited to generative AI, can be adapted.

There are some topics addressed by the EO that require additional clarification, including privacy protections. The EO makes several recommendations regarding the protection of privacy related to AI, but it does not advocate for a national privacy standard. The Fact Sheet summarizing the EO “calls on Congress to pass bipartisan data privacy legislation to protect all Americans;” however, the EO itself does not reiterate this call nor clearly reinforce this concept.

Lastly, the EO calls for the HHS Secretary to create an AI Task Force within 90 days of the order. While the AHA supports the EO's reliance on sector-specific expertise in discussions of AI regulation, the order provides no details about the formation, selection process or membership of the HHS AI Task Force. The White House should require HHS to make this a transparent process by calling for public nominations for the AI Task Force and clarifying how it will work with the newly forming FDA Digital Health Advisory Committee.

Additional highlights and details related to the EO follow.

Guiding Principles and Priorities for AI

The EO sets out eight guiding principles and priorities for the federal government to advance and govern the development and use of AI in a safe, secure, responsible and equitable manner:

  1. Artificial Intelligence must be safe and secure.
  2. Promoting responsible innovation, competition, and collaboration will allow the U.S. to lead in AI and unlock the technology's potential to solve some of society's most difficult challenges.
  3. The responsible development and use of AI require a commitment to supporting American workers.
  4. AI policies must be consistent with the Administration's dedication to advancing equity and civil rights.
  5. The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected.
  6. Americans' privacy and civil liberties must be protected as AI continues advancing.
  7. It is important to manage the risks from the federal government's own use of AI and increase its internal capacity to regulate, govern and support responsible use of AI to deliver better results for Americans.
  8. The federal government should lead the way to global societal, economic, and technological progress, as the U.S. has in previous eras of disruptive innovation and change.

AI in Health Care

Responsible Development and Use

HHS will support responsible AI development and use in health care and research by working with the private sector to create personalized immune-response profiles for patients, funding projects that improve health care data quality for AI tools, and boosting grants for health equity and researcher diversity through the National Institutes of Health’s “AIM–AHEAD” program.

The HHS AI Task Force

The HHS Secretary will create the HHS AI Task Force within 90 days of the order. The task force will then have one year to develop a strategic plan for responsible AI in the health and human services sector. The plan will include policies and frameworks that guide the design and development of AI tools, and the maintenance, management and documentation of those tools throughout their useful lifecycles. Those policies and frameworks also will outline expectations for AI quality, safety and oversight; AI performance monitoring and reporting; AI equity and fairness; AI privacy and security; AI documentation and guidance; and workplace efficiency and worker satisfaction. Additionally, the task force will work with state, local, Tribal, and territorial agencies to promote safe and effective AI deployment in local settings.

AI Assurance Policy

The EO directs the HHS Secretary to develop within 180 days a quality control strategy for AI-enabled technologies used in health care, including the creation of an AI assurance policy. The purpose of the policy is to evaluate the performance of AI-enabled tools used in health care and to establish the infrastructure needed for pre-market assessment and post-market oversight of how AI-enabled algorithms perform against real-world data.
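
The EO describes this assurance infrastructure at a policy level rather than a technical one. As a minimal, hypothetical sketch of what post-market oversight against real-world data can look like in practice, the example below compares a tool's observed performance to a pre-market baseline and flags it for review when performance drifts; the metric, threshold and data structures are illustrative assumptions, not drawn from the EO or any HHS guidance.

```python
# Hypothetical sketch of post-market performance monitoring for an
# AI-enabled tool, compared against its pre-market baseline.
# All names, metrics and thresholds here are illustrative assumptions,
# not requirements from the EO or HHS.
from dataclasses import dataclass


@dataclass
class Outcome:
    predicted_positive: bool   # what the AI tool flagged
    actually_positive: bool    # what real-world follow-up confirmed


def sensitivity(outcomes: list[Outcome]) -> float:
    """Fraction of true positive cases the tool correctly flagged."""
    positives = [o for o in outcomes if o.actually_positive]
    if not positives:
        return float("nan")
    caught = sum(1 for o in positives if o.predicted_positive)
    return caught / len(positives)


def needs_review(outcomes: list[Outcome], baseline: float,
                 tolerance: float = 0.05) -> bool:
    """Flag the tool for review if real-world sensitivity drifts more
    than `tolerance` below the pre-market baseline."""
    return sensitivity(outcomes) < baseline - tolerance


# Example: a tool cleared with 0.92 sensitivity is re-checked against a
# small batch of real-world results and flagged for review.
monthly = [Outcome(True, True), Outcome(False, True),
           Outcome(True, False), Outcome(True, True)]
print(needs_review(monthly, baseline=0.92))  # prints True
```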

Non-discrimination Laws

To ensure that AI is used in a fair and ethical way by health and human services providers who receive federal funds, the HHS Secretary will take the following steps within 180 days of this order:

  • Provide training and support to these providers and payers on how to comply with federal laws that prohibit discrimination and protect privacy in relation to AI, and what could happen if they violate these laws.
  • Respond to any complaints or reports of noncompliance with these laws by issuing guidance or taking other appropriate actions.

Voluntary Safety Program

Within 365 days of the date of the EO, the HHS Secretary, in consultation with the Secretary of Defense and the Secretary of Veterans Affairs, will establish an AI safety program in partnership with voluntary federally listed patient safety organizations. The intent of the program is to monitor and prevent clinical errors from AI in health care, but the program also will collect data on incidents in which bias or discrimination in AI harms patients or others. The program also will share recommendations and guidelines for avoiding these harms with stakeholders, such as health care providers.

Tools in Drug Development

To ensure the safety and efficacy of AI or AI-enabled tools in drug-development processes, the HHS Secretary shall establish a regulatory strategy within one year of the order. The strategy will include:

  • The objectives, goals and high-level principles for regulating AI or AI-enabled tools at each stage of drug development;
  • The areas where new rules, guidance or statutory authority may be needed to implement such a regulatory system;
  • The current and projected budget, resources, personnel, and potential public/private partnerships for such a regulatory system; and
  • The risks associated with the implementation of section 4 of this order.

AI across All Sectors

Because the EO is so comprehensive, its impact on health systems and hospitals will be felt both directly, through HHS actions, and indirectly, through the actions of other agencies such as the Departments of State, Energy, Commerce and Defense.

Broadly, the EO states that the federal government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination and infringements on privacy. To create a basis for the technical standards underpinning any potential regulations, the EO relies on the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF 1.0), which, among other things, covers safety, security and bias.

The NIST AI RMF 1.0 is comprehensive and provides a cogent outline for the secure computing and storage network architecture required to develop, deploy and host AI tools or AI-augmented applications. However, the EO goes further, calling for cybersecurity stress testing of AI models, known as "red teaming," and suggesting additional technology controls, such as labeling or "watermarking," that clearly identify when AI is actively being used in a program or application or alert users that content was generated using AI. More broadly, the EO directs the Secretary of Commerce, the Secretary of State, the Secretary of Defense, the Secretary of Energy and the Director of National Intelligence to collaborate over the next year on a set of technical conditions for identifying when a large AI model could be used in "malicious cyber-enabled activity." Until those conditions are established, AI models requiring excessive amounts of computing power are assumed to be capable of being deployed for potentially dangerous purposes and will be subject to the reporting requirements defined in that multi-agency effort. The uses of AI in cybersecurity described in the EO are not limited to defending against possible AI-augmented attacks; the EO also calls for proactively using AI tools, such as LLMs, to find and fix vulnerabilities in vital U.S. government software, systems and networks.

The EO also calls for a cross-sector risk assessment related to the use of AI in critical infrastructure sectors. The EO asks the Secretary of Homeland Security to collaborate with the Director of the Cybersecurity and Infrastructure Security Agency and other relevant officials and agencies to assess and manage the risks of using AI in critical infrastructure sectors, such as health care. The assessment should address how AI deployment may increase the vulnerability of critical infrastructure systems to failures, attacks and disruptions, and how to reduce these risks. The EO also requires agencies involved in critical infrastructure to adopt the NIST AI Risk Management Framework and other security guidance. Furthermore, within 240 days after the NIST framework is incorporated into the safety and security guidelines used by those agencies, "the Assistant to the President for National Security Affairs and the Director of OMB, in consultation with the Secretary of Homeland Security, shall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action."

The EO cites the Homeland Security Act of 2002 (Public Law 107–296) in establishing an Artificial Intelligence Safety and Security Board to advise on AI-related issues in critical infrastructure. The EO does not specify the board's membership, but because the health care sector is included in the definition of critical infrastructure the EO cites, the AHA expects that some hospitals and health systems will be asked to join this board.

A likely area of interest for hospitals and health systems is how the EO addresses the impact of AI on the workforce. The President will receive a report from the Secretary of Labor on how to help workers who may lose their jobs because of AI and other technologies. The report will review existing and past federal programs that support workers in transition, such as unemployment insurance and the Workforce Innovation and Opportunity Act, and assess how those programs can address possible negative impacts of AI on the American workforce. Additionally, the report will propose options, including possible legislation, to increase or create new federal support for workers affected by AI and, with the help of the Secretary of Commerce and the Secretary of Education, to improve and expand education and training opportunities that prepare people for AI-related jobs.

Further Questions

If you have further questions, please contact Stephen Hughes, AHA’s director of health information technology policy, at stephen.hughes@aha.org.
