AHA Responds to OSTP Request on AI Policies for Health Care
October 27, 2025
Michael Kratsios
Director
Office of Science and Technology Policy
Eisenhower Executive Office Building
1650 Pennsylvania Avenue NW
Washington, D.C. 20502
Submitted Electronically
RE: OSTP-TECH-2025-0067 Request for Information; Regulatory Reform on Artificial Intelligence
Dear Director Kratsios,
On behalf of our nearly 5,000 member hospitals, health systems and other health care organizations, our clinician partners — including more than 270,000 affiliated physicians, 2 million nurses and other caregivers — and the 43,000 health care leaders who belong to our professional membership groups, the American Hospital Association (AHA) appreciates the opportunity to provide comment on the Office of Science and Technology Policy (OSTP) request for information (RFI) regarding regulatory reform on artificial intelligence (AI).
The AHA applauds the administration’s recognition that overly restrictive regulations can increase costs, stifle innovation and hamper competition. Within the health care ecosystem, excessive regulatory and administrative burdens have added unnecessary cost and reduced patient access to care. More than a quarter of all health care spending goes to administrative tasks — totaling more than $1 trillion annually.1 These administrative burdens have contributed to the financial instability of many hospitals, with nearly 40% operating with negative margins; in the face of such headwinds, some hospitals have scaled back services or closed outright.2 Reducing unnecessary regulation can improve the sustainability of our health care system, and as such, the AHA has provided recommendations to the Office of Management and Budget, Department of Health and Human Services, Federal Trade Commission and Department of Justice on ways to reduce the regulatory burden on hospitals and health systems.3,4,5,6 We welcome the additional opportunity to provide comments to OSTP on ways to reduce regulatory burden for the development and deployment of AI tools within health care.
AI tools hold tremendous potential in helping transform care delivery and address some of the administrative burdens that increase costs. From ambient listening technologies assisting with clinical documentation, to chatbots helping with scheduling and triaging, to algorithms supporting clinician interpretation of images, AI-based tools have already made a significant positive impact on providers and patients across the country. Hospitals and health systems are just scratching the surface of potential use cases and continue to explore new ways these tools can support the patients and communities they serve.
Given AI’s potential to drive efficiencies and enhance the quality of care, our members have urged that policy frameworks strike the appropriate balance of flexibility to enable innovation while ensuring patient safety. The AHA offers four categories of recommendations to maximize the potential for AI to improve care, accelerate innovation and support the health care workforce.
- Synchronize and leverage existing policy frameworks to avoid redundancy. While AI policies should be flexible enough to keep pace with rapid innovation, they should also be synchronized and integrated with existing health care policy frameworks to minimize redundancy.
- Remove regulatory barriers. Certain statutes and regulations in the health care ecosystem, such as the patchwork of state privacy laws and 42 CFR Part 2, have indirectly limited hospitals' and health systems' ability to develop and deploy certain AI tools. We provide recommendations on ways to reduce regulatory barriers that inhibit the development and deployment of AI tools.
- Ensure the safe and effective use of AI. AI policies must also ensure privacy and safety. The AHA recommends policies that ensure clinicians are included in the decision loop for algorithms that may affect access to care or care delivery, policies that provide consistent privacy and security standards for third-party vendors, and policies that establish post-deployment standards for health care AI to ensure the ongoing integrity of these tools.
- Address organizational and infrastructural factors. Appropriate incentives and infrastructure investment are necessary to expand AI in health care, both to prepare providers and to support patient adoption.
Below, we provide our detailed comments with recommendations in each of these four categories.
SYNCHRONIZE AND LEVERAGE EXISTING POLICY FRAMEWORKS TO AVOID REDUNDANCY
The RFI recognizes that AI poses novel challenges from a policy development perspective, noting that “regulations for medical devices, telehealth, and patient privacy were designed around human clinicians and discrete medical device updates. It may create challenges to apply the same policy framework for overseeing continuously updating AI diagnostic tools and ensuring explainable clinical recommendations.” At the same time, AHA believes that AI policy should not be considered in a vacuum, as it intersects with a wide range of other policy issues with regulatory frameworks of their own. These include, but are not limited to:
- Data Privacy: HIPAA provides baseline federal standards for the protection of personal health information. HIPAA covers a wide range of health information technology applications, including AI.
- Cybersecurity: The National Institute of Standards and Technology cybersecurity framework and the Department of Health and Human Services (HHS) cybersecurity performance goals (CPGs) provide reliable voluntary standards frameworks for cybersecurity.
- Premarket Testing: The Food and Drug Administration regulations for Software as a Medical Device (SaMD) require testing of the safety and efficacy of AI-enabled medical devices through a premarket submission program.
- Transparency: The Assistant Secretary for Technology Policy under HHS requires certified health IT to meet certain transparency requirements for AI.
- Anti-bias and Discrimination: Under the HHS Office for Civil Rights (OCR) anti-bias and discrimination regulations, any entity receiving federal financial assistance, including all health care providers and insurers, is prohibited from using AI tools and algorithms that discriminate.
- Access to Care: The Centers for Medicare & Medicaid Services (CMS) Medicare Advantage regulations specify that AI cannot “act alone” to terminate or deny services. These regulations also establish that the health plan is responsible for ensuring the tool is accurate and free from bias.
Given this wide range of policy issues, developing separate AI frameworks could inadvertently add redundancy and inefficiency. For this reason, we encourage agencies to synchronize policies with these existing regulatory frameworks. As described in further detail later in this letter, we believe there are important opportunities to clarify and/or remove parts of existing regulations to ensure they can support the safe, effective use of AI.
REMOVING REGULATORY BARRIERS
We appreciate the administration’s interest in reducing regulatory and procedural barriers (both direct and indirect) that unnecessarily slow the development and deployment of safe and beneficial AI tools. We believe the best opportunities for removing regulatory barriers to AI are in the following areas.
Privacy and Security
Privacy and security of health data are essential for patient safety and quality of care. This applies not only to AI tools but to any persons, entities or tools that leverage or exchange personal health information. HIPAA provides sound foundational standards for privacy, security and breach notification. While certain regulations have helped provide clarifying guidance, other regulations would add unnecessary burden and run counter to the original purpose of HIPAA to protect health information. Notably, we have urged HHS to modify the “Notice of Proposed Rulemaking: HIPAA Security Rule To Strengthen the Cybersecurity of Electronic Protected Health Information” to make the requirements voluntary and to modify the HIPAA Breach Notification Rule to remove the requirement to report breaches affecting fewer than 500 individuals. The HIPAA 2024 cybersecurity proposed rule included several problematic requirements, including a mandate that providers restore electronic information systems and data within 72 hours of a cybersecurity incident. Not only is this requirement technically infeasible, but it could actually increase risk, as these artificial timelines may pressure providers to bring systems online before a full threat assessment can be completed and exposures isolated. As such, we recommend the rule be modified to make its requirements voluntary.
The AHA does not support proposals for mandatory cybersecurity requirements levied on hospitals as if they were at fault for the success of hackers in perpetrating a crime. Instead, the AHA supports voluntary, consensus-based cybersecurity practices such as the cybersecurity performance goals. The now well-documented primary source of cybersecurity risk in the health care sector is vulnerabilities in third-party technology, not hospitals' primary systems. To make meaningful progress in the war on cybercrime, Congress and the administration should focus on the entire health care sector, not just hospitals.
HIPAA Preemption
While HIPAA generally preempts contrary state law, there are specific exceptions to that preemption that have enabled a plethora of differing state laws that bear on health data privacy.
For all the strengths of the existing HIPAA framework, its approach to preemption has proven challenging, burdening hospitals and health systems with a myriad of overlapping legal requirements, raising compliance costs and diverting limited resources that could otherwise be used on patient care. In addition, the existing state and federal patchwork of health information privacy requirements remains a significant barrier to the robust sharing of patient information necessary for coordinated clinical treatment. For instance, the patchwork of differing requirements poses significant challenges for providers’ use of a common electronic health record that is a critical part of the infrastructure necessary for effectively coordinating patient care and maintaining population health. This can also inhibit the development and deployment of AI tools, given that data drives algorithm validity.
We encourage the administration to work with Congress to address this issue and enact a full preemption provision. HIPAA is more than sufficient to protect patient privacy and, if interpreted correctly, it strikes the appropriate balance between health information privacy and valuable information sharing. Varying state laws only add costs and create complications for hospitals and health systems. As such, the AHA reiterates its long-standing recommendation that Congress strengthen HIPAA preemption.
42 CFR Part 2
We urge the administration to work with Congress to remove remaining requirements under 42 CFR Part 2 that hinder care team access to important health information. These regulations require separate maintenance of records pertaining to substance use disorder (SUD) information, which prevents the integration of behavioral and physical health care because the patient data cannot be used and disclosed like all other health care data. This can also impact the ability of SUD providers to leverage AI tools for care delivery. Despite regulatory changes in recent years, the regulations in Part 2 are outdated, fail to protect patient privacy, and, in fact, erect barriers to providing coordinated, whole-person care to people with a history of SUD. By working with Congress, the administration can resolve the statutory conflicts that prevent full alignment of 42 CFR Part 2 requirements with the HIPAA requirements that govern other patient health information.
ENSURING THE SAFE AND EFFECTIVE USE OF AI
We support OSTP’s assertion that “the realization of the benefits from AI applications cannot be done through complete deregulation but requires policy frameworks, both regulatory and non-regulatory. Suitable policy frameworks enable innovation while safeguarding public interest.” The AHA has consistently advocated for AI policy frameworks that balance the flexibility needed to drive market-based innovation with appropriate boundaries to ensure privacy and safety. The level of risk associated with AI-enabled tools in health care can be especially significant because their use can directly influence patients' access to care and the delivery of that care. With this in mind, we have several recommendations for policies to ensure the safe and effective use of AI in the health care ecosystem.
Include Trained Clinicians in Decision Loop for Health Care AI Tools
A key driver of excessive administrative costs for hospitals and health systems is the onerous requirements imposed by commercial insurers to check patients’ eligibility for coverage, bill for payment, and process prior authorizations and appeals of coverage denials. Most claims initially denied by insurers (70%) are ultimately paid, meaning a significant amount of administrative cost is a complete waste.7
To make matters worse, commercial insurers’ use of AI to determine the disposition of claims and prior authorizations has exacerbated inappropriate denials. A 2024 U.S. Senate Permanent Subcommittee on Investigations report found that certain plans had significant increases in prior authorization denials, driven in part by automated tools, and that the use of AI for prior authorization potentially prioritized financial gain over medical necessity.8 A recent American Medical Association survey also found that 62% of doctors think that payer use of AI is increasing denials for medically necessary care.9
To mitigate this, we have advocated for HHS and congressional action to ensure that clinicians — not just AI tools — are included in the decision loop for any recommendations of partial or full denial of requested items or services.10,11 While the use of AI tools to more quickly process prior authorizations is not inherently problematic, it is imperative that any recommendation to deny care, whether or not it is AI-generated, be independently reviewed by a clinician. Further, human reviewers should have the requisite training and expertise to make an informed medical decision about a patient’s condition and the proposed treatment plan.
Applying Consistent Privacy and Security Standards for Third-party Applications
According to the HHS OCR, the number of individuals impacted by health care data breaches increased from 27 million in 2020 to a staggering 259 million in 2024.12 Notably, most protected health information (PHI) data breaches reported to OCR were the result of hacking incidents targeting non-hospital health care providers, including third-party service and software providers.
AI systems rely on large data sets to maximize their predictive power. However, aggregating large data sets may also pose unique cybersecurity vulnerabilities that can be further exposed by gaps in privacy and security standards. With the rise in third-party vendor PHI data breaches, it is essential that entities that hold or process PHI but are not currently covered by HIPAA — including certain AI vendors that may not meet the definition of a covered entity or business associate under current law — be subject to similarly rigorous privacy and security standards.
We recommend that third-party entities (including AI vendors) that collect, hold or transmit PHI should be held to the same standards and accountability as covered entities and business associates. Third parties also should be accountable for the privacy and security of data they pull from covered entities and business associates.
Developing Post-deployment Standards for AI
Hospitals and health systems continually assess the strengths and limitations of all AI models they use. The “black box” nature of many AI systems can make it more challenging for hospitals and health systems to identify flaws in models that may affect the accuracy and validity of an AI tool’s analyses and recommendations. There are many reports of certain AI tools producing “hallucinations” or false results based on flaws in model design or biases in underlying data. This underscores the importance of ongoing developer testing to maintain AI model validity. Voluntary premarket and post-deployment standards that are developed with the input of stakeholders across the health care ecosystem can support the safe and effective use of these tools. We support the development of premarket and post-deployment testing standards to ensure the ongoing validity and integrity of AI algorithms in health care. As the administration considers approaches to streamlining this evaluation, we encourage agencies to consider end-user burden and take steps to minimize it.
ADDRESSING ORGANIZATIONAL AND INFRASTRUCTURAL FACTORS
We agree that AI adoption for health care providers may be influenced by organizational factors like workforce readiness and institutional capacity. However, there are also broader infrastructure factors that may limit the ability of providers and patients to adopt these tools. The AHA provides the following recommendations to address organizational and infrastructure factors limiting the adoption of AI.
Aligning Incentives to Support AI Adoption
While hospitals and health systems recognize the potential benefits of AI solutions, inadequate reimbursement has left many without the resources to invest in the infrastructure necessary to develop and deploy AI tools. Implementing new technologies and standards often requires significant financial investment, education and workflow changes for health care providers. Ensuring appropriate reimbursement can support wider adoption of these tools and ultimately support improved access to services. We recently provided feedback to CMS on payment for Software as a Service, including cost factors that the agency should consider when setting payment rates for AI-enabled services.13,14 We recommend aligning incentives to support broader adoption of AI by providers and patients.
Addressing Infrastructural Barriers
The expansion of digital health products, including AI tools, to rural and underserved populations has been hindered in part by a lack of access to enabling technologies (like broadband, reliable Wi-Fi or smartphones) and education to support digital literacy. Indeed, the Federal Communications Commission (FCC) reported that in 2020, over 22% of Americans in rural areas lacked access to appropriate broadband (fixed terrestrial 25/3 Mbps), compared with 1.5% in urban areas.15 Furthermore, according to a recent report from the Assistant Secretary for Planning and Evaluation, over 26% of Medicare beneficiaries reported not having computer or smartphone access at home.16 The lack of infrastructure, such as broadband and reliable Wi-Fi, has contributed to the “digital divide,” in which rural and other underserved areas have less access to digital services, including AI tools for clinicians and patients. These data points suggest that investment in foundational infrastructure and educational resources may increase providers’ and patients’ access to digital health and AI applications. We encourage cross-agency collaboration to develop training and potential grant funding opportunities to support patient education on digital health tools. This could include coordination across agencies such as HHS, the FCC, the Department of Commerce, the Department of Agriculture and the Department of Education.
We look forward to working with OSTP on AI regulatory reform as well as other areas of OSTP’s AI Action Plan. Please contact me if you have questions, or feel free to have a member of your team contact Jennifer Holloman, AHA director of health IT policy, at jholloman@aha.org.
Sincerely,
/s/
Ashley Thompson
Senior Vice President
Public Policy Analysis and Development
__________
1 Sahni, N., et al., “Active steps to reduce administrative spending associated with financial transactions in US health care,” Health Affairs Scholar, Volume 1, Issue 5, November 2023, qxad053, https://doi.org/10.1093/haschl/qxad053.