Although artificial intelligence (AI) applications have proliferated throughout health care, there is still a need for guidance and guardrails to ensure ethical, unbiased and appropriate use of the technology.
A recently released blueprint from the Coalition for Health AI (CHAI) aims to provide just that. CHAI and the National Academy of Medicine coordinated with experts from academic medical centers, health care, technology and other sectors, along with several federal agencies, to produce the document.
“Transparency and trust in AI tools that will be influencing medical decisions is paramount for patients and clinicians,” said Brian Anderson, M.D., a co-founder of CHAI and chief digital health physician at MITRE. “The CHAI blueprint seeks to align health AI standards and reporting to enable patients and clinicians to better evaluate the algorithms that may be contributing to their care.”
Among other things, the guide covers:
- Key elements of trustworthy AI in health care.
- Systemic, statistical and computational biases.
- Secure and resilient AI applications.