As artificial intelligence (AI) gains prominence in drug development, regulatory agencies such as the US Food and Drug Administration (FDA) are creating and implementing frameworks to promote the responsible use of AI for medical products. The FDA defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.
A perspective article published in JAMA in January 2025 noted that the FDA’s first approval of a partially AI-enabled medical device occurred in 1995 for PAPNET, software used to prevent misdiagnosis in women undergoing Papanicolaou tests for cervical cancer.2 The article indicated that, since the approval of PAPNET, the agency has authorised approximately 1,000 AI-enabled medical devices. The authors, including former FDA Commissioner Robert M. Califf, MD, stated that to “keep up with the pace of change” in AI across biomedicine and healthcare, regulators will have to “advance flexible mechanisms.” While the agency has developed a total life cycle approach to support the deployment and innovation of AI-enabled products, industry and other external stakeholders will need to “ramp up” their evaluation and quality management of AI “beyond the remit of the FDA.”
