As artificial intelligence (AI) becomes increasingly integrated into clinical research, Institutional Review Boards (IRBs) face new challenges in evaluating the associated risks and ethical implications. IRBs need a framework of structured questions and decision-making tools to help them determine when and how to apply appropriate, proportionate oversight consistent with regulatory directives. This article discusses key topics, including aligning review practices with regulatory standards, assessing AI-specific risks and benefits, and addressing the growing debate around ‘AI exceptionalism’ in ethical review.
Institutional Review Boards (IRBs) oversee clinical research to protect the rights and welfare of human participants. Ethical guidelines such as the Belmont Report, the Declaration of Helsinki, and the Nuremberg Code, along with regulations issued by the U.S. Department of Health and Human Services and the U.S. Food and Drug Administration, establish the ethical standards by which research should be conducted. However, these guidelines and regulations were developed well before artificial intelligence (AI) came to the forefront. Rudimentary applications of AI have been used in medical devices for many years, but it is only recently that AI has permeated society and clinical research on a broader scale.




