James Riddle of Advarra, writing in the January 2025 edition of DIA’s Global Forum magazine, looks at how the EU’s AI Regulation will impact clinical trials in Europe.

In an era where artificial intelligence (AI) is revolutionizing healthcare and clinical research, regulatory bodies are racing to keep pace. The European Union’s Artificial Intelligence Regulation 2024/1689 represents a watershed moment in AI regulation, with potentially far-reaching implications for the global clinical research community. What will be the regulation’s impact on clinical trials, and how might stakeholders navigate this new regulatory landscape?

The AI revolution is getting attention from everyone, including regulators. The European Parliament recently took a significant regulatory step in the oversight of AI technologies by adopting the AI Regulation. This landmark legislation seeks to establish a comprehensive framework for the development and deployment of AI, helping ensure its safety, ethical use, and transparency for European Union (EU) residents. The AI Regulation’s introduction comes at a critical juncture, as the integration of AI in clinical trials accelerates, raising questions about data integrity, patient safety, and regulatory compliance.

The EU AI Regulation is applicable to all industries. However, it has potentially unique implications for clinical trials, where researchers increasingly use AI for tasks like medical image analysis, natural language processing for endpoint analysis, and generating and analyzing data for synthetic control arms. Non-EU entities in the clinical research community should be familiar with the AI Regulation and how it impacts their business. Understanding related efforts currently underway at the US Food and Drug Administration (FDA) is also key to ensuring compliance when including AI in the clinical trial setting.

An Overview of the AI Regulation

The EU AI Regulation entered into force in August 2024, with compliance obligations phasing in ahead of full application in August 2026. The regulation uses four risk levels to categorize AI applications: unacceptable, high, limited, and minimal. This risk-based approach applies to all industries and aims to balance innovation with safety, ensuring that AI applications with the potential to significantly impact human health and safety are subject to stringent oversight.

The AI in benign gaming apps and language generators is one example of a system that might be considered “limited” or “minimal” risk. These applications must still meet specific standards to ensure ethical use, but they face fewer regulatory requirements. “High-risk” systems must comply with strict regulatory requirements regarding transparency, data governance, registration with the central competent authorities, and human oversight. Unacceptable-risk AI systems are banned entirely by the regulation.

High-Risk AI-powered Systems: Key Requirements in the EU AI Regulation

Many AI-based systems used in contemporary clinical trials may be considered “high risk” under the AI Regulation. This includes technology like drug discovery software, study feasibility solutions, and patient recruitment tools. Consider these key requirements for “high-risk” AI systems in the context of clinical trials (for an exhaustive list of requirements, refer to the full AI Regulation):

  • Transparency and explainability
  • Data governance
  • Human oversight
  • Accuracy and reliability
  • Ethical considerations
  • Continuous monitoring

These requirements underscore the EU’s commitment to ensuring that AI systems used in clinical research are not only effective but also trustworthy and aligned with ethical standards.

Read the full version of this article on the DIA Global Forum website.