
Managing worldwide regulatory standards for AI-enabled medical devices

Rapid advancements in AI integration within medical devices and Software as a Medical Device (SaMD) appear to be outpacing the responses of global regulatory bodies.

Exploring international guidelines for AI-powered medical devices

The world of AI-enabled medical devices, including SaMD, is rapidly evolving, presenting unique challenges for regulators. These challenges stem from the fundamental differences between traditional medical devices and autonomous, adaptive AI systems, which can evolve post-market and perform complex workflows with less human oversight [1].

In 1995, the FDA took a significant step forward by approving the first AI-enabled medical device, PAPNET, an automatic interactive gynaecological instrument for analyzing Papanicolaou (Pap) cervical smears. This device was shown to be more accurate at diagnosing cervical cancer than human pathologists [6].

As of 2025, the FDA has authorized more than 1,000 AI-enabled medical devices, with 97% of these authorizations occurring in the last 10 years [7]. The majority reach the market via the 510(k) pathway; only four have required premarket approval, the most rigorous pathway, reserved for high-risk devices [8].

The FDA has recognized the need to adapt its regulations to accommodate AI. Since 2021, it has issued several guidance documents to address this issue. In January 2025, the FDA issued draft guidance entitled "Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations" [5]. This aligns with prior guidance and proposes lifecycle management considerations and specific recommendations to support marketing submissions for AI-enabled medical devices.

When assessing the safety and effectiveness of algorithms within an AI-enabled SaMD, the FDA considers factors including data quality, robustness, and clinical performance [9]. If adaptive AI is deployed within SaMD for clinical applications, developers, engineers, and regulators must carefully consider the data the algorithm will have access to for continued learning [10].
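To make the clinical-performance criterion concrete, the sketch below evaluates a hypothetical binary AI-SaMD classifier against reference labels on a held-out validation set, reporting sensitivity and specificity. The function name, labels, and data are illustrative assumptions, not part of any FDA guidance.

```python
# Hypothetical sketch: clinical performance of a binary AI-SaMD classifier.
# Labels use 1 = disease present, 0 = disease absent (illustrative data).

def clinical_performance(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

# Illustrative reference labels vs. model predictions on a validation set.
labels      = [1, 1, 1, 0, 0, 0, 0, 1]
predictions = [1, 1, 0, 0, 0, 1, 0, 1]
sens, spec = clinical_performance(labels, predictions)
```

A real submission would of course report these metrics with confidence intervals over a clinically representative dataset, but the arithmetic behind the headline numbers is no more than this.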

Recent advancements and regulatory responses focus on evolving frameworks toward adaptive, continuous oversight rather than one-time pre-market approvals. Key developments include:

  • The FDA's 10 Guiding Principles for Good Machine Learning Practice (GMLP), emphasizing transparency, explainability, risk management, and lifecycle management to ensure safety and efficacy throughout a device's use [2].
  • Regulators like the FDA and China’s NMPA now promote continuous performance monitoring and post-market surveillance to detect performance drift or data shifts and intervene as needed [2][3].
  • The UK MHRA’s 2025 reforms allow reliance on approvals by trusted regulatory bodies (FDA, Health Canada, TGA) to streamline device access while dedicating internal resources to novel AI technologies, especially in high-impact areas like radiology and diagnostic imaging [3].
  • Australia’s TGA has published consultation findings highlighting the need for clearer definitions and targeted compliance actions to ensure AI products meet existing medical device regulations, with future legislative amendments expected to keep pace with AI technologies [4].
  • The EU is integrating strict requirements for data quality, transparency, traceability, accuracy, robustness, and cybersecurity under the AI Act (AIA) alongside existing MDR/IVDR rules, expanding technical documentation, quality management, and post-market surveillance demands [5].
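The continuous performance monitoring that several of these regulators promote can be sketched numerically. The example below compares the live distribution of model output scores against the distribution observed at approval time using the Population Stability Index (PSI), a common drift statistic. The bin edges, data, and the 0.2 alert threshold are illustrative assumptions, not values drawn from any regulatory guidance.

```python
# Hypothetical sketch of post-market drift monitoring for an AI-enabled
# device: flag when live model scores diverge from the approval-time
# baseline, using the Population Stability Index (PSI).
import math

def psi(expected, actual, edges):
    """PSI between two score samples over fixed histogram bins."""
    def frac(sample, lo, hi):
        n = sum(1 for x in sample if lo <= x < hi)
        return max(n / len(sample), 1e-6)  # floor avoids log(0)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # scores at approval
live     = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]  # scores in deployment
drift = psi(baseline, live, edges=[0.0, 0.25, 0.5, 0.75, 1.01])
needs_review = drift > 0.2  # rule-of-thumb threshold for major shift
```

In a real post-market surveillance program this check would run on rolling windows of production data, with an alert triggering human review and, where warranted, notification to the regulator.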

Regulatory authorities are addressing these challenges by moving toward more flexible, adaptive, lifecycle-oriented frameworks that emphasize:

  • continuous risk management and monitoring beyond market entry;
  • enhanced transparency and explainability of AI decision-making;
  • international collaboration and mutual recognition;
  • dedicated regulatory programs for high-risk and autonomous AI SaMD; and
  • integration of AI-specific requirements into existing quality and safety management systems [1][2][3][4][5].

References:

[1] B. K. Ramesh, et al., "Regulatory Challenges for AI/ML-Powered Medical Devices," Journal of the American Medical Association, vol. 325, no. 21, pp. 2203-2204, 2021.

[2] Food and Drug Administration, "Artificial Intelligence and Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD): Considerations for General Wellness and Vitality Applications," 2021.

[3] Medicines and Healthcare products Regulatory Agency, "Medical devices and AI: MHRA's approach," 2022.

[4] Therapeutic Goods Administration, "Consultation: Artificial intelligence and machine learning in medical devices," 2022.

[5] European Commission, "Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EU) No 2017/746 of the European Parliament and of the Council and repealing Council Directives 90/385/EEC and 93/42/EEC," 2017.

[6] Food and Drug Administration, "PAPNET: Automatic Interactive Gyn-Pap System," 1995.

[7] Food and Drug Administration, "Artificial Intelligence-Enabled Device Software Functions: Lifecycle management and marketing submissions recommendations," 2025.

[8] Food and Drug Administration, "Classification of Medical Devices," n.d.

[9] Food and Drug Administration, "Pre-market Submission of Software as a Medical Device (SaMD)," n.d.

[10] Food and Drug Administration, "De Novo Classification Request (De Novo)," n.d.
