We identified 26,046 policy records (2976 for the EU, 3161 for Belgium, 368 for Estonia, 2947 for France, 1084 for Germany, 3466 for Italy, 205 for Malta, 418 for Poland, 9421 for Portugal, 1364 for Sweden, and 636 for the UK). Additionally, 757 academic records were identified through scientific and grey literature searches (457 through PubMed; 300 through Google Scholar). The final number of sources included in the qualitative synthesis was 141. The PRISMA flowchart (see Fig. 1) details the search strategy for this policy mapping. Figure 2 gives a high-level view of how the AI regulatory framework is currently composed, based on the final clustering into five categories: 1) AI regulation, 2) processing data, 3) technology appraisal, 4) supporting innovation, and 5) health & human rights. An overview of country-specific details is included in Supplementary Tables 3 and 4. It is important to keep in mind that EU legislation applies to all studied countries, except the UK for legislation published after its departure from the EU. Thus, where no national law is in place, European law can apply directly: regulations are binding across the EU once they enter into force, whereas directives must first be transposed into national law.
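As a consistency check, the headline totals reported above follow directly from the per-jurisdiction and per-database counts:

$$26{,}046 = 2976 + 3161 + 368 + 2947 + 1084 + 3466 + 205 + 418 + 9421 + 1364 + 636$$
$$757 = 457 + 300$$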
Fig. 1: A PRISMA flowchart outlining the data collection process.
The flowchart depicts the identification, screening, eligibility, and inclusion stages used to determine the final set of sources included in the qualitative synthesis. Source: adapted from Moher et al.70, an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Fig. 2: Regulatory framework for artificial intelligence in healthcare and population health.
The regulatory framework for artificial intelligence in healthcare and population health consists of two parts: a technological part and a health and human rights part. The key components of the four dimensions of the technological regulatory landscape for artificial intelligence (AI) in healthcare and population health are outlined as AI Regulation, Processing Data, Technology Appraisal, and Supporting Innovation. The health and human rights part complements the technological regulatory landscape to ensure that the total regulatory framework encompasses the specific needs, norms, and values of the health domain. Source: authors’ own creation.
AI regulation
Any AI system placed on the EU market falls under the scope of the novel EU AI Act. The Act aims to ensure the ethical development of AI in Europe and beyond its borders while protecting health, safety, and fundamental rights. The AI Act lists specific requirements and obligations for high-risk AI systems, such as a risk management system, drawing up technical documentation, post-market monitoring, and development in such a way that their operation is sufficiently transparent. High-risk AI systems must also be designed and developed so that natural persons can oversee their functioning. To support the development of future AI systems, the EU AI Act provides for the establishment of AI regulatory sandboxes at the national level, as well as for testing in real-world settings prior to market placement; such real-world testing is also permitted for certain types of high-risk AI systems. Finally, high-risk AI systems must undergo third-party conformity assessment to verify their performance and safety before market placement. No national binding policies were identified.
Data processing
AI relies on data for its development and use, making rules concerning data protection, access, and (re)use fundamental. The European legal framework for personal data protection is embedded in the General Data Protection Regulation (GDPR). The purpose of the GDPR is to “protect fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data” while enabling the free movement of such data. This duality means that for personal data to move freely, specific legal safeguards must be in place. The GDPR binds all EU member states and also governs the transfer and handling of EU data in non-EU countries: every country that receives or processes data from the EU needs to demonstrate compliance with GDPR standards. As this also applies to countries such as the United States of America, China, and Australia, the GDPR can be viewed as quite influential in setting standards for personal data collection and processing at a global scale23.
At the national level, additional data protection policies were identified in eight of the ten included countries. To protect the processing of personal data, Belgium, Estonia, Germany, Malta, Poland, and the UK adopted national legislation implementing the GDPR, while France modified its existing legal base to complement the GDPR. The GDPR enables member states to specify certain of its rules, resulting in some minor differences between national data protection laws. Differences relevant to this paper concern the requirements for processing in the public interest (Article 6 GDPR), processing in the context of employment (Article 88 GDPR), the age at which a child can give consent themselves (Article 8 GDPR), and the processing of special categories of personal data, including health data (Article 9 GDPR). For example, Estonia, France, Germany, Malta, and Poland set specific rules that must be satisfied in order to process personal data on a public interest basis. Furthermore, France, Germany, and Malta passed additional data policies specifically for health data. In Germany, this additional protection concerns the processing of patient data for use within the healthcare sector; in Malta, it covers processing for insurance purposes as well as the secondary processing of personal data in the health sector; and in France, it concerns the processing of personal data within the Health Data Platform. France specifically states that the data within this platform can be used for the implementation and evaluation of health and social protection policies, for analysing health insurance expenditure, for health surveillance, monitoring, and security, and for research, studies, evaluation, and innovation in health and social care.
Additional regulations apply to the protection of non-personal data. The Open Data Directive (Directive (EU) 2019/1024) intends to “promote the use of open data and stimulate innovation in products and services” by setting rules for the re-use of public sector information. National transposition of the Open Data Directive was identified for Germany, Malta, Sweden, and the UK. Building on the Open Data Directive is the Data Governance Act, which increases the availability of data and facilitates data sharing by specifying conditions for the re-use of certain protected data held by public sector bodies (including data protected by intellectual property rights) and by providing processes and structures to facilitate voluntary data sharing. The Data Act, passed in 2023 and applicable from September 2025, fosters fair access to and use of data generated through products or services, and sets out requirements for the accessibility and transfer of such data. Both of these new Acts are regulations and therefore directly applicable in all member states.
While the above-mentioned policies govern the protection of data during processing, interoperability standards set the framework that makes processing possible in the first place. The Interoperable Europe Act specifies the framework for cross-border interoperability of public services. As a regulation, it will be applicable in all member states. Furthermore, since health systems can be regarded as public services in most member states, its interoperability requirements will apply to all public health systems in the EU.
Technology appraisal
AI systems are often embedded in other technologies, meaning general technology policies may be applicable. Technology policies are not a direct competence of the EU (Article 4 TFEU). However, the EU’s competences include the proper functioning of the single market, allowing it to set standards for the safety of technological products placed on the EU market. The General Product Safety Regulation (GPSR), applicable from December 2024, offers a broad-based safety framework for products placed on the EU market that do not fall under sector-specific safety regulations (e.g. medical devices, food). It defines a product as an item intended for or used by consumers, either stand-alone or interconnected with other products, hence potentially covering AI technology embedded in other products.
Products used for health and medical purposes are specifically regulated at the EU level by the Medical Devices Regulation (EU) 2017/745 (MDR) and the In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 (IVDR). Their purpose is to ensure that each medical device performs consistently with its intended purpose and complies with the general safety and performance requirements. Medical devices are divided into four risk classes (I, IIa, IIb, III), with AI medical devices generally falling into class IIa or higher due to the MDR’s provision on software as a medical device; this automatically qualifies them as high-risk AI systems under the AI Act (see Fig. 3). Before market placement, medical devices of class IIa and higher need to undergo third-party conformity assessment to provide a sufficient body of clinical evidence regarding their safety and performance. Furthermore, Regulation (EU) 2021/2282 seeks to improve the availability of innovative medical technologies by setting an overall framework for the joint clinical assessment of health technologies; all medical devices and in vitro diagnostic medical devices are covered by this Regulation. These three regulations are directly applicable in all ten countries. Updated national regulations were identified for two of the ten countries (Malta and Sweden).
Fig. 3: Risk classification process following the EU AI Act and EU Medical Devices Regulation.
The figure shows a flowchart for the risk classification of artificial intelligence (AI)-enabled medical devices under the Medical Devices Regulation (MDR) and the AI Act. It begins with the question: “Is the technology an artificial intelligence system?” If the answer is “No,” the technology falls outside the scope of the AI Act. If “Yes,” the flowchart proceeds to check whether the intended purpose is within the scope of medical devices. A “No” here means it is outside the scope of the MDR. If both answers are “Yes,” the flowchart notes that the vast majority of software as a medical device is classified as risk class IIa or higher. Medical devices of risk class IIa or higher are high-risk artificial intelligence systems, meaning these AI-enabled medical devices require Conformité Européenne (CE) marking through a decentralised notified body. The boxes positioned on the left side indicate the specific section of the MDR or AI Act that informs each step. Source: authors’ own creation.
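To make the flowchart’s decision logic concrete, the minimal sketch below walks a technology through the same two gating questions. This is purely illustrative: the function and parameter names are our own abstraction and do not appear in the MDR or the AI Act.

```python
from __future__ import annotations

def classify_technology(is_ai_system: bool,
                        has_medical_purpose: bool,
                        mdr_risk_class: str | None = None) -> str:
    """Trace a technology through the Fig. 3 flowchart (illustrative only).

    mdr_risk_class: the MDR risk class ("I", "IIa", "IIb", or "III") if the
    technology is a medical device, otherwise None.
    """
    # First gate: is the technology an AI system at all?
    if not is_ai_system:
        return "Outside the scope of the AI Act"
    # Second gate: is the intended purpose within the scope of medical devices?
    if not has_medical_purpose:
        return "AI system, but outside the scope of the MDR"
    # Under the MDR's software rule, the vast majority of software as a
    # medical device is classified as risk class IIa or higher.
    if mdr_risk_class in ("IIa", "IIb", "III"):
        return ("High-risk AI system: CE marking via a notified body's "
                "third-party conformity assessment before market placement")
    return "Class I medical device: not automatically high-risk under the AI Act"

# Example: an AI-based diagnostic support tool classified as class IIa
print(classify_technology(is_ai_system=True, has_medical_purpose=True,
                          mdr_risk_class="IIa"))
```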
With the growing social and economic importance of the internet, policies for the digital sector have been enacted. One area is the protection of the Union against cyber-attacks through cybersecurity policies: Directive (EU) 2022/2555, which member states must incorporate into their national law by October 2024, and the Cybersecurity Act (Regulation (EU) 2019/881), which establishes ENISA (the European Union Agency for Cybersecurity) and a framework for voluntary European cybersecurity certification schemes. National cybersecurity law was identified for five of the ten countries (France, Germany, Sweden, Portugal, and the UK).
The two most recent EU policies for the digital sector are the Digital Markets Act (DMA) and the Digital Services Act (DSA). The DMA focuses on ensuring fair and open digital markets by setting rules for large and impactful online platforms and services (e.g. online search engines, online social networking services, web browsers, and virtual assistants). The DSA addresses online intermediaries (e.g. online marketplaces, social networks) and requires them, among other things, to disclose information on the measures and tools used for content moderation, including algorithmic decision-making. Both regulations are binding for all EU member states.
Supporting innovation
The establishment of innovation-friendly environments is embedded in the TFEU, which states that the EU should “have the objective of strengthening its scientific and technological bases”. Several binding decisions established funding programs, such as Horizon Europe, which have AI as one of their priority areas. Complementary to the EU programs, there are national projects and programs aiming to foster (health) innovation. In Germany, health insurance companies may promote the development of digital innovations “to improve the quality and cost-effectiveness of care”. Italy’s projects (the highly specialised competence centres and the ITS Academy) focus on training, capability building, and funding. In Malta, important legislative steps were taken with the establishment of the Digital Innovation Authority. In Portugal, Decree-Law No. 67/2021 of July 30 sets the conditions for creating technological free zones (TFZs). TFZs are test sites that aim “to test new policy concepts, forms of governance, financing systems and social innovations”. Though not specifically geared towards AI, such settings allow for the testing of a broad range of potentially disruptive innovations. Notably, “the tests must not call into question the safety of people, animals and property, and must properly safeguard health and environmental risks in compliance with applicable legislation”.
The EU provides regulations for the legal protection of copyright and related rights in the information society and the digital market in order to enhance the development of innovation. In particular, the 2019 amendment of Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the information society stipulates in Article 1 how computer programs can fall under the protection of intellectual property legislation, while Article 2 highlights that computer programs are not among the eligible parties whose outputs can receive such protection. This is particularly relevant in the age of generative AI, where the AI program itself may fall under the copyright umbrella, but the outputs of that generative AI may not.
Health & human rights
Compared to the other policy domains, health and human rights policies influence AI by establishing standards that should benefit society as a whole and contribute to the well-being of individuals. An important standard at the European level is the European Convention on Human Rights (last amended in 2021). The Convention establishes, amongst others, the right to respect for private life and correspondence, which can only be restricted “in the interest of national security, public safety or the economic well-being of the country (…) for the protection of health or morals”. All ten countries in our study have signed this Convention and are therefore bound by it.
In 2021, Portugal passed its “Portuguese Charter of Human Rights in the Digital Age” law, which states that “The use of artificial intelligence should be guided by respect for fundamental rights, ensuring a fair balance between the principles of explainability, security, transparency, and responsibility, which meets the circumstances of each specific case and establishes processes aimed at avoiding any prejudice and forms of discrimination”. In Sweden, both the “Patient Data Law” and the “Discrimination Act” are central parts of Swedish legislation, but neither explicitly mentions AI. Furthermore, the Swedish “Patient Law” specifies that the patient needs to receive information about the proposed treatment; it remains unclear to what extent this applies when AI-enabled medical devices are used for treatment support. Moreover, it remains ambiguous to what extent the patient should be informed about, and how much information can be provided on, the AI technologies embedded in medical systems or devices. France, Italy, and Malta have specific bodies to uphold health and human rights standards in line with their national legislation. However, the scope of their responsibilities is broad, leaving it undefined whether AI falls within their remit.