The EU AI Act: Incoming Regulations

May 24, 2023

An AI Legislative and Regulatory Update

Key Takeaways

National and regional regulators are developing laws and regulations to address the risks and legal issues around AI, creating a patchwork quilt of requirements from a global perspective. At the moment, the EU is leading the charge.

The US and EU share similar guiding principles for regulating AI, but there is a growing gap in approach (centralized vs. decentralized) and enforceability (the US framework is currently harder to enforce). Trade advocates point to the need for greater harmonization between the two approaches, with the EU imposing stricter requirements on applications with socioeconomic effects.

While the EU AI Act has received praise for the greater role given to civil society participation, there are concerns it could be detrimental to companies building on open-source large language models. The compliance requirements around “general purpose AI,” a category that includes large language models, worry the open-source community. Greater specificity around open-source may be needed to provide safety while still allowing innovation from an important part of the AI ecosystem.

Introduction: The EU AI Act

The EU AI Act is a proposed legislation by the European Union (EU) aimed at regulating artificial intelligence (AI) systems within the EU. Its purpose is to establish a legal framework that governs the development, deployment, and use of AI technologies in a manner that promotes transparency, accountability, and ethical considerations.

Here is what ChatGPT had to say about the EU AI Act:

“The act seeks to address potential risks associated with AI, such as privacy violations, discrimination, and the erosion of fundamental rights. Furthermore it aims to strike a balance between fostering innovation and ensuring the protection of individuals’ rights and well-being.”

Some key elements anticipated to be included in the EU AI Act:

  1. Risk-based approach: The act would likely categorize AI systems into different risk levels based on their potential harm, with stricter regulations applied to high-risk AI systems.
  2. Prohibited AI practices: Certain AI applications that are considered unacceptable or high-risk, such as AI systems used for social scoring and certain types of biometric identification, might be prohibited or heavily regulated.
  3. Transparency and accountability: The EU AI Act would likely require developers and deployers of AI systems to provide clear information about the capabilities, limitations, and intended purposes of their AI technologies. It may also mandate the use of human oversight and the ability to explain AI decisions.
  4. Data governance: The act emphasizes the importance of data protection, privacy, and data governance in the development and deployment of AI systems, ensuring compliance with existing EU regulations such as the General Data Protection Regulation (GDPR).
  5. Market surveillance and enforcement: The EU AI Act would likely establish mechanisms for market surveillance to ensure compliance with the regulations. It may include provisions for testing, certification, and enforcement measures.

(The ChatGPT take on the EU AI Act was fairly accurate, despite the recent activity on the bill that came after the GPT-4 release.) The US and EU have been coordinating their regulatory efforts in order to facilitate EU-US trade; however, there are important differences. Both use risk-based approaches that classify AI use cases from high- to low-risk, with different levels of regulatory requirements for each. However, the US has a distributed approach across different agencies, and development of regulatory guidance has been slow: only 4 of 41 agencies had created AI regulatory plans as of December 2022, with Health and Human Services serving as one of the most robust examples.[1]

While May 2023 saw a congressional hearing on AI regulation with both industry titans and critics of AI testifying, the US approach has so far been outlined through the Blueprint for an AI Bill of Rights, which relies on a sector-driven approach to regulating AI. Most agencies are left to regulate AI using existing laws, and many policy advocates point to the need to enforce legislation already on the books. The FTC has lately played a more active role in prosecuting privacy violations, as in the cases of data shared with Meta and of femtech companies that violated users’ privacy rights by flagrantly sharing personal data with third parties. States such as California, Vermont, and Connecticut are moving ahead with their own legislation to address algorithmic harms,[2] and another 14 have at least introduced AI legislation.

Details of the EU AI Act

The EU AI Act is more detailed than any US regulatory approach so far. It singles out high-risk or problematic applications such as deepfakes, chatbots, and biometrics for closer scrutiny, and imposes outright bans on those classified as “unacceptable risks.” That category includes social scoring similar to the Chinese approach, systems considered manipulative, and biometric applications used by law enforcement in public spaces. High-risk applications used for socioeconomic decisions will have to meet standards for data quality, accuracy, robustness, and non-discrimination, with transparency in technical documentation, the risk management systems used, and human oversight. This means that digital health and many AI-related products will be placed on a continuum of high- to low-risk categorizations and regulated according to those designations.
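
For teams trying to map their own products onto these tiers, a minimal first-pass triage sketch might look like the following. The tier names, example keywords, and use cases below are our own simplification for illustration only, not the Act’s legal categories or criteria; any real classification would require legal review.

```python
from enum import Enum


class RiskTier(Enum):
    """Simplified risk tiers, loosely following the EU AI Act's structure."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # socioeconomic decisions: strict requirements
    LIMITED = "limited"            # e.g., chatbots, deepfakes: transparency duties
    MINIMAL = "minimal"            # everything else


# Illustrative keyword lists only -- not the Act's legal definitions.
BANNED_USES = {"social scoring", "public biometric surveillance", "manipulative"}
HIGH_RISK_DOMAINS = {"credit", "hiring", "housing", "medical diagnosis"}
LIMITED_RISK_SIGNALS = {"chatbot", "deepfake"}


def triage(use_case: str) -> RiskTier:
    """Rough first-pass mapping of an AI use-case description onto a risk tier."""
    text = use_case.lower()
    if any(term in text for term in BANNED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in text for term in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if any(term in text for term in LIMITED_RISK_SIGNALS):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    for case in ["hiring resume screening", "customer service chatbot", "social scoring"]:
        print(f"{case} -> {triage(case).value}")
```

In practice, a high or unacceptable designation would trigger the documentation, risk management, and human oversight obligations described above, or rule the use case out entirely.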

This also means that many medical products will be regulated under existing rules from the New Legislative Framework and will not be required to undergo a second conformity assessment under the AI Act. However, there are also potential new regulatory moves that may directly impact large language models (including multimodal models) and systems considered to be “general purpose AI systems.”

The Brookings Institution (ibid) has analyzed the EU and US approaches and found them similar from a risk management perspective, covering many of the familiar buckets of responsible AI frameworks: accuracy and robustness; safety; non-discrimination; security; transparency and accountability; explainability and interpretability; and data privacy, with both also relying heavily on standards bodies to create guardrails. However, the EU adopts a more centralized, comprehensive approach compared to the more decentralized, ambiguous approach that US agencies have taken so far. This means the US approach is currently less legally enforceable than its EU counterpart.

The Brookings analysis also notes substantial misalignment between the EU and US on AI applications used for socioeconomic decisions in areas such as finance, housing, and employment; these domains have not worked as closely with NIST and other standards bodies. Healthcare is a bit different in this regard, and the NIST AI Risk Management Framework is playing an increasingly important role in how the AI regulatory discussion is progressing in the health IT industry. Online platforms are coming under heavy scrutiny in the EU, given their higher risk levels, and the US has fallen behind.

One area that may grow in importance in the coming years is open-source generative AI, despite the attention on OpenAI’s proprietary approach to GPT-4. While the latest version of the AI Act’s proposed rules received accolades for the larger role given to civil society, the approach to “general purpose AI systems,” which include large language models (LLMs), is viewed as having a “chilling effect” on the development of open-source models. The regulatory requirements embodied in the current proposed rules would create substantial compliance barriers for the smaller companies that typically contribute to the open-source ecosystem. Critics point out the unfair advantage this gives big tech over open-source proponents.

Conclusion

Companies developing AI applications will increasingly face a growing patchwork quilt of regulatory frameworks across different geographic scales. At the US state level, laws such as the California Consumer Privacy Act of 2018 add pro-consumer privacy regulations that limit the data that can be collected on consumers and give them more control over their data. As seen above, there are divergent approaches between the EU and US as well. Companies working across these jurisdictions will need to map out compliance frameworks that meet the highest level of regulatory requirements in order to reach global markets.
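
A minimal sketch of what “comply with the strictest jurisdiction” could look like as a mapping exercise is below; the jurisdictions, controls, and requirement levels are hypothetical placeholders for illustration, not a summary of what any of these regimes actually mandates.

```python
# Hypothetical requirement levels, ordered from least to most demanding.
REQUIREMENT_LEVELS = ["none", "self-attestation", "documentation", "conformity-assessment"]
RANK = {level: i for i, level in enumerate(REQUIREMENT_LEVELS)}

# Illustrative assumption only: what each jurisdiction might require per control.
jurisdiction_requirements = {
    "EU":         {"transparency": "documentation", "human-oversight": "conformity-assessment"},
    "US-federal": {"transparency": "self-attestation", "human-oversight": "none"},
    "California": {"transparency": "documentation", "data-minimization": "documentation"},
}


def strictest_requirements(per_jurisdiction: dict) -> dict:
    """For each control, keep the most demanding level required in any target market."""
    merged: dict = {}
    for requirements in per_jurisdiction.values():
        for control, level in requirements.items():
            if RANK[level] > RANK[merged.get(control, "none")]:
                merged[control] = level
    return merged


print(strictest_requirements(jurisdiction_requirements))
# -> transparency: documentation, human-oversight: conformity-assessment,
#    data-minimization: documentation
```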

The big tech players responsible for creating most of the dominant LLMs to date are aggressively lobbying against the EU AI Act, which would make the likes of OpenAI, Microsoft, and Google liable for harm resulting from their models. Critics, meanwhile, worry that OpenAI’s use of plug-ins may make phishing and scams easier.

Shifts in the US political landscape have increasingly moved the power to regulate businesses from Congress to the Supreme Court. As a result, the power to determine how strictly these regulations are enforced will rest with the courts rather than with the government agencies that have historically been responsible for regulation. The EU may ultimately lead in the development of actual, enforceable AI legislation.

The differences between these regulatory regimes may add impetus to building robust responsible AI frameworks and responsibility-by-design approaches early in the technology development process in order to attain universal compliance; one could argue that this approach would yield more trustworthy technologies and more robust adoption in the long run. The EU and California are adopting more consumer-friendly regimes than the US federal approach.

In our Trust and AI in Healthcare report, we provide an overview of the frameworks being developed to build trust in AI in healthcare. The report also covers where we feel the industry should go with cooperation to ensure that only the highest quality models reach the market, which means going beyond standard compliance checklists. The Coalition for Health AI, hosted by MITRE, is working on the standards and processes across the various domains that will need to be certified and validated. At Chilmark Research, we are doing a deep dive into these certifications and evaluation frameworks so that in the coming months we can offer advisory services for companies wanting to develop a more robust responsible AI program.


[1] https://www.brookings.edu/research/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/

[2] Ibid, see FN 1.
