EU AI Act agreed: 5 key considerations for businesses for the road ahead

Introduction

After a long and intensive period of negotiations, a provisional agreement was reached on the AI Act on 8 December 2023.

Although the final text has not yet been published, organisations that develop, provide or use AI in any way should take the opportunity now to understand the impact of this groundbreaking legislation and to familiarise themselves with its core elements ahead of time.

This blog picks out, at a high level, some of the broader points around the AI Act that businesses should be aware of when looking to get ahead on mapping out their AI strategy.

Key considerations

1. Scope

As with the EU GDPR, the AI Act will have a far-reaching effect and capture those who deploy or use in-scope AI systems in the EU, regardless of their location. For example, a UK-based AI-powered technology company will be caught by the AI Act if it deploys its technology to customers in the EU.

Other players in the AI supply chain may also be caught: for example, importers and distributors of AI systems, product manufacturers who deploy AI systems in the EU, and EU-established authorised representatives of providers.

Companies should check now whether the AI systems they develop, manufacture, distribute or use fall within scope, and into which category their AI-related activities fall, as this will determine the applicable level of obligations and risk.

2. Classifications

There are certain AI uses which are banned under the AI Act. These include the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in workplaces or educational environments, and social scoring based on behaviour or personal characteristics. Businesses should take steps to identify whether any of their uses of AI fall within this ‘unacceptable risk’ category and consider removing them from their products and services if they want to maintain access to the EU market.

There are also certain AI uses which are classed as high-risk due to their potential to cause significant harm. The key takeaway here is that high-risk uses of AI are subject to stricter requirements, such as maintaining detailed technical documentation, ensuring human oversight and implementing risk-mitigation systems. Businesses should assess their use of AI and consider whether they need to comply with these additional obligations, as this could result in higher compliance and operational costs.

3. AI Governance

A company may use AI in a variety of ways, and those uses may fall into different risk-based categories under the AI Act. A ‘one size fits all’ AI governance strategy may therefore not be appropriate. When structuring an AI governance team, businesses should consider including individuals from a range of existing teams to ensure that the requirements of the AI Act can be fully met. For example, although certain requirements will be familiar to privacy teams (e.g. risk and impact assessments), AI also demands technical knowledge relating to the testing and monitoring of systems, oversight and transparency requirements. A broad mix of knowledge and skills across AI, privacy and security is therefore the best foundation for a comprehensive governance strategy.

4. Interplay with wider regulatory landscape

The AI Act will not exist in a vacuum, and it is not the beginning and end of AI governance. It must be read alongside other laws in the regulatory landscape, e.g. the GDPR. The interplay with privacy is clear, given that data is at the heart of AI systems. This inextricable link is demonstrated, for example, by the provisions in the GDPR on automated decision-making. Earlier this month, the CJEU handed down its first judgment interpreting Article 22 GDPR, deciding what constitutes ‘automated decision-making’ (C-634/21 SCHUFA Holding (Scoring)).

There are also proposals for further regulation, put forward around the same time as the AI Act: the AI Liability Directive and the updated Product Liability Directive. These measures, which essentially revise long-standing EU product liability rules, will make it easier for individuals in the EU who are harmed by AI systems to seek compensation.

5. Enforcement

Fines for non-compliant entities could reach €35 million or 7% of global annual turnover, whichever is higher. This exceeds the maximum under the EU GDPR, which currently stands at €20 million or 4% of global turnover. Businesses should take into account this significantly higher financial exposure than they may have been used to under the GDPR.[1]

At a national level, the specifics of enforcement are generally still unknown, although some member states have already started to designate roles for certain regulators in the AI enforcement space. For example, Spain was the first to assign a standalone AI governance regulator[2]. However, given the cross-dimensional nature of AI, it may be the case that some member states designate several existing regulators to take the lead on AI enforcement.

At an EU level, the AI Act will set up a new AI Office to coordinate compliance, implementation, and enforcement. It will be the first body globally to enforce binding rules on AI.

Businesses should also be aware that individuals will have a right to file complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that impact their rights.

Next steps and dates

We still need to wait for the finalised text of the AI Act to know the specifics of what has been agreed at a technical level. This is expected to be published in late January or early February 2024, noting that the Parliament will need to adopt the text before the current session ends, and bearing in mind the European elections to be held in early June 2024.

The AI Act then needs formal approval by the Parliament and the Council and will enter into force 20 days after its publication in the Official Journal.

The AI Act will then become applicable two years after its entry into force, with the exception of the provisions on prohibitions, which will apply after six months, and the rules on general-purpose AI and high-risk AI, which will apply after 12 months.


[1] Note that the level of fines will vary based on the type of infringement and the size of the company.

[2] The Spanish Artificial Intelligence Supervisory Agency (AESIA) was established on 22 August 2023.
