A Comprehensive Guide to the EU AI Act


1. What is AI?

The concept of Artificial Intelligence has been around for quite some time, mostly in movies and often in a rather disastrous context. In recent years, however, with the rise of ChatGPT, AI image generators, deepfakes and the like, the popularity of AI has surged.

2. But what is Artificial Intelligence?

According to ISO/IEC 22989:2022, Artificial Intelligence is “a technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.”

In more human language, Artificial Intelligence is a field of computer science focused on creating systems that can perform tasks which typically require human intelligence, replicating or even surpassing human abilities in machines to accomplish those tasks efficiently.

3. What is the EU AI Act?

In this fast-evolving landscape of AI systems, models and applications, the European Union has taken a significant step forward with the introduction of the EU AI Act. Even before its adoption, the regulation was the subject of many coffee-corner talks, discussions and opportunities.

One thing is very obvious: the need for a regulatory framework is more pressing than ever.

First and foremost, to comprehend the EU AI Act, there are a few concepts you need to understand:

  • AI system: An AI system is essentially a machine-based setup meant to function with varying levels of autonomy or independence, and that may exhibit adaptiveness after deployment. Its main function is to infer, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions.
  • High-Risk AI System: AI systems that pose significant potential risks to the health, safety, fundamental rights or welfare of individuals or society as a whole. These systems typically involve AI applications in critical sectors such as healthcare, transportation, energy and law enforcement.
  • General Purpose AI model: AI systems designed for broad and flexible applications across various domains.
  • Provider: A natural or legal person that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
  • Deployer: A natural or legal person using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.

The EU AI Act lays down rules regarding the placing on the market, putting into service and/or use of AI systems within the Union, sets out transparency obligations and identifies certain high-risk categories of AI systems or models. These high-risk AI systems are subject to more specific and strict rules.

4. Who does the AI Act apply to?

The AI Act is applicable to providers, deployers, importers, distributors and authorised representatives of providers, laying down specific rules and obligations for each actor.
Although most rules apply to providers (the developers) of AI systems, deployers (organisations that make use of AI systems) are also impacted by this regulation.
Check out our EU AI Act Assessment to check if the EU AI Act applies to you, your organisation or AI system and which rules and obligations you must follow!

Check out our blog

The Ultimate Guide to Data Protection Impact Assessments

5. When will the AI Act come into force?

On 21 May 2024, the Council of the European Union approved the EU Artificial Intelligence Act. The EU AI Act will enter into force on the 20th day after publication in the EU’s official journal.

The majority of its provisions will take effect two years after the AI Act’s entry into force. However, the provisions related to prohibited AI systems will be enforceable after six months, and the provisions regarding general-purpose AI will apply after 12 months.

6. AI and Privacy

But, more importantly, where do Privacy and Data Protection fit into all this Artificial Intelligence? And where do you start when your organization wants to use, implement or even develop an AI system?

Step 1 ﹣ From the start: Necessity

When your organization wants to use, integrate or even build an AI system, start by asking the question “why?”. Why would we need an AI system, what problems would it fix, and are those business-critical or rather nice-to-have?

Step 2 ﹣ Identify the type of AI system

The second step is a little more technical. You should check the risks your AI system poses to its users or to your clients, customers, members, etc. Annexes I to III of the EU AI Act list industries, techniques, purposes, etc. that indicate whether your AI system is considered a high-risk AI system.

Step 3 ﹣ Use the principles

If the AI system interacts with natural persons, it is safe to assume that personal data will be processed, and thus that the GDPR applies. Once you have established the necessity of the AI system, you should test it against the GDPR principles:

Lawfulness, fairness and transparency

What will be the legal ground for processing the personal data of our clients, users, employees, etc.? Can we provide sufficient transparency regarding the processing of personal data?

If your organization deploys an AI system, the EU AI Act obliges the provider to supply you with sufficient documentation to ensure transparency towards your data subjects. However, it is up to your organization, as the data controller, to provide a legal ground and ensure fairness towards the data subjects.

In specific cases laid down in the EU AI Act, you are also obligated to present transparency notices to end-users, reminding them that they are interacting with an AI system or notifying them that text, images, videos or audio fragments were created using Artificial Intelligence.

Purpose limitation

Quite similar to the necessity of the AI system, the purpose for which personal data is processed should be clearly established, and further processing outside the scope of that purpose requires an additional GDPR compliance check!

Data minimization

It is obvious to check the intended use of an AI system; however, you should also take into account its possible misuse. What data is being processed by the AI system, and is all of that data actually necessary for its purpose?

Accuracy

Was the information used to train the AI system correct? How will the provider, or you, ensure that the outputs are correct as well?

Storage limitation

How does your organization ensure that personal data is kept no longer than necessary? As a deployer, can you ensure that the personal data is actually deleted?

Do not forget that in certain cases, the EU AI Act requires you to keep logs of your AI system. Is your organization able to manage data retention for those logs as well?

Integrity and confidentiality

Where does the tool come from? Who is the organization behind the AI system? And what measures can you or your organization take to ensure the integrity and confidentiality of the information processed?

Accountability

Be sure to document the use of your AI system in your record of processing activities with all the required information.

Check out our webinar

AI Tools & GDPR Compliance

Step 4 ﹣ Perform the necessary assessments

If you are using a high-risk AI system, performing a DPIA is mandatory by law. In other cases, it may also be required: since you are likely using new technologies, possibly on a large scale, it is safe to say your organization needs to perform a DPIA.

Conclusion

In conclusion, the rapid advancement of AI brings both opportunities and challenges, particularly in regulatory compliance and data protection. The EU AI Act provides a framework to ensure responsible AI use, requiring organizations to adhere to principles like necessity, transparency, and accountability. By understanding and complying with these regulations, and conducting necessary assessments like DPIAs, organizations can benefit from AI while protecting individual rights.

For more guidance, explore resources such as the EU AI Act Assessment and our RESPONSUM Expert Session on AI Tools & GDPR Compliance.

Still have questions? Don’t hesitate to contact us!


Written by

Brian Goyvaerts

Implementation Consultant @ RESPONSUM


Copyright © RESPONSUM BV
