EU AI Act: Key to the future of artificial intelligence in Europe

Key objectives of the EU AI Act: Ethics, transparency and accountability

AI

Date

November 2025

Reading time

Approximately 11 minutes


Silvia Núñez

Head of Systems at Iberdrola España

Pablo Salas

Legal Counsel, Global Legal Services Regulation, Resources and Services at Iberdrola

The European Artificial Intelligence Act is the first major global legislation aimed at establishing a common legal framework for the development, marketing and use of this technology. This pioneering regulation combines the promotion of innovation with the guarantee that artificial intelligence (AI) is applied safely, ethically and with full respect for fundamental rights. Here we examine its main objectives, the risk classification it establishes, the implementation timelines and Iberdrola’s commitment to the responsible use of AI.

The European Artificial Intelligence Regulation, commonly known as the AI Act and officially titled Regulation (EU) 2024/1689, is the world’s first comprehensive legal framework on AI and marks a milestone in the way this technology is governed. Its purpose is twofold: to harness the potential of artificial intelligence in Europe while ensuring that its development and application are carried out with safety, transparency and full respect for fundamental rights. The regulation therefore promotes the development of safe and trustworthy AI, while fostering an environment that encourages innovation and public policies aimed at driving investment in new projects. To achieve this, it introduces a system that categorises AI uses according to their level of risk, ranging from everyday applications to those with a critical impact on people’s lives.

Alongside this landmark regulation, other initiatives are underway to improve AI governance, whether through guiding principles or codes of conduct. Organisations leading these efforts include the United Nations (UN), the Organisation for Economic Co-operation and Development (OECD) and the G7.

Why was AI regulation necessary?

The European Commission summarises the need for this regulation in one key idea: that “Europeans can trust what AI has to offer”. While most artificial intelligence systems are safe and beneficial, some may involve significant risks. It is not always possible to understand why an algorithm makes a particular decision, which can lead to unfair situations such as bias or discrimination. Existing legislation was not sufficient to address these challenges, hence the creation of this new framework – designed to bring trust, order and transparency to a field that is evolving at great speed. 

What’s more, the AI Act forms part of a broader European Union strategy to strengthen a robust and responsible innovation ecosystem. Alongside the AI Innovation Package, AI Factories and the Coordinated Plan on AI, this regulation aims to encourage investment, attract talent and position Europe as an international leader in the development of reliable, ethical and human-centred artificial intelligence. 

Risk classification under the European AI Act: An impact-based approach

The AI Act establishes four levels of risk for AI systems:

  • 1. Minimal or no risk
  • 2. Limited risk
  • 3. High risk
  • 4. Unacceptable risk
1. Minimal or no risk

Types of use

AI systems with no impact on fundamental rights, health or safety.

Legal implications

No specific requirements, although voluntary compliance measures (codes of conduct) may be adopted.

Example

Email spam or junk filters, or AI-based or AI-enabled video games.

2. Limited risk

Types of use

AI systems not classified as high risk but whose use may pose certain risks to health and safety or to individuals’ fundamental rights.

Legal implications

Transparency obligations to ensure that users are informed, clearly know they are interacting with an AI system, and can identify content that has been artificially generated or manipulated, among other requirements.

Example

Chatbots, virtual assistants or image and text generators.

3. High risk

Types of use

AI systems that may pose serious risks to people’s health, safety or fundamental rights.

Legal implications

Subject to strict requirements regarding transparency, human oversight, accuracy, robustness and cybersecurity, among others.

Example

Biometric AI systems for emotion recognition, or systems used for hiring or selecting individuals – in particular, for publishing targeted job advertisements, analysing and filtering applications, and assessing candidates.

4. Unacceptable risk

Types of use

AI systems considered a clear threat to people’s health, safety and fundamental rights.

Legal implications

Prohibited within the European Union.

Example

The use of AI through subliminal or manipulative techniques to influence human behaviour, or AI systems designed to evaluate or classify individuals based on their behaviour in a way that produces a “social scoring” effect resulting in unfair or adverse treatment.

Source: Regulation (EU) 2024/1689 of the European Parliament and of the Council; European Commission, ‘AI Act enters into force’

Timeline of the European AI Act: key dates and implementation deadlines

The application of the European AI Act has been designed with a phased timeline to give businesses, administrations and developers time to adapt. Since its publication in the Official Journal of the European Union in July 2024, the regulation has set out staggered deadlines for the application of its provisions, starting with the prohibition of unacceptable-risk practices and culminating in general application from August 2026.

This timeline balances legal certainty with sufficient time for the transition to a responsible artificial intelligence ecosystem. Here are some of the key dates in this regulation: 


December 9, 2023

Provisional agreement

The European Parliament and the Council reach a provisional agreement on the AI Act, establishing the base text.

May 21, 2024

Formal adoption by the Council of the European Union

The step prior to its official entry into force.

July 12, 2024

Publication in the Official Journal of the European Union

The regulation is published in full, including all its articles.

August 1, 2024

European AI Act enters into force

The regulation officially enters into force on this date, although its provisions apply progressively.

February 2, 2025

Certain prohibitions and early obligations begin to apply

General provisions and the ban on AI practices with unacceptable risk (prohibited AI practices) come into effect, along with obligations on AI literacy (training and awareness).

May 2, 2025

Codes of practice to support correct implementation of the AI Act

The Commission finalises the codes of practice and adopts the necessary measures, including inviting providers of general-purpose AI models to adhere to them.

August 2, 2025

Various provisions begin to apply

The provisions on notifying authorities and notified bodies, general-purpose AI models, EU and national governance (designation of competent authorities), the penalty regime (except fines for providers of general-purpose AI models) and confidentiality obligations begin to apply.

February 2, 2026

European Commission guidance on high-risk AI systems

The European Commission will provide practical guidance on the classification rules for high-risk AI systems, along with a comprehensive list of practical examples of high-risk and non-high-risk AI use cases.

August 2, 2026

General application of the AI Act

The AI Act will apply in general, with the exception of one provision relating to the classification of high-risk AI systems.

August 2, 2027

Compliance with certain specific obligations related to high-risk AI classification

Application of specific obligations related to the classification of high-risk systems, particularly for providers of general-purpose AI models.

Going forward

Evaluation and adoption of delegated acts, among others

The Commission will evaluate the implementation of the AI Act and the functioning of the authorities, and may adopt delegated acts.

Source: Regulation (EU) 2024/1689 of the European Parliament and of the Council; Council of the European Union, European Parliament [PDF].

Iberdrola’s commitment to the European AI Act: ethics, transparency and regulatory compliance

At Iberdrola, we remain firmly committed to aligning with the principles set out in the European regulation on artificial intelligence. Our goal is not only to meet the new legal requirements of the so-called AI Act, but also to stay ahead of them – and of the certification processes – through the continuous improvement of our systems and internal procedures. In this way, we have been pioneers in achieving key milestones that have set the standard for the responsible, ethical and transparent use of AI.

Iberdrola’s AENOR certification: a pioneering step in ethical AI management

In September 2025, Iberdrola Clientes and Iberdrola Energía España – both companies that form part of Iberdrola España – reached a major milestone by becoming the first companies to certify their Artificial Intelligence Management System (AIMS) under the international ISO/IEC 42001 standard with AENOR, the Spanish Association for Standardization and Certification. This achievement confirms that their AI systems not only enhance process efficiency but are also managed responsibly, safely and ethically. 


Iberdrola and the EU AI Pact: Committing to safe and responsible AI

In September 2024, Iberdrola became the first European energy company to join the European AI Pact – a voluntary initiative promoted by the European Commission. By joining, we voluntarily committed to implementing some of the provisions of the European AI Act ahead of its full entry into force, such as transparency, responsible governance and risk mapping, positioning ourselves at the forefront of building a solid framework of trust and accountability. 

Objectives of the AI Pact

  • Promote an AI governance strategy that encourages its adoption across the organisation and supports progress toward compliance with the forthcoming AI Act.
  • Identify and map AI systems that may be classified as high risk under the framework of the regulation.
  • Raise awareness and promote AI literacy among employees, fostering ethical, responsible use aligned with the organisation’s values.

Ensuring transparency and ethics in Iberdrola’s AI systems: a model aligned with EU standards

At Iberdrola, we have a Policy on the Responsible Development and Use of Artificial Intelligence Tools, approved by the Board of Directors of Iberdrola, S.A. on 10 May 2022 and last amended on 25 March 2025. This policy sets out the basic principles that must guide the design, development and implementation of AI tools, including equal opportunity and non-discrimination, transparency, security and resilience, privacy and an innovative culture. 

Iberdrola and AI in the energy transition: leading the way toward a sustainable and ethical future

At Iberdrola, we have been using AI for more than a decade to make predictions, optimise processes and identify patterns applicable to our daily operations. As announced at the 2025 Digital Summit, we already have more than 150 use cases across the Group. Among the areas where this technology is being applied, renewable energy and electricity networks stand out. 

Clean energy

AI helps us make the most of the wind and sun in power generation. It can be applied at every stage of the process – from plant design, using models that identify the best location for a wind turbine, to operation and maintenance, through algorithms capable of anticipating faults before they occur by analysing millions of data points. What’s more, AI enables precise forecasting of wind and solar output for every hour of the day and for each turbine or solar panel at our facilities worldwide.

Electricity grids

At Iberdrola, we use artificial intelligence to enhance the customer experience and anticipate operational needs. Thanks to an algorithm, we can offer accurate estimates of supply restoration times in the event of an outage. AI also allows our teams to plan ahead which networks or substations may require partial upgrades, relying on the analysis of around a hundred variables and more than six years of historical data to predict potential failures.

Biodiversity protection

AI is also applied to the protection of birdlife, with systems that detect birds within a radius of up to five kilometres around wind farms and automatically stop the necessary turbines to prevent any risk.

Backup batteries

At Iberdrola, we are exploring new AI applications such as optimising the location of backup batteries to strengthen the grid against possible incidents.

Since 2023, we have also had an AI Technology Centre that brings together the specialised knowledge needed to drive these initiatives, supporting business areas in developing and implementing use cases.

We are also active in education through public-private partnerships, organising multi-stakeholder workshops under the Iberdrola Chair for the Sustainable Development Goals, promoted by the Innovation and Technology for Human Development Centre at the Technical University of Madrid (itdUPM) and Iberdrola. This project aims to critically and collaboratively analyse the use of these technologies in customer service. Through sessions with companies, government and academia, it explores both the benefits and the risks of generative AI, fostering governance, transparency and human oversight mechanisms that ensure the ethical and responsible development of artificial intelligence.
