Policy on the Responsible Development and Use of Artificial Intelligence Tools

Iberdrola guarantees responsible, transparent, safe and reliable use of artificial intelligence systems and algorithms

Corporate Governance.

 

20 February 2024

The Board of Directors of IBERDROLA, S.A. (the “Company”) has the power to design, assess and continuously revise the Governance and Sustainability System, and specifically to approve and update the corporate policies, which contain the guidelines governing the conduct of the Company and of the companies belonging to the group of which the Company is the controlling entity, within the meaning established by law (the “Group”).

Pursuant to the provisions of the Company’s By-Laws and as part of its commitment to the social dividend, the innovation and digital transformation strategy of the Group must be focused on the sustainable creation of value, in accordance with the Purpose and Values of the Iberdrola Group and with the commitments made in the Code of Ethics.

Aware of the significance of the development and implementation of artificial intelligence tools for the application of this strategy, and of the importance of ensuring their responsible use, in accordance with the Company’s corporate philosophy and the principles that inform its corporate culture based on ethics and the commitment to sustainable development, the Board of Directors approves this Policy on the Responsible Development and Use of Artificial Intelligence Tools (the “Policy”), which is aligned with the OECD Council’s Recommendation on Artificial Intelligence.

1. Purpose

The purpose of this Policy is to establish the common and general principles and guidelines for conduct that are to govern the design, development and application of artificial intelligence tools, defined as any automated system designed to function with different levels of autonomy and which may, with explicit or implicit aims, generate results such as predictions, recommendations or decisions, which in turn influence physical or virtual environments. It also has the purpose of regulating the responsible use of these tools, ensuring compliance with applicable law, the Purpose and Values of the Iberdrola Group, the Code of Ethics and the other rules that form part of the Governance and Sustainability System. 

In this regard, this Policy establishes the principles and guidelines to ensure the responsible, transparent, secure and trustworthy use of artificial intelligence systems by the companies of the Iberdrola Group. 

2. Scope of Application

This Policy applies to all companies of the Group, as well as to all investees not belonging to the Group over which the Company has effective control, within the limits established by law.

Without prejudice to the provisions of the preceding paragraph, listed country subholding companies and their subsidiaries, based on their own special framework of strengthened autonomy, may establish an equivalent policy, which must be in accord with the principles set forth in this Policy and in the other environmental, social and corporate governance and regulatory compliance policies of the Governance and Sustainability System.

At those companies in which the Company has an interest and to which this Policy does not apply, the Company will promote, through its representatives on the boards of directors of such companies, the alignment of their own policies with those of the Company.

This Policy shall also apply, to the extent relevant, to the joint ventures, temporary joint ventures (uniones temporales de empresas) and other equivalent associations, if the Company assumes the management thereof.

Finally, the principles established in this Policy shall also apply to the suppliers who develop artificial intelligence tools for the Company or the entities subject to this Policy, to the extent appropriate.

3. Main Principles of Conduct 

To comply with the commitment outlined in Section 1 above, the companies to which this Policy applies must design, develop, apply and use artificial intelligence tools in accordance with the following main principles of conduct: 

a) Principle of Respect for Human Beings and Social Wellbeing 

Artificial intelligence systems will be developed and used as tools in the service of people, fully respecting human dignity and the environment, in accordance with the technological state of the art at any time and so that they benefit all human beings, endeavouring to ensure that the development thereof contributes to the achievement of the Sustainable Development Goals (SDGs) approved by the United Nations (UN).

They shall endeavour to use artificial intelligence tools responsibly, in compliance with the Iberdrola Group’s commitment to human rights and with the principles that inform the Purpose and Values of the Iberdrola Group and the Code of Ethics, facilitating human control and supervision of their design and use. In any event, they shall pay special attention to ensuring that artificial intelligence systems do not harm health or safety or have a negative impact on fundamental human rights.

b) Principle of Diversity, Non-Discrimination and Fairness 

They shall endeavour to develop and use artificial intelligence systems so that they foster equality of access, gender equality and cultural diversity, while avoiding biases with discriminatory effects (based on race, ethnic origin, religion, sex, sexual orientation, disability or any other personal condition) and unfair prejudice.

c) Principle of Culture of Innovation

They shall endeavour to ensure that the design, development and application of artificial intelligence tools are aligned with the Group’s innovation strategy, which seeks to keep it at the forefront of new technologies and disruptive business models, by encouraging a “culture of innovation” that pervades the entire organisation and promotes motivating work environments that favour and reward the generation of ideas and innovative practices. 

d) Principle of Privacy

They shall ensure that artificial intelligence systems are developed and used in accordance with privacy and data protection laws, as well as with the Governance and Sustainability System, and that the data they process comply with established standards of quality and integrity.

e) Principle of Transparency 

Artificial intelligence systems shall be developed and used so that they permit adequate tracking and transparency, ensuring that users are aware they are communicating or interacting with an artificial intelligence system, for which purpose they shall duly inform affected persons of such system’s capacities and limitations, as well as of the rights that protect them.

f) Principle of Security and Resilience 

They shall endeavour to ensure that artificial intelligence systems are developed and used so that they minimise unintentional and unexpected harm and are resilient against unauthorised attempts to access them or to alter their use or performance, and against unlawful and malicious third-party use, ensuring continuity of service provision at all times.

They shall have hardware, technical and software security mechanisms to protect and foster the proper functioning of their artificial intelligence systems against any alteration, misuse or unauthorised access (physical or cyber), as well as to guarantee the integrity of data that are stored or transmitted via those systems.

Without prejudice to the exceptions that may be established for well-founded reasons by the Digital Transformation Division (or such division as assumes the duties thereof at any time), they shall generally not develop or use artificial intelligence systems that are classified as high-risk pursuant to the standards established at any time.

g) Principle of Training and Awareness-Raising

They shall endeavour to ensure that the developers of artificial intelligence tools receive training on all aspects required to understand the risks implicit in the use of those systems, such as legal and ethical considerations, behavioural aspects and best security practices, so as to ensure that the end users of artificial intelligence tools can use them safely.

4.  Instruments and Coordination of the Digital Transformation and Use of Artificial Intelligence 

To achieve the specified goals, the Company has a Digital Transformation Division (or such division as assumes the duties thereof at any time), which may rely on an Artificial Intelligence Global Coordination Group created for this purpose and acting in coordination with any local groups created at the country subholding companies. This division shall prepare the procedures required to ensure the proper use of artificial intelligence and the management of the potential risks arising from the use thereof.

5. Supervision 

The Digital Transformation Division (or such division as assumes the duties thereof at any time) shall supervise compliance with the provisions of this Policy and regularly report to the Audit and Risk Supervision Committee thereon.

Similarly, the Digital Transformation Division (or such division as assumes the duties thereof at any time) shall review this Policy at least once per year to ensure that the content thereof conforms to the ongoing progress, innovations, risks and regulatory changes that are occurring in the area. 
 

This Policy was initially approved by the Board of Directors on 10 May 2022 and was last amended on 20 February 2024.