Outreach
Aug. 2025

EU AI Act: keys to responsible AI implementation

[Begoña Vega, Head of AI Models & Applications, in Big Data Magazine]

The European Union has set a global precedent with the “EU AI Act,” the first comprehensive regulatory framework for artificial intelligence. In force since August 2024 and being phased in gradually until August 2027, the regulation lays down standards to ensure that AI is developed in a way that is safe, ethical and respectful of fundamental rights.

In line with the regulation, Spain has recently approved the “Draft Bill for the Good Use and Governance of Artificial Intelligence,” which reinforces the European approach by requiring providers and developers operating in the EU to comply with strict standards or face possible sanctions. Aware of the new legislative framework and responding to their customers' needs, the experts at Innova-tsn, a consulting firm specialising in the full data lifecycle and artificial intelligence, have identified the key points of the regulations for responsible adoption:

  • Risks: the draft bill classifies AI systems into four levels of risk in order to apply rules proportionate to their impact. Unacceptable-risk systems, which violate fundamental rights or manipulate human behaviour, are banned throughout the European Union. High-risk systems, such as those used for medical diagnosis or staff recruitment, are allowed but subject to strict requirements such as conformity assessments, human supervision and governance mechanisms.

Limited-risk systems, on the other hand, such as text or image generators, are subject to transparency obligations towards the user. Finally, minimal-risk systems, such as spam filters or virtual assistants, are not subject to AI-specific regulation, although they must respect general compliance rules such as the General Data Protection Regulation (GDPR).
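
As an illustration, the following sketch shows how an organisation might encode this four-tier triage when taking stock of its AI inventory. The systems listed echo the examples above and are purely illustrative: a real classification is a case-by-case legal assessment, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act."""
    UNACCEPTABLE = "banned throughout the EU"
    HIGH = "allowed, subject to conformity assessment and human oversight"
    LIMITED = "allowed, subject to transparency obligations"
    MINIMAL = "no AI-specific obligations (general rules such as the GDPR still apply)"

# Illustrative mapping only: real classification is a case-by-case
# legal analysis of each system's purpose and context of use.
EXAMPLE_TRIAGE = {
    "social-scoring engine":     RiskTier.UNACCEPTABLE,
    "medical-diagnosis support": RiskTier.HIGH,
    "CV-screening tool":         RiskTier.HIGH,
    "text/image generator":      RiskTier.LIMITED,
    "spam filter":               RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_TRIAGE.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```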

  • Penalty regime: the legislation imposes fines proportional to the seriousness of the infringement, reaching up to €35 million or 7% of global annual turnover, whichever is higher, for the most serious cases, such as the creation or use of prohibited systems. Other breaches, such as failing to submit required documentation or providing incorrect information in response to an official request, can lead to penalties of up to €15 million or 3% of global turnover.

For less serious breaches, fines of up to €7.5 million or 1% of turnover are envisaged. By contrast, the draft bill takes a more permissive approach to the public sector, providing only for warnings or reprimands, even where prohibited or high-risk technologies, such as remote biometric identification, are used.
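
The “whichever is higher” rule is simple arithmetic. The sketch below applies it to a hypothetical company with a global annual turnover of €2 billion; the turnover figure is invented purely for illustration.

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float) -> float:
    """The applicable ceiling is the higher of a fixed cap and a share of turnover."""
    return max(cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical global annual turnover: EUR 2 billion

# Most serious infringements: EUR 35 million or 7%, whichever is higher.
print(max_fine(turnover, 35_000_000, 0.07))  # 140000000.0 -> the 7% share applies
# Documentation or information breaches: EUR 15 million or 3%.
print(max_fine(turnover, 15_000_000, 0.03))  # 60000000.0
# Less serious breaches: EUR 7.5 million or 1%.
print(max_fine(turnover, 7_500_000, 0.01))   # 20000000.0
```

For a smaller provider with, say, €100 million in turnover, the fixed caps dominate instead.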

  • Transparency: AI systems must provide clear and accessible documentation of the logic behind the model, the use of data and automated decisions, especially where fundamental rights may be affected. This transparency must be maintained throughout the model's entire lifecycle, from data collection and preparation to final monitoring and maintenance.

For generative AI systems, such as ChatGPT or Copilot, additional requirements apply, such as detailed summaries of training data, watermarks to identify synthetic content, and explicit consent for the use of biometric data, with storage of such data limited to the time strictly necessary.
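
As a minimal illustration of the labelling idea, the sketch below tags a generated image with machine-readable metadata using the Pillow library. A metadata tag is the simplest possible stand-in: real watermarking schemes embed the signal in the content itself so it survives re-encoding, and the model name used here is hypothetical.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), "white")  # stand-in for generated output

# Attach a machine-readable disclosure label as PNG metadata.
label = PngInfo()
label.add_text("ai_generated", "true")
label.add_text("generator", "example-model-v1")  # hypothetical model name

image.save("output.png", pnginfo=label)

# Any downstream tool can read the label back:
print(Image.open("output.png").text)  # {'ai_generated': 'true', 'generator': '...'}
```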

  • Explainability: in addition to guaranteeing transparency, organisations must ensure that users and controllers understand how AI systems work, providing clear justifications for their behaviour in both technical and functional terms (see the attribution sketch after this list). In this regard, continuous training of the teams involved is particularly important to promote informed and ethical use.
  • Security: the regulation requires organisations to protect the integrity, confidentiality and availability of the data as well as the models and infrastructure that support them. To this end, measures aligned with the ISO 27001 standard are recommended, such as data encryption, multi-factor authentication (MFA), access restrictions and periodic assessments with human supervision (an encryption sketch follows this list).
  • Privacy: organisations must ensure the protection of personal data, and users' control over it, throughout the model's lifecycle, in compliance with the GDPR. Measures such as anonymising identifying data before models are trained support this principle (see the pseudonymisation sketch below).
  • Sustainability: considering that training a large language model (LLM) can consume up to 10 GWh, equivalent to the annual supply of over a thousand US homes, sustainability becomes a key focus of the legislation. Strategies such as algorithmic optimisation, the use of pre-trained models and efficient architectures can reduce the environmental impact.
  • Equity: in parallel, to minimise possible discriminatory bias and to ensure social welfare, the use of balanced data samples and the involvement of multidisciplinary teams in all phases of the project are recommended (a class-balancing sketch follows this list).
  • AI Governance: for the correct adoption of the regulations, organisations must establish internal processes, policies and structures that ensure the responsible operation of AI. This governance approach has two dimensions: institutional, covering organisational aspects such as assigning roles and creating governance structures, and operational, focusing on the technical aspects of implementation. Combining both perspectives can be an effective strategy for aligning compliance with business goals.
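
On explainability, one common (though not mandated) technique is per-prediction feature attribution. The sketch below uses the shap library on a scikit-learn model trained on a public dataset; any tool that yields clear, reviewable justifications would serve the same purpose.

```python
# Explainability: per-prediction feature attributions with the shap library.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes attributions exactly for tree ensembles; each value
# estimates how much a feature pushed a prediction away from the average output.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(data.data[:5])
print(f"attributions computed for {len(data.data[:5])} predictions")
```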
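
On security, the sketch below shows symmetric encryption of data at rest using the cryptography package; key management, MFA and access control would sit around this in a real deployment and are not shown.

```python
# Security: encrypting sensitive records at rest with the cryptography package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice, hold this in a secrets manager
cipher = Fernet(key)

record = b"customer_id=12345;notes=..."   # invented example data
token = cipher.encrypt(record)            # ciphertext is safe to persist
assert cipher.decrypt(token) == record    # readable only with the key
```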
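
On privacy, the sketch below replaces direct identifiers with salted hashes before training. Strictly, this is pseudonymisation rather than full anonymisation under the GDPR, which is why the salt must be stored apart from the data; the records are invented.

```python
# Privacy: pseudonymising direct identifiers before model training.
import hashlib
import os

SALT = os.urandom(16)  # secret; store separately from the training data

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + identifier.encode()).hexdigest()

rows = [{"email": "ana@example.com", "age": 34},
        {"email": "luis@example.com", "age": 51}]

training_rows = [{"uid": pseudonymise(r["email"]), "age": r["age"]} for r in rows]
print(training_rows)  # direct identifiers are no longer readable
```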
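
On equity, one way to approximate balanced samples at training time is to reweight classes, as in the scikit-learn sketch below. This addresses only statistical imbalance; detecting discriminatory bias against specific groups requires the broader review by multidisciplinary teams noted above.

```python
# Equity: countering an imbalanced sample with balanced class weights.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic, deliberately skewed data (9:1) standing in for a biased sample.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)

# 'balanced' reweights examples inversely to class frequency, so the
# minority class is not drowned out during training.
model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```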

Beyond avoiding possible sanctions, the EU AI Act offers companies opportunities to stand out in regulated sectors. According to Begoña Vega, Head of AI Models & Applications at Innova-tsn:

“Each AI system has its own particularities, so each case will need to be analysed carefully to classify it correctly. In this regard, a strategic partner can facilitate the process of assessing, categorising and deploying solutions that are aligned with the regulations and with the principles of responsible AI.”