Portugal is not alone. Only seven EU countries have appointed an AI regulator
Belgium, France, Germany, and Italy are among the countries that have also missed the deadline for appointing an AI Act supervisor. In contrast, Spain has created its own agency.
Portugal is behind in implementing the latest rules of the European regulation on artificial intelligence (AI), but it is not alone. Of the 27 Member States of the European Union (EU), only seven have so far designated their supervisory authorities, according to the official list published by the European Commission.
The countries that have already officially notified the EU executive of their AI regulator are Spain (which has even created a new authority for this purpose, AESIA – the Spanish Agency for the Supervision of Artificial Intelligence), Latvia, Slovenia, Cyprus, Lithuania, Luxembourg and Ireland. By contrast, France, Italy, Germany and Belgium are, like Portugal, among the countries that have yet to reveal who will regulate AI in their territories.
EU countries had until 2 August to notify the European Commission of the national entity chosen to act as market surveillance authority under the AI Act. However, as ECO reported last week, Portugal did not designate an entity within the deadline and therefore risks infringement proceedings from Brussels that could result in sanctions.
Body monitoring the regulation's implementation postpones meeting until October
The delay in implementing this important stage of the AI Act — the entry into force of rules for companies that provide models such as ChatGPT — has led the European advisory board that monitors the process, a body with representatives from all Member States, including Portugal, to postpone its fifth meeting, originally scheduled for 18 September, to 24 October, ECO has learned.
“The meeting of the AI Board — the European Artificial Intelligence Board — has been postponed until October, as most Member States are behind schedule in implementing the regulation”, said an official source from the Ministry of State Reform, which oversees AI in Portugal. ECO understands that one of the items on the agenda is precisely an assessment of how the various Member States are implementing the regulation.
The AI Board is a body created by the AI Act that monitors the implementation of the legislation and provides guidance, opinions and recommendations to help clarify and operationalise the rules. This body also contributes to the definition of innovation policies, international cooperation and AI development strategies, ensuring that the EU remains competitive and that AI is used safely, ethically and for the benefit of all European citizens.
ECO contacted an official source at the European Commission about this postponement on Tuesday and is awaiting a response.
Applications such as ChatGPT pose other challenges
Having officially entered into force on 1 August 2024, the AI Act is a pioneering and comprehensive European law regulating AI technologies across the EU. It establishes common rules for the development, marketing and use of AI systems in all Member States, following a risk-based approach: the greater the technology's potential impact on fundamental rights, safety or consumer confidence, the more stringent the obligations imposed on companies.
High-risk systems, such as those used in critical sectors — finance, health, transport or human resources — are required to comply with strict requirements for transparency, data governance and human oversight. Low-risk applications, such as spam filters, are subject to lighter rules. Non-compliance could result in heavy fines, similar to those imposed under the General Data Protection Regulation (GDPR).
One of the most sensitive points of the AI Act is the regulation of so-called general-purpose AI. These are models capable of performing multiple functions in different contexts, such as the large language models behind chatbots or image generation systems. One of the most prominent examples is ChatGPT, developed by the American company OpenAI.
As they cut across various sectors and are difficult to monitor, these models will have additional requirements in terms of technical documentation, risk mitigation, transparency, and cooperation with supervisory authorities. However, the law stipulates that general-purpose systems that were already on the market before 2 August 2025 will only have to comply with the rules from 2 August 2027 onwards.