
An “AI Act” policy assessment: what was adopted, what could have been done better, and what the rest of the world can learn.

It is well known that the final text of the AI Act was adopted after a long tug-of-war between the European Parliament and the Council of the EU, which culminated in December 2023 in three days of intensive talks. However, the regulation has not yet entered into force; it is expected to do so in May or June 2024. If, on the one hand, this comprehensive legal framework on AI systems positions the EU in the international arena as the first legislator on the matter, on the other hand, the rest of the world does not appear willing to remain a mere spectator. From Canada to China, and even the U.S., several governments and non-governmental actors, mainly tech businesses, are taking their first steps in regulating AI at the national or internal-business level.

Artificial intelligence has become an essential tool for carrying out our daily tasks. From writing essays to preparing school activities for children, or even obtaining immediate answers to any question that springs to mind without searching the web, the uses of AI reach seemingly unimaginable targets, such as biotech-related applications of AI systems. These AI systems and modules were created to answer people’s needs, questions and problems, and have been progressively developed and integrated into digital devices well beyond mobile phones and PCs (consider, for instance, AI applied to the IoT) in almost every sector.

Additionally, AI has become a quasi-mandatory investment for countries, leading to a “race to the top” in terms of legislation among several state actors around the world. It can be said that the use of AI already constitutes an integral part of economies and government institutions (e.g. internet security, military operations) and of the good functioning of societies, while also lifting tedious tasks from the lives of EU citizens.


Therefore, it was because of the centrality gained by AI systems that in 2021 the EU Commission decided to propose the first official draft of the AI Act. In this race to the top, after a series of ups and downs in the decision-making process, the EU managed to adopt this first-in-the-world all-encompassing regulation on AI. The Act was adopted on March 13th, 2024; however, it will become applicable, particularly in the case of high-risk AI systems, only 24 months after its entry into force, which, to reiterate, is expected by the end of June. A further step will come 36 months after entry into force, when the content of the whole Act will be fully applicable.

The AI Act is the first comprehensive law on AI to be adopted in Europe and in the world. The Act is part of the EU “Digital Strategy”, under which the EU aims to shape the digital transformation in Europe to benefit EU citizens, businesses and even the environmental targets of the EU green transition. Considering the digital transformation target and the EU Digital Decade Mission, which sets targets for Member State governments, businesses and other actors to reach by 2030, the EU aims to strengthen the digital sovereignty of EU institutions and Member State governments. This can be achieved by setting internal standards that make the EU “fit for the digital age”, including through the targets set by the AI Act. The Digital Decade Mission and the AI Act can be considered prime examples of specific technological normative standards that allow the EU to be normatively independent from foreign countries’ AI regulatory frameworks, raising the bar of its internal standards rather than adopting standards set by non-EU actors and states that are not in line with EU values. However, this may also mean that other non-EU states reject EU standards on the same premises. In addition, the AI Act aims to protect the fundamental rights of EU citizens, and even of businesses, in line with the objectives of the EU Coordinated Plan on AI of 2018.

Within this EU digital legislative framework, and following its political agreement by the Council and the European Parliament in December 2023, the AI Act takes an all-encompassing approach: it targets all types of AI regardless of their function or aim, preventing, in particular, unacceptable and high-risk AI from negatively affecting the lives of individuals, while giving businesses and governments the necessary support to develop competitive AI systems within an ethical, human-centred perspective. Several Members of the European Parliament (MEPs) repeatedly highlighted, during the legislative process that led to the final adopted text, the need for the AI Act to have a human-centred and democratic nature, in line, particularly, with the EU Declaration on Digital Rights and Principles of 2022. With the help of this Declaration, the European Parliament and the Council better assessed the effects of AI on people, focusing on a digital transformation that respects EU citizens’ human rights and the democratic principles of the Union, while leaving businesses and governments space to develop AI systems that are, among other things, safe, non-divisive and supportive of the EU’s sustainability commitments.

Therefore, the AI Act is the first legal act of the EU to define four categories of AI systems depending on the level of risk they pose, mainly, to people and businesses. It is clear that AI development in Europe can bring several benefits to EU citizens, such as faster, more accessible and better healthcare, improved public services and more advanced digital products. Additionally, AI systems allow businesses to be more competitive in the global market across several sectors, for instance businesses that implement AI in their product-making processes, or those using AI systems for security purposes. In parallel, AI systems can also be used by governments to ensure high-level security standards for national digital resources and government-related information or, more generally, to improve the efficiency of public transportation and waste management.

Despite these positive outcomes that can derive from the implementation of AI systems, the AI Act aims to reach these positive scenarios by classifying, thus identifying, and either blocking or limiting, the types of AI systems that are definitely or potentially harmful, non-transparent or unethical. The first category of the AI Act therefore contains those AI systems that pose an unacceptable risk. These AI systems will be banned within the Union’s borders six months after the regulation’s entry into force. A particularly impactful example of unacceptable-risk AI is social-scoring AI, similar to the systems already put into practice in China. Another important instance of unacceptable-risk AI is given by systems allowing emotion recognition in the workplace or in education institutions, for example those used to surveil employees’ levels of tiredness, attention and/or productivity.

As for high-risk AI systems, the second category, these constitute a risk to the lives of citizens. The category stretches from the medical field (surgical robots) to the economic and financial ones (AI used to assess credit scores, which can potentially deny citizens loans or certain financial operations). High-risk AI systems will not be banned outright; rather, they will be assessed before being sold or distributed on European markets.

As for the limited- and minimal-risk categories, these contain the most common AI systems we use today: chatbots, phone operators or call assistants, spam filters, and AI used to simulate human interactions or in video games. The AI Act also contains further provisions regulating all four major categories; however, explaining all these aspects would reiterate several other writings and reports and would fall outside the scope of this article.

Having briefly seen the major general advancements that the implementation of the AI Act will bring, it should be noted that the Act, as is usually the case with EU acts, is still “in the making”: it does not provide a specific regulatory framework for every specificity of the AI field. This is particularly true for the AI Act, since the fast-paced world of AI training and development generates new, unseen issues that are difficult to regulate in a timely way. Therefore, even if the AI Act generally aims to improve the use of AI systems while preventing unethical or non-transparent uses, the adopted text does not target certain issues that should be kept in mind before the Act’s entry into force, that is, before it becomes legally binding for Member States to implement and respect. The AI Act tries to answer some of the dozens of risks related to the use and misuse of AI; however, it is useful to keep in mind some of the open problems not fully targeted by the Act, which might prove difficult to manage and would most certainly require additional future acts to regulate them. Some of these issues are presented in the remainder of this article.

A first problem can be identified in the lack of specific provisions concerning both the underuse and the overuse of AI (e.g. abuses of AI to explain human interactions, societal issues, or certain legal dilemmas). Regarding underuse, it is interesting to highlight how companies that do not integrate AI systems into their products (cars, fridges, etc.) suffer losses in terms of revenue and a more general loss of competitiveness. Moreover, from an environmental point of view, the underuse of AI is directly related to an increase in greenhouse-gas emissions, since AI can reduce the energy dispersed both in producing goods and in their functioning. The absence of AI systems in the making and functioning of certain goods may therefore lead to less manageable and less efficient business models, also in light of the environmental targets set by the EU.

Another problem relating to the risks and malfunctioning of AI systems concerns who is to be held responsible when an AI system malfunctions. The AI Act does not clarify in its articles who is liable, whether the provider of the AI or the manufacturer of the good incorporating it, when, for instance, a self-driving car or any other AI-equipped good does not work properly and damages the property or health of individuals. Under the AI Act, the provider is generally the person held responsible for AI malfunctions; however, in the case of an AI system embedded in a tangible good, liability cannot be easily attributed.

Additionally, AI systems may also constitute a safety and security risk (high-risk AI), for instance when considering AI systems implemented in weaponry (ballistic missiles and drones), which can be hacked or misused. On this issue, the AI Act does not specify in any of its provisions how to proceed carefully with AI systems applied to the defence and national security fields, which largely fall outside its scope.

AI systems may also pose a threat to democracy, particularly at the communication level. Social media platforms use AI systems to offer EU citizens the most compatible type of content based on their likes, comments, and other indirect parameters such as so-called “online behaviour”. This practice allows the AI to serve only the content a person is likely to enjoy while using the platform, hindering the creation of a democratic, pluralistic and inclusive online space that allows for democratic debate and interaction. Moreover, it is widely accepted that AI is increasingly and maliciously used to create fake content, the so-called “deepfakes” of politicians and public figures, ultimately polarising and dividing people in the online world.

Even if the AI Act generally aims to protect the fundamental rights of EU citizens and businesses, an additional problem not entirely treated in its provisions is the online profiling and tracking of individuals. It is well known that AI systems expand their knowledge through the information provided by users. However, the AI Act does not seem to contain clear guidelines on how to deal with potentially unlawful tracking or profiling of individuals, especially when considering the enforcement measures outlined in the final text. This puzzle of information about EU citizens, used as raw data to train AI systems, enlarges the pool of information those systems draw on to carry out their tasks, and such training may easily lead to violations of individuals’ rights to privacy and data protection. This problem can manifest even with high-risk AI, for which a thorough conformity assessment is made, including of a system’s information-gathering ability, since there is no real certainty that, once on the market, a high-risk AI will not lead to a privacy violation.

It is particularly interesting that, even before a final draft was reached, the AI Act had already addressed the problem of biased AI models and training systems in its Article 10. Article 10 targets the issue of unethical and biased data on which high-risk AI systems can be trained by setting specific quality criteria for data. This prevents possible security issues for EU citizens as well as biases against specific racial and minority groups on which a high-risk AI may be used (e.g. to avoid the racial bias found in several experiments conducted across multiple US prisons, which negatively affected African Americans and Mexicans).

The AI Act, then, seems to provide more than clear definitions and risk-prevention-oriented requirements for AI systems; however, the practical solutions to the problems highlighted above are mainly left to industry and scientific research groups to resolve. To answer these specific problems, it is more than plausible that the AI Act’s legal framework will be expanded through other amending and delegated acts of the EU in the future. This is primarily because, even if the AI Act framework can be called “flexible” in that it allows new technologies and AI developments to circulate in the EU, the Act appears insufficient to tackle specific technical and social drawbacks created by AI systems in relation to societies and businesses.

Additionally, the AI Act can already be defined as a remarkable piece of EU secondary law as far as its cutting-edge substantive content is concerned. Despite this, the AI Act presents several implementation and enforcement issues, two of which are worth mentioning. First, the AI Act entails high compliance costs, especially in the case of high-risk AI systems. These costs are a burden for key actors such as the Commission, the EU AI Office (created for the implementation of the AI Act), the Member States and especially businesses, above all high-tech SMEs. The high compliance costs are, for instance, those indirectly imposed by Article 9 of the AI Act, which sets out the risk-management and compliance systems requiring businesses, or AI providers and operators, to periodically assess and update the management systems of high-risk AI. However straightforwardly necessary this provision may be, such maintenance services do not come free. Moreover, these costs must by law be borne throughout the whole life cycle of the high-risk AI system, resulting in a considerable economic burden for enterprises in the long term. This may stunt the birth and growth of high-tech SMEs willing to invest in sectors such as generative AI and AI training, leading to a general loss of competitiveness at the global level vis-à-vis non-EU states that are already unethically investing in high-risk AI systems.

In this regard, an additional economic issue to consider is the size of the fines established by the AI Act for operators or enterprises violating the requirements for high-risk AI. Even if the fines established in the adopted text are conceptually incontestable, they reach figures (up to EUR 35 million or 7% of worldwide annual turnover for the most serious violations) that might seem too punitive and that create strong disincentives for EU and foreign tech businesses to invest in the development of new AI models and systems in Europe. Therefore, even if this reasoning applies to high-risk AI, it must also be underlined that, while the rest of the world seems to grow accustomed to less democratic AI, a clash of intents between the EU and the rest of the world on how to develop and accept dangerous AI systems is already emerging.

Second, the AI Act appears difficult to enforce in some parts. Beyond the abovementioned substantive issues, which are not fully answered in the final text, the most pressing problem Member States and the Commission will face is the enforcement of all the Act’s provisions. For many scholars and experts, the AI Act presents several enforcement issues deriving from what critics describe as its broadly “notionistic” and abstract, rather than practical, text. In terms of enforcement, Member States have a pivotal role: they will have to designate the competent authorities that oversee correct implementation, enforce the Act and, when needed, notify the Commission of the necessity to enforce its content. These national authorities will also sit on the European Artificial Intelligence Board, which will gather representatives of supranational AI-related and national authorities, the European Data Protection Supervisor, and the Commission. The role of the AI Board is to facilitate a smooth, effective and harmonised implementation of the new AI Regulation. Additionally, a European AI Office will be created by the Commission to issue non-binding recommendations and opinions on GPAI (General-Purpose AI) models. The AI Office will also coordinate the roles of EU institutions, national actors and other non-state and non-EU actors interested in the development, research and policy of AI systems, linking governmental decision-makers, industry representatives, universities and research groups, without excluding members of society. All these actors are meant to facilitate knowledge transfer in the field of AI in order to create the most advanced AI policies.

It can be seen how the enforcement of the Act’s provisions will have to be managed on multiple levels from a vertical policy-application perspective. Perfect enforcement of the AI Act might be difficult to achieve after June 2024, and during the time frame granted to Member States to implement the regulation, precisely because of these multiple actors with sometimes overlapping competencies. In addition, the AI Office, the other EU and Member State actors, and the designated national implementation and enforcement authorities have limited resources, and it is likely that, because of pressing war-related issues, the EU will hardly increase their funding.

In light of these issues, it can be said that the text of the AI Act could have been improved, particularly by better framing the substantive aspects of its implementation and enforcement. For instance, no provision in the AI Act establishes common grounds between EU and national supervisory authorities (NSAs) for ensuring that new AI systems can be unequivocally placed in one of the four categories established by the Act. Consequently, for both existing and future models, national AI authorities and EU AI authorities (i.e. the AI Office or the Commission) risk placing the same AI system in different categories, raising problems as to how to limit it, or enforce respect for fundamental rights, depending on the category assigned. Additionally, the AI Act is not a sector-specific legal text: it takes an all-embracing approach to AI, which means the EU will still have to regulate the specific sectors linked to the use of high-risk AI, such as healthcare, finance and mass-media communication, to raise security and accountability parameters in line with EU values, especially democracy and fundamental rights. Subsequently, codes of conduct and certifications through which businesses demonstrate their competence in complying with and applying the AI Act are also needed, as shown by several Commission reports. These can be understood as accountability tools and even as “internal self-enforcement” tools for businesses.


Finally, given the current state of non-EU countries’ AI legislation, it must be noted that the AI Act can provide guidance to foreign state actors, and even international organisations, on the characteristics a piece of AI legislation should have and on the problems of drafting and implementing such texts.

First of all, non-EU actors must bear in mind the need for a human-centred approach in AI legal texts, which must give priority to the safety and human-rights protection of individuals and to the security of businesses and institutions.

An AI legislative text must also strike a balance between the fundamental rights and economic interests of people and the market stability of tech businesses investing mainly in AI LLMs (Large Language Models) and AI systems, thus keeping internal market competition in this sector high and thriving.


Even if the implementation of this Act presents some added specificities due to the EU’s institutional and Member State frameworks, another important lesson for non-EU states is to avoid the fragmentation of institutional surveillance and of the responsibility to enforce. A multi-level implementation and enforcement framework stretching across multiple actors may lead to the risk of overlapping duties and powers discussed earlier in this article, such as the lack of coordinated action and risk attribution for AI systems between Member State authorities and the AI Office. This risk of complex multi-level implementation is also present in states with vast territories and complex government structures, such as the US and China, where different provinces or states and large populations may require sub-state management and legal regulatory frameworks for AI. This could eventually translate into a lack of coordinated and homogeneous regulation of human-centred AI, caused by difficult enforcement and possibly conflicting AI policies.

To substantiate these claims: US states, for instance, have issued a copious number of AI-related bills. However, these bills have been concerned with either data protection related to AI or accountability for AI risks; an all-encompassing text like the AI Act has not yet been drafted. Consequently, considering the several bills on AI yet to pass, the US may well learn from the AI Act and its broad human-centred approach. In addition, the AI Act may help the US prevent the fragmentation of AI legislation into different state-level laws, prompting instead a nationwide text. This would probably increase the chances of avoiding overlapping or contrasting regulation and enforcement between states, the latter being an issue also between EU institutions and Member State AI agencies.


Similarly, AI regulation in China may face the same issue as in the US and the EU, though rather in light of the concentration of businesses, which means that certain provinces may need more or less stringent AI regulations depending also on future developments of technologies and AI systems and models. Additionally, China seems interested in specific aspects of regulating AI, particularly AI governance, data protection, algorithmic transparency and AI-monitoring mechanisms. It therefore seems unlikely that China will adopt an all-encompassing AI regulation like the AI Act in the short term. Nevertheless, China may take inspiration from the AI Act regarding the AI-related fundamental-rights protection principles and provisions contained in this EU regulation. Indeed, a key aspect of all future AI legal texts is going to be their human-centric perspective, which should underpin the enforcement provisions and the conditionality requirements for launching an AI system on the European and global markets. In this regard, China has so far advanced specific laws regulating AI systems and models with specific aims grounded in its political agenda or major national business interests. In terms of respect for fundamental rights, and given the AI systems created and used in China in the past, the least that can be said is that the AI Act can positively influence the fragmented and overly specific Chinese national AI laws. Compared with more fragmented and uneven non-EU legislation, the AI Act, regardless of some substantive, implementation and enforcement limits, which, to reiterate, stem from the complex structure and coordination of EU institutions and other national actors, manages to distinctly balance respect for fundamental rights, EU values and technological development.


In conclusion, it can be said that the AI Act, despite its partial incompleteness, and given its general scope of protection for persons, businesses and national governments from dangerous AI systems, can indeed provide non-EU states with a more than solid base for legislating outside the EU. However, only after the entry into force of the AI Act will both EU and non-EU states truly see the grey areas it leaves, particularly concerning recently developed and future AI models. As in the case of Chinese national legislation on the protection of privacy, particularly of individuals’ private data, which was deeply influenced by the EU GDPR, it is likely that the US and Chinese governments, both leaders in the development of AI systems and models, will once again be influenced by an act of the Union.

Non-EU governments might also consider adopting a more human-centred approach in their national AI legislation, given the widely known privacy and data-protection issues present in both the US and China. With a more human-centred and broader approach to the regulation of AI systems, one that assesses and describes the levels of risk of different AI systems as the AI Act does, non-EU states might finally ensure the safety and coexistence of their citizens and AI, with evident guarantees for their interests and wellbeing, preventing misuse of dangerous AI while also allowing its development within their territory.




