Brexit Institute News

Artificial Intelligence: EU and National Strategies

Dr. Edoardo Celeste (Dublin City University)

In less than two years, we have witnessed the first mass commercialisation of artificial intelligence (AI) systems. Generative AI has reached our societies thanks to multiple companies making their AI interfaces freely available online. Despite not being a ‘new’ technology, AI has made the headlines, with the media simultaneously celebrating its positive potential – it could easily have written this entire report in the blink of an eye – and warning against its threats to our democracies and societies – the heightened risk of fake news, disinformation and the mass loss of knowledge-based jobs being only a few examples.

The social impact of AI has led to an intensification of policy and regulatory activity around these technologies. On 13 March 2024, the EU completed the legislative process leading to the adoption of the Artificial Intelligence (AI) Act, a much-awaited regulation originally proposed by the European Commission in 2021 and strategically enacted before the end of the then-current European legislature.

Despite its name, the AI Act is a regulation, and therefore a legislative instrument directly applicable in EU member states. It does not enshrine specific rights for users but rather imposes a series of obligations on AI developers and providers. The Act adopts a risk-based approach, distinguishing between AI systems that are prohibited outright because they generate ‘unacceptable’ risks, and those producing ‘high’, ‘limited’ or ‘minimal’ risks, which are permitted subject to correspondingly stricter safeguards.

The EU is the first organisation in the world to adopt a comprehensive piece of legislation on AI. This move can be considered strategic from a standard-setting perspective: although the EU is not a leading developer of AI systems, the economic weight of its market gives it considerable regulatory leverage.

In the digital field, the EU’s strategy has been to regulate first and to apply its rules extraterritorially to all providers of digital products and services operating in the EU market, regardless of their country of establishment. This approach has not only allowed the EU to prevent circumvention by companies incorporated in non-EU countries while trading in the EU, but has also progressively consolidated the EU’s regulatory influence at global level – what scholars have termed the ‘Brussels effect’.

Despite the leading role exercised by the EU in this field, member states too have actively worked to develop and implement national AI strategies. This phenomenon can be explained by the critical role that AI technologies play at national level. AI systems are key to the future growth of national industries, have the potential to boost the efficiency of public administrations, and can be a key driver of climate change mitigation strategies; yet their misuse can affect core aspects of democratic life, such as elections. Acquiring a sufficient level of digital sovereignty therefore emerges as a priority. The economic prosperity, smooth functioning, environmental preservation and democratic life of a country depend on the security and trustworthiness of AI technologies that are often developed by non-EU companies.

The US and China represent the two global technopoles in this sector, and both have adopted an approach to innovation radically different from the one promoted at EU level. The US has focused on fostering its national economy and innovation; China has privileged national security. Conversely, in the EU, both the Union and its member states are trying to promote a human-centric approach that treats the protection of fundamental rights as its priority while at the same time promoting responsible innovation and economic growth.

Dr. Edoardo Celeste is Associate Professor of Law, Technology and Innovation and Chair of the Erasmus Mundus Master in Law, Data and AI, School of Law and Government, Dublin City University.

This excerpt is from the Brexit Institute’s 2024 annual report. Read the full report here.

The views expressed in this blog post are the position of the author and not necessarily those of the Brexit Institute blog.