What does the EU Artificial Intelligence Act mean for the development of AI?
Artificial Intelligence (AI) has captured headlines in recent months due to its seemingly limitless potential.
Whether it's revolutionising healthcare, optimising workflows or providing personalised learning experiences, the possible applications of AI seem endless. However, it is also becoming clear that AI technology is developing much faster than governments can legislate for it.
The European Union (EU) has set out to become the first major regulatory authority in the world to draw up laws governing how AI can be used. The proposed EU AI Act will therefore have a significant effect on the way AI is used by people and businesses around the world.
What is the EU AI Act and why is regulation needed?
The EU AI Act is a proposed framework that will regulate the development and use of AI. The proposed legislation will classify AI according to four levels of risk, ranging from ‘minimal’ to ‘unacceptable’.
The legislation will apply to all types of artificial intelligence and all use cases, with the exception of military applications.
Systems classified as ‘high-risk’ will still be permitted, but will be subject to tighter rules and oversight. This category includes technologies already covered by EU product safety regulations, such as medical devices and autonomous vehicles. AI systems that process biometric data or are used in critical infrastructure must also be registered in an EU database.
The proposed AI law primarily aims to strengthen rules around data quality, transparency, human oversight and accountability. The law is expected to enter into force in early 2024.
What does it mean for other regulators like the US and UK?
Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act has the potential to become a global benchmarking standard. The so-called “Brussels effect” could see regulatory bodies in the UK and US being influenced by the EU’s rules and replicate them in their jurisdictions.
Even without formal adoption elsewhere, the size of the EU’s market could nudge companies towards adhering to its rules on a global scale.
In addition, it is hoped closer alignment of AI regulation between different jurisdictions will facilitate bilateral trade, improved cooperation and better regulatory oversight.
There has been discussion of a voluntary code of conduct between the EU and the US to bridge the divides until formal legislation is adopted.
The UK’s current “pro-innovation” approach to AI regulation is less restrictive than the EU’s. In a white paper released earlier this year, the UK proposed a set of high-level principles to guide existing regulators rather than creating a new regulatory framework specifically for AI.
What does it mean for the industry at large?
AI companies themselves have called for increased government oversight of the development of AI systems. For example, Sam Altman, chief executive of OpenAI (the company behind ChatGPT), has often called for coordinated regulation to minimize the dangers these systems can pose.
Despite this, the EU’s proposed AI Act has attracted vocal criticism.
In an open letter to the European Commission, 160 chief executives from companies such as Siemens, Renault and Airbus have expressed their concern about the impact the act would have on Europe’s competitiveness. These European organisations fear that excessive regulation could hamper Europe’s position as a leader in AI technology and give other regions a competitive edge.
Nonetheless, the EU AI Act represents an important step in regulating the development of AI. By classifying AI according to levels of risk and focusing on data quality, transparency, and accountability, it aims to strike a balance between innovation and potential harm.
Its influence could set a global benchmark, but industry views diverge, with some in favour of coordinated regulation and others fearing it could hamper Europe’s competitive edge.
Striking the right balance will be essential to promote responsible and ethical advances in AI.