The EU has taken the first step in regulating AI, but can its approach become the blueprint for a fragmented global landscape?

In March 2024, the EU adopted the world's first comprehensive regulation on artificial intelligence: the Artificial Intelligence Act. The law aims not necessarily to tame, but to control, a wildly developing technology.

The idea of drafting an AI law was born when the European Commission released its 2020 white paper, "On Artificial Intelligence - A European Approach to Excellence and Trust". The paper sparked the initial debates and outlined the EU's vision for regulating AI.

The draft AI law was proposed by the European Commission in April 2021. Lawmakers reached agreement on the text in December 2023, and the European Parliament formally adopted the act on 13 March 2024.

The law will gradually be implemented over the next two years.

Features of the AI Act:

The central theme of the law is its risk-based approach. AI applications are sorted into one of four categories: unacceptable risk, high risk, limited risk, and minimal risk.

The EU is particularly concerned with AI systems that could cause widespread societal harm or undermine citizens' fundamental rights, which is why social scoring systems are explicitly banned. Social scoring systems profile citizens based on their social behaviors (internet activity tracking, social connection monitoring) and personal traits (biometrics, facial recognition, credit score, financial history). These systems, as seen in the case of China, assign a social score to individuals and businesses to determine their trustworthiness. Such systems are not only detrimental to individual privacy but also corrosive to societal structures.

AI applications designed to exploit vulnerabilities and manipulate behavior, particularly among children, also fall under unacceptable risk. This prohibits, for instance, AI-powered toys that employ manipulative tactics to lead children into gaming addiction.

Another critical feature of the law is its guidelines for generative AI technologies such as ChatGPT and Gemini. Although these do not fall under the unacceptable risk category, their providers are required to disclose the training datasets employed during development.

To promote transparency, the law also mandates disclosure when content is AI-generated. This covers written content, images, videos, and music generated or edited by AI.

All businesses in the high-risk category are required to conduct thorough risk assessments, train their AI on high-quality data, and prioritize transparency in how their systems function.

Penalties for violations range from 1.5% to 7% of a business's global turnover, depending on the severity of the infringement.

Timeline of the AI Act's Implementation:

Products and services in the unacceptable risk category will be banned within six months; high-risk businesses face scrutiny after nine months; and generative AI businesses must comply with the law's requirements within 12 months.

Global Landscape of AI Regulation and the AI Act's Role in It:

Consider climate change: the world has come together and agreed to reach net-zero emissions by 2050 and to cap the global temperature rise at 1.5 degrees Celsius. Similar treaties exist for maintaining global peace and protecting the oceans, setting standards and obligations for member countries. For artificial intelligence, however, no such universally accepted framework exists; until now, no nation had put even a comprehensive national AI law in place.

Leading powers such as the United States have adopted a segmented approach to AI regulation: the Federal Trade Commission (FTC) covers consumer protection aspects, while the Food and Drug Administration (FDA) regulates AI-powered drug discovery and device development.

In China, the prime focus is domestic AI development. This is evident in its government funding programs, its push for homegrown technologies such as chip manufacturing, and its rapid adoption of AI in strategic sectors, from smart city initiatives to facial recognition for security applications.

India and Canada are also actively working on national AI regulatory frameworks. At bottom, though, the bottleneck lies in confronting the unknown: countries are still grappling with the risks posed by rapidly developing AI technologies, which is why the EU's AI Act is a first step in the right direction.

As discussed above, the law categorizes AI applications not by industry, geographic region, or company size, but by the risk they pose, which is a fitting way to address threats in such a fast-evolving domain.

However, the drafted text still has loopholes. The act focuses on specific AI applications, so highly adaptable "general purpose" AI with broad potential uses can fall under weaker regulations if its specific use case at the time of development isn't flagged as high-risk. Moreover, the stringent risk assessment requirements may leave smaller companies struggling to muster the resources and technical expertise that compliance demands.


The bottom line: the AI regulation landscape remains fragmented and segmented.

While the AI Act is a first step in the right direction, we still have a long way to go. As AI continues its rapid development, we must strive to regulate it ethically and responsibly. Whether the EU's approach will serve as a blueprint for a unified global framework, or entirely new strategies will be needed to navigate the complexities of AI on a global scale, remains to be seen.


Copyright © 2024