Explainer: What is the European Union AI Act?

The AI Act, in the works for over two years, is expected to be a landmark piece of EU legislation governing the use of artificial intelligence in Europe.

Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable. Governments and companies using these tools will have different obligations, depending on the risk level.

The Act is expansive and will govern anyone who provides a product or a service that uses AI. It will cover systems that can generate output such as content, predictions, recommendations, or decisions influencing the environments they interact with.

Apart from uses of AI by companies, it will also cover AI used in the public sector and by law enforcement. It will work in tandem with other laws such as the General Data Protection Regulation (GDPR).

Those using AI systems that interact with humans, are used for surveillance purposes, or can be used to generate “deepfake” content will face strict transparency obligations.

A number of AI tools may be considered high risk, such as those used in critical infrastructure, law enforcement, or education. They are one level below “unacceptable,” and therefore are not banned outright.

Instead, those using high-risk AI systems will likely be obliged to complete rigorous risk assessments, log their activities, and make data available to authorities for scrutiny. That is likely to increase compliance costs for companies.

The “high risk” categories, where AI use will be strictly controlled, include areas such as law enforcement, migration, critical infrastructure, product safety and the administration of justice.

A GPAIS (General Purpose AI System) is a category proposed by lawmakers to account for AI tools with more than one application, such as generative AI models like ChatGPT.

Lawmakers are currently debating whether all forms of GPAIS will be designated high risk, and what that would mean for technology companies looking to adopt AI into their products. The draft does not clarify what obligations AI system manufacturers would be subject to.

The proposals say those found in breach of the AI Act face fines of up to 30 million euros or 6% of global annual turnover, whichever is higher.

For a company like Microsoft, which is backing ChatGPT creator OpenAI, it could mean a fine of over $10 billion if it were found to have violated the rules.
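The arithmetic behind that figure can be sketched as follows. This is an illustrative calculation, not text from the Act; the 200-billion turnover figure is an assumed round number in the ballpark of Microsoft's annual revenue, used only to show how the 6% cap can exceed $10 billion.

```python
def proposed_max_fine(global_annual_turnover: float) -> float:
    """Maximum penalty under the draft AI Act proposal:
    the higher of a fixed 30 million or 6% of global annual turnover
    (both in the same currency units)."""
    return max(30_000_000, 0.06 * global_annual_turnover)

# A small firm with 100 million in turnover hits the fixed floor,
# since 6% of 100 million is only 6 million:
small = proposed_max_fine(100_000_000)

# A company with roughly 200 billion in annual turnover (an assumed,
# illustrative figure) faces a cap of 6% of that, i.e. 12 billion:
large = proposed_max_fine(200_000_000_000)
```

For large firms, the percentage-based ceiling dominates the fixed floor, which is why the headline exposure for the biggest technology companies runs into the billions.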

While the industry expects the Act to be passed this year, there is no concrete deadline. The Act is being discussed by parliamentarians, and after they reach common ground, there will be a trilogue between representatives of the European Parliament, the Council of the European Union and the European Commission.

After the terms are finalised, there will be a grace period of around two years to allow affected parties to comply with the regulations. Reuters


Copyright © 2024 Communications Today
