

Here’s how different governments are working to regulate AI tools

Italy’s data protection watchdog is ready to allow the artificial intelligence (AI) chatbot ChatGPT to resume operating in the country on April 30 if its maker OpenAI takes “useful steps” to address concerns over privacy and data protection, the agency’s chief said on Tuesday.

Rapid advances in AI such as Microsoft-backed OpenAI’s ChatGPT are complicating governments’ efforts to agree on laws governing the use of the technology.

Here are the latest steps national and international governing bodies are taking to regulate AI tools:

Australia’s government has requested advice on how to respond to AI from the country’s main science advisory body and is considering next steps, a spokesperson for the industry and science minister said on April 12.

Britain said in March it planned to split responsibility for governing AI between its regulators for human rights, health and safety, and competition, rather than creating a new body.

China’s cyberspace regulator on April 11 unveiled draft measures to manage generative AI services, saying it wanted firms to submit security assessments to authorities before they launch offerings to the public.

China’s capital Beijing will support leading enterprises in building AI models that can challenge ChatGPT, its economy and information technology bureau said in February.

Twelve EU lawmakers urged world leaders on April 17 to hold a summit to find ways to control the development of advanced AI systems, saying they were developing faster than expected.

The European Data Protection Board, which unites Europe’s national privacy watchdogs, on April 13 said it had set up a task force on ChatGPT, a potentially important first step towards a common policy on setting privacy rules on AI.

EU lawmakers are also discussing the introduction of the European Union’s AI Act, which would govern anyone who provides a product or a service that uses AI. Lawmakers have proposed classifying different AI tools according to their perceived level of risk, from low to unacceptable.

France’s privacy watchdog CNIL said on April 11 it was investigating several complaints about ChatGPT after the chatbot was temporarily banned in Italy over a suspected breach of privacy rules.

France’s National Assembly approved in March the use of AI video surveillance during the 2024 Paris Olympics, despite warnings from civil rights groups that the technology posed a threat to civil liberties.

Italy’s data protection watchdog is ready to reactivate ChatGPT on April 30 if OpenAI takes “useful steps” to address the agency’s concerns, its chief said in an interview on April 18. On April 12, it had set an end-April deadline for OpenAI to meet its demands on data protection and privacy.

Italy imposed a temporary ban on ChatGPT on March 31 after the authority raised concerns over possible privacy violations and for failing to verify that users were aged 13 or above, as it had requested.

Japan’s digital transformation minister, Taro Kono, said on April 10 he wanted the upcoming G7 digital ministers’ meeting, set for April 29-30, to discuss AI technologies including ChatGPT and issue a unified G7 message.

Spain’s data protection agency said on April 13 it was launching a preliminary investigation into potential data breaches by ChatGPT. It has also asked the EU’s privacy watchdog to evaluate privacy concerns surrounding ChatGPT, the agency told Reuters on April 11.

The Biden administration said on April 11 it was seeking public comments on potential accountability measures for AI systems. President Joe Biden had earlier told science and technology advisers that AI could help address disease and climate change, but it was also important to address potential risks to society, national security and the economy. Reuters


Copyright © 2024 Communications Today
