Bringing global regulation to control the proliferation of powerful artificial intelligence models, such as ChatGPT, was one of the key discussion points between OpenAI CEO Sam Altman and Prime Minister Narendra Modi when they met on Thursday morning.
“We talked about the opportunities in front of the country, what the country should do, and also the need to think about global regulation to prevent some of the downsides from happening. It was a great hour,” said Altman during a gathering at the Indraprastha Institute of Information Technology, Delhi (IIITD).
This is the latest of Altman’s appeals to governments around the world to start setting up guardrails for AI. In a Congressional hearing last month, he told the US Senate, “Regulate us,” after admitting that unbridled AI could be catastrophic for the world at large.
In May, the OpenAI blog argued for stronger regulations to govern AI, saying the field will eventually need something like the International Atomic Energy Agency (IAEA), which promotes the peaceful use of nuclear energy.
Altman, who after his Europe trip is now on a whirlwind tour of Israel, Jordan, Qatar, the UAE, India, and South Korea, said, “Pleasantly surprised by the enthusiasm of almost all of the world leaders in thinking about this (a regulatory body like the IAEA). I am optimistic we can get something done.”
India, which is slated to release the draft of the Digital India Bill in June, has its own views on AI, as indicated by Rajeev Chandrasekhar, the Union Minister for Electronics & Information Technology (MeitY).
“We will not ban anything in the innovation space unless it is linked with user harm. We want to lead the charge in Web 3.0 and in AI—with guardrails defined. I am not a big fan of regulators in the sense that it shouldn’t create another layer of compliance,” the minister had said last month.
“Even as governments grapple with the complexity of regulating AI, shouldn’t the builders of powerful models, like ChatGPT, think of self-regulation?” an audience member at IIITD asked.
In response, Altman said, “We do self-regulate. We spent almost eight months on GPT-4, making sure that it was safe for release. Self-regulation is important and it is something we want to offer, but we won’t be the only players here. I don’t think the world should be left entirely in the hands of the companies, given what we think is the power of this technology.” CNBCTV18