Regulation of the digital economy has been one of the most hotly debated areas in recent years, covering issues such as monitoring online content, regulating social media platforms, and protecting user data. Now, policymakers are grappling with new risks emanating from the development of Artificial Intelligence across different applications. businessline spoke with Nick Clegg, the former Deputy Prime Minister of the UK and currently President for Global Affairs at Meta, to understand how he is thinking about some of the vexed problems facing lawmakers in India and other countries.
Policymaking has not kept pace with tech innovation so far. Do you see the gap reducing at all?
Over the last decade and a half, policymakers have been very far behind technology. I think there’s now a concerted attempt to avoid that. But to do so, you also need to make sure that the regulations that do get passed are sensible, proportionate, and elastic enough to keep up with technology. This is always a difficult balance: if you over-regulate technology in a very micromanaging way, those are exactly the kinds of laws that become outdated most quickly, because they only fit a very specific technology or a specific period of time. So I think principles-based legislation tends to be the kind of regulation that stands the test of time.
But if every country comes up with its own regulation, is there a risk of the Internet getting fragmented? We are already seeing companies offering differentiated features in different regions based on local regulations.
The Internet is already quite fractured globally. You have a Chinese internet, and increasingly a Russian internet. Fragmentation is a risk we will have to work to avoid, but it is a natural one. For example, AI is a technology that is bigger than any company or any country, and yet India can only legislate for itself, right? So you have this mismatch between the level at which the technology operates and the geographical scope at which legislation operates. You definitely have that tension, and sometimes it leads to a very real risk of fragmentation. But we are seeing some encouraging signs. For example, there was a debate around data localisation earlier, but I believe we have moved on from that.
So how do you react when the TRAI issues a consultation paper on regulating the OTT space to bring it on par with telecom operators? TRAI is also looking at a framework for banning specific apps during times of crisis. Your thoughts?
I have a lot of question marks about the wisdom of disabling apps that are used by hundreds of millions of people in India from one moment to the next. I’m not totally clear how you could do that without affecting lots of other apps, because you’d have to do it at the cloud level. But more than that, if the idea is that you disable an app like WhatsApp during a time of heightened tension, do remember you’re also disabling the app for lots of people who need it more than anything else at that time: to reach out to loved ones or to communicate with people when they feel in danger. You don’t put out a fire by removing the chosen means by which human beings seek to communicate with each other.
Of course, we have a responsibility to act when we see that our apps are being abused. We are already disabling millions of accounts that misuse the platform, and I’m sure there’s more we can do. But passing legislation to disable the app altogether seems to me like taking a sledgehammer to crack a nut. As for the idea of equating OTTs with telcos, they’re just not the same; you really are mixing apples and pears if you do that. This debate, by the way, happens in lots of different places, but time and time again it has been concluded that if you paint apps like WhatsApp with the same brush as a telco, you are not comparing like with like. So I would counsel considerable caution on both counts.
What are your views on the proposal to constitute a fact-checking unit with sweeping powers to determine what is fake, false, or misleading with respect to any business of the Central Government?
I obviously can’t comment on the matter since it is sub judice. But we have in India the largest number of fact-checkers, and we’re very proud of the work we have done with these independent fact-checkers, who identify misinformation and flag it for us on our platforms. For us, anything that enhances the transparency and independence of those fact-checkers is a good thing. We’re not at all opposed to the idea of some self-regulatory body; I think a self-regulatory body in this space is always going to be better than one prescribed in law. Meanwhile, we will continue our work with our fact-checkers.
There’s a lot of discussion around regulating AI. Many want AI developments to be paused until guardrails are put in place, while others are batting for industry-wide collaboration. What are your thoughts on this issue?
Let’s first identify what harms and risks we’re trying to tackle today, and then decide whether existing law is sufficient to deal with those problems. What I would avoid is trying to regulate the technology itself. You can regulate the uses to which AI is put, and then you need to decide what layers of risk we have to deal with now. You need some voluntary commitments from the industry whilst legislators do their own work, as they are in the EU and elsewhere.
And then there’s a whole different class of risks: the possible risks that might ensue from AI models that currently don’t exist. This is the whole futuristic debate about whether we are all going to be turned into paperclips by next Tuesday because of some demonic AI that develops autonomy of its own, to think for itself and plan for itself, independent of human intervention. I don’t want to sound facetious, but it doesn’t exist. I think it’s extremely difficult to have a sensible debate about risks related to a technology that doesn’t yet exist. There’s a huge leap from large language models, which are basically textual guessing machines, to superintelligence. Current models have no understanding of the world, they can’t intuit anything, and they’ve got no capacity to develop their own motivations. So I worry sometimes that the debate in recent months has been so focused on a future that doesn’t exist that we have sucked up a lot of energy, when in fact what we should do is break this down and ask: what are the things we have to deal with now in response to these large language models?

The Hindu BusinessLine