
Passing the Digital India Act: a challenge, yet a necessity

In November last year, multinational corporations Eli Lilly and Lockheed Martin lost billions of dollars in market value when fake handles managed to get “verified” and posted false announcements in their names. This was the result of a hastily introduced verification feature by the Elon Musk-led Twitter, under which anyone willing to pay US$8 could get the much-coveted blue check mark.

While Twitter withdrew the feature and suspended the fake handles, the damage was done. Closer home, in 2013, fake videos from Pakistan were uploaded on YouTube to incite riots in Muzaffarnagar, Uttar Pradesh. By the time Indian police officials managed to block them, hundreds had been killed, injured, or rendered homeless as the violence spread.

The world is now grappling with an exponential growth in cybercrime, Child Sexual Abuse Material (CSAM), misinformation, content promoting self-harm and suicide, and a host of other abuses that were unthinkable before the internet arrived.

For policymakers, the internet is an opportunity as well as a challenge.

As India moves towards passing the ambitious Digital India Act (DIA), it will have to strike a delicate balance: protecting its users without compromising fundamental rights such as free speech, liberty, and privacy.

India has already seen an exponential rise in cybercrime. The National Crime Records Bureau (NCRB) recorded 305 cases of cybercrime against children in 2019; that figure rose to 1,102 in 2020. Over the same period, NCRB data show that cybercrime cases against women went up from 8,379 to 10,405. These figures cover only registered FIRs, while millions more complaints are closed at the complaint stage.

If the DIA intends to effectively tackle the growth of user harms, it should start by categorising the various kinds of harms that exist on the internet. The current IT Act covers only a narrow set: content that threatens the sovereignty of India and sexually explicit material, with special emphasis on children.

Categorising Online Harms
The DIA can expand these categories and bring in self-harm, misinformation and disinformation, privacy harms, and harms caused by addictive technology and algorithmic bias. Algorithmic bias refers to systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. It also occurs when an algorithm produces results that are systemically prejudiced because of erroneous assumptions in the machine learning process. The DIA needs to build more expansive categories of online harms and construct safeguards against them.
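To make the idea concrete, here is a minimal, hypothetical sketch in Python (the groups, scores, and decision rule are all invented for illustration) of how a model “trained” on skewed historical decisions reproduces that skew as a systematic, repeatable error:

```python
import random

random.seed(42)

# Hypothetical historical decisions: (group, score, approved).
# Past reviewers approved Group A applicants at any score, but held
# Group B applicants to a score above 75.
history = [("A", random.uniform(40, 100), True) for _ in range(50)]
history += [("B", s, s > 75) for s in (random.uniform(40, 100) for _ in range(50))]

def learned_threshold(group: str) -> float:
    """'Train' by taking the lowest score ever approved for this group."""
    approved = [score for g, score, ok in history if g == group and ok]
    return min(approved)

thresholds = {g: learned_threshold(g) for g in ("A", "B")}
print(thresholds)  # Group B's learned bar is systematically higher

# Two equally qualified applicants (score 70) now get different outcomes;
# the unfairness is inherited from the training data, not the applicants.
for group in ("A", "B"):
    print(group, "approved:", 70 >= thresholds[group])
```

The point of the sketch is that nothing in the algorithm itself is malicious: the erroneous assumption that past approvals were fair is enough to produce systemically prejudiced results.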

The landmark Puttaswamy judgment of August 2017 by a nine-judge Constitutional bench recognised the right to privacy as a fundamental right. Therefore, measures such as privacy by design, data minimisation, and end-to-end encryption need to be factored in as safeguards against online harms, in conjunction with the proposed Digital Personal Data Protection (DPDP) Bill that is likely to be presented in Parliament during the monsoon session.
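As one illustration of what “data minimisation” means in practice, consider this minimal Python sketch (the service, its declared purpose, and all field names are hypothetical): a platform retains only the attributes required for its stated purpose and discards the rest before storage.

```python
# Hypothetical data minimisation: keep only the fields a service actually
# needs for its declared purpose ("delivery"); drop everything else.

REQUIRED_FOR_DELIVERY = {"name", "address", "pincode"}  # assumed purpose

def minimise(record: dict) -> dict:
    """Retain only the fields needed for the declared purpose."""
    return {k: v for k, v in record.items() if k in REQUIRED_FOR_DELIVERY}

signup = {
    "name": "A. Kumar",
    "address": "12 MG Road, Bengaluru",
    "pincode": "560001",
    "date_of_birth": "1990-01-01",  # not needed for delivery -> discarded
    "device_id": "abc-123",         # not needed for delivery -> discarded
}

print(minimise(signup))
# {'name': 'A. Kumar', 'address': '12 MG Road, Bengaluru', 'pincode': '560001'}
```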

Categorisation also provides a common language and framework for international collaboration. When different countries and organisations categorise online harms in a similar manner, it becomes easier to exchange information, share best practices, and collaborate on initiatives to combat these harms globally.

Protecting Internet Users
Many jurisdictions require platforms to take steps against illegal content, particularly material promoting radicalisation or CSAM. Some jurisdictions go beyond this and aim to regulate content that is “lawful but harmful”. In the UK’s Online Safety Bill, this category might include abuse, harassment, exposure to content encouraging self-harm or eating disorders, misogynistic abuse, and disinformation.

Content that promotes self-harm, including suicide, was criminalised in the UK following the death of a teenager, Molly Russell. According to the coroner’s report, she died by suicide after engaging with vast amounts of harmful content on the internet. Of the 16,300 pieces of content that Russell interacted with on Instagram in the six months before she died, 2,100 were related to suicide, self-harm, and depression. It also emerged that Pinterest, the image-sharing platform, had sent her content-recommendation emails with titles such as “10 depression pins you might like”.

While certain kinds of content on the internet need to be monitored because of the grievousness of the harm they cause, such monitoring cannot come at the cost of the fundamental right to free speech. The UK’s Online Safety Bill initially included provisions covering “legal but harmful” content, but concerns were raised about potential restrictions on free speech, and those specific provisions were removed from the Bill.

However, the Bill has been criticised by privacy experts for provisions that could create “backdoors” weakening end-to-end encryption. What falls within the purview of illegal content and of legal-but-harmful content varies significantly from jurisdiction to jurisdiction, largely on the basis of local cultural, social, and political considerations. An emerging economy such as India should carefully curate a list of harmful content suited to its digital economy aspirations and its constitutional framework.

The DIA can mandate platforms to develop mechanisms that let users control the content they see and whom they engage with, as sketched below. Platforms can also be required to enforce age limits and age-verification measures for children.
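Such user-side controls could take many forms; this minimal Python sketch (the preference schema, category labels, and age threshold are all invented for illustration) shows one way a platform might combine a per-user category filter with a simple age gate:

```python
from datetime import date

ADULT_AGE = 18  # assumed threshold, for illustration only

def age(born: date, today: date) -> int:
    """Full years elapsed between two dates."""
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))

def visible(item_tags: set, blocked: set) -> bool:
    """Hide any item carrying a category the user has opted out of."""
    return not (item_tags & blocked)

user = {"dob": date(2010, 5, 1), "blocked": {"gambling", "self-harm"}}

# Age gate for features restricted to adults.
print(age(user["dob"], date(2025, 1, 1)) >= ADULT_AGE)   # False -> feature gated

# Per-user content filtering based on the user's own preferences.
print(visible({"sports"}, user["blocked"]))              # True  -> shown
print(visible({"gambling", "ads"}, user["blocked"]))     # False -> filtered out
```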

Self-Regulatory Bodies
Moreover, India’s digital landscape must be cognisant of emerging categories of harms such as addictive technology, algorithmic bias, and content that promotes suicide and self-harm. Different categories of harms require different sets of responses, and no single regulatory body can devise mechanisms to address them all. For example, under the recently released IT Amendment Rules, 2023, the self-regulatory body (SRB) for gaming will be responsible for safeguarding users against the risk of gaming addiction, financial loss, and fraud.

Since user safety is a priority for both legislators and users, the principle of “responsible play” will have to be developed by SRBs whose technical experts are drawn from diverse sectors. Responsibility for addressing user harms should thus be left largely to the relevant SRBs, with the government acting as an appellate body. The SRBs can also review third-party audits to measure the efficacy of the online safety measures taken by platforms.

Moneycontrol
