
Perspective

Digital security in the age of the AI person

Deepfakes, hyper-realistic AI-generated videos, audio clips, and images, have surged into the realms of cyber security and the media, presenting both technological promise and significant hazards. While their uses range from harmless entertainment to education, the ease with which they can create believable falsehoods has sparked controversy. The techniques involved include face swapping for realistic impersonation, voice synthesis, manipulation of gestures and movements in video, and text-based deepfakes that mimic a person's writing style.

A related threat is the synthetic ID, which is often used to scam money from companies. Synthetic IDs combine data taken from real people to create new identities, and the frauds built around them cost billions of pounds every year. In an age of increasing remote working, these AI persons could even end up entering the workforce.

There was a time when a photograph of a politician taking a bribe was hard evidence of corruption. Knowing the above, would you not now pause to wonder whether the image was AI-generated? The world has already seen fake news spread this way, such as the AI-generated deepfake video of Ukrainian President Volodymyr Zelensky telling his troops to lay down their arms in March 2022. With trust in traditional news sources already low, AI makes an already volatile information space even more dangerous.

AI-generated faces are becoming worryingly realistic: none of the people in such images actually exists; they are created entirely by a computer.
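As a rough illustration of how such faces come to exist, a generative model maps a random "latent" noise vector to an image through learned transformations. The untrained NumPy sketch below is only a toy: the layer sizes and weights are arbitrary stand-ins, so its output is structured noise, but it shows the latent-to-image mapping that a trained face generator performs.

```python
import numpy as np

def toy_generator(noise, rng):
    """Map a latent noise vector to a fake 'image' array.

    A real generative model learns these weights from thousands of
    photographs; here they are random, so the output is noise shaped
    like an image. The point is only the latent -> pixels mapping.
    """
    w1 = rng.standard_normal((noise.size, 256)) * 0.1   # latent -> hidden
    w2 = rng.standard_normal((256, 64 * 64)) * 0.1      # hidden -> pixels
    hidden = np.tanh(noise @ w1)
    pixels = np.tanh(hidden @ w2)                       # values in [-1, 1]
    return pixels.reshape(64, 64)                       # a 64x64 "face"

rng = np.random.default_rng(0)
latent = rng.standard_normal(128)     # random latent code
image = toy_generator(latent, rng)
print(image.shape)                    # (64, 64)
```

Because every pixel is synthesised from the latent code rather than captured by a camera, a different random vector yields a different, equally "photographic" face, which is why no real person corresponds to these images.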

Their ability to blur the line between fact and fiction threatens reputations, truth, and democratic institutions. Malicious actors exploit these tools to craft fake news, fabricate evidence, and spread misinformation effortlessly. In response, several US states have enacted laws, primarily targeting deepfake pornography. At the federal level, the Deep Fakes Accountability Act, proposed in 2019, would require deepfake creators to disclose their use of the technology, prohibit deceptive distribution during elections or to harm individuals' reputations, and impose penalties for violations. India's IT Rules, 2021, require intermediary platforms to remove reported fake or deepfake content within 36 hours.

The Indian Prime Minister raised the alarm on November 17, 2023, highlighting the threats posed by deepfake videos and the digital security risks they create. Criminals have long exploited deepfake videos and audio, particularly in pornography, and the advent of advanced technologies like GenAI has made creating such deepfakes even easier. Typically, criminals acquire photos and audio clips from social media and manipulate them to commit cybercrimes.

Political campaigns in 2020 witnessed AI-generated deepfakes, such as manipulated videos featuring Bharatiya Janata Party (BJP) leader Manoj Tiwari and a doctored video of Madhya Pradesh Congress chief Kamal Nath. These instances caused confusion and stirred controversy. In a more recent incident, in November 2023, a deepfake video emerged in which actress Rashmika Mandanna's face was digitally placed onto the body of British-Indian social media figure Zara Patel.

Detecting these manipulations proves challenging. This burgeoning technology necessitates innovation and robust legal frameworks to leverage its potential while mitigating harmful effects.
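One family of detection techniques looks for statistical fingerprints that generators leave behind, for instance, regions that are smoother than a camera sensor would ever produce. The NumPy sketch below is a toy heuristic, not a production detector; the centre-radius and the test images are arbitrary choices made for illustration. It compares the share of high-frequency energy in a noisy "camera-like" image against a synthetically smooth one:

```python
import numpy as np

def high_freq_ratio(img):
    """Fraction of spectral energy outside the low-frequency centre.

    Over-smoothed, synthetically generated content tends to carry less
    high-frequency energy than the sensor noise in a real photograph.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                     # size of the "low frequency" box
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return 1.0 - low / spectrum.sum()

rng = np.random.default_rng(1)
natural = rng.standard_normal((64, 64))    # stand-in for camera sensor noise
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
smooth = np.sin(x)[None, :] * np.cos(x)[:, None]   # slowly varying surface

print(high_freq_ratio(natural) > high_freq_ratio(smooth))  # True
```

Real detectors combine many such cues (compression artefacts, blink rates, lighting inconsistencies) and still struggle, which is why the technology is in an arms race with the generators it targets.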

Under existing law, deepfakes encroach upon an individual's identity and features, infringing the right to privacy guaranteed by Article 21 of India's Constitution. While India lacks specific legislation targeting deepfakes and AI-related crimes, existing laws offer avenues for civil and criminal remedies. The IT Intermediary Rules, 2021, compel social media platforms to exercise due diligence and explicitly prohibit impersonation content. In an advisory dated December 26, 2023, the Indian government emphasized that such content restrictions must be communicated clearly to users, specifying the penalties under the IPC, 1860, and the IT Act for violations of Rule 3(1)(b).

| S. No. | Legislation | Section | Title | Penalty |
|---|---|---|---|---|
| 1 | Information Technology Act, 2000 (IT Act) | 66E | Punishment for violation of privacy | Imprisonment which may extend to three years, or fine not exceeding two lakh rupees, or both |
| 2 | IT Act | 66D | Punishment for cheating by personation by using computer resource | Imprisonment of either description which may extend to three years, and fine which may extend to one lakh rupees |
| 3 | IT Act | 67 | Punishment for publishing or transmitting obscene material in electronic form | First conviction: imprisonment of either description which may extend to three years, and fine which may extend to five lakh rupees; second or subsequent conviction: imprisonment which may extend to five years, and fine which may extend to ten lakh rupees |
| 4 | IT Act | 67A | Punishment for publishing or transmitting material containing sexually explicit acts, etc., in electronic form | First conviction: imprisonment of either description which may extend to five years, and fine which may extend to ten lakh rupees; second or subsequent conviction: imprisonment which may extend to seven years, and fine which may extend to ten lakh rupees |
| 5 | IT Act | 67B | Punishment for publishing or transmitting material depicting children in sexually explicit acts, etc., in electronic form | First conviction: imprisonment of either description which may extend to five years, and fine which may extend to ten lakh rupees; second or subsequent conviction: imprisonment which may extend to seven years, and fine which may extend to ten lakh rupees |
| 6 | Indian Penal Code, 1860 (IPC) | 416 and 419 | Cheating by personation, and punishment for cheating by personation | Imprisonment of either description which may extend to three years, or fine, or both |
| 7 | IPC | 420 | Cheating and dishonestly inducing delivery of property | Imprisonment of either description which may extend to seven years, and fine |
| 8 | IPC | 465 | Punishment for forgery | Imprisonment of either description which may extend to two years, or fine, or both |
| 9 | IPC | 468 | Forgery for purpose of cheating | Imprisonment of either description which may extend to seven years, and fine |
| 10 | IPC | 469 | Forgery for purpose of harming reputation | Imprisonment of either description which may extend to three years, and fine |
| 11 | IPC | 499 and 500 | Defamation, and punishment for defamation | Simple imprisonment which may extend to two years, or fine, or both |
| 12 | IPC | 509 | Words, gestures, or acts intended to insult the modesty of a woman | Simple imprisonment which may extend to three years, and fine |
| 13 | IPC | 153A | Promoting enmity between different groups on grounds of religion, race, place of birth, residence, language, etc., and doing acts prejudicial to maintenance of harmony | Imprisonment which may extend to three years, or fine, or both |
| 14 | IPC | 153B | Imputations, assertions prejudicial to national integration | Imprisonment which may extend to three years, or fine, or both |
| 15 | Protection of Children from Sexual Offences Act, 2012 | 2(da) r/w 13, 14 | Use of child for pornographic purposes, and punishment for using child for pornographic purposes | Imprisonment of not less than five years, and fine; second or subsequent conviction: imprisonment of not less than seven years, and fine |

Rule 3(1)(b) of the IT Rules mandates intermediaries to communicate their policies, rules, privacy terms, and user agreements in the language preferred by the user. Moreover, Rule 3(2)(b) requires intermediaries to act on complaints about impersonation content, including digitally altered images, within a 24-hour window, taking the steps necessary to remove or restrict access to such content.
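Taken together, the Rules put two clocks on an intermediary: 24 hours for impersonation complaints under Rule 3(2)(b), and 36 hours for other reported fake or deepfake content. A minimal sketch of how a platform's compliance tooling might track those deadlines (the function name and structure are illustrative, not drawn from any statute or real system):

```python
from datetime import datetime, timedelta

# Timelines under the IT Rules, 2021, as described above:
# 24 hours for impersonation complaints (Rule 3(2)(b)),
# 36 hours for other reported fake/deepfake content.
REMOVAL_WINDOWS = {
    "impersonation": timedelta(hours=24),
    "deepfake": timedelta(hours=36),
}

def removal_deadline(reported_at: datetime, complaint_type: str) -> datetime:
    """Return the latest time by which the content must be actioned."""
    return reported_at + REMOVAL_WINDOWS[complaint_type]

reported = datetime(2023, 12, 1, 9, 0)
print(removal_deadline(reported, "impersonation"))  # 2023-12-02 09:00:00
print(removal_deadline(reported, "deepfake"))       # 2023-12-02 21:00:00
```

The shorter impersonation window reflects the Rules' view that digitally altered likenesses cause the most immediate harm to the person depicted.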

In its latest Advisory dated November 7, 2023, the Ministry of Electronics and Information Technology directed major social media intermediaries to:

  • Exercise due diligence and make reasonable efforts to detect misinformation, particularly content violating regulations or user agreements.
  • Act swiftly within the timeframes specified by the IT Rules 2021 against such instances.
  • Ensure users refrain from hosting such prohibited information or content, including deepfakes.
  • Take down reported content within 36 hours.
  • Take expedited actions within the timelines set by the IT Rules 2021 to restrict access to such content.

Section 79 of the IT Act, 2000 provides a safe-harbor provision: an intermediary is not liable for third-party information, data, or communication links made available or hosted by it.

The Ministry emphasized that failure to comply with the IT Act and Rules may result in the loss of this safe-harbor protection under Section 79(1) of the Information Technology Act, 2000.

Disclaimer: The views expressed in this article are solely those of the author and do not necessarily reflect the opinions, beliefs, or policies of his employer or its affiliates. The content provided herein is intended for informational purposes only and should not be considered as professional advice. His employer disclaims any liability arising from the content of this article and does not endorse or verify the accuracy of the information provided. Readers are encouraged to seek professional advice or conduct their own research before acting upon any information contained in this article.
