CERT-In warns against misuse of AI-based language apps

The Indian Computer Emergency Response Team (CERT-In) on Wednesday published a new advisory on the security implications of AI language-based applications.

In the advisory dated May 10, the cyber security agency under the Ministry of Electronics & Information Technology said that AI language-based models such as ChatGPT, Bing AI, and Bard AI are gaining wide recognition and are being discussed for their useful applications, but can also be used by threat actors to target individuals and organizations.

In its advisory, CERT-In listed various uses of AI language-based applications, noting that people are using them to understand, interpret, and enumerate cyber security contexts, review security events and logs, and interpret malicious code and malware samples.

“The applications have the potential to be used in vulnerability scanning, translation of security code from one language to another or transfer of code into natural languages, performing security audit of the codes, VAPT, or integration of application with SOC and SIEMs for monitoring, reviewing, and generating alerts,” the advisory said.
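
As a rough illustration of the log-review use case the advisory mentions, the sketch below (not part of the advisory) builds a triage prompt around a raw log entry; query_model is a hypothetical placeholder for whatever AI language-based application or API an organization actually uses.

```python
# Illustrative sketch only: wrapping a raw security log entry in a triage
# prompt, one of the defensive uses CERT-In describes (reviewing security
# events and logs). query_model() is a hypothetical stand-in for whatever
# AI language-based application or API an organization uses.

def build_triage_prompt(log_line: str) -> str:
    """Ask the model to explain a log entry and suggest a severity."""
    return (
        "You are assisting a security analyst. Explain the following log entry, "
        "say whether it looks suspicious, and suggest a severity (low/medium/high).\n\n"
        f"Log entry: {log_line}"
    )


def query_model(prompt: str) -> str:
    """Hypothetical placeholder; wire this to the model or service you actually use."""
    raise NotImplementedError


if __name__ == "__main__":
    sample = '203.0.113.7 - - [10/May/2023:12:01:33 +0530] "POST /wp-login.php HTTP/1.1" 200 512'
    print(build_triage_prompt(sample))
```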

However, as per CERT-In, AI-based applications can also be used by threat actors to conduct various malicious activities such as:

  • A threat actor could use the application to write malicious code to exploit a vulnerability, conduct scanning, perform privilege escalation and lateral movement, or construct malware or ransomware for a targeted system.
  • AI-based applications can generate text that reads as if written by a human. This can be used to disseminate fake news, run scams, generate misinformation, create phishing messages, or produce deepfake texts.
  • A threat actor can request a promotional email, a shopping notification, or a software update notice in their native language and receive a well-crafted version in English, which can then be used for phishing campaigns.
  • Creation of fake websites and web pages, using domains similar to those of AI-based applications, to host and distribute malware to users through malicious links or attachments.
  • Creation of fake applications impersonating AI-based applications.
  • Cybercriminals could use AI language models to scrape information from the internet, such as articles, websites, news, and posts, potentially collecting Personally Identifiable Information (PII) without the owners' explicit consent to build a corpus of text data.

How To Minimize Threats Arising From AI-Based Applications

Here are some of the safety measures stated by CERT-In in its advisory that can be followed to minimize the adversarial threats arising from AI-based applications:

  • Educate developers and users about the risks and threats associated with interacting with AI language models.
  • Verify domains and URLs impersonating AI language-based applications, and avoid clicking on suspicious links (a sketch of such a check follows this list).
  • Implement appropriate controls to preserve the security and privacy of data. Do not submit any sensitive information, such as login credentials, financial information, or copyrighted data, to such applications.
  • Ensure that the text generated is not being used for illegal or unethical activities or for the dissemination of misinformation.
  • Use content filtering and moderation techniques within the organization to prevent the dissemination of malicious links, inappropriate content, or harmful information through such applications (see the second sketch below this list).
  • Conduct regular security audits and assessments of systems and infrastructure to identify potential vulnerabilities and information disclosures.
  • Organizations may continuously monitor user interactions with AI language-based applications for any suspicious or malicious activity within their infrastructure.
  • Organizations may prepare an incident response plan and establish the set of activities to be followed in case an incident occurs.
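
As one illustration of the domain-verification measure above, here is a minimal Python sketch (not from the advisory) that flags URLs whose host closely resembles, but does not match, a small allow-list of genuine AI-application domains; the allow-list and the 0.8 similarity threshold are illustrative assumptions.

```python
# Illustrative sketch only: flagging URLs whose host resembles, but does not
# match, well-known AI-application domains. The allow-list and the similarity
# threshold below are assumptions; an organization would maintain its own.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_GOOD = {"chat.openai.com", "openai.com", "bard.google.com", "bing.com"}


def host_of(url: str) -> str:
    """Extract the hostname from a URL, lower-cased."""
    return (urlparse(url).hostname or "").lower()


def looks_like_impersonation(url: str, threshold: float = 0.8) -> bool:
    """True if the host is not on the allow-list but resembles an entry on it."""
    host = host_of(url)
    if host in KNOWN_GOOD:
        return False
    for good in KNOWN_GOOD:
        # A genuine name embedded in a longer host (look-alike subdomains) or a
        # close near-miss spelling are both treated as suspicious.
        if good in host or SequenceMatcher(None, host, good).ratio() >= threshold:
            return True
    return False


if __name__ == "__main__":
    for link in ("https://chat.openai.com/",
                 "http://chat-openai.com.example.net/login",
                 "https://chat0penai.com/download"):
        verdict = "look-alike, treat as suspicious" if looks_like_impersonation(link) else "not flagged"
        print(link, "->", verdict)
```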

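And as an illustration of the content-filtering measure, the following sketch (again, not prescribed by CERT-In) extracts URLs from model-generated text and flags any whose domain is not on an organization-maintained allow-list before the text is shared further; the allow-list shown is a placeholder.

```python
# Illustrative sketch only: a simple moderation pass that extracts URLs from
# model-generated text and flags any whose domain is not on an allow-list
# before the text is disseminated further. ALLOWED_DOMAINS is a placeholder.
import re
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"example.com", "intranet.example.com"}
URL_PATTERN = re.compile(r"https?://[^\s)>\"']+")


def flag_untrusted_links(generated_text: str) -> list:
    """Return every URL in the text whose hostname is not on the allow-list."""
    flagged = []
    for url in URL_PATTERN.findall(generated_text):
        host = (urlparse(url).hostname or "").lower()
        if host not in ALLOWED_DOMAINS:
            flagged.append(url)
    return flagged


if __name__ == "__main__":
    text = ("Please update your client from https://example.com/update "
            "or http://updates.examp1e.net/client.exe")
    print(flag_untrusted_links(text))  # only the second, look-alike link is flagged
```
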
Bloomberg
