ChatGPT, the AI-powered chatbot developed by AI research company OpenAI, continues to impress users with its capabilities. The platform, which is currently free to use, can hold conversations, solve arithmetic problems, write long essays and marketing campaigns for brands, and even review and write computer code. However, some hackers have been using ChatGPT to write malicious code and create malware. Despite this, the chatbot’s versatility and accuracy (even if imperfect at times) make it a popular choice among users.
According to security firm Check Point Research (CPR), activity in several underground communities indicates that hackers are using OpenAI’s tool to develop malicious applications. In a blog post, researchers note that the current iteration of these malicious tools is basic, though “it is only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad.”
The research firm also spotted a thread named “ChatGPT – Benefits of Malware” on a popular underground hacking forum, where the poster described his experiments with ChatGPT. He had used the platform to create a Python-based information stealer that “searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.”
In another instance, a hacker used ChatGPT to create simple Java-based malware. The post notes, “This (Java) script can of course be modified to download and run any program, including common malware families.”
Similarly, the research firm also spotted instances where hackers used ChatGPT to create a malicious encryption tool and a dark web marketplace to facilitate “fraud activity”.
The research firm cautions that it is still too early to say whether ChatGPT will become a new favourite tool for participants on the dark web. However, the platform is steadily gaining momentum, and it may help both amateur and professional hackers, at the very least, to create the campaigns and text that appear on shady websites.
For instance, in India there have been multiple cases of bad actors using WhatsApp to steal users’ money. In many of those cases, the malicious campaigns were written in grammatically poor English, which ChatGPT can now easily fix. Similarly, a hacker could leverage OpenAI’s DALL-E platform to create images without violating copyrights. Since these tools can handle the creatives practically for free, hackers may increasingly craft legitimate-looking campaigns bearing phishing links to steal users’ personal details and even money.
At the moment, ChatGPT is receiving continuous upgrades, and OpenAI may yet address the problem of the platform being used to write malicious code. The company is already working on an invisible watermark to distinguish AI-generated text, which could also help with plagiarism checks. IndiaToday