

China tech firms need to raise security efforts with ChatGPT-like services

As global tech firms scramble to offer rival products to ChatGPT, the much-talked-about chatbot launched by San Francisco-based tech start-up OpenAI, Chinese artificial intelligence (AI) and security experts are warning that the unchecked growth of such services raises cybersecurity concerns.

Hackers and online scam groups have been using ChatGPT, which is capable of giving humanlike responses to complex questions, to write malicious code that could be used in spam and phishing emails, according to a representative at Beijing-based online security firm Huorong.
“We’ve noticed instances of ChatGPT being used to generate malicious codes,” said the person. “It is lowering the barriers for [launching] online attacks.”

The Huorong representative added that while ChatGPT makes it easier to launch online attacks, it does not necessarily make those attacks more effective.

“[ChatGPT] is able to quote open-source malicious backdoor or trojan horse codes that are already available online, but it will not be able to elevate the function of the codes [to make them more effective],” said the person.

Still, having another tool that can assist and potentially popularise internet scams does not bode well for Chinese online users, who are already at risk from a variety of online frauds, from privacy leaks to malicious adware.

Dr You Chuanman, director of the IIA Centre for Regulation and Global Governance at the Chinese University of Hong Kong, Shenzhen campus, cautioned that as the technology evolves it could pose further challenges for the online security sector.

“There have been cases of ChatGPT being used together with some other encrypted services, such as Telegram or WhatsApp, making online criminal activities more covert and harder to discover or track,” said You.

He added that the AI chatbot could also make life much harder for Chinese internet firms, which up until now have largely relied on armies of human censors to review online content. ChatGPT-like services that can potentially churn out a huge volume of online scams and sensitive content will mean a significant rise in content review budgets, said You.

The potential proliferation of online scams is not the only issue, though. Hackers are also exploiting ChatGPT’s language abilities, using it to write phishing emails that appear more persuasive.

“Personalised and error-free phishing and scam content appears more credible to the victims and is likely to be more effective [with AI-powered chat tools],” said Feixiang He, Adversary Intelligence Research lead at cybersecurity solutions provider Group-IB.

“AI makes it quicker and cheaper for scammers to generate unique and personalised phishing content and scripts targeted at victims,” he added.

In mid-February, a Hangzhou resident used ChatGPT, which is not officially available in China and requires a virtual private network (VPN) to access, to write a fake announcement – in the tone of the city’s municipal government – about the city retiring its end-number licence plate restriction policy.

The announcement spread rapidly online and, according to local media reports, prompted a police investigation, in the first major example of ChatGPT being used to spread an online rumour in China.

Chinese tech firms, in their race to launch their own ChatGPT-like services, are increasingly aware of the security challenges AI technologies could bring, according to Liang Hongjin, a partner at CGL Consulting, a talent agency that has been helping Chinese firms hire AI talent.

Liang said his firm has been tapped by a slew of China’s leading internet companies to recruit scientists who specialise in AI-related security.

But compared with the red-hot competition for people who can develop and launch ChatGPT-like services, Chinese companies are behind the curve in hiring the security talent needed to rein such services in, and “overall, this is a universal trend [of ignoring the need to better regulate AI technologies] globally”, said Liang. South China Morning Post
