Artificial Intelligence (AI) has penetrated every industry in the past few years, and cyber criminals too have joined the race. According to security experts, the cyber criminal community has started leveraging AI capabilities to aid its illegal business.
According to a recent Kaspersky report, the darknet currently provides a range of language models specifically designed for hacking purposes such as BEC (Business Email Compromise), malware creation, phishing attacks, and beyond.
One such model is WormGPT, a nefarious version of ChatGPT that, unlike its legitimate counterpart, lacks built-in safety restrictions, making it an effective tool for cyber criminals looking to carry out attacks such as Business Email Compromise (BEC).
Cybersecurity experts believe that AI technology is a catalyst for increased threat levels across the internet. Threat actors weaponise the technology to create deepfakes and malware, crack passwords, and carry out phishing attacks targeting business entities and individuals.
“Generative AI is revolutionising the way malware is created. Threat actors can use AI algorithms to generate highly evasive and adaptable malware variants that can easily evade traditional signature-based antivirus solutions. These AI-generated malware strains constantly evolve, making detection and containment a significant challenge for cybersecurity professionals,” said Kumar Ritesh, founder and CEO of Cyfirma.
Phishers and scammers often exploit the popularity of certain products and brands, and WormGPT is one instance of such exploitation. On darknet forums and in illicit Telegram channels, Kaspersky experts have found websites and ads offering fake access to the malicious AI tool; these are apparently phishing sites targeting other cyber criminals.
“Deepfake technology, a subset of Generative AI, allows threat actors to create convincing video and audio forgeries. These attacks can tarnish reputations, manipulate public opinion, and even influence financial markets,” added Ritesh.
Over the past decade, there has been an increase in the deployment of bots over the internet. While bots aid in providing automation to a variety of services, their use by cyber criminals has raised concern. Data from Barracuda’s Threat Spotlight for the first half of 2023 shows that nearly half (48 per cent) of total global internet traffic was made up of bots, and most of these were bad bots.
Bad bots are programs designed to cause harm at speeds and volumes that human attackers could not match. These bots are loaded with combinations of stolen IDs and passwords and then deployed to attack email accounts and breach APIs, a technique known as credential stuffing.
“For the organisations targeted by these bots, a combination of under-secured APIs, weak authentication and access policies, and a lack of bot-specific security measures—such as limiting the volume and speed of inbound traffic—leave them vulnerable to attack,” said Tushar Richabadas, Principal Product Marketing Manager, Applications and Cloud Security at Barracuda.
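The defence Richabadas mentions, limiting the volume and speed of inbound traffic, is commonly implemented as a token-bucket rate limiter. The sketch below is illustrative only; the class name and thresholds are assumptions, not any specific product's API.

```python
import time

class TokenBucket:
    """Per-client token-bucket rate limiter: each request spends one
    token; tokens refill at a fixed rate up to a burst ceiling."""

    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec        # tokens added per second
        self.capacity = capacity        # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: request should be throttled

# A bot firing requests back-to-back exhausts its burst allowance
# almost immediately, while a human-paced client never hits the cap.
bucket = TokenBucket(rate_per_sec=1, capacity=5)
results = [bucket.allow() for _ in range(10)]
```

Here the first five rapid-fire requests are allowed and the rest are rejected until tokens refill, which is the behaviour that blunts high-speed bot traffic without affecting ordinary users.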
A recent study by CERT-In also highlighted the security risks associated with APIs. The report stated that API (Application Programming Interface) attacks on the Indian financial sector rose 62 per cent as of June 30, 2023, compared to the previous year.
Cybersecurity researchers say GenAI also makes ransomware attacks easier to deploy at pace and scale. “We are witnessing unmistakable trends and shifts in ransomware threats, with prominent groups like ALPHV/BlackCat and LockBit poised to continuously refine their tactics, exploiting novel vulnerabilities and expanding their reach,” said Raj Sivaraju, President APAC at Arete.
Indian cybersecurity firms and agencies have been keeping pace with challenges in the security domain. India’s CERT-In has been flagging vulnerabilities in smartphones and popular browsers such as Chrome to keep users safe from potential attacks and malware.
“The cybersecurity maturity level varies significantly across organisations and sectors in India. Some organisations have robust cybersecurity practices, while others, especially small and medium-sized enterprises, have limited resources and capabilities, making them more vulnerable to attacks,” said Pankit Desai, Co-Founder and CEO of Sequretek.
“Ethical considerations in AI, including transparency, accountability and responsible AI practices, are crucial. Companies need to develop ethical guidelines and practices to ensure that AI is used in a manner that aligns with societal values,” he added. Business Standard