Generative AI and its Impact on Cybercrime
May 26, 2024, 17:52 UTC

Generative AI has greatly impacted the criminal underworld, giving cybercriminals an efficient new toolkit for operating globally and at scale. Phishing is its main criminal use case, with a sharp increase in activity since the rise of ChatGPT. While OpenAI has policies in place to restrict illegal activities, they are challenging to enforce. As AI models continue to improve, it will become increasingly difficult to distinguish between legitimate and malicious emails.
Artificial intelligence has transformed many industries, and it has brought the same efficiency gains to the criminal underworld. According to Vincenzo Ciancaglini, a senior threat researcher at Trend Micro, generative AI gives criminals a powerful toolkit for working faster and on a global scale. Most cybercriminals, Ciancaglini notes, are ordinary people doing routine work that, like any job, benefits from productivity tools, which makes generative AI especially appealing to anyone looking for easy gains with little effort.
Last year, a controversial AI language model called WormGPT emerged, built on top of an open-source model, trained on malware-related data, and designed specifically to assist hackers, with no ethical rules or restrictions. After the project drew media attention, its creators shut it down. Since then, cybercriminals have largely stopped developing their own AI models and instead rely on existing tools that have proven reliable.
One of the main criminal uses of generative AI is phishing: tricking people into revealing sensitive information or access credentials that can then be abused. According to Mislav Balunović, an AI security researcher at ETH Zurich, the number of phishing emails has surged since the rise of ChatGPT. Criminals also use spam-generating services such as GoMail Pro, which has ChatGPT integrated into it, to translate or polish their messages and deceive victims more effectively. OpenAI's policies restrict the use of its products for illegal activity, but the restriction is hard to enforce in practice, because many prompts that serve malicious campaigns look entirely innocuous on their own.
OpenAI says it takes the safety of its products seriously and is continually working to improve safeguards against misuse and abuse; in a recent report, it stated that it had closed five accounts associated with state-affiliated malicious actors. Despite these efforts, AI security experts warn that as models continue to improve, attackers will be able to quickly generate more targeted and sophisticated lures, making it increasingly difficult to distinguish legitimate emails from malicious ones.
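To make that detection gap concrete, here is a minimal, purely illustrative Python sketch of the kind of surface-level heuristic filter that predates large language models. The phrases, regex patterns, and weights are hypothetical, chosen only for demonstration; they do not come from any real product or from the researchers quoted above.

```python
# Illustrative only: a toy heuristic phishing filter of the kind that
# LLM-polished lures increasingly evade. All signal phrases, patterns,
# and weights below are hypothetical, chosen for demonstration.
import re

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
]

def heuristic_phishing_score(email_text: str) -> float:
    """Score an email from 0.0 (clean) to 1.0 (suspicious) using
    surface cues that pre-LLM spam filters often relied on."""
    text = email_text.lower()
    score = 0.0

    # Signal 1: formulaic scare phrases copied across campaigns.
    score += 0.3 * sum(phrase in text for phrase in SUSPICIOUS_PHRASES)

    # Signal 2: a crude grammar proxy (runs of doubled punctuation),
    # once common in hastily translated phishing mail.
    if re.search(r"[!?]{2,}", text):
        score += 0.2

    # Signal 3: links pointing at raw IP addresses instead of domains.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 0.4

    return min(score, 1.0)

# A fluent, individually tailored lure written with a model like
# ChatGPT triggers none of these surface cues and scores 0.0.
print(heuristic_phishing_score(
    "Hi Ana, following up on the Q3 invoice we discussed last week."
))
```

The point of the sketch is the failure mode, not the filter itself: heuristics keyed to clumsy grammar and boilerplate phrasing lose their signal once attackers can have a language model translate and polish every message, which is exactly the shift the researchers describe.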