Launched in November 2022, ChatGPT has taken the internet by storm, garnering 1 million users in just its first five days. By January 2023, the conversational AI had amassed 100 million monthly active users thanks to its ability to generate human-like, coherent content. While users have tested its capabilities by generating poems and marketing messages, and even by asking it to assist with meal planning, businesses are starting to worry whether the tool can be put to murkier uses.
The burning question: can ChatGPT be used for cyberattacks?
Cybercriminals are always looking for ways to infiltrate networks, steal data, and inflict losses on businesses using a variety of techniques.
For instance, phishing has remained one of the most severe and frequent threats, affecting millions of businesses yearly. It uses social engineering techniques to trick users into opening an email, downloading a malicious attachment, or clicking on a malicious link. These actions can result in network infiltration and the loss of sensitive business data.
However, one of the telltale signs of a fraudulent email has so far been poor language and grammatical errors. By using ChatGPT, hackers, even those who are not native English speakers, can write flawless emails, making readers believe they are legitimate.
Not only that, but emails drafted with ChatGPT can be written in half the time and in bulk. Additionally, they can be personalized with specific tones, styles, and individual customer information.
These capabilities accelerate the process of creating phishing emails and make them look legitimate.
ChatGPT could also be used to generate malicious code for infiltrating websites. Businesses worry that the ability to generate code and write convincing emails could allow even the most inexperienced threat actors to conduct attacks at scale.
Should businesses be worried?
While ChatGPT is in its nascent stages, the threat it poses could be catastrophic. However, to carry out a malicious attack, threat actors need more than an email or a piece of code; they also need the skills to deploy the entire attack.
Another consideration is that the results ChatGPT generates aren’t always accurate.
And finally, ChatGPT refuses direct requests to create phishing emails on ethical grounds. “I cannot create or assist with the creation of phishing emails or any other malicious content,” it says. However, users can reword their prompts to produce legitimate-looking emails that are then repurposed for phishing.
In conclusion, whether or not hackers begin using ChatGPT for malicious purposes, the threat landscape will only grow more dangerous, making it essential for businesses to follow fundamental cyber hygiene practices such as staying aware and educating employees.
In a world of AI-enabled threats, mitigation may not always be straightforward. By pairing basic cyber hygiene practices with awareness, however, businesses can continue to thwart cybercriminals’ malicious intentions.
This article was written by Shibu Paul, Vice President – International Sales at Array Networks.