Security researchers have used the GPT-3 natural language generation model, and ChatGPT, the chatbot built on it, to show how such deep learning models can make social engineering attacks such as phishing and business email compromise scams harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but can also build entire email chains to make their messages more convincing, and can even mimic the writing style of real people from provided samples of their communications.