Security researchers have used the GPT-3 natural language generation model, and the ChatGPT chatbot built on it, to show how such deep learning models can make social engineering attacks such as phishing or business email compromise (BEC) scams harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique, grammatically correct, and human-sounding variations of the same phishing lure, but can also build entire email chains to make their messages more convincing, and can even mimic the writing style of real people from supplied samples of their communications.