Security researchers have used the GPT-3 natural language generation model and the ChatGPT chatbot based on it to show how such deep learning models can be used to make social engineering attacks such as phishing or business email compromise scams harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique, grammatically correct, human-like variations of the same phishing lure, but can also build entire email chains to make their messages more convincing, and can even mimic the writing style of real people when given samples of their communications.