Security researchers have used the GPT-3 natural language generation model, and ChatGPT, the chatbot built on it, to show how such deep learning models can make social engineering attacks, such as phishing or business email compromise (BEC) scams, harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-sounding text, but can also build entire email chains to make their messages more convincing, and can even generate messages that mimic the writing style of real people based on provided samples of their communications.