Security researchers have used the GPT-3 natural language generation model, and the ChatGPT chatbot built on it, to show how such deep learning models can make social engineering attacks, such as phishing or business email compromise (BEC) scams, harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but can also build entire email chains to make their messages more convincing, and can even generate messages that mimic the writing style of real people based on provided samples of their communications.