Security researchers have used the GPT-3 natural language generation model, and the ChatGPT chatbot built on it, to show how such deep learning models can make social engineering attacks such as phishing and business email compromise scams easier to pull off and harder to detect.
The study, by researchers at security firm WithSecure, demonstrates that attackers can not only generate unique variations of the same phishing lure in grammatically correct, human-like text, but also construct entire email chains to make their messages more convincing, and even mimic the writing style of real people from provided samples of their communications.