Security researchers have used the GPT-3 natural language generation model, and the ChatGPT chatbot built on it, to show how such deep learning models can make social engineering attacks such as phishing and business email compromise scams harder to detect and easier to pull off.
The study, by researchers at security firm WithSecure, demonstrates that attackers can generate unique variations of the same phishing lure with grammatically correct, human-like text. They can also build entire email chains to make their messages more convincing, and can even generate messages in the writing style of real people based on provided samples of their communications.