According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.
This issue appears to have come to a head recently because Amazon staffers, like other tech workers across the industry, have begun using ChatGPT as a “coding assistant” to help them write or improve code, the report notes.
“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”
Friday Squid Blogging: Giant Squid vs. Blue Marlin
Epic matchup. As usual, you can also use this squid post to talk about the security stories in the news...
German Police Raid DDoS-Friendly Host ‘FlyHosting’
Authorities in Germany this week seized Internet servers that powered FlyHosting, a dark web offering that catered to cybercriminals operating...
From Workshops to Leader Panels: A Recap of Women’s History Month at McAfee
March is Women’s History Month and International...
Italy’s Privacy Watchdog Blocks ChatGPT Amid Privacy Concerns
The GPDP’s probe stems from allegations that ChatGPT failed to comply with data collection rules.
Modular “AlienFox” Toolkit Used to Steal Cloud Service Credentials
Harvesting API keys and secrets from AWS SES, Microsoft Office 365 and other services.
New Azure Flaw “Super FabriXss” Enables Remote Code Execution Attacks
The cross-site scripting flaw affects SFX version 9.1.1436.9590 or earlier and has a CVSS score of 8.2.