According to internal Slack messages that were leaked to Insider, an Amazon lawyer told workers that they had “already seen instances” of text generated by ChatGPT that “closely” resembled internal company data.
This issue seems to have come to a head recently because Amazon staffers, like other tech workers across the industry, have begun using ChatGPT as a "coding assistant" of sorts to help them write or improve code, the report notes.
[…]
“This is important because your inputs may be used as training data for a further iteration of ChatGPT,” the lawyer wrote in the Slack messages viewed by Insider, “and we wouldn’t want its output to include or resemble our confidential information.”