Google has announced the launch of the Secure AI Framework (SAIF), a conceptual framework for securing AI systems. Google, owner of the generative AI chatbot Bard and parent company of AI research lab DeepMind, said a framework spanning the public and private sectors is essential to ensure that responsible actors safeguard the technology underpinning AI advancements, so that AI models are secure by default when deployed. Its new framework concept is an important step in that direction, the tech giant claimed.
SAIF is designed to help mitigate risks specific to AI systems, such as model theft, poisoning of training data, malicious inputs via prompt injection, and the extraction of confidential information from training data. “As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical,” Google wrote in a blog post.