Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities to operate more efficiently, generating shopping recommendations, approving credit, processing images, performing predictive policing, and much more.
Like any digital technology, AI can suffer from a range of traditional security weaknesses, as well as emerging concerns such as privacy, bias, inequality, and safety. The National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF), to better manage the risks associated with AI. The framework's goal is to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.