Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities to operate more efficiently, shaping shopping recommendations, approving credit, processing images, supporting predictive policing, and much more.
Like any digital technology, AI can suffer from a range of traditional security weaknesses, as well as emerging concerns such as privacy, bias, inequality, and safety issues. The National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF), to better manage the risks associated with AI. The framework's goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.