Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities for greater efficiency, shaping shopping recommendations, approving credit, processing images, supporting predictive policing, and much more.
Like any digital technology, AI is subject to a range of traditional security weaknesses, as well as emerging concerns such as privacy, bias, inequality, and safety. The National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF), to better manage risks associated with AI. The framework's goal is to improve organizations' ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
More Stories
FBI Shuts Down Chinese Botnet
The FBI has shut down a botnet run by Chinese hackers: The botnet malware infected a number of different types...
Western Agencies Warn Risk from Chinese-Controlled Botnet
Cyber and law enforcement agencies across the "Five Eyes" countries have issued a warning about a large-scale botnet linked to a Chinese firm and...
8000 Claimants Sue Outsourcing Giant Capita Over 2023 Data Breach
A Manchester law firm has filed a lawsuit against outsourcing giant Capita, representing nearly 8000 claimants who were affected by...
FCC $200m Cyber Grant Pilot Opens Applications for Schools and Libraries
US schools and libraries have until November 1, 2024, to enroll in a three-year program during which participants will receive...
Cryptojacking Gang TeamTNT Makes a Comeback
Group-IB claims to have found evidence of a new TeamTNT cryptojacking campaign.
Insecure APIs and Bot Attacks Cost Global Firms $186bn
Thales claims API insecurity and automated bot abuse are costing organizations an estimated $186bn annually.