Business and government organizations are rapidly embracing an expanding variety of artificial intelligence (AI) applications: automating activities so they run more efficiently, tailoring shopping recommendations, approving credit, processing images, powering predictive policing, and much more.
Like any digital technology, AI can suffer from a range of traditional security weaknesses, as well as emerging concerns such as privacy, bias, inequality, and safety. To better manage the risks associated with AI, the National Institute of Standards and Technology (NIST) is developing a voluntary framework, the Artificial Intelligence Risk Management Framework (AI RMF). The framework’s goal is to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.