Assessed Intelligence | Your Secure and Responsible Technology Partner
Featured Publication — arXiv · January 2024

Evolving AI Risk Management: A Maturity Model Based on the NIST AI Risk Management Framework

The responsible AI community has produced principles. The private sector has not kept pace. This paper provides the first structured maturity model for measuring that gap.

Researchers, government bodies, and organisations have called for a shift from general AI ethics principles to tangible, operationalisable practices. The private sector’s implementation has been sporadic and selective. At worst, it serves as a veneer of legitimacy over substantively unchanged practices.

The authors present a flexible maturity model based on the NIST AI Risk Management Framework, giving organisations a calibrated, actionable view of their AI governance maturity. The model maps to the NIST AI RMF’s structure and provides the graduated attainment levels that maturity models require for practical assessment and roadmapping.
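To make the idea of graduated attainment levels concrete, here is a minimal illustrative sketch of how an organisation's per-function maturity profile could be computed over the NIST AI RMF's four functions (GOVERN, MAP, MEASURE, MANAGE). The 0-4 attainment scale and the sample scores are assumptions for illustration only; the paper defines its own questionnaire and scoring approach.

```python
# Illustrative sketch: averaging per-practice attainment scores (0-4,
# an assumed scale) into a maturity profile per NIST AI RMF function.
# The paper's actual rubric and levels differ; this only shows the shape
# of a graduated, function-mapped assessment.

NIST_AI_RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

def maturity_profile(scores: dict[str, list[int]]) -> dict[str, float]:
    """Average the attainment scores recorded for each RMF function."""
    profile = {}
    for fn in NIST_AI_RMF_FUNCTIONS:
        fn_scores = scores.get(fn, [])
        profile[fn] = round(sum(fn_scores) / len(fn_scores), 2) if fn_scores else 0.0
    return profile

# Hypothetical assessment data for one organisation.
example = {
    "GOVERN": [3, 2, 4],
    "MAP": [1, 2],
    "MEASURE": [2, 2, 3],
    "MANAGE": [1],
}
print(maturity_profile(example))
# → {'GOVERN': 3.0, 'MAP': 1.5, 'MEASURE': 2.33, 'MANAGE': 1.0}
```

A profile like this is what supports roadmapping: the lowest-scoring functions indicate where governance effort should go next.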

“The gap between AI ethics principles and operationalised practice is not a knowledge problem. It is a measurement problem. You cannot close a gap you cannot locate.”

Published in January 2024 on arXiv, the paper has become a reference point in operationalising AI ethics. Its co-authors span academia, civil society, and industry.

Authors
Ravit Dotan (TechBetter)
Borhane Blili-Hamelin (AI Risk and Vulnerability Alliance)
Ravi Madhavan (University of Pittsburgh)
Jeanna Matthews (Clarkson University)
Joshua Scarpino (CEO & Founder, Assessed Intelligence)

Publication
arXiv · January 2024


Evolving AI Risk Management: A Maturity Model Based on the NIST AI Risk Management Framework, published on arXiv, January 2024.