Ravit Dotan, Borhane Blili-Hamelin, Ravi Madhavan, Jeanna Matthews, Joshua Scarpino

Researchers, government bodies, and organizations have repeatedly called for the responsible AI community to shift from general principles to tangible, operationalizable practices for mitigating the potential sociotechnical harms of AI. Frameworks like the NIST AI RMF embody an emerging consensus on recommended practices for operationalizing sociotechnical harm mitigation. However, private sector organizations currently lag far behind this emerging consensus. Implementation is sporadic and selective at best. At worst, it is ineffective and risks serving as a misleading veneer of trustworthy processes, lending an appearance of legitimacy to substantively harmful practices. In this paper, we provide a foundation for a framework for evaluating where organizations stand relative to the emerging consensus on sociotechnical harm mitigation best practices: a flexible maturity model based on the NIST AI RMF.

Cite as: arXiv:2401.15229 [cs.CY] (or arXiv:2401.15229v2 [cs.CY] for this version) https://doi.org/10.48550/arXiv.2401.15229
