The rise of AI brings exciting opportunities but also significant challenges, creating new risks and exacerbating existing ones. The “criticality” of an AI application is often understood as the level of risk an organization is exposed to if the underlying model malfunctions, and it informs the degree of governance rigor required during model development, operationalization, and procurement. In this paper we argue for a systematic approach to defining and determining an AI application’s criticality, and we discuss the guardrails and controls that must be put in place to ensure robust development, ethical application, and ongoing monitoring post-deployment.
Article ID: 2022IND1
Publisher: Canadian Artificial Intelligence Association