The rise of AI brings exciting opportunities but also poses significant challenges, creating new risks and exacerbating existing ones. The “criticality” of an AI application is often regarded as the level of risk an organization is exposed to if the underlying model malfunctions, and it informs the degree of governance rigor required during model development, operationalization, and procurement. In this paper we discuss the need for a systematic approach to defining and determining an AI application’s criticality, as well as the guardrails and controls that must be put in place to ensure its robust development, ethical application, and ongoing monitoring post-deployment.
Article ID: 2022IND1
Month: June
Year: 2023
Address: Online
Venue: The 36th Canadian Conference on Artificial Intelligence
Publisher: Canadian Artificial Intelligence Association
URL: https://caiac.pubpub.org/pub/90jedwes