
Evaluating Explanation Correctness in Legal Decision Making

Published on May 27, 2022

Abstract

As machine learning models are extensively deployed across many applications, concerns are rising about their trustworthiness. Explainable models have become an important topic of interest for high-stakes decision making, but their evaluation in the legal domain remains seriously understudied; existing work lacks thorough feedback from subject-matter experts to inform its evaluation. Our work aims to quantify the faithfulness and plausibility of explainable AI methods on several legal tasks, using computational evaluation and user studies directly involving lawyers. The computational evaluation measures faithfulness, i.e., how closely the explanation matches the model’s true reasoning, while the user studies measure plausibility, i.e., how reasonable the explanation appears to a subject-matter expert. The overall goal of this evaluation is to provide a more accurate indication of whether machine learning methods can adequately satisfy legal requirements.
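To give a concrete sense of what a computational faithfulness evaluation can look like, below is a minimal sketch of an erasure-based faithfulness score, in the spirit of the "comprehensiveness" metric popularized by the ERASER benchmark. This is an illustration only, not the paper's actual procedure: the `predict_proba` interface, the toy bag-of-words classifier, and all names here are assumptions for the sake of the example.

```python
import numpy as np

def comprehensiveness(predict_proba, tokens, rationale_idx, label):
    """Erasure-based faithfulness: the drop in the predicted probability
    of `label` after removing the explanation's highlighted tokens.
    A faithful explanation points at tokens whose removal causes a
    large drop in the model's confidence."""
    full = predict_proba(tokens)[label]
    kept = [t for i, t in enumerate(tokens) if i not in set(rationale_idx)]
    reduced = predict_proba(kept)[label]
    return full - reduced

# Toy classifier: scores a claim as "upheld" (label 1) when it mentions
# contract-related terms. Purely illustrative, not a real legal model.
CUES = {"contract", "breach", "damages"}

def toy_predict_proba(tokens):
    score = sum(t.lower() in CUES for t in tokens) / max(len(tokens), 1)
    p1 = min(0.5 + score, 0.99)
    return np.array([1.0 - p1, p1])

tokens = "The defendant committed a breach of the contract".split()
rationale = [4, 7]  # explanation highlights "breach" and "contract"
print(comprehensiveness(toy_predict_proba, tokens, rationale, label=1))
# A high score means removing the highlighted tokens sharply lowers
# the model's confidence, i.e. the explanation is faithful.
```

Plausibility, by contrast, cannot be scored this way; it requires asking subject-matter experts (here, lawyers) whether the highlighted rationale reads as a reasonable justification, which is why the paper pairs the computational metric with user studies.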


Article ID: 2022L23

Month: May

Year: 2022

Address: Online

Venue: Canadian Conference on Artificial Intelligence

Publisher: Canadian Artificial Intelligence Association

URL: https://caiac.pubpub.org/pub/67i6fcki
