We extend the formats of explanations in interpretable NLP with the proposed entity-centric reasoning chains for multi-hop question answering. We also propose a cooperative game approach that learns to recover such explanations from weakly supervised signals, i.e., question-answer pairs. We evaluate our task and method on newly created benchmarks derived from two multi-hop datasets, HotpotQA and MedHop, including hand-labeled reasoning chains for the latter. The experiments demonstrate the effectiveness of our approach.
Article ID: 2021S12
Publisher: Canadian Artificial Intelligence Association