Beyond the Black Box: Unraveling the Role of Explainability in Human-AI Collaboration
Information Systems and Operations Management
Speaker: Tamer Boyaci (ESMT)
Room: Bernard Ramanantsoa
Abstract
While AI-based decision tools are increasingly employed to enhance collaborative decision-making, challenges such as overreliance or underreliance on AI outputs jeopardize their ability to achieve complementary team performance. To address these concerns, explainable AI models have been studied extensively. Despite their promise of bringing transparency and a better understanding of algorithmic decision-making, the evidence from recent empirical studies has been quite mixed. In this paper, we bring a theoretical perspective to the role of AI explainability in mitigating these challenges. We develop an analytical model that incorporates the defining features of human and machine intelligence, capturing the limited but flexible nature of human cognition alongside imperfect machine recommendations and explanations that reflect the quality of those predictions. We then systematically investigate the multifaceted impact of explainability on decision accuracy, underreliance, overreliance, and users' cognitive loads. Our results indicate that while low explainability levels have no impact on decision accuracy or reliance levels, they do lessen the cognitive burden on the decision-maker. Higher explainability levels, on the other hand, enhance accuracy by mitigating overreliance, but at the expense of higher underreliance. Furthermore, the incremental impact of explainability (relative to a black-box system) is greater when the decision-maker is more cognitively constrained, when the decision task is sufficiently complex, or when the stakes are lower. Surprisingly, we find that higher explainability levels can escalate the overall cognitive burden, especially when the decision-maker is pressed for time to complete a complex task and initially doubts the machine's quality, precisely the scenarios where explanations are expected to be most needed.
By eliciting the comprehensive effects of explainability on decision outcomes and cognitive efforts, our study contributes to our understanding of designing effective human-AI systems in diverse decision-making environments.
Authors: Tamer Boyaci (joint work with Francis de Vericourt and Caner Canyakmaz).