Analyzing the Impact of Explainable AI on Strategic Decision Quality in Large Enterprises

Authors

  • Faseeha Ameen

Keywords:

Explainable AI, Strategic Decision Quality, Trust, Perceived Usefulness, Risk Perception, Large Enterprises

Abstract

The rapid evolution of artificial intelligence (AI) has transformed strategic decision-making across industries, yet concerns persist regarding the transparency and interpretability of AI models in enterprise contexts. Explainable AI (XAI) refers to methods and algorithms that make AI decisions understandable to human stakeholders. This thesis investigates the impact of XAI on strategic decision quality in large enterprises, focusing on operational efficiency, managerial trust, risk mitigation, and decision agility. Drawing on literature from AI, organizational decision-making, and information systems research, it proposes a conceptual framework linking XAI adoption to enhanced strategic decisions, mediated by trust, perceived usefulness, and accountability.

Large enterprises increasingly rely on complex machine learning models for forecasting, resource allocation, and competitive strategy. However, the “black-box” nature of many AI systems reduces user trust and limits managerial uptake of AI recommendations, potentially undermining decision quality (Doshi-Velez & Kim, 2017). Explainability is thus both a technical and an organizational imperative: by providing interpretable insights into AI outputs, XAI can reduce cognitive barriers, improve stakeholder comprehension, and align model reasoning with enterprise objectives (Arrieta et al., 2020).

The research combines qualitative and quantitative approaches. A cross-sectional survey of AI users and decision makers in large enterprises measures the study constructs with validated scales, and structural equation modeling (SEM) in SmartPLS tests hypotheses on how explainability affects trust, perceived usefulness, risk perception, and strategic decision quality.

Results indicate significant positive relationships between XAI satisfaction and trust, between trust and perceived usefulness, and between perceived usefulness and decision quality. Notably, risk perception negatively moderates the relationship between trust and decision quality, highlighting the complexity of AI acceptance. The findings suggest that XAI enhances strategic decision quality by building managerial trust and by clarifying AI logic for decision makers. Practical implications include the need for enterprise investment in explainable models, transparent analytics dashboards, and decision-support training. Theoretically, the study delineates the mechanisms through which XAI affects high-level strategic outcomes. Overall, the thesis underscores the importance of transparent AI systems for effective enterprise decision-making, offering guidance for both AI developers and organizational leaders. Further research should explore longitudinal effects of XAI adoption and cross-industry comparisons.
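The negative moderation reported above can be illustrated with a simple interaction-term regression on synthetic data. This is only a conceptual sketch: the variable names, effect sizes, and generated data are illustrative assumptions, not the study's SmartPLS model or survey measures.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Illustrative synthetic constructs (standardized scores; assumed, not study data)
trust = rng.normal(size=n)
risk = rng.normal(size=n)
# Decision quality: trust helps, but its effect weakens as perceived risk rises
quality = 0.6 * trust + 0.1 * risk - 0.3 * trust * risk + rng.normal(scale=0.5, size=n)

# Ordinary least squares with an interaction term: quality ~ trust + risk + trust*risk
X = np.column_stack([np.ones(n), trust, risk, trust * risk])
coefs, *_ = np.linalg.lstsq(X, quality, rcond=None)
b0, b_trust, b_risk, b_interact = coefs

# A negative interaction coefficient reproduces the moderation pattern:
# higher risk perception dampens the trust -> decision-quality relationship
print(f"trust: {b_trust:.2f}, risk: {b_risk:.2f}, trust x risk: {b_interact:.2f}")
```

In a moderated regression of this form, the simple slope of trust on decision quality is b_trust + b_interact × risk, so a negative interaction term means the trust effect shrinks as risk perception increases.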

Published

2024-06-30