Measuring Trust in Artificial Intelligence Systems: A Multidimensional Framework and Empirical Assessment
Keywords:
Artificial Intelligence; Trust in AI; Trustworthiness; Transparency; Algorithmic Fairness; Accountability; Reliability; Technology Acceptance; AI Governance; Explainable AI; Human–AI Interaction; Structural Equation Modeling

Abstract
The rapid integration of artificial intelligence (AI) systems into high-stakes decision environments has intensified concerns regarding trust, reliability, and ethical governance. Despite growing scholarly attention, existing research on trust in AI remains fragmented, often focusing on isolated technical or psychological factors without integrating ethical and institutional dimensions. This study develops and empirically validates a multidimensional framework for measuring trust in AI systems. Drawing on theories of trust in automation, technology acceptance, and responsible AI governance, the framework conceptualizes trust as a composite construct comprising perceived competence, transparency, reliability, fairness, and accountability. A structured survey instrument was developed from validated measurement scales and administered to respondents with prior exposure to AI systems. Reliability and validity analyses confirm the robustness of the proposed constructs. Structural equation modeling results demonstrate that all five dimensions significantly influence overall trust, with transparency and fairness emerging as particularly strong predictors. Furthermore, trust significantly predicts users' reliance intentions, underscoring its central role in AI adoption and operational integration. The findings contribute to the literature by providing an integrated measurement model that bridges technical performance attributes with ethical and governance considerations. The study also offers practical implications for AI developers and policymakers seeking to design trustworthy systems and regulatory frameworks. By operationalizing trust as a measurable, multidimensional phenomenon, this research advances empirical approaches to responsible AI deployment and contributes to ongoing efforts to strengthen public confidence in intelligent systems.
