Transparency and Explainability

We study how AI transparency and explainability shape user trust, comprehension, and decision satisfaction. Our research evaluates how different forms of explanation, such as procedural versus outcome transparency and performance history, affect user understanding and engagement. The goal is to identify effective strategies for making AI systems more interpretable without oversimplifying or distorting their functionality.

Selected Works

Mahmud, H., Islam, A. N., Luo, X. R., and Mikalef, P. (2024). Decoding algorithm appreciation: Unveiling the impact of familiarity with algorithms, tasks, and algorithm performance. Decision Support Systems, 179, 114168.