Transparency and Explainability
We study how AI transparency and explainability shape user trust, comprehension, and decision satisfaction. Our research evaluates how different forms of explanation, such as procedural versus outcome transparency or disclosure of a system's performance history, affect user understanding and engagement. The goal is to identify effective strategies for making AI systems more interpretable without oversimplifying or distorting their functionality.