Trust in AI
This research area investigates how users build, maintain, and recover trust in AI systems across different contexts. We examine the psychological processes underlying trust formation, damage, and recovery. Our work draws on behavioral experiments and user studies to inform the design of reliable, trustworthy AI systems.
Selected Works
Choung, H., David, P. and Ross, A., 2023. Trust in AI and its role in the acceptance of AI technologies. International Journal of Human–Computer Interaction, 39(9), pp.1727-1739.
Choung, H., David, P. and Ling, T.W., 2024. Acceptance of AI-powered facial recognition technology in surveillance scenarios: Role of trust, security, and privacy perceptions. Technology in Society, 79, p.102721.
David, P., Choung, H. and Seberger, J.S., 2024. Who is responsible? US public perceptions of AI governance through the lenses of trust and ethics. Public Understanding of Science, 33(5), pp.654-672.
Xu, S. and Li, W., 2024. A tool or a social being? A dynamic longitudinal investigation of functional use and relational use of AI voice assistants. New Media & Society, 26(7), pp.3912-3930.