The Effects of Citations and Confirmation Bias on Trust in Chatbots

Abstract

Large language model (LLM) chatbots often display citations to justify their answers, yet we do not fully understand when that evidence boosts user trust. We surveyed participants across political leanings, presenting chatbot responses that varied in stance and in whether citations were provided. Trust increased for moderate replies and for replies that aligned with participants' prior beliefs, but citations alone did not shift trust scores, highlighting the need to pair transparency features with careful framing on sensitive topics.

Publication
In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2025