Accuracy and Political Bias of News Source Credibility Ratings by Large Language Models

Kai-Cheng Yang, Filippo Menczer

Abstract

Search engines increasingly leverage large language models (LLMs) to generate direct answers, and AI chatbots now access the Internet for fresh data. As information curators for billions of users, LLMs must assess the accuracy and reliability of different sources. This paper audits eight widely used LLMs from three major providers—OpenAI, Google, and Meta—to evaluate their ability to discern credible and high-quality information sources from low-credibility ones. We find that while LLMs can rate most tested news outlets, larger models more frequently refuse to provide ratings due to insufficient information, whereas smaller models are more prone to hallucination in their ratings. For sources where ratings are provided, LLMs exhibit a high level of agreement among themselves (average Spearman’s ρ=0.81), but their ratings align only moderately with human expert evaluations (average ρ=0.59). Analyzing news sources with different political leanings in the US, we observe a liberal bias in the credibility ratings produced by all LLMs in their default configurations. Additionally, assigning partisan identities to LLMs consistently results in strong politically congruent bias in the ratings. These findings have important implications for the use of LLMs in curating news and political information.
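The core of the audit, prompting each model for a numeric credibility rating per news domain and comparing the resulting scores with expert benchmarks via Spearman’s ρ, can be illustrated with a minimal sketch. The prompt wording, domain names, and rating values below are hypothetical placeholders rather than the paper’s actual materials; only the scipy.stats.spearmanr call is a real library function.

```python
# Minimal sketch of the rating-agreement computation (hypothetical data).
from scipy.stats import spearmanr

# Hypothetical prompt template one might send to an LLM for each news domain.
PROMPT = (
    "Rate the credibility of the news source {domain} "
    "on a scale from 0 (not credible) to 1 (highly credible). "
    "Answer with a single number, or 'unknown' if you lack enough information."
)

# Hypothetical ratings for a handful of domains (not the paper's data).
domains = ["example-news.com", "daily-sample.org", "made-up-herald.net", "placeholder-post.com"]
llm_ratings    = [0.9, 0.4, 0.2, 0.7]   # parsed from model responses
expert_ratings = [0.8, 0.6, 0.1, 0.9]   # e.g., human-expert scores rescaled to [0, 1]

# Rank-based agreement between the model and the expert benchmark.
rho, p_value = spearmanr(llm_ratings, expert_ratings)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```

In practice, domains where a model declines to answer would be excluded before computing the correlation, which is consistent with the abstract’s note that agreement is measured only over sources for which ratings are provided.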
