Abstract
This study seeks to advance automated content analysis from consensus-oriented to coordination-oriented practices, thereby embracing diverse coding outputs and exploring the dynamics among divergent perspectives. As an exploratory investigation, we evaluate six GPT-4o configurations that analyze sentiment toward Biden and Trump in Fox News and MSNBC transcripts from the 2020 U.S. presidential campaign. By assessing each model’s alignment with partisan perspectives, we examine how partisan selective processing can be identified in LLM-Assisted Content Analysis (LACA). The findings indicate that LLM-based partisan-persona simulations reflect politically polarized standpoints across partisan groups, revealing a pronounced divergence in sentiment analysis between Democrat-aligned and Republican-aligned persona models. This pattern is evident in intercoder-reliability metrics, which are higher among same-partisan than cross-partisan persona-model pairs. Results also suggest that LLM partisan simulations exhibit stronger ideological biases when analyzing politically congruent content. This approach deepens the nuanced understanding of LLM outputs, strengthens the integrity of AI-driven social science research, and may enable simulations with real-world implications.