Abstract
Conspiracies are consequential and social, yet online conspiracy groups, composed of individuals (and bots) seeking to explain events or a system, have been neglected in sociology. We extract conspiracy talk about the COVID-19 pandemic on Twitter and use the biterm topic model (BTM) to provide a descriptive baseline for the discursive and social structure of online conspiracy groups. We find that individuals enter these communities through a gateway conspiracy theory before proceeding to extreme theories, and that humans adopt more diverse conspiracy theories than do bots. Event-history analyses show that individuals tweet new conspiracy theories, and tweet inconsistent theories simultaneously, when they face the threat posed by a rising COVID-19 case rate and receive attention from others via retweets. By contrast, bots are less responsive to rising case rates but more consistent, as they mainly tweet about how COVID-19 was deliberately created by sinister agents. These findings suggest that human beings are bricoleurs who use conspiracy theories to make sense of COVID-19, whereas bots are designed to create moral panic. In short, conspiracy talk by individuals is defensive in nature, whereas bots are on the offensive.
