Abstract
Advances in machine learning have led to the creation of natural language models that can mimic human writing in style and substance. Here we investigate the challenge that machine-generated content, such as that produced by the model GPT-3, presents to democratic representation by assessing the extent to which machine-generated content can pass as constituent sentiment. We conduct a field experiment in which we send both human-written and machine-generated letters (a total of 32,398 emails) to 7,132 state legislators. We compare legislative response rates for the human-written versus machine-generated constituency letters to gauge whether language models can be used to produce inauthentic constituency communications at scale. Legislators were only slightly less likely to respond to artificial intelligence (AI)-generated content than to human-written emails; the 2% difference in response rates was statistically significant but substantively small. Qualitative evidence sheds light on the potential perils that this technology presents for democratic representation, but also suggests potential techniques that legislators might employ to guard against misuses of language models.
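To make the reported comparison concrete, the sketch below shows one standard way a difference in response rates between two experimental arms could be tested: a two-proportion z-test. The counts used here are hypothetical placeholders chosen only to illustrate a roughly 2 percentage-point gap across 32,398 emails; they are not the study's actual data, and the study's own analysis may differ.

```python
# Illustrative sketch only: a two-proportion z-test of the kind that could be
# used to compare legislative response rates to human-written vs AI-generated
# emails. All counts below are hypothetical placeholders, not study results.
from math import sqrt, erfc

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z statistic, two-sided p-value) for H0: the two response rates are equal."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled response rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))                        # two-sided normal tail probability
    return z, p_value

# Hypothetical split: half the 32,398 emails in each arm, with the human-written
# arm answered about 2 percentage points more often than the AI-generated arm.
z, p = two_proportion_ztest(success_a=3_200, n_a=16_199,   # human-written letters
                            success_b=2_900, n_b=16_199)   # AI-generated letters
diff = 3_200 / 16_199 - 2_900 / 16_199
print(f"difference = {diff:.3f}, z = {z:.2f}, p = {p:.4f}")
```

With samples of this size, even a small absolute difference in response rates can be statistically significant, which is consistent with the abstract's characterization of the 2% gap as significant but substantively small.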
