Abstract
Social media platforms increasingly offer users control over their feeds, promising to reduce toxic discourse. This study tests how the mere offer of algorithmic control shapes user experiences. Respondents evaluated identical posts from a fictional platform, with half given the option to filter out toxic political content. Those given this choice reported greater platform satisfaction. However, those who opted for filtering rated content as more hostile than similar respondents who were not offered the choice. A follow-up experiment showed that exposure to only positive content did not reduce hostility ratings; it heightened them compared to exposure to both positive and negative content. These findings challenge the assumption that user autonomy will improve content experiences. Instead, algorithmic choice raises expectations, prompting users to scrutinize content more critically or attempt to “train” the algorithm to align with their preferences. Platforms must consider how expectations, not just content exposure, shape online experiences.