Abstract
We are witnessing a changing social media environment with new actors, new influencers, and new challenges. Considering the changes on social media platforms, the rise of bots, and the increased participation of state actors, this thematic collection addresses the methodological, topical, and ethical issues of networked influence. The Facebook-Cambridge Analytica scandal opened a new chapter in the analysis of what “influence” means in our current, complicated social media age. As discussed in the five papers stemming from the 2018 International Conference on Social Media & Society, this special issue introduces a wide array of interdisciplinary topics and approaches that highlight the rapid changes in social media environments, use, and users, with a focus on networked influence. In doing so, we attempt to answer some of the key research questions in this area: (1) how can influence (broadly defined) be identified and measured, (2) how can propaganda campaigns be tracked, (3) how can information be effectively disseminated and the public’s response to these information campaigns measured, (4) how do bots influence opinion trends on social media, and, finally, (5) how does the public frame privacy in a social media age?
Introduction
There is no denying that the use and misuse of social media have influence. In the wake of the Occupy movement and the Arab Spring, social media was largely celebrated as a tool to democratize power. Since that time, we have also witnessed how social media has been used to connect like-minded people (Gruzd & Sedo, 2012; Martin, Gruzd, & Howard, 2013), rally supporters for a cause (Freelon, McIlwain, & Clark, 2018; White, Castleden, & Gruzd, 2015), galvanize donations for individuals and communities in need (Jacobson & Mascaro, 2016), support teaching and learning (Gruzd et al., 2018a; Haythornthwaite et al., 2018), support diaspora communities (Kumar, 2012), and so on. Powered by networked influence and made possible by privately owned social media platforms, we are said to be in a sharing economy (Shade, 2018). We now own less and share more, giving and receiving crowdsourced content, adapting, innovating, remaking, and re-sharing original and remixed materials (Gauntlett, 2011; Jenkins, 2006). New attitudes, practices, and legal precedents about ownership, rights, and information evaluation are emerging with the growing use of social media.
While utopian and dystopian visions of new technologies are not new (Hemsley, Jacobson, Gruzd, & Mai, 2018), what we are witnessing now is a changed social media environment with new actors, new influencers, and new challenges. In 2019, we find ourselves in quite a different, some would argue darker, social media landscape. Hateful, anti-social speech, coordinated disinformation campaigns (i.e., “fake news”), and “false flag” operations by unknown actors now dominate the news cycle and compete for an opportunity to “go viral” (Chaudhry & Gruzd, 2019; Tandoc, Lim, & Ling, 2018). The democratizing forces we previously celebrated are now being exploited in ways we, as a society, did not foresee, with real consequences.
A case in point is the now-infamous Cambridge Analytica scandal that broke in 2018 to reveal that the British consulting firm had harvested and used the personal data from millions of Facebook users around the world. The mined data were used to create psychographic profiles for users and strategically influence them using targeted advertisements. These practices were undertaken for political purposes and without users’ knowledge or consent. As we have learned, various political campaigns leveraged these data to influence public opinion, including the 2016 Brexit vote (Cadwalladr, 2017) and the 2016 U.S. presidential election (Rosenberg, Confessore, & Cadwalladr, 2018). The scandal rightfully incited public outrage about the influence of social media and prompted discussion on the ethical use of social media data (Dubois, Gruzd, & Jacobson, 2018; Jacobson, Gruzd, & Hernandez-Garcia, 2019).
Following the Cambridge Analytica scandal, Facebook announced a significant change to data access, affecting not only Facebook but also the Facebook-owned Instagram and WhatsApp. The news was initially met with enthusiasm by some and viewed as a positive step toward more secure and ethical data practices. However, the changes further compounded the problem, diminishing transparency and the opportunities for third parties, such as academic researchers and journalists, to provide independent and critical oversight. Urging social media platforms to provide better access for academic use, a group of leading Internet researchers published an open letter arguing that the tightening of Application Programming Interfaces (APIs) on various social media platforms further blackboxes the platforms, consolidates the platforms’ power, and limits the type of research that scholars can engage in (Bruns, 2018). The open letter has since gathered hundreds of signatures from researchers around the world, an indication of growing concern in the social media research community.
This situation has sparked a reinvigorated critical conversation about data, access, influence, and the role of academic research in a turbulent social media age. The Facebook-Cambridge Analytica scandal does not represent the end of the conversation, but opens a new chapter on what influence means in our current, complicated social media age. With the rapid changes on social media platforms, the rise of bots, and the increased participation of state actors, this special issue addresses the methodological, topical, and ethical issues of networked influence. The 2012 call for proposals from what has come to be known as the International Conference on Social Media & Society stated, “With a multitude of voices all talking at once on social media, finding interesting and influential voices among the masses can be difficult.” Finding those voices is still a challenge, but other, more significant challenges have arisen when considering the influence of social media use. With this in mind, we believed the time was right to revisit the ideas of networked influence that we began exploring in 2012 (Gruzd & Wellman, 2014) and continued at the 2018 International Conference on Social Media & Society (Gruzd et al., 2018b). In particular, featured in this special thematic collection of Social Media + Society are five articles that seek to expand our theoretical and methodological understanding of current and future trends in social media research, with a focus on networked influence, by addressing the following:
How to identify and measure influence (Gräve, 2019),
How to track political interference campaigns (Bastos & Farkas, 2019),
How to effectively disseminate important information and measure the public’s response to information campaigns (Gurajala, Dhaniyala, & Matthews, 2019),
How do social bots influence opinion trends on social media, and how can organizations counter incorrect opinionated information (Yuan, Crooks, & Schuchard, 2019), and finally
How does the public frame privacy in a social media age? (Quinn, Epstein, & Moon, 2019).
The authors work in various countries, including Australia, Germany, Israel, the United States, Sweden, and the United Kingdom, which points to the power and strength of cross-national approaches when engaging in social media research. Social media is, by nature, an interdisciplinary and transdisciplinary research area, as evidenced by the scholarship in this special issue coming from fields such as Arts, Communication, Computer Sciences, Creative Industries, Data Sciences, Engineering, Marketing, and Sociology. The articles use various methods of inquiry, including topic modeling, semantic network analysis, surveys, tweet-based analysis, supervised and unsupervised machine learning, and social network analysis. These types of interdisciplinary collaborations have afforded interesting—and novel—research approaches, theories, and methods to be applied to the study of social media influence.
How to Identify and Measure Influence?
To begin, the first article, “What KPIs Are Key? Evaluating Performance Metrics for Social Media Influencers,” focuses on the emerging area of social media influencers. People with large social media audiences have influence and can, therefore, attract the attention of organizations looking to participate in paid engagements in exchange for the influencer promoting the brand. While marketers seek to collaborate with social media influencers, identifying influencers and evaluating their marketing campaigns is difficult. Gräve uses a mixed-methods approach, combining Instagram data from several influencer-marketing campaigns with an online survey, and finds that marketing professionals tend to rely on a social media influencer’s number of followers and number of interactions as success metrics. These simple metrics act as proxies for the number of attentive views. However, working in a system that relies on undisclosed algorithms, influencers may also game these widely understood metrics using both acceptable methods (such as raffles) and questionable practices (such as purchasing followers). Gräve’s research challenges the use of these simplistic quantitative metrics and instead proposes content-based metrics, including sentiment analysis, to serve as a proxy for the perceived quality of the influencer’s campaign content.
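To make the contrast concrete, here is a toy sketch in Python, not drawn from Gräve’s study: it compares a raw follower count with a content-based proxy that scores comment sentiment against a tiny hand-made lexicon. The data, lexicon, and function names are all hypothetical.

```python
# Hypothetical lexicon; a real pipeline would use a validated sentiment model.
POSITIVE = {"love", "great", "amazing", "helpful"}
NEGATIVE = {"fake", "spam", "boring", "ad"}

def sentiment_share(comments):
    """Fraction of comments whose positive words outnumber negative ones."""
    positive = 0
    for text in comments:
        words = set(text.lower().split())
        if len(words & POSITIVE) > len(words & NEGATIVE):
            positive += 1
    return positive / len(comments) if comments else 0.0

# Invented campaign data: a large audience, but mixed comment sentiment.
followers = 250_000
comments = ["love this, so helpful", "obvious ad, boring",
            "fake followers?", "great post"]

print(followers)                            # the simple reach metric
print(sentiment_share(comments))            # content-based quality proxy
```

The point of the sketch is that the two numbers can diverge: a purchased audience inflates the first metric but not the second.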
How to Track Political Interference Campaigns?
Beyond brands using social media to promote products, social media has also been weaponized by malicious actors to spread propaganda. In “‘Donald Trump Is My President!’ The Internet Research Agency Propaganda Machine,” Bastos and Farkas examine the Twitter accounts and content produced by the Internet Research Agency (IRA), a recently publicized “troll factory” in Russia. The research details the tactical approaches used to perpetuate disguised propaganda. Using an inductive typology based on profile descriptions, images, location, language, and tweeted content, the authors manually coded IRA user profiles and 6,377 tweets posted from 2012 to 2017. Based on the resulting dataset, the authors detail the differences between white, gray, and black propaganda, and find that most IRA accounts disseminated black propaganda. They also show how temporal patterns can identify short-, medium-, and long-term propaganda campaigns with different targets.
From a practical perspective, the research finds that the agency operated different user accounts to perform specific tasks that were skillfully targeted, including pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists. From a theoretical perspective, the results point to source classification from propaganda theory as a useful framework to understand IRA’s social media operations.
The authors conclude that reliable identification of disguised human-driven accounts requires collaboration with social media companies themselves. The weaponized misuse of social media as a fearmongering tool of misinformation has real implications that are concerning and point to the power of social media influence at scale.
How to Disseminate and Measure Information Campaigns on Important Public Health Issues and How Do Social Bots Influence Opinion Trends on Social Media?
There are many global issues that can benefit from leveraging social media to positively influence the public. Many previous communications strategies have failed to inform the public about various health concerns. The following two articles seek to address this by focusing on two critically important public health issues that pose a major risk for human health around the world: air quality and vaccines.
In “Understanding Public Response to Air Quality Using Tweet Analysis,” Gurajala, Dhaniyala, and Matthews identified three relevant hashtags relating to air quality and collected over 25 million pollution-related tweets, focusing on three major cities: Paris, London, and New Delhi. Using text classification, the tweets were sorted into one of four topics: health, climate, politics, or other. When compared with real air quality sensor data, the findings suggest that health concerns dominated the public’s response online when air quality was low. The results demonstrate the value of using unsupervised models to identify topics that researchers or policy makers may not initially consider, whereas supervised models can support deep analysis of specific pre-identified topics and help policy officials respond in a timely and appropriate way. The authors contend that social media can be used not only to disseminate information and influence public opinion but also as a tool to assess the societal response to important issues. There is a need to do more than just push information; there is also a need to measure the public’s response to these information campaigns.
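As an illustration only, the topic-bucketing step can be approximated with a simple keyword classifier; the study itself used trained supervised and unsupervised models. The keywords, example tweets, and function below are hypothetical.

```python
# Hypothetical keyword lists standing in for trained topic models.
TOPIC_KEYWORDS = {
    "health": {"asthma", "lungs", "breathing", "cough"},
    "climate": {"warming", "emissions", "carbon"},
    "politics": {"policy", "government", "minister", "mayor"},
}

def classify_tweet(text):
    """Assign the first topic whose keywords appear; fall back to 'other'."""
    words = set(text.lower().split())
    for topic, keywords in TOPIC_KEYWORDS.items():
        if words & keywords:
            return topic
    return "other"

# Invented pollution-related tweets, one per bucket.
tweets = [
    "smog in delhi making breathing hard today",
    "new emissions rules announced",
    "the mayor promised cleaner air",
    "can't see across the street",
]
print([classify_tweet(t) for t in tweets])
# → ['health', 'climate', 'politics', 'other']
```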
Like the challenges surrounding air quality and pollution, governing bodies have sought to disseminate information campaigns about the benefits of vaccines, yet anti-vaccine sentiment is growing. An infamous and discredited 1998 study perpetuated the pseudoscientific claim linking vaccines to autism; the study not only cost its author his medical license but also spread a fraudulent claim that has fueled an international health crisis. In “Examining Emergent Communities and Social Bots Within the Polarized Online Vaccination Debate in Twitter,” Yuan, Crooks, and Schuchard analyze the communicative patterns of vaccine-related discussions on Twitter. Using supervised machine learning, the authors classified over 260,000 users into three groups: anti-vaccination, neutral to vaccination, and pro-vaccination. Combining social network analysis with sentiment analysis, the authors analyze a retweet network of over 660,000 vaccine-related tweets posted after the 2015 California Disneyland measles outbreak. Significant bot activity was detected: bots produced 4.59% of all the tweets in the dataset.
In analyzing the cross-group interactions of pro- and anti-vaccine users, the authors find that anti-vaccine users tend to communicate with like-minded users, suggesting an echo chamber of anti-vaxxers that makes it difficult for health organizations to combat misinformation. The research expands our understanding of anti-vaccine activities on Twitter and the significance of identifying bots in the context of public health issues.
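The echo-chamber observation can be illustrated with a toy calculation, not taken from the paper: for each stance group, compute the share of its retweets that stay within the group. The edge list below is invented for illustration; a high within-group share for anti-vaccination users would signal the pattern the authors describe.

```python
from collections import defaultdict

# Invented (retweeter_group, original_author_group) pairs.
retweets = [
    ("anti", "anti"), ("anti", "anti"), ("anti", "anti"), ("anti", "pro"),
    ("pro", "pro"), ("pro", "neutral"), ("pro", "anti"),
]

def within_group_share(edges):
    """For each group, the fraction of its retweets aimed at its own group."""
    totals, same = defaultdict(int), defaultdict(int)
    for src, dst in edges:
        totals[src] += 1
        if src == dst:
            same[src] += 1
    return {g: same[g] / totals[g] for g in totals}

shares = within_group_share(retweets)
print(shares["anti"])  # 3 of 4 "anti" retweets stay in-group → 0.75
```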
Most research in this area relies on hand-tagged data, and the automated methods introduced in this article can be applied more widely. By understanding how information spreads within and across opinion groups, the research points to more effective ways to disseminate information about the benefits of vaccination, an approach that can also be extended to other public concerns. There is a need for evidence-based scientific research to be communicated in a way that connects with, and has influence on, the public.
How Does the Public Frame Privacy in a Social Media Age?
With the increased power of automated methods and sophisticated algorithms on social media, power imbalances have emerged and an understanding of how average users conceptualize privacy is of critical importance. In “We Care About Different Things: Non-Elite Conceptualizations of Social Media Privacy,” Quinn, Epstein, and Moon directly address the infamous adage of “privacy is dead” by asking “How do users of social media frame social media privacy?” and whether “privacy” as a concept is indeed something people care about.
In a participant-centric approach using cross-sectional survey data, the authors apply topic modeling and semantic network analysis to extend the theorization of privacy and to propose privacy-sensitive tools and policies for practitioners. The authors find that users emphasize dimensions of horizontal privacy, which refers to the relationships between social media users, over vertical privacy, which refers to the relationship between a user and institutions. The lack of attention to vertical privacy may reify existing power imbalances whereby the state and large private institutions know disproportionately more about individuals than individuals know about them. Further findings suggest that people who are less powerful cannot afford to disengage from the power exercised over them by other players. The researchers urge others to move away from treating privacy as a unidimensional construct and instead to analyze how privacy is perceived and enacted upon. The research calls for future researchers to systematically track perceptions, attitudes, and behavior toward privacy over time, in combination with their technological and cultural contexts.
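As a hypothetical illustration of the semantic-network idea, not the authors’ actual pipeline, one can build a word co-occurrence network from open-ended survey responses, so that frequently co-mentioned privacy terms become linked. The responses, stopword list, and variable names below are invented.

```python
from itertools import combinations
from collections import Counter

# Invented open-ended survey answers about social media privacy.
responses = [
    "i worry about friends sharing my photos",
    "companies tracking my data worries me",
    "friends tagging photos without asking",
]

STOPWORDS = {"i", "about", "my", "me", "without"}

# Count word pairs that co-occur within the same response; each pair
# is an edge in the semantic network, weighted by its frequency.
edges = Counter()
for text in responses:
    words = sorted(set(text.lower().split()) - STOPWORDS)
    edges.update(combinations(words, 2))

# In this toy data, the strongest edge links two horizontal-privacy terms.
print(edges.most_common(1))
```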
Conclusion
As many social media researchers can attest, the ever-evolving nature of the social media platforms—the changing affordances, algorithms, platforms, and access to social media—means that scholarship on social media use and users is not an easy task. The diverse and novel research methods in each article of this thematic collection showcase the utility of combining more traditional methods with new methodological approaches when analyzing the influence of social media.
In sum, social media has been and is still being used as a proxy to understand people’s attitudes toward important societal issues, whether they discuss brands (Gräve, 2019), elections (Bastos & Farkas, 2019), air pollution (Gurajala et al., 2019), or vaccine-related information (Yuan et al., 2019). Increasingly, when researchers and others try to derive insights from social media, they need to (1) recognize how bots may be influencing public discourse online (Yuan et al., 2019) and (2) account for users’ changing privacy expectations when collecting and working with social media data (Quinn et al., 2019). The joys of sharing and connecting on social media are tempered by concerns about the manipulation and exploitation of social media platforms.
The use of social media has influence. The misuse of social media has influence. And, academic research on the use and impact of social media also has influence. For better or for worse, the question of influence in a social media age is far from resolved and this special issue seeks to contribute to this growing and critically important area.
Acknowledgements
The authors would like to extend their appreciation to all of the reviewers of the International Conference on Social Media & Society who peer-reviewed these articles for presentation at the annual conference and their extended versions for consideration for this special issue. The authors would also like to thank Zizi Papacharissi, editor of the Social Media + Society journal, for her ongoing collaboration and support, as well as Rachel Kinnard, the editorial assistant at Social Media + Society journal, for assisting in moving the special issue through to publication.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
