Abstract: Social media platforms have been exploited to conduct election interference in recent years. In particular, the Russian-backed Internet Research Agency (IRA) has been identified as a key source of misinformation spread on Twitter prior to the 2016 U.S. presidential election. The goal of this research is to understand whether general Twitter users changed their behavior in the year following first contact from an IRA account. We compare the before and after behavior of contacted users to determine whether there were differences in their mean tweet count, the sentiment of their tweets, and the frequency and sentiment of tweets mentioning @realDonaldTrump or @HillaryClinton. Our results indicate that users overall exhibited statistically significant changes in behavior across most of these metrics, and that those users that engaged with the IRA generally showed greater changes in behavior.
DOI: 10.1145/3449164
Data: None Disclosed
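As a minimal illustration of the before/after design described in this abstract, the sketch below runs paired t-tests on per-user activity and sentiment. It assumes a hypothetical per-user table (file and column names are invented here) and is not the paper's actual pipeline.

```python
# Minimal sketch, assuming a hypothetical per-user table; not the paper's code.
import pandas as pd
from scipy import stats

users = pd.read_csv("contacted_users.csv")  # hypothetical file and columns

# Paired t-test on mean tweet counts in the year before vs. after first IRA contact.
t, p = stats.ttest_rel(users["tweets_before"], users["tweets_after"])
print(f"tweet count: t={t:.2f}, p={p:.4f}")

# The same comparison on mean tweet sentiment (e.g., compound scores in [-1, 1]).
t, p = stats.ttest_rel(users["sentiment_before"], users["sentiment_after"])
print(f"sentiment: t={t:.2f}, p={p:.4f}")
```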
Literature and Data
Coordinated Inauthentic Info Ops
Russia: Strategy/Tactics/Impact
2021
- Bastos, M., Mercea, D. & Goveia, F. (2021). Guy next Door and Implausibly Attractive Young Women: The Visual Frames of Social Media Propaganda, New Media & Society.
Abstract: This study combines data analysis with multilevel processing of visual communication to classify the visual frames of state-sponsored social media propaganda. We relied on Twitter’s Election Integrity data to sample five propaganda targets of the Internet Research Agency, including Russian and American partisan groups, and explored how their operations deviated from canonical state propaganda marked by symbols of national identity and heroic masculinity. The results show that the visual frames employed by the Internet Research Agency are designed to embody the vox populi with relatable, familiar, or attractive faces of ordinary people. The results also indicate that Internet Research Agency influence operations displayed cultural acuity and familiarity with the social identity of their targets, and that the visual narrative the agency crafted trafficked primarily in the tropes of regular guys or attractive young women. We discuss these findings and argue that state-sponsored propaganda has attuned to subcultural and visual affordances of social platforms.
DOI: 10.1177/14614448211026580
Data: None Disclosed
- Dutta, U., Hanscom, R., Zhang, J.S., Han, R., Lehman, T., Lv, Q., & Mishra, S. (2021). Analyzing Twitter Users' Behavior before and after Contact by Russia's Internet Research Agency, Proceedings of the ACM on Human-Computer Interaction, vol. 5, no. CSCW1, 1–24.
- Ehrett, C., Linvill, D., Smith, H., Warren, P.L., Bellamy, L., Moawad, M., Moran, O., & Moody, M. (2021). Inauthentic Newsfeeds and Agenda Setting in a Coordinated Inauthentic Information Operation, Social Science Computer Review.
Abstract: The 2015–2017 Russian Internet Research Agency (IRA)’s coordinated information operation is one of the earliest and most studied of the social media age. A set of 38 city-specific inauthentic “newsfeeds” made up a large, under-analyzed part of its English-language output. We label 1,000 tweets from the IRA newsfeeds and a matched set of real news sources from those same cities with up to five labels indicating the tweet represents a world in unrest and, if so, of what sort. We train a natural language classifier to extend these labels to 268k IRA tweets and 1.13 million control tweets. Compared to the controls, tweets from the IRA were 34% more likely to represent unrest, especially crime and identity danger, and this difference jumped to about twice as likely in the months immediately before the election. Agenda setting by media is well-known and well-studied, but this weaponization by a coordinated information operation is novel.
DOI: 10.1177/08944393211019951
Data: https://blog.twitter.com/en_us/topics/company/2018/2016-election-update
- Riedl, M. J., Strover, S., Cao, T., Choi, J. R., Limov, B., & Schnell, M. (2021). Reverse-Engineering Political Protest: The Russian Internet Research Agency in the Heart of Texas, Information, Communication & Society, 1–18.
Abstract: In the aftermath of the 2016 US presidential election, the public slowly began to grapple with the extent of Russian disinformation campaigns by the Internet Research Agency (IRA), elements of which were carried out on Facebook. Campaigns targeted people in the United States in many ways, including by publishing event pages on Facebook that were at times piggybacking on existing events, stoking fear, anger, and other emotions already on the rise in an increasingly tribal political climate. In the summer before the election, two particular Facebook pages – ‘Heart of Texas’ and the ‘United Muslims of America’ – published events advertising protests in front of the Islamic Da’wah Center, a mosque and religious center in downtown Houston. Our study reverse-engineers the IRA-inspired ‘Heart of Texas’ protests on 21 May 2016, using qualitative in-depth interviews with 14 individuals connected to these events – including counterprotest participants and local organizers, journalists who covered the protest, as well as representatives of local organizations. Results shed light on the role that news media played in protest coverage, the dynamics at the protest, issues around vetting information and the serendipity around how protests emerge and get organized on and off social media. This research documents and critically assesses the on-the-ground transactions such propaganda foments and offers insights into the role of social media in local protests.
DOI: 10.1080/1369118x.2021.1934066
Data: https://intelligence.house.gov/social-media-content/
- Park, S., Strover, S., Choi, J., & Schnell, M. (2021). Mind games: A temporal sentiment analysis of the political messages of the Internet Research Agency on Facebook and Twitter, New Media & Society, 1-22.
Abstract: This study examines the temporal dynamics of emotional appeals in Russian campaign messages used in the 2016 election. Communications on two giant social media platforms, Facebook and Twitter, are analyzed to assess emotion in message content and targeting that may have contributed to influencing people. The current study conducts both computational and qualitative investigations of the Internet Research Agency’s (IRA) emotion-based strategies across three different dimensions of message propagation: the platforms themselves, partisan identity as targeted by the source, and social identity in politics, using African American identity as a case. We examine (1) the emotional flows along the campaign timeline, (2) emotion-based strategies of the Russian trolls that masked left- and right-leaning identities, and (3) emotion in messages projecting to or about African American identity and representation. Our findings show sentiment strategies that differ between Facebook and Twitter, with strong evidence of negative emotion targeting Black identity.
DOI: 10.1177/14614448211014355
Data: None Disclosed
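A minimal sketch of the kind of temporal sentiment analysis the Park et al. abstract describes: score each message with an off-the-shelf VADER analyzer and track monthly averages per platform. The input layout (columns "platform", "date", "text") is assumed, not taken from the paper.

```python
# Sketch only: per-message VADER sentiment aggregated by platform and month.
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
msgs = pd.read_csv("ira_messages.csv", parse_dates=["date"])  # hypothetical input

# VADER's compound score runs from -1 (most negative) to +1 (most positive).
msgs["compound"] = msgs["text"].map(lambda t: analyzer.polarity_scores(t)["compound"])

# Monthly mean sentiment per platform, to trace emotional flows over the campaign.
monthly = msgs.groupby(["platform", pd.Grouper(key="date", freq="M")])["compound"].mean()
print(monthly.head())
```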
2020
- Alizadeh, M., Shapiro, J., Buntain, C., & Tucker, J. (2020). Content-based features predict social media influence operations, Science Advances, 22 Jul.
Abstract: We study how easy it is to distinguish influence operations from organic social media activity by assessing the performance of a platform-agnostic machine learning approach. Our method uses public activity to detect content that is part of coordinated influence operations based on human-interpretable features derived solely from content. We test this method on publicly available Twitter data on Chinese, Russian, and Venezuelan troll activity targeting the United States, as well as the Reddit dataset of Russian influence efforts. To assess how well content-based features distinguish these influence operations from random samples of general and political American users, we train and test classifiers on a monthly basis for each campaign across five prediction tasks. Content-based features perform well across period, country, platform, and prediction task. Industrialized production of influence campaign content leaves a distinctive signal in user-generated content that allows tracking of campaigns from month to month and across different accounts.
DOI: 10.1126/sciadv.abb5824
Data: None Disclosed
- Alsmadi, I., & O'Brien, M.J. (2020). How Many Bots in Russian Troll Tweets?, Information Processing & Management, 57(6), 102303.
Abstract: Increased usage of bots through the Internet in general, and social networks in particular, has many implications related to influencing public opinion. Mechanisms to distinguish humans from machines span a broad spectrum of applications and hence vary in their nature and complexity. Here we use several public Twitter datasets to build a model that can predict whether or not an account is a bot account based on features extracted at the tweet or the account level. We then apply the model to Twitter's Russian Troll Tweets dataset. At the account level, we evaluate features related to how often Twitter accounts are tweeting, as previous research has shown that bots are very active at some account levels and very low at others. At the tweet level, we noticed that bot accounts tend to sound more formal or structured, whereas real user accounts tend to be more informal in that they contain more slang, slurs, cursing, and the like. We also noted that bots can be created for a range of different goals (e.g., marketing and politics) and that their behaviors vary based on those distinct goals. Ultimately, for high bot-prediction accuracy, models should consider and distinguish among the different goals for which bots are created.
DOI: 10.1016/j.ipm.2020.102303
Data: None Disclosed
- Bail, C. A., Guay, B., Maloney, E., Combs, A., Hillygus, D. S., Merhout, F., & Volfovsky, A. (2020). Assessing the Russian Internet Research Agency’s impact on the political attitudes and behaviors of American Twitter users in late 2017, Proceedings of the National Academy of Sciences of the United States, 117(1), 243-250.
Abstract: There is widespread concern that Russia and other countries have launched social-media campaigns designed to increase political divisions in the United States. Though a growing number of studies analyze the strategy of such campaigns, it is not yet known how these efforts shaped the political attitudes and behaviors of Americans. We study this question using longitudinal data that describe the attitudes and online behaviors of 1,239 Republican and Democratic Twitter users from late 2017 merged with nonpublic data about the Russian Internet Research Agency (IRA) from Twitter. Using Bayesian regression tree models, we find no evidence that interaction with IRA accounts substantially impacted 6 distinctive measures of political attitudes and behaviors over a 1-mo period. We also find that interactions with IRA accounts were most common among respondents with strong ideological homophily within their Twitter network, high interest in politics, and high frequency of Twitter usage. Together, these findings suggest that Russian trolls might have failed to sow discord because they mostly interacted with those who were already highly polarized. We conclude by discussing several important limitations of our study—especially our inability to determine whether IRA accounts influenced the 2016 presidential election—as well as its implications for future research on social media influence campaigns, political polarization, and computational social science.
DOI: 10.1073/pnas.1906420116
Data: None Disclosed
- Beskow, D., & Carley, K. (2020). Characterization and Comparison of Russian and Chinese Disinformation Campaigns, Disinformation, Misinformation, and Fake News in Social Media, 63-81.
Abstract: While substantial research has focused on social bot classification, less computational effort has focused on repeatable bot characterization. Binary classification into “bot” or “not bot” is just the first step in social cybersecurity workflows. Characterizing the malicious actors is the next step. To that end, this paper will characterize data associated with state sponsored manipulation by Russia and the People’s Republic of China. The data studied here was associated with information manipulation by state actors; the accounts were suspended by Twitter, and all associated data was subsequently released to the public. Of the multiple data sets that Twitter released, we will focus on the data associated with the Russian Internet Research Agency and the People’s Republic of China. The goal of this paper is to compare and contrast these two important data sets while simultaneously developing repeatable workflows to characterize information operations for social cybersecurity.
DOI: 10.1007/978-3-030-42699-6_4
Data: Twitter Info Ops Archive
- Freelon, D., & Lokot, T. (2020). Russian Twitter disinformation campaigns reach across the American political spectrum, The Harvard Kennedy School (HKS) Misinformation Review, January.
Abstract: Evidence from an analysis of Twitter data reveals that Russian social media trolls exploited racial and political identities to infiltrate distinct groups of authentic users, playing on their group identities. The groups affected spanned the ideological spectrum, suggesting the importance of coordinated counter-responses from diverse coalitions of users.
DOI: 10.37016/mr-2020-003
Data: None Disclosed
- Freelon, D., Bossetta, M., Wells, C., Lukito, J., Xia, Y., & Adams, K. (2020). Black Trolls Matter: Racial and Ideological Asymmetries in Social Media Disinformation, Social Science Computer Review, 0894439320914853.
Abstract: The recent rise of disinformation and propaganda on social media has attracted strong interest from social scientists. Research on the topic has repeatedly observed ideological asymmetries in disinformation content and reception, wherein conservatives are more likely to view, redistribute, and believe such content. However, preliminary evidence has suggested that race may also play a substantial role in determining the targeting and consumption of disinformation content. Such racial asymmetries may exist alongside, or even instead of, ideological ones. Our computational analysis of 5.2 million tweets by the Russian government-funded “troll farm” known as the Internet Research Agency sheds light on these possibilities. We find stark differences in the numbers of unique accounts and tweets originating from ostensibly liberal, conservative, and Black left-leaning individuals. But diverging from prior empirical accounts, we find racial presentation—specifically, presenting as a Black activist—to be the most effective predictor of disinformation engagement by far. Importantly, these results could only be detected once we disaggregated Black-presenting accounts from non-Black liberal accounts. In addition to its contributions to the study of ideological asymmetry in disinformation content and reception, this study also underscores the general relevance of race to disinformation studies.
DOI: 10.1177/0894439320914853
Data: None Disclosed
- Golovchenko, Y., Buntain, C., Eady, G., Brown, M., & Tucker, J. (2020). Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube during the 2016 U.S. Presidential Election, The International Journal of Press/Politics.
Abstract: This paper investigates online propaganda strategies of the Internet Research Agency (IRA)—Russian “trolls”—during the 2016 U.S. presidential election. We assess claims that the IRA sought either to (1) support Donald Trump or (2) sow discord among the U.S. public by analyzing hyperlinks contained in 108,781 IRA tweets. Our results show that although IRA accounts promoted links to both sides of the ideological spectrum, “conservative” trolls were more active than “liberal” ones. The IRA also shared content across social media platforms, particularly YouTube—the second-most linked destination among IRA tweets. Although overall news content shared by trolls leaned moderate to conservative, we find troll accounts on both sides of the ideological spectrum, and these accounts maintain their political alignment. Links to YouTube videos were decidedly conservative, however. While mixed, this evidence is consistent with the IRA’s supporting the Republican campaign, but the IRA’s strategy was multifaceted, with an ideological division of labor among accounts. We contextualize these results as consistent with a pre-propaganda strategy. This work demonstrates the need to view political communication in the context of the broader media ecology, as governments exploit the interconnected information ecosystem to pursue covert propaganda strategies.
DOI: 10.1177/1940161220912682
Data: None Disclosed
- Linvill, D., & Warren, P. (2020). Engaging with others: how the IRA coordinated information operation made friends, The Harvard Kennedy School (HKS) Misinformation Review, April.
Abstract: We analyzed the Russian Internet Research Agency’s (IRA) 2015-2017 English-language information operation on Twitter to understand the special role that engagement with outsiders (i.e., non-IRA affiliated accounts) played in their campaign. By analyzing the timing and type of engagement of IRA accounts with non-IRA affiliated accounts, and the characteristics of the latter, we identified a three-phase life cycle of such engagement, which was central to how this IRA network operated. Engagement with external accounts was key to introducing new troll accounts, to increasing their prominence, and, finally, to amplifying the messages these external accounts produced.
DOI: 10.37016/mr-2020-011
Data: 10.7910/DVN/DI2GD1
- Linvill, D., & Warren, P. (2020). Troll Factories: Manufacturing Specialized Disinformation on Twitter, Political Communication, February.
Abstract: We document methods employed by Russia’s Internet Research Agency to influence the political agenda of the United States from September 9, 2009 to June 21, 2018. We qualitatively and quantitatively analyze Twitter accounts with known IRA affiliation to better understand the form and function of Russian efforts. We identified five handle categories: Right Troll, Left Troll, News Feed, Hashtag Gamer, and Fearmonger. Within each type, accounts were used consistently, but the behavior across types was different, both in terms of “normal” daily behavior and in how they responded to external events. In this sense, the Internet Research Agency’s agenda-building effort was “industrial”: mass-produced from a system of interchangeable parts, where each class of part fulfilled a specialized function.
DOI: 10.1080/10584609.2020.1718257
Data: https://github.com/patrick-lee-warren/IRA-Troll-Types
- Lukito, J. (2020). Coordinating a Multi-Platform Disinformation Campaign: Internet Research Agency Activity on Three U.S. Social Media Platforms, 2015 to 2017, Political Communication, 37(2), 238-255.
Abstract: Though nation-states have long utilized disinformation to influence foreign audiences, Russia’s 2015 to 2017 campaign against the U.S.—executed by the Internet Research Agency (IRA)—is unique in its complexity and distribution through the digital communication ecology. The following study explores IRA activity on three social media platforms, Facebook, Twitter, and Reddit, to understand how activities on these sites were temporally coordinated. Using a VAR analysis with Granger causality tests, results show that IRA Reddit activity Granger-caused IRA Twitter activity within a one-week lag. One explanation may be that the Internet Research Agency is trial ballooning on one platform (i.e., Reddit) to figure out which messages are optimal to distribute on other social media (i.e., Twitter).
DOI: 10.1080/10584609.2019.1661889
Data: https://github.com/jlukito/ira_3media
- Lukito, J., Suk, J., Zhang, Y., Doroshenko, L., Kim, S. J., Su, M. H., Xia, Y., Freelon, D., & Wells, C. (2020). The Wolves in Sheep’s Clothing: How Russia’s Internet Research Agency Tweets Appeared in U.S. News as Vox Populi, The International Journal of Press/Politics, 25, 196–216.
Abstract: The Russian-sponsored Internet Research Agency’s (IRA) use of social media to influence U.S. political discourse is undoubtedly troubling. However, scholarly attention has focused on social media, overlooking the role that news media within the country played in amplifying false, foreign messages. In this article, we examine articles in the U.S. news media system that quoted IRA tweets through the lens of changing journalism practices in the hybrid media system, focusing specifically on news gatekeepers’ use of tweets as vox populi. We find that a majority of the IRA tweets embedded in the news were vox populi. That is, IRA tweets were quoted (1) for their opinion, (2) as coming from everyday Twitter users, and (3) with a collection of other tweets holistically representing public sentiment. These findings raise concerns about how modern gatekeeping practices, transformed due to the hybrid media system, may also unintentionally let in unwanted disinformation from malicious actors.
DOI: 10.1177/1940161219895215
Data: None Disclosed
- Strudwicke, I. J., & Grant, W. J. (2020). #JunkScience: Investigating Pseudoscience Disinformation in the Russian Internet Research Agency Tweets, Public Understanding of Science, vol. 29, no. 5, 459–472.
Abstract: Recent research has identified anti-vaccination propaganda in the so-called Russian Troll Tweets strongly associated with the 2016 US Presidential election. This study builds on this: hypothesising that if vaccination content was found in the sample, the Russia Tweets would be likely to contain other science content, and perhaps, similar pseudo or anti-science messages. As well as vaccination, climate change, genetically modified organisms, Ebola, flat Earth beliefs (flat Earthism) and Zika were found in the Russia tweets. Genetically modified organisms and flat Earthism appear to have been camouflage content – tweeted at similar rates to other Twitter users – while climate change, Ebola, Zika and vaccination appear to have been emphasized beyond the background rate for strategic disinformation purposes.
DOI: 10.1177/0963662520935071
Data: None Disclosed
- Vargo, C. J., & Hopp, T. (2020). Fear, Anger, and Political Advertisement Engagement: A Computational Case Study of Russian-Linked Facebook and Instagram Content, Journalism & Mass Communication Quarterly, vol. 97, no. 3, 743–761.
Abstract: This study examined political advertisements placed by the Russian-based Internet Research Agency on Facebook and Instagram. Advertisements were computationally analyzed for four rhetorical techniques presumed to elicit anger and fear: negative identity-based language, inflammatory language, obscene language, and threatening language. Congruent with extant research on arousing emotional responses, advertising clickthrough rates were positively associated with inflammatory, obscene, and threatening language. Surprisingly, however, a negative relationship between clickthrough rate and the use of negative identity-based language was observed. Additional analyses showed that the advertisements were engaged with at rates that exceed industry benchmarks, and that clickthrough rates increased over time.
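As a toy illustration of the design Vargo & Hopp describe, the sketch below counts hits from a made-up inflammatory-language lexicon in each ad and correlates the hit rate with clickthrough. The lexicon, file, and column names are stand-ins, not the study's validated measures.

```python
# Illustrative only: toy lexicon features vs. clickthrough rate.
import pandas as pd
from scipy.stats import pearsonr

INFLAMMATORY = {"outrage", "disgrace", "betrayal", "corrupt"}  # toy lexicon

ads = pd.read_csv("ira_ads.csv")  # hypothetical: columns text, clicks, impressions
ads["ctr"] = ads["clicks"] / ads["impressions"]
ads["hit_rate"] = ads["text"].map(
    lambda t: sum(w in INFLAMMATORY for w in t.lower().split()) / max(len(t.split()), 1)
)

r, p = pearsonr(ads["hit_rate"], ads["ctr"])
print(f"inflammatory language vs. CTR: r={r:.2f}, p={p:.4f}")
```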
2019
- Bastos, M., & Farkas, J. (2019). “Donald Trump Is My President!”: The Internet Research Agency Propaganda Machine, Social Media + Society, 5(3), 1–13.
Abstract: This article presents a typological study of the Twitter accounts operated by the Internet Research Agency (IRA), a company specialized in online influence operations based in St. Petersburg, Russia. Drawing on concepts from 20th-century propaganda theory, we modeled the IRA operations along propaganda classes and campaign targets. The study relies on two historical databases and data from the Internet Archive’s Wayback Machine to retrieve 826 user profiles and 6,377 tweets posted by the agency between 2012 and 2017. We manually coded the source as identifiable, obfuscated, or impersonated and classified the campaign target of IRA operations using an inductive typology based on profile descriptions, images, location, language, and tweeted content. The qualitative variables were analyzed as relative frequencies to test the extent to which the IRA’s black, gray, and white propaganda are deployed with clearly defined targets for short-, medium-, and long-term propaganda strategies. The results show that source classification from propaganda theory remains a valid framework to understand IRA’s propaganda machine and that the agency operates a composite of different user accounts tailored to perform specific tasks, including openly pro-Russian profiles, local American and German news sources, pro-Trump conservatives, and Black Lives Matter activists.
DOI: 10.1177/2056305119865466
Data: None Disclosed
- Hjorth, F., & Adler-Nissen, R. (2019). Ideological Asymmetry in the Reach of Pro-Russian Digital Disinformation to United States Audiences, Journal of Communication, 69(2), 168–192.
Abstract: Despite concerns about the effects of pro-Russian disinformation on Western public opinion, evidence of its reach remains scarce. We hypothesize that conservative individuals will be more likely than liberals to be potentially exposed to pro-Russian disinformation in digital networks. We evaluate the hypothesis using a large data set of U.S.-based Twitter users, testing how ideology is associated with disinformation about the 2014 crash of the MH17 aircraft over eastern Ukraine. We find that potential exposure to disinformation is concentrated among the most conservative individuals. Moving from the most liberal to the most conservative individuals in the sample is associated with a change in the conditional probability of potential exposure to disinformation from 6.5% to 45.2%. We corroborate the finding using a second, validated data set on individual party registration. The results indicate that the reach of online, pro-Russian disinformation into U.S. audiences is distinctly ideologically asymmetric.
DOI: 10.1093/joc/jqz006
Data: None Disclosed
- Linvill, D. L., Boatwright, B., Grant, W., & Warren, P. L. (2019). The Russians are hacking my brain! Investigating Russia's Internet Research Agency Twitter tactics during the 2016 United States presidential campaign, Computers in Human Behavior, 99, 292-300.
Abstract: This study analyzed tweets from handles associated with the Russian Internet Research Agency in an effort to better understand the tactics employed by that organization on the social media platform Twitter in their attempt to influence U.S. political discourse and the outcome of the 2016 U.S. Presidential election. We sampled tweets from the month preceding the election and analyzed them to understand the qualitative nature of these tweets as well as quantitative differences between how types of IRA Twitter accounts communicated. Seven categories of tweet behavior were identified: attack left, support right, attack right, support left, attack media, attack civil institutions, and camouflage. While camouflage was the most common type of tweet (52.6%), descriptive analyses showed it was followed by attack left (12%) and support right (7%). A variety of quantitative differences were shown between how account types behaved.
DOI: 10.1016/j.chb.2019.05.027
Data: GitHub
- Stukal, D., Sanovich, S., Tucker, J., & Bonneau, R. (2019). For Whom the Bot Tolls: A Neural Networks Approach to Measuring Political Orientation of Twitter Bots in Russia, SAGE Open, April.
Abstract: Computational propaganda and the use of automated accounts in social media have recently become the focus of public attention, with alleged Russian government activities abroad provoking particularly widespread interest. However, even in the Russian domestic context, where anecdotal evidence of state activity online goes back almost a decade, no public systematic attempt has been made to dissect the population of Russian social media bots by their political orientation. We address this gap by developing a deep neural network classifier that separates pro-regime, anti-regime, and neutral Russian Twitter bots. Our method relies on supervised machine learning and a new large set of labeled accounts, rather than externally obtained account affiliations or orientation of elites. We also illustrate the use of our method by applying it to bots operating in Russian political Twitter from 2015 to 2017 and show that both pro- and anti-Kremlin bots had a substantial presence on Twitter.
DOI: 10.1177/2158244019827715
Data: None Disclosed
- Xia, Y., Lukito, J., Zhang, Y., Wells, C., Kim, S.J., & Tong, C. (2019). Disinformation, performed: self-presentation of a Russian IRA account on Twitter, Information, Communication & Society, 22(11), 1646-1664.
Abstract: How disinformation campaigns operate and how they fit into the broader social communication environment – which has been described as a ‘disinformation order’ [Bennett & Livingston, (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139] – represent critical, ongoing questions for political communication. We offer a thorough analysis of a highly successful disinformation account run by Russia’s Internet Research Agency: the so-called ‘Jenna Abrams’ account. We analyze Abrams’ tweets and other content such as blogposts with qualitative discourse analysis, assisted by quantitative content analysis and metadata analysis. This yields an in-depth understanding of how the IRA team behind the Abrams account presented this persona across multiple platforms and over time. Especially, we describe the techniques used to perform personal authenticity and cultural competence. The performance of personal authenticity was central to her persona building as a likeable American woman, whereas the performance of cultural competence enabled her to infiltrate American conservative communities with resonant messages. Implications for understanding disinformation processes, and how some aspects of the hybrid media system are especially vulnerable to hijacking by bad actors are discussed.
DOI: 10.1080/1369118X.2019.1621921
Data: None Disclosed
- Yan, G., Pegoraro, A., & Watanabe, N. (2019). Examining IRA Bots in the NFL Anthem Protest: Political Agendas and Practices of Digital Gatekeeping, Communication & Sport, May.
Abstract: With the understanding that the mass-participated mechanism of social media has led to an evolved lens of gatekeeping, this study incorporates the framework of digital gatekeeping to examine activities of Internet Research Agency (IRA) bots in the Twitter sphere of the National Football League anthem protest. To do so, the investigation employed data of IRA bots released from Clemson University. We conducted analysis by approaching bots’ gatekeeping activities from three perspectives: the overall behavioral patterns, the discourses and underpinning ideologies, and communicative tactics to sustain attention on Twitter. The results revealed that the majority of tweets came from the right trolls and left trolls. Meanwhile, the activity level of the bots displayed high sensitivity to emergent political events. Importantly, the two types of bots orchestrated a gatekeeping agenda that propelled antagonistic, hyperpartisan politics. The right-wing trolls’ tweets, in particular, propagated pro-White, malicious propaganda infiltrated with fake news. The results yield meaningful implications for digital gatekeeping, social media’s complex roles in knowledge production related to athlete protest, and sport’s engagement in broader political struggles in today’s mediated culture.
DOI: 10.1177/2167479519849114
Data: Linvill-Warren 538 Data
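Several entries in this list, including the one above, build on the Clemson troll-type data released via FiveThirtyEight (github.com/fivethirtyeight/russian-troll-tweets). The sketch below shows a first pass over that release; file and column names follow the public repo, but verify them against the repo before relying on this.

```python
# First-pass descriptives on the Clemson/FiveThirtyEight troll-tweet release.
import pandas as pd

tweets = pd.read_csv("IRAhandle_tweets_1.csv", parse_dates=["publish_date"])

# Tweet volume by troll type (RightTroll, LeftTroll, NewsFeed, HashtagGamer, ...).
print(tweets["account_category"].value_counts())

# Daily activity, to spot spikes tied to emergent political events.
daily = tweets.set_index("publish_date").resample("D").size()
print(daily.sort_values(ascending=False).head())
```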
2018
- Arif, A., Stewart, L., & Starbird, K. (2018). Acting the Part: Examining Information Operations Within #BlackLivesMatter Discourse, Proceedings of the ACM on Human-Computer Interaction, Nov.
Abstract: This research examines how Russian disinformation actors participated in a highly charged online conversation about the #BlackLivesMatter movement and police-related shootings in the USA during 2016. We first present high-level dynamics of this conversation on Twitter using a network graph based on retweet flows that reveals two structurally distinct communities. Next, we identify accounts in this graph that were suspended by Twitter for being affiliated with the Internet Research Agency, an entity accused of conducting information operations in support of Russian political interests. Finally, we conduct an interpretive analysis that consolidates observations about the activities of these accounts. Our findings have implications for platforms seeking to develop mechanisms for determining authenticity, by illuminating how disinformation actors enact authentic personas and caricatures to target different audiences. This work also sheds light on how these actors systematically manipulate politically active online communities by amplifying diverging streams of divisive content.
DOI: 10.1145/3274289
Data: None Disclosed
- Badawy, A., Lerman, K., & Ferrara, E. (2018). Who falls for online political manipulation? The case of the Russian interference campaign in the 2016 U.S. presidential election, In IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining.
Abstract: Social media, once hailed as a vehicle for democratization and the promotion of positive social change across the globe, are under attack for becoming a tool of political manipulation and spread of disinformation. A case in point is the alleged use of trolls by Russia to spread malicious content in Western elections. This paper examines the Russian interference campaign in the 2016 US presidential election on Twitter. Our aim is twofold: first, we test whether predicting users who spread trolls’ content is feasible in order to gain insight on how to contain their influence in the future; second, we identify features that are most predictive of users who either intentionally or unintentionally play a vital role in spreading this malicious content. We collected a dataset with over 43 million elections-related posts shared on Twitter between September 16 and November 9, 2016, by about 5.7 million users. This dataset includes accounts associated with the Russian trolls identified by the US Congress. Proposed models are able to very accurately identify users who spread the trolls’ content (average AUC score of 96%, using 10-fold validation). We show that political ideology, bot likelihood scores, and some activity-related account metadata are the most predictive features of whether a user spreads trolls’ content or not.
arXiv link
Data: None Disclosed
- Stewart, L., Arif, A., & Starbird, K. (2018). Examining Trolls and Polarization with a Retweet Network, MIS2, 2018, Marina Del Rey, CA, USA, 1–6.
Abstract: This research examines the relationship between political homophily and organized trolling efforts. This is accomplished by analyzing how Russian troll accounts were retweeted on Twitter in the context of the #BlackLivesMatter movement. This analysis shows that these conversations were divided along political lines, and that the examined trolling accounts systematically took advantage of these divisions. The findings of this research can help us better understand how to combat systematic trolling.
DOI: 10.475/123_4
Data: None Disclosed
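A toy sketch of the retweet-network approach used in this line of work: build a directed graph of retweet flows, detect communities, and check where known troll accounts sit. The edge list and troll set below are made-up inputs, not data from the paper.

```python
# Toy retweet-network analysis: nodes are accounts, directed edges run
# retweeter -> original author, and modularity communities expose the
# two-camp structure that trolling accounts exploited.
import networkx as nx

edges = [("alice", "bob"), ("carol", "bob"), ("dave", "erin"), ("frank", "erin")]
trolls = {"erin"}  # accounts later identified as IRA-affiliated (hypothetical)

G = nx.DiGraph()
G.add_edges_from(edges)

# Community detection via greedy modularity maximization (on the undirected view).
communities = nx.algorithms.community.greedy_modularity_communities(G.to_undirected())
for i, comm in enumerate(communities):
    print(f"community {i}: {sorted(comm)}; trolls inside: {sorted(comm & trolls)}")
```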