Industry Responses to Computational Propaganda and Social Media Manipulation

22 November 2019

About the report

What have Internet companies done to combat the creation and spread of computational propaganda on their platforms and services? What do the leading players’ initiatives tell us about their coping strategies? How are their actions supported by the companies’ terms and policies for users and advertisers? And have there been any substantial policy changes as a result of the proliferation of computational propaganda? We examined platform initiatives and terms of service agreements of six Internet companies (Facebook, Google and YouTube, LinkedIn, Reddit, and Twitter) and found:

  • Immediately following the events of 2016, the platforms suggested that only a small percentage of posts or users were involved, and consequently took few self-regulatory actions. By the spring of 2017, however, attitudes seemed to have changed, and a flurry of initiatives was launched. To varying degrees, the platform companies have announced the following:
    • changes to the algorithms underlying newsfeeds or ad targeting
    • new partnerships with third-party fact-checkers
    • investment in and support for quality journalism (and the business of news organizations)
    • greater transparency about electoral advertising and internal content moderation practices
    • additional investments in both automated and human content moderation.
  • The initiatives taken so far suggest differences between the strategies of some of the largest platform companies (Facebook, Google and YouTube, and Twitter) as they search for effective, appropriate, and credible self-regulatory responses amid a firestorm of public and political opprobrium.
  • The platforms’ responses also seem to be heavily influenced by news events, such as the Cambridge Analytica scandal (Facebook), the reports of Holocaust denial sites featuring prominently in search results and influencing autocomplete (Google), and research into the impact of fake accounts and bots (Twitter). Official announcements often reference current events and reporting, and the influence of these events on companies’ actions suggests that their coping strategies are still emergent at best and reactive at worst. Large technology companies accustomed to driving change in other areas often appear to be on the back foot when it comes to combating computational propaganda.
  • Overall, no major changes to terms and policies directly related to computational propaganda were observed, suggesting that current terms and policies already provide ample scope to address these issues. The language of the terms and policies relating to users and advertisers tends to be broadly drawn, offering flexibility for creative interpretation and for different degrees and forms of enforcement. The major change indicated by the companies’ official blogs is that they have ramped up their enforcement activities, often through a combination of new automated efforts and increased investment in human content moderation.
  • Finally, it is apparent that existing, impending, and possible future regulation is having an impact on company policies and practices. New European Union (EU) measures such as the General Data Protection Regulation (GDPR), as well as numerous proposals for national legislation (covered by Bradshaw & Neudert (2018)), are expected to result in a raft of updates to terms and policies, as well as to platforms’ enforcement and content moderation activities.

This report is an adapted version of an earlier publication released by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE).

Emily Taylor & Stacie Hoffman, “Industry responses to computational propaganda and social media manipulation.” Working Paper 2019.4. Oxford, UK: Project on Computational Propaganda. demtech.oii.ox.ac.uk. 48 pp.
