Industry responses to computational propaganda and social media manipulation
22 November 2019
About the report
What have Internet companies done to combat the creation and spread of computational propaganda on their platforms and services? What do the leading players’ initiatives tell us about their coping strategies? How are their actions supported by the companies’ terms and policies for users and advertisers? And have there been any substantial policy changes as a result of the proliferation of computational propaganda? We examined platform initiatives and terms of service agreements of six Internet companies (Facebook, Google and YouTube, LinkedIn, Reddit, and Twitter) and found:
Immediately following the events of 2016, platforms suggested that only a low percentage of overall posts or users were involved, and therefore few self-regulatory actions were taken. But by the spring of 2017, attitudes seemed to have changed and a flurry of initiatives were launched. To various degrees, different platform companies have announced the following:
changes to the algorithms underlying newsfeeds or ad targeting
new partnerships with third-party fact-checkers
investment in and support for quality journalism (and the business of news organizations)
greater transparency about electoral advertising and internal content moderation practices
additional investments in both automated and human content moderation.
The initiatives that have been taken suggest some differences between the strategies of some of the largest platform companies (Facebook, Google and YouTube, and Twitter) as they search for effective, appropriate, and credible self-regulatory responses amid a firestorm of public and political opprobrium.
The platforms’ responses also seem to be heavily influenced by news events, such as the Cambridge Analytica scandal (Facebook), the reports of Holocaust denial sites featuring prominently in search results and influencing autocomplete (Google), and research into the impact of fake accounts and bots (Twitter). Official announcements often reference current events and reporting, and the influence of such coverage on companies’ actions suggests that their coping strategies are emergent at best and reactive at worst. Large technology companies accustomed to driving change in other areas often appear to be on the back foot when it comes to combating computational propaganda.
Overall, we observed no major changes to terms and policies directly related to computational propaganda, suggesting that existing terms and policies already provide ample scope to address these issues. The language of the terms and policies relating to users and advertisers tends to be widely drawn, offering flexibility for creative interpretation and for different degrees and forms of enforcement. The major change indicated by the companies’ official blogs is that they have ramped up their enforcement activities, often through a combination of new automated efforts and increased investment in human content moderation.
Finally, it is apparent that past, impending, and possible future regulation is having an impact on company policies and practices. New European Union (EU) measures such as the General Data Protection Regulation, as well as numerous proposals for national legislation (covered by Bradshaw & Neudert (2018)), are expected to result in a raft of updates to terms and policies, as well as to platforms’ enforcement and content moderation activities.
This report is an adapted version of an earlier publication released by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE).
Emily Taylor & Stacie Hoffman, “Industry responses to computational propaganda and social media manipulation.” Working Paper 2019.4. Oxford, UK: Project on Computational Propaganda. demtech.oii.ox.ac.uk. 48 pp.