Project members Samantha Bradshaw, Lisa-Maria Neudert, and Robert Gorwa, along with project alumnus Sam Woolley, will be speaking at the Association of Internet Researchers (AoIR) conference in Montreal this week.
Samantha and Lisa-Maria will be speaking on a great panel about countermeasures and policy responses to media manipulation:
Panel: COUNTERMEASURES AND RESPONSES TO MISINFORMATION AND MANIPULATION ONLINE
Over the past few years, researchers, companies, regulatory authorities, and the wider public have become increasingly aware of attempts to manipulate politics online. International actors use bots, cyborgs, and sock puppets to disrupt information environments. Extremists use memes to spread hatred, encourage violence, and harass users on prominent platforms like Twitter; they also use less public channels like Discord to organize boots-on-the-ground activity, and they copy platforms (for example, creating a version of Patreon called Hatreon) to crowdfund their activities. The papers in this panel focus on attempts by governments, companies, journalists, and citizens to reduce the harm caused by these manipulation activities. Though focused on misinformation and manipulation online, these papers root their analyses in other examples of this kind of behavior, to which governments, companies, journalists, and citizens have been developing responses for decades. Thus, this panel analyzes current problems through a historical lens, cautioning against easy solutions while suggesting countermeasures that build on past successes.
Sam will be speaking on a panel about understanding "fake news" in 2018, covering various theoretical and methodological approaches.
MAKING MEANING FROM “FAKE NEWS” AND DISINFORMATION: CREATION, DISSEMINATION, AND SOLUTIONS TO THE PROBLEM
Since the 2016 election, "fake news" has emerged as a major concern for technology platforms, political activists, and journalists. The prevalence of hoaxes, disinformation, bots, and sensational false content on social media has given rise to a plethora of concerns involving the spread of damaging conspiracy theories, the ability of citizens to access accurate political information, and the manipulation of mainstream media by extremist groups and ideologues. However, the term "fake news" has been heavily politicized, used by partisan actors to refer to sources they disagree with or to call into question the credibility of particular outlets. It is an umbrella term that encompasses a wide array of widely varying problematic information, and it is often used to criticize a variety of practices related to the shift from broadcast to social news consumption, such as clickbait headlines, personalized news, and algorithmic visibility.
To researchers, the current hubbub over “fake news” brings up a set of questions. What can we learn from media and communications histories to inform this moment? How do we examine problematic information as part of an overall media and technological landscape? How can academics and researchers help frame the problem in ways that will lead to effective solutions? This panel showcases empirical scholarship on problematic information, using qualitative, historical, and ethnographic methods to investigate the history, present, and future of so-called “fake news”—calling into question some of the assumptions made in both popular and scholarly discourse.
Specifically, this panel focuses on how institutions construct and contribute to the spread of fake news (papers 1 and 2) and how people make meaning of "fake news" in partisan environments (papers 3 and 4). Each paper takes up empirical evidence to investigate popular claims about online disinformation, draws upon cutting-edge interdisciplinary scholarship, and offers a sociotechnical examination of the current information landscape.
Paper Session 16: Platform Governance and Moderation
Towards Fairness, Accountability, and Transparency in Platform Governance
Drawing inspiration from recent work on Fairness, Accountability, and Transparency (FAT) in machine learning, this paper explores a similar research agenda for fairness, accountability, and transparency in platform governance. The paper seeks to make two contributions: (a) provide the initial provocation for what could be termed FAT-platform studies, and (b) build on the extant platform governance literature (e.g., Gillespie 2010, 2015, 2017; DeNardis & Hackl, 2015) with an empirical, qualitative case study of Facebook policy practices.