
    22 May 2018

    A Robot Among People

    A number of our team members are off to Prague for the 2018 conference of the International Communication Association.

    Phil Howard will be speaking at the opening plenary, “Communication and the Evolution of Voice,” with Patricia Moy, Guobin Yang, Sheila Coronel, and Peter Baumgartner.

    Robert Gorwa and Doug Guilbeault will be representing the team and presenting a paper titled “What Should We Do About Political Automation? Challenges for Policy and Research,” which was awarded top student paper in Communication and Technology.

    Abstract

    Amidst widespread reports of digital influence operations during major elections, policymakers, scholars, and journalists have become increasingly interested in the political impact of social media ‘bots.’ Most recently, platform companies like Facebook and Twitter have been summoned to testify about bots as part of investigations into digitally-enabled foreign manipulation during the 2016 US Presidential election. Facing mounting pressure from both the public and legislators, these companies have been instructed to crack down on apparently malicious bot accounts. But as this article demonstrates, since the earliest writings on bots in the 1990s, there has been substantial confusion as to what exactly a ‘bot’ is and what it does. We argue that multiple forms of ambiguity are responsible for much of the complexity underlying contemporary bot-related policy, and that before successful policy interventions can be formulated, a more comprehensive understanding of bots, especially how they are defined and measured, will be needed. In this article, we provide a history and typology of different types of bots, provide clear guidelines to better categorize political automation and unpack the impact that it can have on contemporary technology policy, and outline the main challenges and ambiguities that will face both researchers and legislators concerned with bots in the future.

    Read the full paper on arXiv.


    Samantha Bradshaw, Lisa-Maria Neudert, and Phil Howard will be presenting a paper titled ‘Automating Suppression: How bots silence free speech and minority voices online’ on a fantastic panel with Nick Couldry, Daniel Kreiss and Shannon McGregor, Rasmus Nielsen and Sarah Anne Gartner, and David Karpf.

    Abstract

    The anonymity, borderless nature, and free flow of information online have been celebrated as attributes that contribute to a more inclusive public sphere. While the Internet has emerged as a focal point of political discourse, algorithmically amplified attacks on the freedom of speech seek to suppress the voices of women and of ethnic and cultural minorities. Orchestrated troll attacks and automated bot accounts target both group and individual actors with hate speech and harassment, spam, disinformation, and an ongoing flood of messages designed to sow discontent, fear, and withdrawal. The investigations surrounding interference in the 2016 US elections have brought to light that such attempts to silence and assault disadvantaged populations are closely connected to concerns about propaganda campaigns. Building on political communication literature, this paper examines the different strategies used by trolls and bots to silence and smother minority voices online, shedding light on the systematic targeting of female intellectuals, political activists, and people of color. Using social media data collected with the Project on Computational Propaganda at the Oxford Internet Institute during pivotal moments of public life, we will present an inventory of strategies pursued by bots and trolls. Qualitative interviews conducted with perpetrators, bot developers, victims, and social media operators will be used to highlight the implications for freedom of expression and the diversity of public discourse.


    Robert Gorwa, Bence Kollanyi, and Phil Howard will be presenting a paper titled ‘A Critical Analysis of Social Bot Detection Methodologies’ on a panel titled “Methodological Challenges to Studying Misinformation and Disinformation in Data-Driven Politics: Fake News, Bots, and Digital Campaigns,” chaired by Young Mie Kim.

    Abstract

    In the past year, a growing amount of attention has been paid to the role that automation, and politically motivated automation in particular, may be playing on social media platforms. Communications scholars have recently joined computer scientists in becoming interested in the effects of ‘bots’ (automated social media accounts) on platforms like Twitter. To detect bots, researchers have analyzed various aspects of Twitter-based communication, from simple sender-receiver relationships to complex behavioral patterns, used a wide array of tools and methods including machine learning, network analysis, and linguistic approaches, and deployed honeypot accounts. Almost all studies relating to bots hinge on the accurate detection and classification of bot accounts; however, detecting bots with publicly available data is a very difficult computational puzzle. In this paper, we provide the first comprehensive literature review of bot detection methodologies that critically assesses the strengths and limitations of each method. We introduce some major puzzles for current approaches, including the “ground-truth problem” (the lack of reliable training data for machine learning algorithms) and the “cyborg problem” (the issue of hybrid accounts that produce a combination of automated messages and human-curated content). We then suggest some avenues that could potentially be harnessed to yield more accurate detection in the future.
