No amount of “AI” in content moderation will solve filtering’s prior-restraint problem

Bibliographic Details
Main Author: Emma J. Llansó
Format: Article
Language: English
Published: SAGE Publishing 2020-04-01
Series: Big Data & Society
Online Access: https://doi.org/10.1177/2053951720920686
Description
Summary: Contemporary policy debates about managing the enormous volume of online content have taken a renewed focus on upload filtering, automated detection of potentially illegal content, and other “proactive measures”. Often, policymakers and tech industry players invoke artificial intelligence as the solution to complex challenges around online content, promising that AI is a scant few years away from resolving everything from hate speech to harassment to the spread of terrorist propaganda. Missing from these promises, however, is an acknowledgement that proactive identification and automated removal of user-generated content raise problems beyond issues of “accuracy” and overbreadth: problems that will not be solved with more sophisticated AI. In this commentary, I discuss how the technical realities of content filtering stack up against the protections for freedom of expression in international human rights law. As policymakers and companies around the world turn to AI for communications governance, it is crucial that we recall why legal protections for speech have included presumptions against prior censorship, and consider carefully how proactive content moderation will fundamentally reshape the relationship between rules, people, and their speech.
ISSN: 2053-9517