No amount of “AI” in content moderation will solve filtering’s prior-restraint problem

Contemporary policy debates about managing the enormous volume of online content have taken a renewed focus on upload filtering, automated detection of potentially illegal content, and other “proactive measures”. Often, policymakers and tech industry players invoke artificial intelligence as the solution to complex challenges around online content, promising that AI is a scant few years away from resolving everything from hate speech to harassment to the spread of terrorist propaganda. Missing from these promises, however, is an acknowledgement that proactive identification and automated removal of user-generated content raises problems beyond issues of “accuracy” and overbreadth: problems that will not be solved with more sophisticated AI. In this commentary, I discuss how the technical realities of content filtering stack up against the protections for freedom of expression in international human rights law. As policymakers and companies around the world turn to AI for communications governance, it is crucial that we recall why legal protections for speech have included presumptions against prior censorship, and consider carefully how proactive content moderation will fundamentally reshape the relationship between rules, people, and their speech.

Bibliographic Details
Main Author: Emma J Llansó
Format: Article
Language: English
Published: SAGE Publishing, 2020-04-01
Series: Big Data & Society
ISSN: 2053-9517
Online Access: https://doi.org/10.1177/2053951720920686