Summary: Contemporary policy debates about managing the enormous volume of online content have renewed their focus on upload filtering, automated detection of potentially illegal content, and other “proactive measures”. Policymakers and tech industry players often invoke artificial intelligence as the solution to complex challenges around online content, promising that AI is a scant few years away from resolving everything from hate speech to harassment to the spread of terrorist propaganda. Missing from these promises, however, is an acknowledgement that proactive identification and automated removal of user-generated content raises problems beyond issues of “accuracy” and overbreadth, problems that will not be solved with more sophisticated AI. In this commentary, I discuss how the technical realities of content filtering stack up against the protections for freedom of expression in international human rights law. As policymakers and companies around the world turn to AI for communications governance, it is crucial that we recall why legal protections for speech have included presumptions against prior censorship, and consider carefully how proactive content moderation will fundamentally reshape the relationship between rules, people, and their speech.