Gaming Algorithmic Hate-Speech Detection: Stakes, Parties, and Moves
A recent strand of research considers how algorithmic systems are gamed in everyday encounters. We add to this literature with a study that uses the game metaphor to examine a project where different organizations came together to create and deploy a machine learning model to detect hate speech from political candidates’ social media messages during the Finnish 2017 municipal election. Using interviews and forum discussions as our primary research material, we illustrate how the unfolding game is played out on different levels in a multi-stakeholder situation, what roles different participants have in the game, and how strategies of gaming the model revolve around controlling the information available to it. We discuss strategies that different stakeholders planned or used to resist the model, and show how the game is not only played against the model itself, but also with those who have created it and those who oppose it. Our findings illustrate that while “gaming the system” is an important part of gaming with algorithms, these games have other levels where humans play against each other, rather than against technology. We also draw attention to how deploying a hate-speech detection algorithm can be understood as an effort to not only detect but also preempt unwanted behavior.
Main Authors: Jesse Haapoja, Salla-Maaria Laaksonen, Airi Lampinen
Format: Article
Language: English
Published: SAGE Publishing, 2020-06-01
Series: Social Media + Society
Online Access: https://doi.org/10.1177/2056305120924778
Author Affiliations: Jesse Haapoja and Salla-Maaria Laaksonen, University of Helsinki, Finland; Airi Lampinen, Stockholm University, Sweden
ISSN: 2056-3051