How the machine ‘thinks’: Understanding opacity in machine learning algorithms

This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is a key to determining which of a variety of technical and non-technical solutions could help to prevent harm.
Bibliographic Details
Main Author: Jenna Burrell
Format: Article
Language: English
Published: SAGE Publishing 2016-01-01
Series: Big Data & Society
Online Access: https://doi.org/10.1177/2053951715622512
ISSN: 2053-9517