Summary: This dissertation examines the work practices of a team of art experts at DNArt as they render (1) art data commensurable and legible to a similarity-matching algorithm and (2) algorithmic output legible to those with, or seeking, knowledge about art. Drawing on 13 months of fieldwork with the team, ethnographic interviews with team members and members of supporting teams, and two years of archival data from team documentation and online interactions, I provide a sociological account of the articulation work involved in the classification and annotation of art data at DNArt. The study of work practices with and around algorithms is an emerging topic relevant to current scholarship in science and technology studies, sociology, communications, information studies, and management. I argue that to understand what algorithms are and do, one has to understand both how they are mobilized in practice and how the assumptions embedded in their logic are enacted in the world.
First, I examine the calibration strategies of the team, demonstrating that standardization is an ongoing accomplishment composed of interactional tasks delegated by the artifacts, devices, and infrastructure that coordinate the team’s remote work. Yet standardization is also conditioned by team dynamics, which normalize difference and encourage productive friction in these interactions. Second, I examine the team’s repair work as its members encounter breakdowns between algorithmic output and their expert expectations of it. Repairing the algorithm both produces and depends on practical theories of how the algorithm works, which team members deploy to explain breakdowns. Translating art into algorithmic output requires collaboration and conflict among team members, integrating art expertise with a form of algorithmic expertise that they develop over time. I demonstrate the distributed, interactional process of repair, raising questions about the difficulty of measuring and assigning value to such work and its materials.
In my analysis of the team’s articulation work, I find that the similarity underlying the algorithm’s logic is enacted in the team’s everyday practices, when the algorithm delegates to the team the task of defining and regulating the meaning and use of ‘most similar’ across different task groups in the organization. ‘Most similar’—proposed by software engineers as a rational approach to the ambiguity presented by subjective data—trades one form of ambiguity for another. In effect, the algorithm creates a contested jurisdictional space in which the meaning of ‘most similar’ must be interpreted, negotiated, and regulated, allowing the team to make claims on the data based on the epistemological virtues traditionally associated with art history.
It is in this context that legitimacy strategies become significant. In the context of the art world, the narratives team members construct about their work emphasize the interpretive cultural accounting of their art expertise; in the context of the organization and industry, they emphasize instead the scientific rationality of the algorithm. These narratives reinforce and preserve the algorithm’s, and thus the firm’s, scientific as well as cultural credibility. At the same time, they reproduce the invisibility of the art experts’ quasi-technical work and algorithmic expertise within the organization. The broader implication is that the highly skilled knowledge work involved in some algorithmic systems reconfigures multiple orders of worth, repositioning those orders relative to one another as new tasks are delegated to ‘non-technical’ actors and subsequently rendered invisible.