Systematizing audit in algorithmic recruitment

Bibliographic Details
Main Authors: Hilliard, A. (Author), Kazim, E. (Author), Koshiyama, A.S. (Author), Polle, R. (Author)
Format: Article
Language: English
Published: MDPI 2021
Subjects:
Online Access: View Fulltext in Publisher
LEADER 02337nam a2200289Ia 4500
001 10.3390-jintelligence9030046
008 220427s2021 CNT 000 0 und d
020 |a 2079-3200 (ISSN) 
245 1 0 |a Systematizing audit in algorithmic recruitment 
260 0 |b MDPI  |c 2021 
856 |z View Fulltext in Publisher  |u https://doi.org/10.3390/jintelligence9030046 
520 3 |a Business psychologists study and assess relevant individual differences, such as intelligence and personality, in the context of work. Such studies have informed the development of artificial intelligence (AI) systems designed to measure individual differences. This has been capitalized on by companies who have developed AI-driven recruitment solutions that include aggregation of appropriate candidates (Hiretual), interviewing through a chatbot (Paradox), video interview assessment (MyInterview), and CV analysis (Textio), as well as estimation of psychometric characteristics through image-based (Traitify) and game-based assessments (HireVue) and video interviews (Cammio). However, driven by concern that such high-impact technology must be used responsibly, given the potential for the algorithms underlying these tools to produce unfair hiring outcomes, there is an active effort towards providing mechanisms of governance for such automation. In this article, we apply a systematic algorithm audit framework in the context of the ethically critical industry of algorithmic recruitment systems, exploring how audit assessments of AI-driven systems can be used to assure that such systems are deployed responsibly, in a fair and well-governed manner. We outline sources of risk in the use of algorithmic hiring tools, suggest the most appropriate opportunities for audits to take place, recommend ways to measure bias in algorithms, and discuss the transparency of algorithms. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. 
650 0 4 |a Accountability 
650 0 4 |a Bias 
650 0 4 |a Compliance 
650 0 4 |a Explainability 
650 0 4 |a Fairness 
650 0 4 |a Governance 
650 0 4 |a Privacy 
650 0 4 |a Recruitment 
650 0 4 |a Robustness 
650 0 4 |a Transparency 
700 1 |a Hilliard, A.  |e author 
700 1 |a Kazim, E.  |e author 
700 1 |a Koshiyama, A.S.  |e author 
700 1 |a Polle, R.  |e author 
773 |t Journal of Intelligence