Two datasets of defect reports labeled by a crowd of annotators of unknown reliability

Bibliographic Details
Main Authors: Jerónimo Hernández-González, Daniel Rodriguez, Iñaki Inza, Rachel Harrison, Jose A. Lozano
Format: Article
Language: English
Published: Elsevier 2018-06-01
Series: Data in Brief
Online Access:http://www.sciencedirect.com/science/article/pii/S2352340918303226
Description
Summary: Classifying software defects according to any defined taxonomy is not straightforward. To support automating the classification of software defects, two sets of defect reports were collected from public issue tracking systems in two different real-world domains. In the absence of a domain expert, the collected defects were categorized by a set of annotators of unknown reliability according to their impact, following IBM's orthogonal defect classification (ODC) taxonomy. Both datasets are prepared for solving the defect classification problem with techniques from the learning-from-crowds paradigm (Hernández-González et al. [1]). Two versions of both datasets are publicly shared. In the first version, the raw data is given: the text description of each defect together with the category assigned by each annotator. In the second version, the text of each defect has been transformed into a descriptive vector using text-mining techniques.
ISSN: 2352-3409
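
Usage sketch

A minimal sketch of how the raw version of one dataset might be used is given below. The file name (defect_reports_raw.csv), the 'text' column, and the 'annotator_*' columns are assumptions for illustration only; the actual export format and field names may differ. The sketch aggregates the crowd labels by simple majority vote as a baseline (learning-from-crowds methods would instead model each annotator's reliability) and rebuilds a descriptive vector representation of the defect text, roughly in the spirit of the second, vectorised version of the datasets.

# Minimal sketch for the crowd-labelled defect reports.
# Hypothetical assumptions (not taken from the data description):
#   - the raw version is exported as 'defect_reports_raw.csv'
#   - a 'text' column holds the defect description
#   - columns 'annotator_1', 'annotator_2', ... hold each annotator's
#     ODC impact category (entries may be missing).
from collections import Counter

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

reports = pd.read_csv("defect_reports_raw.csv")  # hypothetical file name
annotator_cols = [c for c in reports.columns if c.startswith("annotator_")]

def majority_vote(row):
    """Baseline consensus label: majority vote over the crowd annotations.
    Learning-from-crowds methods would instead model annotator reliability."""
    votes = Counter(row[c] for c in annotator_cols if pd.notna(row[c]))
    return votes.most_common(1)[0][0]

reports["consensus"] = reports.apply(majority_vote, axis=1)

# Roughly reproduce the second, vectorised version of the datasets:
# each defect description becomes a descriptive vector via text mining.
vectorizer = TfidfVectorizer(stop_words="english", max_features=1000)
X = vectorizer.fit_transform(reports["text"])

print(X.shape)
print(reports["consensus"].value_counts())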