Two datasets of defect reports labeled by a crowd of annotators of unknown reliability
Classifying software defects according to any defined taxonomy is not straightforward. To support automating the classification of software defects, two sets of defect reports were collected from public issue tracking systems of two different real domains. Due to the lack of a domai...
| Main Authors: | Jerónimo Hernández-González, Daniel Rodriguez, Iñaki Inza, Rachel Harrison, Jose A. Lozano |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | Elsevier, 2018-06-01 |
| Series: | Data in Brief |
| Online Access: | http://www.sciencedirect.com/science/article/pii/S2352340918303226 |
Similar Items
- Towards Transparency in Dermatology Image Datasets with Skin Tone Annotations by Experts, Crowds, and an Algorithm
  by: Groh, Matthew, et al.
  Published: (2023)
- A dataset of chest X-ray reports annotated with Spatial Role Labeling annotations
  by: Surabhi Datta, et al.
  Published: (2020-10-01)
- Leveraging the crowd for annotation of retinal images
  by: Leifman, George, et al.
  Published: (2017)
- CrowdFix: An Eyetracking Dataset of Real Life Crowd Videos
  by: Memoona Tahira, et al.
  Published: (2019-01-01)
- FabricSpotDefect: An annotated dataset for identifying spot defects in different fabric types (Mendeley Data)
  by: Farzana Islam, et al.
  Published: (2024-12-01)