Experiment-driven improvements in Human-in-the-loop Machine Learning Annotation via significance-based A/B testing

Rafael Alfaro-Flores, José Salas-Bonilla, Loic Juillard, Juan Esquivel-Rodríguez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

We present an end-to-end experimentation framework to improve the human annotation of data sets used in the training process of Machine Learning models. It covers the instrumentation of the annotation tool, the aggregation of metrics that highlight usage patterns, and hypothesis-testing tools that enable the comparison of experimental groups, in order to decide whether changes to the annotation process significantly impact the overall results. We show the potential of the protocol using two real-life annotation use cases.
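The significance-based A/B testing the abstract describes can be illustrated with a standard two-proportion z-test. This is only an illustrative sketch: the paper does not specify here which test statistic or annotation metric it uses, and the function name and group numbers below are hypothetical.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test (hypothetical helper).

    Compares, for example, the fraction of correct annotations produced
    by a control group (A) and an experimental group (B) of annotators.
    Returns (z_statistic, p_value).
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis p_a == p_b.
    p = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 400 labels per group; the experimental
# annotation UI yields 352 correct labels vs. 320 for the control.
z, p = two_proportion_ztest(320, 400, 352, 400)
print(f"z = {z:.3f}, p = {p:.4f}")
```

With these made-up counts the p-value falls below the conventional 0.05 threshold, so the framework would flag the annotation-tool change as a significant improvement rather than noise.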

Original language: English
Title of host publication: Proceedings - 2021 47th Latin American Computing Conference, CLEI 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781665495035
DOIs
State: Published - 2021
Externally published: Yes
Event: 47th Latin American Computing Conference, CLEI 2021 - Virtual, Cartago, Costa Rica
Duration: 25 Oct 2021 - 29 Oct 2021

Publication series

Name: Proceedings - 2021 47th Latin American Computing Conference, CLEI 2021

Conference

Conference: 47th Latin American Computing Conference, CLEI 2021
Country/Territory: Costa Rica
City: Virtual, Cartago
Period: 25/10/21 - 29/10/21

Keywords

  • Data warehouse and repository
  • Experimental Design
  • Human performance
  • Machine Learning
  • Statistical Methods

