Experiment-driven improvements in Human-in-the-loop Machine Learning Annotation via significance-based A/B testing

Rafael Alfaro-Flores, José Salas-Bonilla, Loic Juillard, Juan Esquivel-Rodríguez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer review

2 Citations (Scopus)

Abstract

We present an end-to-end experimentation framework to improve the human annotation of data sets used in the training process of Machine Learning models. It covers the instrumentation of the annotation tool, the aggregation of metrics that highlight usage patterns and hypothesis-testing tools that enable the comparison of experimental groups, to decide whether improvements in the annotation process significantly impact the overall results. We show the potential of the protocol using two real-life annotation use cases.
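The hypothesis-testing step the abstract describes (comparing experimental annotation groups for a significant difference) can be sketched as a simple two-sample test. The paper's abstract does not specify the statistical procedure used, so the permutation test below, and the per-annotation timing data, are illustrative assumptions only:

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, num_rounds=10_000, seed=0):
    """Two-sided permutation test on the difference of means.

    Returns an approximate p-value for the null hypothesis that the
    two groups (e.g. annotation times under the control tool vs. a
    tooling change) are drawn from the same distribution.
    """
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(num_rounds):
        rng.shuffle(pooled)
        # Re-split the pooled sample at random and compare the gap
        # between the shuffled "groups" to the observed gap.
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / num_rounds

# Hypothetical seconds-per-annotation for a control group (A) and a
# group using an improved annotation tool (B):
control = [12.1, 11.8, 13.0, 12.5, 12.9, 11.6, 12.2, 13.1]
variant = [10.4, 10.9, 11.1, 10.2, 10.8, 11.3, 10.6, 10.7]
p = permutation_test(control, variant)
print(f"p-value ~ {p:.4f}")  # a small p-value suggests the change is not chance
```

A permutation test is a convenient default for annotation metrics because it makes no normality assumption about the per-annotator measurements; any standard two-sample test (e.g. Welch's t-test) would fit the same role in the protocol.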

Original language: English
Title of host publication: Proceedings - 2021 47th Latin American Computing Conference, CLEI 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (electronic): 9781665495035
DOI
State: Published - 2021
Published externally: Yes
Event: 47th Latin American Computing Conference, CLEI 2021 - Virtual, Cartago, Costa Rica
Duration: 25 Oct 2021 - 29 Oct 2021

Publication series

Name: Proceedings - 2021 47th Latin American Computing Conference, CLEI 2021

Conference

Conference: 47th Latin American Computing Conference, CLEI 2021
Country/Territory: Costa Rica
City: Virtual, Cartago
Period: 25/10/21 - 29/10/21
