No-Audio Multimodal Speech Detection Task at MediaEval 2019

Ekin Gedik, Laura Cabrera-Quiros, Hayley Hung

Research output: Contribution to journal › Conference article › peer-review

2 Citations (Scopus)

Abstract

This overview paper describes the No-Audio Multimodal Speech Detection task at MediaEval 2019. As in the first edition, held in 2018, the task focuses on estimating speaking status from multimodal data. Task participants are provided with cropped videos of individuals interacting freely during a crowded mingle event, captured by an overhead camera. Each individual's tri-axial acceleration throughout the event, captured with a single badge-like device hung around the neck, is also provided. The goal of the task is to automatically estimate whether a person is speaking using these two alternative modalities. In contrast to conventional speech detection approaches, no audio is used. Instead, the automatic estimation system must exploit the natural human movements that accompany speech. The task seeks to achieve estimation performance competitive with audio-based systems by exploiting the multimodal aspects of the problem.
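As a rough illustration of the kind of no-audio approach the abstract describes, the sketch below estimates binary speaking status from tri-axial acceleration alone, using simple statistics over sliding windows fed to a generic classifier. This is not the official task baseline; the window length, hop size, feature set, and variable names are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the official baseline): predict
# speaking / not-speaking per window from body acceleration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=60, hop=30):
    """acc: (T, 3) tri-axial acceleration; returns (n_windows, n_features)."""
    feats = []
    for start in range(0, len(acc) - win + 1, hop):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)              # overall movement intensity
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),           # per-axis mean and std
            [mag.mean(), mag.std(), mag.max()],      # magnitude summary stats
        ]))
    return np.asarray(feats)

# Hypothetical usage: acc_train / acc_test are (T, 3) arrays for one person,
# y_train holds one binary speaking label per training window.
# X_train = window_features(acc_train)
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# y_pred = clf.predict(window_features(acc_test))
```

A comparable pipeline could be run on per-frame movement features extracted from the cropped overhead video, with the two modalities fused at the feature or score level.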

Original language: English
Publication: CEUR Workshop Proceedings
Volume: 2670
Status: Published - 2019
Event: 2019 Working Notes of the MediaEval Workshop, MediaEval 2019 - Sophia Antipolis, France
Duration: 27 Oct 2019 – 30 Oct 2019
