No-audio multimodal speech detection in crowded social settings task at MediaEval 2018

Laura Cabrera-Quiros, Ekin Gedik, Hayley Hung

Research output: Contribution to journal › Conference article › peer-review

1 Citation (Scopus)

Abstract

This overview paper describes the automatic Human Behaviour Analysis (HBA) task at MediaEval 2018. In its first edition, the HBA task focuses on one of the most basic elements of social behavior: the estimation of speaking status. Task participants are provided with cropped videos of individuals interacting freely during a crowded mingle event, captured by an overhead camera. Each individual also wears a badge-like device, hung around the neck, that records tri-axial acceleration. The goal of the task is to automatically estimate whether a person is speaking using these two alternative modalities. In contrast to conventional speech detection approaches, no audio is used for this task. Instead, the automatic estimation system must exploit the natural human movements that accompany speech. The task seeks estimation performance competitive with audio-based systems by exploiting the multimodal aspects of the problem. Copyright held by the owner/author(s).
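To make the task setup concrete, the sketch below illustrates what a minimal acceleration-only entry could look like: windowed statistics computed over the tri-axial wearable signal feeding a binary speaking/not-speaking classifier. Everything here is an assumption for illustration only (synthetic data, a hypothetical 20 Hz sampling rate and 2-second windows, the feature set, and a scikit-learn logistic-regression classifier); it is not the task's official baseline nor the authors' method.

```python
# Illustrative sketch (NOT the official task baseline): estimating binary
# speaking status from tri-axial acceleration alone, using simple windowed
# statistics. Window length, hop, sampling rate, features, and classifier
# are all hypothetical choices made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def window_features(acc, win=40, hop=20):
    """Slice a (T, 3) acceleration stream into windows and compute
    per-axis mean, standard deviation, and mean absolute difference."""
    feats = []
    for start in range(0, len(acc) - win + 1, hop):
        w = acc[start:start + win]
        feats.append(np.concatenate([
            w.mean(axis=0),
            w.std(axis=0),
            np.abs(np.diff(w, axis=0)).mean(axis=0),
        ]))
    return np.asarray(feats)

# Synthetic stand-ins for one participant's wearable stream and labels.
rng = np.random.default_rng(0)
acc_train, acc_test = rng.normal(size=(2000, 3)), rng.normal(size=(500, 3))
X_train, X_test = window_features(acc_train), window_features(acc_test)
y_train = rng.integers(0, 2, size=len(X_train))   # 1 = speaking, 0 = not
y_test = rng.integers(0, 2, size=len(X_test))

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```

A real submission would replace the synthetic arrays with the accelerometer recordings and annotations released for the task, and could fuse video-derived motion features with the wearable features before classification.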

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2283
Publication status: Published - 2018
Event: 2018 Working Notes Proceedings of the MediaEval Workshop, MediaEval 2018 - Sophia Antipolis, France
Duration: 29 Oct 2018 – 31 Oct 2018
