TY - JOUR
T1 - The MatchNMingle Dataset
T2 - A Novel Multi-Sensor Resource for the Analysis of Social Interactions and Group Dynamics In-the-Wild During Free-Standing Conversations and Speed Dates
AU - Cabrera-Quiros, Laura
AU - Demetriou, Andrew
AU - Gedik, Ekin
AU - Van Der Meij, Leander
AU - Hung, Hayley
N1 - Publisher Copyright:
© 2010-2012 IEEE.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - We present MatchNMingle, a novel multimodal/multisensor dataset for the analysis of free-standing conversational groups and speed-dates in-the-wild. MatchNMingle leverages wearable devices and overhead cameras to record the social interactions of 92 people during real-life speed-dates, followed by a cocktail party. To our knowledge, MatchNMingle has the largest number of participants, the longest recording time, and the largest set of manual annotations for social actions available in this context in a real-life scenario. It consists of 2 hours of data from wearable acceleration, binary proximity, video, audio, personality surveys, frontal pictures, and speed-date responses. Participants' positions and group formations were manually annotated, as were social actions (e.g., speaking, hand gestures) for 30 minutes at 20 FPS, making it the first dataset to incorporate the annotation of such cues in this context. We present an empirical analysis of the performance of crowdsourcing workers against trained annotators in simple and complex annotation tasks, finding that although crowdsourcing is efficient for simple tasks, using crowdsourcing workers for more complex tasks such as social action annotation led to additional overhead and poor inter-annotator agreement compared to trained annotators (differences up to 0.4 in Fleiss' kappa coefficients). We also provide example experiments showing how MatchNMingle can be used.
KW - Multimodal dataset
KW - cameras
KW - f-formation
KW - mingle
KW - personality traits
KW - speed-dates
KW - wearable acceleration
UR - http://www.scopus.com/inward/record.url?scp=85049103438&partnerID=8YFLogxK
DO - 10.1109/TAFFC.2018.2848914
M3 - Article
AN - SCOPUS:85049103438
SN - 1949-3045
VL - 12
SP - 113
EP - 130
JO - IEEE Transactions on Affective Computing
JF - IEEE Transactions on Affective Computing
IS - 1
M1 - 8395003
ER -