Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding
Abstract
Multi-modal multi-party conversation (MMC) is a less studied yet important research topic because it fits real-world scenarios well and thus has potentially wider applications. Compared with traditional multi-modal conversations, MMC requires stronger character-centered understanding abilities, as many interlocutors appear in both the visual and the textual context. To facilitate the study of this problem, we present Friends-MMC, an MMC dataset that contains 24,000+ unique utterances paired with video context. To support character-centered understanding of the dialogue, we also annotate the speaker of each utterance as well as the names and bounding boxes of the faces that appear in the video. Based on Friends-MMC, we further study two fundamental MMC tasks: conversation speaker identification and conversation response prediction, both of which are multi-party in nature and take video or images as visual context. For conversation speaker identification, we demonstrate the shortcomings of existing methods such as pre-trained models, and propose a simple yet effective baseline that leverages an optimization solver to combine the context of the two modalities and achieve better performance. For conversation response prediction, we fine-tune generative dialogue models on Friends-MMC and analyze the benefits of speaker information, calling for more attention to modeling speaker information when understanding conversations. The code and dataset are publicly available at https://github.com/yellow-binary-tree/Friends-MMC.
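The abstract only names the optimization-solver baseline without giving its formulation, so the following is a minimal sketch of how such a solver might combine the two modalities, not the paper's exact algorithm. It assumes a visual model yields per-utterance scores `vis_scores[i][c]` (evidence from the video frames that utterance `i` is spoken by character `c`) and a textual model yields pairwise scores `txt_same_speaker[i][j]` (evidence that utterances `i` and `j` share a speaker); both score structures, the function name, and the brute-force search are illustrative assumptions.

```python
from itertools import product

def identify_speakers(vis_scores, txt_same_speaker, characters):
    """Hypothetical brute-force solver sketch (not the paper's exact method).

    vis_scores[i][c]       -- visual evidence that utterance i is spoken by c
    txt_same_speaker[i][j] -- textual evidence that utterances i and j (j > i)
                              share a speaker
    Returns the speaker assignment maximizing the combined objective.
    """
    n = len(vis_scores)
    best_assign, best_score = None, float("-inf")
    # Conversations are short and casts are small, so exhaustive search
    # over |characters|^n assignments is feasible for a toy example.
    for assign in product(characters, repeat=n):
        score = sum(vis_scores[i][assign[i]] for i in range(n))
        score += sum(
            txt_same_speaker[i][j]
            for i in range(n) for j in range(i + 1, n)
            if assign[i] == assign[j]
        )
        if score > best_score:
            best_assign, best_score = assign, score
    return list(best_assign)

# Toy example: three utterances, two candidate characters.
vis = [{"Ross": 0.9, "Rachel": 0.1},
       {"Ross": 0.4, "Rachel": 0.6},
       {"Ross": 0.2, "Rachel": 0.8}]
txt = [[0.0, -0.5, -0.5],
       [0.0,  0.0,  0.7],
       [0.0,  0.0,  0.0]]
print(identify_speakers(vis, txt, ["Ross", "Rachel"]))
```

The point of the sketch is the combination itself: the visual term anchors each utterance to the faces on screen, while the textual term encourages or discourages assigning the same speaker to related utterances, so the joint optimum can differ from either modality's independent prediction.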
Community
- Dataset: The Friends-MMC dataset, comprising over 24,000 utterances paired with video context, is annotated with speaker information, character names, and face bounding boxes.
- Tasks:
  - Speaker Identification: utilizing multi-modal context to enhance speaker identification performance (see the solver sketch after the abstract above).
  - Response Prediction: demonstrating the significance of incorporating speaker information for a dialogue agent (see the sketch after this list).
- Check the code: https://github.com/yellow-binary-tree/Friends-MMC
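For the response-prediction task, one common way to expose speaker information to a generative dialogue model is to prefix each utterance with its speaker's name when serializing the dialogue for fine-tuning. The sketch below illustrates this idea only; the `Name: text` format and the `build_prompt` helper are illustrative assumptions, not necessarily the serialization used in the paper.

```python
def build_prompt(turns, target_speaker):
    """Serialize a multi-party dialogue history for a generative model.

    turns: list of (speaker, utterance) pairs in conversation order.
    target_speaker: the character whose response the model should generate.
    The 'Name: text' format is a hypothetical choice for illustration.
    """
    history = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in turns)
    # End with the target speaker's name so the model completes their turn.
    return f"{history}\n{target_speaker}:"

turns = [("Joey", "How you doin'?"),
         ("Rachel", "I'm fine, Joey.")]
print(build_prompt(turns, "Chandler"))
# Joey: How you doin'?
# Rachel: I'm fine, Joey.
# Chandler:
```

Comparing a model fine-tuned on this speaker-prefixed format against one trained on the bare utterances is a straightforward way to measure the benefit of speaker information that the paper reports.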
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Multi-Party Supervised Fine-tuning of Language Models for Multi-Party Dialogue Generation (2024)
- StoryTeller: Improving Long Video Description through Global Audio-Visual Character Identification (2024)
- Generative Emotion Cause Explanation in Multimodal Conversations (2024)
- MuMu-LLaMA: Multi-modal Music Understanding and Generation via Large Language Models (2024)
- SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script (2024)
- Intent-Aware Dialogue Generation and Multi-Task Contrastive Learning for Multi-Turn Intent Classification (2024)
- CMATH: Cross-Modality Augmented Transformer with Hierarchical Variational Distillation for Multimodal Emotion Recognition in Conversation (2024)