This repository contains the multi-modal multi-party conversation dataset described in the paper Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding.
## Related Resources

- Paper: [Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding](https://arxiv.org/abs/2412.17295)
- Conversation Speaker Identification Model: CSI model
## Friends-MMC dataset
The structure of this repository is as follows:

```
datasets/
├── 5_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── 8_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── face_track_videos/
│   ├── s01e01/                 // season and episode name
│   │   ├── 001196-001272/      // each folder contains the cropped face tracks for one turn; the numbers in the folder name are the start and end frame numbers
│   │   │   ├── 0.avi 0.wav 1.avi 1.wav
│   │   ├── 001272-001375/
│   │   ├── ...
│   ├── s01e02/
│   ├── s01e03/
│   ├── ...
├── face_track_annotations/
│   ├── train/
│   │   ├── s01e01.pkl          // each pickle file stores metadata (frame number and bounding box in the original video for each frame) of the cropped face track videos
│   │   ├── s01e02.pkl
│   │   ├── ...
│   ├── test/                   // same format as the files in `train`, but for season 03 (the test set)
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
│   ├── test-noisy/             // same as `test`, but with some face tracks removed
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
├── raw_videos/                 // raw videos of the TV series
├── ubuntu_dialogue_corpus/     // the Ubuntu Dialogue Corpus [1], used for training the text module of the CSI model
├── README.md
```
[1] Hu, W., Chan, Z., Liu, B., Zhao, D., Ma, J., & Yan, R. (2019). GSN: A Graph-Structured Network for Multi-Party Dialogues. International Joint Conference on Artificial Intelligence.
## Download the dataset

The `face_track_videos/`, `face_track_annotations/`, `ubuntu_dialogue_corpus/`, `5_turns/images/` and `8_turns/images/` folders are stored in zip files. Unzip them after downloading:
```bash
unzip -q face_track_annotations.zip
unzip -q face_track_videos.zip
unzip -q ubuntu_dialogue_corpus.zip
cd 5_turns
unzip -q images.zip
cd ../8_turns
unzip -q images.zip
cd ..
```
The `raw_videos/` folder is also stored in zip files, split into parts. As the raw videos are not used in the experiments, downloading them is optional. To reassemble and unzip:

```bash
cat raw_videos.zip* > raw_videos.zip
unzip -q raw_videos.zip
```
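Alternatively, the whole repository can be fetched programmatically. Below is a minimal sketch using `huggingface_hub` (an assumption of this example; any download method works), after which the unzip commands above apply:

```python
# Minimal sketch: download the full dataset repository with huggingface_hub,
# then run the unzip commands above inside the returned directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="wangyueqian/friends_mmc",
    repo_type="dataset",
)
print(local_dir)  # path of the local snapshot containing the zip files
```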
## Data Format

### Metadata
Dialogue annotations are stored in `train-metadata.json`, `test-metadata.json` and `test-noisy-metadata.json`. Taking `5_turns/train-metadata.json` as an example, each dialogue is formatted as follows:
```json
[
  {
    "frame": "s01e01-001259", "video": "s01e01-001196-001272", "speaker": "monica",
    "content": "There's nothing to tell! He's just some guy I work with!",
    "faces": [[[763, 254, 807, 309], "carol"], [[582, 265, 620, 314], "monica"]]
  },
  {
    "frame": "s01e01-001323", "video": "s01e01-001272-001375", "speaker": "joey",
    "content": "C'mon, you're going out with the guy! There's gotta be something wrong with him!",
    "faces": [[[569, 175, 715, 371], "joey"]]
  },
  {...}, {...}, {...} // three more dicts like the above
]
```
- "frame" corresponds to the filename of the single frame of this turn sampled from the video (
5_turns/images/s01e01-001259.jpg
), - "content" is the textual content of this turn,
- "faces" is a list of face bounding boxes (x1, y1, x2, y2) and their corresponding speaker names in the image
5_turns/images/s01e01-001259.jpg
, - "video" corresponds to the filname of folder of face tracks (
s01e01/001196-001272
) in theface_track_videos/
folder, - "speaker" is the ground truth speaker annotation.
### Face tracks

The face tracks that appear in the video corresponding to each turn are stored in a folder under `face_track_videos/`. The folder name gives the start and end frame numbers of the track. For example, `s01e01/001196-001272` contains the face tracks for the turn from frame 1196 to frame 1272 of episode `s01e01`. Each face track is stored in two files: an `.avi` file containing the cropped face track video, and a `.wav` file containing the corresponding audio of those frames.
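To work with a track programmatically, the `.avi` file can be read frame by frame. Below is a minimal sketch using OpenCV (`opencv-python`, an assumed dependency; the track path is the example above):

```python
# Minimal sketch: count the frames of one cropped face track with OpenCV.
import cv2

cap = cv2.VideoCapture("face_track_videos/s01e01/001196-001272/0.avi")
n_frames = 0
while True:
    ok, frame = cap.read()  # frame is an HxWx3 BGR array when ok is True
    if not ok:
        break
    n_frames += 1
cap.release()
print(f"track contains {n_frames} cropped face frames")
```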
### Face track annotations

The face track annotations for each episode are stored as a Python dictionary. Taking `face_track_annotations/train/s01e01.pkl` as an example, each turn is formatted as follows:
"s01e01-001196-001272": [
{"face_track_id": 0, "name": "carol", "frame": [1251, 1252, ...], "bbox": [[762.22, 257.18, 805.59, 309.45], [762.29, 256.34, 806.16, 309.51], ...]}, // face track 1
{"face_track_id": 1, "name": "monica", "frame": [frame 1, frame 2, ...], "bbox": [bbox 1, bbox 2, ...]}, // face track 2
]
Each Python dictionary in this example marks the track of one face:

- `"face_track_id"` corresponds to the face track file name in `face_track_videos/`. In this example, the face track for "carol" is `face_track_videos/s01e01/001196-001272/0.avi` (and `0.wav`),
- `"frame"` is a list of frame numbers in the turn. Each frame number is >= the start frame number and <= the end frame number,
- `"bbox"` is a list of bounding boxes `(x1, y1, x2, y2)`. Each bounding box marks a face in its corresponding frame (e.g., the box `[762.22, 257.18, 805.59, 309.45]` of frame 1251 marks an appearance of Carol's face).
## Citation
If you use this work in your research, please cite:
```bibtex
@misc{wang2024friendsmmcdatasetmultimodalmultiparty,
      title={Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding},
      author={Yueqian Wang and Xiaojun Meng and Yuxuan Wang and Jianxin Liang and Qun Liu and Dongyan Zhao},
      year={2024},
      eprint={2412.17295},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.17295},
}
```