---
license: mit
---

This repository contains the multi-modal multi-party conversation dataset described in the paper **Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding**.

## Related Resources
- Paper: [Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding](https://arxiv.org/abs/2412.17295)
- Conversation Speaker Identification Model: [CSI model](https://huggingface.co/datasets/wangyueqian/friends_mmc)


## Friends-MMC dataset
The structure of this repository is as follows:
```
datasets/
├── 5_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── 8_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── face_track_videos/
│   ├── s01e01/  // one folder per episode, named by season and episode
│   │   ├── 001196-001272   // one folder per turn, containing its cropped face tracks; the numbers in the folder name are the start and end frame numbers
│   │   │   ├── 0.avi  0.wav  1.avi  1.wav
│   │   ├── 001272-001375
│   │   ├── ...
│   ├── s01e02/
│   ├── s01e03/
│   ├── ...
├── face_track_annotations/
│   ├── train/
│   │   ├── s01e01.pkl  // one pickle file per episode, storing metadata of the cropped face tracks (frame numbers and per-frame bounding boxes in the original video)
│   │   ├── s01e02.pkl
│   │   ├── ...
│   ├── test/       // same format as the files in `train`, but for season 03 (the test set)
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
│   ├── test-noisy/     // same as `test`, but with some face tracks removed
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
├── raw_videos/   // raw videos of the TV series
├── ubuntu_dialogue_corpus/   // the Ubuntu Dialogue Corpus [1], used for training the text module of the CSI model
├── README.md
```

[1] Hu, W., Chan, Z., Liu, B., Zhao, D., Ma, J., & Yan, R. (2019). GSN: A Graph-Structured Network for Multi-Party Dialogues. International Joint Conference on Artificial Intelligence.

## Download the dataset
The `face_track_videos/`, `face_track_annotations/`, `ubuntu_dialogue_corpus/`, `5_turns/images/` and `8_turns/images/` folders are stored in zip files. Unzip them after downloading:
```shell
unzip -q face_track_annotations.zip
unzip -q face_track_videos.zip
unzip -q ubuntu_dialogue_corpus.zip
cd 5_turns
unzip -q images.zip
cd ../8_turns
unzip -q images.zip
cd ..
```
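
Alternatively, the whole repository can be fetched programmatically before unzipping; a minimal sketch using the `huggingface_hub` package, with the repo id `wangyueqian/friends_mmc` taken from the link above:
```python
# Sketch: download the dataset repository with huggingface_hub
# (repo id taken from the CSI model link above; requires `pip install huggingface_hub`).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="wangyueqian/friends_mmc", repo_type="dataset")
print("dataset downloaded to:", local_dir)
```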

The `raw_videos/` folder is also stored in zip files, split into multiple parts. As the raw videos are not used in the experiments, downloading them is optional. To reassemble and unzip:
```shell
cat raw_videos.zip* > raw_videos.zip
unzip -q raw_videos.zip
```

## Data Format
### Metadata
Dialogue annotations are stored in `train-metadata.json`, `test-metadata.json`, and `test-noisy-metadata.json`. Each example is a list of turns; the following is taken from `5_turns/train-metadata.json`:
```json
[
    {
        "frame": "s01e01-001259", "video": "s01e01-001196-001272", "speaker": "monica",
        "content": "There's nothing to tell! He's just some guy I work with!",
        "faces": [[[763, 254, 807, 309], "carol"], [[582, 265, 620, 314], "monica"]]
    },
    {
        "frame": "s01e01-001323", "video": "s01e01-001272-001375", "speaker": "joey",
        "content": "C'mon, you're going out with the guy! There's gotta be something wrong with him!", 
        "faces": [[[569, 175, 715, 371], "joey"]]
    },
    {...}, {...}, {...}     // three more dicts of the same form
]
```
- "frame" corresponds to the filename of the single frame of this turn sampled from the video (`5_turns/images/s01e01-001259.jpg`),
- "content" is the textual content of this turn,
- "faces" is a list of face bounding boxes (x1, y1, x2, y2) and their corresponding speaker names in the image `5_turns/images/s01e01-001259.jpg`,
- "video" corresponds to the filname of folder of face tracks (`s01e01/001196-001272`) in the`face_track_videos/` folder,
- "speaker" is the ground truth speaker annotation.

### Face tracks
The face tracks that appear in the video clip of each turn are stored in a subfolder of `face_track_videos/`, named by the start and end frame numbers of the clip. For example, `s01e01/001196-001272` contains the face tracks for the turn spanning frames 1196 to 1272 of episode `s01e01`. Each face track is stored as a pair of files: the `.avi` file is the cropped face track video, and the `.wav` file is the corresponding audio.
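
The frames of a cropped face track can be read with standard video tooling; a minimal sketch using OpenCV (`cv2` is an assumed dependency, not part of the dataset):
```python
# Sketch: read all frames of one cropped face track with OpenCV.
import cv2

cap = cv2.VideoCapture("face_track_videos/s01e01/001196-001272/0.avi")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
print(f"read {len(frames)} frames from the face track")
```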

### Face track annotations
The face track annotations for each episode are stored as a Python dictionary keyed by turn. The following example is taken from `face_track_annotations/train/s01e01.pkl`:
```json
"s01e01-001196-001272": [
    {"face_track_id": 0, "name": "carol", "frame": [1251, 1252, ...], "bbox": [[762.22, 257.18, 805.59, 309.45], [762.29, 256.34, 806.16, 309.51], ...]},        // face track 1
    {"face_track_id": 1, "name": "monica", "frame": [frame 1, frame 2, ...], "bbox": [bbox 1, bbox 2, ...]},        // face track 2
]
```

Each Python dictionary in the list describes one face track.
- "face_track_id" corresponds to the file names in `face_track_videos/`: in this example, the face track of "carol" is stored in `face_track_videos/s01e01/001196-001272/0.avi` (and `0.wav`),
- "frame" is a list of frame numbers within the turn; each frame number lies between the start and end frame numbers of the turn,
- "bbox" is a list of bounding boxes (x1, y1, x2, y2), one per frame in "frame" (e.g., the box [762.22, 257.18, 805.59, 309.45] at frame 1251 marks an appearance of Carol's face).


## Citation
If you use this work in your research, please cite:
```bibtex
@misc{wang2024friendsmmcdatasetmultimodalmultiparty,
      title={Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding}, 
      author={Yueqian Wang and Xiaojun Meng and Yuxuan Wang and Jianxin Liang and Qun Liu and Dongyan Zhao},
      year={2024},
      eprint={2412.17295},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.17295}, 
}
```