juliehunter committed on
Commit 098ed3c
1 Parent(s): 3dcb112

Update README.md

Files changed (1)
  1. README.md +36 -36
README.md CHANGED
@@ -36,7 +36,7 @@ configs:
36
  # Claire English Dialogue Dataset (CEDD) <br />*A collection of English dialogue transcripts*
37
 
38
  This is the first packaged version of the datasets used to train the English variants of the Claire family of large language models
39
- ([OpenLLM-France/Claire-7B-EN-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-EN-0.1)).
40
 
41
  The Claire English Dialogue Dataset (CEDD) is a collection of transcripts of English dialogues from various sources, including parliamentary proceedings, interviews, broadcasts, meetings, and free conversations.
42
  Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker, or a unique identifier if the speaker is unknown.
@@ -53,8 +53,8 @@ Each dialogue is split into speech turns, and each speech turn is labeled with t
53
  ## Dataset composition
54
 
55
  CEDD can be broken down into:
56
- * 962550 conversations in total (812705 in train, 11992 in test)
57
- * 20863917 speech turns in total (18576327 in train, 359527 in test)
58
  * around 864M words
59
 
60
  It is a collection of several independent datasets, classified by the types of conversations they contain. This categorization is designed to more evenly balance the influence of different styles of dialogue on model training and to facilitate future applications of CEDD for which certain types of dialogue might be more helpful than others.
@@ -95,7 +95,7 @@ For more information, you can look at the following documents:
95
  <td>200K</td>
96
  <td>2.7K</td>
97
  <td>93</td>
98
- <td><a href="https://anc.org/data/oanc/download/">Available for download and use for research and development, including commercial development.</a></td>
99
  </tr>
100
  <tr>
101
  <td><a href="https://anc.org/data/oanc/contents/#switchboard">Switchboard</a></td>
@@ -103,14 +103,14 @@ For more information, you can look at the following documents:
103
  <td>3M</td>
104
  <td>290K</td>
105
  <td>2320</td>
106
- <td><a href="https://catalog.ldc.upenn.edu/LDC97S62">LDC User Agreement for Non-Members.</a></td>
107
  </tr>
108
 
109
  <tr>
110
  <td colspan="6"><h4>Broadcast</h4></td></tr>
111
  <tr>
112
- <td><a href="https://huggingface.co/datasets/ccdv/mediasum">MediaSum</a></td>
113
- <td>MediaSum dataset for summarization</td>
114
  <td>720M</td>
115
  <td>13M</td>
116
  <td>458K</td>
@@ -120,42 +120,42 @@ For more information, you can look at the following documents:
120
  <tr>
121
  <td colspan="6"><h4>Meetings</h4></td></tr>
122
  <tr>
123
- <td><a href="https://groups.inf.ed.ac.uk/ami/corpus/">AMI</a></td>
124
  <td>The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings.</td>
125
  <td>712K</td>
126
  <td>75K</td>
127
  <td>139</td>
128
- <td><a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></td>
129
  </tr>
130
  <tr>
131
- <td><a href="https://groups.inf.ed.ac.uk/ami/icsi/">ICSI</a></td>
132
  <td>About 70 hours of meeting recordings.</td>
133
  <td>804K</td>
134
  <td>64K</td>
135
  <td>&lt;1K</td>
136
- <td><a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></td>
137
  </tr>
138
 
139
  <tr>
140
  <td colspan="6"><h4>Assistance</h4></td></tr>
141
  <tr>
142
- <td><a href="https://redialdata.github.io/website/">ReDial</a></td>
143
  <td>ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users recommend movies to each other.</td>
144
  <td>1.5M</td>
145
  <td>139K</td>
146
  <td>11K</td>
147
- <td><a href="https://creativecommons.org/licenses/by/4.0/">CC BY 4.0</a></td>
148
  </tr>
149
  <tr>
150
- <td><a href="https://github.com/facebookresearch/opendialkg">OpenDialKG</a></td>
151
  <td>OpenDialKG is a dataset of conversations between two crowdsourcing agents engaging in a dialog about a given topic.</td>
152
  <td>1M</td>
153
  <td>84K</td>
154
  <td>12K</td>
155
- <td><a href="https://creativecommons.org/licenses/by-nc/4.0/legalcode">CC-BY-NC-4.0</a></td>
156
  </tr>
157
  <tr>
158
- <td><a href="https://github.com/asappresearch/abcd">ABCD</a></td>
159
  <td>Action-Based Conversations Dataset.</td>
160
  <td>1.5M</td>
161
  <td>142K</td>
@@ -163,7 +163,7 @@ For more information, you can look at the following documents:
163
  <td><a href="https://github.com/asappresearch/abcd/blob/master/LICENSE">MIT</a></td>
164
  </tr>
165
  <tr>
166
- <td><a href="https://github.com/google/airdialogue">AirDialogue</a></td>
167
  <td>AirDialogue is a benchmark dataset for goal-oriented dialogue generation research.</td>
168
  <td>37M</td>
169
  <td>4.6M</td>
@@ -171,15 +171,15 @@ For more information, you can look at the following documents:
171
  <td><a href="https://github.com/google/airdialogue/blob/master/LICENSE">Apache License 2.0</a></td>
172
  </tr>
173
  <tr>
174
- <td><a href="https://huggingface.co/datasets/pfb30/multi_woz_v22">MULTIWOZ2_2</a></td>
175
  <td>Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics.</td>
176
  <td>1.9M</td>
177
  <td>143K</td>
178
  <td>10.4K</td>
179
- <td><a href="https://choosealicense.com/licenses/apache-2.0/">Apache License 2.0</a></td>
180
  </tr>
181
  <tr>
182
- <td><a href="https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset">MulDoGO</a></td>
183
  <td>Conversations from the airline, fastfood, finance, insurance, media, and software domains.</td>
184
  <td>10M</td>
185
  <td>892K</td>
@@ -190,7 +190,7 @@ For more information, you can look at the following documents:
190
  <tr>
191
  <td colspan="6"><h4>Free Chat</h4></td></tr>
192
  <tr>
193
- <td><a href="https://github.com/BYU-PCCL/chitchat-dataset">Chit-Chat</a></td>
194
  <td>Open-domain conversational dataset from the BYU Perception, Control & Cognition lab's Chit-Chat Challenge.</td>
195
  <td>2.3M</td>
196
  <td>7.1K</td>
@@ -203,14 +203,14 @@ For more information, you can look at the following documents:
203
  <td>1.2M</td>
204
  <td>102K</td>
205
  <td>13K</td>
206
- <td><a href="https://creativecommons.org/licenses/by-nc-sa/4.0/">CC BY-NC-SA 4.0</a></td>
207
  </tr>
208
 
209
 
210
  <tr>
211
  <td colspan="6"><h4>Misc</h4></td></tr>
212
  <tr>
213
- <td><a href="">British National Corpus (BNC)</a></td>
214
  <td>Collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English, both spoken and written, from the late twentieth century.</td>
215
  <td>110M</td>
216
  <td>663K</td>
@@ -264,7 +264,7 @@ All datasets were normalized in text files so that:
264
  * Conversations are separated by a single blank line.
265
  * Each line corresponds to a single speech turn.
266
  * Each line begins with a speaker label of the form "`[***:]`".
267
- * When speaker names are anonymized or otherwise unknown, speakers are distinguished by numbers in the following format: "**`[speaker001:]`**", "**`[speaker002:]`**", … <br /> Otherwise, speakers are labeled with their names or roles, e.g. "`[Paul:]`", "`[François Mitterrand:]`", "`[M. le président:]`".
268
  * There are no parentheses: special annotations are always between square brackets.
269
  * Common tags include:
270
  * "**`[PII]`**": Personally Identifiable Information (anonymized name...)
@@ -311,32 +311,32 @@ You should also provide citations for all of the original corpora. They are list
311
  * **Switchboard**
312
  * John J. Godfrey, Edward Holliman (1993). [Switchboard-1 Release 2](https://catalog.ldc.upenn.edu/LDC97S62), Linguistic Data Consortium (LDC), Philadelphia.
313
  * **MediaSum**
314
- * Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael (2021). [MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization](https://aclanthology.org/2021.naacl-main.474/). _arXiv preprint arXiv:2103.06410_.
315
  * **AMI**
316
- * I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, M.Kronenthal, G. Lathoud, M. Lincoln, A. Lisowska, W. Post, D. Reidsma, and P. Wellne (2005). [The AMI meeting corpus: a pre-announcement](https://dl.acm.org/doi/10.1007/11677482_3), _Machine Learning for Multimodal Interaction_, Edinburgh, UK.
317
  * **ICSI**
318
- * Virgile Rennard, Guokan Shang, Julie Hunter, Michalis Vazirgiannis (2023). [Abstractive Meeting Summarization: A Survey](https://arxiv.org/abs/2208.04163). _TACL_, Cambridge, MA.
319
  * **ReDial**
320
- * Li, Raymond and Kahou, Samira Ebrahimi and Schulz, Hannes and Michalski, Vincent and Charlin, Laurent and Pal, Chris (2018). [Towards Deep Conversational Recommendations](https://link). _NeurIPS 2018_, Montreal.
321
  * **OpenDialKG**
322
- * Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba (2019). [OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs](https://aclanthology.org/P19-1081/). _ACL_, Florence, Italy.
323
  * **ABCD**
324
- * Derek Chen, Howard Chen, Yi Yang, Alexander Lin, Zhou Yu (2021). [Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems](https://aclanthology.org/2021.naacl-main.239/). _NAACL_, Online.
325
  * **AirDialogue**
326
- * Wei Wei, Quoc Le, Andrew Dai, Jia Li (2018). [paper title](https://aclanthology.org/D18-1419/). _EMNLP_, Brussels, Belgium.
327
  * **MULTIWOZ2_2**
328
- * Eric, Mihail and Goel, Rahul and Paul, Shachi and Sethi, Abhishek and Agarwal, Sanchit and Gao, Shuyag and Hakkani-Tur, Dilek (2019). [MultiWOZ 2.1: Multi-Domain Dialogue State Corrections and State Tracking Baselines](https://arxiv.org/abs/2007.12720). _arxiv_.
329
  * **MultiDoGO**
330
- * Denis Peskov, Nancy Clarke, Jason Krone, Brigi Fodor, Yi Zhang, Adel Youssef, Mona Diab (2019). [Multi-Domain Goal-Oriented Dialogues (MultiDoGO): Strategies toward Curating and Annotating Large Scale Dialogue Data](https://www.aclweb.org/anthology/D19-1460). _EMNLP_, Hong Kong, China.
331
  * **Chit-Chat**
332
- * Myers, Will and Etchart, Tyler and Fulda, Nancy (2020). [Conversational Scaffolding: An Analogy-based Approach to Response Prioritization in Open-domain Dialogs](https://www.scitepress.org/Papers/2020/89399/89399.pdf).
333
  * **DailyDialog**
334
- * Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu (2017). [DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset](https://aclanthology.org/I17-1099/). _IJCNLP_, Taipei, Taiwan.
335
  * **British National Corpus (BNC)**
336
  * [The British National Corpus online](http://www.natcorp.ox.ac.uk/).
337
 
338
 
339
- Some of the listed datasets were collected from the DialogStudio compilation, which is also to be cited:
340
  * **DialogStudio**
341
  * Zhang, Jianguo and Qian, Kun and Liu, Zhiwei and Heinecke, Shelby and Meng, Rui and Liu, Ye and Yu, Zhou and Savarese, Silvio and Xiong, Caiming (2023). [DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI](https://arxiv.org/abs/2307.10172). _arXiv preprint arXiv:2307.10172_.
342
 
 
36
  # Claire English Dialogue Dataset (CEDD) <br />*A collection of English dialogue transcripts*
37
 
38
  This is the first packaged version of the datasets used to train the English variants of the Claire family of large language models
39
+ ([OpenLLM-France/Claire-7B-EN-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-EN-0.1)). (A related French dataset can be found [here](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1).)
40
 
41
  The Claire English Dialogue Dataset (CEDD) is a collection of transcripts of English dialogues from various sources, including parliamentary proceedings, interviews, broadcasts, meetings, and free conversations.
42
  Each dialogue is split into speech turns, and each speech turn is labeled with the name of the speaker, or a unique identifier if the speaker is unknown.
 
53
  ## Dataset composition
54
 
55
  CEDD can be broken down into:
56
+ * 962,550 conversations in total (812,705 in train, 11,992 in test)
57
+ * 20,863,917 speech turns in total (18,576,327 in train, 359,527 in test)
58
  * around 864M words
59
 
60
  It is a collection of several independent datasets, classified by the types of conversations they contain. This categorization is designed to more evenly balance the influence of different styles of dialogue on model training and to facilitate future applications of CEDD for which certain types of dialogue might be more helpful than others.
 
95
  <td>200K</td>
96
  <td>2.7K</td>
97
  <td>93</td>
98
+ <td><a href="https://anc.org/data/oanc/download/">Available for download and use for research and development, including commercial development</a></td>
99
  </tr>
100
  <tr>
101
  <td><a href="https://anc.org/data/oanc/contents/#switchboard">Switchboard</a></td>
 
103
  <td>3M</td>
104
  <td>290K</td>
105
  <td>2320</td>
106
+ <td><a href="https://catalog.ldc.upenn.edu/LDC97S62">LDC User Agreement for Non-Members</a></td>
107
  </tr>
108
 
109
  <tr>
110
  <td colspan="6"><h4>Broadcast</h4></td></tr>
111
  <tr>
112
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio">MediaSum</a> <a href="https://huggingface.co/datasets/ccdv/mediasum">(GitHub)</a></td>
113
+ <td>MediaSum dataset for summarization. A collection of transcripts of CNN and NPR interviews with short summaries.</td>
114
  <td>720M</td>
115
  <td>13M</td>
116
  <td>458K</td>
 
120
  <tr>
121
  <td colspan="6"><h4>Meetings</h4></td></tr>
122
  <tr>
123
+ <td><a href="https://github.com/guokan-shang/ami-and-icsi-corpora">AMI</a> <a href="https://groups.inf.ed.ac.uk/ami/corpus/">(project page)</a></td>
124
  <td>The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings.</td>
125
  <td>712K</td>
126
  <td>75K</td>
127
  <td>139</td>
128
+ <td><a href="https://groups.inf.ed.ac.uk/ami/corpus/">CC BY 4.0</a></td>
129
  </tr>
130
  <tr>
131
+ <td><a href="https://github.com/guokan-shang/ami-and-icsi-corpora">ICSI</a> <a href="https://groups.inf.ed.ac.uk/ami/icsi/">(project page)</a></td>
132
  <td>About 70 hours of meeting recordings.</td>
133
  <td>804K</td>
134
  <td>64K</td>
135
  <td>&lt;1K</td>
136
+ <td><a href="https://groups.inf.ed.ac.uk/ami/icsi/">CC BY 4.0</a></td>
137
  </tr>
138
 
139
  <tr>
140
  <td colspan="6"><h4>Assistance</h4></td></tr>
141
  <tr>
142
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/conversational_recommendation/Redial">ReDial</a> <a href="https://redialdata.github.io/website/">(GitHub)</a></td>
143
  <td>ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users recommend movies to each other.</td>
144
  <td>1.5M</td>
145
  <td>139K</td>
146
  <td>11K</td>
147
+ <td><a href="https://redialdata.github.io/website/">CC BY 4.0</a></td>
148
  </tr>
149
  <tr>
150
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/conversational_recommendation/OpenDialKG">OpenDialKG</a> <a href="https://github.com/facebookresearch/opendialkg">(GitHub)</a></td>
151
  <td>OpenDialKG is a dataset of conversations between two crowdsourcing agents engaging in a dialog about a given topic.</td>
152
  <td>1M</td>
153
  <td>84K</td>
154
  <td>12K</td>
155
+ <td><a href="https://github.com/facebookresearch/opendialkg">CC-BY-NC-4.0</a></td>
156
  </tr>
157
  <tr>
158
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/ABCD">ABCD</a> <a href="https://github.com/asappresearch/abcd">(GitHub)</a></td>
159
  <td>Action-Based Conversations Dataset.</td>
160
  <td>1.5M</td>
161
  <td>142K</td>
 
163
  <td><a href="https://github.com/asappresearch/abcd/blob/master/LICENSE">MIT</a></td>
164
  </tr>
165
  <tr>
166
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/AirDialogue">AirDialogue</a> <a href="https://github.com/google/airdialogue">(GitHub)</a></td>
167
  <td>AirDialogue is a benchmark dataset for goal-oriented dialogue generation research.</td>
168
  <td>37M</td>
169
  <td>4.6M</td>
 
171
  <td><a href="https://github.com/google/airdialogue/blob/master/LICENSE">Apache License 2.0</a></td>
172
  </tr>
173
  <tr>
174
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/MULTIWOZ2_2">MULTIWOZ2_2</a> <a href="https://huggingface.co/datasets/pfb30/multi_woz_v22">(pfb30)</a></td>
175
  <td>Multi-Domain Wizard-of-Oz dataset (MultiWOZ), a fully-labeled collection of human-human written conversations spanning over multiple domains and topics.</td>
176
  <td>1.9M</td>
177
  <td>143K</td>
178
  <td>10.4K</td>
179
+ <td><a href="https://huggingface.co/datasets/pfb30/multi_woz_v22">Apache License 2.0</a></td>
180
  </tr>
181
  <tr>
182
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/task_oriented/MulDoGO">MulDoGO2</a> <a href="https://github.com/awslabs/multi-domain-goal-oriented-dialogues-dataset">(GitHub)</a></td>
183
  <td>Conversations from the airline, fastfood, finance, insurance, media, and software domains.</td>
184
  <td>10M</td>
185
  <td>892K</td>
 
190
  <tr>
191
  <td colspan="6"><h4>Free Chat</h4></td></tr>
192
  <tr>
193
+ <td><a href="https://huggingface.co/datasets/Salesforce/dialogstudio/tree/main/open_domain/chitchat-dataset">Chit-Chat</a> <a href="https://github.com/BYU-PCCL/chitchat-dataset">(GitHub)</a></td>
194
  <td>Open-domain conversational dataset from the BYU Perception, Control & Cognition lab's Chit-Chat Challenge.</td>
195
  <td>2.3M</td>
196
  <td>7.1K</td>
 
203
  <td>1.2M</td>
204
  <td>102K</td>
205
  <td>13K</td>
206
+ <td><a href="https://huggingface.co/datasets/li2017dailydialog/daily_dialog">CC BY-NC-SA 4.0</a></td>
207
  </tr>
208
 
209
 
210
  <tr>
211
  <td colspan="6"><h4>Misc</h4></td></tr>
212
  <tr>
213
+ <td><a href="http://www.phon.ox.ac.uk/AudioBNC#Access">British National Corpus (BNC)</a></td>
214
  <td>Collection of samples of written and spoken language from a wide range of sources, designed to represent a wide cross-section of British English, both spoken and written, from the late twentieth century.</td>
215
  <td>110M</td>
216
  <td>663K</td>
 
264
  * Conversations are separated by a single blank line.
265
  * Each line corresponds to a single speech turn.
266
  * Each line begins with a speaker label of the form "`[***:]`".
267
+ * When speaker names are anonymized or otherwise unknown, speakers are distinguished by numbers in the following format: "**`[speaker001:]`**", "**`[speaker002:]`**", … <br /> Otherwise, speakers are labeled with their names or roles, e.g. "`[Paul:]`", "`[John King:]`", "`[White House Correspondent:]`".
268
  * There are no parentheses: special annotations are always between square brackets.
269
  * Common tags include:
270
  * "**`[PII]`**": Personally Identifiable Information (anonymized name...)
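The normalization rules above (blank-line-separated conversations, one speech turn per line, `[speaker:]` labels) can be sketched as a small parser. The sample transcript, regex, and function name below are invented for illustration and are not part of the dataset's tooling:

```python
import re

# Invented sample in the normalized format described above:
# conversations separated by one blank line, one speech turn per line,
# each turn prefixed with a "[speaker:]" label.
SAMPLE = """\
[speaker001:] Hello, how are you?
[speaker002:] Fine, thanks.

[Paul:] Shall we begin the meeting?
[speaker001:] Yes, let's start.
"""

# A turn line: "[<speaker>:] <text>"
TURN_RE = re.compile(r"^\[(?P<speaker>[^\]]+):\]\s*(?P<text>.*)$")

def parse_transcript(text):
    """Split normalized dialogue text into conversations of (speaker, utterance) pairs."""
    conversations = []
    for block in text.strip().split("\n\n"):  # conversations are blank-line separated
        turns = []
        for line in block.splitlines():
            m = TURN_RE.match(line)
            if m:  # every line starts with a "[***:]" speaker label
                turns.append((m.group("speaker"), m.group("text")))
        conversations.append(turns)
    return conversations

convs = parse_transcript(SAMPLE)
print(len(convs))    # 2 conversations
print(convs[1][0])   # ('Paul', 'Shall we begin the meeting?')
```

Splitting on double newlines mirrors the blank-line rule directly, and the regex accepts both anonymized labels (`[speaker001:]`) and named ones (`[Paul:]`).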
 
311
  * **Switchboard**
312
  * John J. Godfrey, Edward Holliman (1993). [Switchboard-1 Release 2](https://catalog.ldc.upenn.edu/LDC97S62), Linguistic Data Consortium (LDC), Philadelphia.
313
  * **MediaSum**
314
+ * Zhu, Chenguang and Liu, Yang and Mei, Jie and Zeng, Michael (2021). [MediaSum: A Large-scale Media Interview Dataset for Dialogue Summarization](https://aclanthology.org/2021.naacl-main.474/). North American Chapter of the Association for Computational Linguistics (NAACL), Mexico City, Mexico, 2021.
315
  * **AMI**
316
+ * I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, M.Kronenthal, G. Lathoud, M. Lincoln, A. Lisowska, W. Post, D. Reidsma, and P. Wellner (2005). [The AMI meeting corpus](https://d1wqtxts1xzle7.cloudfront.net/50793769/The_AMI_meeting_corpus20161208-17868-1xaka8f-libre.pdf?1481255943=&response-content-disposition=inline%3B+filename%3DThe_AMI_Meeting_Corpus.pdf&Expires=1725287059&Signature=BtJK8AeKwsBmEEJZDF5C2ISWnB8Ss~IWyi1DLBrLS0A5JOVYcvTCdyn63ANd~dZYeIp3W23PuQOPHQfJYhkf1i2TryegDH82JL2v7ODCtKEWmmpXEGyAdBMdPQPdvu3M2lXEccqFaOq~4-2uzAb7goPkGl0~ZdLV1Jsy5ybc3epkMoZwNV947QNKWuW4t-dsfZJaGx8JeoX6GdpzgdmKGC7wcMnD-3uvYugoTggv-5htWofL~pvZ-mUZ9hAORcEbs3nYm-w9TyqhCwE2au~LyiD6nzaEbZCyiIICulsltNIYtu1X1AYRv7ECpw-9KOgiAENzx-7b~UoDg9TSY2x8Ow__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA), _Proc. International Conference on Methods and Techniques in Behavioral Research_. 2005. p. 1-4.
317
  * **ICSI**
318
+ * Adam Janin, Don Baron, Jane Edwards, Dan Ellis, David Gelbart, Nelson Morgan, Barbara Peskin, Thilo Pfau, Elizabeth Shriberg, Andreas Stolcke, et al. (2003). [The ICSI meeting corpus](https://d1wqtxts1xzle7.cloudfront.net/71218943/icassp03-janin-libre.pdf?1633309989=&response-content-disposition=inline%3B+filename%3DThe_ICSI_meeting_corpus.pdf&Expires=1725287256&Signature=Uh44rCSC1WPAwavIeqA2zouS7H4-XiED1HSHtU45KJuC06w94tuj3khieSS6ZkFavB1swZXCZOp4rZ8fHSpjDB~E-iYStkYB8HlSy1sAUWJ86XONkBem6VeTV6vzJRxdBzj3KLZL3BNubWc6ypOMsorjymoTthbmHyH1zJXjeHbmD1R4ZRLZ2eThImTqN3CE2uXtC8JIzn9vCfGV0cpyRd4JPYTpRojcIHivlSOyY8msZ2syA8-Ca1efmtBDo96EV9PQuDKrKdlbzGj2M1bD9sF3i1W~mrpIp~xPwz3ElHv~lZchrG-56e2wOutPHYFT7vBjMc1FCV0CWah46ATaqA__&Key-Pair-Id=APKAJLOHF5GGSLRBV4ZA). In _2003 IEEE International Conference on Acoustics, Speech, and Signal Processing_ (ICASSP’03), volume 1. IEEE.
319
  * **ReDial**
320
+ * Li, Raymond and Kahou, Samira Ebrahimi and Schulz, Hannes and Michalski, Vincent and Charlin, Laurent and Pal, Chris (2018). [Towards Deep Conversational Recommendations](https://proceedings.neurips.cc/paper/2018/file/800de15c79c8d840f4e78d3af937d4d4-Paper.pdf). _Advances in Neural Information Processing Systems 31 (NeurIPS 2018)_, Montreal.
321
  * **OpenDialKG**
322
+ * Seungwhan Moon, Pararth Shah, Anuj Kumar, Rajen Subba (2019). [OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs](https://aclanthology.org/P19-1081/). _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL)_, Florence, Italy.
323
  * **ABCD**
324
+ * Derek Chen, Howard Chen, Yi Yang, Alexander Lin, Zhou Yu (2021). [Action-Based Conversations Dataset: A Corpus for Building More In-Depth Task-Oriented Dialogue Systems](https://aclanthology.org/2021.naacl-main.239/). _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL)_, Online.
325
  * **AirDialogue**
326
+ * Wei Wei, Quoc Le, Andrew Dai, Jia Li (2018). [AirDialogue: An Environment for Goal-Oriented Dialogue Research](https://aclanthology.org/D18-1419/). _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)_, Brussels, Belgium.
327
  * **MULTIWOZ2_2**
328
+ * Xiaoxue Zang, Abhinav Rastogi, Srinivas Sunkara, Raghav Gupta, Jianguo Zhang, Jindong Chen (2020). [MultiWOZ 2.2 : A Dialogue Dataset with Additional Annotation Corrections and State Tracking Baselines](https://arxiv.org/abs/2007.12720). _arXiv_.
329
  * **MultiDoGO**
330
+ * Denis Peskov, Nancy Clarke, Jason Krone, Brigi Fodor, Yi Zhang, Adel Youssef, Mona Diab (2019). [Multi-Domain Goal-Oriented Dialogues (MultiDoGO): Strategies toward Curating and Annotating Large Scale Dialogue Data](https://www.aclweb.org/anthology/D19-1460). _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_, Hong Kong, China.
331
  * **Chit-Chat**
332
+ * Myers, Will and Etchart, Tyler and Fulda, Nancy (2020). [Conversational Scaffolding: An Analogy-based Approach to Response Prioritization in Open-domain Dialogs](https://www.scitepress.org/Papers/2020/89399/89399.pdf). _Proceedings of the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020)_, volume 2, pages 69-78.
333
  * **DailyDialog**
334
+ * Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, Shuzi Niu (2017). [DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset](https://aclanthology.org/I17-1099/). _Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP)_, Taipei, Taiwan.
335
  * **British National Corpus (BNC)**
336
  * [The British National Corpus online](http://www.natcorp.ox.ac.uk/).
337
 
338
 
339
+ Our versions of MediaSum, ReDial, OpenDialKG, ABCD, AirDialogue, MultiWOZ2.2, MulDoGO2, and Chit-Chat were collected from the DialogStudio compilation, which should also be cited when using these datasets:
340
  * **DialogStudio**
341
  * Zhang, Jianguo and Qian, Kun and Liu, Zhiwei and Heinecke, Shelby and Meng, Rui and Liu, Ye and Yu, Zhou and Savarese, Silvio and Xiong, Caiming (2023). [DialogStudio: Towards Richest and Most Diverse Unified Dataset Collection for Conversational AI](https://arxiv.org/abs/2307.10172). _arXiv preprint arXiv:2307.10172_.
342