Modalities: Text
Formats: csv
Languages: English
ArXiv: 1911.12237
Libraries: Datasets, pandas
License: cc-by-nc-nd-4.0

parquet-converter committed
Commit 7fb99ee
1 Parent(s): 2a6b79b

Update parquet files
.gitattributes DELETED
@@ -1,37 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
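The deleted rules above use glob-style patterns to route matching files through git-LFS. As a rough sketch (not from the repo; gitattributes matching has extra path semantics that `fnmatch` ignores), the pattern logic can be checked like this:

```python
# Check which filenames the glob-style LFS patterns above would match.
# Only a subset of the full pattern list is reproduced here.
from fnmatch import fnmatch

lfs_patterns = ["*.7z", "*.arrow", "*.bin", "*.parquet", "*.zip"]

def tracked_by_lfs(filename, patterns=lfs_patterns):
    """Return True if the filename matches any LFS-tracked glob pattern."""
    return any(fnmatch(filename, pattern) for pattern in patterns)

# The parquet files added by this commit match "*.parquet"; the plain
# CSV files did not match any rule and were stored directly in git.
assert tracked_by_lfs("csv-train.parquet")
assert not tracked_by_lfs("train.csv")
```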
 
README.md DELETED
@@ -1,91 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- language:
- - en
- license:
- - cc-by-nc-nd-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - summarization
- task_ids: []
- paperswithcode_id: samsum-corpus
- pretty_name: SAMSum Corpus
- tags:
- - conversations-summarization
- ---
- # Dataset Card for SAMSum Corpus
- ## Dataset Description
- ### Links
- - **Homepage:** https://arxiv.org/abs/1911.12237v2
- - **Repository:** https://arxiv.org/abs/1911.12237v2
- - **Paper:** https://arxiv.org/abs/1911.12237v2
- - **Point of Contact:** https://huggingface.co/knkarthick
-
- ### Dataset Summary
- The SAMSum dataset contains about 16k messenger-like conversations with summaries. The conversations were created and written down by linguists fluent in English, who were asked to write conversations similar to those they exchange daily, reflecting the topic distribution of their real-life messenger conversations. The style and register are diversified: conversations may be informal, semi-formal or formal, and may contain slang, emoticons and typos. Each conversation was then annotated with a summary, on the assumption that a summary should be a concise brief, in the third person, of what the people talked about.
- The SAMSum dataset was prepared by Samsung R&D Institute Poland and is distributed for research purposes under a non-commercial licence (CC BY-NC-ND 4.0).
-
- ### Languages
- English
-
- ## Dataset Structure
- ### Data Instances
- The SAMSum dataset consists of 16,369 conversations distributed uniformly into 4 groups based on the number of utterances per conversation: 3-6, 7-12, 13-18 and 19-30. Each utterance is tagged with the name of its speaker. Most conversations are dialogues between two interlocutors (about 75% of all conversations); the rest involve three or more people.
- The first instance in the training set:
- {'id': '13818513', 'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.', 'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)"}
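The `dialogue` field packs the whole conversation into one string, with utterances separated by `\r\n` and each utterance prefixed by its speaker's name. A minimal sketch (the helper name is ours, not from the dataset card) of splitting the example instance above into (speaker, utterance) pairs:

```python
# The first training instance, copied from the dataset card above.
example = {
    'id': '13818513',
    'summary': 'Amanda baked cookies and will bring Jerry some tomorrow.',
    'dialogue': "Amanda: I baked cookies. Do you want some?\r\nJerry: Sure!\r\nAmanda: I'll bring you tomorrow :-)",
}

def split_dialogue(dialogue):
    """Split a SAMSum-style dialogue string into (speaker, utterance) pairs."""
    turns = []
    for line in dialogue.splitlines():
        # Each utterance starts with "Name: "; split on the first occurrence.
        speaker, _, utterance = line.partition(": ")
        turns.append((speaker, utterance))
    return turns

turns = split_dialogue(example['dialogue'])
# → [('Amanda', 'I baked cookies. Do you want some?'),
#    ('Jerry', 'Sure!'),
#    ('Amanda', "I'll bring you tomorrow :-)")]
```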
-
- ### Data Fields
- - dialogue: text of the dialogue.
- - summary: human-written summary of the dialogue.
- - id: unique file id of an example.
-
- ### Data Splits
- - train: 14732
- - val: 818
- - test: 819
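A quick sanity check (ours, not part of the card): the three split sizes listed above sum exactly to the 16,369 conversations stated under Data Instances.

```python
# Split sizes as listed in the dataset card.
splits = {"train": 14732, "val": 818, "test": 819}

total = sum(splits.values())
assert total == 16369  # matches the corpus size given under "Data Instances"
```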
-
- ## Dataset Creation
- ### Curation Rationale
- From the paper:
- In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily communication data. Unfortunately, they all differed in some respect from the conversations that are typically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assistant and a client buying petrol.
- As a consequence, we decided to create a chat dialogue dataset by constructing such conversations that would epitomize the style of a messenger app.
-
- ### Who are the source language producers?
- Linguists.
- ### Who are the annotators?
- Language experts.
- ### Annotation process
- From the paper:
- Each dialogue was created by one person. After collecting all of the conversations, we asked language experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) include names of interlocutors, (4) be written in the third person. Each dialogue contains only one reference summary.
-
- ## Licensing Information
- Non-commercial licence: CC BY-NC-ND 4.0
-
- ## Citation Information
- ```
- @inproceedings{gliwa-etal-2019-samsum,
-     title = "{SAMS}um Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization",
-     author = "Gliwa, Bogdan and
-       Mochol, Iwona and
-       Biesek, Maciej and
-       Wawer, Aleksander",
-     booktitle = "Proceedings of the 2nd Workshop on New Frontiers in Summarization",
-     month = nov,
-     year = "2019",
-     address = "Hong Kong, China",
-     publisher = "Association for Computational Linguistics",
-     url = "https://www.aclweb.org/anthology/D19-5409",
-     doi = "10.18653/v1/D19-5409",
-     pages = "70--79"
- }
- ```
- ## Contributions
 
knkarthick--samsum/csv-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cecd6af75d1f1ab4d2d26e084aae6d986ad05553a74f4e86f3732d3c7d18f96a
+ size 344016
knkarthick--samsum/csv-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86fe8cf32ad015f3d01795f1f8df0395a7700f7f22b9068f6a2462010f50fabd
+ size 6002880
knkarthick--samsum/csv-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:66dd6cb84c5a77a91f613b0dfd84b4221ed7567f595824212f850a5196236ce1
+ size 331693
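The three files added above are git-LFS pointer files, not the parquet data itself: each records the spec version, a sha256 object id, and the payload size in bytes. A minimal sketch (the helper name is ours, not part of git-lfs) of parsing that three-line format:

```python
def parse_lfs_pointer(text):
    """Parse a git-LFS pointer file into a {'version', 'oid', 'size'} dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Each pointer line is "key value", split on the first space.
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])  # payload size in bytes
    return fields

# The pointer for csv-test.parquet, as added in this commit.
pointer = parse_lfs_pointer(
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:cecd6af75d1f1ab4d2d26e084aae6d986ad05553a74f4e86f3732d3c7d18f96a\n"
    "size 344016\n"
)
# → {'version': 'https://git-lfs.github.com/spec/v1',
#    'oid': 'sha256:cecd...f96a', 'size': 344016}
```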
test.csv DELETED
The diff for this file is too large to render. See raw diff
 
train.csv DELETED
The diff for this file is too large to render. See raw diff
 
validation.csv DELETED
The diff for this file is too large to render. See raw diff