---
annotations_creators:
- no-annotation
language_creators:
- found
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
- 10K<n<100K
- 1M<n<10M
source_datasets:
- original
task_categories:
- summarization
task_ids: []
paperswithcode_id: bigpatent
pretty_name: Big Patent
tags:
- patent-summarization
dataset_info:
- config_name: all
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 38367048389
    num_examples: 1207222
  - name: validation
    num_bytes: 2115827002
    num_examples: 67068
  - name: test
    num_bytes: 2129505280
    num_examples: 67072
  download_size: 10142923776
  dataset_size: 42612380671
- config_name: a
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 5683460620
    num_examples: 174134
  - name: validation
    num_bytes: 313324505
    num_examples: 9674
  - name: test
    num_bytes: 316633277
    num_examples: 9675
  download_size: 10142923776
  dataset_size: 6313418402
- config_name: b
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 4236070976
    num_examples: 161520
  - name: validation
    num_bytes: 234425138
    num_examples: 8973
  - name: test
    num_bytes: 231538734
    num_examples: 8974
  download_size: 10142923776
  dataset_size: 4702034848
- config_name: c
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 4506249306
    num_examples: 101042
  - name: validation
    num_bytes: 244684775
    num_examples: 5613
  - name: test
    num_bytes: 252566793
    num_examples: 5614
  download_size: 10142923776
  dataset_size: 5003500874
- config_name: d
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 264717412
    num_examples: 10164
  - name: validation
    num_bytes: 14560482
    num_examples: 565
  - name: test
    num_bytes: 14403430
    num_examples: 565
  download_size: 10142923776
  dataset_size: 293681324
- config_name: e
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 881101433
    num_examples: 34443
  - name: validation
    num_bytes: 48646158
    num_examples: 1914
  - name: test
    num_bytes: 48586429
    num_examples: 1914
  download_size: 10142923776
  dataset_size: 978334020
- config_name: f
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 2146383473
    num_examples: 85568
  - name: validation
    num_bytes: 119632631
    num_examples: 4754
  - name: test
    num_bytes: 119596303
    num_examples: 4754
  download_size: 10142923776
  dataset_size: 2385612407
- config_name: g
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 8877854206
    num_examples: 258935
  - name: validation
    num_bytes: 492581177
    num_examples: 14385
  - name: test
    num_bytes: 496324853
    num_examples: 14386
  download_size: 10142923776
  dataset_size: 9866760236
- config_name: h
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 8075621958
    num_examples: 257019
  - name: validation
    num_bytes: 447602356
    num_examples: 14279
  - name: test
    num_bytes: 445460513
    num_examples: 14279
  download_size: 10142923776
  dataset_size: 8968684827
- config_name: y
  features:
  - name: description
    dtype: string
  - name: abstract
    dtype: string
  splits:
  - name: train
    num_bytes: 3695589005
    num_examples: 124397
  - name: validation
    num_bytes: 200369780
    num_examples: 6911
  - name: test
    num_bytes: 204394948
    num_examples: 6911
  download_size: 10142923776
  dataset_size: 4100353733
config_names:
- a
- all
- b
- c
- d
- e
- f
- g
- h
- y
---
# Sampled big_patent Dataset

This is a sampled version of the big_patent dataset, containing 5000 train rows and 500 test rows.
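
As a minimal sketch, the sample can be loaded with the `datasets` library. The repository id below is a placeholder — substitute this repo's actual id:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this repository's actual id.
ds = load_dataset("your-username/big_patent_sample")

print(ds["train"].num_rows)  # expected: 5000
print(ds["test"].num_rows)   # expected: 500
```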

The original repo card follows below.

# Dataset Card for Big Patent

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Big Patent](https://evasharma.github.io/bigpatent/)
- **Repository:**
- **Paper:** [BIGPATENT: A Large-Scale Dataset for Abstractive and Coherent Summarization](https://arxiv.org/abs/1906.03741)
- **Leaderboard:**
- **Point of Contact:** [Lu Wang](mailto:wangluxy@umich.edu)

### Dataset Summary

BIGPATENT consists of 1.3 million records of U.S. patent documents along with human-written abstractive summaries.
Each U.S. patent application is filed under a Cooperative Patent Classification (CPC) code.
There are nine such classification categories:
- a: Human Necessities
- b: Performing Operations; Transporting
- c: Chemistry; Metallurgy
- d: Textiles; Paper
- e: Fixed Constructions
- f: Mechanical Engineering; Lighting; Heating; Weapons; Blasting
- g: Physics
- h: Electricity
- y: General tagging of new or cross-sectional technology

The current defaults are version 2.1.2 (updated to use cased raw strings) and the 'all' CPC codes:
```python
from datasets import load_dataset

ds = load_dataset("big_patent")                    # default is 'all' CPC codes
ds = load_dataset("big_patent", "all")             # the same as above
ds = load_dataset("big_patent", "a")               # only 'a' CPC codes
ds = load_dataset("big_patent", codes=["a", "b"])  # multiple CPC codes
```

To use version 1.0.0 (lower-cased, tokenized words), pass both the `codes` and `version` parameters:
```python
ds = load_dataset("big_patent", codes="all", version="1.0.0")
ds = load_dataset("big_patent", codes="a", version="1.0.0")
ds = load_dataset("big_patent", codes=["a", "b"], version="1.0.0")
```

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

English

## Dataset Structure

### Data Instances

301
+ Each instance contains a pair of `description` and `abstract`. `description` is extracted from the Description section of the Patent while `abstract` is extracted from the Abstract section.
302
+ ```
303
+ {
304
+ 'description': 'FIELD OF THE INVENTION \n [0001] This invention relates to novel calcium phosphate-coated implantable medical devices and processes of making same. The unique calcium-phosphate coated implantable medical devices minimize...',
305
+ 'abstract': 'This invention relates to novel calcium phosphate-coated implantable medical devices...'
306
+ }
307
+ ```
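
A sketch of inspecting one instance with the `datasets` library; the `"d"` config is used here only because it is the smallest category:

```python
from datasets import load_dataset

# "d" (Textiles; Paper) is the smallest CPC category, so it downloads fastest.
ds = load_dataset("big_patent", "d", split="test")

example = ds[0]
print(example["abstract"][:200])     # human-written summary
print(example["description"][:200])  # full patent description
```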
308
+
309
+ ### Data Fields
310
+
311
+ - `description`: detailed description of patent.
312
+ - `abstract`: Patent abastract.
313
+
### Data Splits

|     |   train | validation |  test |
|:----|--------:|-----------:|------:|
| all | 1207222 |      67068 | 67072 |
| a   |  174134 |       9674 |  9675 |
| b   |  161520 |       8973 |  8974 |
| c   |  101042 |       5613 |  5614 |
| d   |   10164 |        565 |   565 |
| e   |   34443 |       1914 |  1914 |
| f   |   85568 |       4754 |  4754 |
| g   |  258935 |      14385 | 14386 |
| h   |  257019 |      14279 | 14279 |
| y   |  124397 |       6911 |  6911 |

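The split sizes above can be cross-checked programmatically; a sketch using the smallest config:

```python
from datasets import load_dataset

ds = load_dataset("big_patent", "d")  # smallest config
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)  # expected: 10164, 565, 565
```
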
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

```bibtex
@article{DBLP:journals/corr/abs-1906-03741,
  author     = {Eva Sharma and
                Chen Li and
                Lu Wang},
  title      = {{BIGPATENT:} {A} Large-Scale Dataset for Abstractive and Coherent
                Summarization},
  journal    = {CoRR},
  volume     = {abs/1906.03741},
  year       = {2019},
  url        = {http://arxiv.org/abs/1906.03741},
  eprinttype = {arXiv},
  eprint     = {1906.03741},
  timestamp  = {Wed, 26 Jun 2019 07:14:58 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1906-03741.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.