albertvillanova (HF staff) committed
Commit 36c1d37
1 parent: 2b6370b

Convert dataset sizes from base 2 to base 10 in the dataset card (#4)


- Convert dataset sizes from base 2 to base 10 in the dataset card (5d7d83d8746da1004db067f440c6732ef6777c80)

Files changed (1):
  README.md (+12 -12)
README.md CHANGED
@@ -118,9 +118,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:** 120.49 MB
-- **Size of the generated dataset:** 158.72 MB
-- **Total amount of disk used:** 279.21 MB
+- **Size of downloaded dataset files:** 126.34 MB
+- **Size of the generated dataset:** 166.43 MB
+- **Total amount of disk used:** 292.77 MB
 
 ### Dataset Summary
 
@@ -140,9 +140,9 @@ Korean Natural Language Inference datasets.
 
 #### multi_nli
 
-- **Size of downloaded dataset files:** 40.16 MB
-- **Size of the generated dataset:** 80.80 MB
-- **Total amount of disk used:** 120.97 MB
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 84.72 MB
+- **Total amount of disk used:** 126.85 MB
 
 An example of 'train' looks as follows.
 ```
@@ -151,9 +151,9 @@ An example of 'train' looks as follows.
 
 #### snli
 
-- **Size of downloaded dataset files:** 40.16 MB
-- **Size of the generated dataset:** 76.42 MB
-- **Total amount of disk used:** 116.59 MB
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 80.13 MB
+- **Total amount of disk used:** 122.25 MB
 
 An example of 'train' looks as follows.
 ```
@@ -162,9 +162,9 @@ An example of 'train' looks as follows.
 
 #### xnli
 
-- **Size of downloaded dataset files:** 40.16 MB
-- **Size of the generated dataset:** 1.49 MB
-- **Total amount of disk used:** 41.66 MB
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 1.56 MB
+- **Total amount of disk used:** 43.68 MB
 
 An example of 'validation' looks as follows.
 ```
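The commit message says the sizes were converted from base 2 to base 10, and the figures in the diff bear this out: each new value is the old value reinterpreted as MiB (1024² bytes) and re-expressed in MB (10⁶ bytes). The actual script is not part of this commit; the sketch below (the `mib_to_mb` helper is a name of my own choosing) just illustrates the arithmetic behind the diff.

```python
def mib_to_mb(size_mib: float) -> float:
    """Reinterpret a figure given in base-2 megabytes (MiB, 1024**2 bytes)
    as base-10 megabytes (MB, 10**6 bytes), rounded to two decimals."""
    size_bytes = size_mib * 1024**2      # MiB -> bytes
    return round(size_bytes / 10**6, 2)  # bytes -> MB

# The removed figures map exactly onto the added ones, e.g. for the
# top-level sizes in the card:
print(mib_to_mb(120.49))  # 126.34  (downloaded dataset files)
print(mib_to_mb(158.72))  # 166.43  (generated dataset)
print(mib_to_mb(279.21))  # 292.77  (total disk used)
```

The same factor of 1024²/10⁶ ≈ 1.048576 reproduces every per-config change in the diff as well (40.16 → 42.11, 80.80 → 84.72, 1.49 → 1.56, and so on).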