Commit 36c1d37
Parent(s): 2b6370b
Convert dataset sizes from base 2 to base 10 in the dataset card (#4)
- Convert dataset sizes from base 2 to base 10 in the dataset card (5d7d83d8746da1004db067f440c6732ef6777c80)
README.md CHANGED

@@ -118,9 +118,9 @@ dataset_info:
 - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
 - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 126.34 MB
+- **Size of the generated dataset:** 166.43 MB
+- **Total amount of disk used:** 292.77 MB
 
 ### Dataset Summary
 
@@ -140,9 +140,9 @@ Korean Natural Language Inference datasets.
 
 #### multi_nli
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 84.72 MB
+- **Total amount of disk used:** 126.85 MB
 
 An example of 'train' looks as follows.
 ```
@@ -151,9 +151,9 @@ An example of 'train' looks as follows.
 
 #### snli
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:**
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 80.13 MB
+- **Total amount of disk used:** 122.25 MB
 
 An example of 'train' looks as follows.
 ```
@@ -162,9 +162,9 @@ An example of 'train' looks as follows.
 
 #### xnli
 
-- **Size of downloaded dataset files:**
-- **Size of the generated dataset:** 1.
-- **Total amount of disk used:**
+- **Size of downloaded dataset files:** 42.11 MB
+- **Size of the generated dataset:** 1.56 MB
+- **Total amount of disk used:** 43.68 MB
 
 An example of 'validation' looks as follows.
 ```
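For context, the commit changes the unit convention behind the card's size figures: the new numbers count decimal megabytes (10**6 bytes) rather than binary mebibytes (2**20 bytes). Below is a minimal Python sketch of the two conversions; the byte count is hypothetical, chosen only so the decimal result matches the 126.34 MB figure in the diff.

```python
def to_decimal_mb(num_bytes: int) -> float:
    """Size in base-10 megabytes (1 MB = 1,000,000 bytes), the new convention."""
    return num_bytes / 1_000_000

def to_binary_mib(num_bytes: int) -> float:
    """Size in base-2 mebibytes (1 MiB = 1,048,576 bytes), the old convention."""
    return num_bytes / 2**20

num_bytes = 126_340_000  # hypothetical byte count, for illustration only
print(f"{to_decimal_mb(num_bytes):.2f} MB")   # 126.34 MB (base 10)
print(f"{to_binary_mib(num_bytes):.2f} MiB")  # 120.49 MiB (base 2)
```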