Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask

Qingyun committed
Commit ced3d84
1 parent: a65868a

Upload dataset
CC-MAIN-2016-22/train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abc7efe5428979c2ed76a1f8fa09504fc87bf629239946b1b0c43c8641eceef8
+size 398132689
CC-MAIN-2016-22/train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48c6a202ebd6ea92d11ab74ee3fdf3135bf608c4841ef7a43095ee45824a47a3
+size 400663050
CC-MAIN-2016-22/train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77c453b22c67f775bec139441ecbc69d810e72f9799f04046b28b92d4bf27e38
+size 400530878
CC-MAIN-2016-22/train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:412e7ebf347e2e52c8f6b0a2198e98b531129eaac792ee6e28328dbb8d6908df
+size 400790499
CC-MAIN-2016-22/train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14d96336aaaef8bdb5d812f55a022ec8ce9fc05a61ef6c85a0704303facb1945
+size 400507738
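The five files above are stored as Git LFS pointers rather than the parquet data itself; each pointer records only the spec version, the blob's sha256 digest, and its byte size. A minimal sketch of parsing one such pointer (using the first shard's contents shown above; the helper name is an illustration, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a git-lfs pointer file into a {key: value} dict.

    The pointer format is three "key value" lines:
    version, oid sha256:<hex>, and size <bytes>.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


# Contents of CC-MAIN-2016-22/train-00000-of-00005.parquet as committed.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:abc7efe5428979c2ed76a1f8fa09504fc87bf629239946b1b0c43c8641eceef8
size 398132689
"""

info = parse_lfs_pointer(pointer)
algo, _, digest = info["oid"].partition(":")  # -> "sha256", "<hex digest>"
```

After `git lfs pull` replaces the pointer with the real blob, the file's sha256 and size should match `digest` and `info["size"]`.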
README.md CHANGED
@@ -1152,6 +1152,58 @@ dataset_info:
     num_examples: 747570
   download_size: 1675500816
   dataset_size: 3897220786
+- config_name: CC-MAIN-2016-22
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4623903344
+    num_examples: 857060
+  download_size: 2000624854
+  dataset_size: 4623903344
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -1241,6 +1293,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2016-18/train-*
+- config_name: CC-MAIN-2016-22
+  data_files:
+  - split: train
+    path: CC-MAIN-2016-22/train-*
 ---
 
 We are uploading the dataset files ~
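The `download_size` recorded for the new CC-MAIN-2016-22 config should equal the sum of the five parquet shard sizes listed in the LFS pointers above; a quick sanity check:

```python
# Shard sizes from the five LFS pointers for CC-MAIN-2016-22.
shard_sizes = [398132689, 400663050, 400530878, 400790499, 400507738]

# Total must match the download_size declared in the README metadata.
download_size = sum(shard_sizes)
assert download_size == 2000624854
```

This kind of check is useful after a multi-shard upload to confirm that no shard was truncated or omitted from the card's metadata.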