Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit 2e70c03
1 Parent(s): ced3d84

Upload dataset
CC-MAIN-2016-26/train-00000-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5eb9a85a14ed2aa7e1516348043ebe66debad4724114023c3af83cf4ad7a4e3
+size 350268527

CC-MAIN-2016-26/train-00001-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed78071babea819c223dd0145d703fb557d031cd38e8c72c8e9c3f3c07dcae3e
+size 351006508

CC-MAIN-2016-26/train-00002-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:92f4dbc6b6d5877aa66534228a272097c8244864cabae6d3716109fe761bd5ab
+size 349786304

CC-MAIN-2016-26/train-00003-of-00004.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec38d6da85bf0bd8630534a2de5814c937c33ccd3b34b2bd191e123d4a330406
+size 352829545
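The four shards above are tracked with Git LFS, so the commit only adds pointer files recording the LFS spec version, the SHA-256 (`oid`) of the real parquet object, and its size in bytes. A minimal sketch of how one might verify a downloaded shard against its pointer (the local path is a placeholder; the hash and size are copied from the first pointer above):

```python
import hashlib
from pathlib import Path


def verify_lfs_object(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Check a downloaded file against the oid/size recorded in its LFS pointer."""
    p = Path(path)
    if p.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        # Stream the file in 1 MiB chunks so ~350 MB shards don't load into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256


# Values from the pointer for train-00000-of-00004.parquet; the path is hypothetical.
print(verify_lfs_object(
    "CC-MAIN-2016-26/train-00000-of-00004.parquet",
    "c5eb9a85a14ed2aa7e1516348043ebe66debad4724114023c3af83cf4ad7a4e3",
    350268527,
))
```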
README.md CHANGED
@@ -1204,6 +1204,58 @@ dataset_info:
     num_examples: 857060
   download_size: 2000624854
   dataset_size: 4623903344
+- config_name: CC-MAIN-2016-26
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 3414418701
+    num_examples: 627995
+  download_size: 1403890884
+  dataset_size: 3414418701
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -1297,6 +1349,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2016-22/train-*
+- config_name: CC-MAIN-2016-26
+  data_files:
+  - split: train
+    path: CC-MAIN-2016-26/train-*
 ---

 We are uploading the dataset files ~
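With the `dataset_info` and `configs` entries above merged into the card, the new shards are exposed as a `CC-MAIN-2016-26` config whose `data_files` glob is `CC-MAIN-2016-26/train-*`. A minimal loading sketch with the `datasets` library; the repository id `user/dataset-name` below is a placeholder for this repo's id on the Hub:

```python
from datasets import load_dataset

# "user/dataset-name" is a placeholder for this repository's Hub id.
ds = load_dataset(
    "user/dataset-name",
    name="CC-MAIN-2016-26",  # the config added in this commit
    split="train",
    streaming=True,          # avoid downloading all four ~350 MB shards up front
)

# Each record follows the features declared in the README: interleaved `texts`
# and `images`, plus a per-document `general_metadata` struct and a per-image
# `metadata` list.
first = next(iter(ds))
print(first["general_metadata"]["url"], len(first["texts"]), len(first["images"]))
```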