Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
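
The card lists both the Datasets library and Dask as supported loaders for the parquet shards. A minimal sketch of reading the new subset with Dask over the Hugging Face filesystem (requires huggingface_hub; the repository id is a placeholder, not taken from this commit):

# Minimal sketch, not part of this commit: read the CC-MAIN-2016-40 shards with Dask.
# "<user>/<dataset>" is a placeholder repository id.
import dask.dataframe as dd

# hf:// paths resolve through huggingface_hub's fsspec filesystem.
df = dd.read_parquet("hf://datasets/<user>/<dataset>/CC-MAIN-2016-40/*.parquet")

print(df.columns)       # columns follow the README schema: general_metadata, images, texts, metadata
print(df.npartitions)   # roughly one partition per parquet shard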
Commit a9c16c4 (parent: cb3b5be)
Qingyun committed: Upload dataset
CC-MAIN-2016-40/train-00000-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:49b98481ffd7e9bb1cb502394668276810d3b66f1218eedc31bfa52a4d935157
+size 422042761
CC-MAIN-2016-40/train-00001-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:387afba89fc14c27b5723da6d0fc00a825ca4e03acfc7418e37450f3d82c57fb
+size 422798657
CC-MAIN-2016-40/train-00002-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1622e06317f94fbf55b0c48f5ee0f2d5d94377502f118f288449919b456e0bb9
+size 420957237
CC-MAIN-2016-40/train-00003-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8ab08820c76ff7665a6f4befbdca4a71b1045186cbc162333865fa0d67fa83bc
+size 420940607
CC-MAIN-2016-40/train-00004-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14690feca7437ca3abcbe4eea0fa5d42911fa6c8eea111d5f01458933d2a21ec
+size 422630991
CC-MAIN-2016-40/train-00005-of-00006.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6dd67482a5baf3c6e5db5c56b65a4b70edd2e1d76a43bf91fb978b4fb1d688f8
+size 421534372
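
Each of the six entries above is a Git LFS pointer rather than the parquet payload itself: oid is the SHA-256 digest of the shard and size is its length in bytes. A minimal sketch for checking a downloaded shard against its pointer (the local path is an assumption; the digest and size are copied from the first pointer above):

# Minimal sketch: verify a locally downloaded shard against its Git LFS pointer.
# The path is hypothetical; oid/size come from train-00000-of-00006.parquet above.
import hashlib
import os

path = "CC-MAIN-2016-40/train-00000-of-00006.parquet"
expected_oid = "49b98481ffd7e9bb1cb502394668276810d3b66f1218eedc31bfa52a4d935157"
expected_size = 422042761

digest = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks so the ~400 MB shard never sits fully in memory.
    for chunk in iter(lambda: f.read(1024 * 1024), b""):
        digest.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert digest.hexdigest() == expected_oid, "sha256 mismatch"
print("shard matches its LFS pointer")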
README.md CHANGED
@@ -1360,6 +1360,58 @@ dataset_info:
     num_examples: 915878
   download_size: 2248454753
   dataset_size: 5402565529
+- config_name: CC-MAIN-2016-40
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 5938544915
+    num_examples: 1113534
+  download_size: 2530904625
+  dataset_size: 5938544915
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -1465,6 +1517,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2016-36/train-*
+- config_name: CC-MAIN-2016-40
+  data_files:
+  - split: train
+    path: CC-MAIN-2016-40/train-*
 ---
 
 We are uploading the dataset files ~
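
With the README now registering CC-MAIN-2016-40 under both dataset_info (feature schema and split sizes) and configs (the CC-MAIN-2016-40/train-* glob), the new subset becomes loadable by config name. A minimal sketch with a placeholder repository id, not taken from this commit:

# Minimal sketch: load the newly registered config by name with the `datasets` library.
# "<user>/<dataset>" is a placeholder for the repository this commit belongs to.
from datasets import load_dataset

ds = load_dataset(
    "<user>/<dataset>",        # placeholder repo id
    name="CC-MAIN-2016-40",    # config added by this commit
    split="train",
    streaming=True,            # stream instead of downloading ~2.5 GB of parquet up front
)

example = next(iter(ds))
print(example["general_metadata"]["url"])     # struct field from the schema above
print(len(example["texts"]), "text entries")  # sequence of strings per the schema
print(len(example["metadata"]), "image metadata records")  # list of structs (width, height, hashes, ...)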