Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit 10cd9b9
1 Parent(s): 2e70c03

Upload dataset
CC-MAIN-2016-30/train-00000-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b6798994a641a774985236fdfea0a50b9b23dcd5afe468195da9a4317856e54
+size 364404044

CC-MAIN-2016-30/train-00001-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aabdcc7940ed5cecff364e304108aead0f53dc8da253fc19cf1f2ff349bd235e
+size 364305015

CC-MAIN-2016-30/train-00002-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d24f2f0c3902ea5f9ccc1f590f5c1c790450fb39001b71e552641167818c938
+size 365047618

CC-MAIN-2016-30/train-00003-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:362e1821492b2fa69a40341db6e127c5467eee93abbef5bf8e7d0c1b08493bec
+size 364650086

CC-MAIN-2016-30/train-00004-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68f0aa5dd92f94389112aa103180c5835fee7f0e9e1dbb8534552bb5e54432a5
+size 364655445

CC-MAIN-2016-30/train-00005-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:890ff3bd84120d1bc3b17d0941c35a4a3c3ef08c81489b2cc7d6fa29199593ce
+size 363698778

CC-MAIN-2016-30/train-00006-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a13eee4b7e0adb3850f17bddac67ed835327016f7b2aa5fbc6b3cd52f8d5e104
+size 361893669

CC-MAIN-2016-30/train-00007-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:675c3ce37c0f7a27eef0e0d280939772d856e63346cf3eb9c1f0ff591d7cf849
+size 364740185
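Each `ADDED` entry above is a Git LFS pointer rather than the parquet bytes themselves: `version` names the pointer spec, `oid` carries the SHA-256 digest of the actual shard, and `size` is its length in bytes. (The eight `size` values sum to 2,913,394,840 bytes, matching the `download_size` recorded in the README diff below.) As a minimal sketch, assuming you have a pointer's text and the resolved `.parquet` file locally (the paths and helper names here are hypothetical), a downloaded shard can be checked against its pointer like this:

```python
import hashlib

def parse_lfs_pointer(pointer_text: str):
    """Split the three 'key value' lines of a Git LFS pointer."""
    fields = dict(line.split(" ", 1) for line in pointer_text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return algo, digest, int(fields["size"])

def verify_shard(parquet_path: str, pointer_text: str) -> bool:
    """Return True if the local file matches the pointer's oid and size."""
    algo, digest, size = parse_lfs_pointer(pointer_text)
    assert algo == "sha256", f"unexpected hash algorithm: {algo}"
    h = hashlib.sha256()
    seen = 0
    # Hash in 1 MiB chunks so a ~364 MB shard never sits in memory whole.
    with open(parquet_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
            seen += len(chunk)
    return seen == size and h.hexdigest() == digest
```

The size comparison is cheap insurance against a truncated download; the digest comparison catches corruption.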
README.md CHANGED
@@ -1256,6 +1256,58 @@ dataset_info:
     num_examples: 627995
   download_size: 1403890884
   dataset_size: 3414418701
+- config_name: CC-MAIN-2016-30
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 7244342539
+    num_examples: 1183776
+  download_size: 2913394840
+  dataset_size: 7244342539
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -1353,6 +1405,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2016-26/train-*
+- config_name: CC-MAIN-2016-30
+  data_files:
+  - split: train
+    path: CC-MAIN-2016-30/train-*
 ---
 
 We are uploading the dataset files ~
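With the config registered in the card's YAML above, the new subset loads like any other. A minimal sketch using the `datasets` library listed in the page sidebar; the repo id below is a placeholder for this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# "org/dataset" is a hypothetical repo id; substitute the real one.
# streaming=True iterates records without pulling the ~2.9 GB split up front.
ds = load_dataset("org/dataset", "CC-MAIN-2016-30", split="train", streaming=True)

sample = next(iter(ds))
print(sample["general_metadata"]["url"])            # source page URL
print(len(sample["texts"]), len(sample["images"]))  # parallel text/image slots
if sample["metadata"]:                              # one struct per image
    m = sample["metadata"][0]
    print(m["width"], m["height"], m["unsafe_prob"])
```

Since the split is plain parquet under `CC-MAIN-2016-30/train-*`, the same shards also read directly with Dask (`dask.dataframe.read_parquet`), the other library the page lists.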