Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Commit d79528d (1 parent: 3230195)
Qingyun committed: Upload dataset
CC-MAIN-2015-14/train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a092d505d7cf37c4334c34fd0ac59e9ca0ec594e1fe274a18ec05caeaa43aa5
+size 385993637
CC-MAIN-2015-14/train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b5082bd056193500bd8a8cca72c93709d351ff80b3d6eb07706900a354ac121
+size 387349692
CC-MAIN-2015-14/train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:80d1a2f0ebf11655541cc4990d314112a0ab63d995ff24d095f22cbf1d8d9f5e
+size 388825465
CC-MAIN-2015-14/train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cefce1855b2c59c0e05de1b8145065616b44ffd720693a64b1bb19c67a9089dc
+size 388395073
CC-MAIN-2015-14/train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04a5b3e9fc41ed3f6f71cf99d76ab20a27cc353c22d372e6309e9d91a3cbaaea
+size 388658244
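
Each of the five parquet shards is added as a Git LFS pointer: the pointer itself stores only the SHA-256 digest (`oid`) and byte `size` of the real file. Below is a minimal sketch of how a downloaded shard could be checked against its pointer, assuming the shard for train-00000-of-00005.parquet has been fetched to the same relative path; the digest and size are copied from the first pointer above.

```python
import hashlib
from pathlib import Path

# Expected values copied from the LFS pointer for train-00000-of-00005.parquet.
EXPECTED_SHA256 = "3a092d505d7cf37c4334c34fd0ac59e9ca0ec594e1fe274a18ec05caeaa43aa5"
EXPECTED_SIZE = 385993637

# Assumption: the shard was downloaded to the same relative path as in the repo.
shard = Path("CC-MAIN-2015-14/train-00000-of-00005.parquet")

# Stream the file through SHA-256 so a ~386 MB shard never has to fit in memory at once.
digest = hashlib.sha256()
with shard.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert shard.stat().st_size == EXPECTED_SIZE, "size does not match the LFS pointer"
assert digest.hexdigest() == EXPECTED_SHA256, "sha256 does not match the LFS pointer"
print("shard matches its LFS pointer")
```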
README.md CHANGED
@@ -632,6 +632,58 @@ dataset_info:
     num_examples: 1129411
   download_size: 2528026633
   dataset_size: 6263650452
+- config_name: CC-MAIN-2015-14
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4524425019
+    num_examples: 885221
+  download_size: 1939222111
+  dataset_size: 4524425019
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -681,6 +733,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2015-11/train-*
+- config_name: CC-MAIN-2015-14
+  data_files:
+  - split: train
+    path: CC-MAIN-2015-14/train-*
 ---
 
 We are uploading the dataset files ~
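
With the README change, CC-MAIN-2015-14 becomes a named config alongside the existing snapshots, pointing at the five parquet shards added above. Below is a minimal loading sketch with the Hugging Face `datasets` library; the repository ID is a hypothetical placeholder, since the commit page does not show which repo this belongs to, and `streaming=True` is just one way to avoid downloading every shard up front.

```python
from datasets import load_dataset

# Hypothetical repo ID; replace with the actual dataset repository for this commit.
REPO_ID = "org-name/dataset-name"

# Load only the newly registered config; streaming iterates the parquet shards lazily.
ds = load_dataset(REPO_ID, name="CC-MAIN-2015-14", split="train", streaming=True)

# Fields follow the features block added to the README:
# general_metadata (struct), images (sequence), texts (sequence), metadata (list of structs).
sample = next(iter(ds))
print(sample["general_metadata"]["url"])
print(len(sample["images"]), "images,", len(sample["texts"]), "text segments")
```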