Datasets
Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
Qingyun committed
Commit 7053ca1
1 Parent(s): dd55375

Upload dataset
CC-MAIN-2015-32/train-00000-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c34b7c5cff42b51c47db995e83cb6ec885a439fdf4da8722f55b2c51045ff15b
+size 411356743

CC-MAIN-2015-32/train-00001-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:902f14714110ad543045c9910102a50b7e1f63275924f8cd77082d2515af7f05
+size 411756522

CC-MAIN-2015-32/train-00002-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d327a197a2086b24065c1b306ecc2d18facc35e1a32596fb6eec28569b0846e6
+size 413764299

CC-MAIN-2015-32/train-00003-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2be7d54abcbd878dd3be6b390b07713434da736678581ba36d7a060ce8a120dc
+size 415369802

CC-MAIN-2015-32/train-00004-of-00005.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:223f223a1dc23c304d48e3ef378a198752e7321304d048fa75e9a89029af8b82
+size 412959733
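Each parquet file above is stored as a Git LFS v1 pointer: three `key value` lines giving the spec version, the SHA-256 object id, and the byte size of the real blob. A minimal parser sketch (the sample pointer is the first shard's, copied from the diff):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse the 'key value' lines of a git-lfs pointer file into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of train-00000-of-00005.parquet, as shown in the diff.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c34b7c5cff42b51c47db995e83cb6ec885a439fdf4da8722f55b2c51045ff15b
size 411356743
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256:c34b7c5c...
print(info["size"])  # 411356743
```

Cloning the repo without LFS installed yields exactly these small pointer files; the `oid` lets LFS fetch and verify the actual 411 MB parquet blob.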
README.md CHANGED
@@ -840,6 +840,58 @@ dataset_info:
     num_examples: 784496
   download_size: 1828575226
   dataset_size: 4320140953
+- config_name: CC-MAIN-2015-32
+  features:
+  - name: general_metadata
+    struct:
+    - name: domain
+      sequence: string
+    - name: fluency_prob
+      dtype: float64
+    - name: id
+      dtype: string
+    - name: non_advertisement_prob
+      dtype: float64
+    - name: politics_prob
+      dtype: float64
+    - name: porn_prob
+      dtype: float64
+    - name: toxic_prob
+      dtype: float64
+    - name: url
+      dtype: string
+  - name: images
+    sequence: string
+  - name: texts
+    sequence: string
+  - name: metadata
+    list:
+    - name: aesthetic_prob
+      dtype: float64
+    - name: bytes
+      dtype: int64
+    - name: d_hash
+      dtype: string
+    - name: d_hash_dup_count
+      dtype: int64
+    - name: height
+      dtype: int64
+    - name: img_url_sha
+      dtype: string
+    - name: p_hash
+      dtype: string
+    - name: p_hash_dup_count
+      dtype: int64
+    - name: unsafe_prob
+      dtype: float64
+    - name: width
+      dtype: int64
+  splits:
+  - name: train
+    num_bytes: 4952806590
+    num_examples: 875601
+  download_size: 2065207099
+  dataset_size: 4952806590
 configs:
 - config_name: CC-MAIN-2013-20
   data_files:
@@ -905,6 +957,10 @@ configs:
   data_files:
   - split: train
     path: CC-MAIN-2015-27/train-*
+- config_name: CC-MAIN-2015-32
+  data_files:
+  - split: train
+    path: CC-MAIN-2015-32/train-*
 ---
 
 We are uploading the dataset files ~
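As a sanity check, the five shard sizes from the LFS pointers sum exactly to the `download_size` the README diff reports for the new `CC-MAIN-2015-32` config:

```python
# Byte sizes of the five train shards, taken from the LFS pointer files.
shard_sizes = [411356743, 411756522, 413764299, 415369802, 412959733]

total = sum(shard_sizes)
print(total)  # 2065207099, matching download_size for CC-MAIN-2015-32
```

Note that `dataset_size` (4952806590) is larger than this total: it measures the in-memory/uncompressed Arrow size, while `download_size` is the compressed parquet bytes on disk.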