---
license: apache-2.0
dataset_info:
  features:
    - name: 'Unnamed: 0'
      dtype: int64
    - name: text
      dtype: string
    - name: timestamp
      dtype: string
    - name: url
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_bytes: 2442035868
      num_examples: 589988
  download_size: 1250972059
  dataset_size: 2442035868
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Data sources come from the following categories:

1. Web crawler dataset (list of crawled domains):

   • Website UET (VNU University of Engineering and Technology): tuyensinh.uet.vnu.edu.vn; new.uet.vnu.edu.vn
   • Website HUS (VNU University of Science): hus.vnu.edu.vn
   • Website UEB (VNU University of Economics and Business): ueb.vnu.edu.vn
   • Website IS (VNU International School): is.vnu.edu.vn
   • Website Education (VNU University of Education): education.vnu.edu.vn
   • Website VNU Press (NXB ĐHQG): press.vnu.edu.vn

2. Existing public datasets:

   1. CC100: link to CC100 vi
   2. Vietnews: link to bk vietnews dataset
   3. C4_vi: link to C4_vi

The Toxic folder stores demo files for toxic filtering. We filtered the C4 validation dataset, a sample of the Vietnews dataset, and a part (1/50) of the CC100_vi dataset. After this process, each dataset is split into a non-toxic part and a toxic part.
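The exact filtering criteria are not documented in this card; as an illustrative sketch only, a keyword-based splitter (the `BLOCKLIST` terms and `split_toxic` name are hypothetical) could divide records into non-toxic and toxic parts like this:

```python
import re

# Hypothetical blocklist; the real filter criteria for this dataset are not published here.
BLOCKLIST = {"badword1", "badword2"}

def split_toxic(records):
    """Split records into (nontoxic, toxic) lists by simple keyword matching."""
    nontoxic, toxic = [], []
    for rec in records:
        words = set(re.findall(r"\w+", rec["text"].lower()))
        (toxic if words & BLOCKLIST else nontoxic).append(rec)
    return nontoxic, toxic

records = [
    {"text": "A clean sentence."},
    {"text": "This contains badword1."},
]
clean, bad = split_toxic(records)
```

In practice, toxic filtering for Vietnamese web text is usually done with a larger lexicon or a trained classifier; this sketch only shows the split into two output parts.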

The Dedup folder stores the files produced by deduplicating the files above.
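The deduplication method itself is not specified in this card; a minimal sketch, assuming exact-match deduplication on a hash of whitespace-normalized, lowercased text, might look like:

```python
import hashlib

def dedup(records):
    """Keep the first occurrence of each normalized text; drop exact duplicates."""
    seen, unique = set(), []
    for rec in records:
        normalized = " ".join(rec["text"].split()).lower()
        key = hashlib.md5(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

docs = [
    {"text": "Xin chào  Việt Nam"},
    {"text": "xin chào việt nam"},   # duplicate after normalization
    {"text": "A different document"},
]
unique = dedup(docs)
```

Large-scale pipelines often use near-duplicate methods (e.g. MinHash) instead of exact hashing; the actual technique used here is not stated.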
The Toxic_2, Dedup_2, and Tokenized_2 folders hold the results of a second pass, run on 17 files of the C4_vi dataset containing about 1B tokens.
The folders with index 3 and the final folders hold the best filtering result for C4_vi, containing 1,156,218,780 tokens from the first 20 files.
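The token counts above were presumably produced with the project's own tokenizer, which is not named in this card; as a rough stand-in, whitespace tokens across a set of documents can be counted like this:

```python
def count_tokens(texts):
    """Approximate corpus size as the number of whitespace-separated tokens."""
    return sum(len(t.split()) for t in texts)

docs = ["một hai ba", "bốn năm"]
total = count_tokens(docs)
```

Subword tokenizers typically yield noticeably higher counts than whitespace splitting, so this is only a lower-bound approximation.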