---
license: apache-2.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: link
    dtype: string
  - name: publish
    struct:
    - name: $date
      dtype: string
  - name: text
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 2386016762
    num_examples: 566813
  download_size: 1178341444
  dataset_size: 2386016762
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
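The features above describe the record schema. As a quick, hedged illustration, the dataset can be loaded with the Hugging Face `datasets` library; the repository id below is a placeholder, since this card does not state the actual Hub path.

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the actual dataset path on the Hub.
ds = load_dataset("your-org/your-dataset-name", split="train")

# Each record follows the schema declared in the metadata above:
# id (int64), link (string), publish.$date (string), text (string), __index_level_0__ (int64)
example = ds[0]
print(example["id"], example["link"])
print(example["publish"]["$date"])   # publish is a struct with a single $date field
print(example["text"][:200])         # first 200 characters of the document text
```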
Data sources come from the following categories:

1. Web crawler dataset:
   - Website UET (VNU University of Engineering and Technology): tuyensinh.uet.vnu.edu.vn; new.uet.vnu.edu.vn
   - Website HUS (VNU University of Science): hus.vnu.edu.vn
   - Website UEB (VNU University of Economics and Business): ueb.vnu.edu.vn
   - Website IS (VNU International School): is.vnu.edu.vn
   - Website Education (VNU University of Education): education.vnu.edu.vn
   - Website VNU Press (NXB ĐHQG): press.vnu.edu.vn

   List of crawled domains
2. CC100: link to CC100 vi
3. Vietnews: link to bk vietnews dataset
4. C4_vi: link to C4_vi
The Toxic folder stores demo files for toxic filtering. We filtered the C4_validation dataset, the Vietnews samples dataset, and a part (1/50) of the CC100_vi dataset. After this process, each dataset is split into a non-toxic part and a toxic part.
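The card does not specify how the toxic filtering was performed; the sketch below only illustrates one simple possibility (a keyword blocklist), where vi_toxic_words.txt is a hypothetical word list, and it is not the actual pipeline.

```python
import re

# Minimal keyword-based filtering sketch; the real method is not described in this card.
# "vi_toxic_words.txt" is a hypothetical blocklist file, one term per line.
with open("vi_toxic_words.txt", encoding="utf-8") as f:
    toxic_terms = {line.strip().lower() for line in f if line.strip()}

def is_toxic(text: str) -> bool:
    # Flag a document if any of its tokens appears in the blocklist.
    tokens = re.findall(r"\w+", text.lower())
    return any(tok in toxic_terms for tok in tokens)

def split_toxic(records):
    # Split {"text": ...} records into (nontoxic, toxic) parts,
    # mirroring the two output parts described above.
    nontoxic, toxic = [], []
    for rec in records:
        (toxic if is_toxic(rec["text"]) else nontoxic).append(rec)
    return nontoxic, toxic
```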
The Dedup folder stores the files produced by the deduplication process applied to the files above.
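Likewise, the deduplication method is not described here; a minimal exact-duplicate sketch over whitespace-normalized text might look like the following.

```python
import hashlib

def dedup_exact(records):
    # Exact deduplication sketch: hash the normalized text and keep the first
    # occurrence of each hash. This is an assumed approach, not necessarily the
    # one used to produce the Dedup folder.
    seen = set()
    unique = []
    for rec in records:
        normalized = " ".join(rec["text"].split())
        key = hashlib.md5(normalized.encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```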