---
dataset_info:
  features:
  - name: tokens
    sequence: string
  splits:
  - name: train
    num_bytes: 803588
    num_examples: 12290
  - name: validation
    num_bytes: 12403
    num_examples: 221
  - name: test
    num_bytes: 16702
    num_examples: 257
  download_size: 229583
  dataset_size: 832693
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
license: cc-by-4.0
language:
- yue
---
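Each split exposes a single `tokens` feature holding a list of strings per example. The snippet below is a minimal sketch of loading and inspecting the data with the `datasets` library; the repository ID shown is a placeholder, not necessarily this dataset's actual Hub path.

```python
from datasets import load_dataset

# Placeholder repository ID; substitute the actual Hub path of this dataset.
ds = load_dataset("your-namespace/hkcancor-fine-grained")

print(ds)                        # DatasetDict with train/validation/test splits
print(ds["train"][0]["tokens"])  # one example: a list of fine-grained tokens
```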
This dataset is the subset of the Hong Kong Cantonese Corpus (HKCanCor) that has been re-segmented according to the multi-tiered word segmentation scheme described in the following paper:
Charles Lam, Chaak-ming Lau, and Jackson L. Lee. 2024. Multi-Tiered Cantonese Word Segmentation. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11993–12002, Torino, Italy. ELRA and ICCL.
This Hugging Face dataset was produced by splitting on all separators (spaces, dashes, and pipes) in the original dataset to arrive at the most fine-grained segmentation, as sketched below.
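As a rough illustration (not the authors' exact conversion script), the fine-grained tokens can be obtained by splitting each multi-tier segmented string on every separator character. The example sentence is made up for illustration, not taken from the corpus.

```python
import re

def finest_tokens(line: str) -> list[str]:
    """Split a multi-tier segmented string on spaces, dashes, and pipes."""
    return [tok for tok in re.split(r"[ \-|]+", line) if tok]

# Illustrative input only; not an actual line from HKCanCor.
print(finest_tokens("聽日-下晝 我哋|一齊 去"))
# ['聽日', '下晝', '我哋', '一齊', '去']
```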