---
language:
- vi
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- question-answering
- text-classification
dataset_info:
  features:
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: answer_start
    dtype: int64
  splits:
  - name: train
    num_bytes: 54478998
    num_examples: 48460
  - name: test
    num_bytes: 6041628
    num_examples: 5385
  download_size: 33267124
  dataset_size: 60520626
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
## Dataset Description
This dataset was collected from internet sources, the SQuAD dataset, Wikipedia, and similar resources. It was translated into Vietnamese with Google Translate and word-segmented with [VnCoreNLP](https://github.com/vncorenlp/VnCoreNLP).
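Word segmentation affects how the text should be tokenized downstream: VnCoreNLP joins the syllables of multi-syllable Vietnamese words with underscores, so each whitespace-delimited token is one word. A minimal sketch of this convention (the sample sentence is illustrative, not taken from the dataset):

```python
# VnCoreNLP-style word-segmented Vietnamese: multi-syllable words
# are joined with underscores, so str.split() yields whole words.
segmented = "Hà_Nội là thủ_đô của Việt_Nam"

# Each whitespace token is one word; underscores mark merged syllables.
tokens = segmented.split()
print(tokens)  # ['Hà_Nội', 'là', 'thủ_đô', 'của', 'Việt_Nam']

# The surface form can be recovered by replacing underscores with spaces.
surface = segmented.replace("_", " ")
print(surface)  # Hà Nội là thủ đô của Việt Nam
```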
## Data Structure

The dataset contains the following columns:

- `question`: a question about the content of the passage.
- `context`: the text passage.
- `answer`: the answer to the question, drawn from the passage.
- `answer_start`: the character position at which the answer starts in the passage.
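Because `answer_start` is a character offset into `context`, the answer span can be recovered by slicing. A minimal sketch using a made-up record in the dataset's format (the values are illustrative, not a real sample):

```python
# Hypothetical record in the dataset's column layout (not a real sample).
example = {
    "context": "Việt_Nam nằm ở Đông_Nam_Á .",
    "question": "Việt_Nam nằm ở đâu ?",
    "answer": "Đông_Nam_Á",
    "answer_start": 15,
}

# answer_start is the character offset of the answer inside context,
# so slicing context at that offset recovers the answer text.
start = example["answer_start"]
span = example["context"][start:start + len(example["answer"])]
assert span == example["answer"]
print(span)  # Đông_Nam_Á
```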
## How to Use

You can load this dataset with Hugging Face's `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("ShynBui/Vietnamese_Reading_Comprehension_Dataset")
```
## Splits

The dataset is divided into train and test splits:

```python
DatasetDict({
    train: Dataset({
        features: ['context', 'question', 'answer', 'answer_start'],
        num_rows: 48460
    })
    test: Dataset({
        features: ['context', 'question', 'answer', 'answer_start'],
        num_rows: 5385
    })
})
```
## Task Categories

This dataset can be used for the following main tasks:

- question-answering
- reading-comprehension
- natural-language-processing
## Contributing

We welcome contributions to this dataset. If you find an error or have feedback, please open an Issue or Pull Request on the Hub repository.
## License

This dataset is released under the MIT License.
## Contact

If you have any questions, please contact us by email: buitienphat2462002@gmail.com.