---
license: cc-by-sa-3.0
task_categories:
- question-answering
- summarization
language:
- th
tags:
- instruction-finetuning
size_categories:
- 10K<n<100K
---
# Summary
🇹🇭 A Thai instruction dataset machine-translated from gbharti/wealth-alpaca_lora using Google Cloud Translation. The source dataset combines Stanford Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/), plus roughly 1.3k additional pairs custom-generated with GPT-3.5.

A script for fine-tuning with PEFT/LoRA on Kaggle's (https://www.kaggle.com) free resources is available at https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora.
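As a minimal quick-start sketch, the snippet below loads the dataset and attaches LoRA adapters for instruction fine-tuning with PEFT. The repository id `your-username/wealth-alpaca-th`, the base model name, and the `instruction`/`input`/`output` field names are assumptions (the fields follow the upstream Alpaca schema) and are not confirmed by this card; adjust them to match the actual dataset.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Hypothetical names -- swap in this dataset's actual Hub id and your base model.
DATASET_ID = "your-username/wealth-alpaca-th"
BASE_MODEL = "meta-llama/Llama-2-7b-hf"

train_data = load_dataset(DATASET_ID, split="train")
print(train_data[0])  # inspect one instruction/input/output record

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the base model with LoRA adapters so only a small set of weights trains.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
               target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)

def to_features(example):
    # Alpaca-style prompt; field names assume the upstream wealth-alpaca_lora schema.
    prompt = f"### Instruction:\n{example['instruction']}\n\n"
    if example.get("input"):
        prompt += f"### Input:\n{example['input']}\n\n"
    prompt += f"### Response:\n{example['output']}"
    return tokenizer(prompt, truncation=True, max_length=512)

train_data = train_data.map(to_features)

Trainer(
    model=model,
    train_dataset=train_data,
    args=TrainingArguments(output_dir="wealth-alpaca-th-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           fp16=True),
    # mlm=False gives causal-LM labels copied from input_ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

The linked Kaggle notebook uses the same PEFT/LoRA approach; the sketch above is only an illustration, and batch size, sequence length, and target modules should be tuned to the available hardware.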