---
tags:
- audio
- khmer
- english
- speech-to-text
- translation
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: kh
    dtype: string
  - name: en
    dtype: string
  splits:
  - name: train
    num_bytes: 2387180819.0
    num_examples: 18000
  - name: test
    num_bytes: 239838689.75
    num_examples: 1850
  download_size: 2381427254
  dataset_size: 2627019508.75
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Dataset Card for khmer-speech-large-english-google-translation
Audio recordings of Khmer speech with varying speakers and background noise.
English translations were generated from the Khmer transcriptions using Google Translate.
Based on [seanghay/khmer-speech-large](https://huggingface.co/datasets/seanghay/khmer-speech-large).
## Dataset Details
### Dataset Description
- **Language(s) (NLP):** Khmer, English
- **License:** [More Information Needed]
### Dataset Sources
- **Huggingface:** [seanghay/khmer-speech-large](https://huggingface.co/datasets/seanghay/khmer-speech-large)
## Usage
```python
from datasets import load_dataset
ds = load_dataset("djsamseng/khmer-speech-large-english-google-translations")
ds["train"] # 18,000 records
ds["test"] # Remaining 1,850 records
ds["train"][0]["audio"] # { "array": [0.01, 0.02, ...], "sampling_rate": 16000 }
ds["train"][0]["kh"] # Khmer transcription (string)
ds["train"][0]["en"] # "Live in a society that recognizes and values as well as behaves in a way that pleases you"
```
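The `audio` column decodes to a dict with `array` and `sampling_rate` keys, so a clip's length can be computed directly from a row. The helper below is a minimal sketch; `clip_duration_seconds` is an illustrative name, not part of the dataset or the `datasets` API:

```python
def clip_duration_seconds(example):
    """Length of one audio clip in seconds, from its decoded samples.

    `example` is a single row such as ds["train"][0].
    """
    audio = example["audio"]
    return len(audio["array"]) / audio["sampling_rate"]

# Usage: clip_duration_seconds(ds["train"][0])
```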
## Data Cleaning
- If desired, remove zero-width space (`\u200b`) characters from both "en" and "kh"
- If desired, replace the curly apostrophe `’` with `'` in "en"
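The two cleanup steps above can be sketched as a single mapping function; `clean_example` is a hypothetical helper name, and the function is meant to be passed to `Dataset.map`:

```python
def clean_example(example):
    """Strip zero-width spaces from both text columns and
    normalize curly apostrophes in the English column."""
    kh = example["kh"].replace("\u200b", "")
    en = example["en"].replace("\u200b", "").replace("\u2019", "'")
    return {"kh": kh, "en": en}

# Usage: ds = ds.map(clean_example)
```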