---
dataset_info:
  features:
  - name: uid
    dtype: string
  - name: file_id
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: sentence
    dtype: string
  - name: n_segment
    dtype: int32
  - name: duration_ms
    dtype: float32
  - name: language
    dtype: string
  - name: sample_rate
    dtype: int32
  - name: course
    dtype: string
  - name: sentence_length
    dtype: int32
  - name: n_tokens
    dtype: int32
  splits:
  - name: train
    num_bytes: 99661277809.752
    num_examples: 75924
  download_size: 83572532883
  dataset_size: 99661277809.752
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- automatic-speech-recognition
language:
- he
size_categories:
- 10K<n<100K
---


## Data Description

Hebrew Speech Recognition dataset from [Campus IL](https://campus.gov.il/).     

Data was scraped from the Campus website, which hosts video lectures from various courses in Hebrew.  
Subtitles were then extracted from the videos and aligned with the audio.  
Subtitles that are not in Hebrew were removed (WIP: non-Hebrew audio still needs to be removed as well, e.g. with a simple language classifier).  
Samples shorter than 3 seconds were removed.  
The total duration of the dataset is 152 hours.  
Outliers in terms of the duration/character ratio were not removed, so some sentences may appear suspiciously long or short relative to their duration.  
WIP: the dataset is suspiciously large and needs to be fixed (the original 22050 Hz files are probably still included). If loading is slow, clone the repository instead:  
`git clone https://huggingface.co/datasets/imvladikon/hebrew_speech_campus && cd hebrew_speech_campus && git lfs pull`  
and load it from the folder: `load_dataset("./hebrew_speech_campus")`
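
Since duration/character-ratio outliers were not removed, the sketch below shows one way to drop them yourself. It is a minimal illustration, not part of the dataset tooling: the thresholds are assumptions, and only the documented `duration_ms` and `sentence_length` fields are used.

```python
from datasets import load_dataset

# Stream the split so the ~100 GB of audio is not downloaded up front.
ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)

# Illustrative thresholds (assumptions, not dataset constants): keep samples
# whose duration per character lies in a plausible speech-rate range.
MIN_MS_PER_CHAR = 30.0
MAX_MS_PER_CHAR = 300.0

def is_plausible(sample):
    ms_per_char = sample["duration_ms"] / max(sample["sentence_length"], 1)
    return MIN_MS_PER_CHAR <= ms_per_char <= MAX_MS_PER_CHAR

filtered = ds.filter(is_plausible)
print(next(iter(filtered))["sentence"])
```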

## Data Format

Audio files are in WAV format: 16 kHz sampling rate, 16-bit, mono. Ignore the `path` field; use the `audio.array` field value instead.

## Data Usage
```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
print(next(iter(ds)))
```
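
As noted in the Data Format section, the decoded `audio.array` should be used rather than `path`. The following minimal sketch (an illustration, not an official API) reads one streamed sample and cross-checks the waveform length against the reported `duration_ms`:

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
sample = next(iter(ds))

# Use the decoded waveform, not the original file path.
waveform = sample["audio"]["array"]
sr = sample["audio"]["sampling_rate"]

duration_ms = len(waveform) / sr * 1000
print(sample["sentence"])
print(f"from array: {duration_ms:.1f} ms, reported: {sample['duration_ms']:.1f} ms")
```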

## Data Sample
```
{'uid': '10c3eda27cf173ab25bde755d0023abed301fcfd',
 'file_id': '10c3eda27cf173ab25bde755d0023abed301fcfd_13',
 'audio': {'path': '/content/hebrew_speech_campus/data/from_another_angle-_mathematics_teaching_practices/10c3eda27cf173ab25bde755d0023abed301fcfd_13.wav',
  'array': array([ 5.54326562e-07,  3.60812592e-05, -2.35188054e-04, ...,
          2.34067178e-04,  1.55649337e-04,  6.32447700e-05]),
  'sampling_rate': 16000},
 'sentence': 'ื”ื“ื•ื‘ืจื™ื ืฆืจื™ื›ื™ื ืœืงื—ืช ืขืœื™ื• ืื—ืจื™ื•ืช, ื•ืœื”ื™ื•ืช ืžื—ื•ื™ื‘ื™ื ืœื• ื›ืœื•ืžืจ, ื”ืฉื™ื— ืฆืจื™ืš ืœื”ื™ื•ืช ืžื—ื•ื™ื‘',
 'n_segment': 13,
 'duration_ms': 6607.98193359375,
 'language': 'he',
 'sample_rate': 16000,
 'course': 'from_another_angle-_mathematics_teaching_practices',
 'sentence_length': 79,
 'n_tokens': 13}
```

## Data Splits and Stats
Split: train    
Number of samples: 75924    
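
The quoted 152-hour total can be re-derived from the `duration_ms` field. The sketch below is only an illustration under those assumptions: it streams the whole split, drops the audio column so the waveforms are not decoded, and sums the per-sample durations (which still means iterating over all 75,924 rows).

```python
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True)
ds = ds.remove_columns("audio")  # metadata only; skip decoding the waveforms

n = 0
total_ms = 0.0
for sample in ds:
    n += 1
    total_ms += sample["duration_ms"]

print(f"{n} samples, {total_ms / 3_600_000:.1f} hours")
```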

## Citation

Please cite the following if you use this dataset in your work:     

```
@misc{imvladikon2023hebrew_speech_campus,
  author = {Gurevich, Vladimir},
  title = {Hebrew Speech Recognition Dataset: Campus},
  year = {2023},
  howpublished = {\url{https://huggingface.co/datasets/imvladikon/hebrew_speech_campus}},
}
```