Anjaly committed on
Commit 871757f
1 Parent(s): e4c5d6c

dataset card, loading script

Files changed (2)
  1. README.md +209 -0
  2. snow_mountain.py +186 -0
README.md ADDED
@@ -0,0 +1,209 @@
+ ---
+ pretty_name: 'Snow Mountain'
+ language:
+ - hi
+ - bgc
+ - kfs
+ - dgo
+ - bhd
+ - gbk
+ - xnr
+ - kfx
+ - mjl
+ - kfo
+ - bfz
+ annotations_creators:
+ - ?
+ language_creators:
+ - ?
+ license:
+ - cc-by-sa-4.0
+ multilinguality:
+ - multilingual
+ size_categories:
+ -
+ source_datasets:
+ - Snow Mountain
+ tags: []
+ task_categories:
+ - automatic-speech-recognition
+ task_ids: []
+ configs:
+ - hi
+ - bgc
+ dataset_info:
+ - config_name: hi
+   features:
+   - name: Unnamed
+     dtype: int64
+   - name: sentence
+     dtype: string
+   - name: path
+     dtype: string
+   splits:
+   - name: train_500
+     num_examples: 400
+   - name: val_500
+     num_examples: 100
+   - name: train_1000
+     num_examples: 800
+   - name: val_1000
+     num_examples: 200
+   - name: test_common
+     num_examples: 500
+   dataset_size: 71.41 hrs
+ - config_name: bgc
+   features:
+   - name: Unnamed
+     dtype: int64
+   - name: sentence
+     dtype: string
+   - name: path
+     dtype: string
+   splits:
+   - name: train_500
+     num_examples: 400
+   - name: val_500
+     num_examples: 100
+   - name: train_1000
+     num_examples: 800
+   - name: val_1000
+     num_examples: 200
+   - name: test_common
+     num_examples: 500
+   dataset_size: 27.41 hrs
+
+ ---
+
+ # Dataset Card for Snow Mountain
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** https://gitlabdev.bridgeconn.com/software/research/datasets/snow-mountain
+ - **Paper:** https://arxiv.org/abs/2206.01205
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of the Bible in 11 Indian languages. The recordings were done in a studio setting by native speakers, with a single speaker per language. Most of these languages are geographically concentrated in northern India, around the state of Himachal Pradesh. Being related to Hindi, they all use the Devanagari script for transcription.
+
+ We have used this dataset for ASR experiments, but it could also be used for other speech applications such as speaker recognition, language identification, or as an unlabelled corpus for pre-training.
+
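+ A minimal loading sketch (the Hugging Face repo id below is an assumption based on this repository; the loading script in this commit registers only a `hindi` config):
+
+ ```python
+ from datasets import load_dataset
+
+ # Hypothetical repo id; adjust to wherever the dataset is actually hosted.
+ ds = load_dataset("bridgeconn/snow-mountain", "hindi", split="train_500")
+ print(ds[0]["sentence"], ds[0]["path"])
+ ```
+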
+ ### Supported Tasks and Leaderboards
+
+ Automatic speech recognition, speaker recognition, language identification
+
+ ### Languages
+
+ Hindi, Haryanvi, Bilaspuri, Dogri, Bhadrawahi, Gaddi, Kangri, Kulvi, Mandeali, Kulvi Outer Seraji, Pahari Mahasui
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ The Bible recordings were done in a studio setting by native speakers.
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ The data is licensed under the Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0).
+
+
+ ### Citation Information
+
+ @inproceedings{Raju2022SnowMD,
+   title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
+   author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
+   year={2022}
+ }
+
+ ### Contributions
+
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
snow_mountain.py ADDED
@@ -0,0 +1,186 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Loading script for the Snow Mountain dataset."""
+
17
+ import os
18
+ import csv
19
+ import json
20
+ import pandas as pd
21
+
22
+ import datasets
23
+
24
+
25
+ _CITATION = """\
26
+ @inproceedings{Raju2022SnowMD,
27
+ title={Snow Mountain: Dataset of Audio Recordings of The Bible in Low Resource Languages},
28
+ author={Kavitha Raju and V. Anjaly and R. Allen Lish and Joel Mathew},
29
+ year={2022}
30
+ }
31
+
32
+ """
33
+
34
+ _DESCRIPTION = """\
35
+ The Snow Mountain dataset contains the audio recordings (in .mp3 format) and the corresponding text of The Bible
36
+ in 11 Indian languages. The recordings were done in a studio setting by native speakers. Each language has a single
37
+ speaker in the dataset. Most of these languages are geographically concentrated in the Northern part of India around
38
+ the state of Himachal Pradesh. Being related to Hindi they all use the Devanagari script for transcription.
39
+ """
40
+
41
+ _HOMEPAGE = "https://gitlabdev.bridgeconn.com/software/research/datasets/snow-mountain"
42
+
43
+ _LICENSE = ""
44
+
45
+ _URL = "https://gitlabdev.bridgeconn.com/software/research/datasets/snow-mountain/"
46
+
47
+ _FILES = {}
48
+ _LANGUAGES = ['hindi']
49
+ for lang in _LANGUAGES:
50
+ file_dic = {
51
+ "train_500": f"data/experiments/{lang}/train_500.csv",
52
+ "val_500": f"data/experiments/{lang}/val_500.csv",
53
+ "train_1000": f"data/experiments/{lang}/train_1000.csv",
54
+ "val_1000": f"data/experiments/{lang}/val_1000.csv",
55
+ "train_2500": f"data/experiments/{lang}/train_2500.csv",
56
+ "val_2500": f"data/experiments/{lang}/val_2500.csv",
57
+ "train_short": f"data/experiments/{lang}/train_short.csv",
58
+ "val_short": f"data/experiments/{lang}/val_short.csv",
59
+ "train_full": f"data/experiments/{lang}/train_full.csv",
60
+ "val_full": f"data/experiments/{lang}/val_full.csv",
61
+ "test_common": f"data/experiments/{lang}/test_common.csv",
62
+ }
63
+ _FILES[lang] = file_dic
64
+
65
+
66
+ class Test(datasets.GeneratorBasedBuilder):
67
+
68
+ VERSION = datasets.Version("1.0.0")
69
+
70
+ BUILDER_CONFIGS = []
71
+ for lang in _LANGUAGES:
72
+ text = lang.capitalize()+" data"
73
+ BUILDER_CONFIGS.append(datasets.BuilderConfig(name=f"{lang}", version=VERSION, description=text))
74
+
75
+
76
+ DEFAULT_CONFIG_NAME = "hindi"
77
+
78
+ def _info(self):
79
+ features = datasets.Features(
80
+ {
81
+ "sentence": datasets.Value("string"),
82
+ "audio": datasets.Audio(sampling_rate=16_000),
83
+ "path": datasets.Value("string"),
84
+ }
85
+ )
86
+ return datasets.DatasetInfo(
87
+ description=_DESCRIPTION,
88
+ features=features,
89
+ supervised_keys=("sentence", "path"),
90
+ homepage=_HOMEPAGE,
91
+ license=_LICENSE,
92
+ citation=_CITATION,
93
+ )
94
+
95
+ def _split_generators(self, dl_manager):
96
+
97
+ downloaded_files = dl_manager.download(_FILES[self.config.name])
98
+
99
+ train_splits = [
100
+ datasets.SplitGenerator(
101
+ name="train_500",
102
+ gen_kwargs={
103
+ "filepath": downloaded_files["train_500"],
104
+ },
105
+ ),
106
+ datasets.SplitGenerator(
107
+ name="train_1000",
108
+ gen_kwargs={
109
+ "filepath": downloaded_files["train_1000"],
110
+ },
111
+ ),
112
+ datasets.SplitGenerator(
113
+ name="train_2500",
114
+ gen_kwargs={
115
+ "filepath": downloaded_files["train_2500"],
116
+ },
117
+ ),
118
+ datasets.SplitGenerator(
119
+ name="train_short",
120
+ gen_kwargs={
121
+ "filepath": downloaded_files["train_short"],
122
+ },
123
+ ),
124
+ datasets.SplitGenerator(
125
+ name="train_full",
126
+ gen_kwargs={
127
+ "filepath": downloaded_files["train_full"],
128
+ },
129
+ ),
130
+ ]
131
+
132
+ dev_splits = [
133
+ datasets.SplitGenerator(
134
+ name="val_500",
135
+ gen_kwargs={
136
+ "filepath": downloaded_files["val_500"],
137
+ },
138
+ ),
139
+ datasets.SplitGenerator(
140
+ name="val_1000",
141
+ gen_kwargs={
142
+ "filepath": downloaded_files["val_1000"],
143
+ },
144
+ ),
145
+ datasets.SplitGenerator(
146
+ name="val_2500",
147
+ gen_kwargs={
148
+ "filepath": downloaded_files["val_2500"],
149
+ },
150
+ ),
151
+ datasets.SplitGenerator(
152
+ name="val_short",
153
+ gen_kwargs={
154
+ "filepath": downloaded_files["val_short"],
155
+ },
156
+ ),
157
+ datasets.SplitGenerator(
158
+ name="val_full",
159
+ gen_kwargs={
160
+ "filepath": downloaded_files["val_full"],
161
+ },
162
+ ),
163
+ ]
164
+
165
+ test_splits = [
166
+ datasets.SplitGenerator(
167
+ name="test_common",
168
+ gen_kwargs={
169
+ "filepath": downloaded_files["test_common"],
170
+ },
171
+ ),
172
+ ]
173
+ return train_splits + dev_splits + test_splits
174
+
175
+
176
+ def _generate_examples(self, filepath):
177
+ key = 0
178
+ with open(filepath) as f:
179
+ data_df = pd.read_csv(f,sep=',')
180
+ transcripts = []
181
+ for index,row in data_df.iterrows():
182
+ yield key, {
183
+ "sentence": row["sentence"],
184
+ "path": row["path"],
185
+ }
186
+ key+=1
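A quick local smoke test of the script (a sketch; it assumes the `data/experiments/hindi/*.csv` files are reachable relative to the script, and newer `datasets` versions may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

# Point load_dataset at the local loading script; the config name
# must match an entry in _LANGUAGES ("hindi" is the only one here).
ds = load_dataset("./snow_mountain.py", "hindi", split="test_common")
print(len(ds), ds[0]["sentence"])
```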