Commit 69fd0a2 (1 parent: da51e92)
admin committed: upl script

Files changed (2):
  1. README.md +277 -1
  2. erhu_playing_tech.py +172 -0
README.md CHANGED
@@ -1,3 +1,279 @@
---
license: cc-by-nc-nd-4.0  # changed from: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: Erhu Playing Technique Dataset
size_categories:
- 1K<n<10K
viewer: false
---

# Dataset Card for Erhu Playing Technique
The raw dataset is sourced from [ErhuPT](https://ccmusic-database.github.io/en/database/ccm.html#shou8), and all performances were played by professional erhu players. The clips were categorized by annotators proficient in erhu performance into 11 classes: split bow, pad bow, overtone, legato & glissando & slur, strike bow, plucked string, throw bow, staccato bow, trill, tremolo, and vibrato. For certain playing techniques, multiple audio clips are available, each played at a different dynamic. The dataset was created for, and has been used in, erhu playing technique detection. Its label system is hierarchical, with three levels in the raw dataset. The first level consists of four categories: trill, staccato, slide, and others; the second level comprises seven categories: trill\short\up, trill\long, staccato, slide up, slide\legato, slide\down, and others; the third level consists of 11 categories, the 11 playing techniques listed above.
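
The class names used at each level in this integrated version are fixed by `_NAMES` in the loading script `erhu_playing_tech.py` added in this commit; for quick reference:

```python
# Label sets of the three hierarchy levels, copied from _NAMES in
# erhu_playing_tech.py (added in this commit).
LEVEL_1 = ["trill", "staccato", "slide", "others"]  # 4_classes
LEVEL_2 = [
    "trill_short_up", "trill_long", "staccato",
    "slide_up", "slide_legato", "slide_down", "others",
]  # 7_classes
LEVEL_3 = [
    "vibrato", "trill", "tremolo", "staccato", "ricochet", "pizzicato",
    "percussive", "legato_slide_glissando", "harmonic", "diangong", "detache",
]  # 11_classes
```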

After organizing the aforementioned data, we constructed the `default` subset of the current integrated version of the dataset from its 11-class data and normalized the names of the 11 categories. The data structure can be seen in the [viewer](https://www.modelscope.cn/datasets/ccmusic-database/erhu_playing_tech/dataPeview).

Although the raw dataset has been cited in several articles, the experiments in those articles are not reproducible. To demonstrate the effectiveness of the default subset, we further processed the data and constructed the `eval` subset to supplement the evaluation of this integrated version. The evaluation results can be viewed in the [erhu_playing_tech](https://www.modelscope.cn/models/ccmusic-database/erhu_playing_tech) model.

In addition, the first-level (4-class) and second-level (7-class) labels of the raw dataset were not discarded; they were built into the separate `4_classes` and `7_classes` subsets. However, these two subsets have not been evaluated and are therefore not covered in the articles. The following are the statistical charts for the 11_classes (default), 7_classes, and 4_classes subsets:

<img src="https://www.modelscope.cn/api/v1/datasets/ccmusic-database/erhu_playing_tech/repo?Revision=master&FilePath=.%2Fdata%2Ferhu.png&View=true">
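
The per-class counts behind such charts can be recomputed from a loaded subset; a minimal sketch, using the `11_classes` config from the Usage section below:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="11_classes")
label_feat = ds["train"].features["label"]
for split in ("train", "validation", "test"):
    # Map integer class ids back to their names before counting
    counts = Counter(label_feat.int2str(item["label"]) for item in ds[split])
    print(split, dict(counts))
```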

## Viewer
<https://www.modelscope.cn/datasets/ccmusic-database/erhu_playing_tech/dataPeview>

## Dataset Structure
### Default subset
<style>
.erhu td {
    vertical-align: middle !important;
    text-align: center;
}
.erhu th {
    text-align: center;
}
</style>
<table class="erhu">
    <tr>
        <th>audio(.wav, 44100Hz)</th>
        <th>mel(.jpg, 44100Hz)</th>
        <th>label</th>
    </tr>
    <tr>
        <td><audio controls src="https://huggingface.co/datasets/ccmusic-database/erhu_playing_tech/resolve/main/data/stick_004.wav"></audio></td>
        <td><img src="./data/stick_004.jpg"></td>
        <td>4/7/11-class</td>
    </tr>
    <tr>
        <td>...</td>
        <td>...</td>
        <td>...</td>
    </tr>
</table>

### Eval subset
<table class="erhu">
    <tr>
        <th>mel</th>
        <th>cqt</th>
        <th>chroma</th>
        <th>label</th>
    </tr>
    <tr>
        <td>.jpg, 44100Hz</td>
        <td>.jpg, 44100Hz</td>
        <td>.jpg, 44100Hz</td>
        <td>11-class</td>
    </tr>
    <tr>
        <td>...</td>
        <td>...</td>
        <td>...</td>
        <td>...</td>
    </tr>
</table>
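
To inspect these schemas concretely, the features of each subset can be printed after loading; a small sketch, assuming both configs download as in the Usage section:

```python
from datasets import load_dataset

default = load_dataset("ccmusic-database/erhu_playing_tech", name="11_classes")
eval_ds = load_dataset("ccmusic-database/erhu_playing_tech", name="eval")

print(default["train"].features)  # audio (44100 Hz), mel image, class label
print(eval_ds["train"].features)  # mel, cqt, chroma images, class label

item = default["train"][0]
print(item["audio"]["sampling_rate"], item["audio"]["array"].shape)
```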

### Data Instances
.zip archives of .wav audio recordings and .jpg spectrogram images

### Data Fields
```
+ detache 分弓 (72)
    + forte (8)
    + medium (8)
    + piano (56)
+ diangong 垫弓 (28)
+ harmonic 泛音 (18)
    + natural 自然泛音 (6)
    + artificial 人工泛音 (12)
+ legato&slide&glissando 连弓&滑音&大滑音 (114)
    + glissando_down 大滑音 下行 (4)
    + glissando_up 大滑音 上行 (4)
    + huihuayin_down 下回滑音 (18)
    + huihuayin_long_down 后下回滑音 (12)
    + legato&slide_up 向上连弓 包含滑音 (24)
        + forte (8)
        + medium (8)
        + piano (8)
    + slide_dianzhi 垫指滑音 (4)
    + slide_down 向下滑音 (16)
    + slide_legato 连线滑音 (16)
    + slide_up 向上滑音 (16)
+ percussive 打击类音效 (21)
    + dajigong 大击弓 (11)
    + horse 马嘶 (2)
    + stick 敲击弓 (8)
+ pizzicato 拨弦 (96)
    + forte (30)
    + medium (29)
    + piano (30)
    + left 左手勾弦 (6)
+ ricochet 抛弓 (36)
+ staccato 顿弓 (141)
    + forte (47)
    + medium (46)
    + piano (48)
+ tremolo 颤弓 (144)
    + forte (48)
    + medium (48)
    + piano (48)
+ trill 颤音 (202)
    + long 长颤音 (141)
        + forte (46)
        + medium (47)
        + piano (48)
    + short 短颤音 (61)
        + down 下颤音 (30)
        + up 上颤音 (31)
+ vibrato 揉弦 (56)
    + late (13)
    + press 压揉 (6)
    + roll 滚揉 (28)
    + slide 滑揉 (9)
```

### Data Splits
train, validation and test (a stratified 6:2:2 split per class, as implemented in the loading script)
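
Per the loading script, each class is shuffled and split 6:2:2 into train/validation/test; the resulting proportions can be verified directly:

```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="11_classes")
total = sum(len(ds[split]) for split in ds)
for split in ds:
    print(f"{split}: {len(ds[split])} items ({len(ds[split]) / total:.0%})")
```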

## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/erhu_playing_tech>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.io/team.html>
- **Point of Contact:** <https://www.modelscope.cn/datasets/ccmusic-database/erhu_playing_tech>

### Dataset Summary
The label system of the raw dataset is hierarchical, with three levels. The first level consists of four categories: _trill, staccato, slide_, and _others_; the second level comprises seven categories: _trill\short\up, trill\long, staccato, slide up, slide\legato, slide\down_, and _others_; the third level consists of 11 categories, the 11 playing techniques described earlier. Although the system has three levels, the higher-level labels are not fully compatible downward with the lower-level labels, so the three levels cannot be merged into a single split and must be treated as three separate subsets.
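
Accordingly, each level is loaded under its own config name, and the label space differs per level; for example:

```python
from datasets import load_dataset

# One separate subset per hierarchy level
for cfg in ("4_classes", "7_classes", "11_classes"):
    ds = load_dataset("ccmusic-database/erhu_playing_tech", name=cfg)
    print(cfg, ds["train"].features["label"].names)
```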

### Supported Tasks and Leaderboards
Erhu Playing Technique Classification

### Languages
Chinese, English

## Usage
### Eval
```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="eval")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```

### 4-class
```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="4_classes")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```

### 7-class
```python
from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="7_classes")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```

### 11-class
```python
from datasets import load_dataset

# the default config
ds = load_dataset("ccmusic-database/erhu_playing_tech", name="11_classes")
for item in ds["train"]:
    print(item)

for item in ds["validation"]:
    print(item)

for item in ds["test"]:
    print(item)
```
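
Each `mel` field decodes to a PIL image, so preparing inputs for an image classifier only takes an array conversion; a minimal sketch (the normalization is illustrative, not prescribed by the dataset):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("ccmusic-database/erhu_playing_tech", name="11_classes")

def to_array(example):
    # Decode the mel-spectrogram image into a normalized float array
    example["pixel_values"] = (
        np.asarray(example["mel"].convert("RGB"), dtype=np.float32) / 255.0
    )
    return example

sample = to_array(ds["train"][0])
print(sample["pixel_values"].shape, sample["label"])
```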

## Maintenance
```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/erhu_playing_tech
cd erhu_playing_tech
```

## Dataset Creation
### Curation Rationale
Lack of a dataset for erhu playing techniques

### Source Data
#### Initial Data Collection and Normalization
Zhaorui Liu, Monan Zhou

#### Who are the source language producers?
Students from CCMUSIC

### Annotations
#### Annotation process
This is an audio dataset containing 927 audio clips recorded by the China Conservatory of Music, each annotated with an erhu playing technique.

#### Who are the annotators?
Students from CCMUSIC

### Personal and Sensitive Information
None

## Considerations for Using the Data
### Social Impact of Dataset
Advances the digitization of traditional Chinese instruments

### Discussion of Biases
Covers the erhu only

### Other Known Limitations
The categorization is not sufficiently specific

## Additional Information
### Dataset Curators
Zijin Li

### Evaluation
[Wang, Zehao et al. "Musical Instrument Playing Technique Detection Based on FCN: Using Chinese Bowed-Stringed Instrument as an Example." ArXiv abs/1910.09021 (2019).](https://arxiv.org/pdf/1910.09021.pdf)

### Citation Information
```bibtex
@dataset{zhaorui_liu_2021_5676893,
  author    = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
  title     = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month     = {mar},
  year      = {2024},
  publisher = {HuggingFace},
  version   = {1.2},
  url       = {https://huggingface.co/ccmusic-database}
}
```

### Contributions
Provides a dataset for erhu playing technique research
erhu_playing_tech.py ADDED
@@ -0,0 +1,172 @@
import os
import random
import hashlib

import datasets

# Class names for the three label levels of the raw dataset
_NAMES = {
    "4_classes": [
        "trill",
        "staccato",
        "slide",
        "others",
    ],
    "7_classes": [
        "trill_short_up",
        "trill_long",
        "staccato",
        "slide_up",
        "slide_legato",
        "slide_down",
        "others",
    ],
    "11_classes": [
        "vibrato",
        "trill",
        "tremolo",
        "staccato",
        "ricochet",
        "pizzicato",
        "percussive",
        "legato_slide_glissando",
        "harmonic",
        "diangong",
        "detache",
    ],
}

_DBNAME = os.path.basename(__file__).split(".")[0]

_DOMAIN = f"https://www.modelscope.cn/api/v1/datasets/ccmusic-database/{_DBNAME}/repo?Revision=master&FilePath=data"

_HOMEPAGE = f"https://www.modelscope.cn/datasets/ccmusic-database/{_DBNAME}"

_URLS = {
    "audio": f"{_DOMAIN}/audio.zip",
    "mel": f"{_DOMAIN}/mel.zip",
    "eval": f"{_DOMAIN}/eval.zip",
}


class erhu_playing_tech(datasets.GeneratorBasedBuilder):
    def _info(self):
        # The default config is an alias for the 11-class subset
        if self.config.name == "default":
            self.config.name = "11_classes"

        return datasets.DatasetInfo(
            features=(
                datasets.Features(
                    {
                        "audio": datasets.Audio(sampling_rate=44100),
                        "mel": datasets.Image(),
                        "label": datasets.features.ClassLabel(
                            names=_NAMES[self.config.name]
                        ),
                    }
                )
                if self.config.name != "eval"
                # The eval subset carries three image features instead of audio
                else datasets.Features(
                    {
                        "mel": datasets.Image(),
                        "cqt": datasets.Image(),
                        "chroma": datasets.Image(),
                        "label": datasets.features.ClassLabel(
                            names=_NAMES["11_classes"]
                        ),
                    }
                )
            ),
            homepage=_HOMEPAGE,
            license="CC-BY-NC-ND",
            version="1.2.0",
        )

    def _str2md5(self, original_string: str):
        # Hash "<subset>/<class>/<stem>" so each .wav pairs with its .jpg
        md5_obj = hashlib.md5()
        md5_obj.update(original_string.encode("utf-8"))
        return md5_obj.hexdigest()

    def _split_generators(self, dl_manager):
        if self.config.name != "eval":
            audio_files = dl_manager.download_and_extract(_URLS["audio"])
            mel_files = dl_manager.download_and_extract(_URLS["mel"])
            files = {}
            # Collect audio files of the requested subset, keyed by item id
            for fpath in dl_manager.iter_files([audio_files]):
                fname = os.path.basename(fpath)
                dirname = os.path.dirname(fpath)
                subset = os.path.basename(os.path.dirname(dirname))
                if self.config.name == subset and fname.endswith(".wav"):
                    cls = f"{subset}/{os.path.basename(dirname)}/"
                    item_id = self._str2md5(cls + fname.split(".wa")[0])
                    files[item_id] = {"audio": fpath}

            # Attach the matching mel spectrogram to each audio item
            for fpath in dl_manager.iter_files([mel_files]):
                fname = os.path.basename(fpath)
                dirname = os.path.dirname(fpath)
                subset = os.path.basename(os.path.dirname(dirname))
                if self.config.name == subset and fname.endswith(".jpg"):
                    cls = f"{subset}/{os.path.basename(dirname)}/"
                    item_id = self._str2md5(cls + fname.split(".jp")[0])
                    files[item_id]["mel"] = fpath

            dataset = list(files.values())

        else:
            eval_files = dl_manager.download_and_extract(_URLS["eval"])
            dataset = []
            # Eval items are keyed by the mel image; the label precedes "__"
            for fpath in dl_manager.iter_files([eval_files]):
                fname: str = os.path.basename(fpath)
                if "_mel" in fname and fname.endswith(".jpg"):
                    dataset.append({"mel": fpath, "label": fname.split("__")[0]})

        # Group items by class for a stratified split
        categories = {}
        names = _NAMES["11_classes" if "eval" in self.config.name else self.config.name]
        for name in names:
            categories[name] = []

        for data in dataset:
            if self.config.name != "eval":
                # The parent directory of the audio file is the class name
                data["label"] = os.path.basename(os.path.dirname(data["audio"]))

            categories[data["label"]].append(data)

        # 6:2:2 train/validation/test split per class
        # (random is not seeded, so splits vary between runs)
        testset, validset, trainset = [], [], []
        for cls in categories:
            random.shuffle(categories[cls])
            count = len(categories[cls])
            p60 = int(count * 0.6)
            p80 = int(count * 0.8)
            trainset += categories[cls][:p60]
            validset += categories[cls][p60:p80]
            testset += categories[cls][p80:]

        random.shuffle(trainset)
        random.shuffle(validset)
        random.shuffle(testset)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"files": trainset}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"files": validset}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST, gen_kwargs={"files": testset}
            ),
        ]

    def _generate_examples(self, files):
        if self.config.name != "eval":
            for i, item in enumerate(files):
                yield i, item

        else:
            # cqt and chroma images follow the mel image's naming pattern
            for i, item in enumerate(files):
                yield i, {
                    "mel": item["mel"],
                    "cqt": item["mel"].replace("_mel", "_cqt"),
                    "chroma": item["mel"].replace("_mel", "_chroma"),
                    "label": item["label"],
                }