---
dataset_info:
  features:
  - name: audio_tokens
    sequence: int64
  - name: genre_id
    dtype: int64
  - name: genre
    dtype: string
  - name: song_id
    dtype: int64
  splits:
  - name: train
    num_bytes: 479627928
    num_examples: 19909
  - name: test
    num_bytes: 122306220
    num_examples: 5076
  download_size: 123311267
  dataset_size: 601934148
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset contains tokenized audio from [lewtun/music_genres](https://huggingface.co/datasets/lewtun/music_genres), encoded with [SemantiCodec](https://arxiv.org/abs/2405.00233), for experiments on autoregressive (AR) music generation.
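
A minimal sketch for loading the tokenized splits; the repo id below is a placeholder, so substitute the actual `user_name/repo_name` this card lives under:
```python
from datasets import load_dataset

# "user/semanticodec-music-genres" is a placeholder id; use this dataset's actual repo id
ds = load_dataset("user/semanticodec-music-genres")

example = ds["train"][0]
print(example["genre"], example["genre_id"], example["song_id"])
print(len(example["audio_tokens"]))  # flattened SemantiCodec token sequence
```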

The following script was used for tokenization:
```python
import soundfile as sf
from datasets import load_dataset, Dataset, DatasetDict
from semanticodec import SemantiCodec
from tqdm import tqdm

dataset_id = "lewtun/music_genres"
repo_name = ""   # name of the output dataset repo
user_name = ""   # your Hugging Face username
token = ""       # Hugging Face write token
cache_dir = "cache"
vocab_size = 4096

dataset = load_dataset(dataset_id, cache_dir=cache_dir, trust_remote_code=True)

# 100 tokens per second with a 4096-entry semantic codebook
semanticodec = SemantiCodec(token_rate=100, semantic_vocab_size=vocab_size)

dd = {}

for split in ["train", "test"]:
    tkns = []
    for idx in tqdm(range(len(dataset[split]))):
        sample = dataset[split][idx]["audio"]
        array = sample["array"]
        sr = sample["sampling_rate"]

        # SemantiCodec's encode expects a file path, so round-trip through a temporary wav
        sf.write("output.wav", array, sr)

        tokens = semanticodec.encode("output.wav").detach().cpu().numpy().flatten()
        tkns.append(tokens)

    dd[split] = Dataset.from_dict({
        "audio_tokens": tkns,
        "genre_id": list(dataset[split]["genre_id"]),
        "genre": list(dataset[split]["genre"]),
        "song_id": list(dataset[split]["song_id"]),
    })

dd = DatasetDict(dd)
dd.save_to_disk(repo_name)
dd.push_to_hub(f"{user_name}/{repo_name}", token=token)
```
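
To turn stored tokens back into audio, SemantiCodec's `decode` can be used. A hedged sketch: the tokens were flattened before saving, so they must be reshaped back to the layout `encode` originally returned; the `(1, -1, 2)` shape below is an assumption about that layout and may need adjusting for your SemantiCodec version. The repo id is again a placeholder.
```python
import soundfile as sf
import torch
from datasets import load_dataset
from semanticodec import SemantiCodec

# Same codec configuration as was used for tokenization
semanticodec = SemantiCodec(token_rate=100, semantic_vocab_size=4096)

ds = load_dataset("user/semanticodec-music-genres")  # placeholder repo id

# Assumption: encode() returned a (1, T, 2) token tensor before flatten(),
# so we undo the flatten here; adjust if your SemantiCodec version differs.
tokens = torch.tensor(ds["train"][0]["audio_tokens"]).reshape(1, -1, 2)

waveform = semanticodec.decode(tokens)
sf.write("reconstructed.wav", waveform[0, 0], 16000)  # SemantiCodec works on 16 kHz audio
```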