---
dataset_info:
  features:
    - name: audio_tokens
      sequence: int64
    - name: genre_id
      dtype: int64
    - name: genre
      dtype: string
    - name: song_id
      dtype: int64
  splits:
    - name: train
      num_bytes: 479627928
      num_examples: 19909
    - name: test
      num_bytes: 122306220
      num_examples: 5076
  download_size: 123311267
  dataset_size: 601934148
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

This dataset contains audio from lewtun/music_genres tokenized with SemantiCodec, intended for experiments on autoregressive (AR) music generation.
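Each record pairs a flattened token sequence with its genre metadata. A minimal sketch of the record layout, with made-up values (the token ids, genre name, and song id below are illustrative, not taken from the dataset):

```python
# Hypothetical record mirroring the schema above; all values are illustrative.
example = {
    "audio_tokens": [1931, 204, 3877, 12, 655],  # flattened SemantiCodec token ids
    "genre_id": 3,                               # integer label for the genre
    "genre": "rock",                             # human-readable genre name
    "song_id": 41872,                            # id of the source recording
}

# The four fields match the dataset_info features in the card metadata.
print(sorted(example.keys()))  # -> ['audio_tokens', 'genre', 'genre_id', 'song_id']
```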

The following script was used for tokenization:

```python
from datasets import load_dataset, Dataset, DatasetDict
from semanticodec import SemantiCodec
import soundfile as sf
from tqdm import tqdm

# Fill these in before running
dataset_id = "lewtun/music_genres"  # source dataset
repo_name = ""
user_name = ""
token = ""  # Hugging Face write token
cache_dir = "cache"
vocab_size = 4096

dataset = load_dataset(dataset_id, cache_dir=cache_dir, trust_remote_code=True)

semanticodec = SemantiCodec(token_rate=100, semantic_vocab_size=vocab_size)

dd = {}

for split in ["train", "test"]:
    tkns = []
    for idx in tqdm(range(len(dataset[split]))):
        sample = dataset[split][idx]["audio"]
        array = sample["array"]
        sr = sample["sampling_rate"]

        # SemantiCodec encodes from a file path, so round-trip through a wav file
        sf.write("output.wav", array, sr)

        # Flatten the codec's token grid into a 1-D int sequence
        tokens = semanticodec.encode("output.wav").detach().cpu().numpy().flatten()
        tkns.append(tokens)

    dd[split] = Dataset.from_dict({
        "audio_tokens": tkns,
        "genre_id": list(dataset[split]["genre_id"]),
        "genre": list(dataset[split]["genre"]),
        "song_id": list(dataset[split]["song_id"]),
    })

dd = DatasetDict(dd)
dd.save_to_disk(repo_name)
dd.push_to_hub(f"{user_name}/{repo_name}", token=token)
```
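As a rough sanity check on sequence lengths: with `token_rate=100`, the flattened token count should grow roughly linearly with clip duration. The helper below is my own back-of-the-envelope estimate (the real encoder may round or pad to frame boundaries, and the stated rate may or may not apply to the flattened sequence). It is at least consistent with the split sizes above: ~24 kB of int64 tokens per train example works out to roughly 3000 tokens, i.e. on the order of 30 seconds of audio per clip.

```python
import math

TOKEN_RATE = 100  # tokens per second, matching SemantiCodec(token_rate=100)

def approx_token_count(duration_seconds: float, token_rate: int = TOKEN_RATE) -> int:
    """Estimate the flattened token count for a clip of the given duration.
    The actual codec output may differ slightly due to frame padding."""
    return math.ceil(duration_seconds * token_rate)

print(approx_token_count(30.0))  # -> 3000
```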