---
tags:
  - gpt2
  - text-generation
  - music-modeling
  - music-generation
widget:
  - text: PIECE_START
  - text: PIECE_START PIECE_START TRACK_START INST=34 DENSITY=8
  - text: PIECE_START TRACK_START INST=1
license: apache-2.0
---

# GPT-2 for Music

Language models such as GPT-2 can be used for music generation. The idea is to represent pieces of music as text, effectively reducing the task to language generation.
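As a rough illustration, a track can be serialized into a flat token string. The `PIECE_START`, `TRACK_START`, and `INST=` tokens appear in the widget examples above; the `NOTE_ON`, `TIME_DELTA`, `NOTE_OFF`, and `TRACK_END` event tokens below are illustrative assumptions about such a scheme, not this model's confirmed vocabulary:

```python
# A minimal sketch of encoding notes as text tokens. NOTE_ON, TIME_DELTA,
# NOTE_OFF, and TRACK_END are assumed, illustrative event names; only
# PIECE_START, TRACK_START, and INST= are taken from the widget examples.

def encode_track(instrument, notes):
    """Encode (pitch, start_step, duration_steps) notes as a token string.

    Time is measured in 16th-note steps; TIME_DELTA tokens advance the
    time cursor between events.
    """
    events = []  # (step, priority, token); note-offs sort before note-ons
    for pitch, start, duration in notes:
        events.append((start, 1, f"NOTE_ON={pitch}"))
        events.append((start + duration, 0, f"NOTE_OFF={pitch}"))
    events.sort()

    tokens = ["PIECE_START", "TRACK_START", f"INST={instrument}"]
    cursor = 0
    for step, _, token in events:
        if step > cursor:
            tokens.append(f"TIME_DELTA={step - cursor}")
            cursor = step
        tokens.append(token)
    tokens.append("TRACK_END")
    return " ".join(tokens)

# A two-note example: C4 for a quarter note, then E4 for a quarter note.
print(encode_track(1, [(60, 0, 4), (64, 4, 4)]))
```

Once music is flattened into strings like this, any text language model can be trained on it unchanged.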

This model is a rather small instance of GPT-2 trained on the Lakhclean dataset. It generates four bars at a time at a 16th-note resolution in 4/4 meter.

If you want to contribute, say hello, or learn more, find me on LinkedIn.

## Model description

The model is a GPT-2 with 6 decoder blocks and 8 attention heads each. The context length is 2048 tokens and the embedding dimension is 512.
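In Hugging Face `transformers` terms, that architecture corresponds roughly to the following configuration. The vocabulary size is an assumption here, since the card does not state the actual token vocabulary:

```python
# A sketch of the described architecture using transformers' GPT2Config.
# vocab_size is an assumed placeholder; the real tokenizer size is not
# stated in this card.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=1024,   # assumed; depends on the actual token vocabulary
    n_positions=2048,  # context length
    n_embd=512,        # embedding dimension
    n_layer=6,         # decoder blocks
    n_head=8,          # attention heads per block
)
model = GPT2LMHeadModel(config)
```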

## Intended uses & limitations

This model is just a proof of concept. It shows that the Hugging Face ecosystem can be used to compose music.

### How to use

There is a notebook in the repository that you can use to generate symbolic music and then render it.
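After generation, the token string has to be parsed back into note events before it can be rendered, e.g. to MIDI. The sketch below assumes the same illustrative `NOTE_ON`/`TIME_DELTA`/`NOTE_OFF` event tokens as above, which are not confirmed by this card:

```python
# A minimal sketch of turning a generated token string back into note
# events for rendering. The NOTE_ON/TIME_DELTA/NOTE_OFF event tokens are
# illustrative assumptions, not the model's confirmed vocabulary.

def decode_tokens(text):
    """Parse a token string into (pitch, start_step, duration_steps) notes."""
    notes = []
    open_notes = {}  # pitch -> step at which the note started
    cursor = 0       # current time in 16th-note steps
    for token in text.split():
        if token.startswith("TIME_DELTA="):
            cursor += int(token.split("=")[1])
        elif token.startswith("NOTE_ON="):
            open_notes[int(token.split("=")[1])] = cursor
        elif token.startswith("NOTE_OFF="):
            pitch = int(token.split("=")[1])
            start = open_notes.pop(pitch, None)
            if start is not None:
                notes.append((pitch, start, cursor - start))
    return notes

generated = ("PIECE_START TRACK_START INST=1 NOTE_ON=60 TIME_DELTA=4 "
             "NOTE_OFF=60 NOTE_ON=64 TIME_DELTA=4 NOTE_OFF=64 TRACK_END")
print(decode_tokens(generated))  # → [(60, 0, 4), (64, 4, 4)]
```

The resulting (pitch, start, duration) triples can then be written out with any MIDI library.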

### Limitations and bias

Since this model has been trained on a very small corpus of music, it overfits heavily.

## Acknowledgements

This model has been created with support from NVIDIA. I am very grateful for the GPU compute they provided!