Commit 72c85ef by jadechoghari: update readme
Parent(s): 91f1a24

app.py CHANGED
@@ -33,19 +33,18 @@ def generate_waveform(description):
 
 
 intro = """
-# 🎶 OpenMusic:
+# 🎶 OpenMusic: Diffusion That Plays Music 🎧🎹
 
-Welcome to **OpenMusic**, a next-gen diffusion model designed to generate high-quality audio from text descriptions!
+Welcome to **OpenMusic**, a next-gen diffusion model designed to generate high-quality music audio from text descriptions!
 
-Simply enter a
+Simply enter a few words describing the vibe, and watch as the model generates a unique track for your input.
+Powered by the QA-MDT model, based on the new research paper linked below.
 
-
-
-- [GitHub](https://github.com/ivcylc/qa-mdt) [@changli](https://github.com/ivcylc) 🙏.
+- [GitHub Repo](https://github.com/ivcylc/qa-mdt) by [@changli](https://github.com/ivcylc) 🙏.
 - [Paper](https://arxiv.org/pdf/2405.15863)
 - [HuggingFace](https://huggingface.co/jadechoghari/qa_mdt) [@jadechoghari](https://github.com/jadechoghari) 🤗.
 
-
+Note: The music generation process will take 1-2 minutes 🎶
 ---
 
 """
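For context, the `intro` string edited above is typically rendered at the top of the Gradio demo that wraps `generate_waveform`. Below is a minimal sketch of that wiring, assuming a standard Gradio Blocks layout; the component names (`description_box`, `audio_out`), the button label, and the stub body of `generate_waveform` are illustrative, not the exact interface in app.py.

```python
import gradio as gr

# Assumed wiring only: app.py defines generate_waveform(description) and the
# intro string shown in the diff; the layout below is a plausible sketch, not
# the repository's exact interface.
intro = """
# 🎶 OpenMusic: Diffusion That Plays Music 🎧🎹

Welcome to **OpenMusic**, a next-gen diffusion model designed to generate
high-quality music audio from text descriptions!
"""

def generate_waveform(description):
    # Stand-in for the real QA-MDT inference call in app.py, which returns the generated audio.
    raise NotImplementedError("replace with the QA-MDT inference call")

with gr.Blocks() as demo:
    gr.Markdown(intro)  # renders the intro markdown above the controls
    description_box = gr.Textbox(label="Describe the vibe")
    audio_out = gr.Audio(label="Generated track")
    generate_btn = gr.Button("Generate")
    generate_btn.click(generate_waveform, inputs=description_box, outputs=audio_out)

if __name__ == "__main__":
    demo.launch()
```

Run locally, this shows the intro text, a prompt box, and an audio player; the generation step itself is whatever `generate_waveform` implements, which the updated intro notes can take 1-2 minutes.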