Update README.md (#2)
Commit eb0e23ecca85e0011ad64bb9b35287130451e940

README.md CHANGED
@@ -1,3 +1,6 @@
+---
+license: cc-by-nc-4.0
+---
 # lina-speech (beta)
 
 Exploring "linear attention" for text-to-speech.
@@ -25,17 +28,6 @@ Following the linear complexity LM you choose, follow respective instructions fi
 - For GLA/RWKV inference check [flash-linear-attention](https://github.com/sustcsonglin/flash-linear-attention).
 - For RWKV training check [RWKV-LM](https://github.com/BlinkDL/RWKV-LM)
 
-### Inference
-
-Download configuration and weights above, then check `Inference.ipynb`.
-
-### TODO
-
-- [x] Fix RWKV6 inference and/or switch to FLA implem.
-- [ ] Provide a Datamodule for training (_lhotse_ might also work well).
-- [ ] Implement CFG.
-- [ ] Scale up.
-
 ### Acknowledgment
 
 - The RWKV authors and the community around for carrying high-level truly opensource research.
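The `---`-delimited block this commit adds at the top of README.md is the YAML front matter the Hugging Face Hub reads for repository metadata such as `license`. As a minimal sketch of how such a block is consumed (the Hub uses a full YAML parser; `front_matter` here is a hypothetical toy helper handling only flat `key: value` lines):

```python
# Toy parser for a `---` delimited front-matter block, as added in this
# commit. Handles only flat `key: value` pairs; a real model card is
# parsed with a full YAML library.
readme = """---
license: cc-by-nc-4.0
---
# lina-speech (beta)
"""

def front_matter(text: str) -> dict:
    """Return the key/value pairs between the first two `---` lines."""
    lines = text.splitlines()
    assert lines[0] == "---", "front matter must start on the first line"
    end = lines.index("---", 1)  # closing delimiter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta

print(front_matter(readme)["license"])  # cc-by-nc-4.0
```

With this metadata in place, the Hub displays the CC-BY-NC-4.0 license badge for the repository instead of treating it as unlicensed.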