hexgrad committed
Commit 3e6dd32 · verified · 1 Parent(s): f46e482

Delete HEARME.txt

Files changed (1)
  1. HEARME.txt +0 -47
HEARME.txt DELETED
@@ -1,47 +0,0 @@
Kokoro is a frontier TTS model for its size of 82 million parameters.

On 25 December 2024, Kokoro v0.19 weights were permissively released in full fp32 precision along with 2 voicepacks (Bella and Sarah), all under an Apache 2.0 license.

At the time of release, Kokoro v0.19 was the #1 ranked model in TTS Spaces Arena. With 82 million parameters trained for under 20 epochs on under 100 total hours of audio, Kokoro achieved a higher Elo in this single-voice Arena setting than larger models. Kokoro's ability to top this Elo ladder using relatively low compute and data suggests that the scaling law for traditional TTS models might have a steeper slope than previously expected.

Licenses. Apache 2.0 weights in this repository. MIT inference code. GPLv3 dependency in espeak-ng.

The inference code was originally MIT licensed by the paper author. Note that this card applies only to this model, Kokoro.

Evaluation. Metric: Elo rating. Leaderboard: TTS Spaces Arena.
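As a refresher on the metric, Elo expected scores follow the standard logistic formula, and ratings move toward observed pairwise results. A minimal sketch; the Arena's exact K-factor and update schedule are assumptions here, not taken from this card:

    # Standard Elo rating model (sketch; the K-factor of 32 is a common
    # default and an assumption, not the Arena's confirmed value).
    def elo_expected(r_a: float, r_b: float) -> float:
        # Probability that a model rated r_a beats one rated r_b.
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0) -> float:
        # score_a: 1.0 for a win, 0.5 for a tie, 0.0 for a loss.
        return r_a + k * (score_a - elo_expected(r_a, r_b))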
The voice ranked in the Arena is a 50-50 mix of Bella and Sarah. For your convenience, this mix is included in this repository as af.pt, but you can trivially reproduce it, as sketched below.
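A minimal sketch of that reproduction, assuming the voicepacks are PyTorch tensors; the exact filenames and paths below are assumptions, not confirmed by this card:

    # Reproduce the Arena voice as an equal-weight average of the two
    # released voicepacks. Filenames are assumptions, not confirmed.
    import torch

    bella = torch.load("voices/af_bella.pt", weights_only=True)
    sarah = torch.load("voices/af_sarah.pt", weights_only=True)
    af = 0.5 * bella + 0.5 * sarah  # 50-50 mix of the two style vectors
    torch.save(af, "af.pt")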
Training Details.

Compute: Kokoro was trained on "A100 80GB VRAM instances" rented from Vast.ai. Vast was chosen over other compute providers due to its competitive on-demand hourly rates. The average hourly cost for the A100 80GB VRAM instances used for training was below $1 per hour per GPU, around half the rates quoted by other providers at the time.

Data: Kokoro was trained exclusively on permissive non-copyrighted audio data and IPA phoneme labels. Examples of permissive non-copyrighted audio include:

Public domain audio. Audio licensed under Apache, MIT, etc.

Synthetic audio[1] generated by closed[2] TTS models from large providers.

Epochs: Less than 20 epochs. Total Dataset Size: Less than 100 hours of audio.
Limitations. Kokoro v0.19 is limited in some ways, in its training set and architecture:

Lacks voice cloning capability, likely due to the small, under-100-hour training set.

Relies on external g2p, which introduces a class of g2p failure modes (see the sketch after this list).

Training dataset is mostly long-form reading and narration, not conversation.

At 82 million parameters, Kokoro almost certainly falls to a well-trained 1B+ parameter diffusion transformer, or a many-billion-parameter multimodal LLM like GPT-4o or Gemini 2.0 Flash.

Multilingual capability is architecturally feasible, but training data is almost entirely English.
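To illustrate the external g2p step mentioned above, a minimal sketch using the phonemizer package with an espeak backend; the exact tooling is an assumption based only on the espeak-ng dependency noted under Licenses:

    # External grapheme-to-phoneme (g2p) conversion to IPA via espeak-ng.
    # The third-party `phonemizer` package is an assumed choice of tooling.
    from phonemizer import phonemize

    ipa = phonemize(
        "Kokoro is a frontier TTS model.",
        language="en-us",
        backend="espeak",
        strip=True,
    )
    print(ipa)  # IPA phoneme string of the kind used as training labels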
Will the other voicepacks be released?

There is currently no release date scheduled for the other voicepacks, but in the meantime you can try them in the hosted demo.

Acknowledgements. yl4579 for architecting StyleTTS 2.

Pendrokar for adding Kokoro as a contender in the TTS Spaces Arena.

Model Card Contact. @rzvzn on Discord.