# Salt - Speech and Language transformer
Vikhr Salt is an advanced speech and language transformer model designed for seamless handling of Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) tasks. Built upon a pre-trained large language model, Vikhr Salt extends its vocabulary to include new audio tokens, enabling it to process multimodal data effectively while leveraging the rich prior knowledge embedded in the model.
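
Concretely, this kind of vocabulary extension can be sketched as below. It is a minimal illustration using the Hugging Face `transformers` API, not the actual Vikhr Salt setup; the base checkpoint and the audio-token count are placeholder assumptions.

```python
# Minimal sketch of extending a pre-trained LM with discrete audio tokens.
# The base checkpoint and token count are placeholders, not the real
# Vikhr Salt configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "gpt2"  # placeholder base LM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# One new token per entry of the audio codec's codebook (count assumed).
audio_tokens = [f"<audio_{i}>" for i in range(1024)]
tokenizer.add_tokens(audio_tokens)

# Resize the embedding matrix so the new tokens get trainable rows; the
# original text embeddings keep the LM's prior knowledge intact.
model.resize_token_embeddings(len(tokenizer))
```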

MMLU benchmark results:

| Groups            |Version|Filter|n-shot|Metric|   |Value |   |Stderr|
|-------------------|------:|------|------|------|---|-----:|---|-----:|
|mmlu               |      2|none  |      |acc   |↑  |0.2691|±  |0.0037|
| - humanities      |      2|none  |      |acc   |↑  |0.2442|±  |0.0063|
| - other           |      2|none  |      |acc   |↑  |0.2478|±  |0.0076|
| - social sciences |      2|none  |      |acc   |↑  |0.3094|±  |0.0083|
| - stem            |      2|none  |      |acc   |↑  |0.2880|±  |0.0080|

## Key Features

- Unified Multimodal Approach: Combines text and audio processing in a single framework with one LM loss, ensuring coherent and efficient learning.
- Dual Tokenization System: Supports both EnCodec and SpeechTokenizer tokens, enabling flexibility in training and inference (see the tokenization sketch after this list).
- Optimized Training Pipeline: Achieves stable training with mixed-precision settings, utilizing TF32 for improved numerical stability (see the TF32 sketch below).
- Comprehensive Metrics: Evaluated using industry benchmarks such as PESQ, STOI, and SI-SDR for audio quality, and SIMO for zero-shot TTS (see the metrics sketch below).
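
For the EnCodec side of the dual tokenization, turning a waveform into discrete token ids can look roughly like the following; the `transformers` port of EnCodec and the 24 kHz checkpoint are assumed stand-ins, since the README does not pin a specific implementation.

```python
# Sketch: audio -> discrete EnCodec codes the LM can consume as tokens.
# Checkpoint choice is an assumption, not the model's actual tokenizer.
import torch
from transformers import AutoProcessor, EncodecModel

model = EncodecModel.from_pretrained("facebook/encodec_24khz")
processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")

waveform = torch.zeros(24_000)  # one second of dummy mono audio at 24 kHz
inputs = processor(raw_audio=waveform.numpy(), sampling_rate=24_000,
                   return_tensors="pt")

with torch.no_grad():
    encoded = model.encode(inputs["input_values"], inputs["padding_mask"])

# audio_codes holds integer codebook indices per residual codebook; these
# are the ids that get mapped into the extended LM vocabulary.
print(encoded.audio_codes.shape)
```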
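
The TF32 setting corresponds to PyTorch's standard backend switches; a minimal sketch:

```python
import torch

# Allow TF32 on Ampere-class GPUs: matmuls and convolutions run on tensor
# cores with fp32 dynamic range, trading some mantissa precision for speed.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```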
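
The audio-quality metrics are available in `torchmetrics`; a small sketch of scoring a generated waveform against a reference (the tensors here are stand-ins for real speech):

```python
import torch
from torchmetrics.audio import (
    ScaleInvariantSignalDistortionRatio,
    ShortTimeObjectiveIntelligibility,
)

# Stand-in waveforms; in practice these are generated and reference speech.
preds, target = torch.randn(16_000), torch.randn(16_000)

stoi = ShortTimeObjectiveIntelligibility(fs=16_000)(preds, target)
sisdr = ScaleInvariantSignalDistortionRatio()(preds, target)
print(f"STOI {stoi:.3f}  SI-SDR {sisdr:.2f} dB")
# PESQ is available as torchmetrics.audio.PerceptualEvaluationSpeechQuality
# (needs the `pesq` package and real speech content to score meaningfully).
```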

## Model Highlights

- Multimodal Compatibility: Handles both semantic and acoustic token sequences, supporting diverse TTS and ASR scenarios.
- Training Efficiency: Trained in 150 A100 GPU-hours, balancing performance and computational cost.
- Competitive Benchmarks: Demonstrates strong performance on MMLU and other standard benchmarks.

## Applications

- Text-to-Speech synthesis with customizable styles and tones.
- Automatic Speech Recognition for accurate transcription.
- Multimodal research and development in speech and language understanding.

## Example Use Cases

- Generate expressive, natural-sounding speech from text.
- Transcribe audio recordings into text with high accuracy.
- Explore emergent multimodal capabilities in large-scale models.