reach-vb (HF staff) committed
Commit 8e0a255
1 Parent(s): bb6a23e

Update README.md (#1)

Files changed (1): README.md (+85 -27)
README.md CHANGED
@@ -1,50 +1,108 @@
---
- license: unknown
---

- # Seamless Streaming

- This is the streaming-only model; Seamless is the expressive streaming model.

- ## Quick start:

- Evaluation can be run with the `streaming_evaluate` CLI.

- We use `seamless_streaming_unity` for loading the speech encoder and T2U models, and `seamless_streaming_monotonic_decoder` for loading the text decoder for streaming evaluation. These are already set as the defaults for the `streaming_evaluate` CLI, but they can be overridden using the `--unity-model-name` and `--monotonic-decoder-model-name` args if required.

- Note that the numbers in our paper use single-precision floating point format (fp32) for evaluation, set via `--dtype fp32`. Also note that the results from running these evaluations might differ slightly from the results reported in our paper (which will be updated soon with the new results).

- ### S2TT:
- Set the task to `s2tt` for evaluating the speech-to-text translation part of the SeamlessStreaming model.

- ```bash
- streaming_evaluate --task s2tt --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_lang_code>
- ```

- Note: The `--ref-field` can be used to specify the name of the reference column in the dataset.
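For instance, the optional flags above can be combined with the S2TT command as follows; the model names shown are the documented defaults, and `<reference_column_name>` is a placeholder for your dataset's reference column:

```bash
streaming_evaluate --task s2tt --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_lang_code> --dtype fp32 --ref-field <reference_column_name> --unity-model-name seamless_streaming_unity --monotonic-decoder-model-name seamless_streaming_monotonic_decoder
```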

- ### ASR:
- Set the task to `asr` for evaluating the automatic speech recognition part of the SeamlessStreaming model. Make sure to pass the source language as the `--tgt-lang` arg.

- ```bash
- streaming_evaluate --task asr --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_source_lang_code>
- ```

- ### S2ST:

- #### SeamlessStreaming:

- Set the task to `s2st` for evaluating the speech-to-speech translation part of the SeamlessStreaming model.

- ```bash
- streaming_evaluate --task s2st --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_lang_code>
- ```

- #### Seamless:
- The Seamless model is a unified model for streaming expressive speech-to-speech translation. Use the `--expressive` arg to run evaluation of this unified model.

- ```bash
- streaming_evaluate --task s2st --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_lang_code> --expressive
- ```

- Note: In the current version of our paper, we use vocoder_pretssel_16khz for the evaluation, so to reproduce those results, please add this arg to the above command: `--vocoder-name vocoder_pretssel_16khz`
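Putting that note into practice, the expressive evaluation command used for the paper's current results would look like this (paths and the language code are placeholders):

```bash
streaming_evaluate --task s2st --data-file <path_to_data_tsv_file> --audio-root-dir <path_to_audio_root_directory> --output <path_to_evaluation_output_directory> --tgt-lang <3_letter_lang_code> --expressive --vocoder-name vocoder_pretssel_16khz
```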
 
---
+ license: cc-by-nc-4.0
---

+ # SeamlessStreaming
+ SeamlessStreaming is a multilingual streaming translation model. It supports:

+ - Streaming Automatic Speech Recognition for 96 languages.
+ - Simultaneous translation from 101 source languages for speech input.
+ - Simultaneous translation into 96 target languages for text output.
+ - Simultaneous translation into 36 target languages for speech output.

+ ![SeamlessStreaming architecture](streaming_arch.png)

+ ## SeamlessStreaming models
+ | Model Name | #params | checkpoint | metrics |
+ | ----------------- | ------- | ---------- | ------- |
+ | SeamlessStreaming | 2.5B | [🤗 Model card](https://huggingface.co/facebook/seamless-streaming) - [monotonic decoder checkpoint](https://huggingface.co/facebook/seamless-streaming/resolve/main/seamless_streaming_monotonic_decoder.pt) - [streaming UnitY2 checkpoint](https://huggingface.co/facebook/seamless-streaming/resolve/main/seamless_streaming_unity.pt) | [metrics](https://dl.fbaipublicfiles.com/seamless/metrics/streaming/seamless_streaming.zip) |

+ The evaluation data ids for FLEURS, CoVoST2 and CVSS-C can be found [here](https://dl.fbaipublicfiles.com/seamless/metrics/evaluation_data_ids.zip).
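For example, to download and unpack the ids locally (assuming `wget` and `unzip` are available on your machine):

```
wget https://dl.fbaipublicfiles.com/seamless/metrics/evaluation_data_ids.zip
unzip evaluation_data_ids.zip
```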

+ ## Evaluating SeamlessStreaming models
+ To reproduce our results, or to evaluate using the same metrics over your own test sets, please check out the [Evaluation README here](../../src/seamless_communication/cli/streaming/README.md). Streaming evaluation depends on the SimulEval library.
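SimulEval can usually be installed from PyPI; the `simuleval` package name is an assumption here, so check the SimulEval repository if your setup differs:

```
pip install simuleval
```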

+ ## Seamless Streaming demo

+ ### Running on HF Spaces
+ You can simply duplicate the space to run it. [🤗 HF Space](https://huggingface.co/spaces/facebook/seamless-streaming)

+ ## Running locally

+ ### Install backend seamless_server dependencies

+ > [!NOTE]
+ > Please note: we *do not* recommend running the model on CPU. CPU inference will be slow and introduce noticeable delays in the simultaneous translation.

+ > [!NOTE]
+ > The example below is for PyTorch stable (2.1.1) and variant cu118.
+ > Check [here](https://pytorch.org/get-started/locally/) to find the torch/torchaudio command for your variant.
+ > Check [here](https://github.com/facebookresearch/fairseq2#variants) to find the fairseq2 command for your variant.

+ If running for the first time, create a conda environment and install the desired torch version. Then install the rest of the requirements:
+ ```
+ cd seamless_server
+ conda create --yes --name smlss_server python=3.8 libsndfile==1.0.31
+ conda activate smlss_server
+ conda install --yes pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
+ pip install fairseq2 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118
+ pip install -r requirements.txt
+ ```
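Once the environment is set up, a quick sanity check (our suggestion, not part of the original instructions) confirms that PyTorch can see a CUDA GPU, since CPU inference is discouraged above:

```
python -c "import torch; print(torch.cuda.is_available())"  # expect: True
```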

+ ### Install frontend streaming-react-app dependencies
+ ```
+ conda install -c conda-forge nodejs
+ cd streaming-react-app
+ npm install --global yarn
+ yarn
+ yarn build # this will create the dist/ folder
+ ```

+ ### Running the server

+ The server can be run locally with uvicorn, as shown below.
+ Run the server in dev mode:

+ ```
+ cd seamless_server
+ uvicorn app_pubsub:app --reload --host localhost
+ ```

+ Run the server in prod mode:

+ ```
+ cd seamless_server
+ uvicorn app_pubsub:app --host 0.0.0.0
+ ```

+ To enable additional logging from uvicorn, pass `--log-level debug` or `--log-level trace`.
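For example, the dev-mode command with verbose logging enabled would be:

```
cd seamless_server
uvicorn app_pubsub:app --reload --host localhost --log-level debug
```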

+ ### Debugging

+ If you enable "Server Debug Flag" when starting streaming from the client, it enables extensive debug logging and saves audio files in the /debug folder.

+ ## Citation

+ For EMMA, please cite:
+ ```bibtex
+ @article{ma_efficient_2023,
+   author = {Ma, Xutai and Sun, Anna and Ouyang, Siqi and Inaguma, Hirofumi and Tomasello, Paden},
+   title = {Efficient Monotonic Multihead Attention},
+   year = {2023},
+   url = {https://ai.meta.com/research/publications/efficient-monotonic-multihead-attention/},
+ }
+ ```

+ For SeamlessStreaming, please cite:
+ ```bibtex
+ @inproceedings{seamless2023,
+   title = "Seamless: Multilingual Expressive and Streaming Speech Translation",
+   author = "{Seamless Communication}, Lo{\"i}c Barrault, Yu-An Chung, Mariano Coria Meglioli, David Dale, Ning Dong, Mark Duppenthaler, Paul-Ambroise Duquenne, Brian Ellis, Hady Elsahar, Justin Haaheim, John Hoffman, Min-Jae Hwang, Hirofumi Inaguma, Christopher Klaiber, Ilia Kulikov, Pengwei Li, Daniel Licht, Jean Maillard, Ruslan Mavlyutov, Alice Rakotoarison, Kaushik Ram Sadagopan, Abinesh Ramakrishnan, Tuan Tran, Guillaume Wenzek, Yilin Yang, Ethan Ye, Ivan Evtimov, Pierre Fernandez, Cynthia Gao, Prangthip Hansanti, Elahe Kalbassi, Amanda Kallet, Artyom Kozhevnikov, Gabriel Mejia, Robin San Roman, Christophe Touret, Corinne Wong, Carleigh Wood, Bokai Yu, Pierre Andrews, Can Balioglu, Peng-Jen Chen, Marta R. Costa-juss{\`a}, Maha Elbayad, Hongyu Gong, Francisco Guzm{\'a}n, Kevin Heffernan, Somya Jain, Justine Kao, Ann Lee, Xutai Ma, Alex Mourachko, Benjamin Peloquin, Juan Pino, Sravya Popuri, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, Anna Sun, Paden Tomasello, Changhan Wang, Jeff Wang, Skyler Wang, Mary Williamson",
+   journal = {ArXiv},
+   year = {2023}
+ }
+ ```