Anna Sun committed
Commit • 5067fec
Parent(s): 81c556d
clarify installation

Files changed:
- Dockerfile (+3, -1)
- README.md (+4, -1)
- seamless_server/requirements.txt (+0, -1)
Dockerfile
CHANGED
@@ -65,7 +65,9 @@ RUN pyenv install $PYTHON_VERSION && \
 
 COPY --chown=user:user ./seamless_server ./seamless_server
 # change dir since pip needs to seed whl folder
-RUN cd seamless_server
+RUN cd seamless_server && \
+    pip install fairseq2 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118 && \
+    pip install --no-cache-dir --upgrade -r requirements.txt
 COPY --from=frontend /app/dist ./streaming-react-app/dist
 
 WORKDIR $HOME/app/seamless_server
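With this change the image installs the fairseq2 nightly wheel explicitly, before the rest of the server requirements, rather than relying on the index line that used to sit in requirements.txt (removed below). A minimal sketch of how one might check the result after building; the image tag `seamless-server` is illustrative and not part of the commit, and it assumes `python` is on the image's PATH:

```sh
# Hedged sketch: build the image and confirm fairseq2 resolved from the nightly index.
# "seamless-server" is a hypothetical tag; use whatever name you prefer.
docker build -t seamless-server .
docker run --rm seamless-server python -c "import fairseq2; print(fairseq2.__version__)"
```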
README.md
CHANGED
@@ -19,7 +19,9 @@ You can simply duplicate the space to run it.
 > Please note: we *do not* recommend running the model on CPU. CPU inference will be slow and introduce noticable delays in the simultaneous translation.
 
 > [!NOTE]
-> The example below is for PyTorch stable (2.1.1) and variant cu118.
+> The example below is for PyTorch stable (2.1.1) and variant cu118.
+> Check [here](https://pytorch.org/get-started/locally/) to find the torch/torchaudio command for your variant.
+> Check [here](https://github.com/facebookresearch/fairseq2#variants) to find the fairseq2 command for your variant.
 
 If running for the first time, create conda environment and install the desired torch version. Then install the rest of the requirements:
 ```
@@ -27,6 +29,7 @@ cd seamless_server
 conda create --yes --name smlss_server python=3.8 libsndfile==1.0.31
 conda activate smlss_server
 conda install --yes pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
+pip install fairseq2 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118
 pip install -r requirements.txt
 ```
 
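The added README command pins fairseq2 to the nightly wheel index for the pt2.1.1/cu118 variant; the new note lines point readers at the PyTorch and fairseq2 variant tables for other setups. Purely as an illustration (the exact index path for other variants is an assumption following the pattern of the cu118 command above, not something this commit states), an install for a cu121 build might look like:

```sh
# Illustrative only: assumes the nightly index keeps the pt<torch-version>/<cuda-variant>
# path layout used by the cu118 command above; verify the URL against the fairseq2 variants table.
pip install fairseq2 --pre \
  --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu121
```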
seamless_server/requirements.txt
CHANGED
@@ -1,4 +1,3 @@
---pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118
 simuleval==1.1.3
 # seamless_communication
 ./whl/seamless_communication-1.0.0-py3-none-any.whl
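With the global `--pre --extra-index-url` option dropped from requirements.txt, fairseq2 no longer comes in implicitly through `pip install -r`; it is installed in its own step, as the Dockerfile and README changes above now do. The file still references the seamless_communication wheel by a relative `./whl/` path, which is why the install runs from inside `seamless_server` (the Dockerfile's "pip needs to seed whl folder" comment). A minimal sketch of the resulting local install order, restating the commands from the README:

```sh
# Sketch of the two-step install this commit settles on: fairseq2 from its nightly index
# first, then the pinned requirements (including the local seamless_communication wheel,
# whose ./whl/ path resolves against the current working directory).
cd seamless_server
pip install fairseq2 --pre --extra-index-url https://fair.pkg.atmeta.com/fairseq2/whl/nightly/pt2.1.1/cu118
pip install -r requirements.txt
```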