Update dockerfile and README
- README.md +31 -5
- dockerfile +1 -1
README.md
CHANGED
@@ -51,19 +51,19 @@ Note that this requires a VAD to function properly, otherwise only the first GPU
 of running Silero-Vad, at a slight cost to accuracy.
 
 This is achieved by creating N child processes (where N is the number of selected devices), where Whisper is run concurrently. In `app.py`, you can also
-set the `vad_process_timeout` option
+set the `vad_process_timeout` option. This configures the number of seconds until a process is killed due to inactivity, freeing RAM and video memory.
 The default value is 30 minutes.
 
 ```
 python app.py --input_audio_max_duration -1 --vad_parallel_devices 0,1 --vad_process_timeout 3600
 ```
 
-You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to free video memory after a period of time.
+You may also use `vad_process_timeout` with a single device (`--vad_parallel_devices 0`), if you prefer to always free video memory after a period of time.
 
 # Docker
 
-To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU.
-check out this repository and build an image:
+To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU.
+Then either use the GitLab hosted container below, or check out this repository and build an image:
 ```
 sudo docker build -t whisper-webui:1 .
 ```

@@ -78,11 +78,37 @@ Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and
 sudo docker run -d -p 7860:7860 whisper-webui:1
 ```
 
+# GitLab Docker Registry
+
+This Docker container is also hosted on GitLab:
+
+```
+sudo docker run -d --gpus=all -p 7860:7860 registry.gitlab.com/aadnk/whisper-webui:latest
+```
+
+## Custom Arguments
+
+You can also pass custom arguments to `app.py` in the Docker container, for instance to be able to use all the GPUs in parallel:
+```
+sudo docker run -d --gpus all -p 7860:7860 --mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper --restart=on-failure:15 registry.gitlab.com/aadnk/whisper-webui:latest \
+app.py --input_audio_max_duration -1 --server_name 0.0.0.0 --vad_parallel_devices 0,1 --default_vad silero-vad --default_model_name large
+```
+
+You can also call `cli.py` the same way:
+```
+sudo docker run --gpus all \
+--mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper \
+--mount type=bind,source=${PWD},target=/app/data \
+registry.gitlab.com/aadnk/whisper-webui:latest \
+cli.py --model large --vad_parallel_devices 0,1 --vad silero-vad \
+--output_dir /app/data /app/data/YOUR-FILE-HERE.mp4
+```
+
 ## Caching
 
 Note that the models themselves are currently not included in the Docker images, and will be downloaded on demand.
 To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally)
 prepopulate the directory with the different Whisper models.
 ```
-sudo docker run -d --gpus=all -p 7860:7860 --mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper whisper-webui:
+sudo docker run -d --gpus=all -p 7860:7860 --mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper registry.gitlab.com/aadnk/whisper-webui:latest
 ```
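The `vad_process_timeout` behaviour added in this README change — a per-device child process that is killed after a period of inactivity and transparently respawned on the next request — can be sketched roughly as below. This is an illustrative sketch only, not the actual `app.py` implementation; the names `TimeoutWorker` and `FakeWhisperProcess`, and the simulated clock, are invented for the example:

```python
import time

class FakeWhisperProcess:
    """Stand-in for a child process holding a Whisper model on one GPU."""
    def __init__(self, device_id):
        self.device_id = device_id

    def transcribe(self, chunk):
        return f"device {self.device_id}: {chunk}"

class TimeoutWorker:
    """Kills an idle worker after `process_timeout` seconds, freeing its memory."""
    def __init__(self, device_id, process_timeout, clock=time.monotonic):
        self.device_id = device_id
        self.process_timeout = process_timeout  # seconds of allowed inactivity
        self.clock = clock                      # injectable for testing
        self.process = None
        self.last_used = None
        self.restarts = 0

    def transcribe(self, chunk):
        now = self.clock()
        # Discard the process once the inactivity timeout has elapsed.
        if self.process is not None and now - self.last_used > self.process_timeout:
            self.process = None
        # Respawn on demand, so the next request still succeeds.
        if self.process is None:
            self.process = FakeWhisperProcess(self.device_id)
            self.restarts += 1
        self.last_used = now
        return self.process.transcribe(chunk)

# Simulated clock so the example does not actually wait 30 minutes.
t = [0.0]
worker = TimeoutWorker(device_id=0, process_timeout=1800, clock=lambda: t[0])
worker.transcribe("a")   # spawns the first process
t[0] = 100.0
worker.transcribe("b")   # within the timeout: process reused
t[0] = 2000.0
worker.transcribe("c")   # idle for 1900 s > 1800 s: killed and respawned
```

With `--vad_parallel_devices 0,1`, one such worker would exist per device; with a single device the same logic simply frees that one process after the timeout.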
dockerfile
CHANGED
@@ -17,4 +17,4 @@ ENV PYTHONUNBUFFERED=1
 
 WORKDIR /opt/whisper-webui/
 ENTRYPOINT ["python3"]
-CMD ["app
+CMD ["app.py", "--input_audio_max_duration -1", "--server_name 0.0.0.0"]
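Because the dockerfile sets `ENTRYPOINT ["python3"]` and puts only the default script and flags in `CMD`, any arguments appended after the image name on `docker run` replace `CMD` while the entrypoint is kept — which is what lets the README pass `app.py …` or `cli.py …` to the same image. A minimal Python sketch of that composition rule (illustrative only; the real merging is done by the container runtime):

```python
# How Docker combines ENTRYPOINT and CMD for this image (sketch).
ENTRYPOINT = ["python3"]
DEFAULT_CMD = ["app.py", "--input_audio_max_duration -1", "--server_name 0.0.0.0"]

def effective_command(run_args):
    """Arguments given after the image name replace CMD, never ENTRYPOINT."""
    return ENTRYPOINT + (run_args if run_args else DEFAULT_CMD)

# `sudo docker run ... whisper-webui:1` with no extra arguments:
default = effective_command([])
# `sudo docker run ... whisper-webui:1 cli.py --model large`:
custom = effective_command(["cli.py", "--model", "large"])
```

One caveat: exec-form `CMD` entries are passed verbatim as separate argv elements, so if `app.py` parses options with argparse, a combined entry like `"--input_audio_max_duration -1"` may not be split as intended; `"--input_audio_max_duration=-1"` would be the safer spelling.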