sdelangen committed aef3c9c (1 parent: 58e6e61)

Update README.md

Files changed (1): README.md (+9 -9)
README.md CHANGED
````diff
@@ -280,20 +280,18 @@ demo.launch(server_name=args.ip, server_port=args.port)
 </details>
 
 ### Inference on GPU
+
 To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
 
 ## Parallel Inference on a Batch
 
-TODO
-
-Please, [see this Colab notebook](https://colab.research.google.com/drive/1hX5ZI9S4jHIjahFCZnhwwQmFoGAi3tmu?usp=sharing) to figure out how to transcribe in parallel a batch of input sentences using a pre-trained model.
+Currently, the high level transcription interfaces do not support batched inference, but the low-level interfaces (i.e. `encode_chunk`) do.
+We hope to provide efficient functionality for this in the future.
 
 ### Training
 
-TODO
-
-The model was trained with SpeechBrain (Commit hash: 'f73fcc35').
-To train it from scratch follow these steps:
+The model was trained with SpeechBrain (Commit hash: `3f9e33a`).
+To train it from scratch, follow these steps:
 1. Clone SpeechBrain:
 ```bash
 git clone https://github.com/speechbrain/speechbrain/
@@ -307,10 +305,12 @@ pip install -e .
 
 3. Run Training:
 ```bash
-cd recipes/LibriSpeech/ASR/transformer
-python train.py hparams/conformer_large.yaml --data_folder=your_data_folder
+cd recipes/LibriSpeech/ASR/transducer
+python train.py hparams/conformer_transducer.yaml --data_folder=your_data_folder
 ```
 
+See the [recipe directory](https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriSpeech/ASR/transducer) for details.
+
 ### Limitations
 The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
 
````

</details>

### Inference on GPU

To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
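
As a minimal sketch of the `run_opts` option described above: the helper below only builds the dictionary passed to `from_hparams`, and the actual SpeechBrain call is left commented out because it requires the library installed and a model download (the `StreamingASR` interface and `source` placeholder are assumptions, not taken from this README).

```python
# Hypothetical helper: build the run_opts mapping that selects the device.
def make_run_opts(use_cuda: bool) -> dict:
    """Return run_opts selecting CUDA when requested, otherwise CPU."""
    return {"device": "cuda" if use_cuda else "cpu"}

# With SpeechBrain installed, the option would be passed like this
# (interface and source names are assumptions):
# from speechbrain.inference.ASR import StreamingASR
# asr_model = StreamingASR.from_hparams(
#     source="<this model's repo id>",
#     run_opts=make_run_opts(use_cuda=True),
# )

print(make_run_opts(True))   # {'device': 'cuda'}
print(make_run_opts(False))  # {'device': 'cpu'}
```

Omitting `run_opts` (or passing `{"device": "cpu"}`) keeps inference on the CPU.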

## Parallel Inference on a Batch

Currently, the high level transcription interfaces do not support batched inference, but the low-level interfaces (i.e. `encode_chunk`) do.
We hope to provide efficient functionality for this in the future.
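
Batched calls to low-level interfaces generally need the input signals padded to a common length, together with each signal's relative length (the usual SpeechBrain batch convention). The framework-free sketch below illustrates only that padding step; `pad_batch` is a hypothetical helper, not a SpeechBrain API.

```python
# Hypothetical helper illustrating the padded-batch convention:
# signals are zero-padded to the longest one, and each entry's true
# length is reported as a fraction of the padded length.
def pad_batch(signals: list[list[float]]) -> tuple[list[list[float]], list[float]]:
    """Zero-pad 1-D signals to equal length; return (batch, relative_lengths)."""
    max_len = max(len(s) for s in signals)
    batch = [s + [0.0] * (max_len - len(s)) for s in signals]
    rel_lens = [len(s) / max_len for s in signals]
    return batch, rel_lens

batch, rel_lens = pad_batch([[0.1, 0.2, 0.3, 0.4], [0.5, 0.6]])
print(batch[1])    # [0.5, 0.6, 0.0, 0.0]
print(rel_lens)    # [1.0, 0.5]
```

The padded batch and relative lengths could then be handed to a low-level call such as `encode_chunk`; the exact signature depends on the SpeechBrain interface in use.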

### Training

The model was trained with SpeechBrain (Commit hash: `3f9e33a`).
To train it from scratch, follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```

3. Run Training:
```bash
cd recipes/LibriSpeech/ASR/transducer
python train.py hparams/conformer_transducer.yaml --data_folder=your_data_folder
```

See the [recipe directory](https://github.com/speechbrain/speechbrain/tree/develop/recipes/LibriSpeech/ASR/transducer) for details.

### Limitations

The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.