---
language:
- ja
tags:
- audio
- automatic-speech-recognition
library_name: ctranslate2
---

# whisper-large-v2-mix-jp model for CTranslate2

This repository contains the conversion of [vumichien/whisper-large-v2-mix-jp](https://huggingface.co/vumichien/whisper-large-v2-mix-jp) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("arc-r/faster-whisper-large-v2-mix-jp")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

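Since the original model is tuned for Japanese, you may want to pin the decoding language rather than rely on auto-detection. The following is a minimal sketch, assuming the standard faster-whisper `transcribe` options `language` and `beam_size` (the `audio.mp3` file name is just a placeholder):

```python
from faster_whisper import WhisperModel

model = WhisperModel("arc-r/faster-whisper-large-v2-mix-jp")

# Force Japanese decoding and use a beam search of width 5.
segments, info = model.transcribe("audio.mp3", language="ja", beam_size=5)

print("Detected language: %s (probability %.2f)" % (info.language, info.language_probability))
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
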
## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model vumichien/whisper-large-v2-mix-jp --output_dir faster-whisper-large-v2-mix-jp \
    --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).

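For example, to run the model on CPU with 8-bit computation instead of FP16, you can pass the standard faster-whisper constructor arguments `device` and `compute_type`; a minimal sketch, with values you would adjust to your hardware:

```python
from faster_whisper import WhisperModel

# Load the FP16 weights but run the computation in INT8 on CPU;
# on a GPU you could use device="cuda" with compute_type="float16".
model = WhisperModel("arc-r/faster-whisper-large-v2-mix-jp", device="cpu", compute_type="int8")
```
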
## More information

**For more information about the original model, see its [model card](https://huggingface.co/vumichien/whisper-large-v2-mix-jp).**
|