---
language:
- ja
tags:
- audio
- automatic-speech-recognition
library_name: ctranslate2
---

# whisper-large-v2-mix-jp model for CTranslate2

This repository contains the conversion of [whisper-large-v2-jp](https://huggingface.co/vumichien/whisper-large-v2-jp) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or in projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

# The argument is the converted model directory (or a Hugging Face repository ID).
model = WhisperModel("whisper-large-v2-jp")

# transcribe() returns a generator of segments plus metadata about the detected audio.
segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
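
The model can also be loaded directly with CTranslate2's Whisper API. The sketch below is only a minimal example: it assumes the converted model lives in the `faster-whisper-large-v2-jp` directory produced by the conversion command further down, and it reuses the feature extractor from the original repository; `librosa` and `transformers` are extra dependencies not required by faster-whisper itself.

```python
import ctranslate2
import librosa
import transformers

# Feature extractor and tokenizer from the original model repository.
processor = transformers.WhisperProcessor.from_pretrained("vumichien/whisper-large-v2-jp")

# Load the converted CTranslate2 model from the local output directory.
model = ctranslate2.models.Whisper("faster-whisper-large-v2-jp")

# Load and resample the audio, then compute log-Mel features for the first 30-second window.
audio, _ = librosa.load("audio.mp3", sr=16000, mono=True)
inputs = processor(audio, return_tensors="np", sampling_rate=16000)
features = ctranslate2.StorageView.from_array(inputs.input_features)

# Describe the task in the prompt: Japanese transcription without timestamps.
prompt = processor.tokenizer.convert_tokens_to_ids(
    ["<|startoftranscript|>", "<|ja|>", "<|transcribe|>", "<|notimestamps|>"]
)

# Run generation and decode the token IDs back to text.
results = model.generate(features, [prompt])
print(processor.decode(results[0].sequences_ids[0]))
```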

## Conversion details

The original model was converted with the following command:

```
ct2-transformers-converter --model vumichien/whisper-large-v2-jp --output_dir faster-whisper-large-v2-jp \
    --quantization float16
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
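
For example, a possible way to load the FP16 weights with a different compute type in faster-whisper (the device and the `int8_float16` value here are illustrative; other supported types include `float16` and `int8`):

```python
from faster_whisper import WhisperModel

# Load the FP16 checkpoint but run inference with mixed INT8/FP16 quantization on GPU.
model = WhisperModel("whisper-large-v2-jp", device="cuda", compute_type="int8_float16")
```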

## More information

**For more information about the original model, see its [model card](https://huggingface.co/vumichien/whisper-large-v2-jp).**