---
license: apache-2.0
datasets:
- japanese-asr/en_asr.mls
- japanese-asr/ja_asr.reazon_speech_all
language:
- en
- ja
pipeline_tag: automatic-speech-recognition
library_name: transformers
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
---
Kotoba-Whisper-Bilingual (v1.0)
Kotoba-Whisper-Bilingual is a collection of distilled Whisper models trained for
- Japanese ASR
- English ASR
- Speech-to-text translation (Japanese -> English)
- Speech-to-text translation (English -> Japanese)
developed through the collaboration between Asahi Ushio and Kotoba Technologies. Following the original work of distil-whisper (Robust Knowledge Distillation via Large-Scale Pseudo Labelling), we employ OpenAI's Whisper large-v3 as the teacher model for Japanese and English ASR, and translate its transcriptions into English and Japanese with ChatGPT to obtain the training data for speech-to-text translation. We use ReazonSpeech for Japanese ASR and Japanese-speech-to-English-text translation, and Multilingual LibriSpeech for English ASR and English-speech-to-Japanese-text translation. Kotoba-whisper-bilingual's loss objective consists of cross-entropy on both the ASR and translation tasks, plus a KL-divergence loss on the ASR task only. The student model consists of the full encoder of the teacher large-v3 model and a two-layer decoder initialized from the first and last layers of the large-v3 decoder.
Kotoba-Whisper is 6.3x faster than large-v3 while retaining error rates comparable to large-v3.
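As a rough illustration of that objective, the training loss can be sketched as below (hypothetical tensor names and an assumed equal weighting between the two terms; the actual training code is in the kotoba-whisper repository):
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, is_asr, kl_weight=1.0):
    # cross-entropy against the (pseudo-)labels for both ASR and translation samples
    vocab_size = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.view(-1, vocab_size), labels.view(-1), ignore_index=-100)
    # KL divergence between student and teacher token distributions, ASR samples only
    log_p_student = F.log_softmax(student_logits[is_asr], dim=-1)
    p_teacher = F.softmax(teacher_logits[is_asr], dim=-1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    return ce + kl_weight * kl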
Evaluation
Speech2Text Translation (Japanese->English)
model | CoVoST2 (Ja->En) | Fleurs (Ja->En) |
---|---|---|
kotoba-tech/kotoba-whisper-bilingual-v1.0 | 73.9 | 98.7 |
japanese-asr/ja-cascaded-s2t-translation (facebook/nllb-200-3.3B) | 64.3 | 67.1 |
japanese-asr/ja-cascaded-s2t-translation (facebook/nllb-200-1.3B) | 65.4 | 68.9 |
japanese-asr/ja-cascaded-s2t-translation (facebook/nllb-200-distilled-1.3B) | 65.6 | 67.4 |
japanese-asr/ja-cascaded-s2t-translation (facebook/nllb-200-distilled-600M) | 68.2 | 72.2 |
openai/whisper-large-v3 | 71 | 86.1 |
openai/whisper-large-v2 | 66.4 | 78.8 |
openai/whisper-large | 66.5 | 86.1 |
openai/whisper-medium | 70.3 | 97.2 |
openai/whisper-small | 97.3 | 132.2 |
openai/whisper-base | 186.2 | 349.6 |
openai/whisper-tiny | 377.2 | 474 |
Speech2Text Translation (English->Japanese)
model | CoVoST2 (En->Ja) | Fleurs (En->Ja) |
---|---|---|
kotoba-tech/kotoba-whisper-bilingual-v1.0 | 69.1 | 74.4 |
japanese-asr/en-cascaded-s2t-translation (facebook/nllb-200-3.3B) | 62.4 | 63.5 |
japanese-asr/en-cascaded-s2t-translation (facebook/nllb-200-1.3B) | 64.4 | 67.2 |
japanese-asr/en-cascaded-s2t-translation (facebook/nllb-200-distilled-1.3B) | 62.4 | 62.9 |
japanese-asr/en-cascaded-s2t-translation (facebook/nllb-200-distilled-600M) | 63.4 | 66.2 |
openai/whisper-large-v3 | 178.9 | 209.5 |
openai/whisper-large-v2 | 179.6 | 201.8 |
openai/whisper-large | 178.7 | 201.8 |
openai/whisper-medium | 178.7 | 202 |
openai/whisper-small | 178.9 | 206.8 |
openai/whisper-base | 179.5 | 214.2 |
openai/whisper-tiny | 185.2 | 200.5 |
ASR (Japanese)
model | CommonVoice 8 (Japanese test set) | JSUT Basic 5000 | ReazonSpeech (held out test set) |
---|---|---|---|
kotoba-tech/kotoba-whisper-bilingual-v1.0 | 9.8 | 9.3 | 16.8 |
kotoba-tech/kotoba-whisper-v2.0 | 9.2 | 8.4 | 11.6 |
kotoba-tech/kotoba-whisper-v1.0 | 9.4 | 8.5 | 12.2 |
openai/whisper-large-v3 | 8.5 | 7.1 | 14.9 |
openai/whisper-large-v2 | 9.7 | 8.2 | 28.1 |
openai/whisper-large | 10 | 8.9 | 34.1 |
openai/whisper-medium | 11.5 | 10 | 33.2 |
openai/whisper-small | 15.1 | 14.2 | 41.5 |
openai/whisper-base | 28.6 | 24.9 | 70.4 |
openai/whisper-tiny | 53.7 | 36.5 | 137.9 |
reazon-research/reazonspeech-nemo-v2 | 9.1 | 7.4 | 11.2 |
ASR (English)
model | ESB (ami) | ESB (earnings22) | ESB (librispeech) | ESB (tedlium) | ESB (voxpopuli) |
---|---|---|---|---|---|
kotoba-tech/kotoba-whisper-bilingual-v1.0 | 16.7 | 15.3 | 2.4 | 4.1 | 8.3 |
openai/whisper-large-v3 | 17.9 | 14.9 | 2.1 | 3.8 | 12.7 |
openai/whisper-large-v2 | 18.9 | 16.7 | 2.3 | 4.9 | 7.7 |
openai/whisper-large | 18.8 | 14.9 | 2.6 | 4.2 | 7.7 |
openai/whisper-medium | 18.3 | 14.9 | 2.5 | 4.3 | 7.9 |
openai/whisper-small | 23.1 | 17.2 | 3.5 | 5.3 | 10.8 |
openai/whisper-base | 26.6 | 21 | 6 | 6.1 | 11.3 |
openai/whisper-tiny | 31.9 | 30.5 | 8.2 | 11.7 | 15.1 |
japanese-asr/distil-whisper-bilingual-v1.0 | 20.7 | 18.6 | 2.4 | 6.4 | 10 |
- Latency: Since kotoba-whisper uses the same architecture as distil-whisper/distil-large-v3, it inherits the improved latency over openai/whisper-large-v3 (6.3x faster than large-v3; see the table below, taken from distil-whisper/distil-large-v3).
Model | Params / M | Rel. Latency |
---|---|---|
kotoba-tech/kotoba-whisper-v2.0 | 756 | 6.3 |
kotoba-tech/kotoba-whisper-v1.0 | 756 | 6.3 |
openai/whisper-large-v3 | 1550 | 1.0 |
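The relative latency figures above are taken from distil-large-v3; for a rough number on your own hardware, one option is to time the pipeline directly (a minimal sketch using the Transformers setup described below, not a rigorous benchmark):
import time
import torch
from transformers import pipeline
from datasets import load_dataset

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
audio = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")[0]["audio"]
generate_kwargs = {"language": "ja", "task": "transcribe"}

for model_id in ["kotoba-tech/kotoba-whisper-bilingual-v1.0", "openai/whisper-large-v3"]:
    pipe = pipeline("automatic-speech-recognition", model=model_id, torch_dtype=torch_dtype, device=device)
    # pass a fresh copy on each call: the pipeline consumes the input dict in place
    pipe({"array": audio["array"], "sampling_rate": audio["sampling_rate"]}, generate_kwargs=generate_kwargs)  # warm-up
    start = time.time()
    pipe({"array": audio["array"], "sampling_rate": audio["sampling_rate"]}, generate_kwargs=generate_kwargs)
    print(model_id, f"{time.time() - start:.2f} sec")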
Transformers Usage
Kotoba-Whisper is supported in the Hugging Face 🤗 Transformers library from version 4.39 onwards. To run the model, first install the latest version of Transformers.
pip install --upgrade pip
pip install --upgrade transformers accelerate
Short-Form Transcription
The model can be used with the pipeline class to transcribe short-form audio files (shorter than 30 seconds) as follows:
Download a sample audio file:
wget
import torch
from transformers import pipeline
from datasets import load_dataset
# config
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
# load model
pipe = pipeline(
"automatic-speech-recognition",
model="kotoba-tech/kotoba-whisper-bilingual-v1.0",
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs
)
generate_kwargs = {"language": "ja", "task": "transcribe"}
# load sample audio
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = dataset[0]["audio"]
# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline (make sure the audio is sampled at 16kHz):
- result = pipe(sample, generate_kwargs=generate_kwargs)
+ result = pipe("audio.mp3", generate_kwargs=generate_kwargs)
- For segment-level timestamps, pass the argument return_timestamps=True and return the "chunks" output:
result = pipe(sample, return_timestamps=True, generate_kwargs=generate_kwargs)
print(result["chunks"])
Sequential Long-Form: Kotoba-whisper is designed to be compatible with OpenAI's sequential long-form transcription algorithm. This algorithm uses a sliding window for buffered inference of long audio files (> 30 seconds) and returns more accurate transcriptions than the chunked long-form algorithm. By default, if a long audio file is passed to the model, it is transcribed with the sequential long-form algorithm. The sequential long-form algorithm should be used in either of the following scenarios:
- Transcription accuracy is the most important factor, and latency is less of a consideration
- You are transcribing batches of long audio files, in which case the latency of sequential is comparable to chunked, while being up to 0.5% WER more accurate
If you are transcribing single long audio files and latency is the most important factor, you should use the chunked algorithm described below. For a detailed explanation of the different algorithms, refer to Section 5 of the Distil-Whisper paper. The pipeline class can be used to transcribe long audio files with the sequential algorithm as follows:
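The snippet below is a minimal sketch: it mirrors the chunked example in the next section but omits chunk_length_s, so inputs longer than 30 seconds are decoded with the sequential algorithm.
import numpy as np
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-bilingual-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "ja", "task": "transcribe"}
# load model (no chunk_length_s, so long inputs are handled sequentially)
pipe = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch_dtype,
    device=device,
    model_kwargs=model_kwargs
)
# load sample audio (concatenate instances to create a long audio)
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]["audio"]["sampling_rate"]}
# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])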
Chunked Long-Form
This algorithm should be used when a single large audio file is being transcribed and the fastest possible inference is required. In such circumstances, the chunked algorithm is up to 9x faster than OpenAI's sequential long-form implementation (see Table 7 of the Distil-Whisper paper).
To enable chunking, pass the chunk_length_s parameter to the pipeline. For distil-large-v3, a chunk length of 25 seconds is optimal. To activate batching over long audio files, pass the argument batch_size:
import numpy as np
import torch
from transformers import pipeline
from datasets import load_dataset
# config
model_id = "kotoba-tech/kotoba-whisper-bilingual-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "ja", "task": "transcribe"}
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
chunk_length_s=15,
batch_size=16
)
# load sample audio (concatenate instances to create a long audio)
dataset = load_dataset("japanese-asr/ja_asr.reazonspeech_test", split="test")
sample = {"array": np.concatenate([i["array"] for i in dataset[:20]["audio"]]), "sampling_rate": dataset[0]['audio']['sampling_rate']}
# run inference
result = pipe(sample, generate_kwargs=generate_kwargs)
print(result["text"])
Additional Speed & Memory Improvements
You can apply additional speed and memory improvements to further reduce inference time and VRAM requirements. These optimisations primarily target the attention kernel, swapping it from an eager implementation to a more efficient flash attention version.
Flash Attention 2
We recommend using Flash-Attention 2 if your GPU allows for it. To do so, you first need to install Flash Attention:
pip install flash-attn --no-build-isolation
Then pass attn_implementation="flash_attention_2" to from_pretrained:
- model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
+ model_kwargs = {"attn_implementation": "flash_attention_2"} if torch.cuda.is_available() else {}
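If you want the same script to fall back to SDPA when flash-attn is not installed, one option is a small guard like the following (an illustrative pattern, not part of the original example):
import importlib.util
import torch

# use Flash Attention 2 only when a CUDA GPU is available and flash-attn is installed
use_flash = torch.cuda.is_available() and importlib.util.find_spec("flash_attn") is not None
model_kwargs = {"attn_implementation": "flash_attention_2" if use_flash else "sdpa"} if torch.cuda.is_available() else {}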
Model Details
See https://huggingface.co/distil-whisper/distil-large-v3#model-details.
Training
Please refer to https://github.com/kotoba-tech/kotoba-whisper for details of model training. The datasets used in distillation and all model variations can be found at https://huggingface.co/japanese-asr.
Evaluation
The following code snippet demonstrates how to evaluate the kotoba-whisper model on the held-out ReazonSpeech test set. First, we need to install the required packages, including 🤗 Datasets to load the audio data and 🤗 Evaluate to perform the CER calculation:
pip install --upgrade pip
pip install --upgrade transformers datasets[audio] evaluate jiwer
Evaluation can then be run end-to-end with the following example:
import torch
from transformers import pipeline
from datasets import load_dataset
from evaluate import load
from transformers.models.whisper.english_normalizer import BasicTextNormalizer
# model config
model_id = "kotoba-tech/kotoba-whisper-bilingual-v1.0"
torch_dtype = torch.bfloat16 if torch.cuda.is_available() else torch.float32
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model_kwargs = {"attn_implementation": "sdpa"} if torch.cuda.is_available() else {}
generate_kwargs = {"language": "japanese", "task": "transcribe"}
normalizer = BasicTextNormalizer()
# data config
dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
audio_column = 'audio'
text_column = 'transcription'
# load model
pipe = pipeline(
"automatic-speech-recognition",
model=model_id,
torch_dtype=torch_dtype,
device=device,
model_kwargs=model_kwargs,
batch_size=16
)
# load the dataset and run inference
dataset = load_dataset(dataset_name, split="test")
transcriptions = pipe(dataset['audio'], generate_kwargs=generate_kwargs)
transcriptions = [normalizer(i['text']).replace(" ", "") for i in transcriptions]
references = [normalizer(i).replace(" ", "") for i in dataset['transcription']]
# compute the CER metric
cer_metric = load("cer")
cer = 100 * cer_metric.compute(predictions=transcriptions, references=references)
print(cer)
The Hugging Face links to the major Japanese ASR datasets for evaluation are summarized here.
For example, to evaluate the model on JSUT Basic5000, change the dataset_name:
- dataset_name = "japanese-asr/ja_asr.reazonspeech_test"
+ dataset_name = "japanese-asr/ja_asr.jsut_basic5000"
Acknowledgements
- OpenAI for the Whisper model.
- Hugging Face 🤗 Transformers for the model integration.
- Hugging Face 🤗 for the Distil-Whisper codebase.
- Reazon Human Interaction Lab for the ReazonSpeech dataset.