
Whisper-Small-En: Optimized for Mobile Deployment

Automatic speech recognition (ASR) model for English transcription as well as translation

OpenAI’s Whisper ASR (Automatic Speech Recognition) model is a state-of-the-art system designed for transcribing spoken language into written text. It exhibits robust performance in realistic, noisy environments, making it highly reliable for real-world applications. Specifically, it excels in long-form transcription, accurately transcribing audio clips up to 30 seconds long. Time to the first token is the encoder's latency, while time to each additional token is the decoder's latency, assuming the mean decoded sequence length given in the model stats below (see the worked estimate after the performance table).

This model is an implementation of Whisper-Small-En found here.

This repository provides scripts to run Whisper-Small-En on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Speech recognition
  • Model Stats:
    • Model checkpoint: small.en
    • Input resolution: 80x3000 (30 seconds of audio)
    • Mean decoded sequence length: 112 tokens
    • Number of parameters (WhisperEncoder): 102M
    • Model size (WhisperEncoder): 390 MB
    • Number of parameters (WhisperDecoder): 139M
    • Model size (WhisperDecoder): 531 MB
| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| WhisperEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 701.302 ms | 72 - 469 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 882.816 ms | 0 - 210 MB | FP16 | NPU | Whisper-Small-En.so |
| WhisperEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 518.612 ms | 110 - 198 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 806.969 ms | 198 - 4389 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 555.939 ms | 46 - 72 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 551.503 ms | 0 - 909 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 696.967 ms | 115 - 2771 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperEncoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 696.87 ms | 110 - 448 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 672.35 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 686.872 ms | 0 - 457 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 725.596 ms | 1 - 3 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 695.077 ms | 92 - 485 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8775 (Proxy) | SA8775P Proxy | QNN | 699.338 ms | 6 - 35 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 694.861 ms | 92 - 485 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 703.611 ms | 1 - 2 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | SA8295P ADP | SA8295P | TFLITE | 658.845 ms | 108 - 140 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | SA8295P ADP | SA8295P | QNN | 728.17 ms | 3 - 8 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 914.664 ms | 110 - 208 MB | FP16 | GPU | Whisper-Small-En.tflite |
| WhisperEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 525.457 ms | 0 - 0 MB | FP16 | NPU | Use Export Script |
| WhisperEncoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1357.199 ms | 449 - 449 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 25.194 ms | 16 - 19 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 11.873 ms | 61 - 130 MB | FP16 | NPU | Whisper-Small-En.so |
| WhisperDecoder | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 56.651 ms | 120 - 122 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 19.434 ms | 16 - 1127 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 9.3 ms | 54 - 151 MB | FP16 | NPU | Whisper-Small-En.so |
| WhisperDecoder | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 46.458 ms | 85 - 1561 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 15.335 ms | 14 - 262 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 8.052 ms | 57 - 190 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 39.762 ms | 108 - 881 MB | FP16 | NPU | Whisper-Small-En.onnx |
| WhisperDecoder | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 24.804 ms | 13 - 16 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 12.306 ms | 57 - 58 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 25.3 ms | 16 - 19 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8255 (Proxy) | SA8255P Proxy | QNN | 12.783 ms | 61 - 62 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 25.293 ms | 16 - 18 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8775 (Proxy) | SA8775P Proxy | QNN | 12.605 ms | 57 - 58 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 25.415 ms | 16 - 19 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8650 (Proxy) | SA8650P Proxy | QNN | 12.632 ms | 64 - 65 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | SA8295P ADP | SA8295P | TFLITE | 27.126 ms | 16 - 243 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | SA8295P ADP | SA8295P | QNN | 14.275 ms | 57 - 62 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 27.358 ms | 16 - 1105 MB | FP16 | NPU | Whisper-Small-En.tflite |
| WhisperDecoder | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 14.504 ms | 57 - 156 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 10.992 ms | 61 - 61 MB | FP16 | NPU | Use Export Script |
| WhisperDecoder | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 53.612 ms | 232 - 232 MB | FP16 | NPU | Whisper-Small-En.onnx |
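
As a rough illustration of how the encoder and decoder numbers combine, the sketch below estimates end-to-end time for one 30-second chunk on Samsung Galaxy S23 with the QNN runtime, using the figures from the table and the mean decoded sequence length from the model stats. This is a back-of-the-envelope estimate, not a measured end-to-end result.

# Back-of-the-envelope latency estimate for one 30 s audio chunk
# (Samsung Galaxy S23, QNN runtime; numbers from the table above).
encoder_ms = 882.816           # time to first token: one encoder pass
decoder_ms_per_token = 11.873  # time per additional token: one decoder pass
mean_decoded_tokens = 112      # mean decoded sequence length (model stats)

total_ms = encoder_ms + mean_decoded_tokens * decoder_ms_per_token
print(f"~{total_ms:.0f} ms per 30 s chunk")  # ~2213 ms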

Installation

This model can be installed as a Python package via pip.

pip install "qai-hub-models[whisper_small_en]"

Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on the cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
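
As a quick sanity check that the client is configured correctly, you can list the cloud-hosted devices your token has access to. A minimal sketch using the qai_hub client API:

import qai_hub as hub

# Lists the cloud-hosted devices available to your account; this call will
# fail with an authentication error if the API token is not configured.
for device in hub.get_devices():
    print(device.name)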

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.whisper_small_en.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: To run this demo in a Jupyter Notebook or Google Colab-like environment, add the following to your cell (instead of the command above).

%run -m qai_hub_models.models.whisper_small_en.demo

Run model on a cloud-hosted device

In addition to the demo, you can run the model on a cloud-hosted Qualcomm® device. The export script below does the following:

  • Runs a performance check on-device on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Runs an accuracy check between PyTorch and on-device outputs.
python -m qai_hub_models.models.whisper_small_en.export
Profiling Results
------------------------------------------------------------
WhisperEncoder
Device                          : Samsung Galaxy S23 (13)   
Runtime                         : TFLITE                    
Estimated inference time (ms)   : 701.3                     
Estimated peak memory usage (MB): [72, 469]                 
Total # Ops                     : 911                       
Compute Unit(s)                 : GPU (900 ops) CPU (11 ops)

------------------------------------------------------------
WhisperDecoder
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE                 
Estimated inference time (ms)   : 25.2                   
Estimated peak memory usage (MB): [16, 19]               
Total # Ops                     : 2573                   
Compute Unit(s)                 : NPU (2573 ops)         

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.whisper_small_en import WhisperEncoder, WhisperDecoder

# Load the model
encoder_model = WhisperEncoder.from_pretrained()
decoder_model = WhisperDecoder.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
encoder_sample_inputs = encoder_model.sample_inputs()

traced_encoder_model = torch.jit.trace(encoder_model, [torch.tensor(data[0]) for _, data in encoder_sample_inputs.items()])

# Compile model on a specific device
encoder_compile_job = hub.submit_compile_job(
    model=traced_encoder_model,
    device=device,
    input_specs=encoder_model.get_input_spec(),
)

# Get target model to run on-device
encoder_target_model = encoder_compile_job.get_target_model()
# Trace model
decoder_sample_inputs = decoder_model.sample_inputs()

traced_decoder_model = torch.jit.trace(decoder_model, [torch.tensor(data[0]) for _, data in decoder_sample_inputs.items()])

# Compile model on a specific device
decoder_compile_job = hub.submit_compile_job(
    model=traced_decoder_model,
    device=device,
    input_specs=decoder_model.get_input_spec(),
)

# Get target model to run on-device
decoder_target_model = decoder_compile_job.get_target_model()
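
If you want the compiled assets locally, for example to bundle into an application, each target model can be saved to disk. A minimal sketch; the filenames are illustrative, and the file format matches the compile target (TFLite by default):

# Save the compiled assets locally (filenames are examples).
encoder_target_model.download("whisper_small_en_encoder.tflite")
decoder_target_model.download("whisper_small_en_decoder.tflite")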

Step 2: Performance profiling on cloud-hosted device

After compiling the models in Step 1, they can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

encoder_profile_job = hub.submit_profile_job(
    model=encoder_target_model,
    device=device,
)
decoder_profile_job = hub.submit_profile_job(
    model=decoder_target_model,
    device=device,
)
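
Profile jobs run asynchronously. To block until a job finishes and inspect its metrics programmatically rather than in the web UI, a minimal sketch (the exact structure of the returned profile dict is described in the Qualcomm® AI Hub docs):

# Block until the cloud job completes, then fetch the profile data.
encoder_profile_job.wait()
encoder_profile = encoder_profile_job.download_profile()
print(encoder_profile.keys())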

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

encoder_input_data = encoder_model.sample_inputs()
encoder_inference_job = hub.submit_inference_job(
    model=encoder_target_model,
    device=device,
    inputs=encoder_input_data,
)
encoder_inference_job.download_output_data()
decoder_input_data = decoder_model.sample_inputs()
decoder_inference_job = hub.submit_inference_job(
    model=decoder_target_model,
    device=device,
    inputs=decoder_input_data,
)
decoder_inference_job.download_output_data()

With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
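
For example, a PSNR comparison between an on-device output and the corresponding local PyTorch output might look like the sketch below. It assumes the downloaded outputs arrive as a dict mapping output names to lists of numpy arrays, and torch_reference is a hypothetical placeholder for the matching PyTorch output:

import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    peak = float(np.max(np.abs(reference)))
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)

# The downloaded on-device outputs, keyed by output name.
on_device_outputs = encoder_inference_job.download_output_data()
first_output = next(iter(on_device_outputs.values()))[0]

# torch_reference: hypothetical placeholder for the PyTorch output on the
# same sample inputs; uncomment once you have it.
# print(psnr(torch_reference, first_output))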

Note: This on-device profiling and inference requires access to Qualcomm® AI Hub. Sign up for access.

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.

View on Qualcomm® AI Hub

Get more details on Whisper-Small-En's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of Whisper-Small-En can be found here.
  • The license for the compiled assets for on-device deployment can be found here.
