Whisper Medium EN Fine-Tuned for Air Traffic Control (ATC) - Faster-Whisper Optimized
Model Overview
This model is a fine-tuned version of OpenAI's Whisper Medium EN model, trained specifically on Air Traffic Control (ATC) communication datasets. Fine-tuning significantly improves transcription accuracy on domain-specific aviation communications, reducing the Word Error Rate (WER) by 84% relative to the original pretrained model. The model is particularly effective at handling the accent variations and ambiguous phrasing often encountered in ATC communications.
This model has been converted to an optimized .bin format, making it compatible with Faster-Whisper for faster and more efficient inference.
- Base Model: OpenAI Whisper Medium EN
- Fine-tuned Model WER: 15.08%
- Pretrained Model WER: 94.59%
- Relative Improvement: 84.06%
- Optimized Format: Compatible with Faster-Whisper
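The relative-improvement figure follows directly from the two WER numbers above; as a quick sanity check in plain Python:

```python
pretrained_wer = 94.59   # WER (%) of the original Whisper Medium EN on ATC data
fine_tuned_wer = 15.08   # WER (%) after fine-tuning

# Relative improvement: the fraction of the baseline error that was eliminated.
relative_improvement = (pretrained_wer - fine_tuned_wer) / pretrained_wer * 100
print(f"{relative_improvement:.2f}%")  # 84.06%
```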
You can access the fine-tuned model on Hugging Face: jacktol/whisper-medium.en-fine-tuned-for-ATC-faster-whisper
Model Description
Whisper Medium EN fine-tuned for ATC is optimized to handle short, distinct transmissions between pilots and air traffic controllers. It is fine-tuned using data from the ATC Dataset, a combined and cleaned dataset sourced from the following:
- ATCO2 corpus (1-hour test subset)
- UWB-ATCC corpus
The ATC Dataset merges these two original sources, filtering and refining the data to enhance transcription accuracy for domain-specific ATC communications. The model has been further converted to a .bin format for compatibility with Faster-Whisper, ensuring faster and more efficient processing.
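Loading the converted model with Faster-Whisper typically looks like the sketch below. The repo id is taken from this card; the device and compute type are illustrative choices (the model weights are downloaded on first use):

```python
def transcribe_atc(audio_path: str,
                   model_id: str = "jacktol/whisper-medium.en-fine-tuned-for-ATC-faster-whisper"):
    """Transcribe an ATC recording with the converted Faster-Whisper model."""
    from faster_whisper import WhisperModel  # pip install faster-whisper

    # compute_type="int8" is an illustrative choice for CPU inference;
    # on a GPU, device="cuda" with compute_type="float16" is common.
    model = WhisperModel(model_id, device="cpu", compute_type="int8")
    segments, info = model.transcribe(audio_path, beam_size=5)
    return " ".join(segment.text.strip() for segment in segments)
```

ATC transmissions are short, so joining segment texts with spaces is usually sufficient to recover the full transcript of a clip.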
Intended Use
The fine-tuned Whisper model is designed for:
- Transcribing aviation communication: Providing accurate transcriptions for ATC communications, including accents and variations in English phrasing.
- Air Traffic Control Systems: Assisting in real-time transcription of pilot-ATC conversations, helping improve situational awareness.
- Research and training: Useful for researchers, developers, or aviation professionals studying ATC communication or developing new tools for aviation safety.
You can test the model online using the ATC Transcription Assistant, which lets you upload audio files and generate transcriptions.
Training Procedure
- Hardware: Fine-tuning was conducted on two A100 GPUs (80 GB each).
- Epochs: 10
- Learning Rate: 1e-5
- Batch Size: 32 (effective batch size with gradient accumulation)
- Augmentation: Dynamic data augmentation techniques (Gaussian noise, pitch shifting, etc.) were applied during training.
- Evaluation Metric: Word Error Rate (WER)
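The exact augmentation pipeline is not published here; as one illustration of the "Gaussian noise" step listed above, SNR-controlled additive noise can be sketched with NumPy (the snr_db default and seed are assumptions, not values from the training run):

```python
import numpy as np

def add_gaussian_noise(audio: np.ndarray, snr_db: float = 20.0, rng=None) -> np.ndarray:
    """Add Gaussian noise to a waveform at a target signal-to-noise ratio (dB)."""
    if rng is None:
        rng = np.random.default_rng(0)
    signal_power = np.mean(audio ** 2)
    # Convert the SNR from dB to a linear power ratio to size the noise.
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return audio + noise
```

Applying this on-the-fly during training (with randomized SNR, alongside pitch shifting) exposes the model to degraded-channel audio similar to real ATC radio transmissions.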
Limitations
While the fine-tuned model performs well in ATC-specific communications, it may not generalize as effectively to other domains of speech. Additionally, like most speech-to-text models, transcription accuracy can be affected by extremely poor-quality audio or heavily accented speech not encountered during training.
References
- Blog Post: Fine-Tuning Whisper for ATC: 84% Improvement in Transcription Accuracy
- GitHub Repository: Fine-Tuning Whisper on ATC Data