arxiv:2305.13516

Scaling Speech Technology to 1,000+ Languages

Published on May 22, 2023
Abstract

Expanding the language coverage of speech technology has the potential to improve access to information for many more people. However, current speech technology is restricted to about one hundred languages, which is a small fraction of the over 7,000 languages spoken around the world. The Massively Multilingual Speech (MMS) project increases the number of supported languages by 10-40x, depending on the task. The main ingredients are a new dataset based on readings of publicly available religious texts and effectively leveraging self-supervised learning. We built pre-trained wav2vec 2.0 models covering 1,406 languages, a single multilingual automatic speech recognition model for 1,107 languages, speech synthesis models for the same number of languages, as well as a language identification model for 4,017 languages. Experiments show that our multilingual speech recognition model more than halves the word error rate of Whisper on 54 languages of the FLEURS benchmark while being trained on a small fraction of the labeled data.

Community

We've also released an in-depth blog post on how to fine-tune the model: https://huggingface.co/blog/mms_adapters

Take the Massively Multilingual Speech models out for a spin 🤗:

  1. Gradio Demo: https://huggingface.co/spaces/mms-meta/MMS
  2. Model Docs: https://huggingface.co/docs/transformers/main/en/model_doc/mms#overview
  3. Fine-tune MMS ASR models in your language: https://huggingface.co/blog/mms_adapters
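As a quick-start companion to the docs linked above, here is a minimal sketch of multilingual transcription with the `facebook/mms-1b-all` ASR checkpoint in 🤗 Transformers. The French language code `"fra"` and the zero-filled waveform are placeholder assumptions for illustration; see the model docs above for the exact API on your Transformers version:

```python
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, AutoProcessor

model_id = "facebook/mms-1b-all"  # MMS multilingual ASR checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Point the tokenizer vocabulary and the language adapter at the target
# language, here French ("fra"); other ISO 639-3 codes follow the same pattern.
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder waveform: replace with real speech resampled to 16 kHz mono.
audio = np.zeros(16_000, dtype=np.float32)

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))
```

Fine-tuning to a new language swaps in a fresh adapter on top of the same base model; the fine-tuning blog post above walks through that workflow.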

The paper mentions the Slovak language, but there is no model for Slovak on HF. Is this an omission, or was the model not developed for Slovak?

Models citing this paper 1,000

Datasets citing this paper 2

Spaces citing this paper 295
