
Audio Spectrogram Transformer (fine-tuned on AudioSet)

Audio Spectrogram Transformer (AST) model fine-tuned on AudioSet. It was introduced in the paper AST: Audio Spectrogram Transformer by Gong et al. and first released in this repository.

Disclaimer: The team releasing Audio Spectrogram Transformer did not write a model card for this model so this model card has been written by the Hugging Face team.

Model description

The Audio Spectrogram Transformer is the audio counterpart of the Vision Transformer (ViT): the audio is first converted into an image (a spectrogram), which is then split into patches and processed by a Transformer encoder. The model achieves state-of-the-art results on several audio classification benchmarks.
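The audio-to-patches idea can be sketched in a few lines of NumPy. Everything here is illustrative: the synthetic tone, the STFT parameters, and the 16×16 patch size (suggested by the "16-16" in the checkpoint name) are assumptions, not the model's exact preprocessing (AST actually uses log-mel filterbank features produced by its feature extractor):

```python
import numpy as np

# 1 s of synthetic audio at 16 kHz (a 440 Hz tone) stands in for a recording
sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

# Naive magnitude spectrogram: frame the signal, window each frame, take an FFT
frame_len, hop = 400, 160  # 25 ms frames, 10 ms hop (common speech defaults)
frames = np.stack([audio[i:i + frame_len]
                   for i in range(0, len(audio) - frame_len, hop)])
window = np.hanning(frame_len)
spec = np.abs(np.fft.rfft(frames * window, axis=1))  # (time_frames, freq_bins)

# ViT-style patching: cut the 2-D "image" into non-overlapping 16x16 patches
patch = 16
n_t, n_f = spec.shape[0] // patch, spec.shape[1] // patch
patches = (spec[:n_t * patch, :n_f * patch]
           .reshape(n_t, patch, n_f, patch)
           .transpose(0, 2, 1, 3)
           .reshape(-1, patch * patch))  # one flattened vector per patch

print(spec.shape, patches.shape)
```

Each flattened patch would then be linearly projected to a token embedding and fed to the Transformer, exactly as ViT does with image patches.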

Usage

You can use the raw model for classifying audio into one of the AudioSet classes. See the documentation for more info.

Model size: 86.1M parameters (F32, Safetensors)

Model tree for MIT/ast-finetuned-audioset-16-16-0.442

Finetunes: 1 model
Quantizations: 1 model