
This speech tagger performs transcription, annotates entities, and predicts intent for the SLURP dataset.

The model is suitable for voice AI applications.

Model Details

  • Model type: NeMo ASR
  • Architecture: Conformer CTC
  • Language: English
  • Training data: SLURP dataset
  • Performance metrics: [Metrics]

Usage

To use this model, install the NeMo toolkit with its ASR dependencies:

pip install nemo_toolkit['asr']
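
A quick sanity check (a minimal sketch, assuming a standard pip install) is to import the toolkit and its ASR collection before loading the model:

# Confirm that NeMo and its ASR collection are importable
import nemo
import nemo.collections.asr as nemo_asr

print(f'NeMo version: {nemo.__version__}')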

How to run

import nemo.collections.asr as nemo_asr

# Step 1: Load the ASR model from Hugging Face
model_name = 'WhissleAI/speech-tagger_en_slurp-iot'
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name)

# Step 2: Provide the path to your audio file
audio_file_path = '/path/to/your/audio_file.wav'

# Step 3: Transcribe the audio
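# Note: newer NeMo releases replace `paths2audio_files=` with `audio=`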
transcription = asr_model.transcribe(paths2audio_files=[audio_file_path])
print(f'Transcription: {transcription[0]}')
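
NeMo Conformer models are generally trained on 16 kHz mono audio, so resampling the input first can avoid sample-rate mismatches. The sketch below shows one possible preprocessing step using librosa and soundfile (neither is required by this card; the file paths are placeholders):

# Optional: convert the recording to 16 kHz mono before transcription
import librosa
import soundfile as sf

raw_path = '/path/to/your/audio_file.wav'    # original recording
prepared_path = 'audio_16k_mono.wav'         # resampled copy

audio, sr = librosa.load(raw_path, sr=16000, mono=True)  # resample and downmix
sf.write(prepared_path, audio, sr)

transcription = asr_model.transcribe(paths2audio_files=[prepared_path])
print(f'Transcription: {transcription[0]}')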