---
language:
  - en
license: mit
tags:
  - convAI
  - conversational
  - ASR
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
widget:
  - text: Hello, who are you?
    example_title: Identity
  - text: What can you do?
    example_title: Capabilities
  - text: Create a fastapi endpoint to retrieve the weather given a zip code.
    example_title: Coding
pipeline_tag: text-generation
---

# Phi-2-audio-super

**Base Model:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)

A fine-tuned version of [abacaj/phi-2-super](https://huggingface.co/abacaj/phi-2-super), trained for automatic speech recognition (ASR) on the librispeech_asr dataset.
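If you want to inspect the training data, librispeech_asr is available through the `datasets` library. A minimal sketch; the `"clean"` configuration and the validation split are illustrative choices, not details taken from this card:

```python
from datasets import load_dataset

# Load a small LibriSpeech split; "clean" / "validation" are
# illustrative choices, not the exact training configuration.
ds = load_dataset("librispeech_asr", "clean", split="validation")

sample = ds[0]
print(sample["text"])                     # reference transcription
print(sample["audio"]["sampling_rate"])   # 16 kHz waveforms
```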

How to run inference for text only:

```python
import transformers
import torch

if __name__ == "__main__":
    model_name = "Thytu/phi-2-audio-super"  # this model repository
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

    model = (
        transformers.AutoModelForCausalLM.from_pretrained(
            model_name,
        )
        .to("cuda:0")
        .eval()
    )

    # Exactly like for phi-2-super :D
    messages = [
        {"role": "user", "content": "Hello, who are you?"}
    ]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    input_ids_cutoff = inputs.size(dim=1)  # prompt length, used to strip it from the output

    with torch.no_grad():
        generated_ids = model.generate(
            input_ids=inputs,
            use_cache=True,
            max_new_tokens=512,
            temperature=0.2,
            top_p=0.95,
            do_sample=True,
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.pad_token_id,
        )

    # Decode only the newly generated tokens, dropping the prompt.
    completion = tokenizer.decode(
        generated_ids[0][input_ids_cutoff:],
        skip_special_tokens=True,
    )

    print(completion)
```
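For interactive use, the same generation call can stream tokens to stdout as they are produced, using transformers' `TextStreamer`. A minimal sketch reusing the `model`, `tokenizer`, and `inputs` from the snippet above:

```python
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the prompt
# so only the completion appears.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        input_ids=inputs,
        streamer=streamer,
        use_cache=True,
        max_new_tokens=512,
        temperature=0.2,
        top_p=0.95,
        do_sample=True,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )
```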

How to run inference for ASR:

TODO