Safetensors · English · llama · sound language model

jan-hq committed commit 40a072d (1 parent: 9deb2ea)

Update README.md

Files changed (1):
  1. README.md +3 -3

README.md CHANGED
@@ -10,10 +10,10 @@ tags:
 
 ## Model Details
 
-We have developed and released the family [llama3s](https://huggingface.co/collections/homebrew-research/llama3-s-669df2139f0576abc6eb7405). This family is natively understanding audio and text input.
+We have developed and released the family [Ichigo-llama3s](https://huggingface.co/collections/homebrew-research/llama3-s-669df2139f0576abc6eb7405). This family natively understands audio and text input.
 
-We expand the Semantic tokens experiment with WhisperVQ as a tokenizer for audio files from [homebrewltd/llama3.1-s-base-v0.2](https://huggingface.co/homebrewltd/llama3.1-s-base-v0.2) with nearly 1B tokens from the [Instruction Speech WhisperVQ v2](https://huggingface.co/datasets/homebrewltd/instruction-speech-whispervq-v2) dataset.
-
+We expand the Semantic tokens experiment with WhisperVQ as a tokenizer for audio files from [homebrewltd/Ichigo-llama3.1-s-base-v0.3](https://huggingface.co/homebrewltd/Ichigo-llama3.1-s-base-v0.3) with nearly 1B tokens from the [Instruction Speech WhisperVQ v3](https://huggingface.co/datasets/homebrewltd/mixed-instruction-speech-whispervq-v3-full) dataset.
+This is the model checkpoint from step 7000. Due to some noise in the training data, it has an artificially high score on the Speech Instruction benchmark.
 **Model developers** Homebrew Research.
 
 **Input** Text and sound.