s-mizuki-nlp committed
Commit b60b1cb
1 Parent(s): 8d5357f

Updated Note

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ coding contents, etc (see the Training Datasets section of the base model) for c
  The instruction-tuned models (Instruct) were built by supervised fine-tuning (SFT) on the synthetic data specially built for Japanese.
  See the Swallow Model Index section to find other model variants.
 
- **Note**: [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) model was continually pre-trained from the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). All other Llama 3.1 Swallow models were pre-trained from their respective base models.
+ **Note**: [Llama-3.1-Swallow-8B-Instruct-v0.2](https://huggingface.co/tokyotech-llm/Llama-3.1-Swallow-8B-Instruct-v0.2) model was continually pre-trained from the [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) and then instruction-tuned with our instruction datasets.
 
 
  # Release History