[Llama-3.1-8B-EZO-1.1-it] Model Card
Model Information
This model is based on Meta AI's Llama 3.1 and has been fine-tuned to improve performance on Japanese tasks. It achieves a significant improvement in Japanese-language performance over the base Llama-3.1-8B-Instruct.
Legal Notice
This model is subject to the Llama 3.1 Community License Agreement. For detailed information, please refer to the official Llama license page: Llama 3.1 License
Usage
import transformers
import torch

model_id = "HODACHI/Llama-3.1-8B-EZO-1.1-it"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    # System prompt: "You are a sincere and excellent Japanese assistant.
    # Unless instructed otherwise, respond in Japanese as a rule."
    {"role": "system", "content": "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、原則日本語で回答してください。"},
    # User prompt: "Please give me five ideas for regaining enthusiasm for work."
    {"role": "user", "content": "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"},
]
outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
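With current transformers versions, passing a list of chat messages to a text-generation pipeline returns the whole conversation in outputs[0]["generated_text"], so the final element printed above is the assistant's reply as a {"role": ..., "content": ...} dictionary.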
Benchmark Results
Limitations and Ethical Considerations
This model, being based on Llama 3.1, carries similar limitations and ethical considerations:
Unpredictable Outputs: Like all LLMs, this model's potential outputs cannot be predicted in advance. It may sometimes generate inaccurate, biased, or problematic responses.
Need for Safety Testing: Developers should perform safety testing and tuning tailored to their specific applications before deploying any applications using this model.
Multilingual Considerations: While this model supports multiple languages, use in non-supported languages is not recommended without implementing fine-tuning and system controls aligned with appropriate policies.
Risks as New Technology: This model represents new technology and, like any new technology, there are risks associated with its use. Testing to date may not have covered all scenarios.
Need for Continuous Improvement: Continuous improvement of the model is necessary through community feedback and reporting mechanisms.
It's crucial for developers and users to be aware of these limitations and strive for responsible use. For more information, please refer to the Llama 3.1 Responsible Use Guide.
[Model Data]
[Training Dataset]
We extracted high-quality data from Japanese Wikipedia and FineWeb to create instruction data. Our innovative training approach allows for performance improvements across various languages and domains, making the model suitable for global use despite its focus on Japanese data.
https://huggingface.co/datasets/legacy-datasets/wikipedia
https://huggingface.co/datasets/HuggingFaceFW/fineweb
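For illustration only, an extraction step along these lines could look like the sketch below. The FineWeb config name ("sample-10BT"), the length-based quality heuristic, and the sample size are placeholder assumptions rather than the actual filtering pipeline; the Japanese Wikipedia dump linked above would be filtered analogously.

# Illustrative sketch: stream a small FineWeb sample and keep passages that
# pass a simple quality heuristic. The heuristic and limits are placeholders.
from datasets import load_dataset

def looks_high_quality(text: str, min_chars: int = 200) -> bool:
    # Placeholder heuristic: keep reasonably long, non-empty passages.
    return bool(text) and len(text) >= min_chars

fineweb = load_dataset(
    "HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True
)

raw_passages = []
for i, row in enumerate(fineweb):
    if i >= 1000:  # small sample, for illustration only
        break
    if looks_high_quality(row["text"]):
        raw_passages.append(row["text"])

print(f"kept {len(raw_passages)} passages")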
Data Preprocessing
We used a plain instruction tuning method combined with QLoRA to train the model on exemplary responses. This approach enhances the model's ability to understand and generate high-quality responses across various languages and contexts.
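As a rough illustration of this setup, the sketch below shows plain instruction tuning with QLoRA using transformers, bitsandbytes, peft, and trl. The LoRA hyperparameters, training arguments, and the tiny stand-in dataset are assumptions for demonstration, not the actual training configuration, and trl argument names vary somewhat between versions.

import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# Load the frozen base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# Train small low-rank adapters on top of the quantized, frozen weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Tiny stand-in for the instruction data described above.
train_data = Dataset.from_list([
    {"text": "指示: 日本で一番高い山は何ですか？\n回答: 富士山です。標高は3776メートルです。"},
])

trainer = SFTTrainer(
    model=model,
    train_dataset=train_data,
    peft_config=lora_config,
    args=SFTConfig(
        output_dir="qlora-sketch",
        dataset_text_field="text",
        per_device_train_batch_size=1,
        num_train_epochs=1,
        bf16=True,
    ),
)
trainer.train()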
Implementation Information
[Pre-Instruction Training]
https://huggingface.co/instruction-pretrain/instruction-synthesizer
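As a rough sketch, the synthesizer can be driven as an ordinary text-generation model through transformers, as shown below; the sample passage is arbitrary, and the exact prompt template and output parsing that instruction-synthesizer expects are documented on its model card rather than reproduced here.

import torch
from transformers import pipeline

# Load the instruction synthesizer as a standard causal LM pipeline.
synthesizer = pipeline(
    "text-generation",
    model="instruction-pretrain/instruction-synthesizer",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# An arbitrary raw passage; in practice this would be a filtered corpus passage.
raw_passage = "富士山は日本で最も高い山で、標高は3776メートルです。"

out = synthesizer(raw_passage, max_new_tokens=400)
# Parse instruction/response pairs from the generated text according to the
# format described on the synthesizer's model card.
print(out[0]["generated_text"])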
[Disclaimer]
This model is provided solely for research and development purposes and should be regarded as an experimental prototype. It is not intended for commercial use or for deployment in mission-critical environments. Use of this model is at the user's own responsibility, and its performance and results are not guaranteed. Axcxept Co., Ltd. assumes no liability for any direct, indirect, special, incidental, or consequential damages, or for any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks involved in using this model and use it at their own discretion.
[Hardware]
H100 × 1 (8-hour run)
Credits
This model is based on Meta AI's Llama 3.1. We acknowledge and thank the Meta AI team for their work on the base model.
[We are.]