---
library_name: transformers
language:
- en
- ko
pipeline_tag: translation
tags:
- llama-3-ko
license: mit
datasets:
- recipes
---
# Model Card for llama3-pre1-pre2-ds-lora3
## Model Details

### Model Overview

- **Model name:** llama3-pre1-pre2-ds-lora3 (with fine-tuning)
- **Model type:** Transformer-based language model
- **Model size:** 8 billion parameters
- **Developed by:** 4yo1
- **Languages:** English and Korean
## Model Description

llama3-pre1-pre2-ds-lora3 is a language model pre-trained on a diverse corpus of English and Korean texts and fine-tuned with LoRA (low-rank adaptation). LoRA adapts the model to specific tasks or datasets with a minimal number of additional trainable parameters, making it efficient and effective for specialized applications.
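As an illustration of why LoRA stays parameter-efficient, the `peft` library can attach low-rank adapters to a frozen base model. The sketch below is hypothetical: the base checkpoint, rank, and target modules are assumptions for illustration, not this model's actual training recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical base model; the actual base used for llama3-pre1-pre2-ds-lora3 is not documented
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

# Hypothetical LoRA settings: rank-8 adapters on the attention query/value projections
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
)
model = get_peft_model(base, lora)

# Prints the trainable-parameter count, typically well under 1% of the 8B base parameters
model.print_trainable_parameters()
```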
## How to Use

Sample code:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the configuration, model weights, and tokenizer from the Hugging Face Hub
config = AutoConfig.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
model = AutoModel.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
tokenizer = AutoTokenizer.from_pretrained("4yo1/llama3-pre1-pre2-ds-lora3")
```
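For text generation, a causal language-modeling head is needed, so `AutoModelForCausalLM` is the more typical entry point than the bare `AutoModel`. A minimal sketch, assuming the fine-tuned weights are merged into this checkpoint and that an 8B model in half precision fits on the available GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "4yo1/llama3-pre1-pre2-ds-lora3"

# AutoModelForCausalLM adds the language-modeling head required by generate()
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 keeps the 8B model within one GPU's memory
    device_map="auto",           # requires the accelerate package
)

# Example prompt; the card describes the model as handling English and Korean
prompt = "Translate to Korean: The weather is nice today."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```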