---
base_model: meta-llama/Llama-3.2-1B-Instruct
datasets:
- generator
library_name: peft
license: llama3.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-Indonesian
  results: []
language:
- id
pipeline_tag: text-generation
---
# Llama-3.2-1B-Indonesian
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct), optimized for Indonesian language understanding and generation.
## Training and evaluation data
The model was fine-tuned on the [Ichsan2895/alpaca-gpt4-indonesian](https://huggingface.co/datasets/Ichsan2895/alpaca-gpt4-indonesian) dataset.
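For reference, the dataset can be pulled from the Hub with the `datasets` library (a minimal sketch; split names follow the Hub defaults):

```python
from datasets import load_dataset

# Pull the Indonesian Alpaca-GPT4 instruction dataset from the Hub.
dataset = load_dataset("Ichsan2895/alpaca-gpt4-indonesian")
print(dataset)
```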
## Use with Transformers
```python
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # base model; see the PEFT snippet below for the fine-tuned adapter

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "system", "content": "Anda adalah asisten yang selalu menjawab dalam bahasa Indonesia."},
    {"role": "user", "content": "Siapa kamu?"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
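Because this checkpoint is a PEFT (LoRA) adapter rather than a full set of model weights, it is attached on top of the base model. Below is a minimal sketch; `adapter_id` is a placeholder for this repository's full Hub ID, and the prompt is only an example:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-3.2-1B-Instruct"
adapter_id = "Llama-3.2-1B-Indonesian"  # placeholder: replace with this repo's full Hub ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the fine-tuned adapter

messages = [
    {"role": "user", "content": "Jelaskan apa itu kecerdasan buatan."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids=input_ids, max_new_tokens=256)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```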
## Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 6
- optimizer: adamw_torch_fused with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
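A rough mapping of these values onto Hugging Face `TrainingArguments` (a sketch, not the exact training script; `output_dir` is hypothetical, and `bf16` is an assumption for the "Native AMP" mixed precision):

```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters as TrainingArguments.
# output_dir is hypothetical; bf16 is an assumption for "Native AMP".
training_args = TrainingArguments(
    output_dir="Llama-3.2-1B-Indonesian",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=6,  # effective train batch size: 1 x 6 = 6
    num_train_epochs=3,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    optim="adamw_torch_fused",
    bf16=True,
    seed=42,
)
```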
## Training results
*Figure: train loss curve.*
## Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.46.1
- Pytorch 2.4.0+cu121
- Datasets 2.16.1
- Tokenizers 0.20.1