Stanford Alpaca

This is a replica of Alpaca by Stanford's tatsu-lab

Trained using the original Stanford Alpaca instructions, with a minor modification to the FSDP configuration

Other versions:

13B: https://huggingface.co/chavinlo/alpaca-13b

13B -> GPT4-x-Alpaca: https://huggingface.co/chavinlo/gpt4-x-alpaca

Compute Used

Trained on 4x A100 GPUs for 6 hours. Compute donated by redmond.ai

NO LoRA was used: this is a natively fine-tuned model, hence "alpaca-native"

If you are interested in more LLaMA-based models, check out my profile or search for other models at https://huggingface.co/models?other=llama

The following link MIGHT point to a quantized version of this model, but be careful: https://boards.4channel.org/g/thread/92173062#p92182396
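
For reference, a minimal inference sketch using the Hugging Face transformers library and the standard Alpaca prompt template. The dtype, device_map, and generation settings below are illustrative assumptions, not values taken from this card:

# Minimal inference sketch (assumes transformers, accelerate, and torch are
# installed and enough GPU memory is available for a 7B model in fp16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "chavinlo/alpaca-native"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # illustrative; use bfloat16/float32 as hardware allows
    device_map="auto",          # assumption, not specified by the card
)

# Alpaca-style prompt template (instruction-only variant).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what FSDP is in one sentence.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Print only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))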

CONFIGURATION (default except fsdp):

torchrun --nproc_per_node=4 --master_port=3045 train.py \
    --model_name_or_path /workspace/llama-7b-hf \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir /workspace/output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 200 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "shard_grad_op auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer' \
    --tf32 True --report_to="wandb"
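
With 4 GPUs, a per-device batch size of 4, and 8 gradient-accumulation steps, the effective global batch size is 4 x 4 x 8 = 128, matching the original Alpaca recipe. The "minor modification" mentioned above is presumably the fsdp flag: the original recipe uses "full_shard auto_wrap", while this run uses "shard_grad_op auto_wrap", which shards gradients and optimizer state but not the parameters during computation (roughly ZeRO-2 versus ZeRO-3).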

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard

Metric                 Value
Avg.                   41.96
ARC (25-shot)          52.3
HellaSwag (10-shot)    77.09
MMLU (5-shot)          41.6
TruthfulQA (0-shot)    37.58
Winogrande (5-shot)    69.46
GSM8K (5-shot)         1.44
DROP (3-shot)          14.23
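
The Avg. row is the arithmetic mean of the seven benchmark scores: (52.3 + 77.09 + 41.6 + 37.58 + 69.46 + 1.44 + 14.23) / 7 ≈ 41.96.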