Stanford Alpaca-7B

This repo hosts the weight diff for Stanford Alpaca-7B. Applying this diff to Meta's LLaMA weights reconstructs the original Alpaca-7B model weights.

To recover the original Alpaca-7B weights, follow these steps:

1. Convert Meta's released weights into Hugging Face format by following this guide:
    https://huggingface.co/docs/transformers/main/model_doc/llama
2. Clone the released weight diff to your local machine. The weight diff is hosted at:
    https://huggingface.co/tatsu-lab/alpaca-7b/tree/main
3. Run the recovery script with the correct paths, e.g.:
    python weight_diff.py recover --path_raw <path_to_step_1_dir> --path_diff <path_to_step_2_dir> --path_tuned <path_to_store_recovered_weights>
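Conceptually, the recovery step just adds the published diff back onto the raw LLaMA parameters: the diff was computed as (tuned - raw), so tuned = raw + diff. Below is a minimal sketch of that idea using plain Python lists as stand-ins for parameter tensors; the actual `weight_diff.py` operates on full model state dicts (and also verifies integrity), so the function name and data layout here are illustrative assumptions, not the script's real API.

```python
def recover_weights(raw_state, diff_state):
    """Add the weight diff back onto the raw weights, parameter by parameter.

    Illustrative sketch: real recovery works on tensors in a model state dict,
    but the arithmetic is the same elementwise addition shown here.
    """
    return {
        name: [r + d for r, d in zip(raw_params, diff_state[name])]
        for name, raw_params in raw_state.items()
    }

# Toy example with two-element "parameters":
raw = {"w": [1.0, 2.0]}
diff = {"w": [0.5, -0.5]}
print(recover_weights(raw, diff))  # {'w': [1.5, 1.5]}
```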

Once step 3 completes, you should have a directory with the recovered weights, from which you can load the model as follows:

    import transformers
    alpaca_model = transformers.AutoModelForCausalLM.from_pretrained("<path_to_store_recovered_weights>")
    alpaca_tokenizer = transformers.AutoTokenizer.from_pretrained("<path_to_store_recovered_weights>")
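Alpaca was fine-tuned on instruction data rendered with a fixed prompt template, so at inference time requests should be wrapped in that template before being passed to the tokenizer and model. The helper below is a sketch of that formatting (the template text follows the Stanford Alpaca repo's convention; the helper name itself is made up for illustration):

```python
def build_alpaca_prompt(instruction, input_text=""):
    """Wrap a request in the instruction-following prompt template Alpaca expects."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. "
            "Write a response that appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("List three fruits.")
# Tokenize `prompt` with alpaca_tokenizer and pass the ids to
# alpaca_model.generate(...) to sample a response.
```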