This is a Llama-2 version of Guanaco. It was finetuned from the base Llama-2 7B model using the official training scripts found in the QLoRA repo. I wanted it to be as faithful as possible, so I changed nothing in the training script beyond the model it was pointing to. The prompt format is therefore also the same as for the original Guanaco model.
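As a quick sketch, the original Guanaco models use a `### Human:` / `### Assistant:` turn format, so a single-turn prompt for this model can be built like this (the helper name is illustrative, not part of any library):

```python
def format_guanaco_prompt(user_message: str) -> str:
    # Guanaco-style single-turn prompt: the model is expected to
    # continue the text after "### Assistant:".
    return f"### Human: {user_message}\n### Assistant:"


if __name__ == "__main__":
    # Hypothetical usage with transformers (not run here; requires
    # downloading the full f16 weights):
    #
    #   from transformers import AutoModelForCausalLM, AutoTokenizer
    #   tok = AutoTokenizer.from_pretrained("Mikael110/llama-2-7b-guanaco-fp16")
    #   model = AutoModelForCausalLM.from_pretrained("Mikael110/llama-2-7b-guanaco-fp16")
    #   inputs = tok(format_guanaco_prompt("What is QLoRA?"), return_tensors="pt")
    #   print(tok.decode(model.generate(**inputs, max_new_tokens=128)[0]))
    print(format_guanaco_prompt("What is QLoRA?"))
```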

This repo contains the merged f16 model. The QLoRA adapter can be found here.

A 13b version of the model can be found here.

Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model, and comes with no warranty or guarantees of any kind.
