
Model Card for zefiro-7b-base-ITA

Last Update: 20/02/2024

Zefiro base is a continually pre-trained model for the Italian language, based on Mistral-7B and trained on a subset of the Italian portion of the OSCAR corpus and the Italian Wikipedia.

Model Details

Zefiro base is a language model continually pre-trained from the mistralai/Mistral-7B-v0.1 checkpoint and adapted to the Italian language.

Model description

Code

Loss

Computation

The model was trained on a cluster of 4 H100 GPUs from RunPod.
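
As a rough illustration of the continual pre-training setup, the sketch below uses the 🤗 Trainer with a plain causal-LM objective. The corpus file, sequence length, and hyperparameters are assumptions made for the example, not the configuration actually used for Zefiro.

# Minimal continual pre-training sketch with 🤗 Transformers.
# The dataset and all hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder corpus: any Italian text dataset with a "text" column works here.
dataset = load_dataset("text", data_files={"train": "italian_corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="zefiro-base-ckpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        bf16=True,  # H100s support bfloat16
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    # mlm=False gives the standard next-token-prediction (causal LM) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

On a multi-GPU node such as the 4 H100 cluster above, a script like this would normally be launched with torchrun or accelerate launch for data-parallel training.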

Evaluations

Model          | ARC-c | HellaSwag | MMLU  | AVG
---------------|-------|-----------|-------|------
Mixtral 8x7B   | 52.8  | 75.1      | 70.9  | 66.27
LLama2 70b     | 49.4  | 70.9      | 65.1  | 61.80
zefiro-dpo-7b  | 52.69 | 67.09     | 50.8  | 56.86
zefiro-base-7b | 51.07 | 63.47     | 52.97 | 55.84
zefiro-sft-7b  | 50.98 | 62.71     | 51.96 | 55.22
LLama1 34B     | 42.9  | 65.4      | 49.0  | 52.43
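
Scores like these are typically produced with EleutherAI's lm-evaluation-harness. A hedged sketch using its Python API follows; the Italian task names (arc_it, hellaswag_it, m_mmlu_it) are assumptions and may differ between harness versions.

# Hedged evaluation sketch using lm-evaluation-harness (v0.4+).
# The Italian task names below are assumptions; check the task list
# of your installed harness version before running.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=mii-community/zefiro-7b-base-ITA,dtype=float16",
    tasks=["arc_it", "hellaswag_it", "m_mmlu_it"],
    batch_size=8,
)
print(results["results"])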

Intended uses & limitations

Here's how you can run the model with the 🤗 Transformers library:

# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mii-community/zefiro-7b-base-ITA"
model = AutoModelForCausalLM.from_pretrained(model_id)
model.to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")


sys_prompt = "Sei un assistente disponibile, rispettoso e onesto. " \
         "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
         "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
         "Assicurati che le tue risposte siano socialmente imparziali e positive. " \
         "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
         "Se non conosci la risposta a una domanda, non condividere informazioni false."

def generate_text(sys_prompt, user_prompt):
    messages = [{'content': sys_prompt, 'role': 'assistant'},
                {'content': user_prompt, 'role': 'user'}]
    # Build the prompt string from the chat template, then generate
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    model_inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    generated_ids = model.generate(**model_inputs, max_new_tokens=1024)
    return tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]


print(generate_text(sys_prompt, 'cosa ne pensi della politica italiana?'))
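
For long generations, greedy decoding can loop or repeat. Sampling parameters can be passed straight through to model.generate inside generate_text; the values below are illustrative defaults, not settings tuned for Zefiro:

# Illustrative sampling settings (assumed values, not tuned for this model);
# replace the model.generate call inside generate_text with:
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)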

Bias, Risks, and Limitations

Zefiro-7b-base-ITA has not been aligned to human preferences for safety within an RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). The size and composition of the corpus used to train the base model (mistralai/Mistral-7B-v0.1) are also unknown, but it likely included a mix of web data and technical sources such as books and code. See the Falcon 180B model card for an example of this.

Training Data

We used a subset of the Italian portion of OSCAR and the Italian Wikipedia as training data.
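
As a minimal sketch, such a corpus could be assembled with the 🤗 datasets library; the dataset versions and snapshot date below are assumptions, since this card does not specify which releases were used.

# Illustrative only: the exact OSCAR/Wikipedia versions used for Zefiro
# are not documented here. OSCAR is gated and requires authentication.
from datasets import load_dataset

# Italian split of OSCAR, streamed to avoid downloading the full corpus
oscar_it = load_dataset("oscar-corpus/OSCAR-2301", "it",
                        split="train", streaming=True)

# Italian Wikipedia dump (assumed snapshot date)
wiki_it = load_dataset("wikimedia/wikipedia", "20231101.it", split="train")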

Summary

Zefiro-7b-base-ITA is a continually pre-trained version of Mistral-7B for the Italian language.

Citation

@misc{tunstall2023zephyr,
      title={Zephyr: Direct Distillation of LM Alignment}, 
      author={Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Kashif Rasul and Younes Belkada and Shengyi Huang and Leandro von Werra and Clémentine Fourrier and Nathan Habib and Nathan Sarrazin and Omar Sanseviero and Alexander M. Rush and Thomas Wolf},
      year={2023},
      eprint={2310.16944},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

@misc{basile2023llamantino,
      title={LLaMAntino: LLaMA 2 Models for Effective Text Generation in Italian Language}, 
      author={Pierpaolo Basile and Elio Musacchio and Marco Polignano and Lucia Siciliani and Giuseppe Fiameni and Giovanni Semeraro},
      year={2023},
      eprint={2312.09993},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Model Card Authors

giux78

Model Card Contact

ale.ercolani@gmail.com
