Model Card for raicrits/Hermes7b_ITA

An open-source LLaMA 2 language model with 7B parameters, fine-tuned from the base model NousResearch/Nous-Hermes-llama-2-7b to follow instructions in Italian.

Model Description

This model is an LLM with 7B parameters based on NousResearch/Nous-Hermes-llama-2-7b, a version of meta-llama/Llama-2-7b fine-tuned to follow instructions. The model was further fine-tuned to follow instructions in Italian, using the LoRA approach on a dataset of 120k instruction/answer pairs randomly sampled from raicrits/Orca_ITA_200k.

This repository contains the base model weights merged with the LoRA adapters obtained during fine-tuning, so the model can be used directly without loading the adapters separately.
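For reference, merging LoRA adapters into the base weights is typically done with the peft library along these lines (a minimal sketch: the adapter path below is hypothetical, since only the merged weights are published):

import torch
from peft import PeftModel
from transformers import LlamaForCausalLM

# Load the instruction-tuned base model
base = LlamaForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b",
    torch_dtype=torch.bfloat16,
)

# Attach the LoRA adapters (hypothetical local path) and fold them
# into the base weights so no adapter is needed at inference time
model = PeftModel.from_pretrained(base, "path/to/lora_adapters")
model = model.merge_and_unload()
model.save_pretrained("Hermes7b_ITA_merged")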

Uses

The model can be used as is to respond to simple instructions in Italian or can be further fine-tuned to perform specific tasks.

Bias, Risks, and Limitations

As with any other LLM, the model may generate content that does not correspond to reality, as well as wrong, biased, offensive, or otherwise inappropriate answers.

How to Get Started with the Model

Prompt template:

"""### Instruction: {instruction}

### Response:
"""

Usage: Use the code below to get started with the model.

import torch
from transformers import LlamaForCausalLM, AutoTokenizer


def generate_prompt_test(instruction):
    # Build a prompt following the template expected by the model
    prompt = f"""### Instruction: {instruction}

### Response:
"""
    return prompt


model_name = "raicrits/Hermes7b_ITA"

# Load the model in bfloat16, placing it automatically on the available devices
model = LlamaForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = True

tokenizer = AutoTokenizer.from_pretrained(model_name, add_eos_token=False)

# "What can you tell me about the god Hermes?"
prompt = generate_prompt_test("Cosa puoi dirmi sul dio Hermes?")
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    do_sample=True,
    num_beams=2,
    top_k=50,
    top_p=0.95,
    max_new_tokens=256,
    early_stopping=True,
)
# Keep only the text generated after the "### Response:" marker
print(tokenizer.decode(outputs[0], skip_special_tokens=True).split("Response:")[1].strip())

Example output:

"""Hermes è un dio dell'antica Grecia. Era il dio del commercio, della comunicazione e del trasporto. Era anche il dio della mente e della intelligenza. Era noto per il suo eloquente linguaggio e la sua capacità di spostarsi velocemente. Era considerato il messaggero degli dèi e spesso veniva raffigurato con un cappello di pelle di capra e sandali."""

(English: "Hermes is a god of ancient Greece. He was the god of commerce, communication, and transport. He was also the god of the mind and of intelligence. He was known for his eloquent speech and his ability to move quickly. He was considered the messenger of the gods and was often depicted with a goatskin cap and sandals.")

Training Details

Training Data

The model was fine-tuned on 120k random records of raicrits/Orca_ITA_200k.

Training Procedure

Fine-tuning was performed using the LoRA approach, with the configuration reported below.

Training Hyperparameters

Training setting:

  • train epochs: 3

  • learning rate: 2e-4

  • mixed precision training: float16

LoRA configuration:

  • r=8

  • lora_alpha=16

  • target_modules=["q_proj","v_proj"]

  • lora_dropout=0.05

  • bias="none"

  • task_type=TaskType.CAUSAL_LM
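Taken together, these settings correspond roughly to the following peft/transformers configuration (a sketch reconstructed from the hyperparameters above; the output directory and other script details are assumptions, as the training script is not published):

from peft import LoraConfig, TaskType, get_peft_model
from transformers import LlamaForCausalLM, TrainingArguments

# LoRA configuration as reported above
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.CAUSAL_LM,
)

# Wrap the base model with trainable LoRA adapters
base = LlamaForCausalLM.from_pretrained("NousResearch/Nous-Hermes-llama-2-7b")
model = get_peft_model(base, lora_config)

# Training settings as reported above (output_dir is a placeholder)
training_args = TrainingArguments(
    output_dir="hermes7b_ita_lora",
    num_train_epochs=3,
    learning_rate=2e-4,
    fp16=True,  # mixed precision training: float16
)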

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: 1x NVIDIA A100 (40 GB)
  • Hours used: 78
  • Cloud Provider: Private Infrastructure
  • Carbon Emitted: 8.42 kg CO2 eq.

Model Card Authors

Stefano Scotta (stefano.scotta@rai.it)

Model Card Contact

stefano.scotta@rai.it
