---
library_name: transformers
base_model: codellama/CodeLlama-7b-Instruct-hf
license: llama2
datasets:
- semantixai/LloroV3
language:
- pt
tags:
- code
- analytics
- analise-dados
- portugues-BR
co2_eq_emissions:
  emissions: 1320
  source: "Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine Learning.” ArXiv (Cornell University), 21 Oct. 2019, https://doi.org/10.48550/arxiv.1910.09700."
  training_type: "fine-tuning"
  geographical_location: "Council Bluffs, Iowa, USA."
  hardware_used: "1 A100 40GB GPU"
---

**Lloro 7B**

<img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>

Lloro, developed by Semantix Research Labs, is a language model trained to effectively perform Portuguese data analysis in Python. It is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf, trained on synthetic datasets. The fine-tuning process was performed using the QLoRA methodology on a V100 GPU with 16 GB of RAM.

**Model description**

Model type: A 7B parameter model fine-tuned on synthetic datasets.

Language(s) (NLP): Primarily Portuguese, but the model is capable of understanding English as well.

Finetuned from model: codellama/CodeLlama-7b-Instruct-hf

**What are Lloro's intended uses?**

Lloro is built for data analysis in Portuguese contexts.

Input: Text

Output: Text (Code)

**Usage**

Using Transformers

```python
# Import required libraries
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
)

# Load the model
model_name = "semantixai/LloroV2"
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Define the prompt using the Llama 2 chat template
user_prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."
system = "Provide answers in Python without explanations, only the code"
prompt_template = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user_prompt}[/INST]"

# Tokenize the prompt and move it to the GPU
input_ids = tokenizer([prompt_template], return_tensors="pt")["input_ids"].to("cuda")

# Generate a completion
outputs = base_model.generate(
    input_ids,
    do_sample=True,
    top_p=0.95,
    max_new_tokens=1024,
    temperature=0.1,
)

# Decode and print the output
output_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(output_text)
```

Using an OpenAI-compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html))

```python
from openai import OpenAI

# Point the client at a local OpenAI-compatible server
client = OpenAI(
    api_key="EMPTY",
    base_url="http://localhost:8000/v1",
)

user_prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."

completion = client.chat.completions.create(
    model="semantixai/Lloro",
    temperature=0.1,
    frequency_penalty=0.1,
    messages=[
        {"role": "system", "content": "Provide answers in Python without explanations, only the code"},
        {"role": "user", "content": user_prompt},
    ],
)
```

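The generated code can then be read from the completion object, using standard OpenAI client accessors (this assumes the server from the snippet above is already running):

```python
# Print the generated Python code from the first (and only) choice
print(completion.choices[0].message.content)
```
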
**Params**
Training Parameters

| Params | Training Data                        | Examples | Tokens    | LR   |
|--------|--------------------------------------|----------|-----------|------|
| 7B     | Pairs of synthetic instructions/code | 74,222   | 3,031,188 | 2e-4 |

**Model Sources**

Test Dataset Repository: <https://huggingface.co/datasets/semantixai/LloroV3>

Model Dates: Lloro was trained between February 2024 and April 2024.

**Performance**

| Model         | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT Precision | CodeBERT Recall | CodeBERT F1 | CodeBERT F3 |
|---------------|--------------|----------------|---------|--------------------|-----------------|-------------|-------------|
| GPT 3.5       | 91.22%       | 0.2745         | 0.2189  | 0.7502             | 0.7146          | 0.7303      | 0.7175      |
| Instruct-Base | 88.77%       | 0.3666         | 0.3351  | 0.8244             | 0.8025          | 0.8121      | 0.8052      |
| Instruct-FT   | 94.06%       | 0.5584         | 0.6209  | 0.8943             | 0.9033          | 0.8979      | 0.9021      |

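For reference, a ROUGE-L score like the one reported above can be computed with the `evaluate` library. This is an illustrative sketch with hypothetical prediction/reference strings, not the evaluation harness used to produce the table:

```python
# Illustrative only: a hypothetical prediction/reference pair, not the
# actual evaluation data behind the table above.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["df.groupby('material')['preco'].mean()"],
    references=["df.groupby('material')['preco'].agg(['mean', 'median'])"],
)
print(scores["rougeL"])  # ROUGE-L F-measure in [0, 1]
```
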
**Training Information**
The following hyperparameters were used during training:

| Parameter                 | Value                    |
|---------------------------|--------------------------|
| learning_rate             | 2e-4                     |
| weight_decay              | 0.0001                   |
| train_batch_size          | 7                        |
| eval_batch_size           | 7                        |
| seed                      | 42                       |
| optimizer                 | Adam - paged_adamw_32bit |
| lr_scheduler_type         | cosine                   |
| lr_scheduler_warmup_ratio | 0.06                     |
| num_epochs                | 4.0                      |

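As an illustration, these hyperparameters map onto `transformers.TrainingArguments` roughly as follows. This is a sketch, not the authors' actual training script, and `output_dir` is a hypothetical path:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter table above; the real training
# script is not published here.
training_args = TrainingArguments(
    output_dir="lloro-7b-finetune",  # hypothetical output path
    learning_rate=2e-4,
    weight_decay=0.0001,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=7,
    seed=42,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=4.0,
)
```
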
**QLoRA hyperparameters**
The following parameters related to Quantized Low-Rank Adaptation (QLoRA) and quantization were used during training:

| Parameter     | Value      |
|---------------|------------|
| lora_r        | 64         |
| lora_alpha    | 256        |
| lora_dropout  | 0.1        |
| storage_dtype | "nf4"      |
| compute_dtype | "bfloat16" |

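Expressed with the `peft` and `bitsandbytes` APIs, these settings correspond roughly to the configuration below. This is a minimal sketch assuming a standard QLoRA setup, not the exact code used for training:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 storage with bfloat16 compute, per the table above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # storage_dtype
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute_dtype
)

# Low-rank adapter settings from the table
lora_config = LoraConfig(
    r=64,
    lora_alpha=256,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
```
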
**Experiments**

| Model               | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emissions (kg) |
|---------------------|--------|-------------|--------------|----------------|--------------------|
| Code Llama Instruct | 1      | No          | 1            | 3.01           | 0.43               |
| Code Llama Instruct | 4      | Yes         | 3            | 9.25           | 1.32               |

**Framework versions**

| Library      | Version |
|--------------|---------|
| bitsandbytes | 0.40.2  |
| Datasets     | 2.14.3  |
| Pytorch      | 2.0.1   |
| Tokenizers   | 0.14.1  |
| Transformers | 4.34.0  |