Files changed (1)
  1. README.md +47 -48
README.md CHANGED
@@ -3,7 +3,7 @@ library_name: transformers
  base_model: codellama/CodeLlama-7b-Instruct-hf
  license: llama2
  datasets:
- - semantixai/Test-Dataset-Lloro
  language:
  - pt
  tags:
@@ -11,41 +11,45 @@ tags:
  - analytics
  - analise-dados
  - portugues-BR

  ---

  **Lloro 7B**

  <img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>

-
- Lloro, developed by Semantix Research Labs, is a language model that was trained to effectively perform Portuguese data analysis in Python. It is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf that was trained on synthetic datasets. The fine-tuning process was performed using the QLoRA methodology on a V100 GPU with 16 GB of RAM.
-
-

  **Model description**

-
  Model type: A 7B-parameter model fine-tuned on synthetic datasets.

  Language(s) (NLP): Primarily Portuguese, but the model is capable of understanding English as well.

  Fine-tuned from model: codellama/CodeLlama-7b-Instruct-hf

-
-
  **What is Lloro's intended use(s)?**

-
  Lloro is built for data analysis in Portuguese contexts.

  Input: Text

  Output: Text (Code)

  **Usage**

  Using Transformers
  ```python
  # Import required libraries
  import torch
@@ -55,7 +59,7 @@ from transformers import (
  )

  # Load Model
- model_name = "semantixai/LloroV2"
  base_model = AutoModelForCausalLM.from_pretrained(
  model_name,
  return_dict=True,
@@ -80,7 +84,7 @@ outputs = base_model.generate(
  input_ids,
  do_sample=True,
  top_p=0.95,
- max_new_tokens=1024,
  temperature=0.1,
  )

@@ -90,6 +94,7 @@ display(output_text)
  ```

  Using an OpenAI-compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html))

  ```python
  from openai import OpenAI

@@ -98,65 +103,59 @@ client = OpenAI(
  base_url="http://localhost:8000/v1",
  )
  user_prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."
- completion = client.chat.completions.create(temperature=0.1,frequency_penalty=0.1,model="semantixai/LloroV2",messages=[{"role":"system","content":"Provide answers in Python without explanations, only the code"},{"role":"user","content":user_prompt}])
  ```
-

  **Params**
  Training Parameters
- | Params | Training Data | Examples | Tokens | LR |
- |--------|--------------------------------------|----------|-----------|------|
- | 7B | Pairs of synthetic instructions/code | 28,907 | 3,031,188 | 1e-5 |
-

  **Model Sources**

- Test Dataset Repository: https://huggingface.co/datasets/semantixai/Test-Dataset-Lloro
-
- Model Dates: Lloro was trained between November 2023 and January 2024.

-

  **Performance**
  | Model | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT-Precision | CodeBERT-Recall | CodeBERT-F1 | CodeBERT-F3 |
  |----------------|--------------|----------------|---------|--------------------|-----------------|-------------|-------------|
- | GPT-3.5 | 91.22% | 0.2745 | 0.2189 | 0.7502 | 0.7146 | 0.7303 | 0.7175 |
- | Instruct-Base | 97.40% | 0.2487 | 0.1146 | 0.6997 | 0.6473 | 0.6713 | 0.6518 |
- | Instruct-FT | 97.76% | 0.3264 | 0.3602 | 0.7942 | 0.8178 | 0.8042 | 0.8147 |
-

  **Training Info:**
  The following hyperparameters were used during training:

- | Parameter | Value |
- |---------------------------|--------------------------|
- | learning_rate | 1e-5 |
- | weight_decay | 0.0001 |
- | train_batch_size | 1 |
- | eval_batch_size | 1 |
- | seed | 42 |
  | optimizer | Adam - paged_adamw_32bit |
- | lr_scheduler_type | cosine |
- | lr_scheduler_warmup_ratio | 0.03 |
- | num_epochs | 5.0 |

  **QLoRA hyperparameters**
  The following parameters related to Quantized Low-Rank Adaptation (QLoRA) and quantization were used during training:

- | Parameter | Value |
- |------------------|------------|
- | lora_r | 16 |
- | lora_alpha | 64 |
- | lora_dropout | 0.1 |
- | storage_dtype | "nf4" |
- | compute_dtype | "float16" |
-

  **Experiments**
- | Model | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (kg) |
- |---------------------|--------|-------------|--------------|----------------|-------------------|
- | Code Llama Instruct | 1 | No | 1 | 8.1 | 1.337 |
- | Code Llama Instruct | 5 | Yes | 3 | 45.6 | 9.12 |

  **Framework versions**

@@ -166,4 +165,4 @@ The following parameters related with the Quantized Low-Rank Adaptation and Qua
  | Datasets | 2.14.3 |
  | Pytorch | 2.0.1 |
  | Tokenizers | 0.14.1 |
- | Transformers | 4.34.0 |
 
  base_model: codellama/CodeLlama-7b-Instruct-hf
  license: llama2
  datasets:
+ - semantixai/LloroV3
  language:
  - pt
  tags:
  - analytics
  - analise-dados
  - portugues-BR
+
+ co2_eq_emissions:
+   emissions: 1320
+   source: "Lacoste, Alexandre, et al. “Quantifying the Carbon Emissions of Machine Learning.” ArXiv (Cornell University), 21 Oct. 2019, https://doi.org/10.48550/arxiv.1910.09700."
+   training_type: "fine-tuning"
+   geographical_location: "Council Bluffs, Iowa, USA."
+   hardware_used: "1 A100 40GB GPU"
  ---

  **Lloro 7B**

  <img src="https://cdn-uploads.huggingface.co/production/uploads/653176dc69fffcfe1543860a/h0kNd9OTEu1QdGNjHKXoq.png" width="300" alt="Lloro-7b Logo"/>

+ Lloro, developed by Semantix Research Labs, is a language model that was trained to effectively perform Portuguese data analysis in Python. It is a fine-tuned version of codellama/CodeLlama-7b-Instruct-hf that was trained on synthetic datasets. The fine-tuning process was performed using the QLoRA methodology on an A100 GPU with 40 GB of RAM.

  **Model description**

  Model type: A 7B-parameter model fine-tuned on synthetic datasets.

  Language(s) (NLP): Primarily Portuguese, but the model is capable of understanding English as well.

  Fine-tuned from model: codellama/CodeLlama-7b-Instruct-hf

  **What is Lloro's intended use(s)?**

  Lloro is built for data analysis in Portuguese contexts.

  Input: Text

  Output: Text (Code)

+ **V3 Release**
+ - Context length increased to 2048.
+ - Fine-tuning dataset increased to 74,222 examples.

  **Usage**

  Using Transformers
+
  ```python
  # Import required libraries
  import torch
 
  )

  # Load Model
+ model_name = "semantixai/Lloro"
  base_model = AutoModelForCausalLM.from_pretrained(
  model_name,
  return_dict=True,
 
  input_ids,
  do_sample=True,
  top_p=0.95,
+ max_new_tokens=2048,
  temperature=0.1,
  )

  ```
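The diff shows only part of the Transformers example, so the step that builds `input_ids` for the `generate` call is not visible here. A minimal sketch of one possible way to construct it, assuming the base model's chat template matches the prompt format Lloro expects (the card does not state that format):

```python
# Sketch only: builds input_ids for the generate() call shown above.
# Assumes the tokenizer's default chat template fits Lloro's training format.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [
    {"role": "system", "content": "Provide answers in Python without explanations, only the code"},
    {"role": "user", "content": "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."},
]
# apply_chat_template is available from Transformers 4.34, the version listed below.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(base_model.device)
```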

  Using an OpenAI-compatible inference server (like [vLLM](https://docs.vllm.ai/en/latest/index.html))
+
  ```python
  from openai import OpenAI
 
 
  base_url="http://localhost:8000/v1",
  )
  user_prompt = "Desenvolva um algoritmo em Python para calcular a média e a mediana dos preços de vendas por tipo de material do produto."
+ completion = client.chat.completions.create(temperature=0.1,frequency_penalty=0.1,model="semantixai/Lloro",messages=[{"role":"system","content":"Provide answers in Python without explanations, only the code"},{"role":"user","content":user_prompt}])
  ```
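A small usage note (not part of the card's snippet): with the standard OpenAI Python client used above, the generated code is read from the response object like this:

```python
# Read the generated code back from the chat completion response.
generated_code = completion.choices[0].message.content
print(generated_code)
```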
 

  **Params**
  Training Parameters
+ | Params | Training Data | Examples | Tokens | LR |
+ |--------|--------------------------------------|----------|-----------|------|
+ | 7B | Pairs of synthetic instructions/code | 74,222 | 9,351,532 | 2e-4 |
 

  **Model Sources**

+ Test Dataset Repository: <https://huggingface.co/datasets/semantixai/LloroV3>

+ Model Dates: Lloro was trained between February 2024 and April 2024.
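Since the test dataset is hosted on the Hub, a minimal sketch of loading it with the `datasets` library (the split layout is not described in the card, so it is left to the default):

```python
from datasets import load_dataset

# Sketch: pulls the Lloro test dataset from the Hugging Face Hub.
dataset = load_dataset("semantixai/LloroV3")
print(dataset)  # inspect the available splits and columns
```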

  **Performance**
  | Model | LLM as Judge | CodeBLEU Score | ROUGE-L | CodeBERT-Precision | CodeBERT-Recall | CodeBERT-F1 | CodeBERT-F3 |
  |----------------|--------------|----------------|---------|--------------------|-----------------|-------------|-------------|
+ | GPT-3.5 | 94.29% | 0.3538 | 0.3756 | 0.8099 | 0.8176 | 0.8128 | 0.8164 |
+ | Instruct-Base | 88.77% | 0.3666 | 0.3351 | 0.8244 | 0.8025 | 0.8121 | 0.8052 |
+ | Instruct-FT | 97.95% | 0.5967 | 0.6717 | 0.9090 | 0.9182 | 0.9131 | 0.9171 |
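As an illustration of the text-similarity metrics in the table, here is a small sketch of computing a ROUGE-L score with the `evaluate` package; this is not the evaluation code used for the card, and the example strings are invented:

```python
import evaluate

# Sketch: ROUGE-L between a generated snippet and a reference snippet.
rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["df.groupby('material')['preco'].mean()"],
    references=["df.groupby('material')['preco'].agg(['mean', 'median'])"],
)
print(scores["rougeL"])
```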
 

  **Training Info:**
  The following hyperparameters were used during training:

+ | Parameter | Value |
+ |---------------------------|--------------------------|
+ | learning_rate | 2e-4 |
+ | weight_decay | 0.0001 |
+ | train_batch_size | 7 |
+ | eval_batch_size | 7 |
+ | seed | 42 |
  | optimizer | Adam - paged_adamw_32bit |
+ | lr_scheduler_type | cosine |
+ | lr_scheduler_warmup_ratio | 0.06 |
+ | num_epochs | 4.0 |
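A sketch of how the table above could map onto Hugging Face Transformers `TrainingArguments`; the output directory is a placeholder and `train_batch_size` is assumed to mean the per-device batch size:

```python
from transformers import TrainingArguments

# Sketch: hyperparameters copied from the table; "lloro-checkpoints" is a placeholder.
training_args = TrainingArguments(
    output_dir="lloro-checkpoints",
    learning_rate=2e-4,
    weight_decay=0.0001,
    per_device_train_batch_size=7,  # assumed equivalent of train_batch_size
    per_device_eval_batch_size=7,   # assumed equivalent of eval_batch_size
    seed=42,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.06,
    num_train_epochs=4.0,
)
```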
 
  **QLoRA hyperparameters**
  The following parameters related to Quantized Low-Rank Adaptation (QLoRA) and quantization were used during training:

+ | Parameter | Value |
+ |------------------|------------|
+ | lora_r | 64 |
+ | lora_alpha | 256 |
+ | lora_dropout | 0.1 |
+ | storage_dtype | "nf4" |
+ | compute_dtype | "bfloat16" |
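One way these values could be expressed with `bitsandbytes` and `peft` configuration objects; the LoRA target modules are an assumption, since the card does not list them:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# Sketch: 4-bit NF4 storage with bfloat16 compute, matching the table above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter settings from the table; target_modules is a placeholder guess.
lora_config = LoraConfig(
    r=64,
    lora_alpha=256,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)
```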
 

  **Experiments**
+ | Model | Epochs | Overfitting | Final Epochs | Training Hours | CO2 Emission (kg) |
+ |---------------------|--------|-------------|--------------|----------------|-------------------|
+ | Code Llama Instruct | 1 | No | 1 | 3.01 | 0.43 |
+ | Code Llama Instruct | 4 | Yes | 3 | 9.25 | 1.32 |

  **Framework versions**

  | Datasets | 2.14.3 |
  | Pytorch | 2.0.1 |
  | Tokenizers | 0.14.1 |
+ | Transformers | 4.34.0 |