---
license: llama2
datasets:
- bugdaryan/sql-create-context-instruction
language:
- en
pipeline_tag: text-generation
  
---
# **Llama-2-7B-instruct-text2sql Model Card**

**Model Name**: Llama-2-7B-instruct-text2sql

**Description**: This model is a fine-tuned version of Llama 2 with 7 billion parameters, tailored specifically for text-to-SQL tasks. It has been trained to generate SQL queries given a database schema and a natural language question.

## Model Information

- **Base Model**: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
- **Reference Model**: [bugdaryan/Code-Llama-2-13B-instruct-text2sql](https://huggingface.co/bugdaryan/Code-Llama-2-13B-instruct-text2sql)
- **Finetuning Dataset**: [bugdaryan/sql-create-context-instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction)
- **Training Time**: Approximately 8 hours on 1 A100 40GB GPU

## LoRA Parameters

- **lora_r**: 64
- **lora_alpha**: 16
- **lora_dropout**: 0.1
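
For reference, these values map onto a `peft` `LoraConfig` roughly as follows. This is a minimal sketch: `bias` and `task_type` are not stated in this card and are assumed from common QLoRA fine-tuning setups.

```python
from peft import LoraConfig

# LoRA adapter settings matching the values listed above.
# bias and task_type are assumptions (typical for causal-LM QLoRA fine-tuning),
# not values taken from this card.
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
)
```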

## bitsandbytes Parameters

- **use_4bit**: True
- **bnb_4bit_compute_dtype**: float16
- **bnb_4bit_quant_type**: nf4
- **use_nested_quant**: False
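
In the `transformers` API, these settings correspond to a `BitsAndBytesConfig` along these lines (a sketch of how the quantization was likely configured, based only on the parameters listed above):

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with float16 compute and no nested (double) quantization,
# mirroring the bitsandbytes parameters above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)
```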

## Training Parameters

- **Number of Training Epochs**: 1
- **Mixed-Precision Training (fp16/bf16)**: False
- **Batch Size per GPU for Training**: 32
- **Batch Size per GPU for Evaluation**: 4
- **Gradient Accumulation Steps**: 1
- **Gradient Checkpointing**: True
- **Maximum Gradient Norm (Gradient Clipping)**: 0.3
- **Initial Learning Rate**: 2e-4
- **Weight Decay**: 0.001
- **Optimizer**: paged_adamw_32bit
- **Learning Rate Scheduler Type**: cosine
- **Max Steps**: -1
- **Warmup Ratio**: 0.03
- **Group Sequences by Length**: True
- **Save Checkpoint Every X Update Steps**: 0
- **Log Every X Update Steps**: 25
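
Expressed as `transformers` `TrainingArguments`, the configuration above looks roughly like this (a sketch; `output_dir` is a placeholder, not a value from this card):

```python
from transformers import TrainingArguments

# Hyperparameters mirroring the list above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    fp16=False,
    bf16=False,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,      # 0 disables periodic checkpoints, as listed above
    logging_steps=25,
)
```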

## License

This model is governed by Meta's custom commercial license for Llama 2. For details, please visit: [Custom Commercial License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

## Intended Use

**Intended Use Cases**: This model is intended for commercial and research use in English. It is designed for text-to-SQL tasks, enabling users to generate SQL queries from natural language questions.

**Out-of-Scope Uses**: Any use that violates applicable laws or regulations, use in languages other than English, or any other use prohibited by the Acceptable Use Policy and Licensing Agreement for Llama and its variants.

## Model Capabilities

- Code completion.
- Infilling.
- Instructions / chat.

## Model Architecture

Llama-2-7B-instruct-text2sql is an auto-regressive language model that uses an optimized transformer architecture.

## Model Dates

The base Llama 2 model was pretrained between January 2023 and July 2023; the text-to-SQL fine-tuning was performed afterwards.

## Ethical Considerations and Limitations

Llama-2-7B-instruct-text2sql is a powerful language model, but it may produce inaccurate or objectionable responses in some instances. Safety testing and tuning are recommended before deploying this model in specific applications.

## Hardware and Software

- **Training Libraries**: Custom training libraries
- **Training Hardware**: 1 A100 40GB GPU provided by Google Colab Pro+
- **Carbon Footprint**: Training all Llama models required 400K GPU hours on A100-80GB hardware with emissions offset by Meta's sustainability program.

## Training Data

The base model was pretrained on the same data as Llama 2; this checkpoint was then fine-tuned on the [bugdaryan/sql-create-context-instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) dataset.

## Evaluation Results

For evaluation results of the base model, please refer to Section 3 of the Llama 2 research paper; safety evaluations are covered in Section 4.

## Example Code

You can use the Llama-2-7B-instruct-text2sql model to generate SQL queries from natural language questions. First, install the required packages:
```bash
pip install -q accelerate==0.24.1 transformers==4.35.0 torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0
```

Then load the model, build the instruction prompt, and generate a query:

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer
)

model_name = 'support-pvelocity/Llama-2-7B-instruct-text2sql'

# Load the fine-tuned model in half precision; device_map='auto' lets accelerate
# place it on the available GPU
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Database schema, given as CREATE TABLE statements
table = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text );"

question = 'Find the salesperson who made the most sales.'

# Instruction prompt in the format used for fine-tuning; the schema and question are interpolated directly
prompt = f"[INST] Write SQLite query to answer the following question given the database schema. Please wrap your code answer using ```: Schema: {table} Question: {question} [/INST] Here is the SQLite query to answer to the question: {question}: ``` "

# Tokenize the prompt and move it to the GPU (a CUDA device is assumed to be available)
tokens = tokenizer(prompt, return_tensors="pt").to('cuda:0')
input_ids = tokens.input_ids

# Generate the SQL query and decode the full sequence
generated_ids = model.generate(input_ids=input_ids, max_length=4048, pad_token_id=tokenizer.eos_token_id)
output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
# The SQL is wrapped in ``` fences; keep only the text between the second and third fence
output = output.split('```')[2]
print(output)
```

This code demonstrates how to use the model to generate a SQL query from a provided database schema and a natural language question, which is the core text-to-SQL task the model was fine-tuned for.
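
For repeated queries, the same steps can be wrapped in a small helper. `generate_sql` below is a hypothetical convenience function, not part of the model's API; it reuses the `model` and `tokenizer` loaded above and the same prompt format.

```python
def generate_sql(schema: str, question: str, max_length: int = 4048) -> str:
    """Build the instruction prompt, generate, and return the SQL between the code fences."""
    prompt = (
        "[INST] Write SQLite query to answer the following question given the database schema. "
        f"Please wrap your code answer using ```: Schema: {schema} Question: {question} [/INST] "
        f"Here is the SQLite query to answer to the question: {question}: ``` "
    )
    tokens = tokenizer(prompt, return_tensors="pt").to('cuda:0')
    generated_ids = model.generate(
        input_ids=tokens.input_ids,
        max_length=max_length,
        pad_token_id=tokenizer.eos_token_id,
    )
    output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
    # Keep only the generated SQL between the second and third ``` fence
    return output.split('```')[2].strip()

print(generate_sql(table, question))
```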