---
license: llama2
datasets:
- bugdaryan/sql-create-context-instruction
language:
- en
pipeline_tag: text-generation
---

# **Llama-2-7B-instruct-text2sql Model Card**

**Model Name**: Llama-2-7B-instruct-text2sql

**Description**: This model is a fine-tuned version of Llama 2 with 7 billion parameters, tailored specifically for text-to-SQL tasks. It has been trained to generate SQL queries given a database schema and a natural language question.

## Model Information

- **Base Model**: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
- **Reference Model**: [bugdaryan/Code-Llama-2-13B-instruct-text2sql](https://huggingface.co/bugdaryan/Code-Llama-2-13B-instruct-text2sql)
- **Fine-tuning Dataset**: [bugdaryan/sql-create-context-instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction)
- **Training Time**: Approximately 8 hours on a single A100 40GB GPU

## LoRA Parameters

- **lora_r**: 64
- **lora_alpha**: 16
- **lora_dropout**: 0.1

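For reference, here is a minimal sketch of how these hyperparameters map onto a `peft` `LoraConfig`. The `target_modules` value is an assumption (the attention projections typically adapted for Llama models); the card does not state which modules were adapted.

```python
# Hypothetical reconstruction of the LoRA config from the values above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,              # lora_r
    lora_alpha=16,     # lora_alpha
    lora_dropout=0.1,  # lora_dropout
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated in the card
)
```
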
## bitsandbytes Parameters

- **use_4bit**: True
- **bnb_4bit_compute_dtype**: float16
- **bnb_4bit_quant_type**: nf4
- **use_nested_quant**: False

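These settings correspond to 4-bit NF4 quantization via `bitsandbytes`. A minimal sketch of the same settings expressed through the standard `transformers` `BitsAndBytesConfig` API:

```python
# Hedged reconstruction of the quantization config from the values above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type
    bnb_4bit_use_double_quant=False,       # use_nested_quant
)
```
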
## Training Parameters

- **Number of Training Epochs**: 1
- **Mixed-Precision Training (fp16/bf16)**: False
- **Batch Size per GPU for Training**: 32
- **Batch Size per GPU for Evaluation**: 4
- **Gradient Accumulation Steps**: 1
- **Gradient Checkpointing**: True
- **Maximum Gradient Norm (Gradient Clipping)**: 0.3
- **Initial Learning Rate**: 2e-4
- **Weight Decay**: 0.001
- **Optimizer**: paged_adamw_32bit
- **Learning Rate Scheduler Type**: cosine
- **Max Steps**: -1
- **Warmup Ratio**: 0.03
- **Group Sequences by Length**: True
- **Save Checkpoint Every X Update Steps**: 0
- **Log Every X Update Steps**: 25

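Expressed as `transformers` `TrainingArguments`, these values look roughly as follows. This is a reconstruction following the common QLoRA fine-tuning recipe, not the authors' exact training script; `output_dir` is a placeholder.

```python
# Sketch of the training hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",  # placeholder, not stated in the card
    num_train_epochs=1,
    fp16=False,
    bf16=False,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,
    logging_steps=25,
)
```
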
## License

This model is governed by Meta's Llama 2 license. For details, please visit: [Llama 2 License](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

## Intended Use

**Intended Use Cases**: This model is intended for commercial and research use in English. It is designed for text-to-SQL tasks, enabling users to generate SQL queries from natural language questions.

**Out-of-Scope Uses**: Any use that violates applicable laws or regulations, use in languages other than English, or any other use prohibited by the Acceptable Use Policy and License Agreement for Llama 2 and its variants.

## Model Capabilities

- Generating SQL queries from a database schema and a natural language question.
- Instruction following / chat.

## Model Architecture

Llama-2-7B-instruct-text2sql is an auto-regressive language model that uses an optimized transformer architecture.

## Model Dates

The base Llama 2 model was trained between January 2023 and July 2023.

## Ethical Considerations and Limitations

Llama-2-7B-instruct-text2sql is a powerful language model, but it may produce inaccurate or objectionable responses in some instances. Safety testing and tuning are recommended before deploying this model in specific applications.

## Hardware and Software

- **Training Libraries**: Custom training libraries
- **Training Hardware**: 1 A100 40GB GPU provided by Google Colab Pro+
- **Carbon Footprint**: Fine-tuning used a single A100 40GB GPU for roughly 8 hours. Meta reports that training the Code Llama model family required 400K GPU hours on A100-80GB hardware, with emissions offset by Meta's sustainability program.

## Training Data

The base model was pretrained on the same data as Llama 2; this checkpoint was then fine-tuned on the [bugdaryan/sql-create-context-instruction](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) dataset.

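To inspect the fine-tuning data yourself, a minimal sketch using the `datasets` library (the `train` split name is an assumption):

```python
# Load and peek at the fine-tuning dataset.
from datasets import load_dataset

ds = load_dataset("bugdaryan/sql-create-context-instruction", split="train")
print(ds[0])  # one instruction-formatted text-to-SQL example
```
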
## Evaluation Results

For evaluation results for the base model, please refer to Section 3 and the safety evaluations in Section 4 of the [Llama 2 research paper](https://arxiv.org/abs/2307.09288).

## Example Code

You can use the Llama-2-7B-instruct-text2sql model to generate SQL queries from natural language questions, as demonstrated in the following code snippet:

```bash
pip install -q accelerate==0.24.1 transformers==4.35.0 torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0
```

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer
)

model_name = 'support-pvelocity/Llama-2-7B-instruct-text2sql'

# Load the model in float16 and let accelerate place it on available devices.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto', torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Database schema as CREATE TABLE statements, passed to the model as context.
table = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text );"

question = 'Find the salesperson who made the most sales.'

# Instruction prompt in the Llama 2 [INST] format; the model completes the
# SQL query inside the trailing ``` fence.
prompt = f"[INST] Write SQLite query to answer the following question given the database schema. Please wrap your code answer using ```: Schema: {table} Question: {question} [/INST] Here is the SQLite query to answer to the question: {question}: ``` "

tokens = tokenizer(prompt, return_tensors="pt").to('cuda:0')
input_ids = tokens.input_ids

generated_ids = model.generate(input_ids=input_ids, max_length=4048, pad_token_id=tokenizer.eos_token_id)
output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

# The decoded text contains two ``` markers from the prompt itself, so the
# generated SQL sits in the third segment, before the model's closing fence.
output = output.split('```')[2]
print(output)
```

This snippet embeds the database schema and a natural language question in a Llama 2 instruction prompt, has the model complete the SQL query inside a code fence, and extracts that query from the decoded output.
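For repeated use, the prompt construction, generation, and extraction steps can be factored into a small helper. This wrapper (`generate_sql` is a hypothetical name, not part of the model's API) assumes `model` and `tokenizer` are already loaded as above:

```python
def generate_sql(table: str, question: str) -> str:
    """Generate a SQLite query answering `question` against the schema in `table`."""
    prompt = (
        f"[INST] Write SQLite query to answer the following question given the "
        f"database schema. Please wrap your code answer using ```: "
        f"Schema: {table} Question: {question} [/INST] "
        f"Here is the SQLite query to answer to the question: {question}: ``` "
    )
    tokens = tokenizer(prompt, return_tensors="pt").to(model.device)
    generated_ids = model.generate(
        input_ids=tokens.input_ids,
        max_length=4048,
        pad_token_id=tokenizer.eos_token_id,
    )
    output = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
    # The SQL sits between the prompt's second ``` marker and the closing fence.
    return output.split('```')[2].strip()

print(generate_sql(table, question))
```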