# Model Card for Meta-Llama-3-8B-for-bank (Adapter)
This model, Meta-Llama-3-8B-for-bank, is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct. This repository contains only the LoRA adapter, not the full model weights. This is an early, naive version.
## Model Details
### Model Description
- Model Name: Meta-Llama-3-8B-for-bank
- Base Model: meta-llama/Meta-Llama-3-8B-Instruct
- Fine-tuning Dataset: jeromecondere/bank-chat
- Fine-tuning Data: custom bank chat examples
- License: Free
### Model Type
- Architecture: LLaMA-3
- Type: Instruction-based language model
## Model Usage
This model is designed for conversational interaction between an assistant and a user, covering tasks such as:
- Balance Inquiry:
  - Example: "Can you provide the current balance for my account?"
- Stock List Retrieval:
  - Example: "Can you provide me with a list of my stocks?"
- Stock Purchase:
  - Example: "I'd like to buy stocks worth 1,000.00 in Tesla."
- Deposit Transactions:
  - Example: "I'd like to deposit 500.00 into my account."
- Withdrawal Transactions:
  - Example: "I'd like to withdraw 200.00 from my account."
- Transaction History:
  - Example: "I would like to view my transactions. Can you provide them?"
## Inputs and Outputs
- Inputs: Natural language queries related to financial services.
- Outputs: Textual responses, including `###`-prefixed action placeholders (e.g. `###Balance`), based on the input query.
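The placeholder convention is easiest to see as a single input/output pair. A minimal sketch, using placeholder names taken from the example conversation in the How to Use section; the substitution step is an assumption about the surrounding application:

```python
# input: a natural-language banking query
user_query = "Can you provide the current balance for my account?"

# output: a textual response carrying an action placeholder; the calling
# application is assumed to replace ###Balance with live account data
model_reply = "We have your account details. Your balance is: ###Balance"
```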
## Fine-tuning
This model has been fine-tuned on a dataset created specifically for a banking chatbot.
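The exact training recipe is not published. Below is a minimal sketch of how such a LoRA adapter could be produced with `peft` on the jeromecondere/bank-chat dataset; every hyperparameter (rank, alpha, target modules, dropout) is an assumption, not the values actually used:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

dataset = load_dataset("jeromecondere/bank-chat")

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,
)

# hypothetical LoRA settings -- the actual rank/alpha/targets were not published
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ...train with your preferred trainer (e.g. trl's SFTTrainer), then:
# model.save_pretrained("Meta-Llama-3-8B-for-bank")
```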
## Limitations
- Misinterpretation Risks: This is a first version, so overly complex queries may yield inconsistent results.
## How to Use
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model = 'meta-llama/Meta-Llama-3-8B-Instruct'
adapter_model = "jeromecondere/Meta-Llama-3-8B-for-bank"

# `token` is your Hugging Face access token (the Llama 3 weights are gated)
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True, token=token)
tokenizer.pad_token = tokenizer.eos_token

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # normalized float 4 quantization
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# load the quantized base model
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="cuda",
    token=token,
)

# attach the LoRA adapter, then merge it into the base weights
model = PeftModel.from_pretrained(model, adapter_model)
model = model.merge_and_unload()

name = 'Izimir Sensei'
company = 'Amazon Inc.'
stock_value = 42.24

# assistant turns use the 'system' role and the ###-prefixed action
# placeholders the adapter was fine-tuned to produce
messages = [
    {'role': 'system', 'content': f"Hi {name}, I'm your assistant, how can I help you?\n"},
    {'role': 'user', 'content': f"I'd like to buy stocks worth {stock_value:.2f} in {company}.\n"},
    {'role': 'system', 'content': f"Sure, we have purchased stocks worth ###StockValue({stock_value:.2f}) in ###Company({company}) for you.\n"},
    {'role': 'user', 'content': "Now I want to see my balance, hurry up!\n"},
    {'role': 'system', 'content': "Sure, here's your balance ###Balance\n"},
    {'role': 'user', 'content': "Again, my balance?\n"},
    {'role': 'system', 'content': "We have your account details. Your balance is: ###Balance"},
    {'role': 'user', 'content': "Okay now my list of stocks"},
    {'role': 'system', 'content': "Here is the list of your stocks: ###ListStocks"},
]

# prepare the messages for the model
input_ids = tokenizer.apply_chat_template(
    messages, truncation=True, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

# inference (do_sample=True is required for temperature/top_k/top_p to take effect)
outputs = model.generate(
    input_ids=input_ids,
    max_new_tokens=120,
    do_sample=True,
    temperature=0.1,
    top_k=50,
    top_p=0.95,
)
print(tokenizer.batch_decode(outputs)[0])
```
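The `###StockValue(...)`, `###Balance`, and `###ListStocks` markers in the assistant turns are the action placeholders the adapter was trained to emit; a deployment would presumably parse these out of the generated text and substitute live account data. Note that `merge_and_unload()` folds the adapter into the base weights, so the merged model can be used like any plain `transformers` model afterwards.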