---
library_name: peft
license: apache-2.0
datasets:
  - dltdojo/ecommerce-faq-chatbot-dataset
language:
  - en
pipeline_tag: text-generation
tags:
  - text-generation-inference
---

# Falcon-7B Fine-Tuned Model

## Model description

This model is a fine-tuned version of `tiiuae/falcon-7b`, trained with QLoRA (4-bit quantization plus LoRA adapters) using the PEFT library. It was fine-tuned on the Ecommerce-FAQ-Chatbot-Dataset from Kaggle.
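
The exact LoRA hyperparameters are not recorded in this card. As a sketch, a QLoRA adapter config that is arithmetically consistent with the trainable-parameter count reported under "Evaluation results" (r = 8 on Falcon's fused `query_key_value` projection gives 8 × (4544 + 4672) × 32 layers = 2,359,296 parameters) might look like:

```python
from peft import LoraConfig

# Hypothetical values: the card does not record r, alpha, or dropout.
# r=8 targeting the fused query_key_value projection matches the
# ~2.36M trainable parameters reported below.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
```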

## Intended uses & limitations

### How to use

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "hipnologo/Falcon-7B-FineTune-Chatbot"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# This repo hosts a PEFT adapter; with `peft` installed, recent
# transformers versions resolve the base model and apply the adapter.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate a reply, then decode only the newly generated tokens.
input_prompt = "Hello, Bot!"
input_ids = tokenizer.encode(input_prompt, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=64)
output_text = tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
print(output_text)
```
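
On transformers versions without built-in PEFT integration, the adapter can also be loaded explicitly; a minimal sketch:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model first, then apply this repo's adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "hipnologo/Falcon-7B-FineTune-Chatbot")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```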

## Training procedure

The model was fine-tuned on the Ecommerce-FAQ-Chatbot-Dataset using the following bitsandbytes quantization config (shown in code form after the list):

- `load_in_8bit`: False
- `load_in_4bit`: True
- `llm_int8_threshold`: 6.0
- `llm_int8_skip_modules`: None
- `llm_int8_enable_fp32_cpu_offload`: False
- `llm_int8_has_fp16_weight`: False
- `bnb_4bit_quant_type`: nf4
- `bnb_4bit_use_double_quant`: True
- `bnb_4bit_compute_dtype`: bfloat16
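
In code form, the same settings correspond to the `BitsAndBytesConfig` below (the `llm_int8_*` fields are left at their defaults, which match the values listed above); it would be passed to `from_pretrained` via `quantization_config`:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with nested (double) quantization,
# computing in bfloat16, as listed in the config above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```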

## Framework versions

- PEFT 0.4.0.dev0

## Evaluation results

The model was trained for 80 steps; the training loss decreased from 0.184 to a final value of approximately 0.031.

- Trainable params: 2,359,296
- Total params: 3,611,104,128
- Trainable: 0.0653%
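
These are the figures PEFT reports for the adapter; the total is roughly half of Falcon-7B's ~7B parameters, presumably because the 4-bit base weights are counted in their packed storage form. Assuming `model` is the PEFT-wrapped model, the same summary can be printed directly:

```python
# Prints trainable vs. total parameter counts and the trainable percentage.
model.print_trainable_parameters()
```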

## License

This model is licensed under the Apache License 2.0. See the LICENSE file for more information.