
Training procedure

The following bitsandbytes quantization config was used during training:

  • quant_method: bitsandbytes
  • load_in_8bit: True
  • load_in_4bit: False
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
  • llm_int8_has_fp16_weight: False
  • bnb_4bit_quant_type: fp4
  • bnb_4bit_use_double_quant: False
  • bnb_4bit_compute_dtype: float32

Framework versions

  • PEFT 0.5.0

Llama-2-7B XSum Summarization LoRA Adapter

A LoRA adapter for Llama-2-7B, fine-tuned for abstractive summarization on the XSum dataset.

Table of Contents

  • Overview
  • Training Procedure
  • Quantization Configuration
  • Framework Versions
  • Usage

Overview

This repository contains a LoRA adapter that fine-tunes Llama-2-7B for abstractive summarization on the XSum dataset. The base model was loaded in 8-bit via bitsandbytes during training (see Quantization Configuration below), which keeps fine-tuning lightweight while adapting the model to XSum's short, single-sentence summaries.

Training Procedure

The adapter was fine-tuned as follows:

  • Dataset: XSum.
  • Amount of data: 3% of the dataset (a loading sketch follows this list).
  • Training epochs: 1.
  • Preprocessing/augmentation: none documented.
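A minimal sketch of loading that 3% slice with the Hugging Face datasets library is below; the exact split and slicing used for training are not recorded, so train[:3%] is an assumption.

# Hypothetical data-loading sketch; the actual split/slice is undocumented.
from datasets import load_dataset

xsum_subset = load_dataset("xsum", split="train[:3%]")  # assumed: first 3% of the training split
print(xsum_subset[0]["document"][:200])  # source article
print(xsum_subset[0]["summary"])         # single-sentence reference summary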

Quantization Configuration

Training used the bitsandbytes configuration listed at the top of this card; a code sketch follows the list.

  • Quantization method: bitsandbytes.
  • Weights loaded in 8-bit (load_in_8bit=True); 4-bit loading disabled.
  • Int8 outlier threshold of 6.0, with no modules skipped (llm_int8_skip_modules=None).
  • FP32 CPU offload and FP16 weights both disabled.
  • 4-bit settings (fp4 quant type, no double quantization, float32 compute dtype) are present but inactive in 8-bit mode.
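For reference, the same settings can be expressed with transformers' BitsAndBytesConfig. This is a reconstruction of the config listed above, not the author's original training code.

# Sketch reconstructing the quantization config above (not the original script).
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,                       # load weights in 8-bit; 4-bit off
    llm_int8_threshold=6.0,                  # outlier threshold for int8 matmuls
    llm_int8_skip_modules=None,              # quantize all eligible modules
    llm_int8_enable_fp32_cpu_offload=False,  # no FP32 CPU offload
    llm_int8_has_fp16_weight=False,          # no FP16 master weights
    bnb_4bit_quant_type="fp4",               # inert while load_in_8bit=True
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)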

Framework Versions

The adapter was trained with PEFT 0.5.0, as recorded in the Framework versions section above. Loading it also requires transformers, bitsandbytes, and accelerate; their exact training-time versions are not recorded.

Usage

To generate summaries, install the dependencies (peft, transformers, bitsandbytes, accelerate), load the base Llama-2-7B model in 8-bit, and attach this adapter. An example command and a Python sketch follow.

# Example usage command (generate_summary.py is a placeholder for your own driver script)
python generate_summary.py --model awaisakhtar/llama-2-7b-summarization-finetuned-on-xsum-lora-adapter --input input.txt --output output.txt
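A minimal Python sketch for loading the adapter and summarizing a document is shown below. The base model ID (meta-llama/Llama-2-7b-hf) and the prompt template are assumptions; the card does not record either.

# Minimal inference sketch. The base model ID and prompt template are
# assumptions, not recorded in this card.
# Requires: pip install transformers peft accelerate bitsandbytes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model
adapter_id = "awaisakhtar/llama-2-7b-summarization-finetuned-on-xsum-lora-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

article = open("input.txt").read()
prompt = f"Summarize the following article:\n\n{article}\n\nSummary:"  # assumed template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))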
