PEFT
Safetensors
AvaniSharma committed
Commit
149139e
1 Parent(s): 2dc3c78

Update Readme

Files changed (1)
  1. README.md +2 -15
README.md CHANGED
@@ -21,11 +21,8 @@ and providing in quantization config when loading pretrained model
 - Using LORA we add small rank weight matrices whose parameters are modified while LLM's parameters are frozen.
 After finetuning is over we combine weights of these low rank matrices with LLMs weights to obtain new fine tuned weights.
 This makes fine tuning process faster and memory efficient
--
-
-
-
-
+- We train SFT (Supervised Fine-Tuning) trainer using LORA parameters and training hyperparameters listed under *Training Hyperparameters*
+section to finetune the base model
 
 - **Developed by:** Avani Sharma
 - **Model type:** LLM
@@ -69,13 +66,6 @@ And following Hyperparameters for training
 report_to="wandb"
 ```
 
-
-## Evaluation
-
-
-
-
-
 ### Compute Infrastructure
 
 Kaggle
@@ -88,9 +78,6 @@ Kaggle GPU T4x2
 
 Kaggle Notebook
 
-
-
-
 ### Framework versions
 
 - PEFT 0.7.1