fearlessdots committed on
Commit 80592df
1 Parent(s): 3edc5a8

Update README.md

Files changed (1)
  1. README.md +8 -0
README.md CHANGED
@@ -30,6 +30,13 @@ This model and its related LoRA was fine-tuned on [https://huggingface.co/failsp
 
 ## Fine Tuning
 
+### - Quantization Configuration
+
+- load_in_4bit=True
+- bnb_4bit_quant_type="fp4"
+- bnb_4bit_compute_dtype=compute_dtype
+- bnb_4bit_use_double_quant=False
+
 ### - PEFT Parameters
 
 - lora_alpha=64
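For readers wiring these settings up themselves, the four flags added in this hunk match the `BitsAndBytesConfig` options in HuggingFace `transformers`. Below is a minimal sketch of how such a config is typically passed to `from_pretrained`; the model id and the value of `compute_dtype` are assumptions, since the diff references `compute_dtype` without defining it and does not name the checkpoint being loaded.

```python
# Minimal sketch of the quantization settings above, using
# transformers' BitsAndBytesConfig. The model id and compute dtype
# are illustrative assumptions, not values taken from this commit.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

compute_dtype = torch.float16  # assumption: the diff references but never defines it

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load weights quantized to 4-bit
    bnb_4bit_quant_type="fp4",             # fp4 rather than the nf4 variant
    bnb_4bit_compute_dtype=compute_dtype,  # dtype used for matmuls at runtime
    bnb_4bit_use_double_quant=False,       # no nested quantization of constants
)

model = AutoModelForCausalLM.from_pretrained(
    "failspy/llama-3-base-placeholder",  # hypothetical model id, not from the commit
    quantization_config=bnb_config,
    device_map="auto",
)
```

Note that `bnb_4bit_quant_type="fp4"` differs from the `nf4` type more commonly seen in QLoRA recipes.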
 
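The `lora_alpha=64` context line above belongs to the LoRA setup that follows it in the README. As a sketch only, assuming the `peft` library's `LoraConfig`: every parameter below other than `lora_alpha` is a placeholder, since the diff truncates the list after its first entry.

```python
# Sketch of the PEFT side, assuming the peft library's LoraConfig.
# Only lora_alpha=64 appears in the diff; r, target_modules, and
# lora_dropout are illustrative placeholders.
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    lora_alpha=64,                        # the one value shown in the diff
    r=16,                                 # placeholder rank
    target_modules=["q_proj", "v_proj"],  # placeholder target projections
    lora_dropout=0.05,                    # placeholder
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)  # `model` from the sketch above
peft_model.print_trainable_parameters()
```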
@@ -58,6 +65,7 @@ This model and its related LoRA was fine-tuned on [https://huggingface.co/failsp
 ## Credits
 
 - Meta ([https://huggingface.co/meta-llama](https://huggingface.co/meta-llama)): for the original Llama-3;
+- HuggingFace: for hosting this model and for creating the fine-tuning tools;
 - failspy ([https://huggingface.co/failspy](https://huggingface.co/failspy)): for the base model and the orthogonalization implementation;
 - NobodyExistsOnTheInternet ([https://huggingface.co/NobodyExistsOnTheInternet](https://huggingface.co/NobodyExistsOnTheInternet)): for the incredible dataset;
 - Undi95 ([https://huggingface.co/Undi95](https://huggingface.co/Undi95)) and Sao10k ([https://huggingface.co/Sao10K](https://huggingface.co/Sao10K)): my main inspirations for doing these models =]