DavidAU committed
Commit d80d315
1 Parent(s): 20358b5

Update README.md

Files changed (1)
  1. README.md +13 -0
README.md CHANGED
@@ -81,6 +81,19 @@ Recommend using the larger quant you can "run" for quality.

  This repo also has the new "arm quants" : Q4_0_4_4, Q4_0_4_8 and Q4_0_8_8

+ <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
+
+ This is a "Class 1" model:
+
+ For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues), including methods to improve model performance for all use cases as well as chat, roleplay and other use cases, please see:
+
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+ You can also find all parameters used for generation, plus advanced parameters and samplers to get the most out of this model, here:
+
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+
  <B>Model Template:</B>

  This is a LLAMA2 model, and requires Alpaca or Llama2 template, but may work with other template(s) and has maximum context of 4k / 4096.
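
For illustration only, here is a minimal sketch of running a GGUF quant of this model with llama-cpp-python, using the Alpaca template and the 4096-token context limit mentioned in the README. The file name, prompt, and sampler values are assumptions for the example, not the settings from the linked guide.

```python
# Minimal sketch (not from the linked settings guide): load a GGUF quant of this
# LLAMA2 model with llama-cpp-python, apply an Alpaca-style prompt, and cap the
# context at 4096 tokens as the README states.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q4_0_4_4.gguf",  # hypothetical file name; use the quant you downloaded
    n_ctx=4096,                        # model's maximum context per the README
)

# Alpaca-style prompt template (one of the two templates the README says this model requires).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize the plot of a short ghost story.\n\n"
    "### Response:\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=0.8,      # illustrative sampler values, not the guide's recommendations
    top_p=0.95,
    repeat_penalty=1.1,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```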