All datasets from all models and LoRAs used were documented and reviewed as model candidates for merging. Model candidates were selected on five core principles: creativity, logic, inference, instruction following, and longevity of trained responses. SuperHOT-prototype30b-8192 was used in this mix, not the 8K version; the prototype LoRA seems to have been removed [from HF] as of this writing. The GPT4Alpaca LoRA from ChanSung was removed from this amalgam after a thorough review traced the censorship and railroading of the user in 33B-Lazarus back to it. This is not a reflection of ChanSung's excellent work - it merely did not fit the purpose of this model.

## Language Models and LoRAs Used Credits: