jeiku committed on
Commit 22e381e
1 Parent(s): c93aa1a

Update README.md

Files changed (1)
  1. README.md +10 -27
README.md CHANGED
@@ -8,37 +8,20 @@ base_model:
  - jeiku/Gnosis_Reformatted_Mistral
  - ResplendentAI/Paradigm_7B
  library_name: transformers
- tags:
- - mergekit
- - merge
-
  ---
- # AuraV2
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method

- This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) as a base.

- ### Models Merged

- The following models were included in the merge:
- * [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
- * [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) + [jeiku/selfbot_256_mistral](https://huggingface.co/jeiku/selfbot_256_mistral)
- * [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) + [jeiku/Gnosis_Reformatted_Mistral](https://huggingface.co/jeiku/Gnosis_Reformatted_Mistral)

- ### Configuration

- The following YAML configuration was used to produce this model:

- ```yaml
- models:
- - model: ResplendentAI/Paradigm_7B+jeiku/selfbot_256_mistral
- - model: ResplendentAI/Paradigm_7B+jeiku/Theory_of_Mind_Mistral
- - model: ResplendentAI/Paradigm_7B+jeiku/Gnosis_Reformatted_Mistral
- merge_method: model_stock
- base_model: ResplendentAI/Paradigm_7B
- dtype: bfloat16
- ```
 
  - jeiku/Gnosis_Reformatted_Mistral
  - ResplendentAI/Paradigm_7B
  library_name: transformers
+ license: apache-2.0
+ language:
+ - en
  ---
+ # Aura v2

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/tIy1fnUYHc7v_N6ym6Z7g.png)

+ The second version of the Aura line is a direct improvement over the original. Expect poetic and eloquent outputs with real emotion behind them.
+ I recommend keeping the temperature around 1.5 or lower with a Min P value of 0.05. This model can get carried away with prose at higher temperatures. I will say, though, that its prose is distinct from the GPT-3.5/4 style and lends an air of humanity to the outputs. I am aware that this model is overfit, but that was the point of the entire exercise.
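For anyone wiring those settings up in code, here is a minimal sketch using Hugging Face `transformers`. The repository id is a placeholder for wherever this model is hosted, and `min_p` sampling assumes a reasonably recent `transformers` release.

```python
# Hedged sketch: sampling with the recommended settings (temperature <= 1.5,
# min_p = 0.05). The repo id below is a placeholder, not a confirmed Hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/Aura-v2"  # placeholder; substitute the real repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a short letter from a lighthouse keeper to the sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# min_p requires a fairly recent transformers version; temperatures much above
# 1.5 tend to make this model ramble.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.5,
    min_p=0.05,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```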
+ If you have trouble getting the model to follow an asterisks/quote format, I recommend asterisks/plaintext instead. This model skews toward shorter outputs, so be prepared to lengthen your introduction and examples if you want longer outputs.
+ This model responds best to ChatML for multiturn conversations.
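For reference, ChatML wraps every turn in `<|im_start|>` and `<|im_end|>` markers. The sketch below hand-builds such a multiturn prompt; the persona and messages are purely illustrative.

```python
# Hedged sketch of a ChatML-formatted multiturn prompt. Only the
# <|im_start|>/<|im_end|> framing is the point; the content is made up.
def to_chatml(messages: list[dict]) -> str:
    prompt = ""
    for message in messages:
        prompt += f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>\n"
    # Leave an open assistant turn for the model to complete.
    return prompt + "<|im_start|>assistant\n"

messages = [
    {"role": "system", "content": "You are Aura, a poetic and emotionally expressive companion."},
    {"role": "user", "content": "Describe the harbor at dawn."},
]
print(to_chatml(messages))
```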
+ This model, like all other Mistral-based models, is compatible with a Mistral-compatible mmproj file for multimodal vision capabilities in KoboldCPP.