---
tags: ["manticore"]
---

openaccess-ai-collective Manticore-13b-Chat-Pyg with the Guanaco 13b QLoRA from TimDettmers applied, uncensored.

---

Quantized by mindrage to 4-bit GPTQ, groupsize 128, no-act-order.

Command used to quantize:

`python3 llama.py <path-to-Manticore-13B-Chat-Pyg-Guanaco-model> c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order.safetensors`
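
To load the resulting 4-bit checkpoint for inference, something along these lines should work with AutoGPTQ (a minimal sketch, not the exact setup used here; the local directory name and `device` are assumptions, and `model_basename` must match the safetensors filename without its extension):

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint with AutoGPTQ.
# Directory name and device are assumptions; adjust to your setup.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_dir = "Manticore-13B-Chat-Pyg-Guanaco-GPTQ"  # assumed local download directory

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    model_basename="Manticore-13B-Chat-Pyg-Guanaco-GPTQ-4bit-128g.no-act-order",
    use_safetensors=True,
    device="cuda:0",
)

prompt = "USER: What does GPTQ 4-bit quantization trade off?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```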

The model seems to have benefited from the further augmentation of openaccess-ai-collective's Manticore-13b-Chat-Pyg with the Guanaco QLoRA by TimDettmers, pending further testing.

It is very diverse and can generate output that seems eerily smart at times.
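
For reference, applying and merging a (Q)LoRA adapter into a base model is commonly done with peft, roughly as sketched below; the Hugging Face repo IDs and the merge procedure are assumptions, not the exact steps used for this model.

```python
# Sketch of merging a (Q)LoRA adapter into a base model with peft.
# Repo IDs are assumptions and may differ from what was actually used.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openaccess-ai-collective/manticore-13b-chat-pyg"  # assumed base model repo
adapter_id = "timdettmers/guanaco-13b"                        # assumed Guanaco 13b adapter repo

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained(base_id)
merged.save_pretrained("Manticore-13B-Chat-Pyg-Guanaco")
tokenizer.save_pretrained("Manticore-13B-Chat-Pyg-Guanaco")
```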

Prompting style:

user: "USER:"
bot: "ASSISTANT:"
context: "This is a conversation between an advanced AI and a human user."
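
Put together, a full prompt in this style can be assembled along these lines (a minimal sketch; the newline turn separator and exact spacing are assumptions):

```python
# Minimal sketch: build a prompt from the context/USER/ASSISTANT format above.
# The turn separator (newline) and spacing are assumptions; adjust if needed.
CONTEXT = "This is a conversation between an advanced AI and a human user."

def build_prompt(history, user_message):
    """history: list of (user_text, assistant_text) pairs already exchanged."""
    lines = [CONTEXT]
    for user_text, assistant_text in history:
        lines.append(f"USER: {user_text}")
        lines.append(f"ASSISTANT: {assistant_text}")
    lines.append(f"USER: {user_message}")
    lines.append("ASSISTANT:")
    return "\n".join(lines)

print(build_prompt([], "Explain GPTQ group size in one sentence."))
```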