
DaringMaid-20B-V1.1

What's New?

This is an updated version of DaringMaid-20B. It is largely the same, but uses Noromaid-13b v0.3 instead of v0.1.1, with a slightly higher weight for Noromaid.

I used v0.3 since it was the last version to use the Alpaca format, so as not to break anything.

Quants

EXL2: 6bpw, 5bpw, 4bpw, 3.5bpw, 3bpw

GGUF: Q3_K_M - Q4_K_M - Q5_K_M - Q6_K
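As a rough guide to which quant fits your hardware, weight size can be approximated as params × bits-per-weight / 8 bytes. The sketch below is an illustration only (it ignores metadata and per-tensor overhead, and the helper name is my own, not part of any tool):

```python
# Rough size estimate for quantized weights: bytes ~= params * bpw / 8.
# Ignores file metadata and mixed-precision overhead, so real files
# will be somewhat larger.
def approx_size_gb(params: float, bpw: float) -> float:
    return params * bpw / 8 / 1e9

for bpw in (3.0, 3.5, 4.0, 5.0, 6.0):
    print(f"{bpw}bpw ~= {approx_size_gb(20e9, bpw):.1f} GB")
```

For example, the 4bpw EXL2 quant of a 20B model works out to roughly 10 GB of weights before overhead.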

Recipe:

Prompt template:

I have been using Undi/Ikari's SillyTavern presets for Noromaid: Context template, Instruct template.

Alpaca:

Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Input:
{input}
### Response:
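If you are building prompts outside SillyTavern, the template above can be assembled with a small helper like this (a sketch with a hypothetical function name, assuming the Input block is omitted when there is no input):

```python
# Minimal sketch of assembling the Alpaca prompt shown above.
# build_alpaca_prompt is a hypothetical helper, not part of the model
# or any preset; it just reproduces the template text.
def build_alpaca_prompt(instruction: str, model_input: str = "") -> str:
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n"
        f"### Instruction:\n{instruction}\n"
    )
    if model_input:
        # Only include the Input section when there is actual input text.
        prompt += f"### Input:\n{model_input}\n"
    prompt += "### Response:\n"
    return prompt

print(build_alpaca_prompt("Write a short greeting."))
```

The resulting string is what you would send to the model as the full prompt, with generation continuing after `### Response:`.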

Contact

Kooten on discord.
