
Quantizations of https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B

From the original README:

Prompt template: Custom format, or Alpaca

Custom format:

I found the best SillyTavern results from using the Noromaid template.

SillyTavern config files: Context, Instruct. A Text Completion preset is also available (see the links in the original model card).

Otherwise, I tried to ensure that most of the underlying merged models were Alpaca-ish.

Alpaca:

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
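
For illustration (not part of the original README), here is a minimal Python sketch of how the Alpaca template above can be filled in; the instruction text is only an example.

```python
# Minimal sketch: build an Alpaca-style prompt string for this model.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

# Example instruction; replace with your own.
prompt = ALPACA_TEMPLATE.format(instruction="Write a haiku about macaroni.")
print(prompt)
```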
GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

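To run one of these quantizations locally, a minimal sketch using the llama-cpp-python package is shown below; the GGUF filename is hypothetical and depends on which quantization level you download.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file
# from this repository has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Loyal-Macaroni-Maid-7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,  # context window; adjust as needed
)

alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a haiku about macaroni.\n\n"
    "### Response:\n"
)

output = llm(alpaca_prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```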