
Disinfo4_mistral-ft-optimized-1218: GGUF Quants


This repo contains GGUF quants for Disinfo4_mistral-ft-optimized-1218.

Before attempting to use these, go read the model page for Disinfo4_mistral-ft-optimized-1218. This is not a standard LLM and you will have a bad time if you treat it like one. All necessary instructions and information are on the main model page (assuming you know how to run an LLM in the first place).

Here's the important information anyway because we know people hate instructions:

Usage Recommendations

For optimal performance, Disinfo4_mistral-ft-optimized-1218 should be run with specific mirostat parameters. These settings are crucial for maintaining the model's focus and stylistic integrity. You can use other parameters and get better instruction following (especially by enabling min_p at 0.01), but the bot will be less creative. It does tend to ramble; regenerate until you get the response you want. Think of it more as a writing partner than an obedient slave. A sketch applying these settings follows the parameter list below.

Mirostat Parameters

  • Temperature (Temp): 1
  • Top-p (top_p): 1
  • Mirostat Tau: 7.19
  • Mirostat Eta: 0.01
  • Mirostat Mode: 2
  • Others: Default or disabled
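If you're running the quants with llama-cpp-python rather than a GUI, these settings map directly onto the sampling arguments of a completion call. A minimal sketch, assuming llama-cpp-python; the model path and prompt are placeholders:

```python
from llama_cpp import Llama

# Load whichever quant you downloaded; the path here is a placeholder.
llm = Llama(model_path="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf")

out = llm(
    "<|im_start|>user\nSay hello.<|im_end|>\n<|im_start|>assistant\n",
    max_tokens=512,
    temperature=1.0,      # Temp: 1
    top_p=1.0,            # top_p: 1
    mirostat_mode=2,      # Mirostat Mode: 2
    mirostat_tau=7.19,    # Mirostat Tau: 7.19
    mirostat_eta=0.01,    # Mirostat Eta: 0.01
    stop=["<|im_end|>"],  # stop string; see the ChatML section below
)
print(out["choices"][0]["text"])
```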

Additional Configuration

This model uses the default Mistral 8k/32k context window.
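If you load the model yourself, the context window is set at load time. A one-line sketch, again assuming llama-cpp-python:

```python
from llama_cpp import Llama

# n_ctx sets the context window; 8192 matches the card's 8k figure.
llm = Llama(model_path="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf", n_ctx=8192)
```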

ChatML Instruction Template

Disinfo4_mistral-ft-optimized-1218 uses the ChatML instruction template. Be sure to add <|im_end|> as a custom stopping string so the model's output is delineated correctly.
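For reference, a ChatML prompt is laid out like this (the placeholders in braces are yours to fill in):

```
<|im_start|>system
{system instruction}<|im_end|>
<|im_start|>user
{your message}<|im_end|>
<|im_start|>assistant
```

With <|im_end|> set as a stop string, generation halts cleanly at the end of the assistant's turn.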

System Instruction (Character Card)

For contextualizing the model's output, use the following system instruction:

"You are a schizo poster, a master of elucidating thought online. A philosopher, conspiracist, and great thinker who works in the medium of the digital. Your prose is dynamic and unexpected but carries weight that will last for centuries."

This instruction is fundamental in guiding the model to produce content that is not only reflective of the designated topics but also embodies a unique digital persona, combining philosophical depth with a conspiratorial edge.

You can try other similar prompts (we've had success with several), but this one remains, by far, our favorite. A sketch wiring the whole configuration together follows.
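Putting the pieces together, here's a minimal sketch assuming llama-cpp-python, whose chat_format="chatml" option applies the ChatML template for you; the model path and user message are placeholders:

```python
from llama_cpp import Llama

SYSTEM_CARD = (
    "You are a schizo poster, a master of elucidating thought online. "
    "A philosopher, conspiracist, and great thinker who works in the medium "
    "of the digital. Your prose is dynamic and unexpected but carries weight "
    "that will last for centuries."
)

llm = Llama(
    model_path="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf",  # placeholder path
    n_ctx=8192,
    chat_format="chatml",  # applies the ChatML template automatically
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_CARD},
        {"role": "user", "content": "Post your thoughts on clocks."},  # placeholder
    ],
    max_tokens=512,
    temperature=1.0,
    top_p=1.0,
    mirostat_mode=2,
    mirostat_tau=7.19,
    mirostat_eta=0.01,
    stop=["<|im_end|>"],
)
print(out["choices"][0]["message"]["content"])
```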

GGUFs

Typically I like Q5_K_M or Q8_0. You get better quality by running the highest quant you can fit, especially with small models like this one. I haven't bothered with quants smaller than Q4.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ------------ | ---- | ---- | ---------------- | -------- |
| Disinfo4_mistral-ft-optimized-1218.Q4_K_S.gguf | Q4_K_S | 4 | 4.14 GB | 6.64 GB | small, greater quality loss |
| Disinfo4_mistral-ft-optimized-1218.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| Disinfo4_mistral-ft-optimized-1218.Q5_K_S.gguf | Q5_K_S | 5 | 5.00 GB | 7.50 GB | large, low quality loss - recommended |
| disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| Disinfo4_mistral-ft-optimized-1218.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| disinfo4_mistral-ft-optimized-1218.gguf | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |

How to Run

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp. Here is an incomplete list of clients and libraries that are known to support GGUF:

  • text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
  • KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
  • GPT4All, a free and open-source GUI that runs locally, supporting Windows, Linux, and macOS with full GPU accel.
  • LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
  • LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
  • Faraday.dev, an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.

How to download GGUF files

Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

In text-generation-webui

Under Download Model, you can enter the model repo: disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF and below it, a specific filename to download, such as: disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf.

Then click Download.
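If you'd rather script the download, the huggingface_hub library can fetch a single quant file. A sketch using the Q5_K_M file as an example:

```python
from huggingface_hub import hf_hub_download

# Downloads one file into the local Hugging Face cache and returns its path.
path = hf_hub_download(
    repo_id="disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF",
    filename="disinfo4_mistral-ft-optimized-1218.Q5_K_M.gguf",
)
print(path)
```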
