These are GGUF quantized versions of sophosympatheia/Midnight-Rose-70B-v2.0.3.
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using wiki.train.raw.
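For reference, a minimal sketch of how such an importance matrix can be produced with llama.cpp's imatrix tool is shown below. The file names and exact flags are illustrative assumptions, not the precise command used for these files.

```sh
# Hypothetical invocation of llama.cpp's imatrix tool (names and flags are
# illustrative; adjust to your local paths and build):
./imatrix -m midnight-rose-70b-v2.0.3-f16.gguf \
          -f wiki.train.raw \
          -o midnight-rose-70b.imatrix \
          -c 512 --chunks 200   # 200 batches x 512 tokens ~= 100K tokens
```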
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp version 147b17a or later. The IQ3_XXS version requires version f4d7e54 or later.
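If your local build predates those commits, one option is to update and rebuild llama.cpp. The commands below are a generic sketch, not project-specific instructions; follow the project's own build documentation for your platform.

```sh
# Generic sketch: confirm your llama.cpp checkout includes the required
# commits, then rebuild.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git merge-base --is-ancestor 147b17a HEAD && echo "IQ2_XXS/IQ2_XS supported"
git merge-base --is-ancestor f4d7e54 HEAD && echo "IQ3_XXS supported"
make   # or follow the project's current build instructions
```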
Some model files above 50 GB are split into smaller parts. To reassemble them, concatenate the parts with the cat command (on Windows, use PowerShell): cat foo-Q6_K.gguf.* > foo-Q6_K.gguf
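For example, to reassemble a split Q6_K file (the file names here are illustrative; substitute the actual part names you downloaded):

```sh
# Concatenate the split parts into a single GGUF file
cat foo-Q6_K.gguf.* > foo-Q6_K.gguf

# Optionally remove the parts once the merged file loads correctly
rm foo-Q6_K.gguf.*
```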