---
base_model: wolfram/miquliz-120b
inference: false
model_creator: Wolfram Ravenwolf
model_name: miquliz-120b
---
# miquliz-120b - Q4 GGUF

- Model creator: [Wolfram Ravenwolf](https://huggingface.co/wolfram)
- Original model: [miquliz-120b](https://huggingface.co/wolfram/miquliz-120b)

## Description

This repo contains Q4_K_S and Q4_K_M GGUF format model files for [Wolfram Ravenwolf's miquliz-120b](https://huggingface.co/wolfram/miquliz-120b).

## Prompt template: Mistral

```
[INST] {prompt} [/INST]
```
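
A minimal sketch of applying this template before running the model locally, assuming llama-cpp-python is installed and the GGUF file has already been downloaded and joined (the model path below is an example, not a file shipped under that exact name):

```python
# Wrap a user message in the Mistral instruction format and run it with
# llama-cpp-python (pip install llama-cpp-python). Adjust model_path to
# wherever you saved the joined GGUF file.
from llama_cpp import Llama

def mistral_prompt(user_message: str) -> str:
    # Mistral instruction template used by this model
    return f"[INST] {user_message} [/INST]"

llm = Llama(model_path="./miquliz-120b.Q4_K_M.gguf", n_ctx=4096)
out = llm(mistral_prompt("Explain the GGUF file format in two sentences."),
          max_tokens=128)
print(out["choices"][0]["text"])
```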

## Provided files

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | -------- |
| miquliz-120b.Q4_K_S.gguf | Q4_K_S | 4 | 66.81 GB |
| miquliz-120b.Q4_K_M.gguf | Q4_K_M | 4 | 70.64 GB |

Note: Hugging Face does not support uploading single files larger than 50 GB, so each quantization is uploaded as split files that must be joined back into one GGUF file before use, as sketched below.
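
A minimal sketch of downloading the parts with huggingface_hub and concatenating them; the repo id and part file names are placeholders, so substitute the actual names from this repo's file listing:

```python
# Download the split parts and stream-concatenate them into one GGUF file.
# repo_id and the part names below are hypothetical examples.
import shutil
from huggingface_hub import hf_hub_download

repo_id = "your-username/miquliz-120b-GGUF"           # hypothetical repo id
parts = [
    "miquliz-120b.Q4_K_M.gguf-split-a",               # hypothetical part names
    "miquliz-120b.Q4_K_M.gguf-split-b",
]

with open("miquliz-120b.Q4_K_M.gguf", "wb") as joined:
    for part in parts:
        path = hf_hub_download(repo_id=repo_id, filename=part)
        with open(path, "rb") as chunk:
            # Stream copy to avoid loading 30+ GB into memory at once
            shutil.copyfileobj(chunk, joined)
```

The same result can be achieved on the command line with a plain binary concatenation of the parts, in order, into a single output file.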