Starling-LM-7B-alpha-GGUF

Available Quants

  • Q2_K
  • Q3_K_L
  • Q3_K_M
  • Q3_K_S
  • Q4_0
  • Q4_K_M
  • Q4_K_S
  • Q5_0
  • Q5_K_M
  • Q5_K_S
  • Q6_K
  • Q8_0
Model Details

  • Downloads last month: 259
  • Format: GGUF
  • Model size: 7.24B params
  • Architecture: llama
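As a rough guide to which quant fits your hardware, the file size of a quant can be ballparked from the parameter count above as bits-per-weight × parameters ÷ 8. This is only an approximation: K-quants mix bit widths across tensor groups and GGUF files carry scale and metadata overhead, so real files run somewhat larger than these figures.

```python
# Ballpark GGUF file size: bits_per_weight * n_params / 8 bytes.
# Note: actual K-quant files are larger, since they mix bit widths
# and include per-block scales plus GGUF metadata.
N_PARAMS = 7.24e9  # parameter count reported for this model


def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate quantized model size in gigabytes (decimal GB)."""
    return bits_per_weight * N_PARAMS / 8 / 1e9


for name, bits in [("Q2_K", 2), ("Q4_0", 4), ("Q5_K_M", 5), ("Q8_0", 8)]:
    print(f"{name}: ~{approx_size_gb(bits):.1f} GB")
```

For example, the Q8_0 quant works out to roughly 7.2 GB plus overhead, while Q2_K lands under 2 GB, at a corresponding cost in output quality.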


Inference

The serverless Inference API has been turned off for this model; the GGUF files are intended to be run locally with a llama.cpp-compatible runtime.

Model tree for QuantFactory/Starling-LM-7B-alpha-GGUF

  • Quantized (9 models, including this one)