🦅 🐍 FalconMamba 7B
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- FalconMamba demo Space (running on ZeroGPU)
- Falcon Mamba: The First Competitive Attention-free 7B Language Model (paper, arXiv:2410.05355): the FalconMamba technical report.
- `tiiuae/falcon-mamba-7b` (text generation): the first strong attention-free model for general-purpose usage, based on the Mamba1 architecture.
- `tiiuae/falcon-mamba-7b-instruct` (text generation): FalconMamba-7B fine-tuned on instruction data, for chat-like interaction with the model.
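The instruction-tuned model is meant for chat-like use via the `transformers` chat-template API. A minimal sketch (the model id comes from the entry above; `max_new_tokens` and the lazy import are illustrative choices, not the library's required usage):

```python
def build_messages(prompt: str) -> list:
    """Wrap a single user prompt in the message format expected by apply_chat_template."""
    return [{"role": "user", "content": prompt}]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load FalconMamba-7B-instruct and answer one prompt (needs a GPU with enough memory)."""
    # transformers is imported lazily so build_messages stays usable without it installed
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "tiiuae/falcon-mamba-7b-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```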
- `tiiuae/falcon-mamba-7b-4bit` (text generation): FalconMamba-7B quantized to 4-bit precision with the `bitsandbytes` library, for lighter memory requirements and smaller GPUs.
- `tiiuae/falcon-mamba-7b-instruct-4bit`: FalconMamba-7B-instruct quantized to 4-bit precision with the `bitsandbytes` library, for lighter memory requirements and smaller GPUs.
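The point of the 4-bit variants is memory. A back-of-the-envelope comparison of weight sizes for a roughly 7B-parameter model at different precisions (weights only, ignoring activations, caches, and quantization overhead):

```python
N_PARAMS = 7e9  # FalconMamba has roughly 7 billion parameters

def weights_gb(bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return N_PARAMS * bits_per_weight / 8 / 1e9

for name, bits in [("BF16/F16", 16), ("8-bit", 8), ("4-bit (bitsandbytes)", 4)]:
    print(f"{name:>22}: ~{weights_gb(bits):.1f} GB")
```

At 16-bit precision the weights alone are about 14 GB, while the 4-bit variants bring that down to roughly 3.5 GB, which is what makes them fit on smaller GPUs.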
- `tiiuae/falcon-mamba-7b-instruct-BF16-GGUF`: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), BF16 precision.
- `tiiuae/falcon-mamba-7b-instruct-F16-GGUF`: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), F16 precision.
- `tiiuae/falcon-mamba-7b-instruct-Q8_0-GGUF`: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q8_0.
- `tiiuae/falcon-mamba-7b-instruct-Q4_K_M-GGUF`: FalconMamba-7B-instruct in GGUF format (compatible with llama.cpp), quantized to Q4_K_M.
- `tiiuae/falcon-mamba-7b-BF16-GGUF`: FalconMamba-7B in GGUF format (compatible with llama.cpp), BF16 precision.
- `tiiuae/falcon-mamba-7b-F16-GGUF`: FalconMamba-7B in GGUF format (compatible with llama.cpp), F16 precision.
- `tiiuae/falcon-mamba-7b-Q8_0-GGUF`: FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q8_0.
- `tiiuae/falcon-mamba-7b-Q4_K_M-GGUF`: FalconMamba-7B in GGUF format (compatible with llama.cpp), quantized to Q4_K_M.
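The GGUF files can be run locally with llama.cpp or its Python bindings. A minimal sketch, assuming `llama-cpp-python` is installed; the `gguf_repo` helper simply reconstructs the repo ids listed above, and the exact `.gguf` filename inside each repo is not shown here:

```python
def gguf_repo(quant: str, instruct: bool = True) -> str:
    """Hugging Face repo id for a given GGUF variant in this collection."""
    base = "tiiuae/falcon-mamba-7b-instruct" if instruct else "tiiuae/falcon-mamba-7b"
    return f"{base}-{quant}-GGUF"

def run_gguf(model_path: str, prompt: str, max_tokens: int = 64) -> str:
    """Run a locally downloaded FalconMamba GGUF file (sketch, not tuned settings)."""
    # lazy import: only needed when actually running inference
    from llama_cpp import Llama

    llm = Llama(model_path=model_path)
    out = llm(prompt, max_tokens=max_tokens)
    return out["choices"][0]["text"]
```

For example, `gguf_repo("Q4_K_M")` gives the instruct Q4_K_M repo id, from which the `.gguf` file can be downloaded and passed to `run_gguf` as `model_path`.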
- `tiiuae/falcon-mamba-7b-pre-decay`: pre-decay stage checkpoint, useful for continued pretraining.