Tags: Text Generation, Transformers, Safetensors, English, mistral, rag, context obedient, TroyDoesAI, Mermaid, Flow, Diagram, Sequence, Map, Context, Accurate, Summarization, Story, Code, Coder, Architecture, Retrieval, Augmented, Generation, AI, LLM, Mistral, LLama, Large Language Model, Retrieval Augmented Generation, Troy Andrew Schultz, LookingForWork, OpenForHire, IdoCoolStuff, Knowledge Graph, Knowledge, Graph, Accelerator, Enthusiast, Chatbot, Personal Assistant, Copilot, lol, tags, Pruned, efficient, smaller, small, local, open, source, open source, quant, quantize, ablated, Ablation, uncensored, unaligned, bad, alignment, text-generation-inference, Inference Endpoints
For those trying to shoehorn this large model onto your machine, every GB of saved memory counts when offloading to system RAM!
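As a rough sketch of that offloading scenario (the repo id and memory caps below are placeholders, not something this card specifies), loading with `transformers` and `accelerate` might look like:

```python
# Minimal sketch: load the model with part of the weights offloaded to
# system RAM. Repo id and memory limits are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TroyDoesAI/your-pruned-model"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" lets accelerate split the weights between the GPU and
# system RAM; max_memory caps how much is placed on each device.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "20GiB", "cpu": "48GiB"},
)

prompt = "Summarize the following context:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```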
Here the 22.2 billion parameter model has been pruned by removing 2 junk layers, yielding a 21.5B model that doesn't appear to lose any noticeable quality.
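A minimal sketch of that kind of layer pruning with `transformers` follows. The base repo id and the layer indices are hypothetical, since the card does not state which two layers were removed:

```python
# Illustrative sketch of dropping two decoder layers from a Mistral-style
# model. The indices in `drop` are made up; the actual "junk" layers
# removed for this release were chosen by the author.
import torch
from torch import nn
from transformers import AutoModelForCausalLM

base_id = "path/to/base-22B-model"  # placeholder base checkpoint
drop = {20, 21}                     # hypothetical layer indices to remove

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Rebuild the decoder stack without the dropped layers and keep the config
# in sync so the pruned checkpoint reloads cleanly.
model.model.layers = nn.ModuleList(
    [layer for i, layer in enumerate(model.model.layers) if i not in drop]
)
model.config.num_hidden_layers = len(model.model.layers)

model.save_pretrained("pruned-21.5B")
```

Reloading the saved checkpoint rebuilds the architecture from the updated config, so the remaining layers get contiguous indices automatically.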