
What is it?

A MoE model for roleplaying. Since 7B models are small enough, we can combine several of them into a bigger model (which CAN be smarter).

Adapted to (some limited) TSF (transsexual fiction) content, because I have included my pre-trained model in the mix.

Worse than V1 in logic, but better in expression.

GGUF Version?

Here

Recipe?

See the base model section.
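For illustration only, a Mixtral-style 3x7B merge of the kind described here is usually built with mergekit's `mergekit-moe` tool from a YAML config along these lines. The model paths and positive prompts below are placeholders, not the actual recipe (which is given by the base model section):

```yaml
# Illustrative mergekit-moe sketch only -- NOT the actual recipe.
# All model names and positive_prompts are placeholders.
base_model: path/to/base-7B-model
gate_mode: hidden        # route tokens using hidden-state representations
dtype: bfloat16
experts:
  - source_model: path/to/roleplay-expert-7B
    positive_prompts:
      - "roleplay"
  - source_model: path/to/storywriting-expert-7B
    positive_prompts:
      - "story"
  - source_model: path/to/pretrained-expert-7B
    positive_prompts:
      - "fiction"
```

The `positive_prompts` steer the router toward each expert for matching inputs; with three 7B experts sharing attention layers, the merged model lands around 18.5B total parameters rather than 21B.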

Why 3x7B?

I tested that a 16 GB VRAM card can fit a <20B model in GGUF form with 4-8k context length. I don't want to make a model that I can't use.
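As a rough sanity check of that fit, weight memory for a quantized GGUF is approximately parameter count times effective bits per parameter. A minimal sketch (the ~4.5 bits/param figure is an assumption for a Q4_K_M-class quant, and this ignores KV cache and runtime overhead, which add a few more GB):

```python
def gguf_weight_gb(n_params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# 18.5B-parameter 3x7B MoE at an assumed ~4.5 bits/param quant:
print(round(gguf_weight_gb(18.5, 4.5), 1))  # -> 10.4
```

About 10.4 GB of weights leaves headroom on a 16 GB card for the 4-8k context's KV cache.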


Model tree for Alsebay/NaruMOE-3x7B-v2