
What is it?

A MoE model for roleplaying. Since 7B models are small enough, we can combine them into a bigger model (which CAN be smarter).
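The "combine small experts into a bigger model" idea comes down to sparse routing: a gate scores all experts per token and only the top few actually run. A toy pure-Python sketch of top-2 gating (not the actual merge code; the scalar "experts" here are stand-ins for 7B feed-forward blocks):

```python
# Toy sketch of sparse MoE routing with top-2 gating: only the two
# highest-scoring experts run, and their outputs are mixed by
# normalized gate weights. Illustration only, not this model's code.

def top2_moe(x: float, gate_scores: list[float], experts) -> float:
    """Route input x to the 2 highest-scoring experts and
    combine their outputs weighted by normalized gate scores."""
    # indices of the two largest gate scores
    top2 = sorted(range(len(gate_scores)), key=lambda i: gate_scores[i], reverse=True)[:2]
    total = sum(gate_scores[i] for i in top2)
    return sum(gate_scores[i] / total * experts[i](x) for i in top2)

# Three stand-in "experts"; gate picks experts 1 and 2 for this input.
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
print(top2_moe(3.0, [0.1, 0.6, 0.3], experts))  # → 7.0
```

This is why a 3x7B MoE can hold ~18.5B parameters but compute roughly like a much smaller dense model: per token, only the selected experts' weights are used.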

Handles some limited TSF (transsexual fiction) content, because my pre-trained model is included in the merge.

Better than V2, by the way.

GGUF Version?

Here

Recipe?

See the base model section.

Why 3x7B?

I tested that a 16GB VRAM card can fit the GGUF version of a model under 20B parameters with 4-8K context length. I don't want to make a model that I can't use myself.
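A back-of-the-envelope check of that claim. This sketch assumes Mistral-7B-like geometry for the shared attention (32 layers, 4096 hidden size), an ~4.5 bits/weight Q4-class GGUF quantization, and an FP16 KV cache; all of those numbers are assumptions for illustration, not measured values:

```python
# Rough VRAM estimate: quantized weights + KV cache.
# Assumed numbers (hypothetical): 18.5B params, ~4.5 bits/weight for a
# Q4-class GGUF quant, 32 layers, 4096 hidden size, FP16 (2-byte) KV cache.
# In a 3x7B MoE the attention is shared, so the KV cache is sized like a
# single 7B model, not tripled.

def model_weights_gb(n_params: float, bits_per_weight: float) -> float:
    """Size of quantized weights in GiB."""
    return n_params * bits_per_weight / 8 / 1024**3

def kv_cache_gb(n_layers: int, hidden: int, ctx: int, bytes_per_val: int = 2) -> float:
    """KV cache: 2 tensors (K and V) per layer, each ctx x hidden values."""
    return 2 * n_layers * ctx * hidden * bytes_per_val / 1024**3

weights = model_weights_gb(18.5e9, 4.5)   # ≈ 9.7 GiB
kv = kv_cache_gb(32, 4096, 8192)          # ≈ 4.0 GiB at 8K context
print(f"weights ≈ {weights:.1f} GiB, KV ≈ {kv:.1f} GiB, total ≈ {weights + kv:.1f} GiB")
```

Under these assumptions the total lands around 13-14 GiB, which is consistent with an 18.5B-parameter Q4 GGUF fitting on a 16GB card at 8K context with a little headroom.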

Model size: 18.5B params (Safetensors, BF16)
