
QuantFactory/L3-8B-Stheno-v3.2-GGUF

This is a quantized version of Sao10K/L3-8B-Stheno-v3.2, created using llama.cpp.

Model Description

Just message me on discord if you want to host this privately for a service or something. We can talk.

Training used 1x H100 SXM for a total of roughly 24 hours over multiple runs. Art by navy_(navy.blue) - Danbooru


Stheno-v3.2-Zeta

I ran tests with multiple variations of the model, merged back to its base at various weights, across different training runs, and this sixth iteration is the one I like most.

Changes compared to v3.1
- Included a mix of SFW and NSFW Storywriting Data, thanks to Gryphe
- Included More Instruct / Assistant-Style Data
- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering; a manual pass fixed that.
- Hyperparameter tinkering for training, resulting in lower loss levels.

Testing Notes - Compared to v3.1
- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
- Better at Storywriting / Narration.
- Better at Assistant-type Tasks.
- Better Multi-Turn Coherency -> Reduced Issues?
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.


Recommended Samplers:

Temperature - 1.12-1.22
Min-P - 0.075
Top-K - 50
Repetition Penalty - 1.1

Stopping Strings:

\n\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
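The sampler settings and stopping strings above can be collected into a single config. A minimal sketch, assuming the parameter names used by llama-cpp-python's `create_completion()` (other frontends use similar but not identical names):

```python
# Recommended sampler settings for Stheno-v3.2, expressed as keyword
# arguments in llama-cpp-python's naming convention (an assumption;
# rename as needed for your frontend).
SAMPLER_SETTINGS = {
    "temperature": 1.15,   # recommended range: 1.12-1.22
    "min_p": 0.075,
    "top_k": 50,
    "repeat_penalty": 1.1,
    # Stopping strings; "\n\n{{User}}" is a placeholder your frontend
    # should substitute with the actual user name.
    "stop": ["\n\n{{User}}", "<|eot_id|>", "<|end_of_text|>"],
}
```

These would then be passed along with the prompt, e.g. `llm.create_completion(prompt, **SAMPLER_SETTINGS)`.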

Prompting Template - Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
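The template above can be assembled programmatically. A minimal sketch for a single turn (the function name is illustrative, not part of any library):

```python
def format_llama3_prompt(system_prompt: str, user_input: str) -> str:
    """Build a single-turn Llama-3-Instruct prompt per the template above.

    The returned string ends at the assistant header, ready for the
    model to generate the {output} portion.
    """
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

For multi-turn use, each completed assistant reply is closed with `<|eot_id|>` and further user/assistant header pairs are appended in the same pattern.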

Basic Roleplay System Prompt

You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.

Downloads last month: 421
Format: GGUF
Model size: 8.03B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit

