
GGUF quantizations: thanks to mradermacher.


This is a merge of pre-trained language models created using mergekit.


Leximaid - This experimental model grew out of Reddit discussions of Lumimaid-v0.2-8B, Dusk_Rainbow, Stheno-v3.4, Celeste-V1.5, and Lexi-Uncensored-V2, each with its own strengths and weaknesses. Some of them drew criticism, which inspired me to create a new synthesis.

To merge the models, I used the DARE TIES method: first combining Lumimaid-v0.2-8B, Dusk_Rainbow, and Stheno-v3.4, then mixing the resulting model with Celeste-V1.5 and Lexi-Uncensored-V2. Leximaid is focused on roleplay and creative storytelling.
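For intuition, DARE TIES operates on task vectors (the parameter deltas between each fine-tune and the base model): DARE randomly drops a fraction of each delta and rescales the survivors, and TIES then resolves sign conflicts before taking the weighted sum. A toy sketch of the drop-and-rescale step only (illustrative, not mergekit's actual implementation):

```python
import random

def dare_drop_and_rescale(delta, density, rng):
    """Keep each delta entry with probability `density`; rescale kept
    entries by 1/density so the expected delta is unchanged."""
    return [d / density if rng.random() < density else 0.0
            for d in delta]

rng = random.Random(0)
sparse = dare_drop_and_rescale([1.0] * 10_000, density=0.7, rng=rng)
kept = sum(1 for v in sparse if v != 0.0)
print(kept / len(sparse))  # close to the density, 0.7
```

With density 0.7 (as used for the base model below), roughly 30% of each delta is zeroed out, which reduces interference between the merged fine-tunes.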


Merge Details

Merge Method

This model was merged using the DARE TIES merge method.

Models Merged

The following models were included in the merge:

Lumimaid-v0.2-8B
Dusk_Rainbow
Stheno-v3.4
Celeste-V1.5
Lexi-Uncensored-V2

Configuration

The following YAML configuration was used to produce Leximaid:

models:
  - model: Arkana08/Maxi-Fail-L3-8b
    parameters:
      weight: 0.4
      density: 0.7
  - model: nothingiisreal/L3.1-8B-Celeste-V1.5
    parameters:
      weight: 0.3
      density: 0.75
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      weight: 0.3
      density: 0.65
merge_method: dare_ties
base_model: Arkana08/Maxi-Fail-L3-8b
parameters:
  int8_mask: true
dtype: bfloat16
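As a quick sanity check on the configuration above, the per-model weights form a convex combination (they sum to 1), which keeps the merged task vector on the same scale as its parents. A minimal check with the values transcribed by hand:

```python
# Merge parameters transcribed from the YAML configuration above
models = {
    "Arkana08/Maxi-Fail-L3-8b":                   {"weight": 0.4, "density": 0.70},
    "nothingiisreal/L3.1-8B-Celeste-V1.5":        {"weight": 0.3, "density": 0.75},
    "Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2": {"weight": 0.3, "density": 0.65},
}

total_weight = sum(m["weight"] for m in models.values())
assert abs(total_weight - 1.0) < 1e-9  # weights form a convex combination
assert all(0.0 < m["density"] <= 1.0 for m in models.values())
print(f"total weight = {total_weight:.2f}")
```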

Credits

Thanks to the creators of Lumimaid-v0.2-8B, Dusk_Rainbow, Stheno-v3.4, Celeste-V1.5, and Lexi-Uncensored-V2.
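All five parents are Llama 3 / 3.1 8B fine-tunes, so prompts presumably follow the Llama 3 instruct chat template; a minimal formatter sketch (in practice, prefer the tokenizer's `apply_chat_template`):

```python
def format_llama3_chat(system: str, user: str) -> str:
    """Build a Llama 3 instruct prompt by hand (sketch only)."""
    return (
        "<|begin_of_text|>"
        f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
        f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_chat(
    "You are a creative roleplay narrator.",
    "Describe the tavern as I walk in.",
)
print(prompt)
```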

Model size: 8.03B parameters (BF16 safetensors).
