
# Llama-3.1-Niitorm-8B-DPO

  • DPO-trained Llama-3.1-8B.


New: DPO'd Gutenberg version (full-epoch training).

An RP model built on Niitama 1.1 as the base, nearswapped with "Storm" (one of the smartest Llama 3.1 models), then DPO-finetuned; mostly abliterated.

Essentially, it's an improved Niitama 1.1.


Gutenberg DPO produces more human-like prose and story writing and greatly lessens synthetic-feeling outputs.


## llama.cpp

  • thank you, mradermacher (GGUF)
  • thank you, QuantFactory (GGUF)
  • v0 (GGUF)

## Finetune and merge

This model is a merge and finetune of pre-trained language models.

The resulting merge was finetuned on jondurbin/gutenberg-dpo-v0.1 for 1 epoch at a 1.5e-5 learning rate on an NVIDIA A100.

## Merge Details

### Merge Method

This model was merged using the NEARSWAP merge algorithm with t = 0.0001.
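
The NEARSWAP idea, as I understand it: where a base-model parameter and the secondary model's parameter already nearly agree (a difference on the order of t or less), the value is swapped fully toward the secondary model; everywhere else the interpolation weight falls off toward zero, so the base model dominates. A minimal NumPy sketch of that behavior, with illustrative names rather than the actual mergekit implementation:

```python
import numpy as np

def lerp(a: np.ndarray, b: np.ndarray, w) -> np.ndarray:
    """Elementwise linear interpolation from a to b with weight w."""
    return (1.0 - w) * a + w * b

def nearswap(base: np.ndarray, secondary: np.ndarray, t: float) -> np.ndarray:
    """Sketch of a nearswap-style merge of two parameter tensors.

    The interpolation weight t / |base - secondary| is ~1 (a full swap)
    where the tensors differ by less than t, and decays toward 0 as
    they diverge, so dissimilar parameters stay close to the base model.
    """
    with np.errstate(divide="ignore"):
        weight = t / np.abs(base - secondary)
    weight = np.nan_to_num(weight, nan=1.0, posinf=1.0)  # identical values: full swap
    np.clip(weight, 0.0, 1.0, out=weight)
    return lerp(base, secondary, weight)
```

At t = 0.0001 only near-identical parameters are exchanged, which keeps the result close to the Niitama base while folding in Storm where the two models already agree.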

### Models Merged

The following models were included in the merge:

  • Sao10K/L3.1-8B-Niitama-v1.1 (with grimjim/Llama-3-Instruct-abliteration-LoRA-8B applied)
  • akjindal53244/Llama-3.1-Storm-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
        layer_range: [0, 32]
      - model: akjindal53244/Llama-3.1-Storm-8B
        layer_range: [0, 32]
merge_method: nearswap
base_model: Sao10K/L3.1-8B-Niitama-v1.1+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
parameters:
  t:
    - value: 0.0001
dtype: float16

# Then, DPO Finetune
# [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1)
```
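
To reproduce the merge step, something along these lines should work with mergekit's Python API, assuming the YAML above is saved as config.yml and your mergekit build supports the nearswap method (the output path and options here are illustrative):

```python
# Sketch: run the merge config above through mergekit.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./L3.1-Niitorm-8B",  # illustrative output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```

The `mergekit-yaml config.yml ./L3.1-Niitorm-8B` command-line entry point is the equivalent one-liner.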

## DPO Notes

I used a higher learning rate and the full dataset compared to my "L3.1-Celestial-Stone-2x8B-DPO" training run. This resulted in lower loss and better adaptation to the chosen style.
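
For reference, the DPO stage described on this card maps onto roughly the following TRL sketch. The epoch count and learning rate come from this card; the pre-DPO checkpoint id, beta, the batch settings, and the `processing_class` argument (recent TRL versions) are my assumptions:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Illustrative id for the pre-DPO nearswap merge.
MERGED = "v000000/L3.1-Niitorm-8B-t0.0001"

model = AutoModelForCausalLM.from_pretrained(MERGED)
tokenizer = AutoTokenizer.from_pretrained(MERGED)

# prompt / chosen / rejected triplets: human-written Gutenberg prose
# as "chosen" vs. synthetic completions as "rejected".
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = DPOConfig(
    output_dir="niitorm-dpo",
    num_train_epochs=1,              # from this card
    learning_rate=1.5e-5,            # from this card
    beta=0.1,                        # assumed; not stated on the card
    per_device_train_batch_size=1,   # assumed A100-friendly settings
    gradient_accumulation_steps=8,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```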


## Prompt Template

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
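
You normally don't assemble this string by hand; the tokenizer's bundled chat template emits it. A minimal transformers sketch (the messages and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "v000000/L3.1-Niitorm-8B-DPO-t0.0001"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative writing partner."},
    {"role": "user", "content": "Continue the scene at the harbor."},
]

# apply_chat_template inserts the header/eot tokens shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```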

Credit to Alchemonaut.

Credit to Sao10K.

Credit to Grimjim.

Credit to mlabonne.

Credit to jondurbin.

Credit to woofwolfy.

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|------:|
| Avg.                | 27.89 |
| IFEval (0-Shot)     | 76.89 |
| BBH (3-Shot)        | 30.51 |
| MATH Lvl 5 (4-Shot) | 14.88 |
| GPQA (0-shot)       |  5.93 |
| MuSR (0-shot)       |  7.26 |
| MMLU-PRO (5-shot)   | 31.85 |