
🦙 Llama-3.2-3B-Instruct-abliterated-finetuned

This is a fine-tuned version of Llama-3.2-3B-Instruct-abliterated.

If the desired result is not achieved, you can clear the conversation and try again.
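Below is a minimal chat-inference sketch using the Hugging Face transformers library. It is illustrative rather than an official usage recipe: the generation settings (temperature, token budget) and the example prompt are assumptions, not taken from this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the released weights are stored in BF16
    device_map="auto",
)

# Example prompt; replace with your own conversation.
messages = [
    {"role": "user", "content": "Summarize what instruction tuning does in one paragraph."},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling parameters here are illustrative, not recommended defaults.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the desired result is not achieved, clearing the conversation history and regenerating is often enough, since the response is sampled and will differ between runs.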

Evaluations

The benchmarks below were re-evaluated, and each score is reported as the average across runs for that test.

| Benchmark  | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated | Llama-3.2-3B-Instruct-abliterated-finetuned |
|------------|-----------------------|-----------------------------------|---------------------------------------------|
| IF_Eval    | 76.55                 | 76.76                             | 77.80                                       |
| MMLU Pro   | 27.88                 | 28.00                             | 27.63                                       |
| TruthfulQA | 50.55                 | 50.73                             | 49.83                                       |
| BBH        | 41.81                 | 41.86                             | 42.20                                       |
| GPQA       | 28.39                 | 28.41                             | 28.74                                       |

The script used for evaluation is included in this repository at /eval.sh.
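For readers who prefer the Python API, the sketch below shows how such benchmarks could be re-run with EleutherAI's lm-evaluation-harness. This is an assumption-laden illustration: the task names, few-shot settings, and batch size are guesses, and the repository's /eval.sh remains the authoritative procedure.

```python
# Hypothetical re-evaluation sketch; /eval.sh in the repo is authoritative.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=(
        "pretrained=huihui-ai/Llama-3.2-3B-Instruct-abliterated-finetuned,"
        "dtype=bfloat16"
    ),
    # Task names are assumptions mapping to the table above, not taken from eval.sh.
    tasks=["ifeval", "mmlu_pro", "truthfulqa_mc2", "bbh", "gpqa"],
    batch_size=8,
)

# Print the per-task metric dictionaries returned by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```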

Model size: 3.21B parameters, BF16 (safetensors)
