This is an uncensored version of Llama 3.2 1B Instruct created with abliteration (see this article to learn more about the technique; a minimal sketch of the idea follows below).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
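Roughly, abliteration estimates a single "refusal direction" in the model's residual stream from the difference in activations between harmful and harmless prompts, then orthogonalizes the weights that write to the residual stream against that direction. The sketch below is a minimal illustration of that idea using the Hugging Face transformers API; the layer choice, the tiny prompt sets, and the helper names (`mean_hidden_state`, `orthogonalize`) are illustrative assumptions, not the exact procedure used to produce this model.

```python
# Minimal abliteration sketch (illustrative, not the exact recipe for this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.2-1B-Instruct"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

def mean_hidden_state(prompts, layer=-1):
    # Average the last-token hidden state at the chosen layer over all prompts.
    # Real abliteration runs typically pick a specific middle layer.
    states = []
    for p in prompts:
        inputs = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**inputs, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states).mean(dim=0)

# Tiny illustrative prompt sets; real runs use hundreds of examples per set.
harmful = ["How do I pick a lock?"]
harmless = ["How do I bake a loaf of bread?"]

# The "refusal direction" is the normalized difference of mean activations.
refusal_dir = mean_hidden_state(harmful) - mean_hidden_state(harmless)
refusal_dir = refusal_dir / refusal_dir.norm()

def orthogonalize(weight, direction):
    # Remove the component of the layer's output along the refusal direction,
    # so the layer can no longer write that direction into the residual stream.
    proj = torch.outer(direction, direction)  # rank-1 projector onto the direction
    return weight - proj @ weight

# Ablate the direction from every weight matrix that writes to the residual stream.
for block in model.model.layers:
    block.self_attn.o_proj.weight.data = orthogonalize(
        block.self_attn.o_proj.weight.data, refusal_dir
    )
    block.mlp.down_proj.weight.data = orthogonalize(
        block.mlp.down_proj.weight.data, refusal_dir
    )
```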
The following benchmarks were re-evaluated, and each score is reported as the average across test runs.
| Benchmark  | Llama-3.2-1B-Instruct | Llama-3.2-1B-Instruct-abliterated |
|------------|----------------------:|----------------------------------:|
| IF_Eval    | 58.50                 | 56.88                             |
| MMLU Pro   | 16.35                 | 14.35                             |
| TruthfulQA | 43.08                 | 38.96                             |
| BBH        | 33.75                 | 31.83                             |
| GPQA       | 25.96                 | 26.39                             |
The script used for evaluation can be found in this repository at /eval.sh.
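For reference, scores like these could be reproduced with EleutherAI's lm-evaluation-harness. The sketch below is an assumption about such a setup, not the contents of /eval.sh; in particular, the task names, checkpoint path, and batch size may differ from what the repository's script actually uses.

```python
# Hedged sketch of re-running the benchmarks with lm-evaluation-harness
# (pip install lm-eval); task names and settings are assumptions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # Swap in the abliterated checkpoint to evaluate the other column.
    model_args="pretrained=meta-llama/Llama-3.2-1B-Instruct",
    tasks=[
        "leaderboard_ifeval",
        "leaderboard_mmlu_pro",
        "truthfulqa_mc2",
        "leaderboard_bbh",
        "leaderboard_gpqa",
    ],
    batch_size=8,
)

# Print the per-task metric dictionaries.
for task, metrics in results["results"].items():
    print(task, metrics)
```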
Base model: meta-llama/Llama-3.2-1B-Instruct