Llama-3.1-SuperNova-Lite-lorabilterated-8B

An experiment in model safety:

This repo contains a merge of pre-trained language models created using mergekit.

Applying the abliteration LoRA derived from Llama 3 was partially successful in reducing refusals, with increased compliance in the qualified contexts of hypotheticals and roleplay. Baseline safety appears intact. We hypothesize that the distillation process transferred additional safety behavior, encoded in a way that differs from the refusal mechanism originally targeted by failspy/Meta-Llama-3-8B-Instruct-abliterated-v3 against Llama 3. The partial effectiveness is evidence of common model ancestry between Llama 3 and Llama 3.1, though we are not privy to the specific details.

Built with Llama.

Merge Details

Merge Method

This model was merged using the passthrough merge method, with arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the base.
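The "+" in the model reference above means the LoRA adapter is folded into the base model's weights before the merge proceeds. A minimal numpy sketch of folding a LoRA update into a single weight matrix is shown below; the function name, shapes, and scaling convention are illustrative assumptions, not mergekit internals:

```python
import numpy as np

def merge_lora(W, A, B, alpha=16.0):
    """Fold a LoRA update into a weight matrix: W + (alpha / r) * B @ A.

    W: (d_out, d_in) base weight; A: (r, d_in); B: (d_out, r).
    alpha/r is the conventional LoRA scaling factor.
    """
    r = A.shape[0]  # LoRA rank
    return W + (alpha / r) * (B @ A)

# Toy dimensions for illustration; real Llama matrices are far larger.
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in)).astype(np.float32)
A = rng.standard_normal((r, d_in)).astype(np.float32)
B = rng.standard_normal((d_out, r)).astype(np.float32)

W_merged = merge_lora(W, A, B)
```

A zero-valued adapter leaves the base weights unchanged, which is why applying an abliteration LoRA only perturbs the directions the adapter encodes.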

An example mergekit CLI invocation, for illustrative purposes:

mergekit-yaml mergekit_config.yml model_directory/llama-model-8B --cuda --lora-merge-cache lora_merge_cache

Configuration

The following YAML configuration was used to produce this model:

base_model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
dtype: bfloat16
merge_method: passthrough
models:
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
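In the configuration above, the text before the "+" names the base checkpoint and the text after it names the LoRA adapter applied at merge time. A small sketch of interpreting that reference syntax, assuming a hypothetical helper (this is not mergekit's parser):

```python
def split_model_ref(ref: str):
    """Split a 'base+lora' reference into (base, lora).

    Returns lora as None when the reference names a plain model
    with no adapter attached.
    """
    base, sep, lora = ref.partition("+")
    return base, (lora if sep else None)

ref = "arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B"
base, lora = split_model_ref(ref)
print(base)  # arcee-ai/Llama-3.1-SuperNova-Lite
print(lora)  # grimjim/Llama-3-Instruct-abliteration-LoRA-8B
```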
