---
base_model:
- grimjim/llama-3-merge-virt-req-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- mergekit
- merge
- facebook
- meta
- pytorch
- llama
- llama-3
license: other
license_name: llama3
license_link: LICENSE
---
# ⚡ ExLlamaV2 quant of [Llama-3-8B-Irene-v0.2](https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.2)

> [!NOTE]
> ➡️ **ExLlamaV2 version**: [0.0.20](https://github.com/turboderp/exllamav2/releases/tag/v0.0.20)<br/>
> ➡️ **Calibration dataset**: default.<br/>
> 📄 <a href="https://huggingface.co/Meggido/Llama-3-8B-Irene-v0.2-6.5bpw-h8-exl2/resolve/main/measurement.json" download>measurement.json</a> file.

> [!IMPORTANT]
> GGUF quants:<br/>
> [mradermacher/Llama-3-8B-Irene-v0.2-GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.2-GGUF)<br/>
> [mradermacher/Llama-3-8B-Irene-v0.2-i1-GGUF](https://huggingface.co/mradermacher/Llama-3-8B-Irene-v0.2-i1-GGUF)

<img src="https://huggingface.co/Virt-io/Llama-3-8B-Irene-v0.2/resolve/main/Gnome.png" alt="Gnome artwork for Llama-3-8B-Irene-v0.2">

# Llama-3-8B-Irene-v0.2

Mergin' o' models, ye say? Well, that be a task fit fer a clever gnome like meself! When combinin' similar models, I like to use model stock tae bring 'em together. And when I'm slerpin', I makes sure tae use a gradient that tapers off at both ends. That way, the model stays mostly uncensored, ye see.

Now, if I'm mergin' two uncensored models with SLERP, I just favors the one I want more o'! But when it comes tae makin' the gradient, I likes tae get wild and fluctuate between low and high values, ye know what I mean? It's like addin' a bit o' magic tae the mix; helps keep the results from gettin' too boring.

Course, this be just one gnome's way o' doin' things. I'm sure there be other clever methods out there!
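
In plainer terms: when mergekit is given a list for a parameter such as `t`, it treats the list as a gradient and spreads the values across the merged layer range by piecewise-linear interpolation, which is what the tapering and fluctuating above refer to. A minimal sketch of that expansion, assuming numpy (mergekit's internals may differ in detail):

```python
import numpy as np

def gradient_t(values: list[float], num_layers: int) -> np.ndarray:
    """Expand a gradient list into one t value per layer, approximating
    how mergekit interpolates list-valued parameters across layers."""
    anchors = np.linspace(0, num_layers - 1, num=len(values))
    return np.interp(np.arange(num_layers), anchors, values)

# The tapered gradient from the slerp config below:
taper = [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90, 0.35,
         0.75, 0.35, 0.55, 0.35, 0.5]
print(np.round(gradient_t(taper, 32), 2))
# t sits at 0.5 (an even blend) at the first and last layers and swings
# between low and high values in between, matching the taper described above.
```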

## Merge Details

### Merge Method

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the SLERP merge method.
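
For context, SLERP (spherical linear interpolation) blends each pair of weight tensors along the arc between them rather than along a straight line, which helps keep the magnitude of the interpolated weights from collapsing when the two models disagree. A minimal per-tensor sketch, assuming numpy; mergekit's actual implementation adds more bookkeeping, but the core math looks like this:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray,
          dot_threshold: float = 0.9995) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors."""
    # Measure the angle between the tensors on normalized copies.
    u0 = v0 / np.linalg.norm(v0)
    u1 = v1 / np.linalg.norm(v1)
    dot = np.clip(np.sum(u0 * u1), -1.0, 1.0)
    if abs(dot) > dot_threshold:
        # Near-parallel tensors: plain linear interpolation is safer.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)
    # Weight each endpoint so the blend follows the arc, not the chord.
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```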

### Models Merged

The following models were included in the merge:

* Mergekit/llama3-SOVL-v1 (intermediate merge; config below)
* [grimjim/llama-3-merge-virt-req-8B](https://huggingface.co/grimjim/llama-3-merge-virt-req-8B)
* NousResearch/Meta-Llama-3-8B-Instruct
* Locutusque/llama-3-neural-chat-v2.2-8B
* NousResearch/Hermes-2-Pro-Llama-3-8B
* rombodawg/Llama-3-8B-Instruct-Coder-v2
* aaditya/Llama3-OpenBioLLM-8B
* ResplendentAI/SOVL_Llama3_8B
* openlynn/Llama-3-Soliloquy-8B-v2
* grimjim/llama-3-merge-pp-instruct-8B
* ChaoticNeutrals/Poppy_Porpoise-0.72-L3-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: grimjim/llama-3-merge-virt-req-8B
        layer_range: [0, 32]
      - model: Mergekit/llama3-SOVL-v1
        layer_range: [0, 32]
merge_method: slerp
base_model: grimjim/llama-3-merge-virt-req-8B
parameters:
  t:
    - value: [0.5, 0.35, 0.55, 0.35, 0.75, 0.35, 0.90, 0.35, 0.75, 0.35, 0.55, 0.35, 0.5]
dtype: bfloat16
```

# llama3-SOVL-v1

```yaml
slices:
  - sources:
      - model: Mergekit/SMART-CODER
        layer_range: [0, 32]
      - model: ResplendentAI/SOVL_Llama3_8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Mergekit/SMART-CODER
parameters:
  t:
    - value: [0.90, 0.55, 0.75, 0.35, 0.45, 0.90, 0.25, 0.90, 0.45, 0.35, 0.75, 0.55, 0.90]
dtype: bfloat16
```

# SMART-CODER

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: Locutusque/llama-3-neural-chat-v2.2-8B
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
  - model: rombodawg/Llama-3-8B-Instruct-Coder-v2
  - model: aaditya/Llama3-OpenBioLLM-8B
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```
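
For reference, the `model_stock` method used in this recipe (from the Model Stock paper, Jang et al., 2024) averages the fine-tuned models and then interpolates that average back toward the base model, with the ratio derived from the angle between the models' task vectors. The sketch below is a simplified per-tensor version, assuming numpy; the formula is reproduced from memory of the paper and mergekit's implementation differs in detail, so treat it as illustrative only:

```python
import numpy as np

def model_stock(base: np.ndarray, models: list[np.ndarray]) -> np.ndarray:
    """Simplified per-tensor Model Stock: average the fine-tuned weights,
    then interpolate the average back toward the base weights."""
    deltas = [m - base for m in models]  # task vectors relative to the base
    # Estimate cos(theta) as the mean pairwise cosine similarity of task vectors.
    cos = np.mean([
        np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))
        for i, a in enumerate(deltas) for b in deltas[i + 1:]
    ])
    n = len(models)
    t = n * cos / (1 + (n - 1) * cos)  # interpolation ratio toward the average
    return t * np.mean(models, axis=0) + (1 - t) * base
```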