Halu-8B-Llama3-Blackroot-GGUF
This is a quantized version of Hastagaras/Halu-8B-Llama3-Blackroot, created using llama.cpp.
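For anyone who wants to try it locally, here is a minimal sketch of loading one of the GGUF files with the llama-cpp-python bindings. The quant filename pattern below is an assumption; check the repository's file list for the exact name:

```python
# Minimal sketch: loading a GGUF quant from this repo with llama-cpp-python.
# The filename glob is an assumption; pick whichever quant file the repo provides.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Halu-8B-Llama3-Blackroot-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level; substitute the file you want
    n_ctx=8192,               # Llama 3 supports an 8k context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a one-sentence story."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```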
Model Description
VERY IMPORTANT: This model has not been extensively tested or evaluated, and its performance characteristics are currently unknown. It may generate harmful, biased, or inappropriate content. Please exercise caution and use it at your own risk and discretion.
I just tried saishf's merged model, and it's great, so I decided to try a similar merge method with Blackroot's LoRAs that I had found earlier.
I don't know what to say about this model... it's very strange. Maybe because Blackroot's amazing LoRAs were trained on human data rather than synthetic data, the model turned out to be very human-like, even in its actions and narration.
WARNING: This model is very unsafe in certain parts...especially in RP.
IMATRIX GGUF IS HERE, made available by Lewdiculous
STATIC GGUF IS HERE, made available by mradermacher
Merge Method
This model was merged using the Model Stock merge method, with Hastagaras/Halu-8B-Llama3-v0.3 as the base.
Models Merged
The following models were included in the merge:
- Hastagaras/Halu-8B-Llama3-v0.3 + Blackroot/Llama-3-LongStory-LORA
- Hastagaras/Halu-8B-Llama3-v0.3 + Blackroot/Llama-3-8B-Abomination-LORA
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Hastagaras/Halu-8B-Llama3-v0.3+Blackroot/Llama-3-LongStory-LORA
  - model: Hastagaras/Halu-8B-Llama3-v0.3+Blackroot/Llama-3-8B-Abomination-LORA
merge_method: model_stock
base_model: Hastagaras/Halu-8B-Llama3-v0.3
dtype: bfloat16
```
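For reference, a configuration like this is normally applied with mergekit (either the `mergekit-yaml` CLI or its Python API). A minimal sketch using the Python API, assuming the YAML above is saved to a local file and using a placeholder output path:

```python
# Sketch of applying the config above with mergekit's Python API.
# Assumes `pip install mergekit`; file and output paths are placeholders.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("halu-blackroot.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Halu-8B-Llama3-Blackroot",
    options=MergeOptions(copy_tokenizer=True),
)
```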
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 69.78 |
| AI2 Reasoning Challenge (25-Shot) | 63.82 |
| HellaSwag (10-Shot) | 84.55 |
| MMLU (5-Shot) | 67.04 |
| TruthfulQA (0-shot) | 53.28 |
| Winogrande (5-shot) | 79.48 |
| GSM8k (5-shot) | 70.51 |
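For a rough local sanity check of one of these numbers, something like the sketch below with EleutherAI's lm-evaluation-harness should approximate the ARC score. The task name and few-shot count mirror the leaderboard's setting, but exact scores also depend on the harness version and prompt details the leaderboard uses:

```python
# Rough reproduction sketch using lm-evaluation-harness (pip install lm-eval).
# Evaluates the unquantized base model; GGUF quants will score slightly differently.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Hastagaras/Halu-8B-Llama3-Blackroot,dtype=bfloat16",
    tasks=["arc_challenge"],  # assumed task name for the 25-shot ARC setting
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```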