# Llama-3.1-MedIT-SUN-8B

## Model Description
Llama-3.1-MedIT-SUN-8B is an experimental language model that leverages model merging techniques to combine the capabilities of multiple foundation models. This 8B parameter model is built upon the Llama-3.1-8B-Instruct architecture and represents an exploration in model fusion methodologies.
## Key Features
- Base Architecture: Meta's Llama-3.1-8B-Instruct
- Parameter Count: 8 billion
- Development: Created by MedIT Solutions
- Merged Components:
  - arcee-ai/Llama-3.1-SuperNova-Lite
  - meta-llama/Llama-3.1-8B-Instruct
## Technical Details
The model was merged using MedIT Solutions' proprietary MedIT-mesh technique. This release serves as a proof of concept and a testing ground for model fusion methodologies.
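The MedIT-mesh technique itself is proprietary and not publicly documented. For illustration only, the sketch below shows the simplest family of merging methods it relates to: element-wise linear interpolation of two models' weights. The `linear_merge` function and the toy state dicts are hypothetical examples, not the actual MedIT-mesh algorithm.

```python
def linear_merge(state_a, state_b, alpha=0.5):
    """Element-wise linear interpolation of two model state dicts.

    Assumes both models share the same architecture, so their state
    dicts have identical keys and shapes. Weights are represented here
    as flat lists of floats to keep the example self-contained; in
    practice these would be framework tensors.
    """
    return {
        name: [alpha * a + (1.0 - alpha) * b
               for a, b in zip(state_a[name], state_b[name])]
        for name in state_a
    }

# Toy "models" with two matching parameters each
model_a = {"layer.weight": [1.0, 2.0], "layer.bias": [0.0]}
model_b = {"layer.weight": [3.0, 4.0], "layer.bias": [1.0]}

merged = linear_merge(model_a, model_b, alpha=0.5)
# merged["layer.weight"] -> [2.0, 3.0], merged["layer.bias"] -> [0.5]
```

Real merging pipelines apply this kind of combination per tensor across billions of parameters, often with per-layer or per-tensor weighting rather than a single global `alpha`.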
## Purpose
This model was developed primarily for testing and research purposes, exploring the potential of model merging techniques in language model development. It should be considered an experimental release rather than a production-ready model.
## Usage Notes
As a test model, it is recommended for research and experimental use only. Evaluate its behavior carefully before considering it for any downstream application.
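Since the model follows the Llama-3.1-8B-Instruct architecture, it should load with the standard Hugging Face `transformers` text-generation pipeline. This is a generic sketch, not an officially documented example for this model; it assumes the repository id `meditsolutions/Llama-3.1-MedIT-SUN-8B` and requires downloading the ~8B-parameter weights.

```python
# Generic transformers inference sketch; requires network access,
# sufficient GPU/CPU memory, and accepted Llama license terms.
import torch
from transformers import pipeline

model_id = "meditsolutions/Llama-3.1-MedIT-SUN-8B"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain model merging in one paragraph."},
]

outputs = pipe(messages, max_new_tokens=256)
print(outputs[0]["generated_text"][-1]["content"])
```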
## Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 30.04 |
| IFEval (0-Shot)     | 78.37 |
| BBH (3-Shot)        | 32.00 |
| MATH Lvl 5 (4-Shot) | 20.02 |
| GPQA (0-shot)       |  7.83 |
| MuSR (0-shot)       |  9.64 |
| MMLU-PRO (5-shot)   | 32.40 |
## Model Tree

Base model: meta-llama/Llama-3.1-8B

## Evaluation Details (Open LLM Leaderboard)

- IFEval (0-Shot), strict accuracy: 78.37
- BBH (3-Shot), normalized accuracy: 32.00
- MATH Lvl 5 (4-Shot), exact match: 20.02
- GPQA (0-shot), acc_norm: 7.83
- MuSR (0-shot), acc_norm: 9.64
- MMLU-PRO (5-shot, test set), accuracy: 32.40