Model: Llama 3 8B, "Built with Meta Llama 3" (license: https://llama.meta.com/llama3/license/)
A full walkthrough to reproduce these results is available here: https://github.com/catid/AQLM/blob/main/catid_readme.md
Baseline evaluation results:
`hf (pretrained=meta-llama/Meta-Llama-3-8B-Instruct), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16`
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|winogrande | 1|none | 0|acc |0.7198|± |0.0126|
|piqa | 1|none | 0|acc |0.7873|± |0.0095|
| | |none | 0|acc_norm|0.7867|± |0.0096|
|hellaswag | 1|none | 0|acc |0.5767|± |0.0049|
| | |none | 0|acc_norm|0.7585|± |0.0043|
|arc_easy | 1|none | 0|acc |0.8140|± |0.0080|
| | |none | 0|acc_norm|0.7971|± |0.0083|
|arc_challenge| 1|none | 0|acc |0.5290|± |0.0146|
| | |none | 0|acc_norm|0.5674|± |0.0145|
This repo's evaluation results (AQLM quantization with global fine-tuning):
`hf (pretrained=catid/cat-llama-3-8b-instruct-aqlm), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16`
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|winogrande | 1|none | 0|acc |0.7119|± |0.0127|
|piqa | 1|none | 0|acc |0.7807|± |0.0097|
| | |none | 0|acc_norm|0.7824|± |0.0096|
|hellaswag | 1|none | 0|acc |0.5716|± |0.0049|
| | |none | 0|acc_norm|0.7539|± |0.0043|
|arc_easy | 1|none | 0|acc |0.8152|± |0.0080|
| | |none | 0|acc_norm|0.7866|± |0.0084|
|arc_challenge| 1|none | 0|acc |0.5043|± |0.0146|
| | |none | 0|acc_norm|0.5555|± |0.0145|
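For a quick side-by-side comparison, the per-task accuracy deltas (quantized minus baseline) can be computed directly from the `acc` values in the two tables above. This small script is illustrative and not part of the original evaluation:

```python
# Accuracy values copied from the `acc` rows of the two tables above.
baseline = {"winogrande": 0.7198, "piqa": 0.7873, "hellaswag": 0.5767,
            "arc_easy": 0.8140, "arc_challenge": 0.5290}
aqlm = {"winogrande": 0.7119, "piqa": 0.7807, "hellaswag": 0.5716,
        "arc_easy": 0.8152, "arc_challenge": 0.5043}

# Delta = quantized - baseline; negative means the quantized model lost accuracy.
deltas = {task: round(aqlm[task] - baseline[task], 4) for task in baseline}
for task, d in deltas.items():
    print(f"{task:14s} {d:+.4f}")
```

The largest drop is about 2.5 points on arc_challenge, while arc_easy actually improves slightly; all other tasks stay within about 0.8 points of the baseline.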
To reproduce the evaluation results:

```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
conda create -n lmeval python=3.10 -y && conda activate lmeval
pip install -e .
pip install accelerate "aqlm[gpu,cpu]"

accelerate launch lm_eval --model hf \
    --model_args pretrained=catid/cat-llama-3-8b-instruct-aqlm \
    --tasks winogrande,piqa,hellaswag,arc_easy,arc_challenge \
    --batch_size 16
```
You can run this model as a standard `transformers` model using https://github.com/oobabooga/text-generation-webui
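Alternatively, the model can be loaded directly with `transformers`. The snippet below is a minimal sketch, assuming `aqlm[gpu]` and `accelerate` are installed and a CUDA GPU is available; the prompt and generation parameters are illustrative, not from this card:

```python
# Sketch: load the AQLM-quantized checkpoint via transformers.
# Requires: pip install transformers accelerate "aqlm[gpu]"
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "catid/cat-llama-3-8b-instruct-aqlm"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype="auto",   # keep the dtype stored in the checkpoint
    device_map="auto",    # place layers on the available GPU(s)
)

# Llama 3 Instruct expects its chat template, so apply it rather than
# feeding a raw string prompt.
messages = [{"role": "user", "content": "Explain AQLM in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Inference requires downloading the checkpoint from the Hub, so this is a usage sketch rather than something runnable offline.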