
meta-llama/Meta-Llama-3-8B (Quantized)

Description

This model is a quantized version of meta-llama/Meta-Llama-3-8B, produced with torchao using int4_weight_only (weight-only int4) quantization.

Quantization Details

  • Quantization Type: int4_weight_only
  • Group Size: 128
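
For reference, a checkpoint with these settings can be produced through the torchao integration in transformers. The snippet below is a minimal sketch, not the exact command used for this repository; it assumes a recent transformers release with torchao installed and a CUDA device, and the push target repo name is illustrative.

from transformers import AutoModelForCausalLM, TorchAoConfig

# int4 weight-only quantization with group size 128 (matches this checkpoint's settings)
quant_config = TorchAoConfig("int4_weight_only", group_size=128)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    torch_dtype="auto",
    device_map="auto",
    quantization_config=quant_config,
)

# torchao tensors are saved with pickle, so safetensors serialization is disabled
model.push_to_hub("your-username/Meta-Llama-3-8B-torchao-int4_weight_only-gs_128", safe_serialization=False)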

Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub:

from transformers import AutoModelForCausalLM

# Load the quantized checkpoint (requires torchao to be installed)
model = AutoModelForCausalLM.from_pretrained(
    "medmekk/Meta-Llama-3-8B-torchao-int4_weight_only-gs_128", device_map="auto"
)
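
As a quick sanity check, a hedged generation example using the model object loaded above; the prompt is arbitrary, and if this repository does not ship tokenizer files, load the tokenizer from the base model meta-llama/Meta-Llama-3-8B instead.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("medmekk/Meta-Llama-3-8B-torchao-int4_weight_only-gs_128")
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))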
Model tree for medmekk/Meta-Llama-3-8B-torchao-int4_weight_only-gs_128

  • Base model: meta-llama/Meta-Llama-3-8B (this model is a quantized variant)