
This is TinyLlama/TinyLlama-1.1B-Chat-v1.0 quantized with AutoGPTQ to the 4-bit GPTQ format.

Quantization config:

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.01,
  "desc_act": false,
  "static_groups": false,
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
  "model_file_base_name": null,
  "quant_method": "gptq",
  "checkpoint_format": "gptq"
}
```
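
For reference, below is a minimal sketch of how a checkpoint like this can be produced with AutoGPTQ using the config above. The calibration text and output directory name are placeholders; the actual calibration data used for this model is not documented here.

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Mirrors the quantization config shown above.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    static_groups=False,
    sym=True,
    true_sequential=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)

# GPTQ requires calibration examples; a single sentence is only a stand-in
# for a real calibration set.
examples = [
    tokenizer("AutoGPTQ is a model quantization library based on the GPTQ algorithm.")
]

model.quantize(examples)
model.save_quantized("TinyLlama-1.1B-Chat-v1.0-GPTQ", use_safetensors=True)
```
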
Inference Examples
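
A minimal sketch of loading the quantized checkpoint for inference, assuming the auto-gptq backend (with transformers and accelerate) is installed; `<this-repo-id>` is a placeholder for this model's Hub id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<this-repo-id>"  # placeholder for this quantized repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# transformers dispatches GPTQ checkpoints to the installed GPTQ backend.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# TinyLlama-Chat ships a chat template, so build the prompt with it.
messages = [{"role": "user", "content": "What is GPTQ quantization?"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```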