---
tags:
- sparse
- sparsity
- quantized
- onnx
- embeddings
- int8
license: mit
language:
- en
---
# gte-small-quant

This is the quantized (INT8) ONNX variant of the gte-small embedding model, created with DeepSparse Optimum for ONNX export/inference and Neural Magic's Sparsify for one-shot quantization.
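To give a sense of what INT8 quantization does to an embedding vector, here is a minimal numpy sketch of symmetric per-tensor INT8 quantization. This is an illustration of the general technique, not the exact calibration scheme Sparsify applies; the 384-dimensional size matches gte-small's embedding width, and the random vector is a stand-in for a real model output.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for a gte-small output: a 384-dimensional float32 embedding.
emb = rng.standard_normal(384).astype(np.float32)

# Symmetric per-tensor INT8 quantization: map [-max|x|, +max|x|] onto [-127, 127].
scale = np.abs(emb).max() / 127.0
q = np.clip(np.round(emb / scale), -128, 127).astype(np.int8)

# Dequantize and check how well cosine similarity is preserved.
deq = q.astype(np.float32) * scale
cos = float(np.dot(emb, deq) / (np.linalg.norm(emb) * np.linalg.norm(deq)))
print(f"cosine(fp32, int8-dequant) = {cos:.4f}")
```

For embedding workloads the quantized vector stays almost perfectly aligned with the original (cosine similarity very close to 1.0), which is why INT8 variants can cut memory and speed up inference with little retrieval-quality loss.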
Current list of sparse and quantized gte-small ONNX models:
| Links | Sparsification Method |
| --- | --- |
| zeroshot/gte-small-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/gte-small-quant | Quantization (INT8) |
BGE models using this architecture:
| Links | Sparsification Method |
| --- | --- |
| zeroshot/bge-large-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-large-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-base-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-base-en-v1.5-quant | Quantization (INT8) |
| zeroshot/bge-small-en-v1.5-sparse | Quantization (INT8) & 50% Pruning |
| zeroshot/bge-small-en-v1.5-quant | Quantization (INT8) |
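Whichever variant you pick, the downstream pattern is the same: encode texts into vectors, then rank by cosine similarity. The sketch below uses small placeholder vectors in place of real model outputs (the DeepSparse/ONNX inference call itself is omitted, since only the comparison step is being illustrated).

```python
import numpy as np

# Hypothetical document embeddings; in practice these would come from
# running gte-small-quant (or a bge variant) over your corpus.
docs = {
    "intro":   np.array([0.9, 0.1, 0.0], dtype=np.float32),
    "pricing": np.array([0.1, 0.9, 0.2], dtype=np.float32),
    "support": np.array([0.0, 0.2, 0.9], dtype=np.float32),
}
query = np.array([0.8, 0.2, 0.1], dtype=np.float32)  # hypothetical query embedding

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieve the document whose embedding is closest to the query.
best = max(docs, key=lambda k: cosine(query, docs[k]))
print(best)  # → intro
```

Because embeddings are typically compared with cosine similarity, normalizing vectors once at indexing time lets you replace the full cosine with a plain dot product at query time.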
For general questions on these models and sparsification methods, reach out to the engineering team on our community Slack.