indic-mALBERT-static-smooth-INT8-squad-v2

This model is a static, smooth INT8-quantized version of indic-mALBERT-squad-v2, calibrated on the squad_v2 dataset. Note that INT8 quantization was performed with Intel® Neural Compressor.
