
# microsoft/rho-math-1b-v0.1 AWQ

## Model Summary

Rho-1 base models employ Selective Language Modeling (SLM) for pretraining: instead of computing the loss over every token, they selectively train on the clean, useful tokens that are aligned with the desired distribution.

## Model Details

- Format: Safetensors
- Model size: 261M params
- Tensor types: I32, FP16
The serverless Inference API is not available for this repository.

## Model Tree

solidrust/rho-math-1b-v0.1-AWQ is an AWQ quantization of microsoft/rho-math-1b-v0.1.