This model is part of the GrammarCorrector tool.

The article "FlanT5 from scratch for the grammar correction tool" describes how this model was trained:

FlanT5 was trained using the JFLEG dataset. The primary objective of the experiment was to develop a highly effective tool using relatively small models, minimal datasets, and constrained computational resources.
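A minimal usage sketch with the Hugging Face `transformers` library, assuming the standard `text2text-generation` pipeline for a T5-style model (the input sentence is illustrative; the model card does not specify a required prompt prefix):

```python
from transformers import pipeline

# Load the grammar corrector (T5-base architecture, 248M params).
corrector = pipeline(
    "text2text-generation",
    model="akhmat-s/t5-base-grammar-corrector",
)

# Pass an ungrammatical sentence; the model returns a corrected rewrite.
result = corrector("She are moving here next week.", max_length=64)
print(result[0]["generated_text"])
```

The pipeline returns a list of dictionaries, each with a `generated_text` key holding the corrected sentence.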

To accomplish this goal, we implemented two key strategies:

Downloads last month: 56
Model size: 248M params (Safetensors)
Tensor type: F32

Model tree for akhmat-s/t5-base-grammar-corrector

Base model: google-t5/t5-base (fine-tuned)

Dataset used to train akhmat-s/t5-base-grammar-corrector: JFLEG