---
license: apache-2.0
datasets:
- jigsaw_toxicity_pred
language:
- en
metrics:
- perplexity
---

# Model Card for `gminus`

This model is a `facebook/bart-large` fine-tuned on toxic comments from the `jigsaw_toxicity_pred` dataset.

## Model Details

This model is not intended for plain inference, as it is very likely to generate toxic content. It is instead intended as a "utility model" for detecting and fixing toxic content: its token probability distributions will likely differ from those of comparable models not trained or fine-tuned on toxic data. Its name, `gminus`, refers to the _G-_ (anti-expert) model in [Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).

### Model Description

- **Developed by:** tteofili
- **Shared by:** tteofili
- **License:** apache-2.0
- **Finetuned from model:** [facebook/bart-large](https://huggingface.co/facebook/bart-large)

## Uses

## Bias, Risks, and Limitations

This model is fine-tuned on toxic comments from `jigsaw_toxicity_pred` and is very likely to produce toxic content. For this reason it should only be used in combination with other models for detecting and fixing toxic content; see, for example, [Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts](https://aclanthology.org/2023.acl-short.21.pdf).

## Evaluation

This section describes the evaluation protocol and provides the results.

### Testing Data, Factors & Metrics

#### Testing Data

This model was tested on the `jigsaw_toxicity_pred` test set.

#### Metrics

The model was evaluated using `perplexity` (on the MLM task).

### Results

Perplexity: _1.03_
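To illustrate the anti-expert role described above, the sketch below shows a MaRCo-style product-of-experts combination on toy logits: tokens favored by a hypothetical expert (_G+_) are boosted, and tokens favored by the anti-expert (_G-_, i.e. this model) are suppressed. All numbers are illustrative; in practice the logits would come from a base model such as `facebook/bart-large` and from fine-tuned expert/anti-expert models. This is a minimal sketch of the technique, not the paper's exact implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def contrast_logits(base, expert, anti, alpha=2.0):
    # MaRCo-style combination: promote tokens the expert prefers
    # and the anti-expert (gminus) disfavors.
    return [b + alpha * (e - a) for b, e, a in zip(base, expert, anti)]

# Toy 4-token vocabulary logits (illustrative values only).
base   = [1.0, 1.0, 1.0, 1.0]
expert = [0.5, 2.0, 0.5, 0.5]   # hypothetical G+ scores
anti   = [0.5, 0.5, 2.0, 0.5]   # hypothetical G- (gminus) scores

probs = softmax(contrast_logits(base, expert, anti))
# Token 1 (favored by G+) is promoted; token 2 (favored by G-) is suppressed.
```

Large divergence between the expert's and anti-expert's distributions at a position is also a useful signal for *detecting* potentially toxic spans before rewriting them.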
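For reference, the perplexity metric reported above is the exponential of the mean negative log-likelihood over predicted tokens. The per-token log-probabilities below are made-up illustrative values, not actual model outputs:

```python
import math

def perplexity(token_logprobs):
    # Perplexity = exp(mean negative log-likelihood) over predicted tokens.
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Hypothetical per-token log-probabilities from an MLM evaluation.
logprobs = [-0.02, -0.05, -0.01, -0.04]
print(round(perplexity(logprobs), 2))  # → 1.03
```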