---
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- AAC
- assistive-technology
- spoken
---
# t5-small-spoken-typo
This model is a fine-tuned version of T5-small, adapted for correcting typographical errors and missing spaces in text. It has been trained on a combination of spoken corpora, including DailyDialog and BNC, with a focus on short utterances common in conversational English.
## Task
The primary task of this model is **Text Correction**, with a focus on:
- **Sentence Correction**: Enhancing readability by correcting sentences with missing spaces or typographical errors.
- **Text Normalization**: Standardizing text by converting informal or irregular forms into more grammatically correct formats.
This model is intended to support the processing of user-generated content where informal language, abbreviations, and typos are prevalent, improving text clarity for downstream processing or human reading.
# Model Details
## Model Description
The `t5-small-spoken-typo` model is specifically designed to tackle the challenges of text correction within user-generated content, particularly in short, conversation-like sentences. It inserts missing spaces, removes unnecessary punctuation, corrects common typos, and normalizes text by replacing informal contractions and abbreviations with their full forms.
## Developed by:
- **Name**: Will Wade
- **Affiliation**: Research & Innovation Manager, Occupational Therapist, Ace Centre, UK
- **Contact Info**: wwade@acecentre.org.uk
## Model type:
- Language model fine-tuned for text correction tasks.
## Language(s) (NLP):
- English (`en`)
## License:
- apache-2.0
## Parent Model:
- The model is fine-tuned from `t5-small`.
## Resources for more information:
- [GitHub Repo](https://github.com/willwade/dailyDialogCorrections/)
# Uses
## Direct Use
This model can be directly applied to correcting text in various applications, including, but not limited to, enhancing the quality of user-generated content, preprocessing text for NLP tasks, and supporting assistive technologies.
## Out-of-Scope Use
The model may not perform well on text significantly longer than the training examples (2-5 words), on highly formal documents, or on languages other than English. Use in sensitive contexts should be approached with caution due to potential biases. **Our typical use case is AAC users - i.e. people using technology to communicate face to face.**
# Bias, Risks, and Limitations
The model may inherit biases present in its training data, potentially reflecting or amplifying societal stereotypes. Given its training on conversational English, it may not generalize well to formal text or other dialects and languages.
## Recommendations
Users are encouraged to critically assess the model's output, especially when used in sensitive or impactful contexts. Further fine-tuning with diverse and representative datasets could mitigate some limitations.
# Training Details
## Training Data
The model was trained on a curated subset of the DailyDialog and BNC (2014 spoken) corpora, focusing on sentences 2-5 words in length, with typos introduced and spaces removed programmatically for robustness in text correction tasks. You can see the preprocessing code [here](https://github.com/willwade/dailyDialogCorrections/tree/main).
## Training Procedure
### Preprocessing
Sentences were stripped of apostrophes and commas, spaces were removed, and typos were introduced programmatically to simulate common errors in user-generated content.
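The corruption steps described above can be sketched as follows. The adjacent-character swap used for typo simulation is a hypothetical illustration; the repo's exact corruption scheme may differ.

```python
import random

random.seed(0)

def introduce_typo(word: str) -> str:
    # Hypothetical typo scheme: swap two adjacent characters
    # to simulate a common keyboard slip.
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def corrupt(sentence: str) -> str:
    # Mirror the steps above: strip apostrophes and commas,
    # introduce a typo in one word, then remove all spaces.
    sentence = sentence.replace("'", "").replace(",", "")
    words = sentence.split()
    if words:
        j = random.randrange(len(words))
        words[j] = introduce_typo(words[j])
    return "".join(words)

print(corrupt("I'm going to sleep"))
```

Each corrupted sentence is then paired with its original form as the training target.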
### Speeds, Sizes, Times
- Training was conducted on Google Colab, taking approximately 11 hours to complete.
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
Evaluation was performed on a held-out test set of similar sentences derived from the same corpora, ensuring a diverse range of sentence structures and error types were represented.
### Metrics
Performance was measured using the accuracy of space insertion and typo correction alongside qualitative assessments of text normalisation.
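The exact scoring script is not published with this card, but sentence-level exact-match accuracy, a plausible stand-in for the accuracy metric described above, can be computed as:

```python
def exact_match_accuracy(predictions, references):
    # Fraction of model outputs that exactly match the
    # reference (correctly spaced and spelled) sentence.
    correct = sum(p == r for p, r in zip(predictions, references))
    return correct / len(references)

preds = ["I'm going to sleep", "see you later"]
refs = ["I'm going to sleep", "see you soon"]
print(exact_match_accuracy(preds, refs))  # 0.5
```

Exact match is strict for a generation task; token-level edit distance would credit partial corrections as well.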
## Results
The model demonstrates high efficacy in correcting short, erroneous sentences, with particular strength in handling real-world, conversational text.
# Environmental Impact
The training was conducted with an emphasis on efficiency and minimising carbon emissions. Users leveraging cloud compute resources are encouraged to consider the environmental impact of large-scale model training and inference.
# Technical Specifications
## Model Architecture and Objective
The model follows the T5 architecture, fine-tuned for the specific task of text correction with a focus on typo correction and space insertion.
## Compute Infrastructure
- **Hardware**: T4 GPU (Google Colab)
- **Software**: PyTorch 1.8.1 with Transformers 4.8.2
# Citation
**BibTeX:**
```bibtex
@misc{t5_small_spoken_typo_2021,
  title={T5-small Spoken Typo Corrector},
  author={Wade, Will},
  year={2021},
  howpublished={\url{https://huggingface.co/willwade/t5-small-spoken-typo}},
}
```