willwade committed on
Commit
3381ead
1 Parent(s): 6a15401

updating with better text

# Model Card for t5-small-spoken-typo

This model is a fine-tuned version of T5-small, adapted for correcting typographical errors and missing spaces in text. It has been trained on a combination of spoken corpora, including DailyDialog and BNC, with a focus on short utterances common in conversational English.

## Task
The primary task of this model is **Text Correction**, with a focus on:
- **Sentence Correction**: Enhancing readability by correcting sentences with missing spaces or typographical errors.
- **Text Normalization**: Standardizing text by converting informal or irregular forms into more grammatically correct formats.

This model is intended to support the processing of user-generated content where informal language, abbreviations, and typos are prevalent, improving text clarity for downstream processing or human reading.

# Model Details

## Model Description
The `t5-small-spoken-typo` model is designed to tackle the challenges of text correction in user-generated content, particularly short, conversation-like sentences. It restores missing spaces, removes unnecessary punctuation, corrects typos, and normalizes text by replacing informal contractions and abbreviations with their full forms; typos were introduced into the training data programmatically so that the model would learn to correct them.

## Developed by:
- **Name**: Will Wade
- **Affiliation**: Research & Innovation Manager, Occupational Therapist, Ace Centre, UK
- **Contact Info**: wwade@acecentre.org.uk

## Model type:
- Language model fine-tuned for text correction tasks.

## Language(s) (NLP):
- English (`en`)

## License:
- apache-2.0

## Parent Model:
- The model is fine-tuned from `t5-small`.

## Resources for more information:
- [GitHub Repo](https://github.com/willwade/dailyDialogCorrections/)

# Uses

## Direct Use
This model can be directly applied to correct text in a variety of applications, including, but not limited to, improving the quality of user-generated content, preprocessing text for NLP tasks, and supporting assistive technologies.
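
For example, here is a minimal inference sketch. The checkpoint id `willwade/t5-small-spoken-typo` and the absence of a task prefix are assumptions; adjust both to match the published model.

```python
# Minimal sketch: load the fine-tuned checkpoint and correct one utterance.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "willwade/t5-small-spoken-typo"  # assumed repository id
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "whatsthewether"  # run-together input containing a typo
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# expected output along the lines of: "what's the weather"
```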

## Out-of-Scope Use
The model might not perform well on text significantly longer than the training examples (2-5 words), on highly formal documents, or on languages other than English. Use in sensitive contexts should be approached with caution due to potential biases. **Our typical use case is AAC users, i.e. people using technology to communicate face to face.**

# Bias, Risks, and Limitations

The model may inherit biases present in its training data, potentially reflecting or amplifying societal stereotypes. Given its training on conversational English, it may not generalize well to formal text or other dialects and languages.

## Recommendations
Users are encouraged to critically assess the model's output, especially when it is used in sensitive or impactful contexts. Further fine-tuning with diverse and representative datasets could mitigate some of these limitations.

# Training Details

## Training Data
The model was trained on a curated subset of the DailyDialog and BNC (2014 spoken) corpora, restricted to sentences 2-5 words in length, with typos introduced programmatically and spaces removed for robustness in text correction tasks. The preprocessing code is available [here](https://github.com/willwade/dailyDialogCorrections/tree/main).

## Training Procedure

### Preprocessing
Sentences were stripped of apostrophes and commas, spaces were removed, and typos were introduced programmatically to simulate common errors in user-generated content.
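
As an illustration, a corruption pass along these lines might look like the sketch below. This is an assumption-laden reconstruction; the actual pipeline lives in the linked GitHub repo and may differ in detail.

```python
import random
import string

def corrupt(sentence: str) -> str:
    """Illustrative corruption pass: strip apostrophes and commas,
    inject one substitution typo, then remove all spaces."""
    if not 2 <= len(sentence.split()) <= 5:
        raise ValueError("training sentences were 2-5 words long")
    s = sentence.replace("'", "").replace(",", "")
    chars = list(s)
    i = random.randrange(len(chars))
    if chars[i].isalpha():  # corrupt a letter, leave spaces intact for now
        chars[i] = random.choice(string.ascii_lowercase)
    return "".join(chars).replace(" ", "")

print(corrupt("what's the weather"))  # e.g. "whatstheweathar"
```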
 

### Speeds, Sizes, Times
- Training was conducted on Google Colab and took approximately 11 hours to complete.

# Evaluation

## Testing Data, Factors & Metrics

### Testing Data
Evaluation was performed on a held-out test set derived from the same corpora, containing similar sentences, to ensure that a diverse range of sentence structures and error types was represented.

### Metrics
Performance was measured using the accuracy of space insertion and typo correction, alongside qualitative assessments of text normalization.
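
As a sketch, an exact-match accuracy over (model output, reference) pairs could be computed as follows; the authors' actual scoring script is not published here, so treat this as illustrative.

```python
def exact_match_accuracy(predictions, references):
    """Share of model outputs that exactly match their reference sentence."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match_accuracy(["what's the weather", "see you later"],
                           ["what's the weather", "see you soon"]))  # 0.5
```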

## Results
The model demonstrates high efficacy in correcting short, erroneous sentences, with particular strength in handling real-world, conversational text.

# Environmental Impact

The training was conducted with an emphasis on efficiency and on minimizing carbon emissions. Users leveraging cloud compute resources are encouraged to consider the environmental impact of large-scale model training and inference.

# Technical Specifications

## Model Architecture and Objective
The model follows the T5 architecture, fine-tuned for the specific task of text correction, with a focus on typo correction and space insertion.
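
A fine-tuning setup consistent with this description might look like the sketch below. The hyperparameters and the tiny in-memory dataset are illustrative assumptions, not the recorded configuration.

```python
from transformers import (
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    T5ForConditionalGeneration,
    T5Tokenizer,
)

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Tiny stand-in for the (corrupted -> corrected) sentence pairs.
pairs = [("whatsthewether", "what's the weather"),
         ("seeyoulater", "see you later")]

def encode(src, tgt):
    enc = tokenizer(src, truncation=True, max_length=32)
    enc["labels"] = tokenizer(tgt, truncation=True, max_length=32)["input_ids"]
    return enc

train_ds = [encode(s, t) for s, t in pairs]

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="t5-small-spoken-typo",
        per_device_train_batch_size=2,
        num_train_epochs=1,
    ),
    train_dataset=train_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```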
 

## Compute Infrastructure
- **Hardware**: T4 GPU (Google Colab)
- **Software**: PyTorch 1.8.1 with Transformers 4.8.2

# Citation

**BibTeX:**

```bibtex
@misc{t5_small_spoken_typo_2021,
  title={T5-small Spoken Typo Corrector},
  author={Will Wade},
  year={2021},
  howpublished={\url{https://huggingface.co/willwade/t5-small-spoken-typo}},
}
```