Update README.md
README.md
### Model Description

TowerInstruct-7B is a language model that results from fine-tuning TowerBase on the TowerBlocks supervised fine-tuning dataset. TowerInstruct-7B-v0.1 is the first model in the series.

The model is trained to handle several translation-related tasks, such as general machine translation (e.g., sentence- and paragraph-level translation, terminology-aware translation, context-aware translation), automatic post edition, named-entity recognition, grammatical error correction, and paraphrase generation.

We will release more details in the upcoming technical report.

- **Developed by:** Unbabel, Instituto Superior Técnico, CentraleSupélec University of Paris-Saclay

## Intended uses & limitations

The model was initially fine-tuned on a filtered and preprocessed supervised fine-tuning dataset ([TowerBlocks](https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.1)), which contains a diverse range of data sources:

- Translation (sentence and paragraph-level)
- Automatic Post Edition
- Machine Translation Evaluation
- Context-aware Translation
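
The card's usage snippet (ending in `print(outputs[0]["generated_text"])`) falls outside the lines shown here. As a rough sketch of how a prompt for one of these tasks might be assembled, assuming the model expects a ChatML-style template (an assumption to verify against the tokenizer's chat template; `format_prompt` is a hypothetical helper, not part of the card):

```python
def format_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    """Build a ChatML-style prompt asking for a single-sentence translation.

    In practice, prefer tokenizer.apply_chat_template over hand-building
    the template, since it reflects the model's actual expected format.
    """
    instruction = (
        f"Translate the following text from {src_lang} into {tgt_lang}.\n"
        f"{src_lang}: {text}\n"
        f"{tgt_lang}:"
    )
    return f"<|im_start|>user\n{instruction}<|im_end|>\n<|im_start|>assistant\n"

prompt = format_prompt(
    "Portuguese", "English", "Um grupo de investigadores lançou um novo modelo."
)
print(prompt)
```

The generated text that follows the trailing `<|im_start|>assistant\n` marker would then be the model's translation.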

[…]

### Out-of-Scope Use

The model is not guaranteed to perform well for languages other than the 10 languages it supports. Even though we trained the model on conversational data and code instructions, it is not intended to be used as a conversational chatbot or code assistant.

We are currently working on improving quality and consistency on document-level translation. This model is not intended to be used as a document-level translator.
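
Until document-level support improves, longer inputs can be translated one sentence at a time. This is an illustrative sketch only, not part of the card: the regex-based splitter is naive, and `translate_sentence` is a placeholder for whatever function wraps a single model call.

```python
import re

def split_sentences(document: str) -> list[str]:
    # Naive split on sentence-final punctuation followed by whitespace;
    # a real pipeline would use a proper sentence segmenter.
    parts = re.split(r"(?<=[.!?])\s+", document.strip())
    return [p for p in parts if p]

def translate_document(document: str, translate_sentence) -> str:
    # translate_sentence(sentence) would wrap one call to the model.
    return " ".join(translate_sentence(s) for s in split_sentences(document))

# With an identity "translator", the document round-trips unchanged:
doc = "First sentence. Second one! Third?"
print(translate_document(doc, lambda s: s))
```

Note that translating sentences independently discards cross-sentence context, which is exactly the quality gap the document-level caveat above is about.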

## Bias, Risks, and Limitations