pinzhenchen committed
Commit ede8189 • 1 Parent(s): 4b906b4
update README
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language:
 - en
--
+- bs
 tags:
 - translation
 license: cc-by-4.0
@@ -9,21 +9,21 @@ license: cc-by-4.0
 
 ### HPLT MT release v1.0
 
-This repository contains the translation model for en-
+This repository contains the translation model for en-bs trained with HPLT data only. For usage instructions, evaluation scripts, and inference scripts, please refer to the [HPLT-MT-Models v1.0](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0) GitHub repository.
 
 ### Model Info
 
-* Source language:
-* Target language:
+* Source language: English
+* Target language: Bosnian
 * Data: HPLT data only
 * Model architecture: Transformer-base
 * Tokenizer: SentencePiece (Unigram)
-* Cleaning: We used OpusCleaner with a set of basic rules. Details can be found in the filter files in [Github](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/en
+* Cleaning: We used OpusCleaner with a set of basic rules. Details can be found in the filter files in [Github](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0/data/bs-en/raw/v0)
 
 You can also read our deliverable report [here](https://hplt-project.org/HPLT_D5_1___Translation_models_for_select_language_pairs.pdf) for more details.
 
 ### Usage
-
+**Note** that for quality considerations, we recommend using [HPLT/translate-en-bs-v1.0-hplt_opus](https://huggingface.co/HPLT/translate-en-bs-v1.0-hplt_opus) instead of this model.
 
 The model has been trained with Marian. To run inference, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository.
 
@@ -33,8 +33,8 @@ The model can be used with the Hugging Face framework if the weights are convert
 
 | testset | BLEU | chrF++ | COMET22 |
 | -------------------------------------- | ---- | ----- | ----- |
-| flores200 |
-| ntrex |
+| flores200 | 4.7 | 26.0 | 0.4314 |
+| ntrex | 4.1 | 23.9 | 0.4178 |
 
 ### Acknowledgements
 
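The updated card says the model was trained with Marian and, per the surrounding README text, can be used with the Hugging Face framework once the weights are converted. As a minimal sketch only: assuming a converted, Marian-compatible checkpoint is published under some repo id (the id below is hypothetical, not taken from the card), inference with the `transformers` library would look roughly like this.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id for an HF-converted checkpoint; substitute the actual
# converted en-bs weights (this repository itself ships Marian-format weights).
model_id = "HPLT/translate-en-bs-v1.0-hplt"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate one English sentence into Bosnian with beam search.
inputs = tokenizer("The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the exact, supported decoding path with the released Marian weights, the card defers to the Inference/Decoding/Translation section of the HPLT-MT-Models repository linked above.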
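The results table reports BLEU, chrF++, and COMET22 on flores200 and ntrex; the actual evaluation scripts live in the GitHub repository. Purely as an illustrative sketch, scores of this kind are commonly computed with `sacrebleu` (BLEU and chrF++) and Unbabel's `comet` package; the `Unbabel/wmt22-comet-da` checkpoint and the toy sentences below are assumptions for the example, not details from the card.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Toy data; in practice these come from decoding the flores200 / ntrex test sets.
sources = ["The weather is nice today."]
hypotheses = ["Vrijeme je danas lijepo."]
references = ["Danas je lijepo vrijeme."]

# BLEU and chrF++ (chrF with word_order=2) via sacrebleu.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrfpp = sacrebleu.corpus_chrf(hypotheses, [references], word_order=2)
print(f"BLEU {bleu.score:.1f}  chrF++ {chrfpp.score:.1f}")

# Reference-based COMET22 score via the comet package.
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)]
print(f"COMET22 {comet_model.predict(data, batch_size=8, gpus=0).system_score:.4f}")
```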