Update README.md
README.md CHANGED
@@ -38,7 +38,7 @@ model-index:
 
 The Finnish Wav2Vec2 Large has the same architecture and uses the same training objective as the English and multilingual one described in [Paper](https://arxiv.org/abs/2006.11477).
 
-You can read more about the pre-trained model from [this paper](
+You can read more about the pre-trained model from [this paper](https://www.isca-archive.org/interspeech_2024/getman24_interspeech.html). The training scripts are available on [GitHub](https://github.com/aalto-speech/colloquial-Finnish-wav2vec2)
 
 ## Intended uses & limitations
 
@@ -103,15 +103,14 @@ Evaluation results in terms of WER (word error rate) and CER (character error rate)
 If you use our models or scripts, please cite our article as:
 
 ```bibtex
-@inproceedings{
-
-
-
-year=2024,
-booktitle={
-pages={
-doi={
-issn={XXXX-XXXX}
+@inproceedings{getman24_interspeech,
+  title = {What happens in continued pre-training? Analysis of self-supervised speech
+           models with continued pre-training for colloquial Finnish ASR},
+  author = {Yaroslav Getman and Tamas Grosz and Mikko Kurimo},
+  year = {2024},
+  booktitle = {Interspeech 2024},
+  pages = {5043--5047},
+  doi = {10.21437/Interspeech.2024-476},
 }
 ```