GPT2-Spanish is a language generation model trained from scratch with 9GB of Spanish texts.
This model was trained on a corpus of 9GB of text: 3GB of Wikipedia articles and 6GB of books (narrative, short stories, theater, poetry, essays, and popular science).
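
For a quick usage illustration, here is a minimal text-generation sketch with the Hugging Face `transformers` pipeline. The repository id `DeepESP/gpt2-spanish` is an assumption in this example, not something stated in this README; substitute the actual model id.

```python
from transformers import pipeline

# Model id assumed for illustration (not confirmed by this README);
# substitute the actual repository id.
generator = pipeline("text-generation", model="DeepESP/gpt2-spanish")

# Sample a short Spanish continuation from a prompt.
outputs = generator(
    "La inteligencia artificial",
    max_length=50,
    do_sample=True,
    top_p=0.95,
)
print(outputs[0]["generated_text"])
```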

## Tokenizer

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) for Unicode characters, with a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.

This tokenizer was trained from scratch on the Spanish corpus, since the tokenizer of the English models showed limitations in capturing the semantic relations of Spanish due to the morphosyntactic differences between the two languages.
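
As a small sketch of what the byte-level tokenizer buys here, the snippet below tokenizes an accented Spanish sentence and checks the round trip; the repository id `DeepESP/gpt2-spanish` is again an assumption for illustration.

```python
from transformers import AutoTokenizer

# Model id assumed for illustration; substitute the actual repository id.
tokenizer = AutoTokenizer.from_pretrained("DeepESP/gpt2-spanish")

text = "El niño jugaba en el jardín."
ids = tokenizer.encode(text)

# Byte-level BPE covers any Unicode input, so accented characters
# (ñ, í) round-trip losslessly through the 50257-token vocabulary.
print(ids)
print(tokenizer.decode(ids) == text)

# The model consumes sequences of up to 1024 consecutive tokens.
print(tokenizer.model_max_length)
```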

Thanks to the members of the community who collaborated with funding for the initial tests.

## Cautions

The model generates text according to the patterns learned from the training corpus. These data were not filtered, so the model may generate offensive or discriminatory content.
|
24 |
-
|
25 |
-
|
|
|
5 |
This model was trained with a corpus of 9GB of texts corresponding to 3 GB of Wikipedia articles and 6GB of books (narrative, short stories, theater, poetry, essays, and popularization).
|
6 |
|
7 |
## Tokenizer
|
8 |
+
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for Unicode characters) and a vocabulary size of 50257. The inputs are sequences of 1024 consecutive tokens.
|
9 |
|
10 |
This tokenizer was trained from scratch with the Spanish corpus, since it was evidenced that the tokenizer of the English models presented limitations to capture the semantic relations of Spanish, due to the morphosyntactic differences between both languages.
|
11 |
|
|
|
21 |
|
22 |
## Cautions
|
23 |
The model generates texts according to the patterns learned in the training corpus. These data were not filtered, therefore, the model could generate offensive or discriminatory content.
|
|
|
|