Update README
README.md CHANGED
@@ -8,7 +8,7 @@ widget:
 
 We pretrain a BERT base-uncased model for Tigrinya on a dataset of 40 million tokens, trained for 40 epochs.
 
-Contained in this repo
+Contained in this repo is the original pretrained Flax model that was trained on a TPU v3-8 and its corresponding PyTorch version.
 
 ## Hyperparameters
 
@@ -16,6 +16,6 @@ The hyperparameters corresponding to model sizes mentioned above are as follows:
 
 | Model Size | L  | AH | HS  | FFN  | P    | Seq |
 |------------|----|----|-----|------|------|------|
-| BASE | 12 | 12 | 768 | 3072 | 110M |
+| BASE | 12 | 12 | 768 | 3072 | 110M | 512 |
 
 (L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
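
For reference, the BASE row above maps directly onto a standard `BertConfig`. A minimal sketch; note that the vocabulary size is not given in the table, so the value below is only the transformers default for bert-base-uncased, an assumption:

```python
from transformers import BertConfig

# Sketch of a config matching the BASE row of the table above.
config = BertConfig(
    num_hidden_layers=12,         # L   = number of layers
    num_attention_heads=12,       # AH  = number of attention heads
    hidden_size=768,              # HS  = hidden size
    intermediate_size=3072,       # FFN = feedforward network dimension
    max_position_embeddings=512,  # Seq = maximum sequence length
    vocab_size=30522,             # assumption: transformers default, not stated in the README
)
```

With these settings the model comes out at roughly 110M parameters, matching the P column.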
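
Since the repo ships both the original Flax checkpoint and its converted PyTorch counterpart, either can be loaded with transformers. A minimal sketch; the repo id below is a placeholder, not the model's actual Hub id:

```python
from transformers import AutoTokenizer, BertForMaskedLM, FlaxBertForMaskedLM

repo_id = "<user>/<tigrinya-bert-repo>"  # placeholder — substitute this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(repo_id)

# PyTorch version of the checkpoint
pt_model = BertForMaskedLM.from_pretrained(repo_id)

# Original Flax version
flax_model = FlaxBertForMaskedLM.from_pretrained(repo_id)
```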