fgaim committed
Commit 091ff14
1 Parent(s): 1c2b6a8

Update README

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -8,7 +8,7 @@ widget:
 
 We pretrain a BERT base-uncased model for Tigrinya on a dataset of 40 million tokens trained for 40 epochs.
 
-Contained in this repo are the original pretrained Flax model that was trained on a TPU v3.8 and it's correponding PyTorch version.
+Contained in this repo is the original pretrained Flax model that was trained on a TPU v3.8 and it's corresponding PyTorch version.
 
 ## Hyperparameters
 
@@ -16,6 +16,6 @@ The hyperparameters corresponding to model sizes mentioned above are as follows:
 
 | Model Size | L | AH | HS | FFN | P | Seq |
 |------------|----|----|-----|------|------|------|
-| BASE | 12 | 12 | 768 | 3072 | 110M | 128 |
+| BASE | 12 | 12 | 768 | 3072 | 110M | 512 |
 
 (L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
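The parameter count P in the table can be sanity-checked from the other columns. The sketch below is an assumption-laden back-of-envelope calculation, not part of this commit: it assumes the standard BERT base-uncased WordPiece vocabulary of 30,522 tokens and the usual BERT-base layer layout (Q/K/V/output projections, a two-matrix feed-forward block, LayerNorms, and a pooler), with the corrected maximum sequence length of 512 as the position-embedding size.

```python
# Hyperparameters from the table (VOCAB is an assumption: standard
# bert-base-uncased WordPiece vocabulary, not stated in the README).
H, FFN, LAYERS, VOCAB, MAX_POS = 768, 3072, 12, 30522, 512

# Embedding block: token + position + segment embeddings, plus one LayerNorm.
emb = VOCAB * H + MAX_POS * H + 2 * H + 2 * H

# One transformer layer:
per_layer = (4 * (H * H + H)   # Q, K, V and attention-output projections (+bias)
             + 2 * H           # attention LayerNorm
             + H * FFN + FFN   # feed-forward up-projection (+bias)
             + FFN * H + H     # feed-forward down-projection (+bias)
             + 2 * H)          # feed-forward LayerNorm

pooler = H * H + H             # pooler dense layer on [CLS]

total = emb + LAYERS * per_layer + pooler
print(f"{total:,} parameters ~= {total / 1e6:.1f}M")
# prints "109,482,240 parameters ~= 109.5M" -- consistent with the 110M in the table
```

The count is dominated by the 12 encoder layers (~85M) plus the embedding table (~24M); changing Seq from 128 to 512 only affects the position-embedding rows, so P stays ~110M either way.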