Commit 1cba78b: Saving weights at step 180k, Loss: 1.910, Acc: 0.620
INFO:__main__: Optimizer = adafactor
INFO:__main__: Learning rate (peak) = 0.009
INFO:__main__: Num examples = 94558172
INFO:__main__: Num tokenized group examples = 109037136
INFO:__main__: Num Epochs = 1
INFO:__main__: Instantaneous batch size per device = 4
INFO:__main__: Total train batch size (w. parallel & grad accum) = 512
INFO:__main__: Steps per epoch = 212963 (x grad accum (16) = 3407408)
INFO:__main__: Total optimization steps = 212963 (x grad accum (16) = 3407408)
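The logged batch and step counts are internally consistent. Below is a minimal sketch, not the actual training script, that reproduces the arithmetic and builds an Adafactor optimizer in optax at the logged peak learning rate of 0.009. The device count (8) and the linear warmup/decay schedule with 10k warmup steps are assumptions for illustration; neither appears in the log.

```python
import optax

# Values taken directly from the log above.
num_grouped_examples = 109_037_136   # "Num tokenized group examples"
per_device_batch = 4                 # "Instantaneous batch size per device"
grad_accum_steps = 16
total_batch = 512                    # "Total train batch size (w. parallel & grad accum)"

# Implied data-parallel device count: 512 / (4 * 16) = 8 (e.g. a TPU v3-8; assumed).
num_devices = total_batch // (per_device_batch * grad_accum_steps)

# Steps per epoch and micro-steps, matching the logged 212963 and 3407408.
steps_per_epoch = num_grouped_examples // total_batch
micro_steps = steps_per_epoch * grad_accum_steps

# Hypothetical schedule: linear warmup to the logged peak LR, then linear decay
# over the single epoch. The warmup length is an assumption.
warmup_steps = 10_000
schedule = optax.join_schedules(
    schedules=[
        optax.linear_schedule(0.0, 0.009, warmup_steps),
        optax.linear_schedule(0.009, 0.0, steps_per_epoch - warmup_steps),
    ],
    boundaries=[warmup_steps],
)

# Adafactor with a learning-rate schedule, as logged ("Optimizer = adafactor").
optimizer = optax.adafactor(learning_rate=schedule)
```

Adafactor's factored second-moment estimates keep optimizer state small, which is a common reason to pick it over Adam for T5-style pretraining on TPU.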