Configuring fp16 mixed precision, as provided by PyTorch AMP, reduces memory usage and speeds up training.
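As a minimal sketch of what enabling fp16 mixed precision with PyTorch's native AMP utilities can look like (the toy model, optimizer, and random data below are illustrative assumptions, not part of the original text):

```python
import torch

# Hypothetical toy model and optimizer, for illustration only.
model = torch.nn.Linear(512, 10).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# GradScaler rescales the loss so fp16 gradients do not underflow.
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    inputs = torch.randn(32, 512, device="cuda")
    labels = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()
    # autocast runs eligible ops in fp16, keeping numerically
    # sensitive ops in fp32.
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), labels)

    # Scale the loss, backpropagate, then step and update the scaler.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The memory savings come from storing activations in half precision, while the `GradScaler`/`autocast` pair preserves training stability.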