Work-in-progress notes about cascade
I started using this dataset with AdamW, but then switched to Adafusion/adafusion, which worked much better.
Then I tried that out on cascade.
It turns out an LR of 6e-06 did very little, so I kept increasing it until I saw noticeable differences between epochs.
That gave me a reasonable upper bound.
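The LR search above can be sketched roughly like this (a hypothetical helper, not anything from OneTrainer; the factor and cutoff are assumptions for illustration):

```python
# Sketch of the manual LR search: start tiny and scale up geometrically,
# stopping once we pass the upper bound where training clearly destabilizes.
def lr_sweep(start_lr=6e-06, factor=10.0, upper_bound=1e-03):
    """Return candidate learning rates from start_lr up to upper_bound."""
    rates = []
    lr = start_lr
    while lr <= upper_bound:
        rates.append(lr)
        lr *= factor
    return rates

print(lr_sweep())
```

In practice each candidate rate means a short training run, watching whether epoch-to-epoch samples actually change.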
Then I damped it back down a bit by enabling EMA.
The EMA has to be CPU-based, with default attention and no gradient saving, or it won't fit in memory.
That seems to be the best combination for cascade training I've found so far.
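For context, CPU-based EMA just means the shadow copy of the weights lives in host memory rather than VRAM, so it doesn't compete with the model for GPU space. A minimal sketch of the idea, with assumed names and plain floats standing in for tensors:

```python
# Minimal illustration of a CPU-side exponential moving average of weights.
# The shadow dict lives in ordinary host memory; only the live weights
# (here, the `params` dict) would occupy the GPU in a real trainer.
class CpuEma:
    def __init__(self, params, decay=0.999):
        self.decay = decay
        # Copy initial weights into a host-side shadow store.
        self.shadow = {name: float(v) for name, v in params.items()}

    def update(self, params):
        # shadow = decay * shadow + (1 - decay) * current
        d = self.decay
        for name, v in params.items():
            self.shadow[name] = d * self.shadow[name] + (1.0 - d) * float(v)
```

The trade-off is the per-step transfer of weights to host memory, which costs time but frees VRAM, which is why it fits here alongside default attention and no gradient saving.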
Will upload a specific OneTrainer JSON once I feel like I've found the "best" results.