# D-Adaptation Experiment Notes

## Learning rates

Unet 1, text encoder 0.5, as seen in this thread: https://twitter.com/kohya_tech/status/1627194651034943490?cxt=HHwWhIDUtb66-pQtAAAA (a sketch of this optimizer setup is at the end of these notes).

## Alpha

Alpha=dim was recommended in the github thread https://github.com/kohya-ss/sd-scripts/issues/181

I have tried dim 8 / alpha 1 with success as well as failure. Both Amber and Castoria are alpha=1 and seem to work fine. UMP ends up with image generations that look like a single brown square; still testing whether alpha has a relationship to this issue.

As noted in the same github issue, alpha/rank scaling makes the gradient update smaller, which in turn causes d-adaptation to boost the learning rate. That boosted rate could be the reason it goes bad (see the scaling sketch at the end of these notes).

## Dim

Dim 128 shows some local noisy patterns. Reranking the model from 128 down to a lower dim doesn't get rid of them (see the reranking sketch at the end of these notes). Converting the weights of the last up block in the unet does remove them, but it also causes a noticeable change in the generated character. Obviously, you could reduce the last up block by a smaller amount instead.

Lower dims show good performance. A much larger test is needed to check for accuracy differences between them.

## Resolution

To be tested.

## 2.X models

To be tested. Candidate base models: wd1.5, replicant, subtly.
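## Sketches

A minimal sketch of the unet 1 / text 0.5 setup from the learning-rate thread, assuming the facebookresearch/dadaptation package; the parameter lists here are placeholders for the real LoRA weights (kohya's sd-scripts builds these groups for you when you pass --unet_lr / --text_encoder_lr):

```python
import torch
import dadaptation

# Placeholder parameter groups standing in for the actual LoRA weights.
unet_lora_params = [torch.nn.Parameter(torch.randn(8, 320))]
text_encoder_lora_params = [torch.nn.Parameter(torch.randn(8, 768))]

optimizer = dadaptation.DAdaptAdam(
    [
        # With D-Adaptation, the per-group lr acts as a multiplier on the
        # automatically estimated step size d: 1.0 = use d as-is,
        # 0.5 = half of d.
        {"params": unet_lora_params, "lr": 1.0},
        {"params": text_encoder_lora_params, "lr": 0.5},
    ],
    lr=1.0,
    decouple=True,  # AdamW-style decoupled weight decay
)
```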
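Why alpha/rank scaling interacts with d-adaptation, as a toy example: the LoRA delta is multiplied by alpha/rank, so with alpha=1 and rank=8 every gradient reaching the LoRA weights is scaled by 0.125, and d-adaptation compensates by estimating a larger step size. A minimal sketch of the scaling only (not kohya's actual network code):

```python
import torch.nn as nn

class LoRALayer(nn.Module):
    """Toy LoRA layer showing only the alpha/rank scale factor."""
    def __init__(self, in_dim, out_dim, rank=8, alpha=1.0):
        super().__init__()
        self.down = nn.Linear(in_dim, rank, bias=False)
        self.up = nn.Linear(rank, out_dim, bias=False)
        # alpha=1, rank=8 -> scale 0.125; alpha=rank -> scale 1.0.
        self.scale = alpha / rank

    def forward(self, x):
        # The scale also multiplies the gradients of up/down in the
        # backward pass; a small alpha therefore shrinks the update,
        # and D-Adaptation responds by boosting its estimated d.
        return self.up(self.down(x)) * self.scale
```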
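The "reranking" mentioned under Dim can be done with a truncated SVD of the full delta weight, per module. A hypothetical sketch of the idea (kohya's resize script does something along these lines, but this is not its code):

```python
import torch

def rerank_lora_pair(up: torch.Tensor, down: torch.Tensor, new_rank: int):
    """Reduce one LoRA (up, down) pair to new_rank via truncated SVD.

    up:   (out_dim, rank)
    down: (rank, in_dim)
    """
    delta = up.float() @ down.float()               # (out_dim, in_dim)
    u, s, vh = torch.linalg.svd(delta, full_matrices=False)
    u, s, vh = u[:, :new_rank], s[:new_rank], vh[:new_rank, :]
    # Split the singular values evenly between the two factors.
    new_up = u * s.sqrt().unsqueeze(0)              # (out_dim, new_rank)
    new_down = s.sqrt().unsqueeze(1) * vh           # (new_rank, in_dim)
    return new_up, new_down
```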