# NEURAL NETWORKS AS KERNEL LEARNERS: THE SILENT ALIGNMENT EFFECT

**Alexander Atanasov∗, Blake Bordelon∗ & Cengiz Pehlevan**
Harvard University
Cambridge, MA 02138, USA
{atanasov, blake bordelon, cpehlevan}@g.harvard.edu

ABSTRACT

Neural networks in the lazy training regime converge to kernel machines. Can
neural networks in the rich feature learning regime learn a kernel machine with
a data-dependent kernel? We demonstrate that this can indeed happen due to a
phenomenon we term silent alignment, which requires that the tangent kernel of
a network evolves in eigenstructure while small and before the loss appreciably
decreases, and grows only in overall scale afterwards. We empirically show that
such an effect takes place in homogenous neural networks with small initialization
and whitened data. We provide an analytical treatment of this effect in the fully
connected linear network case. In general, we find that the kernel develops a
low-rank contribution in the early phase of training, and then evolves in overall
scale, yielding a function equivalent to a kernel regression solution with the final
network’s tangent kernel. The early spectral learning of the kernel depends on
the depth. We also demonstrate that non-whitened data can weaken the silent
alignment effect.

1 INTRODUCTION

Despite the numerous empirical successes of deep learning, much of the underlying theory remains
poorly understood. One promising direction forward to an interpretable account of deep learning
is in the study of the relationship between deep neural networks and kernel machines. Several
studies in recent years have shown that gradient flow on infinitely wide neural networks with a
certain parameterization gives rise to linearized dynamics in parameter space (Lee et al., 2019; Liu
et al., 2020) and consequently a kernel regression solution with a kernel known as the neural tangent
kernel (NTK) in function space (Jacot et al., 2018; Arora et al., 2019). Kernel machines enjoy firmer
theoretical footing than deep neural networks, which allows one to accurately study their training
and generalization (Rasmussen & Williams, 2006; Schölkopf & Smola, 2002). Moreover, they share
many of the phenomena that overparameterized neural networks exhibit, such as interpolating the
training data (Zhang et al., 2017; Liang & Rakhlin, 2018; Belkin et al., 2018). However, the exact
equivalence between neural networks and kernel machines breaks for finite width networks. Further,
the regime with approximately static kernel, also referred to as the lazy training regime (Chizat et al.,
2019), cannot account for the ability of deep networks to adapt their internal representations to the
structure of the data, a phenomenon widely believed to be crucial to their success.

In this present study, we pursue an alternative perspective on the NTK, and ask whether a neural network with an NTK that changes significantly during training can ever be a kernel machine for a _data-dependent_ kernel: i.e. does there exist a kernel function $K$ for which the final neural network function $f$ is $f(x) \approx \sum_{\mu=1}^{P} \alpha_\mu K(x, x^\mu)$ with coefficients $\alpha_\mu$ that depend only on the training data? We answer in the affirmative: a large class of neural networks at small initialization trained on approximately whitened data are accurately approximated as kernel regression solutions with their final, data-dependent NTKs, up to an error dependent on the initialization scale. Hence, our results provide a further concrete link between kernel machines and deep learning which, unlike the infinite width limit, allows for the kernel to be shaped by the data.

_∗These authors contributed equally._



The phenomenon we study consists of two training phases. In the first phase, the kernel starts off
small in overall scale and quickly aligns its eigenvectors toward task-relevant directions. In the
second phase, the kernel increases in overall scale, causing the network to learn a kernel regression
solution with the final NTK. We call this phenomenon the silent alignment effect because the feature
learning happens before the loss appreciably decreases. Our contributions are the following:

1. In Section 2, we demonstrate the silent alignment effect by considering a simplified model where
the kernel evolves while small and then subsequently increases only in scale. We theoretically
show that if these conditions are met, the final neural network is a kernel machine that uses the
final, data-dependent NTK. A proof is provided in Appendix B.

2. In Section 3, we provide an analysis of the NTK evolution of two-layer linear MLPs with a scalar target function and small initialization. If the input training data is whitened, the kernel aligns its
eigenvectors towards the direction of the optimal linear function early on during training while
the loss does not decrease appreciably. After this, the kernel changes in scale only, showing this
setup satisfies the requirements for silent alignment discussed in Section 2.

3. In Section 4, we extend our analysis to deep MLPs by showing that the time required for alignment scales with initialization the same way as the time for the loss to decrease appreciably. Still,
these time scales can be sufficiently separated to lead to the silent alignment effect for which we
provide empirical evidence. We further present an explicit formula for the final kernel in linear
networks of any depth and width when trained from small initialization, showing that the final
NTK aligns to task-relevant directions.

4. In Section 5, we show empirically that the silent alignment phenomenon carries over to nonlinear
networks trained with ReLU and Tanh activations on isotropic data, as well as linear and nonlinear networks with multiple output classes. For anisotropic data, we show that the NTK must
necessarily change its eigenvectors when the loss is significantly decreasing, destroying the silent
alignment phenomenon. In these cases, the final neural network output deviates from a kernel
machine that uses the final NTK.

1.1 RELATED WORKS

Jacot et al. (2018) demonstrated that infinitely wide neural networks with an appropriate parameterization trained on mean square error loss evolve their predictions as a linear dynamical system with the NTK at initialization. A limitation of this kernel regime is that the neural network internal representations and the kernel function do not evolve during training. Conditions under which such lazy training can happen are studied further in (Chizat et al., 2019; Liu et al., 2020). Domingos (2020) recently showed that every model, including neural networks, trained with gradient descent leads to a kernel model with a path kernel and coefficients $\alpha^\mu$ that depend on the test point $x$. This dependence on $x$ makes the construction not a kernel method in the traditional sense that we pursue here (see Remark 1 in (Domingos, 2020)).

Phenomenological studies and models of kernel evolution have been recently invoked to gain insight
into the difference between lazy and feature learning regimes of neural networks. These include
analysis of NTK dynamics which revealed that the NTK in the feature learning regime aligns its
eigenvectors to the labels throughout training, causing non-linear prediction dynamics (Fort et al.,
2020; Baratin et al., 2021; Shan & Bordelon, 2021; Woodworth et al., 2020; Chen et al., 2020; Geiger
et al., 2021; Bai et al., 2020). Experiments have shown that lazy learning can be faster but less robust
than feature learning (Flesch et al., 2021) and that the generalization advantage that feature learning
provides to the final predictor is heavily task and architecture dependent (Lee et al., 2020). Fort et al.
(2020) found that networks can undergo a rapid change of kernel early on in training after which
the network’s output function is well-approximated by a kernel method with a data-dependent NTK.
Our findings are consistent with these results.

Stöger & Soltanolkotabi (2021) recently obtained similar multiple-phase training dynamics, involving an early alignment phase followed by spectral learning and refinement phases, in the setting of
low-rank matrix recovery. Their results share qualitative similarities with our analysis of deep linear
networks. The second phase after alignment, where the kernel’s eigenspectrum grows, was studied
in linear networks in (Jacot et al., 2021), where it is referred to as the saddle-to-saddle regime.



Unlike prior works (Dyer & Gur-Ari, 2020; Aitken & Gur-Ari, 2020; Andreassen & Dyer, 2020),
our results do not rely on perturbative expansions in network width. Also unlike the work of Saxe
et al. (2014), our solutions for the evolution of the kernel do not depend on choosing a specific set of
initial conditions, but rather follow only from assumptions of small initialization and whitened data.

2 THE SILENT ALIGNMENT EFFECT AND APPROXIMATE KERNEL SOLUTION

Neural networks in the overparameterized regime can find many interpolators: the precise function that the network converges to is controlled by the time evolution of the NTK. As a concrete example, we will consider learning a scalar target function with mean square error loss through gradient flow. Let $x \in \mathbb{R}^D$ represent an arbitrary input to the network $f(x)$ and let $\{x^\mu, y^\mu\}_{\mu=1}^P$ be a supervised learning training set. Under gradient flow the parameters $\theta$ of the neural network will evolve, so the output function is time-dependent and we write this as $f(x, t)$. The evolution for the predictions of the network on a test point can be written in terms of the NTK $K(x, x', t) = \frac{\partial f(x,t)}{\partial \theta} \cdot \frac{\partial f(x',t)}{\partial \theta}$ as

$$\frac{d}{dt} f(x, t) = \eta \sum_{\mu=1}^{P} K(x, x^\mu, t)\left(y^\mu - f(x^\mu, t)\right), \qquad (1)$$

where $\eta$ is the learning rate. If one had access to the dynamics of $K(x, x^\mu, t)$ throughout all $t$, one could solve for the final learned function $f^*$ with integrating factors, under conditions discussed in Appendix A:

$$f^*(x) = f_0(x) + \eta \int_0^{\infty} dt\; k_t(x)^\mu \left[\exp\left(-\eta \int_0^{t} K_{t'}\, dt'\right)\right]_{\mu\nu} \left(y^\nu - f_0(x^\nu)\right). \qquad (2)$$

Here, $k_t(x)^\mu = K(x, x^\mu, t)$, $[K_t]_{\mu\nu} = K(x^\mu, x^\nu, t)$, and $y^\mu - f_0(x^\mu)$ is the initial error on point $x^\mu$. We see that the final function has contributions throughout the full training interval $t \in (0, \infty)$. The seminal work by Jacot et al. (2018) considers an infinite-width limit of neural networks, where the kernel function $K_t(x, x')$ stays constant throughout training time. In this setting, where the kernel is constant and $f_0(x^\mu) \approx 0$, we obtain a true kernel regression solution $f(x) = \sum_{\mu,\nu} k(x)^\mu K^{-1}_{\mu\nu} y^\nu$ for a kernel $K(x, x')$ which does not depend on the training data.

Much less is known about what happens in the rich, feature learning regime of neural networks, where the kernel evolves significantly during time in a data-dependent manner. In this paper, we consider a setting where the initial kernel is small in scale, aligns its eigenfunctions early on during gradient descent, and then increases only in scale monotonically. As a concrete phenomenological model, consider depth-$L$ networks with homogenous activation functions with weights initialized with variance $\sigma^2$. At initialization, $K_0(x, x') \sim O(\sigma^{2L-2})$ and $f_0(x) \sim O(\sigma^L)$ (see Appendix B). We further assume that after time $\tau$, the kernel only evolves in scale in a constant direction:

$$K(x, x', t) = \begin{cases} \sigma^{2L-2}\, \tilde{K}(x, x', t) & t \le \tau \\ g(t)\, K_\infty(x, x') & t > \tau, \end{cases} \qquad (3)$$

where $\tilde{K}(x, x', t)$ evolves from an initial kernel at time $t = 0$ to $K_\infty(x, x')$ by $t = \tau$, and $g(t)$ increases monotonically from $\sigma^{2L-2}$ to 1. In this model, one also obtains a kernel regression solution in the limit $\sigma \to 0$, but with the final rather than the initial kernel: $f(x) = k_\infty(x) \cdot K_\infty^{-1} y + O(\sigma^L)$. We provide a proof of this in Appendix B.

The assumption that the kernel evolves early on in gradient descent before increasing only in scale
may seem overly strict as a model of kernel evolution. However, we analytically show in Sections 3
and 4 that this can happen in deep linear networks initialized with small weights, and consequently
that the final learned function is a kernel regression with the final NTK. Moreover, we show that for
a linear network with small weight initialization, the final NTK depends on the training data in a
universal and predictable way.
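As a simple numerical baseline for the discussion above, the sketch below (not taken from the paper; the sizes and the linear stand-in kernel are illustrative) checks that when the kernel is held fixed and $f_0 \approx 0$, Euler-integrating Equation 1 on a test point reproduces the kernel regression prediction $k(x)^\top K^{-1} y$. The silent alignment claim is that the same correspondence can hold with the final, data-dependent kernel $K_\infty$ in place of a static one.

```python
# A minimal check (arbitrary fixed kernel, illustrative sizes; not the paper's code):
# with a static kernel and f_0 = 0, Euler-integrating Equation 1 on a test point
# reproduces the kernel regression prediction k(x)^T K^{-1} y.
import numpy as np

rng = np.random.default_rng(0)
P, D = 10, 50                       # few points in many dimensions keeps K well conditioned
eta, dt, steps = 0.01, 0.1, 3000

X = rng.normal(size=(P, D))         # training inputs x^mu
y = rng.normal(size=P)              # scalar targets y^mu
x_test = rng.normal(size=D)         # a test input x

K = X @ X.T                         # fixed Gram matrix K_{mu nu} = K(x^mu, x^nu)
k_test = X @ x_test                 # k(x)^mu = K(x, x^mu)

f_train = np.zeros(P)               # network outputs on the training set, f_0 = 0
f_test = 0.0
for _ in range(steps):              # Euler steps for df/dt = eta * K (y - f)
    err = y - f_train
    f_test += dt * eta * (k_test @ err)
    f_train += dt * eta * (K @ err)

print("ODE-integrated test prediction:", f_test)
print("kernel regression prediction  :", k_test @ np.linalg.solve(K, y))
```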

We show empirically that our results carry over to nonlinear networks with ReLU and tanh activations under the condition that the data is whitened. For example, see Figure 1, where we show the silent alignment effect on ReLU networks with whitened MNIST and CIFAR-10 images. We define alignment as the overlap between the kernel and the target function, $\frac{y^\top K y}{\|K\|_F\, |y|^2}$, where $y \in \mathbb{R}^P$ is a vector of the target values, quantifying the projection of the labels onto the kernel, as discussed in (Cortes et al., 2012).



[Figure 1 plots: (a) Whitened Data MLP Dynamics; (b) Prediction MNIST; (c) Prediction CIFAR-10; (d) Wide Res-Net Dynamics; (e) Prediction Res-Net. Panels (a) and (d) show loss, kernel scale $|K_t|$, and alignment against training time $t$; panels (b), (c), and (e) show NTK test predictions against NN test predictions.]

Figure 1: A demonstration of the Silent Alignment effect. (a) We trained a 2-layer ReLU MLP on $P = 1000$ MNIST images of handwritten 0's and 1's which were whitened. Early in training, around $t \approx 50$, the NTK aligns to the target function and stays fixed (green). The kernel's overall scale (orange) and the loss (blue) begin to move at around $t = 300$. The analytic solution for the maximal final alignment value in linear networks is overlaid (dashed green), see Appendix E.2. (b) We compare the predictions of the NTK and the trained network on MNIST test points. Due to silent alignment, the final learned function is well described as a kernel regression solution with the final NTK $K_\infty$. However, regression with the initial NTK is not a good model of the network's predictions. (c) The same experiment on $P = 1000$ whitened CIFAR-10 images from the first two classes. Here we use MSE loss on a width-100 network with initialization scale $\sigma = 0.1$. (d) Wide-ResNet with width multiplier $k = 4$ and block size $b = 1$ trained with $P = 100$ training points from the first two classes of CIFAR-10. The dashed orange line marks when the kernel starts growing significantly, by which point the alignment has already finished. (e) Predictions of the final NTK are strongly correlated with the final NN function.

This alignment quantity increases early in training but quickly stabilizes around its asymptotic value before the loss decreases. Though Equation 2 was derived under the assumption of gradient flow with a constant learning rate, the underlying conclusions can hold in more realistic settings as well. In Figure 1 (d) and (e) we show learning dynamics and network predictions for a Wide-ResNet (Zagoruyko & Komodakis, 2017) on whitened CIFAR-10 trained with the Adam optimizer (Kingma & Ba, 2014) at learning rate $10^{-5}$, which exhibits silent alignment and a strong correlation with the final NTK predictor. In the unwhitened setting, this effect is partially degraded, as we discuss in Section 5 and Appendix J. Our results suggest that the final NTK may be useful for analyzing generalization and transfer, as we discuss for the linear case in Appendix F.
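For concreteness, the following sketch (illustrative sizes and a linear target as a stand-in; not the paper's code) computes the empirical NTK of a small two-layer ReLU network $f(x) = a^\top \mathrm{relu}(Wx)$ by writing out the per-parameter gradients, and then evaluates the alignment metric $y^\top K y / (\|K\|_F\, |y|^2)$ used throughout this paper.

```python
# A sketch (illustrative sizes, a linear target as a stand-in; not the paper's code)
# of the alignment metric y^T K y / (||K||_F |y|^2) for the empirical NTK of a
# two-layer ReLU network f(x) = a^T relu(W x), with gradients written out by hand.
import numpy as np

rng = np.random.default_rng(1)
P, D, N, sigma = 200, 20, 100, 0.1

X = rng.normal(size=(P, D))
beta = rng.normal(size=D); beta /= np.linalg.norm(beta)
y = X @ beta                                    # simple linear targets

W = sigma * rng.normal(size=(N, D)) / np.sqrt(D)
a = sigma * rng.normal(size=N) / np.sqrt(N)

def empirical_ntk(X, W, a):
    """K(x, x') = df/dtheta(x) . df/dtheta(x') for f(x) = a^T relu(W x)."""
    H = X @ W.T                                 # pre-activations, shape (P, N)
    Phi = np.maximum(H, 0.0)                    # relu(W x): gradient w.r.t. a
    G = (H > 0) * a                             # a_i relu'(W x)_i: gradient w.r.t. row i of W
    return Phi @ Phi.T + (G @ G.T) * (X @ X.T)

def alignment(K, y):
    return (y @ K @ y) / (np.linalg.norm(K) * (y @ y))

K0 = empirical_ntk(X, W, a)
print("kernel-target alignment at initialization:", alignment(K0, y))
```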

3 KERNEL EVOLUTION IN 2 LAYER LINEAR NETWORKS

We will first study shallow linear networks trained with small initialization before providing analysis
for deeper networks in Section 4. We will focus our discussion in this section on the scalar output
case but we will provide similar analysis in the multiple output channel case in a subsequent section.
We demonstrate that our analytic solutions match empirical simulations in Appendix C.5.

We assume the $P$ data points $x^\mu \in \mathbb{R}^D$, $\mu = 1, \ldots, P$ have zero mean with correlation matrix $\Sigma = \frac{1}{P}\sum_{\mu=1}^{P} x^\mu x^{\mu\top}$. Further, we assume that the target values are generated by a linear teacher function $y^\mu = s\, \beta_T \cdot x^\mu$ for a unit vector $\beta_T$. The scalar $s$ merely quantifies the size of the supervised learning signal: the variance of the targets is $\frac{1}{P}\sum_\mu (y^\mu)^2 = s^2\, \beta_T^\top \Sigma\, \beta_T$. We define the two-layer linear neural network with $N$ hidden units as $f(x) = a^\top W x$.


[Figure 2 schematic panels: (a) Initialization ($t = 0$); (b) Phase 1 ($t \approx t_1$); (c) Phase 2 ($t \gg t_1$).]

Figure 2: The evolution of the kernel's eigenfunctions happens during the early alignment phase $t_1 \approx \frac{1}{s}$, but significant evolution in the network predictions happens for $t > t_2 = \frac{1}{2}\log(s\sigma^{-2})$. (a) Contour plot of the kernel's norm for linear functions $f(x) = \beta \cdot x$. The black line represents the space of weights which interpolate the training set, i.e. $X^\top \beta = y$. At initialization, the kernel is isotropic, resulting in spherically symmetric level sets of RKHS norm. The network function is represented as a blue dot. (b) During Phase I, the kernel's eigenfunctions have evolved, enhancing power in the direction of the min-norm interpolator, but the network function has not moved far from the origin. (c) In Phase II, the network function $W^\top a$ moves from the origin to the final solution.

Concretely, we initialize the weights with the standard parameterization $a_i \sim \mathcal{N}(0, \sigma^2/N)$, $W_{ij} \sim \mathcal{N}(0, \sigma^2/D)$. Understanding the role of $\sigma$ in the dynamics will be crucial to our study. We analyze gradient flow dynamics on the MSE cost $\mathcal{L} = \frac{1}{2P}\sum_\mu \left(f(x^\mu) - y^\mu\right)^2$.

Under gradient flow with learning rate $\eta = 1$, the weight matrices in each layer evolve as

$$\frac{d}{dt} a = -\frac{\partial \mathcal{L}}{\partial a} = W \Sigma \left(s\beta_T - W^\top a\right), \qquad \frac{d}{dt} W = -\frac{\partial \mathcal{L}}{\partial W} = a \left(s\beta_T - W^\top a\right)^\top \Sigma. \qquad (4)$$

The NTK takes the following form throughout training:

$$K(x, x'; t) = x^\top W^\top W x' + |a|^2\, x^\top x'. \qquad (5)$$

Note that while the second term, a simple isotropic linear kernel, does not reflect the nature of the learning task, the first term $x^\top W^\top W x'$ can evolve to yield an anisotropic kernel that has learned a representation from the data.
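A minimal simulation sketch of Equations 4 and 5 is given below (not the paper's code; the data, sizes, and step size are illustrative). It integrates the gradient flow for a two-layer linear network on whitened data from small initialization, tracks the loss and the kernel-target alignment, and finally compares the trained network on test points against kernel regression with the final NTK (which is low-rank in the linear case, so a pseudo-inverse is used).

```python
# A simulation sketch of Equations 4-5 (illustrative sizes, synthetic data; not the
# paper's code): gradient flow for a two-layer linear network on whitened inputs
# from small initialization.  Alignment saturates well before the loss moves, and
# the trained network matches kernel regression with its *final* NTK.
import numpy as np

rng = np.random.default_rng(2)
D, N, P, sigma, s = 20, 64, 200, 1e-3, 1.0
dt, steps = 0.05, 4000

X = rng.normal(size=(P, D))
X = X @ np.linalg.inv(np.linalg.cholesky(X.T @ X / P)).T    # whiten so Sigma = I
beta_T = rng.normal(size=D); beta_T /= np.linalg.norm(beta_T)
y = s * X @ beta_T
Sigma = X.T @ X / P

W = sigma * rng.normal(size=(N, D)) / np.sqrt(D)
a = sigma * rng.normal(size=N) / np.sqrt(N)

def ntk(A, B, W, a):
    """Equation 5 evaluated between two batches of inputs A and B."""
    return A @ W.T @ W @ B.T + (a @ a) * (A @ B.T)

for t in range(steps):
    if t % 50 == 0 and t <= 600:
        K = ntk(X, X, W, a)
        loss = 0.5 * np.mean((X @ W.T @ a - y) ** 2)
        align = (y @ K @ y) / (np.linalg.norm(K) * (y @ y))
        print(f"t={t * dt:5.1f}  loss={loss:.4f}  alignment={align:.3f}")
    w_eff = W.T @ a                                          # effective linear weights
    grad_common = s * beta_T - w_eff
    a, W = (a + dt * (W @ Sigma @ grad_common),              # Equation 4
            W + dt * np.outer(a, grad_common) @ Sigma)

# Compare the trained network to kernel regression with the final NTK on test points.
X_test = rng.normal(size=(50, D))
K_train, k_cross = ntk(X, X, W, a), ntk(X_test, X, W, a)
f_nn = X_test @ W.T @ a
f_ntk = k_cross @ (np.linalg.pinv(K_train) @ y)              # final NTK is low rank
print("max |f_NN - f_NTK| over test points:", np.abs(f_nn - f_ntk).max())
```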


3.1 PHASES OF TRAINING IN TWO LAYER LINEAR NETWORK

We next show that there are essentially two phases of training when training a two-layer linear
network from small initialization on whitened-input data.

-  Phase I: An alignment phase which occurs for $t \sim \frac{1}{s}$. In this phase the weights align to their low-rank structure and the kernel picks up a rank-one term of the form $x^\top \beta\beta^\top x'$. In this setting, since the network is initialized near $W, a = 0$, which is a saddle point of the loss function, the gradient of the loss is small. Consequently, the magnitudes of the weights and kernel evolve slowly.

-  Phase II: A data fitting phase which begins around $t \sim \frac{1}{s}\log(s\sigma^{-2})$. In this phase, the system escapes the initial saddle point $W, a = 0$ and the loss decreases to zero. In this setting both the kernel's overall scale and the scale of the function $f(x, t)$ increase substantially.

If Phase I and Phase II are well separated in time, which can be guaranteed by making σ small,
then the final function solves a kernel interpolation problem for the NTK which is only sensitive
to the geometry of gradients in the final basin of attraction. In fact, in the linear case, the kernel
interpolation at every point along the gradient descent trajectory would give the final solution as we
show in Appendix G. A visual summary of these phases is provided in Figure 2.

3.1.1 PHASE I: EARLY ALIGNMENT FOR SMALL INITIALIZATION

In this section we show how the kernel aligns to the correct eigenspace early in training. We focus
on the whitened setting, where the data matrix X has all of its nonzero singular values equal. We let



$\beta$ represent the normalized component of $\beta_T$ in the span of the training data $\{x^\mu\}$. We will discuss general $\Sigma$ in Section 3.2. We approximate the dynamics early in training by recognizing that the network output is small due to the small initialization. Early on, the dynamics are given by:

$$\frac{d}{dt} a = s W \beta + O(\sigma^3), \qquad \frac{d}{dt} W = s\, a \beta^\top + O(\sigma^3). \qquad (6)$$

Truncating terms of order $\sigma^3$ and higher, we can solve for the kernel's dynamics early on in training:

$$K(x, x'; t) = q_0 \cosh(2\eta s t)\, x^\top \left[\beta\beta^\top + I\right] x' + O(\sigma^2), \qquad t \ll s^{-1}\log(s/\sigma^2), \qquad (7)$$

where $q_0$ is an initialization-dependent quantity, see Appendix C.1. The bound on the error is obtained in Appendix C.2. We see that the kernel picks up a rank-one correction $\beta\beta^\top$ which points in the direction of the task vector $\beta$, indicating that the kernel evolves in a direction sensitive to the target function $y = s\beta_T \cdot x$. This term grows exponentially during the early stages of training, and overwhelms the original kernel $K_0$ with timescale $1/s$. Though the neural network has not yet achieved low loss in this phase, the alignment of the kernel and learned representation has consequences for the transfer ability of the network on correlated tasks, as we show in Appendix F.

3.1.2 PHASE II: SPECTRAL LEARNING

We now assume that the weights have approached their low rank structure, as predicted from the
previous analysis of Phase I dynamics, and study the subsequent NTK evolution. We will show that,
under the assumption of whitening, the kernel only evolves in overall scale.

First, following (Fukumizu, 1998; Arora et al., 2018; Du et al., 2018), we note the following conservation law, which holds for all time: $\frac{d}{dt}\left[a(t)a(t)^\top - W(t)W(t)^\top\right] = 0$. If we assume small initial weight variance $\sigma^2$, then $aa^\top - WW^\top = O(\sigma^2) \approx 0$ at initialization, and it stays that way during training due to the conservation law. This condition is surprisingly informative, since it indicates that $W$ is rank-one up to $O(\sigma)$ corrections. From the analysis of the alignment phase, we also have that $W^\top W \propto \beta\beta^\top$. These two observations uniquely determine the rank-one structure of $W$ to be $a\beta^\top + O(\sigma)$. Thus, from Equation 5 it follows that in Phase II the kernel evolution takes the form

$$K(x, x'; t) = u(t)^2\, x^\top \left[\beta\beta^\top + I\right] x' + O(\sigma), \qquad (8)$$

where $u(t)^2 = |a|^2$. This demonstrates that the kernel only changes in overall scale during Phase II.

Once the weights are aligned with this scheme, we can get an expression for the evolution of $u(t)^2$ analytically, $u(t)^2 = s\, e^{2st}\left(e^{2st} - 1 + s/u_0^2\right)^{-1}$, using the results of (Fukumizu, 1998; Saxe et al., 2014), as we discuss in Appendix C.4. This is a sigmoidal curve which starts at $u_0^2$ and approaches $s$. The transition time where active learning begins occurs when $e^{st} \approx s/u_0^2 \Rightarrow t \approx s^{-1}\log(s/\sigma^2)$. This analysis demonstrates that the kernel only evolves in scale during this second phase of training, from the small initial value $u_0^2 \sim O(\sigma^2)$ to its asymptote.

Hence, kernel evolution in this scenario is equivalent to the assumptions discussed in Section 2,
with g(t) = u(t)[2], showing that the final solution is well approximated by kernel regression with
the final NTK. We stress that the timescale for the first phase t1 1/s, where eigenvectors evolve,
is independent of the scale of the initialization σ[2], whereas the second phase occurs around ∼ _t2_
effect. We illustrate these learning curves and for varyingt1 log(s/σ[2]). This separation of timescales t1 ≪ _t2 for small σ in Figure C.2. σ guarantees the silent alignment ≈_
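One can check that the closed form above solves the reduced scalar dynamics $\dot{u} = u(s - u^2)$ with $u(0)^2 = u_0^2$. The sketch below (illustrative, not from the paper) integrates that ODE for several initialization scales and confirms that the transition time grows only logarithmically in $1/\sigma^2$, up to an $O(1)$ factor.

```python
# A quick check (a sketch, not the paper's code) of the Phase II scale dynamics:
# the closed form u(t)^2 = s e^{2st} (e^{2st} - 1 + s/u_0^2)^{-1} solves
# du/dt = u (s - u^2) with u(0)^2 = u_0^2, and the time at which the scale "turns
# on" grows only logarithmically in 1/sigma^2.
import numpy as np

s = 1.0
for sigma2 in [1e-2, 1e-4, 1e-6]:
    u0_sq = sigma2                            # u_0^2 ~ O(sigma^2) at initialization
    u, t, dt = np.sqrt(u0_sq), 0.0, 1e-3
    while u ** 2 < s / 2:                     # integrate until half the asymptotic scale
        u += dt * u * (s - u ** 2)
        t += dt
    closed_form = s * np.exp(2 * s * t) / (np.exp(2 * s * t) - 1 + s / u0_sq)
    print(f"sigma^2 = {sigma2:.0e}:  t_half = {t:6.2f},  log(s/sigma^2)/s = "
          f"{np.log(s / sigma2) / s:6.2f},  closed form u(t_half)^2 = {closed_form:.3f}")
```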

3.2 UNWHITENED DATA


When data is unwhitened, the right singular vector of $W$ aligns with $\Sigma\beta$ early in training, as we show in Appendix C.3. This happens since, early on, the dynamics for the first layer are $\frac{d}{dt} W \sim a(t)\beta^\top \Sigma$. Thus the early time kernel will have a rank-one spike in the $\Sigma\beta$ direction. However, this configuration is not stable as the network outputs grow. In fact, at late time $W$ must realign to converge to $W \propto a\beta^\top$ since the network function converges to the optimum and $f = a^\top W x = s\beta \cdot x$, which is the minimum $\ell_2$ norm solution (Appendix G.1). Thus, the final kernel will always look like $K_\infty(x, x') = s\, x^\top \left[\beta\beta^\top + I\right] x'$. However, since the realignment of $W$'s singular vectors happens during the Phase II spectral learning, the kernel is not constant up to overall scale, violating the conditions for silent alignment. We note that the learned function still is a kernel regression solution of the final NTK, which is a peculiarity of the linear network case, but this is not achieved through the silent alignment phenomenon, as we explain in Appendix C.3.



4 EXTENSION TO DEEP LINEAR NETWORKS

We next consider scalar target functions approximated by deep linear neural networks and show that many of the insights from the two layer network carry over. The neural network function $f : \mathbb{R}^D \to \mathbb{R}$ takes the form $f(x) = w^{L\top} W^{L-1} \cdots W^1 x$. The gradient flow dynamics under mean squared error (MSE) loss become

$$\frac{d}{dt} W^\ell = -\eta \frac{\partial \mathcal{L}}{\partial W^\ell} = \eta \left(\prod_{\ell' > \ell} W^{\ell'}\right)^{\!\top} (s\beta - \tilde{w})^\top\, \Sigma \left(\prod_{\ell' < \ell} W^{\ell'}\right)^{\!\top}, \qquad (9)$$

where $\tilde{w} = W^{1\top} W^{2\top} \cdots w^L \in \mathbb{R}^D$ is shorthand for the effective one-layer linear network weights. Inspired by observations made in prior works (Fukumizu, 1998; Arora et al., 2018; Du et al., 2018), we again note that the following set of conservation laws holds during the dynamics of gradient descent: $\frac{d}{dt}\left[W^\ell W^{\ell\top} - W^{\ell+1\,\top} W^{\ell+1}\right] = 0$. This condition indicates a balance in the size of weight updates in adjacent layers and simplifies the analysis of linear networks. This balancing condition between weights of adjacent layers is not specific to MSE loss, but will also hold for any loss function, see Appendix D. We will use this condition to characterize the NTK's evolution.
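A quick numerical sanity check of this balancing law is sketched below (illustrative sizes, whitened synthetic data, finite-step gradient descent rather than gradient flow; not the paper's code). The layer-wise differences $W^\ell W^{\ell\top} - W^{\ell+1\,\top} W^{\ell+1}$ stay close to their small initial values even as the effective weights $\tilde{w}$ grow by several orders of magnitude; the small residual drift is a finite-step-size effect, since the law is exact only for gradient flow.

```python
# A numerical sketch (illustrative sizes, whitened synthetic data; not the paper's
# code) of the balancing law: under gradient descent on a deep linear network, the
# differences W^l W^{l T} - W^{l+1 T} W^{l+1} remain close to their small initial
# values even as the effective weights grow by orders of magnitude.  The law is
# exact for gradient flow; the tiny residual drift here is a finite-step effect.
import numpy as np

rng = np.random.default_rng(3)
D, N, L, sigma, s = 10, 20, 3, 0.1, 1.0
lr, steps = 0.005, 16000

beta = rng.normal(size=D); beta /= np.linalg.norm(beta)
Sigma = np.eye(D)                                           # whitened inputs

dims = [D] + [N] * (L - 1) + [1]
Ws = [sigma * rng.normal(size=(dims[l + 1], dims[l])) / np.sqrt(dims[l]) for l in range(L)]

def chain(mats):
    """Return W^k ... W^1 for mats = [W^1, ..., W^k]."""
    out = mats[0]
    for M in mats[1:]:
        out = M @ out
    return out

def imbalance(Ws):
    return max(np.abs(Ws[l] @ Ws[l].T - Ws[l + 1].T @ Ws[l + 1]).max() for l in range(L - 1))

for t in range(steps + 1):
    w_tilde = chain(Ws).ravel()                             # effective weights in R^D
    if t % 4000 == 0:
        print(f"step {t:5d}  |w_tilde| = {np.linalg.norm(w_tilde):.3e}  "
              f"imbalance = {imbalance(Ws):.3e}")
    e_row = ((w_tilde - s * beta) @ Sigma)[None, :]         # (w_tilde - s*beta)^T Sigma
    grads = []
    for l in range(L):                                      # layer gradients as in Equation 9
        Q = np.eye(D) if l == 0 else chain(Ws[:l])          # W^{l-1} ... W^1
        R = np.eye(1) if l == L - 1 else chain(Ws[l + 1:])  # W^L ... W^{l+1}
        grads.append(R.T @ e_row @ Q.T)
    Ws = [W - lr * g for W, g in zip(Ws, grads)]
```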

4.1 NTK UNDER SMALL INITIALIZATION

We now consider the effects of small initialization. When the initial weight variance $\sigma^2$ is sufficiently small, $W^\ell W^{\ell\top} - W^{\ell+1\,\top} W^{\ell+1} = O(\sigma^2) \approx 0$ at initialization.[1] This conservation law implies that these matrices remain approximately equal throughout training. Performing an SVD on each matrix and inductively using the above formula from the last layer to the first, we find that all matrices will be approximately rank-one: $w^L = u(t)\, r_L(t)$, $W^\ell = u(t)\, r_{\ell+1}(t) r_\ell(t)^\top$, where the $r_\ell(t)$ are unit vectors. Using only this balancing condition and expanding to leading order in $\sigma$, we find that the NTK's dynamics look like

$$K(x, x', t) = u(t)^{2(L-1)}\, x^\top \left[(L-1)\, r_1(t) r_1(t)^\top + I\right] x' + O(\sigma). \qquad (10)$$

We derive this formula in Appendix E. We observe that the NTK consists of a rank-one correction to the isotropic linear kernel $x \cdot x'$, with the rank-one spike pointing along the $r_1(t)$ direction. This is true dynamically throughout training under the assumption of small $\sigma$. At convergence $r_1(t) \to \beta$, which is the unique fixed point reachable through gradient descent. We discuss the evolution of $u(t)$ below. The alignment of the NTK with the direction $\beta$ increases with depth $L$.

4.1.1 WHITENED DATA VS ANISOTROPIC DATA

We now argue that in the case where the input data is whitened, the trained network function is again a kernel machine that uses the final NTK. The unit vector $r_1(t)$ quickly aligns to $\beta$ since the first layer weight matrix evolves in the rank-one direction $\frac{d}{dt} W^1 = v(t)\beta^\top$ throughout training, for a time-dependent vector function $v(t)$. As a consequence, early in training the top eigenvector of the NTK aligns to $\beta$. Due to gradient descent dynamics, $W^{1\top} W^1$ grows only in the $\beta\beta^\top$ direction. Since $r_1$ quickly aligns to $\beta$ due to $W^1$ growing only along the $\beta$ direction, the global scalar function $c(t) = u(t)^L$ satisfies the dynamics $\dot{c}(t) = c(t)^{2 - 2/L}\left[s - c(t)\right]$ in the whitened data case, which is consistent with the dynamics obtained when starting from the orthogonal initialization scheme of Saxe et al. (2014). We show in Appendix E.1 that spectral learning occurs over a timescale on the order of $t_{1/2} \approx \frac{L}{s(L-2)}\,\sigma^{-L+2}$, where $t_{1/2}$ is the time required to reach half the value of the initial loss. We discuss this scaling in detail in Figure 3, showing that although the timescale of alignment shares the same scaling with $\sigma$ for $L > 2$, empirically alignment in deep networks occurs faster than spectral learning. Hence, the silent alignment conditions of Section 2 are satisfied. In the case where the data is unwhitened, the $r_1(t)$ vector aligns with $\Sigma\beta$ early in training. This happens since, early on, the dynamics for the first layer are $\frac{d}{dt} W^1 \sim v(t)\beta^\top \Sigma$ for a time-dependent vector $v(t)$. However, for the same reasons we discussed in Section 3.2, the kernel must realign at late times, violating the conditions for silent alignment.

[1] Though we focus on neglecting the $O(\sigma^2)$ initial weight matrices in the main text, an approximate analysis for wide networks at finite $\sigma^2$ and large width is provided in Appendix H.2, which reveals additional dependence on relative layer widths.
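The depth dependence of the learning timescale can be seen directly from the reduced dynamics $\dot{c}(t) = c^{2-2/L}(s - c)$ with $c(0) \sim \sigma^L$. The sketch below (illustrative, not the paper's code) integrates this ODE and estimates the log-log slope of the time-to-half-loss against $\sigma$, which should approach $-(L-2)$ for $L \ge 3$.

```python
# A sketch (not the paper's code) of the depth dependence of the learning time:
# integrate c'(t) = c^{2 - 2/L} (s - c) from c(0) = sigma^L and record the time for
# the loss proxy (s - c)^2 to reach half its initial value.  For L >= 3 this time
# should scale approximately as sigma^{-(L - 2)}.
import numpy as np

def time_to_half_loss(L, sigma, s=1.0, dt=0.01):
    c, t = sigma ** L, 0.0                    # c(0) ~ sigma^L for an L-layer network
    while (s - c) ** 2 > 0.5 * s ** 2:        # until the loss proxy halves
        c += dt * c ** (2.0 - 2.0 / L) * (s - c)
        t += dt
    return t

for L in [3, 4]:
    sigmas = np.array([0.3, 0.2, 0.1, 0.05])
    times = np.array([time_to_half_loss(L, sig) for sig in sigmas])
    slopes = np.diff(np.log(times)) / np.diff(np.log(sigmas))
    print(f"L = {L}: measured log-log slopes {np.round(slopes, 2)}, prediction {-(L - 2)}")
```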


[Figure 3 plots: (a) ODE Time to Learn ($t_{1/2}$ versus $\sigma$ for $L = 3, 4, 5$); (b) $L = 3$ Dynamics (loss and alignment for $\sigma^2 = 10^{-5}, \ldots, 10^{-1}$); (c) Time to Learn for $L = 3$ ($t_{1/2}$ and $t_{\mathrm{align}}$ versus $\sigma$).]

Figure 3: (a) Time to half loss scales in a power law with $\sigma$ for networks with $L \ge 3$: $t_{1/2} \sim \frac{L}{(L-2)}\,\sigma^{-L+2}$ (black dashed) is compared with numerically integrating the dynamics $\dot{c}(t) = c^{2-2/L}(s - c)$ (solid). The power-law scaling of $t_{1/2}$ with $\sigma$ is qualitatively different from what happens for $L = 2$, where we identified logarithmic scaling $t_{1/2} \sim \log(\sigma^{-2})$. (b) Linear networks with $D = 30$ inputs and $N = 50$ hidden units trained on synthetic whitened data with $|\beta| = 1$. We show for an $L = 3$ linear network the cosine similarity of $W^{1\top} W^1$ with $\beta\beta^\top$ (dashed) and the loss (solid) for different initialization scales. (c) The time to reach 1/2 the initial loss and the time for the cosine similarity of $W^{1\top} W^1$ with $\beta\beta^\top$ to reach 1/2 both scale as $\sigma^{-L+2}$; however, one can see that alignment occurs before half loss is achieved.

4.2 MULTIPLE OUTPUT CHANNELS


We next discuss the case where the network has $C$ output channels. We denote each network output as $f_c(x)$, resulting in $C^2$ kernel sub-blocks $K_{c,c'}(x, x') = \nabla f_c(x) \cdot \nabla f_{c'}(x')$. In this context, the balanced condition $W^\ell W^{\ell\top} \approx W^{\ell+1\,\top} W^{\ell+1}$ implies that each of the weight matrices is rank-$C$, implying a rank-$C$ kernel. We give an explicit formula for this kernel in Appendix H. For concreteness, consider whitened input data $\Sigma = I$ and a teacher with weights $\beta \in \mathbb{R}^{C \times D}$. The singular value decomposition of the teacher weights, $\beta = \sum_\alpha s_\alpha z_\alpha v_\alpha^\top$, determines the evolution of each mode (Saxe et al., 2014). Each singular mode begins to be learned at $t_\alpha = \frac{1}{s_\alpha}\log\left(s_\alpha u_0^{-2}\right)$.

To guarantee silent alignment, we need all of the Phase I time constants to be smaller than all of the Phase II time constants. In the case of a two-layer network, this is equivalent to the condition $\frac{1}{s_{\min}} \ll \frac{1}{s_{\max}}\log\left(s_{\max} u_0^{-2}\right)$, so that the kernel alignment timescales are well separated from the timescales of spectral learning. We see that alignment precedes learning in Figure H.1 (a). For deeper networks, as discussed in Section 4.1.1, alignment scales in the same way as the time for learning.
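The timescale bookkeeping for multiple outputs can be made concrete with a short sketch (hypothetical random teacher and illustrative $u_0^2$; not the paper's code): compute the singular values of the teacher, the per-mode learning onsets $t_\alpha$, and check whether every Phase I timescale $1/s_\alpha$ falls below every Phase II onset.

```python
# A timescale-bookkeeping sketch for multiple outputs (hypothetical random teacher,
# illustrative u_0^2; not the paper's code).  Each singular mode of beta in R^{C x D}
# begins to be learned at t_alpha = (1/s_alpha) log(s_alpha / u_0^2); silent alignment
# needs every Phase I timescale 1/s_alpha to be shorter than every such onset.
import numpy as np

rng = np.random.default_rng(4)
C, D, u0_sq = 3, 20, 1e-6                 # u_0^2 ~ sigma^2 at small initialization

beta = rng.normal(size=(C, D))            # an arbitrary illustrative teacher
s_alpha = np.linalg.svd(beta, compute_uv=False)

t_align = 1.0 / s_alpha                   # Phase I (alignment) timescales
t_learn = np.log(s_alpha / u0_sq) / s_alpha   # Phase II (spectral learning) onsets

for idx, (sv, ta, tl) in enumerate(zip(s_alpha, t_align, t_learn)):
    print(f"mode {idx}: s_alpha = {sv:5.2f}   t_align = {ta:5.3f}   t_learn = {tl:6.2f}")

print("silent alignment condition (max t_align < min t_learn):",
      t_align.max() < t_learn.min())
```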

5 SILENT ALIGNMENT ON REAL DATA AND RELU NETS


In this section, we empirically demonstrate that many of the phenomena described in the previous
sections carry over to the nonlinear homogenous networks with small initialization provided that
the data is not highly anisotropic. A similar separation in timescales is expected in the nonlinear
_L-homogenous case since, early in training, the kernel evolves more quickly than the network pre-_
dictions. This argument is based on a phenomenon discussed by Chizat et al. (2019). Consider an
initial scaling of the parameters by σ. We find that the relative change in the loss compared to the
relative change in the features has the form _[|][ d]dt|∇[∇]f[f]|_ _[|]_ _|_ _dt[d]L[L|][ ≈]_ _[O][(][σ][−][L][)][ which becomes very large for]_

small initialization σ as we show in Appendix I. This indicates, that from small initialization, the
parameter gradients and NTK evolve much more quickly than the loss. This is a necessary, but not
sufficient condition for the silent alignment effect. To guarantee the silent alignment, the gradients
must be finished evolving except for overall scale by the time the loss appreciably decreases. However, we showed that for whitened data that nonlinear ReLU networks do in fact enjoy the separation
of timescales necessary for the silent alignment effect in Figure 1. In even more realistic settings,
like ResNet in Figure 1 (d), we also see signatures of the silent alignment effect since the kernel
does not grow in magnitude until the alignment has stabilized.

We now explore how anisotropic data can interfere with silent alignment. We consider the partial whitening transformation: let the singular value decomposition of the data matrix be $X = USV^\top$ and construct a new, partially whitened dataset $X_\gamma = US^\gamma V^\top$, where $\gamma \in (0, 1)$. As $\gamma \to 0$ the dataset becomes closer to perfectly whitened. We compute loss and kernel alignment for depth-2 ReLU MLPs on a subset of CIFAR-10 and show results in Figure 4. As $\gamma \to 0$ the agreement between the final NTK and the learned neural network function becomes much closer, since the kernel alignment curve is stable after a smaller number of training steps. As the data becomes more anisotropic, the kernel's dynamics become less trivial at later times: rather than evolving only in scale, the alignment with the target function varies in a non-trivial way while the loss is decreasing. As a consequence, the NN function deviates from a kernel machine with the final NTK.
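The partial whitening transformation itself is simple to implement; the sketch below applies it to a random anisotropic stand-in for the centered data matrix (illustrative only, not the CIFAR-10 pipeline used in Figure 4).

```python
# A sketch of the partial whitening transformation X_gamma = U S^gamma V^T for
# gamma in (0, 1), applied to a random anisotropic stand-in for the (centered)
# data matrix.  gamma -> 0 flattens the singular value spectrum (more whitened);
# gamma = 1 leaves the data unchanged.
import numpy as np

rng = np.random.default_rng(5)
P, D = 500, 100
X = rng.normal(size=(P, D)) * np.linspace(5.0, 0.1, D)   # anisotropic synthetic data

def partial_whiten(X, gamma):
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(S ** gamma) @ Vt

for gamma in [0.0, 0.25, 0.5, 1.0]:
    Xg = partial_whiten(X, gamma)
    S = np.linalg.svd(Xg, compute_uv=False)
    print(f"gamma={gamma:4.2f}: singular value range [{S.min():8.3f}, {S.max():8.3f}]")
```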

[Figure 4 plots: (a) Input Spectra (partially whitened spectra for $\gamma = 0, 0.25, 0.5, 0.75, 1$); (b) Train Loss; (c) Classification Test Error; (d) Kernel Norm; (e) Phase I Alignment; (f) Predictor Comparison (NTK predictor versus NN).]

Figure 4: Anisotropy in the data introduces multiple timescales which can interfere with the silent alignment effect in a ReLU network. Here we train an MLP to do two-class regression using Adam at learning rate $5 \times 10^{-3}$. (a) We consider the partial whitening transformation $\lambda_k \to \lambda_k^\gamma$ for $\gamma \in (0, 1)$ on the 1000 CIFAR-10 images, for covariance eigenvalues $\Sigma v_k = \lambda_k v_k$. (b) The loss dynamics for unwhitened data have a multitude of timescales rather than a single sigmoidal learning curve. As a consequence, kernel alignment does not happen all at once before the loss decreases and the final solution is not a kernel machine with the final NTK. (c) The network's test error on classification. (d) Anisotropic data gives a slower evolution in the kernel's Frobenius norm. (e) The kernel alignment very rapidly approaches an asymptote for whitened data but exhibits a longer timescale for the anisotropic data. (f) The final NTK predictor gives a better predictor for the neural network when the data is whitened, but still substantially outperforms the initial kernel even in the anisotropic case.

6 CONCLUSION

We provided an example of a case where neural networks can learn a kernel regression solution while
in the rich regime. Our silent alignment phenomenon requires a separation of timescales between
the evolution of the NTK’s eigenfunctions and relative eigenvalues and a separate phase where the
NTK grows only in scale. We demonstrate that, if these conditions are satisfied, then the final neural
network function satisfies a representer theorem for the final NTK. We show analytically that these
assumptions are realized in linear neural networks with small initialization trained on approximately
whitened data and observe that the results hold for nonlinear networks and networks with multiple
outputs. We demonstrate that silent alignment is highly sensitive to anisotropy in the input data.

Our results demonstrate that representation learning is not at odds with the learned neural network
function being a kernel regression solution; i.e. a superposition of a kernel function on the training
data. While we provide one mechanism for a richly trained neural network to learn a kernel regression solution through the silent alignment effect, perhaps other temporal dynamics of the NTK could
also give rise to the neural network learning a kernel machine for a data-dependent kernel. Further,
by asking whether neural networks behave as kernel machines for some data-dependent kernel, one
can hopefully shed light on their generalization and transfer learning capabilities (Bordelon et al.,
2020; Canatar et al., 2021; Loureiro et al., 2021; Geiger et al., 2021); see also Appendix F.



ACKNOWLEDGMENTS

CP acknowledges support from the Harvard Data Science Initiative. AA acknowledges support from
an NDSEG Fellowship and a Hertz Fellowship. BB acknowledges the support of the NSF-Simons
Center for Mathematical and Statistical Analysis of Biology at Harvard (award #1764269) and the
Harvard Q-Bio Initiative. We thank Jacob Zavatone-Veth and Abdul Canatar for helpful discussions
and feedback.

REFERENCES

Kyle Aitken and Guy Gur-Ari. On the asymptotics of wide networks with polynomial activations.
_ArXiv, abs/2006.06687, 2020._

Guillaume Alain and Yoshua Bengio. Understanding intermediate layers using linear classifier
probes. arXiv preprint arXiv:1610.01644, 2016.

Anders Johan Andreassen and Ethan Dyer. Asymptotics of wide convolutional neural networks.
_ArXiv, abs/2008.08675, 2020._

Sanjeev Arora, Nadav Cohen, and Elad Hazan. On the optimization of deep networks: Implicit
acceleration by overparameterization. In Jennifer Dy and Andreas Krause (eds.), Proceedings of
_the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine_
_[Learning Research, pp. 244–253. PMLR, 10–15 Jul 2018. URL https://proceedings.](https://proceedings.mlr.press/v80/arora18a.html)_
[mlr.press/v80/arora18a.html.](https://proceedings.mlr.press/v80/arora18a.html)

Sanjeev Arora, Simon Shaolei Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang.
On exact computation with an infinitely wide neural net. In NeurIPS, 2019.

Michael Baake and Ulrike Schlaegel. The peano-baker series. Proceedings of the Steklov Institute
_of Mathematics, 275(1):155–159, 2011._

Yu Bai, Ben Krause, Huan Wang, Caiming Xiong, and Richard Socher. Taylorized training: Towards
better approximation of neural network training at finite width, 2020.

Aristide Baratin, Thomas George, C´esar Laurent, R. Devon Hjelm, Guillaume Lajoie, Pascal Vincent, and Simon Lacoste-Julien. Implicit regularization via neural feature alignment. In AISTATS,
2021.

Mikhail Belkin, Siyuan Ma, and Soumik Mandal. To understand deep learning we need to understand kernel learning. In Jennifer Dy and Andreas Krause (eds.), Proceedings of the 35th
_International Conference on Machine Learning, volume 80 of Proceedings of Machine Learn-_
_[ing Research, pp. 541–549. PMLR, 10–15 Jul 2018. URL https://proceedings.mlr.](https://proceedings.mlr.press/v80/belkin18a.html)_
[press/v80/belkin18a.html.](https://proceedings.mlr.press/v80/belkin18a.html)

Blake Bordelon, Abdulkadir Canatar, and Cengiz Pehlevan. Spectrum dependent learning curves
in kernel regression and wide neural networks. In Hal Daum´e III and Aarti Singh (eds.), Pro_ceedings of the 37th International Conference on Machine Learning, volume 119 of Proceed-_
_[ings of Machine Learning Research, pp. 1024–1034. PMLR, 13–18 Jul 2020. URL https:](https://proceedings.mlr.press/v119/bordelon20a.html)_
[//proceedings.mlr.press/v119/bordelon20a.html.](https://proceedings.mlr.press/v119/bordelon20a.html)

Roger W Brockett. Finite dimensional linear systems. SIAM, 2015.

Abdulkadir Canatar, Blake Bordelon, and Cengiz Pehlevan. Spectral bias and task-model alignment
explain generalization in kernel regression and infinitely wide neural networks. Nature Commu_nications, 12, 2021._

Shuxiao Chen, Hangfeng He, and Weijie J. Su. Label-aware neural tangent kernel: Toward better
generalization and local elasticity, 2020.

L´ena¨ıc Chizat, Edouard Oyallon, and Francis R. Bach. On lazy training in differentiable programming. In NeurIPS, 2019.

Uri Cohen, SueYeon Chung, Daniel D Lee, and Haim Sompolinsky. Separability and geometry of
object manifolds in deep neural networks. Nature communications, 11(1):1–13, 2020.


-----

Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Algorithms for learning kernels based
on centered alignment. The Journal of Machine Learning Research, 13(1):795–828, 2012.

Pedro Domingos. Every model learned by gradient descent is approximately a kernel machine.
_arXiv preprint arXiv:2012.00152, 2020._

Simon Shaolei Du, Wei Hu, and J. Lee. Algorithmic regularization in learning deep homogeneous
models: Layers are automatically balanced. In NeurIPS, 2018.

Ethan Dyer and Guy Gur-Ari. Asymptotics of wide networks from feynman diagrams. In Interna_[tional Conference on Learning Representations, 2020. URL https://openreview.net/](https://openreview.net/forum?id=S1gFvANKDS)_
[forum?id=S1gFvANKDS.](https://openreview.net/forum?id=S1gFvANKDS)

Timo Flesch, Keno Juechems, Tsvetomira Dumbalska, Andrew Saxe, and Christopher Summerfield.
Rich and lazy learning of task representations in brains and neural networks. bioRxiv, 2021.
[doi: 10.1101/2021.04.23.441128. URL https://www.biorxiv.org/content/early/](https://www.biorxiv.org/content/early/2021/04/23/2021.04.23.441128)
[2021/04/23/2021.04.23.441128.](https://www.biorxiv.org/content/early/2021/04/23/2021.04.23.441128)

Stanislav Fort, Gintare Karolina Dziugaite, Mansheej Paul, Sepideh Kharaghani, Daniel M Roy, and
Surya Ganguli. Deep learning versus kernel learning: an empirical study of loss landscape geometry and the time evolution of the neural tangent kernel. In H. Larochelle, M. Ranzato, R. Hadsell,
M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33,
[pp. 5850–5861. Curran Associates, Inc., 2020. URL https://proceedings.neurips.](https://proceedings.neurips.cc/paper/2020/file/405075699f065e43581f27d67bb68478-Paper.pdf)
[cc/paper/2020/file/405075699f065e43581f27d67bb68478-Paper.pdf.](https://proceedings.neurips.cc/paper/2020/file/405075699f065e43581f27d67bb68478-Paper.pdf)

Kenji Fukumizu. Effect of batch learning in multilayer neural networks. Gen, 1(04):1E–03, 1998.

Mario Geiger, Leonardo Petrini, and Matthieu Wyart. Landscape and training regimes in deep
learning. Physics Reports, 924:1–18, 2021. ISSN 0370-1573. doi: https://doi.org/10.1016/j.
[physrep.2021.04.001. URL https://www.sciencedirect.com/science/article/](https://www.sciencedirect.com/science/article/pii/S0370157321001290)
[pii/S0370157321001290. Landscape and training regimes in deep learning.](https://www.sciencedirect.com/science/article/pii/S0370157321001290)

Arthur Jacot, Franck Gabriel, and Cl´ement Hongler. Neural tangent kernel: convergence and generalization in neural networks (invited paper). Proceedings of the 53rd Annual ACM SIGACT
_Symposium on Theory of Computing, 2018._

Arthur Jacot, Franc¸ois Ged, Franck Gabriel, Berfin S¸ims¸ek, and Cl´ement Hongler. Deep linear
networks dynamics: Low-rank biases induced by initialization scale and l2 regularization. arXiv
_preprint arXiv:2106.15933, 2021._

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint
_arXiv:1412.6980, 2014._

Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha SohlDickstein, and Jascha Sohl-Dickstein. Wide neural networks of any depth evolve as linear models
under gradient descent. ArXiv, abs/1902.06720, 2019.

Jaehoon Lee, Samuel S. Schoenholz, Jeffrey Pennington, Ben Adlam, Lechao Xiao, Roman Novak,
and Jascha Sohl-Dickstein. Finite versus infinite neural networks: an empirical study, 2020.

Tengyuan Liang and Alexander Rakhlin. Just interpolate: Kernel ”ridgeless” regression can gener[alize. CoRR, abs/1808.00387, 2018. URL http://arxiv.org/abs/1808.00387.](http://arxiv.org/abs/1808.00387)

Chaoyue Liu, Libin Zhu, and Misha Belkin. On the linearity of large non-linear models: when
and why the tangent kernel is constant. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp.
15954–15964. Curran Associates, Inc., 2020. [URL https://proceedings.neurips.](https://proceedings.neurips.cc/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Paper.pdf)
[cc/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Paper.pdf.](https://proceedings.neurips.cc/paper/2020/file/b7ae8fecf15b8b6c3c69eceae636d203-Paper.pdf)

Bruno Loureiro, C´edric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc M´ezard, and
Lenka Zdeborov´a. Capturing the learning curves of generic features maps for realistic data sets
[with a teacher-student model. CoRR, abs/2102.08127, 2021. URL https://arxiv.org/](https://arxiv.org/abs/2102.08127)
[abs/2102.08127.](https://arxiv.org/abs/2102.08127)


-----

Roman Novak, Lechao Xiao, Jiri Hron, Jaehoon Lee, Alexander A. Alemi, Jascha Sohl-Dickstein,
and Samuel S. Schoenholz. Neural tangents: Fast and easy infinite neural networks in python. In
_[International Conference on Learning Representations, 2020. URL https://openreview.](https://openreview.net/forum?id=SklD9yrFPS)_
[net/forum?id=SklD9yrFPS.](https://openreview.net/forum?id=SklD9yrFPS)

Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian processes for machine learning.
Adaptive computation and machine learning. MIT Press, 2006. ISBN 026218253X.

Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural network. In In International Conference on Learning
_Representations, 2014._

Bernhard Sch¨olkopf and Alexander J. Smola. Learning with kernels : support vector machines, reg_ularization, optimization, and beyond. Adaptive computation and machine learning. MIT Press,_
[2002. URL http://www.worldcat.org/oclc/48970254.](http://www.worldcat.org/oclc/48970254)

Haozhe Shan and Blake Bordelon. Rapid feature evolution accelerates learning in neural networks,
2021.

Dominik St¨oger and Mahdi Soltanolkotabi. Small random initialization is akin to spectral learning:
Optimization and generalization guarantees for overparameterized low-rank matrix reconstruction. arXiv preprint arXiv:2106.15013, 2021.

Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan,
Daniel Soudry, and Nathan Srebro. Kernel and rich regimes in overparametrized models. In Jacob
Abernethy and Shivani Agarwal (eds.), Proceedings of Thirty Third Conference on Learning The_ory, volume 125 of Proceedings of Machine Learning Research, pp. 3635–3673. PMLR, 09–12_
[Jul 2020. URL https://proceedings.mlr.press/v125/woodworth20a.html.](https://proceedings.mlr.press/v125/woodworth20a.html)

Chulhee Yun, Shankar Krishnan, and Hossein Mobahi. A unifying view on implicit bias in training
linear neural networks. arXiv preprint arXiv:2010.02501, 2020.

Sergey Zagoruyko and Nikos Komodakis. Wide residual networks, 2017.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding
deep learning requires rethinking generalization. In 5th International Conference on Learning
_Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings._
[OpenReview.net, 2017. URL https://openreview.net/forum?id=Sy8gdB9xx.](https://openreview.net/forum?id=Sy8gdB9xx)


-----

## Appendix

A DERIVATION OF EQUATION 2

A.1 TRAINING POINT PREDICTIONS WITH TIME VARYING KERNEL

Given a training set of P data points {(x[µ], y[µ])}µ[P]=1[, the dynamics of the network training er-]
rors [∆t][µ] := f (x[µ], t) − _y[µ]_ close in terms of a time-varying neural tangent kernel [Kt]µν =
_K(x[µ], x[ν], t)_

_d_

(11)
_dt_ **[∆][t][ =][ −][K][t][∆][t][.]**

We introduce the transition matrix Φt ∈ R[P][ ×][P] which has the property that ∆t = Φt∆0 and
**Φ0 = I, we obtain the matrix evolution equation** **Φ[˙]** _t = −KtΦt. This equation can be solved_
formally in terms of the Peano-Baker series (Baake & Schlaegel, 2011; Brockett, 2015)

_t_ _t_ _s1_
**Φt =I −** 0 _ds1Ks1 +_ 0 _ds1Ks1_ 0 _ds2Ks2_ (12)
Z Z Z

_t_ _s1_ _s2_
_−_ 0 _ds1Ks1_ 0 _ds2Ks2_ 0 _ds3Ks3 + ..._ (13)
Z Z Z


which can easily be verified to solve _dt[d]_ **[Φ][(][t][) =][ −][K][(][t][)][Φ][(][t][)][ with initial condition][ Φ][(0) =][ I][. Under]**

_t_
the condition that 0 **_[K][(][t][)][ commutes with][ K][(][t][)][, which is true in the settings of interest in this]_**
paper, specifically the setting discussed in Appendix B, we can simplify the Peano-Baker series into
R
a simple matrix exponential

_t_ _t_ 2 _t_ 3 _t_ _k_
**Φt = I** _dsKs_ + [1] _dsKs_ _dsKs_ _... + [(][−][1)][k]_ _dsKs_ + ...
_−_ 0 2 0 _−_ 6[1] 0 _k!_ 0
Z  Z  Z  Z 

_t_
= exp **_Ksds_** _._ (14)
_−_ 0
 Z 


_t_
Thus, under the condition that Kt commutes with 0 **_[K][s][ds][ we can exactly solve for the training]_**
error dynamics in terms of integrating factors
R

_t_
**∆t = Φt∆0 = exp** _−_ 0 **_Ksds_** **∆0.** (15)
 Z 

We expect this formula to hold approximately whenever the eigenvectors of K are approximately
_t_
equal to the eigenvectors of 0 **_[K][s][ds][.]_**
R

A.2 TEST POINT PREDICTIONS WITH TIME VARYING KERNEL

Given access to the value of the function on training points, one can evaluate the function on test
points. We have that the evolution of the function on a test point f (x) is given by

_d_

(16)
_dt_ _[f][t][(][x][) =][ −][k][t][(][x][)][∆][t][,]_

where [kt(x)][µ] := K(x, x[µ], t). This gives the final value of f to be


_∞_
_f_ (x) := f (x) = f0(x) +

_[∗]_ _∞_ 0
Z

This is exactly equation 2.


_t_
_dt kt(x)[µ]_ exp _η_
_−_ 0
  Z


**_Kt′_** _dt[′]_ (y **_f0)._** (17)
_−_



-----

B KERNEL EVOLUTION IN SCALE ONLY

We consider the model of kernel evolution introduced in Section 2 where the kernel evolves only in
scale for t > τ (ϵ) and is of small overall size for t < τ (ϵ),

_K(x, x[′], t) =_ _ϵK0(x, x′, t)_ _t ≤_ _τ_ (ϵ) (18)
_g(t)K_ (x, x[′]) _t > τ_ (ϵ) _[.]_
 _∞_

This model allows for alignment of the kernel while small in the time window t ∈ (0, τ ), followed
by scale growth only for t ∈ (τ, ∞). The time threshold τ will generally depend on the initial kernel
scale ϵ. For example, in depth L linear MLPs, ϵ _σ[2][L][−][2]_ and τ _σ[−][L][+2]_ with initialization scale
_∼_ _∼_ _t_
_σ as we show in Figure E.3. We will define a differentiable function h(t) =_ _τ_ _[g][(][t][′][)][dt][′][ so that]_
_h[′](t) = g(t), h(τ_ ) = 0, and limt→∞ _h(t) = ∞. This last condition follows fromR_ _g’s continuity_
and the assumption that limt→∞ _g(t) = 1. We will first show that the final neural network has the_
form f (x) = f0(x) + k∞(x) · K∞[−][1][(][y][ −] **_[f][0][) +][ O][(][ϵτ]_** [(][ϵ][))][. First, we need to calculate the errors]
**∆(t) = y −** **_f_** (t) ∈ R[P] made on the P training examples. These satisfy the dynamics

_d_ _ϵK0(t)∆(t)_ _t ≤_ _τ_ (19)

_dt_ **[∆][(][t][) =][ −]** _h[′](t)K_ **∆(t)** _t > τ [,]_
 _∞_

where K0(t), K R[P][ ×][P] are P _P gram matrices; e.g. [K0(t)]µν = K0(x[µ], x[ν], t). The_
_∞_ _∈_ _×_
vector k(x) has entries given by [k(x)]µ = K(x, x[µ]). For t ∈ (0, τ ), the error vector follows the
dynamics
_d_

(20)
_dt_ **[∆][(][t][) =][ −][ϵ][K][0][(][t][)][∆][(][t][)][.]**


Introducing operator norm of a matrix, |Φ|op = max|v|2=1 |Φv|2, we will now bound the operator
norm of the change in the transition matrix Φ(t) introduced in section A.1.
**Lemma 1. Let k0 = maxt** (0,τ ) **_K0(t)_** _op represent the maximum operator norm of K0 achieved_
_∈_ _|_ _|_
_on the interval (0, τ_ ). Let Φ(t) ∈ R[P][ ×][P] _be the transition matrix for the linear dynamics of equation_
_(20) so that_ _dt[d]_ **[Φ][(][t][) =][ −][ϵ][K][0][(][t][)][Φ][(][t][)][ and][ Φ][(0) =][ I][. Then,]**

_|Φ(τ_ ) − **Φ(0)|op < ϵτ** (ϵ)k0. (21)

_Proof. We begin by noting that, due to the triangle inequality,_

_τ_ _τ_

_|Φ(τ_ ) − **Φ(0)|op = ϵ** 0 **_K0(t)Φ(t)dt_** _op_ _≤_ _ϵ_ 0 _|K0(t)Φ(t)|op dt_

Z _τ_ Z _τ_ (22)

_≤_ _ϵ_ 0 _|K0(t)| |Φ(t)|op dt ≤_ _ϵk0_ 0 _|Φ(t)|op dt._
Z Z

We will now establish that |Φ(t)|op ≤ 1. Note that for any vector v ∈ R[P] that

1 _d_

2 [=][ v][⊤][Φ][⊤]Φ[˙] (t)v = _ϵv[⊤]Φ(t)[⊤]K(t)Φ(t)v_ 0, (23)

2 _dt_ _[|][Φ][(][t][)][v][|][2]_ _−_ _≤_


where the final inequality follows from the fact that K(t) is positive semidefinite for all t. Therefore
we have shown that |Φ(t)|op ≤|Φ(0)|op = |I|op = 1 τ . Using this inequality, we find that

_|Φ(τ_ ) − **Φ(0)|op ≤** _ϵk0_ 0 _|Φ(t)|op dt ≤_ _ϵk0τ_ (ϵ). (24)

Z

With the above Lemma 1, we can bound the discrepancy ∆(τ ) and ∆(0), namely


_|∆(τ_ ) − **∆(0)|2 = |(Φ(τ** ) − **Φ(0))∆(0)|2** (25)
**Φ(τ** ) **Φ(0)** _op_ **∆(0)** 2 _ϵk0τ_ (ϵ) **∆(0)** 2.
_≤|_ _−_ _|_ _|_ _|_ _≤_ _|_ _|_

This inequality must therefore hold entry-wise as well, so that

**∆(τ** ) = ∆(0) + O(ϵk0τ (ϵ)). (26)

We will now establish how the training predictions ∆(t) evolve for the second interval t ∈ (τ, ∞).


-----

**Lemma 2. Suppose that from t ∈** (τ, ∞) that ∆(t) obeys the dynamics _dt[d]_ **[∆][(][t][) =][ −][h][′][(][t][)][K][∞][∆][(][t][)]**

_where ∆(τ_ ) is as in equation 26. Then, for all t ∈ (τ, ∞),

**∆(t) = exp (−h(t)K∞) [(y −** **_f0) + O(ϵτ_** (ϵ)k0)] . (27)

_Proof. The differential equation can be solved through eigendecomposition and integrating factors._
Let ∆k(t) represent the k-th component of ∆(t) in the eigenbasis of K∞ which is static for t ∈
(τ, ∞). Let the corresponding eigenvalue of K∞ be λk. The scalar variable ∆k(t) obeys the
dynamics

_d_

(28)
_dt_ [∆][k][(][t][) =][ −][λ][k][h][′][(][t][)∆][k][(][t][)][.]

This can be solved with integrating factors, noting that _dtd_ _e[λ][k][h][(][t][)]∆k(t)_ = 0. This implies that

∆k(t) = e[−][λ][k][h][(][t][)]∆k(τ ). Written as a vector, ∆(t) = exp ( _h(t)K_ ) ∆(τ ). Since by Lemma 1
− _∞_ 
we have ∆(τ ) = ∆(0) + O(ϵk0τ (ϵ)), we obtain the desired result.

We will now combine the results of the previous two lemmas which analyze the evolution of the
network predictions on the training set to give our main silent alignment result, which specifies what
the neural network function predicts for an arbitrary test point x.

**Theorem 1. Let the kernel have dynamics of Equation 18 where g(t) is a continuous, integrable**
_function with limt→∞_ _g(t) = 1. The function learned by the neural network is_

_f_ (x) − _f0(x) = k∞(x) · K∞[−][1][y][ +][ O][ϵ][(][ϵτ]_ [(][ϵ][))][.] (29)

_Proof. Using Lemma 1 and 2, we know the full dynamics for training predictions ∆(t). Using_
**∆(t), we can solve for the final predictor f** (x) by integrating dynamics _f[˙](x, t) = k(x, t) · ∆(t)._

_τ_ _∞_
_f_ (x) − _f0(x) = ϵ_ 0 **_k0(x, t) · ∆(t)dt + k∞(x) ·_** _τ_ _h[′](t) exp (−h(t)K∞) ∆(τ_ )dt (30)
Z Z

We will now bound the first term. Taking _k[˜]0 = maxt∈(0,τ_ ),x∈RD |k0(x, t)|2, we get that
_τ_ _τ_ _τ_

**_k0(x, t)_** **∆(t)** **_k(x)_** **∆(t)** _dt_ _ϵ(1 + ϵτ_ (ϵ)k0) **∆0** **_k0(x, t)_** _dt_

Z0 _·_ _[≤]_ _[ϵ]_ Z0 _|_ _||_ _|_ _≤_ _|_ _|_ Z0 _|_ _|_

_ϵτ_ (ϵ)k[˜]0(1 + ϵτ (ϵ)k0) **∆(0)** = Oϵ(ϵτ (ϵ)).

_[ϵ]_ _≤_ _|_ _|_


We can now integrate the matrix exponential in the second term, using the fact that
_∞_ _∞_

_h[′](t) exp (_ _h(t)K_ ) dt = exp ( _hK_ ) dh = K[−][1] (31)
_τ_ _−_ _∞_ 0 _−_ _∞_ _∞_ _[.]_

Z Z

Using the fact that ∆(τ ) = ∆(0) + O(τ (ϵ)ϵ) from Lemma 1, we arrive at the desired result

_f_ (x) _f0(x) = k_ (x)K[−][1] [+][ O][ϵ][(][ϵτ] [(][ϵ][))][.] (32)
_−_ _∞_ _∞_ **[∆][0]**

We have now established that, given the kernel dynamics in Equation 18, f (x) _f0(x) converges_
_−_
to the kernel regression solution with final NTK as ϵ 0 provided limϵ 0 ϵτ (ϵ) = 0. This is
_→_ _→_
generic in the settings we consider in this paper for networks with small initialization. In this small
initialization setting, f0 is also negligible so that f (x) itself is a kernel regression solution. For
example, in a linear depth L neural network with initial weight scale σ, the initial scale of the kernel
is ϵ ∼ _σ[2][L][−][2]_ while the time to alignment scales as τ ∼ _σ[2][−][L]_ thus ϵτ ∼ _σ[L]_ can be made arbitrarily
small by taking σ 0. Lastly, the initial network outputs f0(x) _σ[L]_ can also be made arbitrarily
_→_ _∼_
small.


-----

C PHASES OF LEARNING AT SMALL INITIALIZATION

C.1 PHASE I: TWO LAYER NETWORK AND KERNEL ALIGNMENT

We now present an analysis distinct from that of the previous subsection to go beyond the first step
of gradient descent. The NTK for the two layer linear network has the form K = x[⊤]Mx[′] with
**_M = W_** _[⊤]W + |a|[2]I. Our goal is to determine the eigendecomposition of M_ . Introduce the
variables q(t) = 2[1] **_[β][⊤][Mβ][ =][ 1]2_** _|a|[2]_ + β[⊤]W _[⊤]W β_ and r(t) = a[⊤]W β. These dynamics form

a closed two dimensional linear system early in training
 


0 1 _q(t)_
= 2ηs + O(σ[3]), _t, σ_ 0
1 0 _r(t)_ _→_
   

= [1] 1 _e[2][ηst]_ + [1] 1

2 [(][q][0][ +][ r][0][)] 1 2 [(][q][0][ −] _[r][0][)]_ 1
  − 


_q(t)_
_r(t)_

_q(t)_
_r(t)_


_dt_


(33)
_e[−][2][ηst]_ + O(σ[3])


The variable q(t) represents the alignment of the NTK with the optimal direction β while r(t)
defines the alignment of the network with the teacher. We see that this alignment increases exponentially with timescale t ∼ _η[−][1]s[−][1]. While the above equations hold for early time and small_
initialization for any initial condition q0, r0, we can further estimate these initial values under
random initialization provided the input dimension is large. We stress that this limit is not necessary for the silent alignment, but allows for a nice simplification. For Gaussian initialization
_ai_ (0, σ[2]/N ), Wij (0, σ[2]/D) with large D, we have
_∼N_ _∼N_

_⟨q0⟩_ = _[σ]2[2]_ 1 + _[N]D_ _, ⟨r0⟩_ = 0, _r0[2]_ = _[σ]D [4]_ _[.]_ (34)

 

_qIn the large = q0 cosh(2 Dηst limit, we have with high probability). Note that this gives the quantities q q0 ≫(t) =r0 and thus[1]2_ **_[β][⊤][M]_** [(][t] r[)][β]([, r]t) =[(][t][) =] q0[ β] sinh(2[⊤][W][ ⊤]ηst[a][ early]) and

in training. Now consider a unit vector v which is orthogonal to the solution β[⊤]v = 0. We find that
the projection of M along this direction evolves dynamically as:

_d_

_dt_ **_[v][⊤][M]_** [(][t][)][v][ = 2][v][⊤] []βa[⊤]W + β[⊤]W _[⊤]aI_ **_v_** (35)

= 2r(t). 


We can conclude that v[⊤]Mv is equal to q(t) up to an additive initialization constant. We see that
this is evolving half as quickly as β[⊤]Mβ = 2q(t). Since v[⊤]M0v _O(σ[2]) is small compared to_
_∼_
the exponentially growing M (t), the only matrix that satisfies these two conditions must necessarily
take the form

**_M_** (t) = q0 cosh(2ηst) **_ββ[⊤]_** + I + M0. (36)

The first term, which is growing exponentially in t will eventually overwhelm the randomly initial- 
ized kernel M0, which is O(σ[2]).

C.2 PHASE I: ERROR IN THE LEADING ORDER APPROXIMATION

In solving the equations of the previous section, we truncated the full gradient descent equations at
order σ[3]. It is important to confirm that the error generated by this truncation remains bounded. We
will argue by self-consistency. The full equations are

_dtd_ **_[a][ =][ W]_** _sβ −_ **_W_** _[⊤]a_ _,_ _dtd_ **_[W][ =][ a]_** _sβ −_ **_W_** _[⊤]a_ _⊤_ _._ (37)

One can use these equations to solve for the dynamics of the  _r, q_ variables: 


_d_

_dt_ _[q][(][t][) = 2][sr][ −]_ **_[β][⊤]_** []W _[⊤]aa[⊤]W + a[⊤]W W_ _[⊤]a_ **_β = 2sr −_** [2r[2] +



(a[⊤]W vi)[2]] (38)
**_vXi⊥β_**


_d_

(39)
_dt_ _[r][(][t][) = 2][sq][ −]_ **_[a][⊤][[][W W][ ⊤]_** [+][ aa][⊤][]][W β][ = 2][sq][ −] [2][|][a][|][2][r][ +][ a][⊤][[][aa][⊤] _[−]_ **_[W W][ ⊤][]][W][ β]_**


-----

The second equality in equation 38 comes from inserting a complete basis of states including β and
**_vi_** **_β into the last term of the left-hand side. The second equality in equation 39 comes from_**
writing ⊥ **_W W_** _[⊤]_ + aa[⊤] = 2aa[⊤] + (W W _[⊤]_ _−_ **_aa[⊤]). Note that the final term in brackets on the_**
right hand side is a conserved quantity for linear networks, and so is always of order O(σ[2]).

Assuming the solutions for q, r are valid to order σ[2], we get that _dt[d]_ **_[a][⊤][W v][i][ =][ O][(][σ][4][)][.]_**

_d_

_dt_ **_[a][⊤][W v][i][ = (][β][ −]_** **_β[ˆ])[⊤]W_** _[⊤]W vi + |a|[2](β −_ **_β[ˆ])[⊤]vi_** (40)

Further noting that because of the conservation law aa[⊤] _−_ **_W W_** _[⊤]_ = O(σ[2]) is also constant in
time. This gives us that
_|a[⊤][aa[⊤]_ _−_ **_W W_** _[⊤]]W β| ≤_ _σ[2]|a[⊤]W β| = σ[2]r._ (41)
We now note that r, |a|[2] both grow as a (σ, s-independent) constant times times σ[2]e[2][st]. The correction to the dynamics of both equations is then bounded by a constant times σ[4]e[4][st]. This will be less
than σ[2] as long as t satisfies

_s_

_t_ _._ (42)
_≪_ 4[1]s [log] _σ[2]_

For σ[2] _≪_ _s, the alignment time t = 1/s falls within this range and we are guaranteed alignment to_ 
the Ganguli-Saxe configuration.

The error of the full solution at time t can be bounded by the integral of this error bound from 0 to
_t, namely a constant times σ[4]/s. As long as s ≫_ _σ[4], we are guaranteed that the error of the kernel_
is O(σ[2]) as given in equation 7.

C.3 PHASE I: TWO LAYER ANALYSIS WITH UNWHITENED DATA

We now study the same linearization around the initial fixed point used in the main text but for the
two layer network with unwhitened data. In this case,
_d_

(43)
_dt_ **_[a][ ∼]_** _[s][W][ Σ][β][,]_

_d_

(44)
_dt_ **_[W][ ∼]_** _[s][aβ][⊤][Σ][.]_

which holds asymptotically as t/ log(σ[−][1]) → 0. We introduce the following variables which form
a closed linear dynamical system
_r1(t) = β[⊤]W_ **_a,_**

_[⊤]_

_r2(t) =_ **_a_** _,_
_|_ _|[2]_

_r3(t) = β[⊤]W_ **_W Σβ,_** (45)

_[⊤]_

_r4(t) = β[⊤]ΣW_ **_a,_**

_[⊤]_

_r5(t) = β[⊤]ΣW_ **_W Σβ._**

_[⊤]_

Introduce the constants a = β[⊤]Σβ, b = β[⊤]Σ[2]β. Using the weight dynamics, it is straightforward
to show that

0 _a_ 1 0 0
0 0 0 2 0

**_r˙(t) ∼_** _s_ b 0 0 _a_ 0 (46)

0 _b_ 0 0 1

 
0 0 0 2b 0
 
  **_[r][(][t][)][, t][ ≪]_** [log(][σ][−][2][)][.]

This matrix has eigenvalues λ ∈{0, −√b, _√b, −2√b, 2√b}. Since there are only two positive_

eigenvalues _√b, 2√b, it suffices to consider evolution along those two eigendirections, where the_

kernel and neural network function will be amplified. Evolution along these direcions give


**_r(t)_** _c1e[s]_
_∼_


_bt_





 [+][ c][2][e][2][s]


_bt_


(47)


-----

t = 0

2

1

(a) Initialization

|t t 1|Col2|Col3|
|---|---|---|
||||


1


t

2

1


(b) Transition Time


(c) Min ℓ2 norm solution


Figure C.1: Kernel evolution on anisotropic data consists of two alignment phases. (a) At initialization the level curves of β[⊤]M _[−][1]β exhibit spherical symmetry. (b) After the initial phase I_
alignment, the matrix M exhibits a spike in the Σβ direction. (c) At long times, the network function and kernel’s spiked direction need to converge to the minimum ℓ2 norm solution as we explain
in G.1. This requires realignment of the kernel at late times, eliminating the preconditions for the
silent alignment effect.

where c1, c2 are constants determined by intialization. At large time, the large eigenvalue mode
_λ = 2√b will dominate. Decomposing W = W0 + a(t) [v1(t)β + v2(t)Σβ][⊤]_ we find that the

only self consistent solution is v1(t) = 0, v2(t) = b[−][1][/][2]. This implies that the kernel evolution will
take the form
_K(x, x[′], t) ∼_ _K(x, x[′], 0) + |a(t)|[2]x[⊤]Mx[′],_

1 (48)

**_M =_** **Σββ[⊤]Σ + I** _._

" **_β[⊤]Σ[2]β_** #

We see that the kernel evolves along the directions Σβ early in training for unwhitened data. We

p

visualize the two stages of learning for unwhitened data in Figure C.1.


C.4 PHASE II: WHITENED DATA

Consider a two layer network f = a[⊤]W x where balance has been achieved W = u(t)aβˆ _[⊤]_ and
**_a(t) = u(t)aˆ. Once this balance condition is stable for fixed ˆa, we can calculate the time derivative_**
of u(t)

_d_

_u(t)aˆ = u(t)_ _s_ _u(t)[2][]_ **_aˆ._** (49)
_dt_ **_[a][(][t][) = ˙]_** _−_

Letting c(t) = u(t)[2], we find that ˙c(t) = 2u(t)[2][ ]s −u(t)[2][] = 2c(t) [s − _c(t)], which is the twose[2][st]_
layer dynamics derived in Saxe et al. (2014). This dynamics has solution c(t) = _e[2][st]−1+s/c0_ [.]


C.5 SOLUTIONS TO THE FULL TRAINING DYNAMICS OF LINEAR NETWORKS AT SMALL
INITIALIZATION

By combining the analyses of the subsection C.1 with the exact solutions discussed in C.4 we can
match both solutions to obtain formulas for r(t) and q(t) for the entire network’s training path that
are exact up to O(σ[2]) corrections. Up to O(σ[2]) we then have that

2 sinh(2st)
_r(t) = s_ _,_ (50)

(e[2][st] 1) + 2s/q0
_−_

2 cosh(2st)
_q(t) = s_ _._ (51)

(e[2][st] 1) + 2s/q0
_−_

This yields that the initialization constant q0/2 plays the effective role of c0 in the Ganguli-Saxe
solution for phase II. Equation 34 yields the expected value of this initialization constant. We have


-----

1.00.8 = 10= 10= 10= 10 5432 12001000 Experiment1 ln 2 1.00.8 L|AlignmentKt (t)|

Lt0.60.4 t1/2 800600 0.60.4

0.2 400 Kernel and Loss0.2

0.0 0 500 1000t 1500 2000 10 5 10 4 2 10 3 10 2 0.0 0 500 1000t 1500 2000


(a) Loss Dynamics


(b) Time to Lt = 0.5


(c) Alignment σ = 10[−][4].


Figure C.2: Initialization scale controls the time spent in phase I, where the network escapes the
saddle point near W, a = 0 and the kernel aligns to the task. (a) The loss curves for two-layer
linear networks with small initialization follow sigmoidal trajectories as in Saxe et al. (2014) which
transition from their maximum to minimum at a time which decreases with initialization scale.
Theory is shown in black dashed lines. (b) Verification of the Phase I time t1/2, measured as the
time for the loss to reach one half its original value. This scales logarithmically with σ[2]. (c) The
alignment of the kernel eigenfunctions happens before the loss appreciably decreases for σ = 10[−][4],
evidenced by the kernel alignment curve. The analytically obtained maximum alignment value is
overlayed in dashed green.

empirically verified that these exact equations hold to high accuracy across a variety network sizes,
initialization scales, and whitened datasets. We illustrate some of these in figure C.3.

Exact vs Analytic Solutions Exact vs Analytic Solutions

10[1]

0 200 400 600 800 0 20 40 60 80 100

|Exact vs Analytic Solutions|Col2|Col3|
|---|---|---|
|q(t) empirical q(t) analytic r(t) empirical r(t) analytic|||
|||q(t) empirical q(t) analytic r(t) empirical r(t) analytic|
|0|20 40 60 80 10 t||


Exact vs Analytic Solutions

10[0]

10 1

10 2

q(t) empirical

10 3 q(t) analytic

r(t) empirical

10 4 r(t) analytic

0 200 400 600 800

t


(a) Synthetic


(b) Whitened MNIST


Figure C.3: Overlay of empirical and exact solutions for q(t), r(t) in two layer linear feedforward
networks for synthetic and two-class MNIST whitened datasets. (a) We take D = 25, N = 10. (b)
We take D = 784, N = 100.

D BALANCING OF WEIGHTS IN DEEP LINEAR NETWORKS

The balance condition discussed in the main text holds for deep linear trained with any loss function
of the form L = _µ_ _[ℓ][(][f][ µ][, y][µ][)][ (not just MSE) since, as was shown by Arora et al. (2018); Du et al.]_

(2018)

[P]

1 _d_ _∂ℓ_ _∂f_ _[µ]_ _⊤[#]_

**_W_** _[ℓ]W_ _[ℓ][⊤][]_ =

_η_ _dt_ _∂f_ _[µ]_ _∂W_ _[ℓ]_ **_[W][ ℓ][⊤]_** [+][ W][ ℓ] _∂[∂f]W[ µ][ℓ]_

_µ_ "

 X

= _∂ℓ_ _∂f_ _[µ]_ _⊤W_ _[ℓ][+1]_ + W _[ℓ][+1][⊤]_ _∂f_ _[µ]_ (52)

_∂f_ _[µ]_ _∂W_ _[ℓ][+1]_ _∂W_ _[ℓ][+1]_

" #

X


_d_

**_W_** _[ℓ][+1][⊤]W_ _[ℓ][+1][]_
_dt_



= [1]


-----

The second line follows from the first since the following quantities are identical

_∂f_ _[µ]_

**_x[µ][⊤]W_** [1][⊤]...W _[ℓ][−][1][⊤][]_ **_W_** _[ℓ][⊤],_ (53)
_∂W_ _[ℓ]_ **_[W][ ℓ][⊤]_** [=][ W][ ℓ][+1][⊤][...][W][ L][−][1][⊤][w][L][ ]

_∂f_ _[µ]_
**_W_** _[ℓ][+1][⊤]_ **_W_** _[ℓ][⊤]...w[L][]_ **_x[µ][⊤]W_** [1][⊤]...W _[ℓ][⊤]._ (54)

_∂W_ _[ℓ][+1][ =][ W][ ℓ][+1][⊤]_ []

By inspection these two quantities are equal. Thus we have, for any loss function, a deep linear
network has the following conservation laws

_d_

**_W_** _[ℓ]W_ _[ℓ][⊤]_ **_W_** _[ℓ][+1][⊤]W_ _[ℓ][+1][]_ = 0. (55)
_dt_ _−_


We show in the next section that this balancing condition is very helpful in identifying the time
evolution of the neural tangent kernel. In the case where the network has a single output, we can
inductively prove that each layer’s weight matrix is W _[ℓ]_ = u(t)rℓ+1(t)rℓ(t)[⊤]. We will assume this
formula is true for layer ℓ + 1 and prove it must hold for layer ℓ since

**_W_** _[ℓ]W_ _[ℓ][⊤]_ = W _[ℓ][+1][⊤]W_ _[ℓ][+1]_ = u(t)[2]rℓ+1(t)rℓ+1(t)[⊤]. (56)

This implies that W _[ℓ]_ is rank one with right singular vector equal to rℓ+1(t). Thus, the decomposition for each layer the form W _[ℓ]_ = u(t)rℓ+1(t)rℓ(t)[⊤] for some unit vector rℓ(t). Similar analysis
can be performed for the multi-class setting.

E NTK FORMULA FOR DEEP LINEAR NETWORKS

The neural tangent kernel for a linear network f (x) = w[L][⊤]W _[L][−][1]...W_ [1]x is defined as an innerproduct over all gradients


_∂f_ (x)

_∂W_ _[ℓ]_ _[, ∂f]∂W[(][x][ℓ][′][)]_



_K(x, x[′]) =_


_ℓ=1_


(57)


**_x · x[′]_** +


**_W_** _[ℓ][′]_
_ℓ[′]>ℓ_

Y


**_x[⊤]W_** [1][⊤]...W _[ℓ][−][1][⊤]W_ _[ℓ][−][1]...W_ [1]x[′].


**_W_** _[ℓ]_

_ℓ=2_

Y


_ℓ=2_


Under the balanced assumption that W _[ℓ]_ = u(t)rℓ+1(t)rℓ(t)[⊤] + O(σ), expanding the kernel to
leading order in σ yields the following form:

_K(x, x[′]) = u(t)[2][L][−][2]x[⊤]_ [](L 1)I + r1(t)r1(t)[⊤][] **_x[′]_** + O(σ), (58)
_−_


1.0


|1.0|Col2|Col3|
|---|---|---|
|1.0 0.8 0.6 Loss 0.4 0.2 0.0|L = 2 L = 3 L = 4 L = 5||
||L = 2 L = 3 L = 4 L = 5||

|Alignment 0.6 0.4 Kernel 0.2|L = 2 L = 3 L = 4 L = 5|
|---|---|

|1.0 0.8 Alignment 0.6 0.4 W1 0.2|L = 2 L = 3 L = 4 L = 5|
|---|---|


10[0] 10[1] 10[2] 10[3]


10[0] 10[1] 10[2] 10[3]


10[0] 10[1] 10[2] 10[3]


L = 2
L = 3
L = 4
L = 5


L = 2
L = 3
L = 4
L = 5


L = 2
L = 3
L = 4
L = 5


Figure E.1: The depth dependence of loss dynamics, kernel alignment and the alignment of first layer
weights in a linear network on synthetic whitened data in D = 30 dimensions. (a) The loss reaches
half its initial value after t ∼ _σ[−][L][+2]_ steps for L > 2. The decay rate of the loss becomes sharper
with depth. (b) Final kernel alignment increases monotonically with depth L and approaches 1 as
_L →∞. (c) The alignment of the first layer weights W1 with the optimal direction β approaches 1_
for all models.


-----

|1.0 0.8 0.6 Loss 0.4 0.2 0.0|L = 2 L = 3 L = 4 L = 5|
|---|---|
||L = 3 L = 4 L = 5|

|0.08 Alignment 0.06 0.04 0.02|Col2|Col3|
|---|---|---|
|0.00|||


10[0] 10[1] 10[2] 10[3]

L = 2
L = 3
L = 4
L = 5

t

(a) Loss


10[2] 10

t

(b) Alignment


0.25 K0

||/|fNN 0.200.15 K
fNN

0.10

|fNTK 0.05

0.00

1.5 2.0 2.5ReLU MLP Depth3.0 3.5 4.0 4.5 5.0 5.5


(c) Test Comparison


Figure E.2: ReLU networks across depths trained on two-class whitened CIFAR. The hidden widths
are all of size 100. We use 3k train points and 7k test points. (a) The loss exhibits the same scaling as
_σ[−][L][+2]_ as in the linear setting. (b) Deep networks with nonlinearities are seen to undergo the silent
alignment effect early on in training. The dashed lines indicate when the kernel has grown to 10%
of its final value. (c) The trained network outputs on test data match closely the kernel regression
with the final learned kernel, but do not match regression the initial kernel.

E.1 DEEP LINEAR NETWORK DYNAMICS UNDER BALANCE


In this section, we will consider the dynamics of the variable u once the balance condition is satisfied.
Let wL = ˆwu(t). Then the dynamics for u(t) under the balancing assumption is


_L−1_

**_W_** _[ℓ]_

_ℓ=1_

Y


_L−1_

(s − _u(t)[L])_ **_W_** _[ℓ]β = u(t)[2][L][−][2](s −_ _u(t)[L]) ˆw,_ (59)

_ℓ=1_

Y


_d_

**_w_** _[d]_
_dt_ **_[w][L][ =][ ˆ]_** _dt_ _[u][(][t][) =]_


which implies the fact ˙u(t) = u(t)[L][−][1](s − _u(t)[L]). Changing variables to c(t) = u(t)[L]_ we obtain

_c˙(t) = u(t)[L][−][1]u˙_ = u(t)[2][L][−][2](s − _u(t)[L]) = c(t)[2][−][2][/L](s −_ _c(t))._ (60)

When c[L]0 [is initialized to a very small value compared to][ s][ we can]


_d_

_dt_ _[c][(][t][)][ ∼]_ _[c][(][t][)][2][−][2][/L][s]_ =⇒ _c(t)[−][1+2][/L]_ _−_ _c[−]0_ [1+2][/L] = − _[L][ −]L_ [2] _st_

=⇒ _c(t) =_ _c−0h[L]L[−][2]_ _−_ [(][L][ −]L [2)] _st_ _−_ _LL−2i._ (61)
 


This implies a timescale to learn of t ∼ _s[−][1]_ _LL−2_ _[σ][−][L][+2][.]_

We can approximate the timescale for the first layer’s singular vector r1(t) to align to β as well. Let
**_v be a vector orthogonal to β._**

_d_ **_W_** [(1)]β

_|_ _|[2]_ _O(σ[L][−][2])._ (62)
_dt_ **_W_** [(1)]v **_W_** [(1)]v _∼_

_|_ _|[2][ = 2][β][⊤]|[W][ (1)][⊤]|[...][2]_ **_[w][L]_**


This suggests that alignment in a deep network should also occur on a timescale of t ∼ _σ[−][L][+2]._
While there is no strict separation of timescales in terms of the scaling of alignment and learning
with σ, we find that alignment tends to precede a significant drop in the loss as we show in Figure 3.

E.2 FINAL NTK FOR DEEP LINEAR NETWORKS


Independent of the structure of the data, the first vector r1 **_β and the final NTK has the following_**
form: _→_

_K(x, x[′]) = s[2][L][−][2]x[⊤]_ [](L − 1)ββ[⊤] + I **_x[′]_** + O(σ). (63)

This formula is merely a consequence of the balancing condition and convergence to the optimum
_u(t)[L]_ _→_ _s. We provide empirical support that final kernel alignment increases with depth in Figure_
E.3.


-----

|Col1|Col2|
|---|---|


L = 2 L = 4 L = 6

0.4 L = 2

L = 4 0
L = 6 K

0.3

0.2

Kernel Alignment0.1 K

0 500 1000 1500 2000 2500

t


(a) Alignment Dynamics and Depth


(b) Initial and Final NTKs


Figure E.3: The final NTK for a deep linear network aligns with the class specific directions with
strength that depends on network depth L. This experiment is a partially whitened (γ = 0.25
see Section 5) subset of 100 CIFAR-10 images. (a) The dynamics of alignment for different depth
models. (b) The final gram matrix has the form K∞ _∝_ (L−1)yy[⊤]+K0, illustrating why alignment
of the final NTK to class structure increases with depth L.

E.3 FORMULA FOR THE INTERMEDIATE NNGP KERNELS

Let β represent the unit vector pointing in the optimal direction for the scalar output case. To
calculate this, all that needs to be assumed is that the network has converged to its optimum and that
the weights satisfy the balance property so that W _[ℓ]_ = u[∗]rℓ[∗]+1[r]ℓ[∗][. By the convergence assumption]
_u[∗]_ = s[1][/L] and r1[∗] [=][ β][. We stress that this holds for arbitrary input correlation structure][ Σ][, the]
final NNGP kernel for layer ℓ can also be computed. Expanding the weights and collecting terms at
leading order in σ yields:

_Kℓ(x, x[′])_ _s[2][ℓ/L]x[⊤]ββ[⊤]x[′]_ + O(σ). (64)
_∼_

Evaluating on the training set gives Kℓ = **_yy[⊤][][ℓ/L]_** _∈_ R[P][ ×][P] .


F GENERALIZATION ERROR IN TRANSFER LEARNING TASK

The structure of the final kernel can alter the ability of the network to flexibly transfer to new tasks
with a small amount of data. In this section, we examine how learned intermediate representations
compares with the inductive bias of the original isotropic kernel x · x[′]. In particular, we study the
offline generalization performance of kernels of the form

_K(x, x[′]; A) = x[⊤]_ []Aββ[⊤] + I **_x[′]._** (65)


In Section 4 we showed that A could be altered by changing the network depth. Concretely, our
transfer learning problem consists of training a linear probe on one of the intermediate layers of the
network (Alain & Bengio (2016); Cohen et al. (2020)). This would also produce a kernel regression
solution for kernel K(x, x[′], A) with A which depends on the chosen layer and the depth of the
network. For simplicity, we assume that the data are generated according to a simple Gaussian
distribution x ∼N (0, I) and that the target values are generated with a linear function y(x) = w·x.
We decompose the new task vector w = αβ + (1 _α[2])w_ where w **_β = 0. The expected_**

_−_ _⊥_ _⊥_ _·_
generalization error after training with P samples can be computed with methods from the physics

p

of disordered systems (Bordelon et al., 2020; Canatar et al., 2021; Loureiro et al., 2021). For any
_A > 0, the easiest transfer task is w = β (α = 1). If w = β, increasing the alignment A strictly_
decreases the generalization error. This is illustrated in Figure F.1.

F.1 DERIVATION OF LEARNING CURVES

We will discuss the average case generalization error in the transfer learning setting. Prior work has
shown that the generalization performance of kernel regression can be calculated through a kernel


-----

Kernel Alignment Strength Transfer Task Alignment Eg with P = 10

1.0 1.0 1.0 0.8

0.9 0.9

0.75

0.8 0.6

0.8

Eg 0.70.6 A = 0 Eg 0.7 = 0.0 0.5 0.4

0.5 AA = 1 = 5 0.6 = 0.2= 0.5 0.25

0.4 A = 10 0.5 = 0.7 0.2

0.0

0 5 10 15 20 25 0 5 10 15 20 25 0.0 2.5 5.0 7.5 10.0

P P A


(a) α = 0.75


(b) A = 1


(c) Fixed P


Figure F.1: The offline generalization error in a transfer task with the learned linear kernel K =
**_x[⊤]_** []Aβ[⊤]β[⊤] + I **_x[′]_** and ynew = _αβ +_ (1 _α[2])w_ **_x. (a) For a new transfer task which_**

_−_ _⊥_ _·_
is correlated with the learned function h _β, neural networks with large feature learningp_ i _A give lower_
generalization error at small sample sizes. (b) For fixed A = 1, the tasks w which are strongly
correlated with β are easier to learn during transfer. (c) The lowest generalization error in a transfer
learning setup occurs when feature learning strength A and correlation between tasks α are both
large.

eigenvalue problem (Bordelon et al., 2020; Canatar et al., 2021; Loureiro et al., 2021)


_⟨K(x, x[′])φk(x)⟩x = λkφk(x)._ (66)

Once this integral eigenvalue problem is solved for eigenvalues λk and orthonormal eigenfunctions
_φk, the average case generalization error at P training examples is_


_y(x)φk(x)_
_⟨_ _⟩[2]_ _, κ = λ + κ_

(λkP + κ)[2]


_λ[2]k_ (67)

(λkP + κ)[2][ .]


_λk_

_λkP + κ [, γ][ =][ P]_


_Eg =_


1 − _γ_


In our case, we are interested in the generalization performance of the linear kernel

_K(x, x[′]) = x[⊤]_ []ββ[⊤] + I **_x[′]._** (68)

Since K is a linear kernel, its eigenfunctions should be linear functions _φk(x) = φk_ **_x. Assuming_**
that the data distribution has identity covariance, we find _·_
**_x[⊤]_** []Aββ[⊤] + I **_x[′]φ[⊤]k_** **_[x]_** = φ[⊤]k _Aββ[⊤]_ + I **_x[′]_** = λkφ[⊤]k **_[x][′][.]_** (69)

This implies that the φk vectors are eigenvectors of  M . The first eigenvector is **_φ1 = β with_**
eigenvalue λ1 = A + 1. The other D − 1 eigenvectors can be chosen as any frame in the D − 1
dimensional subspace orthogonal to β. Each of these D − 1 eigenvectors has eigenvalue λk = 1.
Using these results, and the fact that w = αφ1 + _√1_ _α[2]w_, we can calculate the expected

_−_ _⊥_
generalization error.


_α[2]_ 1 − _α[2]_

((1 + A)P + κ)[2][ +] (P + κ)[2]


_Eg =_


_g_

1 − _γ_ ((1 + A)P + κ)[2][ +] (P

1 + A
_κ = λ + κ_

(1 + A)P + κ [+][ D]P +[ −] κ[1]




(1 + A)[2] _D_ 1
_, γ = P_ _−_

((1 + A)P + κ)[2][ +] (P + κ)[2]




By the result proven in Canatar et al. (2021), the lowest possible error for fixed A occurs by maximizing the fraction of variance along the large eigenvalue direction, corresponding to α = ±1.

G LINEAR NTKS DURING GD LEARN THE SAME FUNCTION

In the overparameterized setting where D > P, all linear networks discussed in the Section 4
converge to the minimum norm interpolator when the data is whitened. Specifically, letting the
learned neural network function be written as f (x) = **_β[ˆ][⊤]x, and the data matrix X ∈_** R[D][×][P] and


-----

target labels y ∈ R[P] represent the training data. The solution vector β solves the constrained
optimization problem

min **_β[ˆ]_** _, s.t. X_ _[⊤]β[ˆ] = y,_ (70)
**_βˆ_** _||_ _||[2]_

which is the kernel regression solution for the initial kernel K0(x, x[′]) = x **_x[′]._** This is un_·_
surprising due to a symmetry argument: when β0 0, the only privileged point on the affine
space X _[⊤]β = y is the point closest to the origin, which is precisely the solution above. Surpris- ≈_
ingly, the final pseudo-inverse solution sβ also minimizes the RKHS norms for any of the kernels
throughout gradient descent. Up to an overall scale, the kernels throughout evolution take the form
_K(x, x[′]; t) = x[⊤]_ []A(t)ββ[⊤] + I **_x[′]_** (see Section 4.1) which would induce the following kernel
interpolation problems


min **_βˆ[⊤]_** []A(t)ββ[⊤] + I _−1 ˆβ, s.t. X_ _[⊤]β[ˆ] = y._ (71)
**_βˆ_**


The solution to this optimization problem is indeed the kernel regression solution with kernel K(t)
since the learned function takes the form f (x) = _µ_ _[α][µ][K][(][x][,][ x][µ][, t][)][ with][ α][ =][ K][(][t][)][−][1][y][. Using]_

the Sherman-Morrison rule, we show that the solution to each of these problems t ≥ 0 gives the
same result, namely the pseudo-inverse solution. This can be seen from the following

[P]

**_βˆ[⊤]_** []A(t)ββ[⊤] + I _−1 ˆβ =_ **_β[ˆ]_** _A(t)_ **_β_** **_β[ˆ]_** 2 _._ (72)
_|_ _|[2]_ _−_ 1 + A(t) _·_
  

Now, we let **_β[ˆ] = sβ + β_**, where β **_β_** = 0. This is the general decomposition for the set of
_⊥_ _·_ _⊥_
interpolators which have the property X _[⊤]_ [β + β ] = y.
_⊥_

min (73)
**_β⊥_** _[|][β][⊥][|][2][.]_

The solution is merely to set β = 0. Thus the optimal solution is therefore the same for any finite value of A. However, the final RKHS norm of the learned function⊥ **_β[⊤]_** []A(t)ββ[⊤] + I _−1 β_
decreases with time, indicating that the kernel becomes more aligned with the pseudo-inverse direction as A increases. 

G.1 DEEP LINEAR NETWORKS FROM SMALL INITIALIZATION LEARN PSEUDO-INVERSE

In this subsection of the appendix, we will use our theoretical technology for balanced linear networks to demonstrate the universal learned function for any data, not just whitened input, providing an alternative derivation to the result proven in Theorem 7 of Yun et al. (2020). This
analysis is performed in the σ 0 limit, where Wℓ = u(t)rℓ+1(t)rℓ(t)[⊤] as we showed in
_→_
Section D. Under this condition the learned function f (x) = **_β[ˆ] · x is defined through weights_**
**_βˆ = W1[⊤][W][ ⊤]2_** _[...][w][L]_ [=][ u][(][t][)][L][r][1][(][t][)][. We see that the direction of the learned function is controlled]
entirely by r1(t). It suffices to prove that r1(t) span **_x1, ..., xP_** for all t to show that the network
learns the pseudo-inverse solution β[∗] = X(X ∈[⊤]X)[−]{[1]y, where X } _∈_ R[D][×][P] and y ∈ R[P] are the
training data and targets respectively. Note that by gradient descent, we have


_d_

2 _[...][w][L]_
_dt_ **_[W][1][(][t][) =][ W][ ⊤]_**


(yµ − _fµ(t))x[⊤]µ_ [=][ u][(][t][)][L][−][1][r][2][(][t][)]


(yµ − _fµ(t))x[⊤]µ_ _[.]_ (74)


From the balance condition W1(t) = u(t)r2(t)r1(t)[⊤], we also have

_d_

_u(t)r2(t) + u(t) ˙r2(t)] r1(t)[⊤]_ + u(t)r2(t) ˙r1(t)[⊤]. (75)
_dt_ **_[W][1][(][t][) = [ ˙]_**

Equating the two above expressions for _dtd_ **_[W][1][(][t][)][ and taking an inner product with][ r][2][(][t][)][ from the]_**

left gives the following

_u(t) ˙r1(t) = u(t)[L][−][1][ X](yµ_ _fµ(t))x[µ]_ [ ˙u(t) + u(t)r2(t) ˙r2(t)] r1(t). (76)

_µ_ _−_ _−_ _·_


-----

Thus, if r1(t) span **_x1, ..., xP_** then ˙r1(t) span **_x1, ..., xP_** so that the full dynamics
of r1(t) lie in the subspace spanned by the training data. At initialization, we have ∈ _{_ _}_ _∈_ _{_ _}_ **_W˙_** 1(t)
_∝_
**_z(t)_** _µ_ **_[y][µ][x]µ[⊤]_** [+][ O][(][σ][3][)][ so the initial][ r][1] [vector will indeed align with the span of the training data]

in the σ → 0 limit.

By the fact that[P] **_β[ˆ] = u(t)[L]r1(t), the learned linear coefficients_** **_β[ˆ] must also be in span_** **_x1, ..., xP_**
_{_ _}_
so **_β[ˆ] =_** _µ_ _[α][µ][x][µ][. These must also interpolate the data provided][ D][ ≥]_ _[P]_ [, giving the following]

condition
**_βˆ · x[P][ν]_** = _µ_ **_xν · xµαµ = yν =⇒_** **_α = (X_** _[⊤]X)[−][1]y =⇒_ **_β[ˆ] = X(X_** _[⊤]X)[−][1]y._ (77)
X

This is exactly the minimum ℓ2 norm interpolating solution which solves

min **_β[ˆ]_** 2 _[,][ s.t.][,][ X]_ _[⊤]β[ˆ] = y._ (78)
**_βˆ_** _|_ _|[2]_

While anisotropy of the data makes no impact on what function is ultimately learned in the linear
network case, the anisotropy can have a signficant influence on whether the preconditions for silent
alignment are satisfied in a nonlinear network, which can prevent the final function from being a
NTK regressor with final NTK.

H FINAL KERNEL IN MULTI-CLASS NETWORKS

For a network with C output channel, balancing and alignment guarantee that the configuration of
the network is orthogonal and balanced as in the setting of Saxe et al. (2014). One can then integrate
each mode separately to obtain the final kernel as


_uα(t)rℓ[α]+1[(][t][)][r]ℓ[α][(][t][)][⊤]_ _[, K][c,c][′]_ [(][x][,][ x][′][) =][ x][⊤][M][c,c][′] **_[x][′][,]_**
_α=1_

X


**_W_** _[ℓ]_ =


(79)


_L−1_

_ℓ=1_

X


_u[2(]β_ _[ℓ][−][1)](t)r1[β][r]1[β][⊤][,]_


_u[2(]α_ _[L][−][1)](t)r1[α][r]1[α]_ [+]


_u[2(]α_ _[L][−][ℓ][)](t)e[⊤]c_ **_[r]L[α][r]L[α][⊤][e][c][′]_**


_Mc,c[′] = δc,c[′]_


where the Cartesian unit vectorscontributions to the kernel depend on how well the class output channels align with the unit vectors ec ∈ R[C] are one-hot on class output c. This shows that the
**_rL[α][. Further, the singular values][ u][α][ can evolve at different timescales depending on the structure of]_**
the data.

Specifically, for a depth L network, both the the alignment time t[(]α[L][)] and the time to learn a given
singular value sα scale as s[−]α [1][σ][2][−][L][, as shown in appendix E.1. The differences in alignment times]
∆tαβ := t[(]α[L][)] _−_ _t[(]β[L][)]_ for modes sα, sβ therefores scales as ∆t[(]αβ[L][)] [=][ σ][2][−][L][∆][t]αβ[(2)][.]

H.1 FINAL NNGP IN MULTI-OUTPUT CASE

We can also gain intuition about the learned representations in each layer by looking at the NNGP
kernels, which merely take inner-products between layer activations for different inputs. Let β ∈
R[C][×][D] represent the optimal weight matrix which has the property (in the over-parameterized case)
**_βx[µ]_** = y[µ]. At the optimum, the neural network must learn β = W _[L]...W_ [1]. Computing the SVD
of β = _α_ _[β][α][z][α][v]α[⊤]_ [reveals that][ u]α[∗] [=][ β]α[1][/L] and rα[1] [=][ v][α] [and][ r]α[L] [=][ z][α][. Using these facts, it is]
easy to derive the final NNGP kernel for layer ℓ.

[P]

_Kℓ(x, x[′]) = x[⊤]W_ [1][⊤]...W _[ℓ][⊤]W_ _[ℓ]...W_ [1]x[′]

_ℓ/L_ (80)

= x[⊤] (u[∗]α[)][2][ℓ] **_[r]α[1]_** **_[r]α[1][⊤]_** **_x[′]_** = x[⊤] []β[⊤]β **_x′._**

_α_ #

"X 

_ℓ/L_
Evaluating on the training set X ∈ R[D][×][P] gives Kℓ = X _[⊤]_ []β[⊤]β **_X, which interpolates_**
between X _[⊤]X at layer ℓ_ = 0 and Y _[⊤]Y at layer L._ 


-----

Linear Depth 2 Linear Depth 4 Tanh Depth 3

1.0 1.0 1.0

0.8 0.8 0.8

0.6 L 0.6 0.6

|K|

0.4 Kernel Alignment 0.4 0.4

Kernel and Loss0.2 Kernel and Loss0.2 Kernel and Loss0.2

0.0 0.0 0.0

0 50 100 150 200 250 300 0 1000 2000 3000 4000 5000 6000 0 200 400 600 800 1000 1200 1400

t t t


(a) Linear depth 2


(b) Linear depth 4


(c) Tanh depth 3


Figure H.1: Demonstration of a separation between alignment and spectral learning phases across
networks trained on multi-class data. Here we train on whitened MNIST. Each network has σ so that
_σ[L]_ = 10[−][4] for L the depth of the network. (a) Depth two multi-class dynamics are very similar to
single class. The analytically predicted final alignment is in dashed green. (b) For deeper multi-class
networks, each singular values is learned far apart in time, and alignment does not as clearly precede
loss decay. (c) Similar dynamics are obtained for tanh networks. Note for deeper networks there is
a stronger separation of the times to learn each singular value, resulting in the ten separated drops in
the loss.

H.2 MORE REFINED BALANCING ANALYSIS

We can derive corrections to our decomposition of the kernel by including the initial conditions in
our derived conservation laws. In particular, we will consider balancing for large width networks.
Note that the network does not need to be in the lazy regime. The balancing condition is

**_W_** _[ℓ](t)W_ _[ℓ][⊤](t)_ **_W_** _[ℓ][+1][⊤](t)W_ _[ℓ][+1](t) = W0[ℓ][W]0[ ℓ][⊤]_ **_W0[ℓ][+1][⊤]W0[ℓ][+1]._** (81)
_−_ _−_

Note that W0[ℓ]
rameterization. The products of initial matrices are therefore Wishart distributed. For sufficiently[∈] [R][N][ℓ][+1][×][N][ℓ] [has entries with zero mean and variance][ σ][2][/N][ℓ] [in the standard pa-]
large widths, we can approximate the initial weight matrix products with their expectation over the
random initialization


**_W0[ℓ][W]0[ ℓ][⊤]_** _−_ **_W0[ℓ][+1][⊤]W0[ℓ][+1]_** _≈_ _σ[2]_ 1 − _[N]N[ℓ]ℓ[+2]+1_



**_I._** (82)


This concentration becomes more accurate as the widths Nℓ, Nℓ+1 . For the last layer if the
number of classes C = NL is sufficiently large, we also obtain similar concentration for the last →∞
layer W _[L][⊤]W_ _[L]_ _≈_ _σ[2]_ _NNLL−1_ **_[I][. Repeating the backward induction on the conservation law, we find]_**

the following recursively defined singular value decompositions


**_W_** _[ℓ]_ =


_u[ℓ]α[(][t][)][r]ℓ[α]+1[(][t][)][r]ℓ[α][(][t][)][⊤][,]_


_u[ℓ]α[(][t][)][2][ =][ u][ℓ]α[+1](t)[2]_ + σ[2] 1 − _N[N][ℓ]ℓ[+2]+1_



= u[L]α[(][t][)][2][ +][ σ][2][g][ℓ][,]


(83)


_L−1_

_k=ℓ_

X


_Nk+1_

_Nk_


_gℓ_ = L _ℓ_
_−_ _−_


We note that the corrections gℓ vanish if all layers k > ℓ have the same width. Let uα = u[(]α[L][)][(][t][)][.]
The condition for convergence is

2
_u[(]α[ℓ][)]_ = (uα)[2] + σ[2]gℓ = s[2]α[.] (84)

_ℓ_ _ℓ_

Y h i Y  

In the σ[2] 0 limit, we can solve that uα _s[1][/L]_ as before. However, we can now obtain leading
order corrections (in→ _σ[2]) which take the form of the form ∼_


_gσ[2]_

[uα][2] _s[2]α_ _, σ[2]_ 0, g =
_∼_ _[−]_ 2Ls[1]α[/L] _→_


_L_

_gℓ_ = _[L][(][L][ −]_ [1)]

2

_ℓ=1_

X


_Nk+1_

_,_ (85)
_Nk_


_ℓ=1_


_k=ℓ_


-----

which reveals that the size of the correction depends not only on σ but also on the depth and network
widths. Suppose all network widths were equal Nℓ = Nk, then the term g = 0 and there is no
contribution from the first moment of the random weights.

I LAZINESS IN HOMOGENOUS NETWORKS

In this section we recapitulate the argument found in Chizat et al. (2019). The goal is to estimate how
rapidly the gradient features on a test point ∇f (x) change compared to the loss early in training.
Let f ∈ R[P] represent the function outputs on the training set. We will compute the time derivatives
of the loss and the network gradients.

_d_

_dt_ _[∇][θ][f]_ [(][x][)] [=] _dt_ [=] (y[µ] _−_ _f_ _[µ])∇f_ _[µ]_ _[,]_ (86)

_µ_

X

2

_d_ [=] _[∇]dθ[2][f]_ [(][x]=[)][ ·][ d]f[θ] (y _[∇]f[2][f])[(][x].[)][ ·]_ (87)

_dt_ _[L]_ _dt_ _|∇_ _·_ _−_ _|[2]_


Here, | · |op denotes the operator norm of a matrix. We are interested in the ratio of the loss’ time
derivative to the gradient’s time derivative. With an initialization scale of f ∼ _O(σ[L]) we find_

_dt[d]_ _[∇][f]_ [(][x][)] _L_ _|∇[2]f_ (x) · _µ[(][y][µ][ −]_ _[f][ µ][)][∇][f][ µ][||][y][ −]_ **_[f]_** _[|][2]_ _|∇[2]f_ (x) · _µ_ _[y][µ][∇][f][ µ][||][y][|][2]_ _,_

**_f_** _[d]_ [=] **_f_** **_f_** (y **_f_** ) _≈_ **_f_** **_f_** **_y_**
_|∇_ _|_ _dt_ _[L]_ _|∇_ _||∇_ _·_ _−_ _|[2]_ _|∇_ _||∇_ _·_ _|[2]_

[P] [P] (88)

where in the last step we approximated y[µ] _−_ _f_ _[µ]_ _≈_ _y[µ]_ for small initialization scale since y[µ] _∼_ _O(1)_
and f _[µ]_ _∼_ _O(σ[L]). Now we will estimate the scale of each of the terms above. For a homogenous_
model ∇f ∼ _O(σ[L][−][1]) and ∇[2]f ∼_ _O(σ[L][−][2]). Counting powers of σ in numerator and denominator,_
we find that this quantity of interest scales as

_[d]_

_dt_ _[∇][f]_ [(][x][)]

_L_ (89)
**_f_** _[d]_ [=][ O][(][σ][−][L][)][.]
_|∇_ _|_ _dt_ _[L]_

This result indicates that, from small initialization, the gradient NTK features and thus the kernel
itself will evolve much more rapidly than the loss. This effect can be amplified by increasing depth
and decreasing initialization scale.

J RESNET EXPERIMENTAL DETAILS

Below, we provide the alignment and loss dynamics for wide resnet for CIFAR-10 with 100 training
points. Because the loss decreases significantly before the kernel reaches its final alignment value,
the final NTK is not perfectly correlated with the final neural network function. The wide ResNet
model is taken from Novak et al. (2020) and is based on the original architecture of Zagoruyko &
Komodakis (2017) with a widening factor of k = 4 and a single block per ResNet group b = 1,
giving a final network with 8 trainable layers. For both Figure 1 (d) and (e) as well as Figure
J.1 use Adam with a learning rate of η = 10[−][5] and initial weight scale of σ = 0.3 in standard
parameterization for all intermediate blocks. For the first conv layer, we used σ = 6.0. We find
that small initial weight variance in the first layer gives rise to less stable learning and worse kernel
alignment.

Below, in Figure J.2, we provide comprehensive results for different depths which we control by
increasing the number of blocks per group b, corresponding to WRNs with 6b + 1 trainable conv
layers.

J.1 ADAPTIVE OPTIMIZERS AND THE RELEVANT KERNEL

Many adaptive gradient methods compute updates to parameters θj according to


_θ˙j(t) =_ _ηj(t)_ _[∂][L]_ (90)
_−_ _∂θj,t_


-----

1.0 Lt Wide Res-Net on Unwhitened CIFAR

|K(t)| 2 K0

0.8 Alignment K

1

0.6

0

0.4

1

Loss and Alignment0.2 Test Prediction NTK

2

0.0

0 500 1000 1500 2000 2500 2 1 0 1 2

t Test Prediction NN


(a) Dynamics


(b) Predictions


Figure J.1: The dynamics and predictions of a Wide-Resnet with k = 4 and b = 1 on P = 100
unwhited CIFAR-10 images from the first two classes.


1.0 5

b = 1

0.8 b = 2 4

b = 3

Loss0.6 b = 4 3

0.4 2

Kernel Norm

0.2 1

0.0 0

0 1000 2000 3000 0 1000 2000 3000

t t


(a) Training Loss

0.5 |0.8 K0

fNN K

0.4

|/|0.6

0.3 fNN0.4

0.2

Alignment

0.1 |fNTK0.2

0.0

0.0

0 1000 2000 3000 1 2 3 4

t Wide Res-Net Blocks


(c) Alignment Dynamics


(b) Kernel Norm

1 2 3

Wide Res-Net Blocks

(d) Predictor Comparison


Figure J.2: The silent alignment effect is preserved across a large range of depths in WideResNet
trained on whitened CIFAR-10 images. The number of blocks per group b alters the total number
of conv layers (6b + 1 total conv layers). (a) The deeper models train faster with Adam. (b) The
final NTK norm increases with depth. (c) The alignment achieves close to its final value by the time
the kernel norm reaches 10% of its final value (dashed line), indicating successful silent alignment.
(d) The neural network predictions are very close to the predictions of the final NTK but is not
accurately predicted by the initial NTK.

where ηj(t) are time-varying functions which are computed in terms of the history of gradient moments for parameter θj or in terms of its instantaneous gradient. The relevant kernel at time t which
governs instantaneous evolution of network predictions is


_∂f_ (x[′], t)

(91)
_∂θj_


_ηj(t)_ _[∂f]_ [(][x][, t][)]

_∂θj_


_K(x, x[′], t) =_


since _f[˙](x) =_ _µ_ _[K][(][x][,][ x][µ][, t][)(][y][µ][ −]_ _[f]_ [(][x][µ][, t][))][. Though we do not calculate this kernel which is]

relevant to the adaptive learning rate scheme since it is not supported in Neural Tangents API, this
could be a worthy future investigation.

[P]


-----

1.0 k = 1 7

k = 2
k = 3 6

0.8 k = 4

5

Loss0.60.4 43

2

Kernel Norm

0.2

1

0.0 0

0 1000 2000 3000 4000 5000 6000 7000 0 500 1000 1500 2000 2500 3000 3500

t t


(a) Train Loss Dynamics

0.5 1.0 K0

K

0.40.3 ||/|fNN 0.80.6

fNN

0.2 0.4

Alignment

0.1 |fNTK 0.2

0.0

0.0

0 500 1000 1500 2000 2500 3000 3500 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5

t Wide Res-Net Width


(c) Alignment


(b) Kernel Norm

1.0 1.5 2.0 2.5 3.0 3.5

Wide Res-Net Width

(d) Predictor Comparison


Figure J.3: Varying the ResNet widening parameter k also alters the kernel and loss dynamics. (a)
The loss curve for b = 2 WRNs with widening factor k. Wider networks train more quickly. (b)
The kernel norm increases more rapidly for wider networks but changes by a smaller amount. (c)
Alignment reaches close to its asymptote by the time the kernel norm grows to 10% its final value
(dashed). (d) The final kernel is a much better predictor of the NN function than the initial kernel.


-----