
FAQs

Q: If the weight of a conv layer is zero, the gradient will also be zero, and the network will not learn anything. Why does "zero convolution" work?

A: This is not correct. Let us consider a very simple case

$$y = wx + b$$

and we have

$$\partial y/\partial w = x,\quad \partial y/\partial x = w,\quad \partial y/\partial b = 1$$

and if $w=0$ and $x \neq 0$, then

$$\partial y/\partial w \neq 0,\quad \partial y/\partial x = 0,\quad \partial y/\partial b \neq 0$$

which means as long as $x \neq 0$, one gradient descent iteration will make $w$ non-zero. Then

$$\partial y/\partial x \neq 0$$

so the zero convolutions progressively become common conv layers with non-zero weights.
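
The same argument can be checked directly for a conv layer. Below is a minimal PyTorch sketch (not the ControlNet implementation; the shapes, the loss, and the learning rate are arbitrary choices for illustration): the zero-initialized convolution receives a non-zero weight gradient as long as its input is non-zero, and a single update makes its weights non-zero.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1x1 "zero convolution": weights and bias initialized to zero
zero_conv = nn.Conv2d(4, 4, kernel_size=1)
nn.init.zeros_(zero_conv.weight)
nn.init.zeros_(zero_conv.bias)

# non-zero input feature map (requires_grad so we can inspect dL/dx)
x = torch.randn(1, 4, 8, 8, requires_grad=True)
y = zero_conv(x)  # output is all zeros at this point

# any loss with a non-zero gradient w.r.t. y works for the demonstration
loss = (y - torch.ones_like(y)).pow(2).mean()
loss.backward()

print(zero_conv.weight.grad.abs().max())  # > 0, since dy/dw = x != 0
print(zero_conv.bias.grad.abs().max())    # > 0, since dy/db = 1
print(x.grad.abs().max())                 # = 0, since dy/dx = w = 0 for now

# one gradient-descent step makes the weights non-zero,
# so on the next iteration gradients also flow back into x
with torch.no_grad():
    for p in zero_conv.parameters():
        p -= 0.1 * p.grad

print(zero_conv.weight.abs().max())       # > 0 after a single update
```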