CHANNEL_NAME | URL | TITLE | DESCRIPTION | TRANSCRIPTION | SEGMENTS |
---|---|---|---|---|---|
Yannic Kilcher | https://www.youtube.com/watch?v=_8KNb5iqblE | Longformer: The Long-Document Transformer | The Longformer extends the Transformer by introducing sliding window attention and sparse global attention. This allows for the processing of much longer documents than classic models like BERT.
Paper: https://arxiv.org/abs/2004.05150
Code: https://github.com/allenai/longformer
Abstract:
Transformer-based models are unable to process long sequences due to their self-attention operation, which scales quadratically with the sequence length. To address this limitation, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer. Longformer's attention mechanism is a drop-in replacement for the standard self-attention and combines a local windowed attention with a task motivated global attention. Following prior work on long-sequence transformers, we evaluate Longformer on character-level language modeling and achieve state-of-the-art results on text8 and enwik8. In contrast to most prior work, we also pretrain Longformer and finetune it on a variety of downstream tasks. Our pretrained Longformer consistently outperforms RoBERTa on long document tasks and sets new state-of-the-art results on WikiHop and TriviaQA.
Authors: Iz Beltagy, Matthew E. Peters, Arman Cohan
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at Longformer, the Long Document Transformer by Esbalteji, Matthew Peters and Armin Cohen of Alan AI. So the Longformer is a variant of the Transformer as you might have guessed. The Longformer is a Transformer that can deal with long documents. So it's aptly named. So I am going to discuss what differentiates the Longformer from the Transformer. If you don't know what a Transformer is, watch the video on Attention is all you need. I have a video on that. And I would also suggest you watch the video on Bert because a lot of the architecture and training here is based on the on the Bert or variants of Bert. So I'll basically explain what makes the Longformer different such that it gets long documents. So she can be applied to long documents. So what is the problem with the original Transformer? If you have a Transformer model and let's say you're doing an NLP task, which usually is where Transformers are used. And you want to have a paragraph like this one right here, the abstract of the paper, and maybe you want to predict whether the paper gets accepted at a conference or not. Now the classic Transformers, they have a limit, a very harsh limit on the amount of tokens that they can look at at the same time. So what you would do in a classic Transformer is you couldn't process this entire thing. Let's say you would divide it in chunks. You'd say, okay, here's my first chunk from here to here, my second chunk from here to here, and then here to here, and so on. So you go through the documents, split it up in chunks, process each of the chunks individually, and then maybe aggregate the predictions. But of course the drawback is that the model cannot make specific connections between let's say some word here, like operation, and somewhere down here, like language. It cannot connect the two on a neural level, at least not in the classic Transformer architectures. Now there are ways to try to alleviate this, but classically, if you split up your documents into individual samples, they become independent, you cannot do attention. This attention mechanism cannot operate over across the boundaries of these chunks. So the long former, the goal is to actually just be able to put this entire document here into the model at the same time. So let's look a bit closer into this in a classic Transformer model. What you'll have is you'll have layers of what is called attention mechanism. I'm going to draw six units here, and the units are actually the input sequence. So in a Transformer, other than like a classic neural network, you don't actually have numbers of units in the in the layers, but you can input as many as long sequences as you want until your memory limit is reached basically. So these units, they expose something called keys on the lower layer, and these are vectors that point somewhere, and the upper layer will produce what are called queries. And again, I invite you to look at the attention is all you need, video, if you want more explanation. And basically the keys and queries, they decide where information gets routed to. Right? So the routing of information is what makes the Transformer, the Transformer. So for example, this here is probably going to be routed to this here. So the information is routed like this. And then this here is going to be routed like this. You see the routing is according to the dot product of the of the keys and queries. Right? 
So in essence, if you have an input sequence tokens, and you usually transform in a Transformer, you transform the things into same length sequences. That has to do a lot also with how you want to pre-train things and so on. So we're not really going to change that part. If you have n input sequence and n tokens on the next layer, and everything can attend to everything. So all the inner products are computed, right? Everything is connected to everything. That means that you're going to end up with an of n squared memory requirement, because you have n squared connections. The way to alleviate this is much, much like you would alleviate this in a classic neural network. So in a classic neural network, imagine you have this MLP, a multi-layer perceptron, or what you usually known as a fully connected layer. Right? So here I have the same thing, but it's not a Transformer. It's a classic neural network fully connected. So I have D units right here and D units in this first hidden layer. And I'll have a weight matrix in here, right? And the weight matrix means everything is connected to everything, right? Everything connects to everything else. Again, my memory requirement here is D squared. Now, how do we deal with this in a classic neural network? We go to what is called a convolutional neural network. At least that's one of the one of the methods. So let's make this again, but let's now make this a convolutional neural network. What we'll have is we'll have a convolutional kernel. In this case, it's just of length three, right? So we just have three units here and they will do the same fully connected pattern, but only over these three units right here. And then we slide the kernel over, right? Now it's in this position. It's still the same three units. But now these three things are connected to these three things that they're now over, right? And so you keep sliding this over across the lower layer until you're finally at the end here. And now you've reduced the memory consumption from D squared to just D times, and if this is usually the kernel size is called K to D times K. And K, you can keep pretty much constant. So that's all of D, right? The same goes for the long former. So in the long former, the idea is that you have a so-called sliding window attention. It's exactly the same as it is in the convolution, except that you don't have these hidden units here, but these are actually the parts of the input sequence. And instead of the weight matrix here, you have the attention mechanism over the keys, queries, and values. But the idea is similar. So you can basically say this is a sort of a convolution. And we've already had this in the video about axial attention a bit. Now, of course, this is is you trade off memory for performance because before, right, before I'm going to draw, let's draw it on top of this fully connected layer before all the units could attend to all the units, right? And now the unit can only attend to its immediate neighborhood, right? This green unit here can only attend to itself in the lower layer and its immediate neighbors if the kernel size is three. But consider what happens in the next layer. So in the next layer, I have for example, this unit right here, this is same unit, right? On the next layer, it can attend to these two and itself in the lower layer. But these two themselves can attend to all of these, right? So that the one on the right can attend to one more. So in the first layer, this particular unit had information from these three units. 
But in the second layer, the same unit has now information across these five, right? And this is kind of this cone of attention. It gets bigger and bigger as you go through the layers. So you lose the information to incorporate wide ranges of information in a single layer, but you regain it through depth, right? The deeper you go, the more a single unit gets information, right? This unit gets information from this unit over here through the layers through the layers. It can't watch the unit right here in this layer. That's not possible. But it gets the information through the layers. Of course, there's still a trade off like a fully connected layer could just do this in one step. And then in the next layer, it could do it again, right? It can do much more complex computation. But if you believe that the most important information is actually in the neighborhoods of the individual tokens, which is conceivable in something like the convolutional neural network, you know that, you know, in an image, usually you have localized information, right? If there's a cat here, then it knows in the eyes of the cat, they're pretty close together. So in order to recognize it's a cat, you won't mostly want local information, more and more local information. So in an image that makes sense, and in a text, it also makes sense to a degree in that usually words close together in a sentence. They are important for each other, right? But the power of the transformer was initially that it could attend to everything in a sentence, right? So for example, if you have again the paragraph here, the power of the transformer, at least that was said, is the fact that this piece of text here could make a connection to this piece of text here. And therefore the understanding of the entire paragraph could be reliant on this connection being made, which a local model can't do. But if you go through depth that you might be able to recover that. So the long former is basically what the convolutional neural network does for MLPs. It does it for transformers, right? So instead of n by n, giving you n squared, now you go into into this way where you have, so if you do the same for the transformer, you go to oh n times, let's call it W and W being your window size in this case. They have an illustration of this right here. So in a original transformer, this is an attention matrix. So here you have your n units in sequence and drawn in is which unit can attend to which other unit in a given layer. So you'll see this particular unit I here can attend of course to itself, right? Can attend to unit I. But it can also attend to this unit or to this unit or to this unit to any unit, right? And that's what gives you this n squared attention because any unit can attend to any unit. Now in this sliding window attention pattern, and this is one of the core components of the long former, you see that the I I f unit here right here can attend to itself, right? But also to this and to this, this, but no more, it can only attend to the I f unit or to I minus W to I plus W, right? And this here is a window of size W. This is this sliding window. So given unit can only attend to itself or its neighbors in one layer, right? And this is exactly what a convolution is like if you see, if you see this pattern, this is a this is a convolutional pattern. Now the second core component is they expand on this idea in that they make they create these dilated sliding windows. Now you see you already know what a sliding window is. 
Now they're saying, well, if you if you have this sliding window, it might take quite a number of layers in order to you know get your attention of the entire sequence incorporated. We saw before it took like three layers to get halfway through this sequence of what was it? Like six tokens and it took us like three layers. And with so basically if you go if you go one layer up, right? One layer up you gain one more context window in each direction, right? So it's not a you'd have to go very very deep in order to incorporate information from these very long sequences. And the sliding the dilated sliding window helps this where they say, well technically now any any any sequence here. So again, if we have this sequence and this is the next layer, actually, let's just draw. So this unit right here, it will be able to attend this and this, but not this and not this, but it will also be able to attend this and this, but not this and not this, sorry, not this. So it'll skip one. So right, these attention patterns, they will always kind of skip skip one. And the idea is that now you have a vastly greater window of attention, right? Your window size is now way bigger. That means you can incorporate information way faster across the layers, like global information. But of course, now they they're kind of arguing against each other. In when they do this sliding window, they say, well, we pose that mostly local information is important for NLP, right? The words right around the word are important. And now if they say this here, they basically say, well, it's not so important that we miss this word right here, which is right next to the word that they are attending from, which is counter counter to what they just said that probably the most important information is around the word. They they do get around this by saying, well, if we have different layers and a transformer and in the lower layers, we'll use this sliding window fully local. And in the higher layers, we'll use this dilated window. And therefore, in the in the lower layers, we postulate that local information is actually what's needed to understand local features. And then in the higher layers, we want more global information, because it will incorporate features from the from the local information of the lower layers. All right, I can I can get the argumentation, but I feel that's just something they've thrown in there to make it work better after they tried it out. And the the last idea here in the longformer is what they call global attention. And these global attention is sparse. What it means is that there are some special units here. So in this, this, this, this, and this unit, and these special units, as you can see from the attention pattern, these are these can actually attend to everything. So this unit can attend, for example, to this one or to this one or to anything, these can attend to anything and any unit can attend to those, right? Any unit can attend to the the first unit right here. Right? So these are your special tokens, your special units, and they have global attention. And the reason for this particularly is that sometimes this is needed and this is an engineering choice. Right? The example I can give is let's say you have a question answering task in a question answering task what you usually have is a question and a paragraph. And let's say the task here is to answer yes or no. Is the question to the question might be a statement, right? I don't know. King James was king of England from 1120 to 1140. 
And then the paragraph will be the Wikipedia entry for King James. And the question is yes or no is the question true or not? Is the statement made true or not? How you would feed this to a bird model to a transformer is you concatenate these two things question or statement and paragraph. Right? These are the tokens right here. And then you would separate them using a special token called the separator token. This is just to inform the model that here is where the first thing stops and the next thing starts. And then at the beginning you would put a special token called the CLS token. Now usually what you do is you send these things through your transformer. And now in the last layer, right, you end up as we've seen before because you always transform a sequence into a sequence you end up with a sequence again. But you just want a single thing. You just want yes or no. So you designate you say this particular unit here that corresponds to the CLS token. That's what I'm going to throw into a logistic regression. And that's what will give me my yes or no answer. And that's how you train it. Right? So you don't want to single out any of these any of these as like special. So you simply already include a special token at the beginning that then you take the classification from right. It's pretty smart. But also you say, ah this is such a special token. I want that to be able to attend to anything right even though for example this unit right here. It can only attend to its neighbors right. It has this cone thing and this unit right here has this cone thing. This unit right here can always attend to anything at each of the layers right. It can attend to anything and anything can attend to it. So it can get information from anywhere routed to it in each of the layers and it can send information to any of the other units. And this is an engineering choice. So at the beginning you as an engineer have to say which one of these tokens are special tokens. And for these tokens you'll actually then do full attention right. It can attend to and from anything. So what are our new memory requirements. Well this will give us is first of all we have n tokens right. And here w is our window size. So we have n times w memory. But then we also add the global attention. So plus the number of special tokens times right. So if there's a special token it will have n times two memory requirements because it can attend from and to in each layer. So and this entire thing sorry with the plus this entire thing times the number of layers. So this is your new attention memory requirements and as you can see here n plus n. So this is going to be order of n instead of much smaller than order of n squared as we had for the original transformer. Right. So this this is what the long former basically does. Now they have written custom CUDA kernels for doing this for doing this this dilated attention and so on. Which is pretty cool and they have code available for the model. They test this on a number of language tasks and they what I find interesting is it actually they start from the Roberta checkpoint which Roberta where is it said somewhere oh yeah this Roberta model right here is a variant of Bert right you can see the name in here is a variant of Bert and that's their baseline and they start from these checkpoints as far as I understand and that it kind of copy over the position embeddings and so on and therefore they only need to train not very much pasta Roberta. 
Now the reason why they can copy it over actually is and this I find very interesting is they use a window size of 512. So until I read this I got away from reading the paper thinking that this this window size here might be fairly small right so this window size it might be you know maybe 102030 tokens or something right but actually this window size is 512 in their in their formulation which basically means that this is this is as much as one of the classic models could take as a document right so sorry let's discover so this this here is 512 so this is what a classic model could take as as a as an entire document and in the classic model you simply split up the document feed chunks right and then aggregate over them now the long former basically has this so right now for now I said it has less memory requirements actually has the same memory requirements as a classic model but it is also able because of these global attention to kind of incorporate information from the surrounding things so that's the the new part because if you think about it if this W here is 512 512 was the original N so 512 was the N0 whatever the old models had as an N so right now if I replace this and let's let's you know don't not take care of this if I replace this it's actually N times N0 and that regresses to the classic model if you plug in N0 here right so the new part really is the fact that you have this this sliding window and the global attention is is able to incorporate information from these special tokens as well because sliding window that was done before so I just don't want you to get to you the wrong impression that now we can run Transformers on like very small memory machines we can't but we can run them on the same memory machines because this is the same length right but also feed in longer documents and have some information of the entire document be propagated to these to these blocks which before we couldn't before we could just simply feed these blocks as one and not have global information so that's that's the new thing at least I haven't tested it on the smaller things which is cool from an engineering point right you would want to as if you want to show that you're better you would want to basically be able to be as powerful as the old model but then be more powerful and that's what they do all right so if you want to check out the experiments and the ablations is very interesting because they they turn on and off a lot of things in their model and kind of check out where things come from what helps what doesn't and I'll leave this to you and I'll link it and with that thanks for listening watching and bye-bye | [{"start": 0.0, "end": 5.64, "text": " Hi there. Today we're looking at Longformer, the Long Document Transformer by"}, {"start": 5.64, "end": 13.64, "text": " Esbalteji, Matthew Peters and Armin Cohen of Alan AI. So the Longformer is a"}, {"start": 13.64, "end": 19.36, "text": " variant of the Transformer as you might have guessed. The Longformer is a"}, {"start": 19.36, "end": 27.68, "text": " Transformer that can deal with long documents. So it's aptly named. So I am going"}, {"start": 27.68, "end": 33.04, "text": " to discuss what differentiates the Longformer from the Transformer. If you don't"}, {"start": 33.04, "end": 38.88, "text": " know what a Transformer is, watch the video on Attention is all you need. I have a"}, {"start": 38.88, "end": 45.519999999999996, "text": " video on that. 
And I would also suggest you watch the video on Bert because a lot"}, {"start": 45.519999999999996, "end": 51.32, "text": " of the architecture and training here is based on the on the Bert or variants of"}, {"start": 51.32, "end": 58.28, "text": " Bert. So I'll basically explain what makes the Longformer different such that it"}, {"start": 58.28, "end": 63.88, "text": " gets long documents. So she can be applied to long documents. So what is the"}, {"start": 63.88, "end": 70.03999999999999, "text": " problem with the original Transformer? If you have a Transformer model and let's"}, {"start": 70.03999999999999, "end": 75.68, "text": " say you're doing an NLP task, which usually is where Transformers are used. And"}, {"start": 75.68, "end": 81.76, "text": " you want to have a paragraph like this one right here, the abstract of the"}, {"start": 81.76, "end": 86.32000000000001, "text": " paper, and maybe you want to predict whether the paper gets accepted at a"}, {"start": 86.32000000000001, "end": 93.92000000000002, "text": " conference or not. Now the classic Transformers, they have a limit, a very harsh"}, {"start": 93.92000000000002, "end": 99.84, "text": " limit on the amount of tokens that they can look at at the same time. So what you"}, {"start": 99.84, "end": 105.72, "text": " would do in a classic Transformer is you couldn't process this entire thing. Let's"}, {"start": 105.72, "end": 109.96000000000001, "text": " say you would divide it in chunks. You'd say, okay, here's my first chunk from"}, {"start": 109.96000000000001, "end": 116.88, "text": " here to here, my second chunk from here to here, and then here to here, and so on."}, {"start": 116.88, "end": 121.16, "text": " So you go through the documents, split it up in chunks, process each of the"}, {"start": 121.16, "end": 127.08000000000001, "text": " chunks individually, and then maybe aggregate the predictions. But of course the"}, {"start": 127.08, "end": 132.52, "text": " drawback is that the model cannot make specific connections between let's say"}, {"start": 132.52, "end": 139.07999999999998, "text": " some word here, like operation, and somewhere down here, like language. It cannot"}, {"start": 139.07999999999998, "end": 144.07999999999998, "text": " connect the two on a neural level, at least not in the classic Transformer"}, {"start": 144.07999999999998, "end": 151.16, "text": " architectures. Now there are ways to try to alleviate this, but classically, if"}, {"start": 151.16, "end": 156.0, "text": " you split up your documents into individual samples, they become independent,"}, {"start": 156.0, "end": 161.88, "text": " you cannot do attention. This attention mechanism cannot operate over across"}, {"start": 161.88, "end": 168.72, "text": " the boundaries of these chunks. So the long former, the goal is to actually just"}, {"start": 168.72, "end": 176.92000000000002, "text": " be able to put this entire document here into the model at the same time. So"}, {"start": 176.92000000000002, "end": 181.84, "text": " let's look a bit closer into this in a classic Transformer model. What you'll"}, {"start": 181.84, "end": 187.4, "text": " have is you'll have layers of what is called attention mechanism. I'm going to"}, {"start": 187.4, "end": 194.48000000000002, "text": " draw six units here, and the units are actually the input sequence. 
So in a"}, {"start": 194.48000000000002, "end": 199.12, "text": " Transformer, other than like a classic neural network, you don't actually have"}, {"start": 199.12, "end": 207.88, "text": " numbers of units in the in the layers, but you can input as many as long"}, {"start": 207.88, "end": 215.0, "text": " sequences as you want until your memory limit is reached basically. So these"}, {"start": 215.0, "end": 220.48, "text": " units, they expose something called keys on the lower layer, and these are"}, {"start": 220.48, "end": 227.64, "text": " vectors that point somewhere, and the upper layer will produce what are called"}, {"start": 227.64, "end": 233.28, "text": " queries. And again, I invite you to look at the attention is all you need,"}, {"start": 233.28, "end": 238.04, "text": " video, if you want more explanation. And basically the keys and queries, they"}, {"start": 238.04, "end": 244.56, "text": " decide where information gets routed to. Right? So the routing of information is"}, {"start": 244.56, "end": 249.44, "text": " what makes the Transformer, the Transformer. So for example, this here is"}, {"start": 249.44, "end": 254.8, "text": " probably going to be routed to this here. So the information is routed like this."}, {"start": 254.8, "end": 258.36, "text": " And then this here is going to be routed like this. You see the routing is"}, {"start": 258.36, "end": 266.68, "text": " according to the dot product of the of the keys and queries. Right? So in essence,"}, {"start": 266.68, "end": 273.96000000000004, "text": " if you have an input sequence tokens, and you usually transform in a Transformer,"}, {"start": 273.96000000000004, "end": 280.76, "text": " you transform the things into same length sequences. That has to do a lot"}, {"start": 280.76, "end": 286.40000000000003, "text": " also with how you want to pre-train things and so on. So we're not really going"}, {"start": 286.4, "end": 292.47999999999996, "text": " to change that part. If you have n input sequence and n tokens on the next"}, {"start": 292.47999999999996, "end": 297.79999999999995, "text": " layer, and everything can attend to everything. So all the inner products are"}, {"start": 297.79999999999995, "end": 302.0, "text": " computed, right? Everything is connected to everything. That means that you're"}, {"start": 302.0, "end": 307.91999999999996, "text": " going to end up with an of n squared memory requirement, because you have n"}, {"start": 307.91999999999996, "end": 316.2, "text": " squared connections. The way to alleviate this is much, much like you would"}, {"start": 316.2, "end": 320.59999999999997, "text": " alleviate this in a classic neural network. So in a classic neural network,"}, {"start": 320.59999999999997, "end": 325.68, "text": " imagine you have this MLP, a multi-layer perceptron, or what you usually"}, {"start": 325.68, "end": 330.44, "text": " known as a fully connected layer. Right? So here I have the same thing, but"}, {"start": 330.44, "end": 335.08, "text": " it's not a Transformer. It's a classic neural network fully connected. So I have"}, {"start": 335.08, "end": 340.91999999999996, "text": " D units right here and D units in this first hidden layer. And I'll have a weight"}, {"start": 340.91999999999996, "end": 345.36, "text": " matrix in here, right? And the weight matrix means everything is connected to"}, {"start": 345.36, "end": 350.84000000000003, "text": " everything, right? Everything connects to everything else. 
Again, my memory"}, {"start": 350.84000000000003, "end": 356.24, "text": " requirement here is D squared. Now, how do we deal with this in a classic neural"}, {"start": 356.24, "end": 360.76, "text": " network? We go to what is called a convolutional neural network. At least that's"}, {"start": 360.76, "end": 367.12, "text": " one of the one of the methods. So let's make this again, but let's now make"}, {"start": 367.12, "end": 371.92, "text": " this a convolutional neural network. What we'll have is we'll have a convolutional"}, {"start": 371.92, "end": 377.64000000000004, "text": " kernel. In this case, it's just of length three, right? So we just have three"}, {"start": 377.64000000000004, "end": 385.04, "text": " units here and they will do the same fully connected pattern, but only over"}, {"start": 385.04, "end": 390.40000000000003, "text": " these three units right here. And then we slide the kernel over, right? Now it's"}, {"start": 390.40000000000003, "end": 395.72, "text": " in this position. It's still the same three units. But now these three things are"}, {"start": 395.72, "end": 401.16, "text": " connected to these three things that they're now over, right? And so you keep"}, {"start": 401.16, "end": 407.12, "text": " sliding this over across the lower layer until you're finally at the end"}, {"start": 407.12, "end": 413.76000000000005, "text": " here. And now you've reduced the memory consumption from D squared to just D"}, {"start": 413.76000000000005, "end": 421.28000000000003, "text": " times, and if this is usually the kernel size is called K to D times K. And K, you"}, {"start": 421.28000000000003, "end": 428.88, "text": " can keep pretty much constant. So that's all of D, right? The same goes for the"}, {"start": 428.88, "end": 433.52, "text": " long former. So in the long former, the idea is that you have a so-called"}, {"start": 433.52, "end": 438.52, "text": " sliding window attention. It's exactly the same as it is in the convolution,"}, {"start": 438.52, "end": 443.08, "text": " except that you don't have these hidden units here, but these are actually"}, {"start": 443.08, "end": 449.0, "text": " the parts of the input sequence. And instead of the weight matrix here, you have"}, {"start": 449.0, "end": 454.84, "text": " the attention mechanism over the keys, queries, and values. But the idea is"}, {"start": 454.84, "end": 460.32, "text": " similar. So you can basically say this is a sort of a convolution. And we've"}, {"start": 460.32, "end": 465.71999999999997, "text": " already had this in the video about axial attention a bit. Now, of course, this"}, {"start": 465.71999999999997, "end": 474.35999999999996, "text": " is is you trade off memory for performance because before, right, before I'm"}, {"start": 474.35999999999996, "end": 481.28, "text": " going to draw, let's draw it on top of this fully connected layer before all the"}, {"start": 481.28, "end": 487.28, "text": " units could attend to all the units, right? And now the unit can only attend to"}, {"start": 487.28, "end": 492.91999999999996, "text": " its immediate neighborhood, right? This green unit here can only attend to"}, {"start": 492.91999999999996, "end": 498.15999999999997, "text": " itself in the lower layer and its immediate neighbors if the kernel size is"}, {"start": 498.15999999999997, "end": 504.55999999999995, "text": " three. But consider what happens in the next layer. 
So in the next layer, I have"}, {"start": 504.55999999999995, "end": 509.64, "text": " for example, this unit right here, this is same unit, right? On the next layer,"}, {"start": 509.64, "end": 516.84, "text": " it can attend to these two and itself in the lower layer. But these two"}, {"start": 516.84, "end": 522.64, "text": " themselves can attend to all of these, right? So that the one on the right can"}, {"start": 522.64, "end": 529.68, "text": " attend to one more. So in the first layer, this particular unit had information"}, {"start": 529.68, "end": 535.52, "text": " from these three units. But in the second layer, the same unit has now"}, {"start": 535.52, "end": 541.52, "text": " information across these five, right? And this is kind of this cone of attention."}, {"start": 541.52, "end": 546.76, "text": " It gets bigger and bigger as you go through the layers. So you lose the"}, {"start": 546.76, "end": 552.1999999999999, "text": " information to incorporate wide ranges of information in a single layer, but"}, {"start": 552.1999999999999, "end": 558.72, "text": " you regain it through depth, right? The deeper you go, the more a single unit"}, {"start": 558.72, "end": 563.16, "text": " gets information, right? This unit gets information from this unit over here"}, {"start": 563.16, "end": 569.48, "text": " through the layers through the layers. It can't watch the unit right here in"}, {"start": 569.48, "end": 573.48, "text": " this layer. That's not possible. But it gets the information through the layers."}, {"start": 573.48, "end": 577.7199999999999, "text": " Of course, there's still a trade off like a fully connected layer could just do"}, {"start": 577.7199999999999, "end": 582.04, "text": " this in one step. And then in the next layer, it could do it again, right? It can"}, {"start": 582.04, "end": 587.52, "text": " do much more complex computation. But if you believe that the most important"}, {"start": 587.52, "end": 592.0, "text": " information is actually in the neighborhoods of the individual tokens, which is"}, {"start": 592.0, "end": 597.04, "text": " conceivable in something like the convolutional neural network, you know that, you"}, {"start": 597.04, "end": 602.32, "text": " know, in an image, usually you have localized information, right? If there's a"}, {"start": 602.32, "end": 608.92, "text": " cat here, then it knows in the eyes of the cat, they're pretty close together."}, {"start": 608.92, "end": 613.12, "text": " So in order to recognize it's a cat, you won't mostly want local"}, {"start": 613.12, "end": 618.36, "text": " information, more and more local information. So in an image that makes sense, and"}, {"start": 618.36, "end": 623.0, "text": " in a text, it also makes sense to a degree in that usually words close"}, {"start": 623.0, "end": 630.48, "text": " together in a sentence. They are important for each other, right? But the power"}, {"start": 630.48, "end": 636.24, "text": " of the transformer was initially that it could attend to everything in a"}, {"start": 636.24, "end": 642.96, "text": " sentence, right? So for example, if you have again the paragraph here, the power"}, {"start": 642.96, "end": 650.24, "text": " of the transformer, at least that was said, is the fact that this piece of text"}, {"start": 650.24, "end": 655.36, "text": " here could make a connection to this piece of text here. 
And therefore the"}, {"start": 655.36, "end": 660.6, "text": " understanding of the entire paragraph could be reliant on this connection being"}, {"start": 660.6, "end": 666.4000000000001, "text": " made, which a local model can't do. But if you go through depth that you might"}, {"start": 666.4000000000001, "end": 671.24, "text": " be able to recover that. So the long former is basically what the convolutional"}, {"start": 671.24, "end": 680.12, "text": " neural network does for MLPs. It does it for transformers, right? So instead of n by"}, {"start": 680.12, "end": 688.96, "text": " n, giving you n squared, now you go into into this way where you have, so if you do"}, {"start": 688.96, "end": 696.32, "text": " the same for the transformer, you go to oh n times, let's call it W and W being"}, {"start": 696.32, "end": 705.0400000000001, "text": " your window size in this case. They have an illustration of this right here. So in a"}, {"start": 705.0400000000001, "end": 711.44, "text": " original transformer, this is an attention matrix. So here you have your n units"}, {"start": 711.44, "end": 718.2, "text": " in sequence and drawn in is which unit can attend to which other unit in a"}, {"start": 718.2, "end": 725.48, "text": " given layer. So you'll see this particular unit I here can attend of course to"}, {"start": 725.48, "end": 732.0, "text": " itself, right? Can attend to unit I. But it can also attend to this unit or to"}, {"start": 732.0, "end": 739.48, "text": " this unit or to this unit to any unit, right? And that's what gives you this n"}, {"start": 739.48, "end": 745.52, "text": " squared attention because any unit can attend to any unit. Now in this sliding"}, {"start": 745.52, "end": 750.32, "text": " window attention pattern, and this is one of the core components of the long"}, {"start": 750.32, "end": 759.36, "text": " former, you see that the I I f unit here right here can attend to itself, right? But"}, {"start": 759.36, "end": 771.84, "text": " also to this and to this, this, but no more, it can only attend to the I f"}, {"start": 771.84, "end": 783.2800000000001, "text": " unit or to I minus W to I plus W, right? And this here is a window of size W. This"}, {"start": 783.2800000000001, "end": 790.52, "text": " is this sliding window. So given unit can only attend to itself or its"}, {"start": 790.52, "end": 795.44, "text": " neighbors in one layer, right? And this is exactly what a convolution is like if"}, {"start": 795.44, "end": 803.8000000000001, "text": " you see, if you see this pattern, this is a this is a convolutional pattern. Now"}, {"start": 803.8000000000001, "end": 809.6800000000001, "text": " the second core component is they expand on this idea in that they make they"}, {"start": 809.6800000000001, "end": 815.12, "text": " create these dilated sliding windows. Now you see you already know what a"}, {"start": 815.12, "end": 821.2, "text": " sliding window is. Now they're saying, well, if you if you have this sliding"}, {"start": 821.2, "end": 826.2, "text": " window, it might take quite a number of layers in order to you know get your"}, {"start": 826.2, "end": 831.84, "text": " attention of the entire sequence incorporated. We saw before it took like three"}, {"start": 831.84, "end": 841.44, "text": " layers to get halfway through this sequence of what was it? Like six tokens and"}, {"start": 841.44, "end": 847.6800000000001, "text": " it took us like three layers. 
And with so basically if you go if you go one"}, {"start": 847.68, "end": 855.8399999999999, "text": " layer up, right? One layer up you gain one more context window in each"}, {"start": 855.8399999999999, "end": 861.0, "text": " direction, right? So it's not a you'd have to go very very deep in order to"}, {"start": 861.0, "end": 868.52, "text": " incorporate information from these very long sequences. And the sliding the"}, {"start": 868.52, "end": 877.5999999999999, "text": " dilated sliding window helps this where they say, well technically now any"}, {"start": 877.6, "end": 883.2, "text": " any any sequence here. So again, if we have this sequence and this is the next"}, {"start": 883.2, "end": 892.24, "text": " layer, actually, let's just draw. So this unit right here, it will be able to"}, {"start": 892.24, "end": 897.84, "text": " attend this and this, but not this and not this, but it will also be able to"}, {"start": 897.84, "end": 903.32, "text": " attend this and this, but not this and not this, sorry, not this. So it'll skip"}, {"start": 903.32, "end": 909.32, "text": " one. So right, these attention patterns, they will always kind of skip skip one."}, {"start": 909.32, "end": 916.36, "text": " And the idea is that now you have a vastly greater window of attention, right?"}, {"start": 916.36, "end": 921.6800000000001, "text": " Your window size is now way bigger. That means you can incorporate"}, {"start": 921.6800000000001, "end": 927.6, "text": " information way faster across the layers, like global information. But of course,"}, {"start": 927.6, "end": 932.8000000000001, "text": " now they they're kind of arguing against each other. In when they do this"}, {"start": 932.8, "end": 939.76, "text": " sliding window, they say, well, we pose that mostly local information is"}, {"start": 939.76, "end": 944.4399999999999, "text": " important for NLP, right? The words right around the word are important. And now"}, {"start": 944.4399999999999, "end": 949.64, "text": " if they say this here, they basically say, well, it's not so important that we"}, {"start": 949.64, "end": 954.9599999999999, "text": " miss this word right here, which is right next to the word that they are"}, {"start": 954.9599999999999, "end": 960.64, "text": " attending from, which is counter counter to what they just said that probably the"}, {"start": 960.64, "end": 966.3199999999999, "text": " most important information is around the word. They they do get around this by"}, {"start": 966.3199999999999, "end": 971.52, "text": " saying, well, if we have different layers and a transformer and in the lower"}, {"start": 971.52, "end": 978.04, "text": " layers, we'll use this sliding window fully local. And in the higher layers,"}, {"start": 978.04, "end": 985.08, "text": " we'll use this dilated window. And therefore, in the in the lower layers, we"}, {"start": 985.08, "end": 989.88, "text": " postulate that local information is actually what's needed to understand"}, {"start": 989.88, "end": 995.32, "text": " local features. And then in the higher layers, we want more global"}, {"start": 995.32, "end": 1001.32, "text": " information, because it will incorporate features from the from the local"}, {"start": 1001.32, "end": 1006.52, "text": " information of the lower layers. 
All right, I can I can get the"}, {"start": 1006.52, "end": 1011.04, "text": " argumentation, but I feel that's just something they've thrown in there to make"}, {"start": 1011.04, "end": 1018.44, "text": " it work better after they tried it out. And the the last idea here in the"}, {"start": 1018.44, "end": 1024.48, "text": " longformer is what they call global attention. And these global attention is"}, {"start": 1024.48, "end": 1031.3200000000002, "text": " sparse. What it means is that there are some special units here. So in this,"}, {"start": 1031.3200000000002, "end": 1037.56, "text": " this, this, this, and this unit, and these special units, as you can see from the"}, {"start": 1037.56, "end": 1042.28, "text": " attention pattern, these are these can actually attend to everything. So this"}, {"start": 1042.28, "end": 1048.0800000000002, "text": " unit can attend, for example, to this one or to this one or to anything, these"}, {"start": 1048.08, "end": 1052.96, "text": " can attend to anything and any unit can attend to those, right? Any unit can"}, {"start": 1052.96, "end": 1060.3999999999999, "text": " attend to the the first unit right here. Right? So these are your special tokens,"}, {"start": 1060.3999999999999, "end": 1067.48, "text": " your special units, and they have global attention. And the reason for this"}, {"start": 1067.48, "end": 1072.96, "text": " particularly is that sometimes this is needed and this is an engineering"}, {"start": 1072.96, "end": 1077.1599999999999, "text": " choice. Right? The example I can give is let's say you have a question"}, {"start": 1077.16, "end": 1080.92, "text": " answering task in a question answering task what you usually have is a question"}, {"start": 1080.92, "end": 1087.2, "text": " and a paragraph. And let's say the task here is to answer yes or no. Is the"}, {"start": 1087.2, "end": 1092.72, "text": " question to the question might be a statement, right? I don't know. King James"}, {"start": 1092.72, "end": 1099.5600000000002, "text": " was king of England from 1120 to 1140. And then the paragraph will be the"}, {"start": 1099.5600000000002, "end": 1106.64, "text": " Wikipedia entry for King James. And the question is yes or no is the"}, {"start": 1106.64, "end": 1111.6000000000001, "text": " question true or not? Is the statement made true or not? How you would feed"}, {"start": 1111.6000000000001, "end": 1118.0800000000002, "text": " this to a bird model to a transformer is you concatenate these two things"}, {"start": 1118.0800000000002, "end": 1123.64, "text": " question or statement and paragraph. Right? These are the tokens right here. And"}, {"start": 1123.64, "end": 1128.72, "text": " then you would separate them using a special token called the separator token."}, {"start": 1128.72, "end": 1133.3600000000001, "text": " This is just to inform the model that here is where the first thing stops and"}, {"start": 1133.36, "end": 1138.8, "text": " the next thing starts. And then at the beginning you would put a special token"}, {"start": 1138.8, "end": 1145.8, "text": " called the CLS token. Now usually what you do is you send these things through"}, {"start": 1145.8, "end": 1152.6, "text": " your transformer. And now in the last layer, right, you end up as we've seen"}, {"start": 1152.6, "end": 1156.28, "text": " before because you always transform a sequence into a sequence you end up with"}, {"start": 1156.28, "end": 1161.36, "text": " a sequence again. But you just want a single thing. You just want yes or no. 
So"}, {"start": 1161.36, "end": 1168.9199999999998, "text": " you designate you say this particular unit here that corresponds to the CLS"}, {"start": 1168.9199999999998, "end": 1174.56, "text": " token. That's what I'm going to throw into a logistic regression. And that's"}, {"start": 1174.56, "end": 1178.9199999999998, "text": " what will give me my yes or no answer. And that's how you train it. Right? So"}, {"start": 1178.9199999999998, "end": 1185.9199999999998, "text": " you don't want to single out any of these any of these as like special. So you"}, {"start": 1185.92, "end": 1191.5600000000002, "text": " simply already include a special token at the beginning that then you take the"}, {"start": 1191.5600000000002, "end": 1197.96, "text": " classification from right. It's pretty smart. But also you say, ah this is such a"}, {"start": 1197.96, "end": 1204.48, "text": " special token. I want that to be able to attend to anything right even though"}, {"start": 1204.48, "end": 1209.92, "text": " for example this unit right here. It can only attend to its neighbors right. It"}, {"start": 1209.92, "end": 1214.8000000000002, "text": " has this cone thing and this unit right here has this cone thing. This unit right"}, {"start": 1214.8, "end": 1221.24, "text": " here can always attend to anything at each of the layers right. It can attend to"}, {"start": 1221.24, "end": 1227.24, "text": " anything and anything can attend to it. So it can get information from anywhere"}, {"start": 1227.24, "end": 1233.04, "text": " routed to it in each of the layers and it can send information to any of the"}, {"start": 1233.04, "end": 1239.36, "text": " other units. And this is an engineering choice. So at the beginning you as an"}, {"start": 1239.36, "end": 1243.84, "text": " engineer have to say which one of these tokens are special tokens. And for these"}, {"start": 1243.84, "end": 1249.76, "text": " tokens you'll actually then do full attention right. It can attend to and from"}, {"start": 1249.76, "end": 1256.1999999999998, "text": " anything. So what are our new memory requirements. Well this will give us is"}, {"start": 1256.1999999999998, "end": 1264.72, "text": " first of all we have n tokens right. And here w is our window size. So we have n"}, {"start": 1264.72, "end": 1273.12, "text": " times w memory. But then we also add the global attention. So plus the number of"}, {"start": 1273.12, "end": 1282.56, "text": " special tokens times right. So if there's a special token it will have n times"}, {"start": 1282.56, "end": 1289.1599999999999, "text": " two memory requirements because it can attend from and to in each layer. So and"}, {"start": 1289.1599999999999, "end": 1296.28, "text": " this entire thing sorry with the plus this entire thing times the number of"}, {"start": 1296.28, "end": 1301.9199999999998, "text": " layers. So this is your new attention memory requirements and as you can see"}, {"start": 1301.92, "end": 1309.44, "text": " here n plus n. So this is going to be order of n instead of much smaller than"}, {"start": 1309.44, "end": 1318.3200000000002, "text": " order of n squared as we had for the original transformer. Right. So this this"}, {"start": 1318.3200000000002, "end": 1322.96, "text": " is what the long former basically does. Now they have written custom CUDA"}, {"start": 1322.96, "end": 1330.0, "text": " kernels for doing this for doing this this dilated attention and so on. 
Which is"}, {"start": 1330.0, "end": 1335.08, "text": " pretty cool and they have code available for the model. They test this on a"}, {"start": 1335.08, "end": 1343.16, "text": " number of language tasks and they what I find interesting is it actually they"}, {"start": 1343.16, "end": 1350.84, "text": " start from the Roberta checkpoint which Roberta where is it said somewhere"}, {"start": 1350.84, "end": 1357.12, "text": " oh yeah this Roberta model right here is a variant of Bert right you can see"}, {"start": 1357.12, "end": 1362.1599999999999, "text": " the name in here is a variant of Bert and that's their baseline and they start"}, {"start": 1362.1599999999999, "end": 1366.2399999999998, "text": " from these checkpoints as far as I understand and that it kind of copy over the"}, {"start": 1366.2399999999998, "end": 1372.1999999999998, "text": " position embeddings and so on and therefore they only need to train not very"}, {"start": 1372.1999999999998, "end": 1377.9599999999998, "text": " much pasta Roberta. Now the reason why they can copy it over actually is and"}, {"start": 1377.9599999999998, "end": 1385.0, "text": " this I find very interesting is they use a window size of 512. So until I read"}, {"start": 1385.0, "end": 1393.16, "text": " this I got away from reading the paper thinking that this this window size"}, {"start": 1393.16, "end": 1398.84, "text": " here might be fairly small right so this window size it might be you know"}, {"start": 1398.84, "end": 1406.48, "text": " maybe 102030 tokens or something right but actually this window size is 512"}, {"start": 1406.48, "end": 1415.6, "text": " in their in their formulation which basically means that this is this is as"}, {"start": 1415.6, "end": 1422.64, "text": " much as one of the classic models could take as a document right so sorry let's"}, {"start": 1422.64, "end": 1432.88, "text": " discover so this this here is 512 so this is what a classic model could take as"}, {"start": 1432.88, "end": 1439.3600000000001, "text": " as a as an entire document and in the classic model you simply split up the"}, {"start": 1439.3600000000001, "end": 1443.3600000000001, "text": " document feed chunks right and then aggregate over them now the long"}, {"start": 1443.3600000000001, "end": 1450.8000000000002, "text": " former basically has this so right now for now I said it has less memory"}, {"start": 1450.8000000000002, "end": 1455.8400000000001, "text": " requirements actually has the same memory requirements as a classic model but it"}, {"start": 1455.8400000000001, "end": 1460.0400000000002, "text": " is also able because of these global attention to kind of incorporate"}, {"start": 1460.04, "end": 1466.84, "text": " information from the surrounding things so that's the the new part because if"}, {"start": 1466.84, "end": 1477.08, "text": " you think about it if this W here is 512 512 was the original N so 512 was the"}, {"start": 1477.08, "end": 1489.8799999999999, "text": " N0 whatever the old models had as an N so right now if I replace this and"}, {"start": 1489.88, "end": 1494.0400000000002, "text": " let's let's you know don't not take care of this if I replace this it's actually"}, {"start": 1494.0400000000002, "end": 1501.0, "text": " N times N0 and that regresses to the classic model if you plug in N0 here"}, {"start": 1501.0, "end": 1507.68, "text": " right so the new part really is the fact that you have this this sliding"}, {"start": 1507.68, "end": 1513.24, "text": " window and the global attention is is able to 
incorporate information from"}, {"start": 1513.24, "end": 1521.28, "text": " these special tokens as well because sliding window that was done before so I just"}, {"start": 1521.28, "end": 1525.64, "text": " don't want you to get to you the wrong impression that now we can run"}, {"start": 1525.64, "end": 1531.1200000000001, "text": " Transformers on like very small memory machines we can't but we can run them"}, {"start": 1531.1200000000001, "end": 1537.04, "text": " on the same memory machines because this is the same length right but also"}, {"start": 1537.04, "end": 1543.32, "text": " feed in longer documents and have some information of the entire document be"}, {"start": 1543.32, "end": 1549.3999999999999, "text": " propagated to these to these blocks which before we couldn't before we could"}, {"start": 1549.3999999999999, "end": 1554.8, "text": " just simply feed these blocks as one and not have global information so that's"}, {"start": 1554.8, "end": 1559.56, "text": " that's the new thing at least I haven't tested it on the smaller things which"}, {"start": 1559.56, "end": 1563.6, "text": " is cool from an engineering point right you would want to as if you want to"}, {"start": 1563.6, "end": 1570.32, "text": " show that you're better you would want to basically be able to be as powerful as"}, {"start": 1570.32, "end": 1576.56, "text": " the old model but then be more powerful and that's what they do all right so if"}, {"start": 1576.56, "end": 1580.28, "text": " you want to check out the experiments and the ablations is very interesting"}, {"start": 1580.28, "end": 1585.0, "text": " because they they turn on and off a lot of things in their model and kind of"}, {"start": 1585.0, "end": 1589.1599999999999, "text": " check out where things come from what helps what doesn't and I'll leave this"}, {"start": 1589.16, "end": 1594.28, "text": " to you and I'll link it and with that thanks for listening watching and bye-bye"}] |
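The transcript above walks through the three attention patterns that make up the Longformer: the sliding window, the dilated sliding window, and the sparse global attention given to a few special tokens. Below is a minimal NumPy sketch that builds the corresponding boolean attention mask and counts how many query-key pairs it allows compared with full self-attention; the function name, window size, dilation, and global-token positions are illustrative choices, not the authors' implementation (which uses custom CUDA kernels rather than a dense masked matrix).

```python
import numpy as np

def longformer_attention_mask(n, window, dilation=1, global_idx=()):
    """Boolean mask where mask[i, j] == True means query i may attend to key j.

    Combines the three patterns discussed in the video:
      * sliding window: each token attends to itself and its +/- window//2 neighbours,
      * dilation: the window skips (dilation - 1) positions between attended keys,
      * global attention: chosen tokens attend to everything, and everything attends to them.
    """
    mask = np.zeros((n, n), dtype=bool)
    half = window // 2
    for i in range(n):
        for k in range(-half, half + 1):          # dilated sliding window around i
            j = i + k * dilation
            if 0 <= j < n:
                mask[i, j] = True
    for g in global_idx:                          # e.g. a [CLS]-like special token
        mask[g, :] = True
        mask[:, g] = True
    return mask

# Tiny illustration: 12 tokens, a window of 4, one global token at position 0.
m = longformer_attention_mask(n=12, window=4, dilation=1, global_idx=(0,))
dense_pairs = 12 * 12          # full self-attention: O(n^2) query-key pairs
sparse_pairs = int(m.sum())    # windowed + global: roughly O(n*w + g*n) pairs
print(dense_pairs, sparse_pairs)
```

Counting the allowed pairs makes the memory argument from the video concrete: the dense pattern grows as O(n²), while the windowed-plus-global pattern grows as roughly O(n·w + g·n) for g global tokens.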
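The yes/no question-answering setup described in the transcript (concatenate the statement and the paragraph, mark the boundary with a separator token, and classify from the first special token) can also be sketched in a few lines. The `toy_encoder` below is a made-up stand-in for the actual (Long)former and the classifier weights are random, so this only shows the wiring, not trained behaviour; in the Longformer the [CLS] position would be one of the tokens given global attention.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8  # toy hidden size

def toy_encoder(tokens):
    # Stand-in for a Transformer encoder: one d_model vector per input token.
    # A real model would compute these with (windowed + global) self-attention.
    enc_rng = np.random.default_rng(1)
    return enc_rng.normal(size=(len(tokens), d_model))

statement = "king james was king of england from 1120 to 1140".split()
paragraph = "the wikipedia entry for king james would go here".split()

# Concatenate with the special tokens; [SEP] marks where the statement ends
# and the paragraph starts, [CLS] is the position the classifier reads from.
tokens = ["[CLS]"] + statement + ["[SEP]"] + paragraph

hidden = toy_encoder(tokens)   # sequence in, sequence out
cls_vector = hidden[0]         # keep only the [CLS] position

# Logistic-regression head on the [CLS] vector -> probability of "yes".
w = rng.normal(size=d_model)
b = 0.0
p_yes = 1.0 / (1.0 + np.exp(-(w @ cls_vector + b)))
print(round(p_yes, 3))
```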
Yannic Kilcher | https://www.youtube.com/watch?v=a0f07M2uj_A | Backpropagation and the brain | Geoffrey Hinton and his co-authors describe a biologically plausible variant of backpropagation and report evidence that such an algorithm might be responsible for learning in the brain.
https://www.nature.com/articles/s41583-020-0277-3
Abstract:
During learning, the brain modifies synapses to improve behaviour. In the cortex, synapses are embedded within multilayered networks, making it difficult to determine the effect of an individual synaptic modification on the behaviour of the system. The backpropagation algorithm solves this problem in deep artificial neural networks, but historically it has been viewed as biologically problematic. Nonetheless, recent developments in neuroscience and the successes of artificial neural networks have reinvigorated interest in whether backpropagation offers insights for understanding learning in the cortex. The backpropagation algorithm learns quickly by computing synaptic updates using feedback connections to deliver error signals. Although feedback connections are ubiquitous in the cortex, it is difficult to see how they could deliver the error signals required by strict formulations of backpropagation. Here we build on past and recent developments to argue that feedback connections may instead induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.
Authors: Timothy P. Lillicrap, Adam Santoro, Luke Marris, Colin J. Akerman & Geoffrey Hinton
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at back propagation and the brain by Timothy Lillikrapp, Adam Santoro, Luke Morris, Colin Akerman and Jeffrey Hinton. So this is a bit of an unusual paper for the machine learning community but nevertheless it's interesting. And let's be honest, at least half of our interest comes from the fact that Jeffrey Hinton is one of the authors of this paper. So this is a paper that basically proposes a hypothesis on how the algorithm of back propagation works in the brain. Because previously there has been a lot of evidence against there being something like back propagation in the brain. So the question is how do neural networks in the brain learn? And they say there can be many different ways that neural networks learn and they list them up in this kind of diagram where you have a network and it mobs from input to output by having these weighted connections between neurons. So the input is too dimensional and then it maps using these weights to a three dimensional hidden layer. And usually there is a non-linear function somewhere at the output here of these. So they do a weighted sum of the inputs and then they do a non-linear non-linear function and then they propagate that signal to the next layer and to finally to the output. Alright, so how do these networks learn? The one way of learning is called hebian learning. The interesting thing here is that it requires no feedback from the outside world. Basically what you want to do in hebian learning is you want to update the connections such that they kind of match their own previous outputs or even increase their own previous outputs. So you propagate a signal and then maybe this neuron spikes really hard and this neuron spikes really low. Then if you propagate the signal again, right, then you want to match that those those activations or if you if you propagate similar signals, no feedback required. So basically it's a self amplifying or self dampening process. Ultimately, though you want to learn something about the world, then that means you have to have some some feedback from outside. So with feedback, what we mean is usually that the output here, let's put this away, the output here is goes into the world. Let's say this is a motor neuron, right, you do something with your arm like you hammer on a nail. And then you either hit the nail or you don't let's say you don't hit the nail. So after it looks like crooked there, you have feedback, right. So feedback usually in the form of some sort of error signal, right. So feedback, you can be like this was good or this was bad or it can be this was a bit too much to the left or so on. The important part is you get kind of one number of feedback, right. How bad you were. And now your goal is to adjust all of the individual neurons or weights between neurons such that the error will be lower. So in heavy and learning, there is no feedback. It's just simply a self reinforcing pattern activation machine in the first in these kind of first instances of perturbation learning what you'll have is you'll have one single feedback. And that you can see this is a diffuse cloud here. What you're basically saying is that every single neuron is kind of punished. Let's say the feedback here was negative one. That means every single neuron is is punished for that. So how you can imagine something if you have your input X and you map it through through your function F. And the function F has a weight W1 and so on. Right. 
So you map X through it. Right. And then you get a feedback of negative one. And then you map X with a little bit of noise plus N. Right. And you get a feedback of negative two. Right. Then you get that means that the direction of this noise was probably a bad direction. So ultimately you want to update X into the direction of negative that noise by modulated. Of course, by by some factor here that's that kind of tells you how bad it was. So this could be the negative two minus negative one. Yeah, that makes big sense. No. Yes. That would be no. It would be negative one minus negative. Never mind. So basically with a scalar feedback, you simply tell each neuron what it did right or sorry if if the entire network right the entire network did right or wrong. So the entire network will lead to this feedback. You don't have accountability of the individual neurons. All you can say is that whatever I'm doing here is wrong and whatever I'm doing here is right. So I'm going to do more of the right things. Now in back propagation, it is very different. Right. In back propagation, what you'll do is you will have your feedback here. Let's say that's negative one. And then you do a reverse computation. So the forward computation in this case was this weighted sum of this layer. Now you do a layer wise reverse computation, which means that you know how this function here, this output came to be out of the outputs. And that means you can inverse and you can do an inverse propagation of the error signal, which is of course the gradient. So this would be your, your, you would derive your error by the inputs to the layer. Right. So this basically tells in the back propagation algorithm, you can exactly determine if you are this node, how do I have to adjust my input weights, how do I have to adjust them in order to make this number here go down. Right. And then because you always propagate the error according to that, what you'll have in each in each layer is basically a vector target. So it's no longer just one number, but each layer now has a target of vectors. And it says, OK, these are the outputs that would be beneficial. Please, this layer, please change your outputs in the direction of negative two, negative three plus four. So you see, this is, so the negative two would be this unit, the negative three would be this unit and the plus four would be this unit. So each unit is instructed individually to say, please, this is the direction that each unit should change in in order to make this number go lower. You see how this is much more information than the perturbation learning in the perturbation learning all the units simply know, well, before was bad and now is better. So let's, you know, change a bit. Here, you have detailed instructions for each unit because of the back propagation algorithm. So ultimately, people have kind of thought that since back propagation wasn't really possible with biological neurons that the brain might be doing something like perturbation learning. But this paper argues that something like back propagation is not only possible, but likely in the brain and they propose this kind of back prop like learning with the feedback network. So they basically concern all the native rentiate hard between these two regimes here. In this hand, you have the scalar feedback, which means that the entire network gets one number as feedback and each neuron just gets that number. 
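To make the contrast between scalar (perturbation) feedback and per-weight gradient feedback concrete, here is a minimal numpy sketch on a made-up one-layer problem; it only illustrates the two update styles described in the transcript, it is not anything from the paper, and all values (x, t, learning rates, noise scale) are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up one-layer problem: y = w @ x should hit a target t.
# The "feedback" is a single scalar for the whole network (higher is better).
x = np.array([1.0, -2.0, 0.5])
t = 3.0

def feedback(w):
    return -((w @ x - t) ** 2)

# --- scalar feedback / perturbation learning --------------------------------
w_pert = rng.normal(size=3)
lr, sigma = 0.01, 0.1
for _ in range(1000):
    noise = sigma * rng.normal(size=3)
    delta = feedback(w_pert + noise) - feedback(w_pert)  # did the nudge help?
    w_pert = w_pert + lr * delta * noise / sigma**2      # move along helpful noise

# --- vector feedback / gradient-style learning -------------------------------
w_grad = rng.normal(size=3)
for _ in range(200):
    err = w_grad @ x - t                     # signed error, from which every
    w_grad = w_grad - 0.01 * (2 * err * x)   # weight gets its own direction

# the gradient learner converges cleanly; the perturbation learner only hovers
# noisily near the optimum, since it never sees per-weight information
print(feedback(w_pert), feedback(w_grad))
```

The perturbation learner only ever sees the single number returned by feedback(), so its progress is noisy; the gradient learner gets a signed direction for every weight, which is exactly the "detailed instructions" point made above.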
And here you have vector feedback where each neuron gets an individual instruction of how to update and they achieve this not by back propagation because still the original formulation of back prop as we use it in neural networks is not biologically plausible, but they achieve this with this back prop like learning with feedback network. And we'll see how this does, but in essence, this feedback network is constructed such that it can give each neuron in the forward pass here detailed instructions on how to update itself. So yeah, they have a little bit of a diagram here of if you do heavy in if this if this is an error landscape, if you do heavy in learning, you're basically you don't care about the error. You're just reinforcing yourself. If you do perturbation learning, then you it's very slow because you don't have a detailed signal. You just you just relying on this one number. It's kind of if you were to update every single neuron in neural network with reinforcement learning, considering the output of the neural networks or the error, considering that the reward, not using back prop and then with back prop, you have a much smoother, much faster optimization trajectory. So they look at this and they they come to some some conclusions. First of all, so here's here's back prop basically same back prop as we said, you have a forward pass. And there you simply compute these weighted averages and you also pass them usually through some sort of non linear activation. And the cool thing about this is in artificial neural networks is that once the error comes in, you can exactly reverse that. So you can do a backward pass of errors where you can propagate these errors through because you know it's kind of invertible. The function doesn't have to be invertible, but the gradients will flow backwards if you know how the forward pass was computed. So first of all, they go into a discussion of back prop in the brain. How can we even expect that? And one cool piece of evidence is where I find is that they cite several examples where they use artificial neural networks to learn the same task as humans. Right. And or as animal brains. And then I have no clue how how they measure any of this. But then they compare the hidden representations of the living neural networks and the artificial neural networks. And then it turns out that the these the networks that were trained with back prop can clear up much more of the variance of these hidden activations than networks that were not trained with back prop. So basically that means if you train a network with back prop, it matches the biological networks much closer in how they form their hidden representations and they they do a number they cite the number of experiments here that show this. And then you do very good evidence that if the hidden representations they look as if they had been computed by back prop and not by any of these scalar updating algorithms. So it is conceivable that we find back prop in the brain. That's why they go here. Next they go into problems with back prop. So basically why why would we why so far have we believed that back prop isn't happening in the brain. So now let's I want to highlight two factors here that that I think are suffice they have more but first of all back prop demands synaptic symmetry in the forward and backward paths. 
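The synaptic-symmetry problem is easiest to see in code. Below is a generic hand-written forward and backward pass for a tiny two-layer network (a standard textbook computation with invented sizes, not the paper's model); note how the backward pass reuses the transposed forward weight matrix W2.T, which is precisely the weight transport that biological neurons are not believed to support:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic two-layer toy network with invented sizes (not the paper's model).
x = rng.normal(size=4)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
target = np.array([1.0, -1.0])

# forward pass
h_pre = W1 @ x
h = np.tanh(h_pre)
y = W2 @ h

# backward pass for the squared error sum((y - target)**2)
dy = 2 * (y - target)                    # signed error vector at the output
dh = W2.T @ dy                           # <-- reuses the forward weights W2,
                                         #     transposed: "synaptic symmetry"
dh_pre = dh * (1 - np.tanh(h_pre) ** 2)  # through the tanh nonlinearity
dW2 = np.outer(dy, h)                    # per-weight update direction for W2
dW1 = np.outer(dh_pre, x)                # per-weight update direction for W1

print(dW1.shape, dW2.shape)
```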
So basically if you have a neuron and it has output to another neuron what you need to be able to do is to pass back information along that neuron so it kind of has to be a symmetric connection. So you have the forward and the backward pass and these need to be exact right and this is just not if you know how neurons are structured they have kind of input tend rights and then there is this action action potential and along the axon the signal travels and the back traveling of the signal just I think is is very is very very slow if even possible. So it's generally not invertible or inverse compute capable. So this is one reason why back prop seems unlikely and then the second reason here is error signals are signed and potentially extreme valued and I want to add to that they also talk about this somewhere that error signals are of a different type. So first let's see what signed error signals are signed yes we need to be able to adjust neurons in a specific directions right if you look at again what we've drawn before here we said here this is how these neurons must must update so the first neuron must must decrease by two this must decrease by three and this must increase by four now in back probably need this but in if we assume that there is something like a reverse computation or signaling here happening then we still have the problem that usually these output signals are in the form of spiking rates which means that over time right so if a neuron wants to if a neuron has zero activation there's just no signal but if a neuron has a high activation it spikes a lot if has a low activation it kind of spikes sometimes well what it can do is negative spike right like zero is as low as it goes so the the thought that there are signed information in in the back or pass is conceivable even if you have something like a second so you can imagine here instead of this back or connection because of the symmetry problem that we have some kind of second neural network that goes in this direction still you'd have the problem that here you can only have positive signal or zero and they might be extreme valued which okay it can't be really encoded with the spiking because they are they're limited in the range they can assume but they are also of a different type and what I mean by that is basically if you think of this as a programming problem then the four word passes here are our activations right and the backward passes here they are deltas so in the backward passes you either propagate deltas or you propagate kind of directions so the activations are sort of impulses whereas the backward signals are this is new how you need to change their gradients ultimately so it's fundamentally a different type of data that is propagated along would be propagated along these directions and that makes it very unlikely because we are not aware as this paper says that the neural networks get neurons can kind of switch the data type that they're they're transmitting all right so then the paper goes into their n-grad hypothesis and what this is the hypothesis basically states that the brain could implement something like neural networks by using an approximate back prop like algorithm based on auto encoders and I want to jump straight into the algorithm no actually first they do talk about auto encoders which which I find very interesting so if you think of auto encoders what is an auto encoder an auto encoder is a network that basically starts out with an input layer and then has a bunch of hidden layers and that 
the end it tries to reconstruct its own input right so you feed a data in here you get data out here and then your error the error signal is will be your difference to your original input now that the usually when we train auto encoders in deep learning we also train this by back prop right we feed and this error here and this goes back but if you just think of single layer auto encoders so let's let's go over here single layer auto encoder with let's say the same number of units in this in this layer what you'll have is so this this is input this is output and this is the hidden layer right you'll have weight matrix here and you'll probably have some sort of non-linear function and then you have another weight matrix here and they call them W and B another way to draw this is I have weight matrix going up then I have an on linear function going transforming this into this signal and then I have the B going back right so I'm drawing I'm drawing it in two different ways up here or over here and with the second way you can see that it is kind of a forward backward algorithm where now the error if you look at what is the error here the error is the difference between this and this and the difference between this and this and the difference between this and this right and you can train an auto encoder simply by saying W you please make sure that the that the the the the inputs here gets mapped closer to the output and the B the same thing this will become clear in a second so but basically sorry I mean the hidden representations you'll see basically the idea is that you can train an auto encoder only by using local update rules you don't have to do back prop and that's what this algorithm is proposing namely if you think of a stack of auto encoders this this this transforming one hidden representation into the next right this is the feet forward function right what you can do is you first of all you can assume that for each of these functions here you have a perfect inverse right you can you can perfectly compute the inverse function that's this this G here of course this doesn't exist but assume you have it what you then could do is you could if if you knew in one layer and on the top layer of course you know if you knew that okay I got this from my forward pass but I would like to have this this is my desired output right so in the output layer you get this this is your error signal if you knew you you you could compute an error right here this is what you do in the output right now in back prop we would back propagate this error along the layers but now we don't do this instead of what we do is we use this G function to invert the F function right and by that what we'll say is what hidden representation in layer two what should the hidden representation have been in order for us to obtain this thing right so the claim here is if in layer two we had had H2 as a hidden representation then we would have landed exactly where we want it right that's what this G function does right because here we use F so had we had F H2 and used F on it we would be exactly where we want instead we had H2 here and used F on it and then we landed here where we don't want so this is where we want we would want to be in layer two and this is where we were so again we can compute an error here again instead of back propagating that error what we'll do is we'll use the inverse of the forward function in order to back propagate our desired hidden representation and you can see there is of course a relationship to the 
true back prop here but the the important distinction is we are not trying to back propagate the error signal we're trying to invert the desired hidden states of the network and then in each layer we can compute from the forward pass we can compute the difference to the desired hidden state and thereby compute an error signal and now we have achieved what we want it we want an algorithm that doesn't do back prop that only uses local information in order to compute the error signal that it needs to adjust and by local I mean information in the same layer and also the data type that is propagated by F is activations right of hidden representations and by G is also activations of hidden representations both of them are always positive can be encoded by spiking neurons and so on so this algorithm achieves what we want they go a bit into detail how the actual error update here can be achieved and apparently neurons can achieve in the same layer to to adjust themselves to a given desired activation so this algorithm achieves it of course we don't have this G we don't have it and therefore we need to go a bit more complicated what they introduce is this following algorithm the goals are the same but now we assume we do not have a perfect inverse but we have something that is a bit like an inverse so we have an approximate inverse and they they basically suggest if we have an approximate inverse we can do the following so G G is now an approximate inverse to F what we can do is this is our input signal right we use F to map it forward to this and so on all the way up until we get our true our error right here this is our error from the environment right this is the nail being wrong and then we do two applications of G right so this is an application of F we do two applications of G one we apply G to this to what we got in the forward pass right and this now gives us a measure of how bad our inverse is right so if G is now an approximate inverse and this now we see here oh okay we we had H2 in the forward pass and we basically forward passed and then went through our inverse and we didn't land quite exactly where we started but we know that okay this this is basically the difference between our our inverse our forward inverse H and our true H and then we also back project using G again the desired outcome so we invert the desired outcome here now before we have adjusted directly these two right because we said this is what we got this is what we want but now we include for the fact that G isn't a perfect inverse and our assumption is that G here probably makes about the same mistakes as G here so what we'll do is we'll take this vector right here and apply it here in order to achieve this thing and this thing is now the corrected thing our corrected desired hidden representation corrected for the fact that we don't have a perfect inverse and now again we have our error here that we can locally adjust again all the signals propagated here here and here are just neural activations and all the information required to update a layer of neurons is now contained within that layer of neurons right and this goes back through the network so this is how they achieve how they achieve this this is a bit of a of a close up look and here are the computations to do this so basically for the forward updates you want to adjust W into the direction of the H minus the H tilde and the H tilde in this case would be this the hidden representation that you would do and the hidden representation that you would like to 
have so you would update your forward forward weights into direction such that your hidden representations are closer sorry that your forward hidden representation is closer to your backward hidden representation and the backward updates now your goal is to get a more a better to make G so sorry W here these are W are the weights of F and B are the weights of G so in the backward updates your goal is to make G a better inverse right so what you'll do is again you'll take the difference between now you see the difference here here here here right not the same error so here you use you in the W update use what we labeled error here in the G update you use this error here so this is the error of G so when you update the function G you want to make these two closer together such that G becomes a better inverse because you're dealing with an approximate inverse you still need to obtain that approximate inverse and this here is how you learn it this algorithm now achieves what we wanted right local updates data types check assigned check and so on I hope this was enough clear in essence is pretty simple but it's pretty cool how they work around this they call this a different starboard propagation and I'm not these these kind of papers I don't think they invented this maybe I'm not sure maybe they did and maybe they didn't and this paper just kind of frames it frames it in this hypothesis it is unclear to me I'm not familiar with this kind of papers so sorry if I misattribute something here all right then they go into into how could these things be implemented biologically and they go for some evidence and they also state that we used to look at neurons basically in this way where you had input and feedback here very simple simplistic view of neurons whereas nowadays even the the computational community views neurons in a more differentiated way where you have for example different regions here on the Soma that can be separated from each other and you have inter neuron interference and so on I'm not qualified too much to comment on this stuff but I invite you to read it for yourself if you want all right so this was my take on this paper I find the algorithm they propose pretty cool if you I hope you liked it and check it out bye bye | [{"start": 0.0, "end": 12.0, "text": " Hi there. Today we're looking at back propagation and the brain by Timothy Lillikrapp, Adam Santoro, Luke Morris, Colin Akerman and Jeffrey Hinton."}, {"start": 12.0, "end": 19.0, "text": " So this is a bit of an unusual paper for the machine learning community but nevertheless it's interesting."}, {"start": 19.0, "end": 28.0, "text": " And let's be honest, at least half of our interest comes from the fact that Jeffrey Hinton is one of the authors of this paper."}, {"start": 28.0, "end": 40.0, "text": " So this is a paper that basically proposes a hypothesis on how the algorithm of back propagation works in the brain."}, {"start": 40.0, "end": 50.0, "text": " Because previously there has been a lot of evidence against there being something like back propagation in the brain."}, {"start": 50.0, "end": 76.0, "text": " So the question is how do neural networks in the brain learn? 
And they say there can be many different ways that neural networks learn and they list them up in this kind of diagram where you have a network and it mobs from input to output by having these weighted connections between neurons."}, {"start": 76.0, "end": 90.0, "text": " So the input is too dimensional and then it maps using these weights to a three dimensional hidden layer. And usually there is a non-linear function somewhere at the output here of these."}, {"start": 90.0, "end": 103.0, "text": " So they do a weighted sum of the inputs and then they do a non-linear non-linear function and then they propagate that signal to the next layer and to finally to the output."}, {"start": 103.0, "end": 112.0, "text": " Alright, so how do these networks learn? The one way of learning is called hebian learning."}, {"start": 112.0, "end": 129.0, "text": " The interesting thing here is that it requires no feedback from the outside world. Basically what you want to do in hebian learning is you want to update the connections such that they kind of match their own previous outputs or even increase their own previous outputs."}, {"start": 129.0, "end": 149.0, "text": " So you propagate a signal and then maybe this neuron spikes really hard and this neuron spikes really low. Then if you propagate the signal again, right, then you want to match that those those activations or if you if you propagate similar signals, no feedback required."}, {"start": 149.0, "end": 163.0, "text": " So basically it's a self amplifying or self dampening process. Ultimately, though you want to learn something about the world, then that means you have to have some some feedback from outside."}, {"start": 163.0, "end": 186.0, "text": " So with feedback, what we mean is usually that the output here, let's put this away, the output here is goes into the world. Let's say this is a motor neuron, right, you do something with your arm like you hammer on a nail."}, {"start": 186.0, "end": 198.0, "text": " And then you either hit the nail or you don't let's say you don't hit the nail. So after it looks like crooked there, you have feedback, right."}, {"start": 198.0, "end": 212.0, "text": " So feedback usually in the form of some sort of error signal, right. So feedback, you can be like this was good or this was bad or it can be this was a bit too much to the left or so on."}, {"start": 212.0, "end": 217.0, "text": " The important part is you get kind of one number of feedback, right."}, {"start": 217.0, "end": 231.0, "text": " How bad you were. And now your goal is to adjust all of the individual neurons or weights between neurons such that the error will be lower."}, {"start": 231.0, "end": 250.0, "text": " So in heavy and learning, there is no feedback. It's just simply a self reinforcing pattern activation machine in the first in these kind of first instances of perturbation learning what you'll have is you'll have one single feedback."}, {"start": 250.0, "end": 262.0, "text": " And that you can see this is a diffuse cloud here. What you're basically saying is that every single neuron is kind of punished. Let's say the feedback here was negative one."}, {"start": 262.0, "end": 277.0, "text": " That means every single neuron is is punished for that. So how you can imagine something if you have your input X and you map it through through your function F."}, {"start": 277.0, "end": 291.0, "text": " And the function F has a weight W1 and so on. Right. So you map X through it. Right. 
And then you get a feedback of negative one."}, {"start": 291.0, "end": 308.0, "text": " And then you map X with a little bit of noise plus N. Right. And you get a feedback of negative two. Right. Then you get that means that the direction of this noise was probably a bad direction."}, {"start": 308.0, "end": 326.0, "text": " So ultimately you want to update X into the direction of negative that noise by modulated. Of course, by by some factor here that's that kind of tells you how bad it was."}, {"start": 326.0, "end": 345.0, "text": " So this could be the negative two minus negative one. Yeah, that makes big sense. No. Yes. That would be no. It would be negative one minus negative. Never mind."}, {"start": 345.0, "end": 357.0, "text": " So basically with a scalar feedback, you simply tell each neuron what it did right or sorry if if the entire network right the entire network did right or wrong."}, {"start": 357.0, "end": 368.0, "text": " So the entire network will lead to this feedback. You don't have accountability of the individual neurons. All you can say is that whatever I'm doing here is wrong and whatever I'm doing here is right."}, {"start": 368.0, "end": 382.0, "text": " So I'm going to do more of the right things. Now in back propagation, it is very different. Right. In back propagation, what you'll do is you will have your feedback here. Let's say that's negative one."}, {"start": 382.0, "end": 404.0, "text": " And then you do a reverse computation. So the forward computation in this case was this weighted sum of this layer. Now you do a layer wise reverse computation, which means that you know how this function here, this output came to be out of the outputs."}, {"start": 404.0, "end": 424.0, "text": " And that means you can inverse and you can do an inverse propagation of the error signal, which is of course the gradient. So this would be your, your, you would derive your error by the inputs to the layer."}, {"start": 424.0, "end": 444.0, "text": " Right. So this basically tells in the back propagation algorithm, you can exactly determine if you are this node, how do I have to adjust my input weights, how do I have to adjust them in order to make this number here go down."}, {"start": 444.0, "end": 459.0, "text": " Right. And then because you always propagate the error according to that, what you'll have in each in each layer is basically a vector target. So it's no longer just one number, but each layer now has a target of vectors."}, {"start": 459.0, "end": 479.0, "text": " And it says, OK, these are the outputs that would be beneficial. Please, this layer, please change your outputs in the direction of negative two, negative three plus four. So you see, this is, so the negative two would be this unit, the negative three would be this unit and the plus four would be this unit."}, {"start": 479.0, "end": 492.0, "text": " So each unit is instructed individually to say, please, this is the direction that each unit should change in in order to make this number go lower."}, {"start": 492.0, "end": 505.0, "text": " You see how this is much more information than the perturbation learning in the perturbation learning all the units simply know, well, before was bad and now is better. So let's, you know, change a bit."}, {"start": 505.0, "end": 526.0, "text": " Here, you have detailed instructions for each unit because of the back propagation algorithm. 
So ultimately, people have kind of thought that since back propagation wasn't really possible with biological neurons that the brain might be doing something like perturbation learning."}, {"start": 526.0, "end": 541.0, "text": " But this paper argues that something like back propagation is not only possible, but likely in the brain and they propose this kind of back prop like learning with the feedback network."}, {"start": 541.0, "end": 560.0, "text": " So they basically concern all the native rentiate hard between these two regimes here. In this hand, you have the scalar feedback, which means that the entire network gets one number as feedback and each neuron just gets that number."}, {"start": 560.0, "end": 585.0, "text": " And here you have vector feedback where each neuron gets an individual instruction of how to update and they achieve this not by back propagation because still the original formulation of back prop as we use it in neural networks is not biologically plausible, but they achieve this with this back prop like learning with feedback network."}, {"start": 585.0, "end": 602.0, "text": " And we'll see how this does, but in essence, this feedback network is constructed such that it can give each neuron in the forward pass here detailed instructions on how to update itself."}, {"start": 602.0, "end": 615.0, "text": " So yeah, they have a little bit of a diagram here of if you do heavy in if this if this is an error landscape, if you do heavy in learning, you're basically you don't care about the error."}, {"start": 615.0, "end": 628.0, "text": " You're just reinforcing yourself. If you do perturbation learning, then you it's very slow because you don't have a detailed signal. You just you just relying on this one number."}, {"start": 628.0, "end": 649.0, "text": " It's kind of if you were to update every single neuron in neural network with reinforcement learning, considering the output of the neural networks or the error, considering that the reward, not using back prop and then with back prop, you have a much smoother, much faster optimization trajectory."}, {"start": 649.0, "end": 664.0, "text": " So they look at this and they they come to some some conclusions. First of all, so here's here's back prop basically same back prop as we said, you have a forward pass."}, {"start": 664.0, "end": 679.0, "text": " And there you simply compute these weighted averages and you also pass them usually through some sort of non linear activation."}, {"start": 679.0, "end": 699.0, "text": " And the cool thing about this is in artificial neural networks is that once the error comes in, you can exactly reverse that. So you can do a backward pass of errors where you can propagate these errors through because you know it's kind of invertible."}, {"start": 699.0, "end": 710.0, "text": " The function doesn't have to be invertible, but the gradients will flow backwards if you know how the forward pass was computed."}, {"start": 710.0, "end": 728.0, "text": " So first of all, they go into a discussion of back prop in the brain. How can we even expect that? And one cool piece of evidence is where I find is that they cite several examples where"}, {"start": 728.0, "end": 736.0, "text": " they use artificial neural networks to learn the same task as humans. Right."}, {"start": 736.0, "end": 753.0, "text": " And or as animal brains. And then I have no clue how how they measure any of this. 
But then they compare the hidden representations of the living neural networks and the artificial neural networks."}, {"start": 753.0, "end": 770.0, "text": " And then it turns out that the these the networks that were trained with back prop can clear up much more of the variance of these hidden activations than networks that were not trained with back prop."}, {"start": 770.0, "end": 787.0, "text": " So basically that means if you train a network with back prop, it matches the biological networks much closer in how they form their hidden representations and they they do a number they cite the number of experiments here that show this."}, {"start": 787.0, "end": 801.0, "text": " And then you do very good evidence that if the hidden representations they look as if they had been computed by back prop and not by any of these scalar updating algorithms."}, {"start": 801.0, "end": 822.0, "text": " So it is conceivable that we find back prop in the brain. That's why they go here. Next they go into problems with back prop. So basically why why would we why so far have we believed that back prop isn't happening in the brain."}, {"start": 822.0, "end": 837.0, "text": " So now let's I want to highlight two factors here that that I think are suffice they have more but first of all back prop demands synaptic symmetry in the forward and backward paths."}, {"start": 837.0, "end": 854.0, "text": " So basically if you have a neuron and it has output to another neuron what you need to be able to do is to pass back information along that neuron so it kind of has to be a symmetric connection."}, {"start": 854.0, "end": 882.0, "text": " So you have the forward and the backward pass and these need to be exact right and this is just not if you know how neurons are structured they have kind of input tend rights and then there is this action action potential and along the axon the signal travels and the back traveling of the signal just I think is is very is very very slow if even possible."}, {"start": 882.0, "end": 908.0, "text": " So it's generally not invertible or inverse compute capable. 
So this is one reason why back prop seems unlikely and then the second reason here is error signals are signed and potentially extreme valued and I want to add to that they also talk about this somewhere that error signals are of a different type."}, {"start": 908.0, "end": 935.0, "text": " So first let's see what signed error signals are signed yes we need to be able to adjust neurons in a specific directions right if you look at again what we've drawn before here we said here this is how these neurons must must update so the first neuron must must"}, {"start": 935.0, "end": 964.0, "text": " decrease by two this must decrease by three and this must increase by four now in back probably need this but in if we assume that there is something like a reverse computation or signaling here happening then we still have the problem that usually these output signals are in the form of spiking rates which means that over time"}, {"start": 964.0, "end": 982.0, "text": " right so if a neuron wants to if a neuron has zero activation there's just no signal but if a neuron has a high activation it spikes a lot if has a low activation it kind of spikes sometimes"}, {"start": 982.0, "end": 1000.0, "text": " well what it can do is negative spike right like zero is as low as it goes so the the thought that there are signed information in in the back or pass is conceivable even if you have something like a second so you can imagine here instead of this back"}, {"start": 1000.0, "end": 1014.0, "text": " or connection because of the symmetry problem that we have some kind of second neural network that goes in this direction still you'd have the problem that here you can only have positive signal or zero"}, {"start": 1014.0, "end": 1025.0, "text": " and they might be extreme valued which okay it can't be really encoded with the spiking because they are they're limited in the range they can assume"}, {"start": 1025.0, "end": 1043.0, "text": " but they are also of a different type and what I mean by that is basically if you think of this as a programming problem then the four word passes here are our activations right and the backward passes here they are deltas"}, {"start": 1043.0, "end": 1065.0, "text": " so in the backward passes you either propagate deltas or you propagate kind of directions so the activations are sort of impulses whereas the backward signals are this is new how you need to change their gradients ultimately"}, {"start": 1065.0, "end": 1089.0, "text": " so it's fundamentally a different type of data that is propagated along would be propagated along these directions and that makes it very unlikely because we are not aware as this paper says that the neural networks get neurons can kind of switch the data type that they're they're transmitting"}, {"start": 1089.0, "end": 1112.0, "text": " all right so then the paper goes into their n-grad hypothesis and what this is the hypothesis basically states that the brain could implement something like neural networks by using an approximate back prop like algorithm based on auto encoders"}, {"start": 1112.0, "end": 1136.0, "text": " and I want to jump straight into the algorithm no actually first they do talk about auto encoders which which I find very interesting so if you think of auto encoders what is an auto encoder an auto encoder is a network that basically starts out with an input layer and then has a bunch of hidden layers"}, {"start": 1136.0, "end": 1156.0, "text": " and that the end it tries to reconstruct its own input right so you feed a data 
in here you get data out here and then your error the error signal is will be your difference to your original input now"}, {"start": 1156.0, "end": 1182.0, "text": " that the usually when we train auto encoders in deep learning we also train this by back prop right we feed and this error here and this goes back but if you just think of single layer auto encoders so let's let's go over here single layer auto encoder with let's say the same number of"}, {"start": 1182.0, "end": 1203.0, "text": " units in this in this layer what you'll have is so this this is input this is output and this is the hidden layer right you'll have weight matrix here and you'll probably have some sort of non-linear function"}, {"start": 1203.0, "end": 1223.0, "text": " and then you have another weight matrix here and they call them W and B another way to draw this is I have weight matrix going up then I have an on linear function going transforming this into this signal and then I have the B going back"}, {"start": 1223.0, "end": 1241.0, "text": " right so I'm drawing I'm drawing it in two different ways up here or over here and with the second way you can see that it is kind of a forward backward algorithm where now the error if you look at what is the error here the"}, {"start": 1241.0, "end": 1257.0, "text": " error is the difference between this and this and the difference between this and this and the difference between this and this right and you can train an auto encoder simply by saying W"}, {"start": 1257.0, "end": 1286.0, "text": " you please make sure that the that the the the the inputs here gets mapped closer to the output and the B the same thing this will become clear in a second so but basically sorry I mean the hidden representations you'll see"}, {"start": 1286.0, "end": 1306.0, "text": " basically the idea is that you can train an auto encoder only by using local update rules you don't have to do back prop and that's what this algorithm is proposing namely if you think of a stack of auto encoders this this this transforming one hidden"}, {"start": 1306.0, "end": 1323.0, "text": " representation into the next right this is the feet forward function right what you can do is you first of all you can assume that for each of these functions here you have a perfect inverse right you can you can perfectly"}, {"start": 1323.0, "end": 1341.0, "text": " compute the inverse function that's this this G here of course this doesn't exist but assume you have it what you then could do is you could if if you knew in one layer"}, {"start": 1341.0, "end": 1358.0, "text": " and on the top layer of course you know if you knew that okay I got this from my forward pass but I would like to have this this is my desired output right so in the output layer you get this this is your error signal"}, {"start": 1358.0, "end": 1383.0, "text": " if you knew you you you could compute an error right here this is what you do in the output right now in back prop we would back propagate this error along the layers but now we don't do this instead of what we do is we use this G function to invert the F function right and by that"}, {"start": 1383.0, "end": 1412.0, "text": " what we'll say is what hidden representation in layer two what should the hidden representation have been in order for us to obtain this thing right so the claim here is if in layer two we had had H2 as a hidden representation then we would have landed exactly where we want it right"}, {"start": 1412.0, "end": 1433.0, "text": " that's what this G function does 
right because here we use F so had we had F H2 and used F on it we would be exactly where we want instead we had H2 here and used F on it and then we landed here where we don't want so this is where we want we"}, {"start": 1433.0, "end": 1454.0, "text": " would want to be in layer two and this is where we were so again we can compute an error here again instead of back propagating that error what we'll do is we'll use the inverse of the forward function in order to back propagate our desired hidden representation"}, {"start": 1454.0, "end": 1482.0, "text": " and you can see there is of course a relationship to the true back prop here but the the important distinction is we are not trying to back propagate the error signal we're trying to invert the desired hidden states of the network and then in each layer we can compute from the forward pass we can compute the difference to the desired hidden state and thereby compute an error signal"}, {"start": 1482.0, "end": 1498.0, "text": " and now we have achieved what we want it we want an algorithm that doesn't do back prop that only uses local information in order to compute the error signal that it needs to adjust and by local"}, {"start": 1498.0, "end": 1522.0, "text": " I mean information in the same layer and also the data type that is propagated by F is activations right of hidden representations and by G is also activations of hidden representations both of them are always positive can be encoded by spiking neurons and so on so this algorithm achieves what we want"}, {"start": 1522.0, "end": 1540.0, "text": " they go a bit into detail how the actual error update here can be achieved and apparently neurons can achieve in the same layer to to adjust themselves to a given desired activation"}, {"start": 1540.0, "end": 1564.0, "text": " so this algorithm achieves it of course we don't have this G we don't have it and therefore we need to go a bit more complicated what they introduce is this following algorithm the goals are the same but now we assume we do not have a perfect inverse but we have something that is a bit like an inverse"}, {"start": 1564.0, "end": 1592.0, "text": " so we have an approximate inverse and they they basically suggest if we have an approximate inverse we can do the following so G G is now an approximate inverse to F what we can do is this is our input signal right we use F to map it forward to this and so on all the way up until we get our true our error right here this is our error from the environment right this is the nail being wrong"}, {"start": 1592.0, "end": 1620.0, "text": " and then we do two applications of G right so this is an application of F we do two applications of G one we apply G to this to what we got in the forward pass right and this now gives us a measure of how bad our inverse is right so if G is now an approximate inverse and this now we see here oh okay"}, {"start": 1620.0, "end": 1642.0, "text": " we we had H2 in the forward pass and we basically forward passed and then went through our inverse and we didn't land quite exactly where we started but we know that okay this this is basically the difference between our our inverse our forward inverse H and our true H"}, {"start": 1642.0, "end": 1663.0, "text": " and then we also back project using G again the desired outcome so we invert the desired outcome here now before we have adjusted directly these two right because we said this is what we got this is what we want"}, {"start": 1663.0, "end": 1692.0, "text": " but now we include for the fact 
that G isn't a perfect inverse and our assumption is that G here probably makes about the same mistakes as G here so what we'll do is we'll take this vector right here and apply it here in order to achieve this thing and this thing is now the corrected thing our corrected desired hidden representation"}, {"start": 1692.0, "end": 1707.0, "text": " corrected for the fact that we don't have a perfect inverse and now again we have our error here that we can locally adjust again all the signals propagated here here and here are just neural activations"}, {"start": 1707.0, "end": 1719.0, "text": " and all the information required to update a layer of neurons is now contained within that layer of neurons right and this goes back through the network"}, {"start": 1719.0, "end": 1748.0, "text": " so this is how they achieve how they achieve this this is a bit of a of a close up look and here are the computations to do this so basically for the forward updates you want to adjust W into the direction of the H minus the H tilde and the H tilde in this case would be this the hidden representation that you would do"}, {"start": 1748.0, "end": 1764.0, "text": " and the hidden representation that you would like to have so you would update your forward forward weights into direction such that your hidden representations are closer sorry that your forward hidden representation is closer to your backward hidden representation"}, {"start": 1764.0, "end": 1793.0, "text": " and the backward updates now your goal is to get a more a better to make G so sorry W here these are W are the weights of F and B are the weights of G so in the backward updates your goal is to make G a better inverse right so what you'll do is again you'll take the difference between now you see the difference here"}, {"start": 1793.0, "end": 1821.0, "text": " here here here right not the same error so here you use you in the W update use what we labeled error here in the G update you use this error here so this is the error of G so when you update the function G you want to make these two closer together such that G becomes a better inverse"}, {"start": 1821.0, "end": 1831.0, "text": " because you're dealing with an approximate inverse you still need to obtain that approximate inverse and this here is how you learn it"}, {"start": 1831.0, "end": 1841.0, "text": " this algorithm now achieves what we wanted right local updates data types check assigned check and so on"}, {"start": 1841.0, "end": 1870.0, "text": " I hope this was enough clear in essence is pretty simple but it's pretty cool how they work around this they call this a different starboard propagation and I'm not these these kind of papers I don't think they invented this maybe I'm not sure maybe they did and maybe they didn't and this paper just kind of frames it"}, {"start": 1870.0, "end": 1899.0, "text": " frames it in this hypothesis it is unclear to me I'm not familiar with this kind of papers so sorry if I misattribute something here all right then they go into into how could these things be implemented biologically and they go for some evidence and they also state that we used to look at neurons basically in this way where you had input and feedback"}, {"start": 1899.0, "end": 1924.0, "text": " here very simple simplistic view of neurons whereas nowadays even the the computational community views neurons in a more differentiated way where you have for example different regions here on the Soma that can be separated from each other and you have inter neuron interference and 
so on"}, {"start": 1924.0, "end": 1944.0, "text": " I'm not qualified too much to comment on this stuff but I invite you to read it for yourself if you want all right so this was my take on this paper I find the algorithm they propose pretty cool if you I hope you liked it and check it out bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=D-eg7k8YSfs | Shortcut Learning in Deep Neural Networks | This paper establishes a framework for looking at out-of-distribution generalization failures of modern deep learning as the result of models learning false shortcuts that are present in the training data. The paper characterizes why and when shortcut learning can happen and gives recommendations for how to counter its effects.
https://arxiv.org/abs/2004.07780
Abstract:
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distil how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
Authors: Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi, today we're looking at shortcut learning in deep neural networks by a number of authors from the University of Tübingen, the Muxplonk Research Center and the University of Toronto. So I'm not going to read all of them, but all of them are either joint first authors or joint senior authors. I just... What is this? Like, it's just a team of people who did this work together. This whole, I have a star, I don't have a star, I have a cross, whatever. Okay, sorry, bit of a rant. All right, so this paper discusses what they call shortcut learning. And they actually, they don't, like, propose something new here, they discuss this phenomenon and they try to link several things together under the name of shortcut learning, which they claim is a problem in current deep learning and they discuss why it happens and what can be done about it. And I just want to jump into this example real quick, right? So in this case, you can see you have a training set of images. And the training set is these four images here along with these labels and also these four images along with these labels. So you can think, you can train a machine learning model, let's say you have a bunch with us and then you're going to test them on the IID test set, right, on this test set. And what you'll find is that if you let a human do this task, right, the human would give this an A, this an A, this a B and this a B, which is what you can think of is probably what a human would do is like, ah, these are the stars and these are the moons, right? And the human would see the stars and the humans would see the moons. And if you do this by the neural network, also you'd get the labels A, A, B and B. And now you go about this out of distribution test set and we'll go over it. Why that is out of distribution in a second. Again, you'll see that the human will classify this as the A's because it has the stars and these as B's. But the neural network will classify these as B's and these as A's. So I'm not saying this is what's going to happen every time, but imagine that happens. And this is a conceivable, conceivable situation. And you can think of what happens here. So you see in the training set, all of the stars were either on the bottom left, right? Or in the top right of the image, where if I, you know, whereas the moons were either in the bottom right or the top left, right? You see that so the neural network might have learned that this is moon and this is moon and this is star and this is star star. And then if it applies that rule to this new test set, right? And you can see that it'll classify these as moons and these as stars, which is incorrect. So this might happen, for example, if the person that wrote the generator for the, for the data set, for some reason, it produced only data here that had this property of the bottom left top right being a star and otherwise being a moon. So what generally happens if we do machine learning test set is we collect a data set, a big data set, but we collected it in a single pass, right? So this is our data set. And what we'll do then is we'll split it, right, into a fairly large train and maybe a bit of a smaller test set, right? But this, it's important that we first collect the data and then second, we randomly split it. Now this out of distribution test set, what that might be is, that might be a second person, right? So this was done in a first step, but then later, right, a person collects another bunch of data. So this is data too. 
And they think it should be the same as this data, but then you apply the, the, the classifier that you learned in train and test, you apply that here, right? So what is different in this case is the data collection process that hands beforehand, right? So somewhere here is the real world. I'm going to draw the real world. This is, it's a globe. This is the real world and you draw data sets from the real world and you draw this data set first and then you split it in train and test and then you draw this data set second. So the second data set has a fundamentally is a different sample of data and this data, whereas these train and tests, you can think of them, they are closer together than these two data sets are here. I think that's, that's all, that's kind of intuitive. So what we usually do is we train here and then we evaluate on the test set, right? But the training test set, they've just, they're just like randomly split versions of the same data set. And that means that if there is some kind of bias in the training set, it's probably also in the test set, like we saw here with the moons, right? So this training set is this, this test set is this and both have this, this moon moon star star property because that was introduced. So this pattern here by accident in this case was introduced when the data was produced, right? Whereas the OOD test set now is maybe a second person writing their own generator that doesn't have this property and then that will lead to this data. And of course, since we train on the, this training, then evaluate on IID data, this is now the IID assumption, the evaluate on the IID test data, we're going to do fairly well with our, you know, crooked decision rule because it has the same bias. But then if we, once we evaluate on the, on the out of distribution data, then we, we will fail, right? Because now this doesn't have this, this bias in it, right? This, this is not here. So short learning refers to the phenomenon that there might be features in the training set that the model starts to learn such, such that it learns something else that we want it to learn, right? We want it to learn the shape here, but it learns something else, it learns the position. And usually these things will not be recognized by generalizing to this test set, right? Because the test set being an IID split from the same training set will have the same biases. And therefore, they will only become apparent once we do out of distribution generalization evaluation. So this is short cut learning. And this paper goes into the origins and kind of descriptions of this. And while I think this is a good approach and paper and it says many correct things, I think the framing is a bit off at times. And we'll go through it. So first of all, they say they give some examples in biological neural networks. So they have this one example where they have a rat and the rat learned to navigate a complex maze, right? Based on color differences of the walls. And this was surprising because rats don't really have color vision or it was kind of known that rats don't have super good color vision. So it was very surprising. And then they discovered that the rats did actually not use the visual system at all. They simply discriminated the colors by the odor of the color paint, right? So if you paint it the wall, the red or blue that smell differently and the rats could smell it. Once they controlled for the smell, the remarkable color discrimination ability disappeared. 
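Coming back to the star/moon setup discussed above, here is a toy numpy sketch (entirely my own construction, not an experiment from the paper) in which a "position" feature is a perfect shortcut in the training data; an IID split inherits the shortcut, while an OOD split from a second "generator" does not:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy version of the star/moon story: feature 0 ~ "shape", feature 1 ~ "position".
# In the training generator the position is perfectly correlated with the label
# and has a larger scale -- an easy shortcut.
def make_data(n, shortcut_correlated):
    y = rng.integers(0, 2, size=n)                       # 0 = star, 1 = moon
    shape = 0.2 * (2 * y - 1) + 0.1 * rng.normal(size=n)
    if shortcut_correlated:
        position = 1.0 * (2 * y - 1) + 0.1 * rng.normal(size=n)
    else:                                                # OOD: position is random
        position = rng.normal(size=n)
    return np.stack([shape, position], axis=1), y

def train_logreg(X, y, steps=2000, lr=0.1):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

X_tr, y_tr   = make_data(2000, shortcut_correlated=True)
X_iid, y_iid = make_data(2000, shortcut_correlated=True)    # same generator
X_ood, y_ood = make_data(2000, shortcut_correlated=False)   # a second generator

w = train_logreg(X_tr, y_tr)
print("weights (shape, position):", w)
print("IID accuracy:", accuracy(w, X_iid, y_iid))
print("OOD accuracy:", accuracy(w, X_ood, y_ood))
```

In this toy setup the logistic regression typically ends up leaning on the higher-magnitude position feature, so IID accuracy stays near perfect while OOD accuracy drops toward chance, mirroring the moon/star failure described above.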
The second example they give here is: Alice loves history, and Alice has spent weeks immersing herself in the world of Hannibal and his exploits in the Roman Empire. But the exam questions are just things like: how many elephants did Hannibal employ in his army? So the exam questions are multiple choice, not focused on understanding, and Bob, who just learned the facts by heart, is now doing much better than Alice, who has actually understood the topic. So they give these as examples of shortcut learning, where the model learns something that we don't intend it to do. And I think this is the crucial point: the model learns something that we don't intend it to do. This might seem pretty clear when you observe it, but what do we want? We want shape, and the model learns something else, just something else. And the crucial part here, and I think this paper isn't putting as much emphasis on it as it deserves, is the two words we and want. So basically my answer to this is, first of all, the word want. We want shape. And my comment on this is: you can't formulate that. You can't formulate it. This is very crucial, and I think the paper almost ignores this point. You cannot formulate what it means to classify things by shape. And this seems so obvious to us because we're so used to it as humans, we're just like, oh, just use the shape. But you cannot program a computer to do this. That's why we use deep learning in the first place: because we have no freaking idea how to program an algorithm that extracts the shape of something. It might be possible for a star and a moon, but not for a cat or a car or anything. So you cannot formulate your objective. That's the problem. And it's easy to then say, oh, the model doesn't do what we wanted it to do, when you can't even formulate what you wanted it to do in a precise way. So basically what you're saying here is: I'll train a shape classifier. Once you've gone through this process of training and evaluating, you say, ah, now I have a shape classifier. Say you've gone through this training and evaluation, and you now claim, you proclaim: I have trained a shape classifier. No, no. You have trained something that, given the entire process of how you create your data, can classify these two images. So here is your generator. This is your little program that you wrote to produce these images. Your generator at random picks either the star or the moon, it does these two things, and then it creates the image from it, and that gives you your data set. What you have trained is not a shape classifier. What you have trained is a classifier that can distinguish data that comes from this data generation process. The entire notion of calling it a shape classifier exists because you as a human thought of shape when you programmed this generator, when you collected the data set, that's what you thought of. But this isn't the case. You call it a shape classifier just because that was your intent. What you have is a classifier that classifies images from this particular data generation process, and you can't actually formulate a shape classifier.
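To make this concrete, here is a minimal sketch of the kind of generator described above and of the location shortcut it enables. This is hypothetical illustration code, not anything from the paper; the generator names and the exact positional bias are assumptions made up for the example.

```python
import random

def generator_one(n):
    """Toy data generator with a positional shortcut:
    stars only appear bottom-left/top-right, moons only top-left/bottom-right."""
    data = []
    for _ in range(n):
        shape = random.choice(["star", "moon"])
        if shape == "star":
            position = random.choice(["bottom-left", "top-right"])
        else:
            position = random.choice(["top-left", "bottom-right"])
        data.append((shape, position))
    return data

def generator_two(n):
    """A second, unbiased generator: shape and position are independent."""
    positions = ["bottom-left", "top-right", "top-left", "bottom-right"]
    return [(random.choice(["star", "moon"]), random.choice(positions))
            for _ in range(n)]

def location_rule(position):
    """The shortcut 'classifier': it ignores shape entirely."""
    return "star" if position in ("bottom-left", "top-right") else "moon"

def accuracy(data):
    return sum(location_rule(pos) == shape for shape, pos in data) / len(data)

random.seed(0)
train = generator_one(1000)      # training set
iid_test = generator_one(1000)   # IID test set: same generator, same bias
ood_test = generator_two(1000)   # OOD test set: a different generator

print(accuracy(train), accuracy(iid_test), accuracy(ood_test))
# roughly 1.0, 1.0 and 0.5: the rule looks perfect until the bias is gone
```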
Okay. The second word is we. We humans, right? We want a shape classifier. Now I've said this before, and this very much refers back to, for example, the paper about the contrast sets in NLP and so on. Humans have grounded knowledge. Humans have grounding. This is very important here. Grounding means that humans live in a world of physics and culture, and of the need for food, of biology. Humans live in this world, and this generated our brain. So what that means is that humans live in a world of objects and of people, and of being eaten or needing to eat food. Humans grew up and live in this world. Your brain was literally structured according to these things. And thus we understand everything with an eye to this grounded knowledge of reality, where there is such a thing as objects. Now, if you have ImageNet and you train a classifier for objects, this is what I find so crazy: you collect this thing and there's a car and you say, that's a car. You know this is a car because there is an object of a car. But the neural network does not have a bias for objects. The neural network simply sees these pixels, same here. What you will do immediately here is you'll recognize the object of the star. You will transform this into a 3D scene, into a 3D space where you are here watching and there is this star object somewhere here. And then you understand that the star could move around and it would still be the same star, but that's because you have the inherent bias of there being objects, and shape, for example, the word shape is nothing more than a property of an object. And neural networks simply do not have an inherent bias for objects, or people, or intent, or what it means to eat. This becomes super obvious if you ever try to solve, for example, a jigsaw puzzle, you know, like these things here. I'm terrible at this. Try solving it upside down: say the puzzle has a face on it, and you try to solve it the normal way versus on its head. Try it. It's the same task, you simply need to match the border shapes and make sure the lines of the picture are continuous. It becomes so much harder, just because you have this brain. And so that is my entire criticism, and it will pull through this entire paper, and we'll go through it relatively quickly now because we've already touched on it. Keep in mind this is my commentary on it. This is not superior knowledge or something, that's just me. All right. So what they do is they have this taxonomy of decision rules that they point out. What they're saying is: there is a set of all possible decision rules, this is the outer set here, all possible decision rules that one could think of to discriminate data, and let's say we talk about images here, to discriminate images. Most of them will just be crap, and these use what they call uninformative features. But then there are some decision rules that perform well on training data, this is this big circle here. So there are some decision rules that perform well on the training set, and they call these overfitting features. So these are all the features that perform well on the training set, only on the training set, sorry, the overfitting features.
But to me, it's a bit unclear. I think they only call this band the overfitting features, and they call the entire circle all possible training solutions. In any case, there are decision rules that perform well on the training set, but some of them are overfitting, as you know that problem. The next circle inside of that is all decision rules that perform well on the training set and on the IID test set, and our location classifier from before would fall into this category. And these are still a much larger set, as you see here, than this inside set of intended solutions, which perform well on the training set, the IID test set and all relevant out-of-distribution test sets. And then they draw this in here: the solutions that work on the OOD test sets are a subset of the solutions that work well on the IID test set. And I don't have a problem with this characterization of decision rules, right, here specifically these are decision rules. What I do have a problem with in this characterization is the fact that you cannot specify what the intended solution is. You cannot, and therefore this diagram, I think, is misleading, because ultimately you have no idea where this dot is. You can't specify it beforehand. You can't even specify the rules for how you get there. All you can do is give better data, and they kind of advocate for this here with these OOD test sets. But again, when they say all relevant out-of-distribution test sets, I'm a bit wary, because they suggest, as one of the measures to assess whether or not a model has learned these shortcut rules, measuring its performance on out-of-distribution test sets. This is very much like the contrast sets in NLP. But I think this is actually a pretty bad solution in most cases, and let me explain why. So if we go back to here, what we saw is that this discrepancy comes about because here, from the real world, we produce the data in a very specific form, and then this other out-of-distribution test set is produced in a slightly different form. Now, think of it like this: if you look at the cost function that you train, what you usually say is, my cost function is some sort of a loss over my data points and my labels. But what is often left out, I mean, you write it in your introductory classes, is that this is an expected loss that you're minimizing, over a data distribution, over x and y sampled from a particular data distribution. Now, when you talk about these out-of-distribution classifiers, what you'll have is a slightly different data distribution, D prime. So if you simply have one out-of-distribution set, think of this as the contrast set; if you haven't seen the video about contrast sets, that's basically a handcrafted out-of-distribution test set. My problem with this: it's just one. It's a single one. And I think even if you try ten of them, ten of those sets, you won't even get close to a true measure, because the cool thing about an IID test set is that at least it is precisely the same distribution, so it gives you an unbiased number for this particular data generation pipeline. If you evaluate on an out-of-distribution test set, you now have two effects. You first have this generalization effect.
And you have the effect of having produced this data in a different fashion, but you only have one such set. What you would like to assess is your loss on x and y, in expectation, with x and y coming from a data distribution, and, let me use a different color, in expectation with that data distribution coming from all possible data distributions in the real world. That's what you would like to do. If you only have a single contrast set, it is akin to this: how good a machine learning engineer would I be if my test set here only had one sample? So I give you a train and a test set, make a Kaggle challenge out of it, and I say your performance will be evaluated on this one single-sample test set. That's basically what you're doing with a single OOD test set: I'm going to give you one out-of-distribution data set that I have biased in one particular way, and we'll measure how well you capture our intent, our shape-classifier intent, using this one single out-of-distribution thing. What that will do, you say, I want to approximate this expectation by a sum from i equals one to one, is just pump the variance beyond any reasonable meaning that the resulting number could give you (there is a small numerical sketch of this below). What you'd have to do is take this entire process and sample train and test sets, or at least test sets, according to this underlying data distribution, which you have no clue what it is, because if you could specify it directly, you would get the solution for free. If you could specify the underlying mechanism, you would already know the solution, you wouldn't need machine learning. So I think the paper puts a bit too little emphasis on this. To state it with their taxonomy: if you use, for example, only the overfitting features, then you will do well on the training set, but not on the IID and OOD test sets. If you use the intended features, again, intended, no one knows what that is, no one can specify it, you'll do well on all the OOD test sets. If you use the shortcut features, you will do well on the training and IID test sets, but not on the OOD test sets. This is valid, right? I'm not discrediting this paper here, and they do allude to a lot of the things I'm saying, but not all, and I don't think they frame it correctly. So they ask: shortcuts, where do they come from? And they say a lot of the things that I've been saying here. But for example, they ask, what makes a cow a cow? And they give this example here where they say a familiar background can be as important for recognition to deep neural networks, where the deep neural networks will misclassify this picture because they are used to seeing a cow on grass. Now consider this in our framework. Let's say this is an ImageNet classifier. ImageNet is not an object classifier. It is not, right? That's what we say, that's our intent. But what it is, if you go through the pipeline of how the data is generated, is a classifier of naturally taken images, with a certain camera, center cropped to a particular object, labeled by human raters, filtered in some capacity, from Flickr. And for that particular data set, we train a classifier.
It is not an object classifier, it is a classifier for that data, and it has no clue of objects. In fact, what you also have to see is that even if the output says shape, it isn't shape, it is actually the probability of the shape, or the probability of the object, or the probability of something. And it is completely conceivable that if there is no grass in the background, it is probably less likely to be a cow. Now, I see the problem here: this is clearly a cow, and this is actually a conceivable natural image. But imagine a picture of a cow, oops, what happened, on the moon. This is the moon, and here's the cow. Cow, moon, this drawing is terrible. So, a cow on the moon. Who can fault the neural network? I would say that's not a cow either, because in terms of the data generation process, if you ask me, please classify this as a natural image that has been taken, blah, blah, blah, I'm going to say there's no way there's a cow on the moon. I don't know what this is, but it is very improbable that this is a cow, because in all the training examples I've seen, cows come with grass. So yeah, they do actually allude to this, they call it data set biases and so on, but I'm pretty sure the interpretation is just a bit off when they frame it as: we want an object classifier, but this we want is exactly the problem. The second one I find even more strange is where they say shortcuts from discriminative learning, and they allude to this picture here and ask what makes a cat a cat. Their argument is basically that the neural networks don't understand things, they just discriminate: they have these thousand classes in the output layer, this is the neural network, and they just need to discriminate, so they just need to learn what is different from one class to the other class. And they will often rely on features such as texture, like here, so they rely on texture detection, and they classify this as an elephant. So they say: what makes a cat a cat to standard DNNs? The example image on the left clearly shows an elephant, not a cat. And again, I agree: if you tell me this is data from naturally taken images with standard cameras, then I have two possibilities. I ask, is this a cat? There is no way that if you take a picture anywhere in the universe with, like, a phone camera, of a cat, it will look like this. No way, it's just not possible. However, is it possible that there is an elephant, big ears, a trunk, that has a skin-fold pattern that by random chance looks like the shape of a cat? Yes, that's possible. So if you ask me, according to the data generation process, this is way more likely to be an elephant than a cat. And the paper here makes it seem like it is so obvious that this is a cat, but what do these standard DNNs think? It's an elephant, not a cat. And the DNN, oh, it's just looking at object texture and other local structures and not shape, which is what we want it to do. And this is where I say: just stop calling things object classifiers if they're not object classifiers, they're classifiers of images from a data generation process. If you want them to be object classifiers, make up a data set that actually has different objects, but you can't specify that.
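Coming back to the earlier point about approximating the expectation over data distributions with a sum from i equals one to one: here is a small numerical sketch of why a single-OOD-set estimate carries too much variance to mean much. All numbers are made up; the "worlds" are stand-ins for possible data-generation processes, not anything measured in the paper.

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: each "world" is one possible data-generation process,
# and our model has some loss on each of them.  These numbers are invented.
def sample_world_loss():
    return random.gauss(mu=0.30, sigma=0.15)  # loss of the model on one random world

def estimate(num_worlds):
    """Monte Carlo estimate of the expected loss using num_worlds OOD test sets."""
    return statistics.mean(sample_world_loss() for _ in range(num_worlds))

# Repeat each estimation procedure many times to see how much it fluctuates.
single_set = [estimate(1) for _ in range(1000)]    # one hand-crafted OOD set
many_sets = [estimate(100) for _ in range(1000)]   # one hundred OOD sets

print("1 OOD set:    mean %.3f, std %.3f"
      % (statistics.mean(single_set), statistics.stdev(single_set)))
print("100 OOD sets: mean %.3f, std %.3f"
      % (statistics.mean(many_sets), statistics.stdev(many_sets)))
# Both estimates center around 0.30, but the single-set estimate swings by
# roughly 0.15, which is huge compared to the differences we usually care about.
```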
So yeah, and then they go into adversarial examples, and I find this to be a bit, maybe, not belonging here, where they say, oh look, here the DNN predicts guitar with high certainty. Again, it's just a discriminator; this pattern, why not a guitar? If you had to get one of the thousand classes out, why could this not be most likely a guitar? But I have a further problem with this. So what you have is IID data; let's go with their taxonomy and say IID data is from the same generation process, and then there is OOD data. Now, I think there are a number of effects here that they try to lump together, where they just say: whenever my model doesn't work on OOD data, it has learned a shortcut, and that is very weird. So first of all, I would say the OOD data you can probably divide into what I would call unnatural OOD data; let's say our task here is to build an object detector, whatever that means, for natural images. So there's unnatural OOD data, and in here you'll find something like adversarial examples. Adversarial examples are constructed, at least if you go by the interpretation of Madry and the adversarial examples are features, not bugs line of work, by combining features that don't naturally go together. So you take the low-frequency features of a cat and add, for example, the high-frequency features of a dog, mixed in with a lambda factor so high that to a DNN it looks like a dog, because it has many of the dog features, but to a human, who kind of ignores the high-frequency features, it looks like a cat. But these are unnatural, because in actual nature, in the real world, the features never occur in this combination. So it seems like this is a very different phenomenon from what I would call natural OOD, where simply the features that you're seeing have never occurred in the training data set, but, if you go from the real world and construct data sets in different ways, there is some data set where the data actually occurs in the way you have here. So natural OOD data is what most of the examples so far were about, like a cow on the beach. It's just that you've never seen that, because your data generation always gave you cow plus grass. And then the last thing they also lump in here is fairness, like the fairness and bias literature, where, for example, you have a resume classifier, and the resume classifier ends up being biased by gender or something like this. And again, I kind of struggle with this; they say not all the fairness problems come from here, but I would also like to stress that some of the fairness problems go exactly here. They occur because your data generation process is different from what you want. For example, if you build this hiring classifier, you have to understand what that is. What you're training is a system that will tell you: how would my human data set creators have decided on this particular application? Now, of course, there is this problem of bias amplification and so on, but it is not an infallible system, it simply tells you how the humans would have decided. And if you collect the data set in a biased way, of course, the machine will inherit that.
But on the other hand, for fairness, here is why I don't think this really belongs in here: in fairness, you actually have an alternate world, I'll draw this in green, world prime. In this OOD and IID setting, you always assume that the world is the world, and you want to learn a system that really understands the world. Whereas in fairness, this here is your superimposed world. So actually, for the fairness literature, it doesn't really matter whether in the real world, let's say, two groups of people are equal in some respect or not equal in the true world. What they care about is that they are treated equally by the system. So they will impose some restriction or some condition on their model. And, this sounds bad, but the mathematical formulation is such that you start with the superimposed knowledge that two things must be equal; this is how you imagine your world, you think, I know the world, and then I try to learn the model such that that happens. Whereas over here, you do something different. Now some of it is, as I said, in the same category, but it is, I think, a different take and a different literature. So I would focus on, let's say, this part right here, and not on the adversarial examples, and also not on the fairness literature too much. All right. So yeah, you can see here, no wonder this screws up an ImageNet classifier. And even this, how do we know that this is actually natural? Though I can see that it looks pretty natural, but still, it's probably really specifically constructed such that the probability that someone would take this picture with a camera in the real world is zero. Cool. So they give some examples where they say, okay, shortcut learning exists in computer vision, for example adversarial examples, or shifting the image by a few pixels, though you have to say shifting the image very precisely by a few pixels, such that the probability of this occurring in the data generation pipeline is zero. And so on, then they call it domain transfer; that, of course, I think is a good example. They say natural language processing, where BERT has been found to rely on superficial keywords. For instance, it learned, within a data set of natural language arguments, that detecting the presence of the word not was sufficient to perform above chance in finding the correct line of argumentation. Again, all we can do is construct data sets. If we could tell the model what to look at, we would just program the solution. So there's only one solution: program better data sets, get better data sets. Or, okay, the second solution is to get better inductive biases, but if we knew the correct inductive biases, we wouldn't have the problem. In NLP this is very, very prevalent, even more than in vision, this fact of, hey, these spurious correlations in NLP: the models usually just learn some kind of correlation between some words, and then they don't learn to understand the sentences at all. But this is because in NLP we have even more problems with constructing data sets that force the model to learn to understand the text. Again, I could not tell you what understanding the text means. And, by the way, humans do that too.
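As a small aside on the not keyword shortcut mentioned above, here is a toy illustration. The sentences, labels and the rule are invented; this is not the actual argument-reasoning data set or BERT, just the shape of the kind of annotation artifact being described.

```python
# Toy argument examples as (sentence, label).  In this made-up data the
# "incorrect" answers happen to contain the word "not", the kind of
# artifact a keyword-based shortcut can exploit.
examples = [
    ("the policy will help, because funding increases",              "correct"),
    ("the policy will help, because people support it",              "correct"),
    ("the policy will help, because costs are recovered",            "correct"),
    ("the policy will not help, because funding does not increase",  "incorrect"),
    ("the policy will help, because critics are not right",          "incorrect"),
    ("the policy will help, because it does not raise costs",        "incorrect"),
]

def keyword_rule(sentence):
    """A 'classifier' that understands nothing and only checks for 'not'."""
    return "incorrect" if " not " in f" {sentence} " else "correct"

hits = sum(keyword_rule(sentence) == label for sentence, label in examples)
print(f"{hits}/{len(examples)} correct")  # 6/6 on this biased toy set
```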
Humans, in most of the NLP that happens between humans, do this too, in many, many forms. This is simply because the cost function is not aligned with what you would want. A specific example: news stories nowadays. You say news, what do people expect, what is the intent of news? The intent of news is to inform you, maybe. But the cost function is clicks. So what do you do? You write a news story and at the very top, in the title, you say orange man bad, and you highlight this. So a news story, I don't know, Brad Pitt had a new baby, and you just append but orange man bad, and people click on it much more. Your clicks go up, sorry, your cost function goes up. And I think this happens everywhere. We can't even get this right with humans, so what do you expect neural networks to do? All right, agent-based reinforcement learning, I find this pretty funny, where it learned how to play Tetris: instead of learning how to play Tetris, an algorithm simply learned to pause the game indefinitely to avoid losing. Come on, it's genius, right? It is objectively genius. And then of course fairness and algorithmic decision making. Right, so they say understanding these shortcuts, and they touch on a lot of the things that I've touched on, including, for example, this Morgan's Canon for machine learning, where you say that a machine learning system will probably learn the easiest feature it can, and that's oftentimes not what you want; so this even amplifies things. They also touch on this thing of anthropomorphism, where you view everything through a human lens, and that is not correct: if you look at these neural networks, they're not humans, and we should never attribute humanness to their solutions; never attribute to high-level abilities that which can be adequately explained by shortcut learning. Yes, I agree with this. I agree with this paper in all the things it says, except this: detecting shortcuts, making OOD generalization tests a standard practice. For the reasons I specified before, I think that is counterproductive. And yeah, I think I've already said enough. Designing good OOD tests: you can only design good OOD tests if you know the real underlying data distribution, which you don't. And let's go through the rest. Yeah, again, the principle of least effort: why are shortcuts learned? Because it's just easier. It's just easier to write a news story with just the words you know people will click on, or these top-ten-things-of-blah-blah-blah, number seven will surprise you. You don't actually have to come up with ten relevant things; the title alone is enough to get you the clicks. So it's the least effort to solve the cost function, and it might not align with what you want. And also the inductive biases: as I said, we are humans, we have some inductive biases that neural networks don't have, and we need to take this into account. But the solution is to make training data sets that take this into account. All right, they say beyond shortcut learning, this is kind of an outlook, and then a conclusion where they recap, but we're already at some 45 minutes of video, and if you're still here, respect, or maybe you just have this in the background and have some company during this time. I will finish by saying thank you for watching, and leave your comments.
Since this is mostly opinion, I would be interested in hearing your comments on this. With that, I say bye-bye. | [{"start": 0.0, "end": 7.88, "text": " Hi, today we're looking at shortcut learning in deep neural networks by a number of authors"}, {"start": 7.88, "end": 15.72, "text": " from the University of T\u00fcbingen, the Muxplonk Research Center and the University of Toronto."}, {"start": 15.72, "end": 20.48, "text": " So I'm not going to read all of them, but all of them are either joint first authors"}, {"start": 20.48, "end": 24.52, "text": " or joint senior authors."}, {"start": 24.52, "end": 27.44, "text": " I just..."}, {"start": 27.44, "end": 28.44, "text": " What is this?"}, {"start": 28.44, "end": 32.36, "text": " Like, it's just a team of people who did this work together."}, {"start": 32.36, "end": 38.36, "text": " This whole, I have a star, I don't have a star, I have a cross, whatever."}, {"start": 38.36, "end": 41.760000000000005, "text": " Okay, sorry, bit of a rant."}, {"start": 41.760000000000005, "end": 47.400000000000006, "text": " All right, so this paper discusses what they call shortcut learning."}, {"start": 47.400000000000006, "end": 54.0, "text": " And they actually, they don't, like, propose something new here, they discuss this phenomenon"}, {"start": 54.0, "end": 59.92, "text": " and they try to link several things together under the name of shortcut learning, which"}, {"start": 59.92, "end": 67.4, "text": " they claim is a problem in current deep learning and they discuss why it happens and what"}, {"start": 67.4, "end": 69.08, "text": " can be done about it."}, {"start": 69.08, "end": 72.96000000000001, "text": " And I just want to jump into this example real quick, right?"}, {"start": 72.96000000000001, "end": 78.64, "text": " So in this case, you can see you have a training set of images."}, {"start": 78.64, "end": 85.48, "text": " And the training set is these four images here along with these labels and also these"}, {"start": 85.48, "end": 88.92, "text": " four images along with these labels."}, {"start": 88.92, "end": 94.04, "text": " So you can think, you can train a machine learning model, let's say you have a bunch"}, {"start": 94.04, "end": 103.32, "text": " with us and then you're going to test them on the IID test set, right, on this test set."}, {"start": 103.32, "end": 110.67999999999999, "text": " And what you'll find is that if you let a human do this task, right, the human would give"}, {"start": 110.67999999999999, "end": 116.03999999999999, "text": " this an A, this an A, this a B and this a B, which is what you can think of is probably"}, {"start": 116.03999999999999, "end": 121.32, "text": " what a human would do is like, ah, these are the stars and these are the moons, right?"}, {"start": 121.32, "end": 125.28, "text": " And the human would see the stars and the humans would see the moons."}, {"start": 125.28, "end": 131.76, "text": " And if you do this by the neural network, also you'd get the labels A, A, B and B. 
And"}, {"start": 131.76, "end": 137.28, "text": " now you go about this out of distribution test set and we'll go over it."}, {"start": 137.28, "end": 140.28, "text": " Why that is out of distribution in a second."}, {"start": 140.28, "end": 147.0, "text": " Again, you'll see that the human will classify this as the A's because it has the stars"}, {"start": 147.0, "end": 148.72, "text": " and these as B's."}, {"start": 148.72, "end": 155.44, "text": " But the neural network will classify these as B's and these as A's."}, {"start": 155.44, "end": 161.07999999999998, "text": " So I'm not saying this is what's going to happen every time, but imagine that happens."}, {"start": 161.08, "end": 165.64000000000001, "text": " And this is a conceivable, conceivable situation."}, {"start": 165.64000000000001, "end": 167.56, "text": " And you can think of what happens here."}, {"start": 167.56, "end": 176.92000000000002, "text": " So you see in the training set, all of the stars were either on the bottom left, right?"}, {"start": 176.92000000000002, "end": 183.64000000000001, "text": " Or in the top right of the image, where if I, you know, whereas the moons were either"}, {"start": 183.64000000000001, "end": 188.56, "text": " in the bottom right or the top left, right?"}, {"start": 188.56, "end": 196.48, "text": " You see that so the neural network might have learned that this is moon and this is moon"}, {"start": 196.48, "end": 206.52, "text": " and this is star and this is star star."}, {"start": 206.52, "end": 214.16, "text": " And then if it applies that rule to this new test set, right?"}, {"start": 214.16, "end": 221.6, "text": " And you can see that it'll classify these as moons and these as stars, which is incorrect."}, {"start": 221.6, "end": 227.72, "text": " So this might happen, for example, if the person that wrote the generator for the, for the"}, {"start": 227.72, "end": 239.68, "text": " data set, for some reason, it produced only data here that had this property of the bottom"}, {"start": 239.68, "end": 243.51999999999998, "text": " left top right being a star and otherwise being a moon."}, {"start": 243.52, "end": 249.4, "text": " So what generally happens if we do machine learning test set is we collect a data set,"}, {"start": 249.4, "end": 253.84, "text": " a big data set, but we collected it in a single pass, right?"}, {"start": 253.84, "end": 256.16, "text": " So this is our data set."}, {"start": 256.16, "end": 264.72, "text": " And what we'll do then is we'll split it, right, into a fairly large train and maybe"}, {"start": 264.72, "end": 268.88, "text": " a bit of a smaller test set, right?"}, {"start": 268.88, "end": 275.0, "text": " But this, it's important that we first collect the data and then second, we randomly split"}, {"start": 275.0, "end": 277.6, "text": " it."}, {"start": 277.6, "end": 285.64, "text": " Now this out of distribution test set, what that might be is, that might be a second person,"}, {"start": 285.64, "end": 286.64, "text": " right?"}, {"start": 286.64, "end": 292.4, "text": " So this was done in a first step, but then later, right, a person collects another bunch"}, {"start": 292.4, "end": 294.0, "text": " of data."}, {"start": 294.0, "end": 296.44, "text": " So this is data too."}, {"start": 296.44, "end": 304.2, "text": " And they think it should be the same as this data, but then you apply the, the, the"}, {"start": 304.2, "end": 308.92, "text": " classifier that you learned in train and test, you apply that here, right?"}, {"start": 308.92, 
"end": 315.15999999999997, "text": " So what is different in this case is the data collection process that hands beforehand,"}, {"start": 315.15999999999997, "end": 316.32, "text": " right?"}, {"start": 316.32, "end": 318.76, "text": " So somewhere here is the real world."}, {"start": 318.76, "end": 320.72, "text": " I'm going to draw the real world."}, {"start": 320.72, "end": 322.92, "text": " This is, it's a globe."}, {"start": 322.92, "end": 328.92, "text": " This is the real world and you draw data sets from the real world and you draw this data"}, {"start": 328.92, "end": 335.04, "text": " set first and then you split it in train and test and then you draw this data set second."}, {"start": 335.04, "end": 342.12, "text": " So the second data set has a fundamentally is a different sample of data and this data,"}, {"start": 342.12, "end": 348.8, "text": " whereas these train and tests, you can think of them, they are closer together than these"}, {"start": 348.8, "end": 351.44, "text": " two data sets are here."}, {"start": 351.44, "end": 354.92, "text": " I think that's, that's all, that's kind of intuitive."}, {"start": 354.92, "end": 366.04, "text": " So what we usually do is we train here and then we evaluate on the test set, right?"}, {"start": 366.04, "end": 369.92, "text": " But the training test set, they've just, they're just like randomly split versions of"}, {"start": 369.92, "end": 371.28, "text": " the same data set."}, {"start": 371.28, "end": 378.04, "text": " And that means that if there is some kind of bias in the training set, it's probably"}, {"start": 378.04, "end": 382.28000000000003, "text": " also in the test set, like we saw here with the moons, right?"}, {"start": 382.28000000000003, "end": 389.84000000000003, "text": " So this training set is this, this test set is this and both have this, this moon moon"}, {"start": 389.84000000000003, "end": 394.0, "text": " star star property because that was introduced."}, {"start": 394.0, "end": 400.88, "text": " So this pattern here by accident in this case was introduced when the data was produced,"}, {"start": 400.88, "end": 401.88, "text": " right?"}, {"start": 401.88, "end": 408.15999999999997, "text": " Whereas the OOD test set now is maybe a second person writing their own generator that"}, {"start": 408.15999999999997, "end": 412.76, "text": " doesn't have this property and then that will lead to this data."}, {"start": 412.76, "end": 419.32, "text": " And of course, since we train on the, this training, then evaluate on IID data, this"}, {"start": 419.32, "end": 425.96, "text": " is now the IID assumption, the evaluate on the IID test data, we're going to do fairly"}, {"start": 425.96, "end": 431.0, "text": " well with our, you know, crooked decision rule because it has the same bias."}, {"start": 431.0, "end": 442.28, "text": " But then if we, once we evaluate on the, on the out of distribution data, then we, we"}, {"start": 442.28, "end": 444.36, "text": " will fail, right?"}, {"start": 444.36, "end": 448.04, "text": " Because now this doesn't have this, this bias in it, right?"}, {"start": 448.04, "end": 450.52, "text": " This, this is not here."}, {"start": 450.52, "end": 460.08, "text": " So short learning refers to the phenomenon that there might be features in the training"}, {"start": 460.08, "end": 469.64, "text": " set that the model starts to learn such, such that it learns something else that we want"}, {"start": 469.64, "end": 470.91999999999996, "text": " it to learn, right?"}, {"start": 
470.91999999999996, "end": 477.79999999999995, "text": " We want it to learn the shape here, but it learns something else, it learns the position."}, {"start": 477.79999999999995, "end": 486.91999999999996, "text": " And usually these things will not be recognized by generalizing to this test set, right?"}, {"start": 486.92, "end": 493.92, "text": " Because the test set being an IID split from the same training set will have the same biases."}, {"start": 493.92, "end": 499.40000000000003, "text": " And therefore, they will only become apparent once we do out of distribution generalization"}, {"start": 499.40000000000003, "end": 501.20000000000005, "text": " evaluation."}, {"start": 501.20000000000005, "end": 504.08000000000004, "text": " So this is short cut learning."}, {"start": 504.08000000000004, "end": 509.48, "text": " And this paper goes into the origins and kind of descriptions of this."}, {"start": 509.48, "end": 516.8000000000001, "text": " And while I think this is a good approach and paper and it says many correct things, I"}, {"start": 516.8, "end": 522.4, "text": " think the framing is a bit off at times."}, {"start": 522.4, "end": 523.4, "text": " And we'll go through it."}, {"start": 523.4, "end": 530.76, "text": " So first of all, they say they give some examples in biological neural networks."}, {"start": 530.76, "end": 537.7199999999999, "text": " So they have this one example where they have a rat and the rat learned to navigate a complex"}, {"start": 537.7199999999999, "end": 540.1999999999999, "text": " maze, right?"}, {"start": 540.1999999999999, "end": 543.1999999999999, "text": " Based on color differences of the walls."}, {"start": 543.2, "end": 550.0, "text": " And this was surprising because rats don't really have color vision or it was kind of known"}, {"start": 550.0, "end": 553.1600000000001, "text": " that rats don't have super good color vision."}, {"start": 553.1600000000001, "end": 554.9200000000001, "text": " So it was very surprising."}, {"start": 554.9200000000001, "end": 560.76, "text": " And then they discovered that the rats did actually not use the visual system at all."}, {"start": 560.76, "end": 566.44, "text": " They simply discriminated the colors by the odor of the color paint, right?"}, {"start": 566.44, "end": 571.24, "text": " So if you paint it the wall, the red or blue that smell differently and the rats could"}, {"start": 571.24, "end": 572.44, "text": " smell it."}, {"start": 572.44, "end": 580.08, "text": " Once they controlled for the smell, the remarkable color discrimination ability disappeared."}, {"start": 580.08, "end": 591.24, "text": " So the second example they give here is, so Alice loves history and Alice had spent weeks"}, {"start": 591.24, "end": 596.24, "text": " immersing herself in the world of Hannibal and his exploits in the Roman Empire."}, {"start": 596.24, "end": 601.5600000000001, "text": " And now the exam questions are just like how many elephants did Hannibal employ in his"}, {"start": 601.56, "end": 602.56, "text": " army?"}, {"start": 602.56, "end": 608.92, "text": " So the exam question are multiple choice, not focus on understanding and Bob had just"}, {"start": 608.92, "end": 614.56, "text": " learned it by heart and is now doing much better than Alice who has actually understood"}, {"start": 614.56, "end": 616.5999999999999, "text": " the topic, right?"}, {"start": 616.5999999999999, "end": 623.76, "text": " So they give this as examples of shortcut learning where the model learns something that"}, 
{"start": 623.76, "end": 628.4799999999999, "text": " we don't intend it to do, right?"}, {"start": 628.48, "end": 632.32, "text": " And I think this is the crucial point, right?"}, {"start": 632.32, "end": 638.32, "text": " The model learns something that we don't intend it to do, right?"}, {"start": 638.32, "end": 646.24, "text": " So here, and this seems, this might be pretty clear to when you observe it, but what do"}, {"start": 646.24, "end": 647.44, "text": " we want?"}, {"start": 647.44, "end": 653.88, "text": " We want, we want shape."}, {"start": 653.88, "end": 661.16, "text": " And the model learns something else, right?"}, {"start": 661.16, "end": 666.16, "text": " Just something else."}, {"start": 666.16, "end": 671.6, "text": " And the crucial part here, and I think this paper isn't putting that much as much emphasis"}, {"start": 671.6, "end": 678.08, "text": " as it deserves, is the two words we and want."}, {"start": 678.08, "end": 687.32, "text": " So my, my, basically my answer to this is, first of all, the word won't."}, {"start": 687.32, "end": 689.76, "text": " We want shape."}, {"start": 689.76, "end": 700.12, "text": " And my answer, my comment to this is, you can't, you can't formulate that."}, {"start": 700.12, "end": 707.4000000000001, "text": " You can't formulate it."}, {"start": 707.4, "end": 708.88, "text": " This is very crucial."}, {"start": 708.88, "end": 715.16, "text": " And I think the paper, the, almost ignores this point."}, {"start": 715.16, "end": 720.4399999999999, "text": " You cannot formulate what it means to classify things by shape."}, {"start": 720.4399999999999, "end": 726.56, "text": " And this seems so oblivious to us because we're so used to it as humans, right?"}, {"start": 726.56, "end": 728.9599999999999, "text": " We're just like, oh, it's just used to shape, right?"}, {"start": 728.9599999999999, "end": 731.04, "text": " This is the, the, the shape, right?"}, {"start": 731.04, "end": 736.36, "text": " But you cannot, you cannot program a computer to do this."}, {"start": 736.36, "end": 738.8000000000001, "text": " That's why we use deep learning in the first place, right?"}, {"start": 738.8000000000001, "end": 747.16, "text": " Because we have no freaking idea how to program an algorithm that extracts the shape of something."}, {"start": 747.16, "end": 754.12, "text": " It might be possible for like a star in the moon, but not for a cat or a car or anything,"}, {"start": 754.12, "end": 755.12, "text": " right?"}, {"start": 755.12, "end": 758.8000000000001, "text": " So you cannot formulate your objective."}, {"start": 758.8000000000001, "end": 760.48, "text": " That's the problem, right?"}, {"start": 760.48, "end": 765.76, "text": " And it's easy to then say, oh, oh, oh, oh, the model doesn't do what we wanted to do."}, {"start": 765.76, "end": 771.96, "text": " It's like you can't even formulate what you wanted to do in a precise way."}, {"start": 771.96, "end": 778.6, "text": " So basically all, all you're, you're saying here, what you're saying here is, I'll train"}, {"start": 778.6, "end": 780.4, "text": " a shape classifier, right?"}, {"start": 780.4, "end": 786.8, "text": " Once you've gone through this process of training and evaluating, you, you say, ah, now"}, {"start": 786.8, "end": 791.24, "text": " I have a, now I have a shape classifier, right?"}, {"start": 791.24, "end": 797.96, "text": " Say you haven't, you hadn't done this, oh, the evaluation, you've gone through this,"}, {"start": 797.96, "end": 803.44, "text": " and 
you know, you now claim, you proclaim, I have trained a shape classifier."}, {"start": 803.44, "end": 813.12, "text": " No, no, you have trained something that given the entire process of how you create your"}, {"start": 813.12, "end": 817.12, "text": " data, can classify these two images, right?"}, {"start": 817.12, "end": 821.08, "text": " So at here is your generator."}, {"start": 821.08, "end": 825.0400000000001, "text": " This is your little program that you wrote to produce these images."}, {"start": 825.0400000000001, "end": 832.84, "text": " And your generator assigns either the star, like at random, it produces these things,"}, {"start": 832.84, "end": 837.12, "text": " the star or the moon."}, {"start": 837.12, "end": 841.76, "text": " It does these two things and then it creates the image from it and that will give you"}, {"start": 841.76, "end": 844.44, "text": " your data set, right?"}, {"start": 844.44, "end": 847.0400000000001, "text": " What you have trained is not a shape classifier."}, {"start": 847.04, "end": 853.64, "text": " What you have trained is a classifier that can distinguish data that comes from this"}, {"start": 853.64, "end": 857.8, "text": " data generation process, right?"}, {"start": 857.8, "end": 868.52, "text": " The entire notion of calling it a shape classifier is because you as a human have thought of shape"}, {"start": 868.52, "end": 872.88, "text": " when you programmed this generator, right?"}, {"start": 872.88, "end": 876.68, "text": " When you collected the data set, that's what you thought of."}, {"start": 876.68, "end": 877.68, "text": " But this isn't the case."}, {"start": 877.68, "end": 883.2399999999999, "text": " You can call it a shape classifier just because you like this is what your intent was."}, {"start": 883.2399999999999, "end": 888.8, "text": " You have a classifier that did that classifies images from this particular data generation"}, {"start": 888.8, "end": 895.7199999999999, "text": " process and you can't actually formulate a shape classifier, right?"}, {"start": 895.7199999999999, "end": 896.7199999999999, "text": " Okay."}, {"start": 896.7199999999999, "end": 905.8, "text": " The second word is we, we, sorry."}, {"start": 905.8, "end": 911.04, "text": " We humans, right?"}, {"start": 911.04, "end": 913.5999999999999, "text": " We want a shape classifier."}, {"start": 913.5999999999999, "end": 920.12, "text": " Now I've said this before and this very much refers back to the, for example, the paper"}, {"start": 920.12, "end": 926.52, "text": " about the contrast sets in NLP and so on."}, {"start": 926.52, "end": 929.9599999999999, "text": " Humans have grounded knowledge, right?"}, {"start": 929.9599999999999, "end": 932.1999999999999, "text": " Humans, sorry."}, {"start": 932.1999999999999, "end": 935.1999999999999, "text": " Humans have grounding."}, {"start": 935.2, "end": 940.12, "text": " This is very important here."}, {"start": 940.12, "end": 951.32, "text": " Grounding means that the humans live in a world of physics and culture, sorry, physics"}, {"start": 951.32, "end": 961.0, "text": " and culture and the need for food biology."}, {"start": 961.0, "end": 962.44, "text": " Humans live in this world."}, {"start": 962.44, "end": 969.1600000000001, "text": " And this, this generated our brain, right?"}, {"start": 969.1600000000001, "end": 979.6400000000001, "text": " So what that means is that humans live in a world of objects and of people, sorry, of"}, {"start": 979.6400000000001, "end": 986.6, "text": " people and 
of being eaten, right?"}, {"start": 986.6, "end": 992.12, "text": " Being eaten or needing to eat food, right?"}, {"start": 992.12, "end": 995.28, "text": " Humans grew up and live in this world."}, {"start": 995.28, "end": 1000.88, "text": " Your brain was literally structured according to these things."}, {"start": 1000.88, "end": 1007.24, "text": " And thus we understand everything with an eye to this grounded knowledge of reality"}, {"start": 1007.24, "end": 1009.6, "text": " where there is such a thing as objects."}, {"start": 1009.6, "end": 1017.5600000000001, "text": " Now, if you have image net and you train a classifier for objects, right?"}, {"start": 1017.56, "end": 1022.5999999999999, "text": " This is what I find so crazy, right? And you know, you collect this thing and there's"}, {"start": 1022.5999999999999, "end": 1026.0, "text": " a car and you say, that's a car, right?"}, {"start": 1026.0, "end": 1031.52, "text": " You know this is a car because there is an object of a car."}, {"start": 1031.52, "end": 1036.6799999999998, "text": " And but the neural network does not have a bias for object."}, {"start": 1036.6799999999998, "end": 1041.1599999999999, "text": " The neural network simply sees these pixels, same here."}, {"start": 1041.16, "end": 1047.76, "text": " What you will do immediately here is you'll recognize the object of the star, right?"}, {"start": 1047.76, "end": 1058.6000000000001, "text": " You will transform this into a 3D scene, right, into a 3D cube where you are here watching"}, {"start": 1058.6000000000001, "end": 1065.72, "text": " in 3D space and there is this star object somewhere here, right?"}, {"start": 1065.72, "end": 1069.92, "text": " And then you understand that the star could move around and it would still be the same"}, {"start": 1069.92, "end": 1075.48, "text": " star, but that's because you have the inherent bias of there being objects and shape, for"}, {"start": 1075.48, "end": 1081.28, "text": " example, the word shape is nothing more than a property of an object."}, {"start": 1081.28, "end": 1090.0800000000002, "text": " And the neural network simply do not have a inherent bias for objects, right?"}, {"start": 1090.0800000000002, "end": 1096.72, "text": " Or people or intent or what it means to eat, right?"}, {"start": 1096.72, "end": 1103.44, "text": " This becomes super, super obvious if you ever try to solve, for example, a jigsaw puzzle,"}, {"start": 1103.44, "end": 1106.6000000000001, "text": " you know, like these things here."}, {"start": 1106.6000000000001, "end": 1111.52, "text": " I'm terrible at this."}, {"start": 1111.52, "end": 1118.2, "text": " If you solve this on its head, right, say this has like a face on it and you try to solve"}, {"start": 1118.2, "end": 1122.2, "text": " it like this and you try to solve it on its head, like try it."}, {"start": 1122.2, "end": 1128.88, "text": " So it's the same task, you simply need to match the border shapes, right?"}, {"start": 1128.88, "end": 1132.72, "text": " And you need to make sure that the lines are continuous of the picture."}, {"start": 1132.72, "end": 1139.2, "text": " It becomes so much harder just because you have this brain."}, {"start": 1139.2, "end": 1147.72, "text": " And so that is my entire criticism and it will pull through this entire paper and we'll"}, {"start": 1147.72, "end": 1154.84, "text": " go through it for now relatively quickly because we've already like touched on it."}, {"start": 1154.84, "end": 1157.24, "text": " Keep in mind this is my commentary 
on it."}, {"start": 1157.24, "end": 1165.64, "text": " This is not superior knowledge or something."}, {"start": 1165.64, "end": 1166.64, "text": " That's just me."}, {"start": 1166.64, "end": 1167.64, "text": " All right."}, {"start": 1167.64, "end": 1173.32, "text": " So what they do is they have this taxonomy of decision rules that they point out."}, {"start": 1173.32, "end": 1177.68, "text": " What they're saying is, okay, you're, you, these, there's a set."}, {"start": 1177.68, "end": 1181.0, "text": " A set of all possible decision rules, right?"}, {"start": 1181.0, "end": 1183.72, "text": " And this is the, this is the outer set here."}, {"start": 1183.72, "end": 1188.48, "text": " All possible decision rules that one could think of to discriminate data."}, {"start": 1188.48, "end": 1193.3600000000001, "text": " And let's say we will talk about images here, right, to, to discriminate images."}, {"start": 1193.3600000000001, "end": 1198.28, "text": " Most of them will just be crap and these will be using these uninformative features."}, {"start": 1198.28, "end": 1199.28, "text": " What they say."}, {"start": 1199.28, "end": 1204.0800000000002, "text": " But then there are some decision rules that perform well on training data, right?"}, {"start": 1204.0800000000002, "end": 1207.48, "text": " This is this big circle here, right?"}, {"start": 1207.48, "end": 1213.72, "text": " So that there are some decision rules that perform well on the training set."}, {"start": 1213.72, "end": 1217.1200000000001, "text": " And they call these overfitting features."}, {"start": 1217.1200000000001, "end": 1223.32, "text": " So these are all the features that perform well on the training set, only on the training"}, {"start": 1223.32, "end": 1225.08, "text": " set, sorry, the overfitting features."}, {"start": 1225.08, "end": 1228.92, "text": " But to me, it's a bit unclear."}, {"start": 1228.92, "end": 1234.0, "text": " I think they only call this band the overfitting features, but they call the entire circle"}, {"start": 1234.0, "end": 1236.88, "text": " the all possible training solutions."}, {"start": 1236.88, "end": 1240.2800000000002, "text": " Any case, so there are decision rules that perform well on the training set, but some"}, {"start": 1240.2800000000002, "end": 1245.24, "text": " of them are overfitting as you know that problem, right?"}, {"start": 1245.24, "end": 1252.48, "text": " Then the next circle inside of that are all decision rules that perform well on the training"}, {"start": 1252.48, "end": 1255.48, "text": " set and the IID test set."}, {"start": 1255.48, "end": 1263.64, "text": " And this would be our location classifier from before would fall into this category, right?"}, {"start": 1263.64, "end": 1272.3200000000002, "text": " And now there, these are still a much larger set as you see here as this inside set of"}, {"start": 1272.3200000000002, "end": 1279.24, "text": " the intended solution performs well on training set, IID and all relevant out of distribution"}, {"start": 1279.24, "end": 1280.24, "text": " test sets."}, {"start": 1280.24, "end": 1282.24, "text": " And then they draw this in here."}, {"start": 1282.24, "end": 1291.24, "text": " The out of distribution test sets are subsets of the IID test set or the sorry, the solutions"}, {"start": 1291.24, "end": 1296.8, "text": " that work on the OOD test sets are subsets of the solutions that work well on the IID"}, {"start": 1296.8, "end": 1297.8, "text": " test sets."}, {"start": 1297.8, "end": 1304.96, "text": " 
And I don't have a problem with this characterization of decision rules, right?"}, {"start": 1304.96, "end": 1307.68, "text": " Here specifically, these are decision rules."}, {"start": 1307.68, "end": 1316.24, "text": " What I have a problem of characterization with is the fact that you cannot specify what"}, {"start": 1316.24, "end": 1318.52, "text": " the intended solution is."}, {"start": 1318.52, "end": 1325.68, "text": " You cannot and therefore this diagram I think is misleading because you have ultimately"}, {"start": 1325.68, "end": 1329.48, "text": " have no idea where this dot is, right?"}, {"start": 1329.48, "end": 1333.04, "text": " You can't specify it beforehand."}, {"start": 1333.04, "end": 1335.6399999999999, "text": " You can't even specify the rules how you get there."}, {"start": 1335.6399999999999, "end": 1340.8799999999999, "text": " All you can do is give better data and they kind of advocate for this here with these OOD"}, {"start": 1340.8799999999999, "end": 1341.8799999999999, "text": " test sets."}, {"start": 1341.8799999999999, "end": 1348.2, "text": " But again, I think when they say all relevant out of distribution test sets, I'm a bit"}, {"start": 1348.2, "end": 1355.76, "text": " wary because they suggest this as one of the measures to assess whether or not a model"}, {"start": 1355.76, "end": 1362.4, "text": " has learned these short cut rules is to measure its performance on out of distribution test"}, {"start": 1362.4, "end": 1363.4, "text": " sets."}, {"start": 1363.4, "end": 1370.52, "text": " And this is very much like the contrast sets in the NLP."}, {"start": 1370.52, "end": 1378.76, "text": " But I think actually this is a pretty, pretty, pretty, pretty bad solution in most cases."}, {"start": 1378.76, "end": 1381.44, "text": " And let me explain why."}, {"start": 1381.44, "end": 1390.68, "text": " So if we go back to here, right, what we saw is that these discrepancy, it comes about"}, {"start": 1390.68, "end": 1398.48, "text": " because here from the real world, we produce the data in a very specific form, right?"}, {"start": 1398.48, "end": 1403.44, "text": " And then this other out of distribution test set is produced in a slightly different"}, {"start": 1403.44, "end": 1405.32, "text": " form, right?"}, {"start": 1405.32, "end": 1413.2, "text": " Now what you can think of this is if you look at your cost function that you train, right?"}, {"start": 1413.2, "end": 1422.92, "text": " What you usually say is my cost function is some sort of a loss for my data points and"}, {"start": 1422.92, "end": 1425.48, "text": " my labels, right?"}, {"start": 1425.48, "end": 1430.64, "text": " But this is often left out, I mean, you write it in your introductory classes."}, {"start": 1430.64, "end": 1437.6, "text": " What is important is that this is an expected loss that you're minimizing here over a data"}, {"start": 1437.6, "end": 1439.32, "text": " distribution, right?"}, {"start": 1439.32, "end": 1446.88, "text": " Over x and y sampled from a particular data distribution."}, {"start": 1446.88, "end": 1454.72, "text": " Now when you talk about this out of distribution classifiers, what you'll have is you'll"}, {"start": 1454.72, "end": 1460.24, "text": " have a slightly different data distribution, the prime, right?"}, {"start": 1460.24, "end": 1472.48, "text": " So but if you simply have one out of distribution, think of this as the contrast set, right?"}, {"start": 1472.48, "end": 1477.3600000000001, "text": " If you haven't seen the video about 
contrast sets, it's basically an handcrafted out of"}, {"start": 1477.3600000000001, "end": 1480.28, "text": " distribution test set, right?"}, {"start": 1480.28, "end": 1483.2, "text": " My problem with this, it's just one."}, {"start": 1483.2, "end": 1486.52, "text": " It's a single one."}, {"start": 1486.52, "end": 1493.92, "text": " And I think even if you try 10 of them, 10 of those sets, you won't even get close to"}, {"start": 1493.92, "end": 1503.04, "text": " a true measure because so the cool thing about an IID test set is at least it is precisely"}, {"start": 1503.04, "end": 1504.44, "text": " the same distribution, right?"}, {"start": 1504.44, "end": 1511.16, "text": " So it kind of gives you an unbiased number that for this particular data generation pipeline,"}, {"start": 1511.16, "end": 1512.72, "text": " you get this number."}, {"start": 1512.72, "end": 1518.8, "text": " If you evaluate on an out of distribution test set, you now have two effects."}, {"start": 1518.8, "end": 1525.88, "text": " You first have this generalization effect."}, {"start": 1525.88, "end": 1531.8, "text": " And you have the effect of having produced this in a different fashion here."}, {"start": 1531.8, "end": 1533.16, "text": " But you only have one of them."}, {"start": 1533.16, "end": 1542.1200000000001, "text": " What you would like to do is you would, what you would like to assess is your loss of X and"}, {"start": 1542.1200000000001, "end": 1554.3600000000001, "text": " Y in expectation with X and Y coming from data in expectation, let me, a different color,"}, {"start": 1554.3600000000001, "end": 1562.28, "text": " in expectation with your data distribution coming from all possible data distributions in"}, {"start": 1562.28, "end": 1564.3999999999999, "text": " the real world, right?"}, {"start": 1564.3999999999999, "end": 1567.56, "text": " That's what you would like to say to do."}, {"start": 1567.56, "end": 1572.68, "text": " If you only have a single contrast set, it is a kin."}, {"start": 1572.68, "end": 1579.52, "text": " You can think of what if, like how well, how well of a machine learning engineer would"}, {"start": 1579.52, "end": 1586.6, "text": " I be if my test set here only had one sample, right?"}, {"start": 1586.6, "end": 1594.36, "text": " So I give you a train and a test set and I'm saying your performance will be make a Kaggle"}, {"start": 1594.36, "end": 1595.36, "text": " challenge."}, {"start": 1595.36, "end": 1602.6399999999999, "text": " And I say your performance will be evaluated on this one single sample test set."}, {"start": 1602.6399999999999, "end": 1603.9599999999998, "text": " That's basically what you're doing."}, {"start": 1603.9599999999998, "end": 1612.6399999999999, "text": " If you have a single OOD test set is you're saying, I'm going to give you one out of distribution"}, {"start": 1612.64, "end": 1617.6000000000001, "text": " data set that I have biased in one particular way, right?"}, {"start": 1617.6000000000001, "end": 1623.5200000000002, "text": " And we'll measure how well you capture our intent, right?"}, {"start": 1623.5200000000002, "end": 1631.0400000000002, "text": " Our shape classifier intent will measure how well you capture that using this one single"}, {"start": 1631.0400000000002, "end": 1633.4, "text": " out of distribution thing."}, {"start": 1633.4, "end": 1636.0800000000002, "text": " I think what that will do is, right?"}, {"start": 1636.08, "end": 1646.76, "text": " You say, I want to approximate this by a sum of I equals one to 
one."}, {"start": 1646.76, "end": 1656.08, "text": " That will just pump the variance beyond what beyond any reasonable meaning that the"}, {"start": 1656.08, "end": 1659.76, "text": " outcoming number will be able to give you."}, {"start": 1659.76, "end": 1667.04, "text": " What you'd have to do is you'd have to have this entire process and sample train and test"}, {"start": 1667.04, "end": 1673.36, "text": " sets according to this day or at least test sets, according to this data distribution,"}, {"start": 1673.36, "end": 1677.52, "text": " this underlying data distribution, which you have no clue what it is because if you could"}, {"start": 1677.52, "end": 1684.72, "text": " specify this directly, you could get the solution for free, right?"}, {"start": 1684.72, "end": 1690.16, "text": " If you could specify the underlying mechanism, you would already know the solution you would"}, {"start": 1690.16, "end": 1693.08, "text": " need machine learning."}, {"start": 1693.08, "end": 1700.0, "text": " So I think the model puts way too little emphasis, sorry, the paper puts a bit too little emphasis."}, {"start": 1700.0, "end": 1706.68, "text": " So if I was to say with their taxonomy, they can say, if you use, for example, only the"}, {"start": 1706.68, "end": 1712.0, "text": " overfitting features, right, then you will do well on the training set, but not on the"}, {"start": 1712.0, "end": 1714.2, "text": " IID and OOD test sets."}, {"start": 1714.2, "end": 1721.76, "text": " If you use the intended features, again, intended, no one knows what that is, no one can specify."}, {"start": 1721.76, "end": 1724.76, "text": " You'll do well on all the OOD test sets."}, {"start": 1724.76, "end": 1729.72, "text": " If you use the shortcut features, you will do well on the training and IID test set, but"}, {"start": 1729.72, "end": 1731.68, "text": " not on the OOD test."}, {"start": 1731.68, "end": 1733.48, "text": " This is valid, right?"}, {"start": 1733.48, "end": 1738.44, "text": " I'm not discrediting this paper here and they do allude to a lot of the things I'm saying,"}, {"start": 1738.44, "end": 1741.52, "text": " but not not all."}, {"start": 1741.52, "end": 1743.72, "text": " And I don't think they frame it correctly."}, {"start": 1743.72, "end": 1746.4, "text": " So they ask shortcuts, where do they come from?"}, {"start": 1746.4, "end": 1749.68, "text": " And they say a lot of the things that I've been saying here."}, {"start": 1749.68, "end": 1754.72, "text": " But for example, they ask, what makes a cow a cow?"}, {"start": 1754.72, "end": 1760.32, "text": " And they give this example here where they say familiar background can be as important"}, {"start": 1760.32, "end": 1765.6000000000001, "text": " for recognition to deep neural networks, where the deep neural networks will misclassify"}, {"start": 1765.6000000000001, "end": 1770.6000000000001, "text": " this picture because they used to seeing a cow in grass."}, {"start": 1770.6000000000001, "end": 1773.4, "text": " Now consider this in our framework, right?"}, {"start": 1773.4, "end": 1780.3200000000002, "text": " If, let's say this is an image net classifier, image net is not an object classifier."}, {"start": 1780.3200000000002, "end": 1781.8400000000001, "text": " It is not, right?"}, {"start": 1781.8400000000001, "end": 1782.8400000000001, "text": " That's what we say."}, {"start": 1782.8400000000001, "end": 1783.8400000000001, "text": " That's our intent."}, {"start": 1783.8400000000001, "end": 1786.76, "text": " But what it is, 
it is a classifier."}, {"start": 1786.76, "end": 1791.3600000000001, "text": " If you go through the pipeline, how do you generate the data?"}, {"start": 1791.3600000000001, "end": 1796.88, "text": " Image net is a classifier of naturally taken images, right?"}, {"start": 1796.88, "end": 1804.0400000000002, "text": " With a certain camera, cropped, center cropped, to a particular object labeled by human"}, {"start": 1804.0400000000002, "end": 1808.0800000000002, "text": " raiders filtered in some capacity, right, from flicker."}, {"start": 1808.0800000000002, "end": 1812.0800000000002, "text": " And for that particular data set, we train a classifier."}, {"start": 1812.0800000000002, "end": 1813.8400000000001, "text": " It is not an object classifier."}, {"start": 1813.8400000000001, "end": 1815.72, "text": " It is a classifier for that."}, {"start": 1815.72, "end": 1818.6000000000001, "text": " And it doesn't, has no clue of objects."}, {"start": 1818.6000000000001, "end": 1825.3200000000002, "text": " So in fact, and also what you have to see is that the output isn't, even if the output"}, {"start": 1825.3200000000002, "end": 1826.8400000000001, "text": " is shape, it isn't shape."}, {"start": 1826.84, "end": 1831.24, "text": " It is actually probability of shape, right?"}, {"start": 1831.24, "end": 1836.8, "text": " Or probability of object or probability of something, right?"}, {"start": 1836.8, "end": 1843.8799999999999, "text": " And it is completely conceivable, right, that if it's not grasping the background, it's"}, {"start": 1843.8799999999999, "end": 1846.56, "text": " probably not as much a cow."}, {"start": 1846.56, "end": 1848.0, "text": " Now I see the problem here."}, {"start": 1848.0, "end": 1849.8799999999999, "text": " This is clearly a cow."}, {"start": 1849.8799999999999, "end": 1853.08, "text": " And this is actually a conceivable natural image."}, {"start": 1853.08, "end": 1860.12, "text": " But imagine a picture of the cow, oops, what happened?"}, {"start": 1860.12, "end": 1861.6, "text": " On the moon, right?"}, {"start": 1861.6, "end": 1863.3999999999999, "text": " This is the moon."}, {"start": 1863.3999999999999, "end": 1865.8799999999999, "text": " And here's the cow."}, {"start": 1865.8799999999999, "end": 1870.12, "text": " Cow, moon, this is terrible."}, {"start": 1870.12, "end": 1876.36, "text": " Or it's a cow on the moon."}, {"start": 1876.36, "end": 1880.3999999999999, "text": " Like who can fault the neural network?"}, {"start": 1880.4, "end": 1885.92, "text": " And I would say that's not a cow either because in terms of the data generation process, if"}, {"start": 1885.92, "end": 1892.88, "text": " you ask me, please classify this as a natural image that has been taken, blah, blah, blah,"}, {"start": 1892.88, "end": 1893.88, "text": " blah, blah, blah, right?"}, {"start": 1893.88, "end": 1895.48, "text": " I'm going to say there's no way there's a cow on the moon."}, {"start": 1895.48, "end": 1901.52, "text": " So I don't know what this is, but it is very improbable that this is a cow, right?"}, {"start": 1901.52, "end": 1907.3200000000002, "text": " Because all the training examples I've seen, cow on grass."}, {"start": 1907.32, "end": 1914.12, "text": " So yeah, so I mean, they do, they do actually allude to this, right?"}, {"start": 1914.12, "end": 1916.96, "text": " They call this data set biases and so on."}, {"start": 1916.96, "end": 1925.6399999999999, "text": " But I'm pretty sure that the interpretation is just a bit off where they say, the 
point"}, {"start": 1925.6399999999999, "end": 1931.8, "text": " of this is like, it's, you know, we want an object classifier, but this we want."}, {"start": 1931.8, "end": 1939.12, "text": " The second I find even more kind of strange is they say shortcuts from discriminative learning"}, {"start": 1939.12, "end": 1945.48, "text": " and they, they allude to this picture here and they ask what makes a cat a cat."}, {"start": 1945.48, "end": 1949.3999999999999, "text": " And they basically their argument is that the neural networks, they don't understand"}, {"start": 1949.3999999999999, "end": 1950.3999999999999, "text": " things."}, {"start": 1950.3999999999999, "end": 1951.3999999999999, "text": " They just discriminate, right?"}, {"start": 1951.3999999999999, "end": 1954.76, "text": " They, they have these thousand classes and the output layer."}, {"start": 1954.76, "end": 1956.56, "text": " This is the neural network."}, {"start": 1956.56, "end": 1958.04, "text": " And they just need to discriminate."}, {"start": 1958.04, "end": 1964.92, "text": " So they just need to learn what is different from one class to the other class."}, {"start": 1964.92, "end": 1970.84, "text": " And they will often rely on features such as texture like here."}, {"start": 1970.84, "end": 1972.0, "text": " So they rely on detection."}, {"start": 1972.0, "end": 1975.08, "text": " They classify this as an elephant, right?"}, {"start": 1975.08, "end": 1977.48, "text": " So they say what makes a cat a cat to standard DNNs?"}, {"start": 1977.48, "end": 1981.72, "text": " The example image on the left clearly shows an elephant not a cat."}, {"start": 1981.72, "end": 1984.44, "text": " And again, I agree."}, {"start": 1984.44, "end": 1994.48, "text": " If you tell me this is data from naturally to it taken images with standard cameras,"}, {"start": 1994.48, "end": 1995.88, "text": " right?"}, {"start": 1995.88, "end": 1998.48, "text": " Then I will, I will have two possibilities."}, {"start": 1998.48, "end": 2000.8400000000001, "text": " I will say, is this a cat?"}, {"start": 2000.8400000000001, "end": 2006.68, "text": " There's no way that if you take anywhere in the universe a picture with a, like a phone"}, {"start": 2006.68, "end": 2011.64, "text": " camera of anything of a cat, it will look like this."}, {"start": 2011.64, "end": 2015.5600000000002, "text": " No way, I don't, it's just not possible, right?"}, {"start": 2015.5600000000002, "end": 2026.6000000000001, "text": " However, is it possible that there is an elephant that as a skin-fold pattern by random"}, {"start": 2026.6000000000001, "end": 2037.88, "text": " chance, elephant big ears, raw trunk, as a skin-fold pattern, looks like a cat, like looks"}, {"start": 2037.88, "end": 2039.4, "text": " like the shape of a cat."}, {"start": 2039.4, "end": 2041.2, "text": " Yes, that's possible."}, {"start": 2041.2, "end": 2050.7200000000003, "text": " So if you ask me, according to the data generation process, this is way more likely to be an elephant"}, {"start": 2050.7200000000003, "end": 2053.44, "text": " than a cat, right?"}, {"start": 2053.44, "end": 2058.96, "text": " And the paper here makes it seem like it is so obvious that this is a cat, but what do"}, {"start": 2058.96, "end": 2061.56, "text": " these standards do it DNNs think?"}, {"start": 2061.56, "end": 2063.7200000000003, "text": " It's an elephant, not a cat."}, {"start": 2063.7200000000003, "end": 2068.32, "text": " And the DNN, oh, it's just looking at object texture and other local 
structures and not"}, {"start": 2068.32, "end": 2071.84, "text": " shape, right?"}, {"start": 2071.84, "end": 2078.44, "text": " What we want it to do, and this is, I find, just stop calling things object classifiers"}, {"start": 2078.44, "end": 2083.32, "text": " if they're not object classifiers, they're classifier between images of a data generation"}, {"start": 2083.32, "end": 2084.4, "text": " process."}, {"start": 2084.4, "end": 2091.1600000000003, "text": " If you want them to be object classifiers, make up a data set that actually has different"}, {"start": 2091.1600000000003, "end": 2094.8, "text": " objects, but you can't specify that."}, {"start": 2094.8, "end": 2102.8, "text": " So yeah, and then they go into some sort of adversarial examples, and I find this to be,"}, {"start": 2102.8, "end": 2110.6400000000003, "text": " I also find this to be a bit maybe not belonging here, like, where they say, oh, look, here"}, {"start": 2110.6400000000003, "end": 2113.88, "text": " the DNNs predict guitar with high certainty."}, {"start": 2113.88, "end": 2120.04, "text": " Again, it's just a discriminator, but this pattern, why not a guitar?"}, {"start": 2120.04, "end": 2125.04, "text": " If you had to, you know, if you had to get one of the thousand classes out, why could this"}, {"start": 2125.04, "end": 2128.08, "text": " not be most likely a guitar?"}, {"start": 2128.08, "end": 2135.8, "text": " But I have a further problem with this is I see, I kind of see this in, so what what you"}, {"start": 2135.8, "end": 2137.56, "text": " have is IID data."}, {"start": 2137.56, "end": 2142.36, "text": " Let's go with their taxonomy and say IID data has from the same generation process, and"}, {"start": 2142.36, "end": 2144.52, "text": " then there is OOD data."}, {"start": 2144.52, "end": 2152.56, "text": " Now I think there are a number of effects here that they try to lump together with this"}, {"start": 2152.56, "end": 2157.6, "text": " thing, where they just say, oh, OOD data, whenever my model doesn't work on OOD data, it has"}, {"start": 2157.6, "end": 2161.04, "text": " learned a short cut, but it's very, very weird."}, {"start": 2161.04, "end": 2167.24, "text": " So first of all, there, I would say the OOD data, you can probably divide into what I"}, {"start": 2167.24, "end": 2171.2799999999997, "text": " you call unnatural OOD data."}, {"start": 2171.2799999999997, "end": 2174.64, "text": " Let's say our task here is to build an object,"}, {"start": 2174.64, "end": 2178.12, "text": " an object to detector, whatever that means for natural images."}, {"start": 2178.12, "end": 2182.64, "text": " So then there's unnatural OOD data, which in here,"}, {"start": 2182.64, "end": 2186.2, "text": " you'll find something like adversarial examples."}, {"start": 2186.2, "end": 2188.64, "text": " Adversarial examples are constructed,"}, {"start": 2188.64, "end": 2191.72, "text": " at least if you go by the interpretation of a modry"}, {"start": 2191.72, "end": 2194.9599999999996, "text": " and the adversarial examples are features, not bugs,"}, {"start": 2194.96, "end": 2199.32, "text": " then you'll go into the direction of adversarial examples,"}, {"start": 2199.32, "end": 2203.52, "text": " actually constructed by combining features"}, {"start": 2203.52, "end": 2205.92, "text": " that don't naturally go together."}, {"start": 2205.92, "end": 2211.2, "text": " So you'll get the low frequency features of a cat"}, {"start": 2211.2, "end": 2213.96, "text": " and add the high frequency, for 
example,"}, {"start": 2213.96, "end": 2219.56, "text": " features of a dog, so much with this lambda factor here,"}, {"start": 2219.56, "end": 2223.2400000000002, "text": " so high that to a DNN it looks like a dog,"}, {"start": 2223.2400000000002, "end": 2224.7200000000003, "text": " because it has many of the features,"}, {"start": 2224.72, "end": 2228.04, "text": " but to a human that kind of ignores the high frequency features,"}, {"start": 2228.04, "end": 2231.04, "text": " it'll look like a cat."}, {"start": 2231.04, "end": 2235.48, "text": " But these are unnatural, because the features in actual nature"}, {"start": 2235.48, "end": 2240.3999999999996, "text": " in the real world, they never occur in this combination."}, {"start": 2240.3999999999996, "end": 2245.04, "text": " So it seems like this is a very, very, very different phenomenon"}, {"start": 2245.04, "end": 2250.48, "text": " from what I would call natural OOD."}, {"start": 2250.48, "end": 2255.48, "text": " Where simply the features that you're seeing"}, {"start": 2255.48, "end": 2258.08, "text": " have never occurred in the training dataset,"}, {"start": 2258.08, "end": 2261.88, "text": " but there is, there is, if you go from the real world,"}, {"start": 2261.88, "end": 2264.48, "text": " and you construct in different ways dataset,"}, {"start": 2264.48, "end": 2271.48, "text": " there is some dataset where the data actually occurs"}, {"start": 2271.48, "end": 2273.48, "text": " in the way that you have here."}, {"start": 2273.48, "end": 2277.48, "text": " So natural OOD data is what most of the examples for now"}, {"start": 2277.48, "end": 2281.4, "text": " were about like a cow on the beach."}, {"start": 2281.4, "end": 2284.04, "text": " It's just because you've never seen that,"}, {"start": 2284.04, "end": 2288.88, "text": " because your data generation here always good, cow plus grass."}, {"start": 2288.88, "end": 2289.88, "text": " Right?"}, {"start": 2289.88, "end": 2291.92, "text": " So I think these are very different."}, {"start": 2291.92, "end": 2296.72, "text": " And then the last thing they also lump in here is fairness,"}, {"start": 2296.72, "end": 2299.52, "text": " like the fairness in bias literature,"}, {"start": 2299.52, "end": 2302.96, "text": " where, for example, you have a resume classifier,"}, {"start": 2302.96, "end": 2305.92, "text": " and the resume classifier ends up being biased by gender"}, {"start": 2305.92, "end": 2308.16, "text": " or something like this."}, {"start": 2308.16, "end": 2315.48, "text": " And again, so I kind of struggle with this,"}, {"start": 2315.48, "end": 2318.76, "text": " although they say not all the fairness problems come from here,"}, {"start": 2318.76, "end": 2322.4, "text": " but I would also like to stress that some of the fairness"}, {"start": 2322.4, "end": 2324.36, "text": " problem goes exactly here."}, {"start": 2324.36, "end": 2329.32, "text": " It occurs because your data generation process"}, {"start": 2329.32, "end": 2331.6, "text": " is different from what you want."}, {"start": 2331.6, "end": 2337.48, "text": " For example, if you do this hiring classifier,"}, {"start": 2337.48, "end": 2339.2799999999997, "text": " you have to understand what that is."}, {"start": 2339.2799999999997, "end": 2342.48, "text": " What you're training is a system that will tell you,"}, {"start": 2342.48, "end": 2345.2, "text": " how would my human dataset creators"}, {"start": 2345.2, "end": 2348.64, "text": " have decided on this particular application?"}, {"start": 
2348.64, "end": 2351.16, "text": " Now, of course, there is this problem of bias amplification"}, {"start": 2351.16, "end": 2354.2799999999997, "text": " and so on, but it is not an infallible system."}, {"start": 2354.2799999999997, "end": 2356.64, "text": " It simply tells you how the humans have predicted."}, {"start": 2356.64, "end": 2360.4, "text": " And if you collect the dataset in a biased way, of course,"}, {"start": 2360.4, "end": 2364.6, "text": " the machine will inherit that."}, {"start": 2364.6, "end": 2367.0, "text": " But on the other hand, the fairness,"}, {"start": 2367.0, "end": 2369.48, "text": " why I don't think this really belongs in here,"}, {"start": 2369.48, "end": 2375.0, "text": " because in fairness, you actually have"}, {"start": 2375.0, "end": 2378.36, "text": " an alternate world, draw this in green."}, {"start": 2378.36, "end": 2381.08, "text": " Prime, world prime, right?"}, {"start": 2381.08, "end": 2384.36, "text": " Where in this OOD and IID setting,"}, {"start": 2384.36, "end": 2388.08, "text": " you always assume that the world is the world."}, {"start": 2388.08, "end": 2394.64, "text": " And you want to really learn a system that understands the world."}, {"start": 2394.64, "end": 2403.44, "text": " Where in fairness, you, this here, this is your super world."}, {"start": 2403.44, "end": 2406.08, "text": " So actually, for the fairness literature,"}, {"start": 2406.08, "end": 2409.72, "text": " it doesn't really matter if in the real world,"}, {"start": 2409.72, "end": 2412.7599999999998, "text": " let's say two groups of people are equal in some respect"}, {"start": 2412.7599999999998, "end": 2415.72, "text": " or not equal in the true world, right?"}, {"start": 2415.72, "end": 2420.2, "text": " What they care about is that they are treated equally by the system, right?"}, {"start": 2420.2, "end": 2424.8399999999997, "text": " So they will impose, they will impose some restrictions"}, {"start": 2424.8399999999997, "end": 2430.3599999999997, "text": " or some condition on their model."}, {"start": 2430.3599999999997, "end": 2434.12, "text": " And they don't naturally, like this sounds bad,"}, {"start": 2434.12, "end": 2438.2799999999997, "text": " but it is the mathematical formulation is such that you start"}, {"start": 2438.2799999999997, "end": 2442.9599999999996, "text": " with the super knowledge of two things must be equal."}, {"start": 2442.96, "end": 2446.36, "text": " And then you, this is how you imagine your world."}, {"start": 2446.36, "end": 2448.52, "text": " You think, I know the world."}, {"start": 2448.52, "end": 2453.48, "text": " And then I try to learn the model such that that happens, right?"}, {"start": 2453.48, "end": 2455.84, "text": " Whereas over here, you do something different."}, {"start": 2455.84, "end": 2460.6, "text": " Now, some of it is in, as I said, is in the same category,"}, {"start": 2460.6, "end": 2466.32, "text": " but it is, I think, a different take and a different literature."}, {"start": 2466.32, "end": 2475.96, "text": " So I would focus on, let's say this part right here,"}, {"start": 2475.96, "end": 2481.0, "text": " sorry, on this part and not on the adversarial examples"}, {"start": 2481.0, "end": 2485.4, "text": " and also not in the fairness literature too much."}, {"start": 2485.4, "end": 2486.4, "text": " All right."}, {"start": 2486.4, "end": 2491.32, "text": " So yeah, you can see here, like no wonder this,"}, {"start": 2491.32, "end": 2498.0800000000004, "text": " and this screws up an image 
net classifier."}, {"start": 2498.0800000000004, "end": 2499.44, "text": " Yeah."}, {"start": 2499.44, "end": 2504.32, "text": " And even this, like how do we know that that is naturally natural?"}, {"start": 2504.32, "end": 2508.4, "text": " Though I can see that that looks pretty natural, but still."}, {"start": 2508.4, "end": 2511.88, "text": " It's probably like really specifically constructed"}, {"start": 2511.88, "end": 2514.0, "text": " such that the probability that someone"}, {"start": 2514.0, "end": 2518.6800000000003, "text": " would take this picture with a camera in the real world"}, {"start": 2518.68, "end": 2524.72, "text": " is zero."}, {"start": 2524.72, "end": 2525.48, "text": " Cool."}, {"start": 2525.48, "end": 2529.9199999999996, "text": " So they give some examples where they say, OK, shortcut learning"}, {"start": 2529.9199999999996, "end": 2534.2799999999997, "text": " exists in computer vision, for example, adversarial examples."}, {"start": 2534.2799999999997, "end": 2536.8399999999997, "text": " You see shifting the image by a few pixels, though."}, {"start": 2536.8399999999997, "end": 2539.3999999999996, "text": " You have to say shifting the image very precisely"}, {"start": 2539.3999999999996, "end": 2541.8399999999997, "text": " by a few pixels, such that the probability of this"}, {"start": 2541.8399999999997, "end": 2546.8799999999997, "text": " occurring in the data generation pipeline is zero."}, {"start": 2546.88, "end": 2551.0, "text": " And so on, then they call it domain transfer."}, {"start": 2551.0, "end": 2554.6400000000003, "text": " That, that, of course, is, I think that's the,"}, {"start": 2554.6400000000003, "end": 2556.2000000000003, "text": " that's a good example."}, {"start": 2556.2000000000003, "end": 2559.28, "text": " They say natural language processing,"}, {"start": 2559.28, "end": 2563.56, "text": " where Bert has been found to rely on superficial keywords."}, {"start": 2563.56, "end": 2566.2400000000002, "text": " For instance, it learned within a data set"}, {"start": 2566.2400000000002, "end": 2568.96, "text": " of natural language arguments, detecting the presence"}, {"start": 2568.96, "end": 2572.4, "text": " of not was sufficient to perform a love chance"}, {"start": 2572.4, "end": 2575.8, "text": " in finding the correct line of argumentation."}, {"start": 2575.8, "end": 2579.32, "text": " Again, this is like all we can do is construct data sets."}, {"start": 2579.32, "end": 2584.44, "text": " We cannot, if we could tell the model what to look at,"}, {"start": 2584.44, "end": 2590.36, "text": " we would, we would, we would just program the solution."}, {"start": 2590.36, "end": 2593.52, "text": " So the solution is, there's only one solution,"}, {"start": 2593.52, "end": 2597.1600000000003, "text": " program better data sets, get better data sets."}, {"start": 2597.1600000000003, "end": 2599.44, "text": " Or, I mean, OK, the second solution"}, {"start": 2599.44, "end": 2601.92, "text": " is get better inductive biases."}, {"start": 2601.92, "end": 2605.2400000000002, "text": " But if we knew the correct inductive biases,"}, {"start": 2605.24, "end": 2606.8799999999997, "text": " we wouldn't have the problem."}, {"start": 2610.68, "end": 2615.0, "text": " Yeah, I've, like that there is a, in NLP, this is very,"}, {"start": 2615.0, "end": 2618.68, "text": " this is very, very prevalent, even more than an envision,"}, {"start": 2618.68, "end": 2619.2, "text": " right?"}, {"start": 2619.2, "end": 2625.3199999999997, "text": " 
This, this fact of, hey, these curious correlations"}, {"start": 2625.3199999999997, "end": 2629.56, "text": " in NLP, the models usually just learn kind of correlation"}, {"start": 2629.56, "end": 2631.12, "text": " between some words."}, {"start": 2631.12, "end": 2635.2, "text": " And then they, they don't learn to understand the sentences"}, {"start": 2635.2, "end": 2636.2, "text": " at all, right?"}, {"start": 2636.2, "end": 2639.4, "text": " But this, this is because in NLP, we have even more problems"}, {"start": 2639.4, "end": 2642.3599999999997, "text": " with constructing data sets that force the model"}, {"start": 2642.3599999999997, "end": 2646.92, "text": " to learn to understand the, the text."}, {"start": 2646.92, "end": 2650.8399999999997, "text": " Again, I could not tell you what understanding the text means."}, {"start": 2650.8399999999997, "end": 2654.16, "text": " They go, in, by the way, humans do that too."}, {"start": 2654.16, "end": 2657.6, "text": " Humans, most in most of NLP that happens in humans,"}, {"start": 2657.6, "end": 2661.44, "text": " humans do this, right, in many, many, many forms."}, {"start": 2661.44, "end": 2664.16, "text": " This is simply because the cost function is not aligned"}, {"start": 2664.16, "end": 2665.7999999999997, "text": " with what you would want."}, {"start": 2668.44, "end": 2671.52, "text": " What is the specific, oh, well, a specific example"}, {"start": 2671.52, "end": 2675.48, "text": " is that news stories nowadays, right?"}, {"start": 2675.48, "end": 2678.68, "text": " You have a news, you say news."}, {"start": 2678.68, "end": 2680.36, "text": " What do people expect?"}, {"start": 2680.36, "end": 2681.68, "text": " What is the intent of news?"}, {"start": 2681.68, "end": 2684.44, "text": " The intent of news is to inform you, maybe."}, {"start": 2684.44, "end": 2690.96, "text": " But the cost, right, the cost function is clicks."}, {"start": 2690.96, "end": 2692.7200000000003, "text": " So what do you do?"}, {"start": 2692.7200000000003, "end": 2695.56, "text": " You, you news story and vary on the top in the title,"}, {"start": 2695.56, "end": 2700.92, "text": " you say orange, man, bad."}, {"start": 2700.92, "end": 2703.96, "text": " And then people, you highlight this, right?"}, {"start": 2703.96, "end": 2707.2000000000003, "text": " So a news story, I don't know, Brad Pitt had a new baby."}, {"start": 2707.2000000000003, "end": 2709.68, "text": " You just append bot orange, man, bad."}, {"start": 2709.68, "end": 2711.32, "text": " People click on it much more."}, {"start": 2711.32, "end": 2715.0, "text": " Your cost goes up and sorry, your clicks go up."}, {"start": 2715.0, "end": 2716.76, "text": " Your cost function goes up."}, {"start": 2716.76, "end": 2720.96, "text": " And so I think this happens everywhere, right?"}, {"start": 2720.96, "end": 2724.2400000000002, "text": " You can't even do this with humans, right?"}, {"start": 2724.2400000000002, "end": 2727.2000000000003, "text": " What do you expect neural networks to do that?"}, {"start": 2728.96, "end": 2733.6000000000004, "text": " All right, agent-based reinforcement learning,"}, {"start": 2733.6000000000004, "end": 2736.0, "text": " I find this pretty funny."}, {"start": 2739.28, "end": 2740.8, "text": " Where is it?"}, {"start": 2740.8, "end": 2743.48, "text": " Where it learned how to play Tetris?"}, {"start": 2743.48, "end": 2745.92, "text": " Yeah, instead of learning how to play Tetris,"}, {"start": 2745.92, "end": 2751.52, "text": " an algorithm 
simply learned to pause the game to fake news."}, {"start": 2751.52, "end": 2754.04, "text": " Come on, it's genius, right?"}, {"start": 2754.04, "end": 2760.1200000000003, "text": " Like it is objectively genius."}, {"start": 2760.1200000000003, "end": 2763.92, "text": " And then of course fairness and algorithmic decision making."}, {"start": 2763.92, "end": 2767.6800000000003, "text": " Right, so they say understanding these shortcuts,"}, {"start": 2767.68, "end": 2771.8399999999997, "text": " and they touch on a lot of the things that I've touched on,"}, {"start": 2771.8399999999997, "end": 2781.0, "text": " including what I find well is, for example, this Morgan's"}, {"start": 2781.0, "end": 2782.68, "text": " can for machine learning, where you say,"}, {"start": 2782.68, "end": 2785.24, "text": " probably a machine learning system will learn the easiest"}, {"start": 2785.24, "end": 2786.16, "text": " feature it can."}, {"start": 2786.16, "end": 2789.96, "text": " And that's oftentimes not what you want, right?"}, {"start": 2789.96, "end": 2793.08, "text": " So this even now amplifies things."}, {"start": 2793.08, "end": 2798.04, "text": " They also touch on this thing of anthropomorphism, where you view"}, {"start": 2798.04, "end": 2802.24, "text": " everything through a human lens, and that is not correct."}, {"start": 2802.24, "end": 2806.48, "text": " If you look at these neural networks, they're not humans,"}, {"start": 2806.48, "end": 2811.4, "text": " and we should never attribute humanness to their solutions,"}, {"start": 2811.4, "end": 2813.56, "text": " never attribute to high-level abilities,"}, {"start": 2813.56, "end": 2817.16, "text": " that which can be adequately explained by short-learning."}, {"start": 2817.16, "end": 2818.4, "text": " Yes, I agree with this."}, {"start": 2818.4, "end": 2822.36, "text": " Like I agree with this paper in all the things it says, right?"}, {"start": 2822.36, "end": 2826.6800000000003, "text": " Except this, detecting shortcuts, making"}, {"start": 2826.6800000000003, "end": 2829.48, "text": " OOD generalization tests a standard practice."}, {"start": 2829.48, "end": 2834.7200000000003, "text": " For the reasons I specified before, I think that is counterproductive."}, {"start": 2834.7200000000003, "end": 2839.0, "text": " And yeah, I think I've already said enough, right?"}, {"start": 2839.0, "end": 2841.1600000000003, "text": " Designing good OOD tests."}, {"start": 2841.1600000000003, "end": 2844.28, "text": " This, you can only design good OOD tests"}, {"start": 2844.28, "end": 2850.2000000000003, "text": " if you know the real underlying data distribution, which you don't."}, {"start": 2850.2, "end": 2853.56, "text": " And let's go through it."}, {"start": 2853.56, "end": 2856.2, "text": " Yeah, again, the principle of least effort, they say,"}, {"start": 2856.2, "end": 2857.8799999999997, "text": " why are they learned?"}, {"start": 2857.8799999999997, "end": 2860.8799999999997, "text": " Because it's just easier, right?"}, {"start": 2860.8799999999997, "end": 2868.2, "text": " To, it's just easier to write a new story with just the words"}, {"start": 2868.2, "end": 2871.16, "text": " you know people will click on, right?"}, {"start": 2871.16, "end": 2873.8399999999997, "text": " Like, or these top 10 things of blah, blah, blah."}, {"start": 2873.8399999999997, "end": 2876.6, "text": " Number seven will surprise you."}, {"start": 2876.6, "end": 2880.0, "text": " You don't actually have to come up with 10 relevant things."}, 
{"start": 2880.0, "end": 2885.24, "text": " The entire title is enough to get you the clicks."}, {"start": 2885.24, "end": 2889.96, "text": " So it's the least effort to solve the cost function."}, {"start": 2889.96, "end": 2891.88, "text": " It might not align with what you want."}, {"start": 2894.92, "end": 2897.2, "text": " And also the inductive biases."}, {"start": 2897.2, "end": 2899.04, "text": " As I said, we are humans."}, {"start": 2899.04, "end": 2902.56, "text": " We have some inductive biases that neural networks don't have them."}, {"start": 2902.56, "end": 2905.16, "text": " And we need to take this into account."}, {"start": 2905.16, "end": 2913.7599999999998, "text": " But the solution is to make training data sets that take this into account."}, {"start": 2913.7599999999998, "end": 2918.0, "text": " All right, they say beyond chocolate learning, this is kind of an outlook."}, {"start": 2918.0, "end": 2926.64, "text": " And then a conclusion where they remind, but we're already at some 45 minutes of video."}, {"start": 2926.64, "end": 2931.8799999999997, "text": " And if you're still here, like respect, or maybe you just have this in the background"}, {"start": 2931.88, "end": 2938.6800000000003, "text": " and have some company during this time, I will finish with saying thank you for watching"}, {"start": 2938.6800000000003, "end": 2941.84, "text": " and leave your comments."}, {"start": 2941.84, "end": 2948.36, "text": " Since this is mostly opinion, I would be interested in hearing your comments on this."}, {"start": 2948.36, "end": 2978.32, "text": " With that, I say bye-bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=Ok44otx90D4 | Feature Visualization & The OpenAI microscope | A closer look at the OpenAI microscope, a database of visualizations of the inner workings of ImageNet classifiers, along with an explanation of how to obtain these visualizations.
https://distill.pub/2017/feature-visualization/
https://microscope.openai.com/models
https://github.com/tensorflow/lucid
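The lucid repository linked above is the library these visualizations are produced with; as a rough orientation, its quickstart (sketched here from the repo's tutorial for the TensorFlow 1.x era versions of the library; the layer/channel index is just an arbitrary example) renders a channel visualization in a few lines:

```python
# Rough sketch of the lucid quickstart (TF1-era API, assumed from the
# repo's tutorial); the objective string picks an arbitrary channel.
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()   # the GoogLeNet/Inception-v1 used in the microscope
model.load_graphdef()

# Optimize an input image that maximally activates channel 476 of the
# "mixed4a" layer (pre-ReLU); returns the rendered image(s).
images = render.render_vis(model, "mixed4a_pre_relu:476")
```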
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're going to take a look at the OpenAI microscope and this article on the still called feature visualization. So the feature visualization article is by Chris Ola Alexander Mortevinsv and Ludwig Schubert of the Google Brain team while the OpenAI microscope is by OpenAI. So keep that in mind. These tools are tools for visualizing what neural networks learn and specifically here we're dealing with image classifiers and even more specifically image net classifiers. So you know image net is this big data set with a thousand classes of images and the images are somewhat like 200 by 200 pixels and you're just supposed to put them into one of these 1000 classes and networks have become really good at these kinds of things. So our question is what do these networks learn and there are a number of very cool works that have started to investigate what these networks learn. So this started with work like deep dream or even before that but this is a very good summary and also and also overview of this. So in this article they first showcase these patterns here where you can see that as you go through the network in the layer this is layer conf2d0 which is a very low layer in the network. You have things what looks like pattern just pattern detectors. So these are the things that the network is excited and we'll just get we'll get in a second at how you create these things. Just be sure these are the things that these particular layers in the networks are most excited by. So this layer is super excited by these textures. Then but as you go higher the network gets excited by these kind of textures then it gets excited by more complex things. So the pattern is the higher you go in the layers and these people have been seeing and claiming this and measuring this since RBF networks and whatnot that the higher you go in these networks the more complex features that they build. So they always build hypothesis is they build very complex features from the lower layers and the lower layers they have less less complex features and until you're the very bottom layer they simply extract edges and patterns of texture like this whereas in the top layers all of these are hierarchically assembled to give you very very intricate features. So these usually look pretty funky that's why I like to investigate. So these article focuses on how is this done and the answer is by optimization. Now what do we mean? I actually have the article here somewhat printed out but the graphics they don't really print very well to to the notebook here. So imagine this here. So what you want to do is you want to see how much activation is in the network for a particular input and you can do this in many many different ways. The easiest form, let's actually start over here if you have a neural network here and this is the softmax classifier and these are the classes. In this case let's say this is dog this is cat and this is car house. Let's go with house. You can think of okay I want to know when the network sees a cat. What does the network think of of cats? So what I would do is I would take an image that is just noise right just random noise like this one on the left here and I would start to optimize using back propagation. I would start to optimize this image. 
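As a rough, self-contained sketch of the optimization being described here (my own illustration, not the article's code: it assumes a pretrained torchvision GoogLeNet/Inception-v1, skips input normalization and the regularizers discussed later, and the class index is just an example), gradient ascent on the pixels with the weights frozen looks like this:

```python
import torch
import torchvision.models as models

# Pretrained ImageNet classifier; the weights stay fixed, only the image moves.
model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Start from random noise and make the *input* the optimization variable.
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)

target_class = 281  # should be "tabby cat" in the standard ImageNet ordering

for step in range(256):
    optimizer.zero_grad()
    logits = model(img)
    # Maximize the pre-softmax logit of the target class (so minimize its
    # negative). With no regularization this tends toward high-frequency,
    # adversarial-looking patterns, which is exactly why the article adds
    # transformations/regularizers on top.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()
```

The neuron/channel visualizations described next are the same loop with a different objective, e.g. registering a forward hook on an intermediate layer and maximizing the mean activation of one channel instead of a class logit.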
Now you usually know back propagation as the thing that will optimize these weights if we have given an image and a label right but right now you have to so and we optimize the weights here sorry I should theta right but right now you have to rethink what we have given is the label and the weights we keep them constant but we optimize the input image to maximize this label as much as possible. So we ask of xx please please update yourself such that the output is as much cat as possible right and then we we just optimize for that and so we hope that this picture would turn out to be as much cat as we like and usually here I don't know exactly which class they have here but you won't get a cat sorry over here you won't get a cat or here if you optimize the log it's the same just you do the same thing before the softmax what you will get is some real trippy thing so in our classifier you might get something like there's a cat here but there's also one here right because two cats are more cat than one cat and there's like a giant cat head here and inside the cat head there is another cat head and there's like a cat tail somewhere here and again this there's a cat eye right here so you get like something super trippy that is as much cat as possible all right now so this is somewhat somewhat interesting because you can find out what what does the network think is the most cat-like thing there is and the most dog-like thing there is but what you can also do is you can see what do the intermediate layers of the networks what do they get excited by so what you can do is for example you can take an individual neuron in one of the layers right so this here might be a convolutional layer with its convolutional filters right and these are the different channels of the convolutional layer and then this thing here is just a single neuron there and what you can do is you can you can say okay x we have now again we input the image x and we optimize the image x such that with a given weights this particular neuron let's call it n and n is and is maybe this neuron right here right this particular neuron is as much activated as possible right so we no longer optimize for a label but we optimize for a particular neuron to be activated as much as possible and then we can sort of see what is an image that activates this neuron as much as possible in this case that thing you can actually do the same thing with an entire channel if you simply say when is the channel as a whole activated as much as possible and something like a deep dream did the same thing but with an entire layer in the neural network like what is this layer activated by they can imagine there's not only one image that activates it the most but probably depending on your start here it's it's it's a you'll get different different results for depending on where you start and we'll go into that later as well so these are the kinds of things you can do to investigate these neural networks and see what they what they pay attention to in each of the layers right so let's go on here and so they say they say okay if we optimize by optimization you get something like this here on the bottom row looks pretty funky right but what you could also do is you can visualize by what there's called data set examples and date with data set examples you don't do this thing you don't do the optimization procedure but you go into your into your database into your data set and you find the images that activate so the x i from your data set that activate a particular 
neuron the most so you simply sort all the images in your database and you just pick the 10 or so that activate that particular neuron the most and that would also give you an understanding it's actually a valid thing and the AI microscope combines both of them so on the bottom you see what particular neurons are most excited by and at the top you see that the data set examples they're most excited by so you can see that there is a there's a healthy diversity in the data set examples but they also all kind of up to the to the what it's most excited by it's pretty cool the last point they make in this article is that of diversity with the data set examples you do get naturally sort of a diverse set of images of course where you can guess you can also say okay whether the maximum activations and only slightly positive you can even give negative examples but with the positive you can only either maximize fully or you can you know take the negative and you can say okay what is this neuron not excited by at all but you you won't get kind of a spectrum of what the neuron is excited by or the unit or the layer and what they're doing is simply the added diversity term they say that works best so what does that mean it means that here if we optimize x and y right let's go up here if we optimize x in that right and try to maximize the activation of n we don't want to do x by itself but what we want to do is we want to do an entire set of x i right so we feed an entire mini batch in there and we want to maximize n but also maximize a diversity term let's call that d between the x1 to x b right so you want to maximize the activation of the neuron but at the same time in your loss function that you optimize you also have this diversity term where you say the images that you produce should be far apart from each other or kind of apart from each other and thus you do get diverse samples okay the printing again doesn't work so here you see that if you just simply optimize you get the thing on the left but if you have a batch of things and you optimize them to be diverse from each other but also activate the layer or the unit you get a variety of high activations and you can see here that this is some sort of curve this could be a beak of a bird but on the right here this could also be kind of a snout of a monkey or something okay so it's just curvy curvy things that is activated by here they give another example you can clearly see that there is like some sort of i in the picture but then if you optimize with diversity you can see that some of them do not have this i thing and in fact also some of the dataset examples do not have this i thing so it might be interesting to to optimize here with diversity and even in they say in higher layers it gets even more diverse with what you achieve with this ball detector they also say they research interactions between neurons where they can interpolate between them right so you can here have two different new units let's say this top left one here is the thing that we've just seen with the curvature activator and then on the right we can select this thing here that appears to be activated by these bird-like bird-like things and if you optimize an image that activates both of them you get the thing on the bottom left and this is very good for understanding how neural networks work because what a neural network will do is exactly it will take the thing on the top left and the top right from lower layers and it will combine them to form a features of higher layers so 
while the top right thing looks like generic birds the bottom left thing looks much more like birds with let's say long necks and then curved necks or more stork-ish birds because we've added in this curvature thing so this is very very cool to play around you can also here interpolate between between neurons like you would interpolate in a in a gan and yeah so they do make a point of regularization I don't want to go into that particularly but you have to be careful you can just apply the optimization procedure as I said right now you actually have to do some regularization and to get to get rid of what are essentially adversarial examples in this process because if you just straight up optimize you will get pretty high frequency crappy results I actually want to jump over now to this opening microscope so this is a tool that lets you explore these visualizations so at the beginning you can pick one of these models and I'll pick inception V1 just because some of the other ones they don't have everything done quite yet all the all the all the visualizations so on the right here you can actually see the architecture of the network if you know what an inception network is so this looks like and you you would be able from here to select one of these units straight away but I'm gonna sorry we're gonna go to the left here so you have deep dream activated which means the entire layer is optimized for so per layer on the right side you have an image here and you can already see that if we go from the bottom what we saw before we get patterns that become more and more complex as you go up the layers and then more and more until you finally have what the network appears to be most activated by is mostly dogs which is okay because image net is dominated by dogs so you can click on any of these right here like this one and now you'll be able to to inspect the individual nodes in this layer so before we had the whole layer right the whole layer was activated by something but these layers they have different channels and also different neurons within the channel so you can select this here you can go neuron activation or channel activation and these are the images that these channels are excited by the most you see you get pretty funky pattern if we select one interesting one maybe this this one right here you can see on the left this is the channel optimizing optimization on the right this is the neuron optimization and here you get the data set examples that are most activated that mostly activate this particular channel or neuron so sorry this particular yeah channel so you can see this is pretty similar to the thing I drew where except for it being a cat it's some sort of a fox dog thing right and and you can explore the neural network in this fashion so you can go through the layer here and look at that you need this seems to be whiskers classifier and lo and behold things with whiskers will activate the neuron and as you go up the layer and this is the the cool thing right so we're right now we're here in this layer for as you go up you will see more and more intricate patterns of activations I I could play around this for very very long time but I won't I won't waste your time too much they there is a disk sorry there is a a slack workspace where people discuss interesting patterns what is this okay yes this is a some sort of temple temple constructor very cool there is a slack workspace where people discuss interesting things for example they discuss how the car detector that you see right here 
is one of the units there is literally endless units to look at that text cars can be clearly seen to be built from lower-level features such as this wheel detector you see this wheel detector here is the unit 3 3 7 in the mixed four layer and this car hood detector right here is unit 2 3 7 also in the in one of the layer fours right so these are both from layer 4 and then the car detector I actually look this up is in layer 4 as well but let's check it out this is in layer 4b this is in layer 4b and this is in layer 4c so you see I this was a risk the car detector is built from lower level features of car hood and car wheel right the car wheel right here detects wheels and the car hood detector detects hoods and then the car detector detects cars so there are very like I really invite you to go look at it check out what people find and explore these models all of this is based on this lucid library right here also invite you to check that out where you can perform such optimizations yourself I will link to that and with that bye bye | [{"start": 0.0, "end": 6.88, "text": " Hi there. Today we're going to take a look at the OpenAI microscope and this"}, {"start": 6.88, "end": 12.36, "text": " article on the still called feature visualization. So the feature visualization"}, {"start": 12.36, "end": 18.88, "text": " article is by Chris Ola Alexander Mortevinsv and Ludwig Schubert of the"}, {"start": 18.88, "end": 26.84, "text": " Google Brain team while the OpenAI microscope is by OpenAI. So keep that in"}, {"start": 26.84, "end": 33.16, "text": " mind. These tools are tools for visualizing what neural networks learn and"}, {"start": 33.16, "end": 39.2, "text": " specifically here we're dealing with image classifiers and even more specifically"}, {"start": 39.2, "end": 43.64, "text": " image net classifiers. So you know image net is this big data set with a thousand"}, {"start": 43.64, "end": 49.8, "text": " classes of images and the images are somewhat like 200 by 200 pixels and you're"}, {"start": 49.8, "end": 55.6, "text": " just supposed to put them into one of these 1000 classes and networks have"}, {"start": 55.6, "end": 60.800000000000004, "text": " become really good at these kinds of things. So our question is what do these"}, {"start": 60.800000000000004, "end": 67.32000000000001, "text": " networks learn and there are a number of very cool works that have started to"}, {"start": 67.32000000000001, "end": 72.36, "text": " investigate what these networks learn. So this started with work like deep"}, {"start": 72.36, "end": 80.08, "text": " dream or even before that but this is a very good summary and also and also"}, {"start": 80.08, "end": 86.03999999999999, "text": " overview of this. So in this article they first showcase these patterns here"}, {"start": 86.03999999999999, "end": 92.92, "text": " where you can see that as you go through the network in the layer this is layer"}, {"start": 92.92, "end": 100.36, "text": " conf2d0 which is a very low layer in the network. You have things what looks"}, {"start": 100.36, "end": 104.92, "text": " like pattern just pattern detectors. So these are the things that the network is"}, {"start": 104.92, "end": 110.36, "text": " excited and we'll just get we'll get in a second at how you create these things."}, {"start": 110.36, "end": 116.96000000000001, "text": " Just be sure these are the things that these particular layers in the networks"}, {"start": 116.96000000000001, "end": 124.92, "text": " are most excited by. 
So this layer is super excited by these textures. Then but"}, {"start": 124.92, "end": 131.92000000000002, "text": " as you go higher the network gets excited by these kind of textures then it gets"}, {"start": 131.92, "end": 138.39999999999998, "text": " excited by more complex things. So the pattern is the higher you go in the"}, {"start": 138.39999999999998, "end": 142.92, "text": " layers and these people have been seeing and claiming this and measuring this"}, {"start": 142.92, "end": 152.32, "text": " since RBF networks and whatnot that the higher you go in these networks the"}, {"start": 152.32, "end": 159.16, "text": " more complex features that they build. So they always build hypothesis is they"}, {"start": 159.16, "end": 164.32, "text": " build very complex features from the lower layers and the lower layers they"}, {"start": 164.32, "end": 169.07999999999998, "text": " have less less complex features and until you're the very bottom layer they"}, {"start": 169.07999999999998, "end": 174.76, "text": " simply extract edges and patterns of texture like this whereas in the top"}, {"start": 174.76, "end": 180.28, "text": " layers all of these are hierarchically assembled to give you very very"}, {"start": 180.28, "end": 186.8, "text": " intricate features. So these usually look pretty funky that's why I like to"}, {"start": 186.8, "end": 194.0, "text": " investigate. So these article focuses on how is this done and the answer is by"}, {"start": 194.0, "end": 198.52, "text": " optimization. Now what do we mean? I actually have the article here somewhat"}, {"start": 198.52, "end": 205.60000000000002, "text": " printed out but the graphics they don't really print very well to to the notebook"}, {"start": 205.6, "end": 217.6, "text": " here. So imagine this here. So what you want to do is you want to see how much"}, {"start": 217.6, "end": 222.76, "text": " activation is in the network for a particular input and you can do this in many"}, {"start": 222.76, "end": 230.07999999999998, "text": " many different ways. The easiest form, let's actually start over here if you"}, {"start": 230.08, "end": 236.0, "text": " have a neural network here and this is the softmax classifier and these are the"}, {"start": 236.0, "end": 243.0, "text": " classes. In this case let's say this is dog this is cat and this is car house."}, {"start": 243.0, "end": 255.24, "text": " Let's go with house. You can think of okay I want to know when the network sees"}, {"start": 255.24, "end": 261.6, "text": " a cat. What does the network think of of cats? So what I would do is I would take an"}, {"start": 261.6, "end": 268.0, "text": " image that is just noise right just random noise like this one on the left"}, {"start": 268.0, "end": 274.76, "text": " here and I would start to optimize using back propagation. I would start to"}, {"start": 274.76, "end": 279.88, "text": " optimize this image. 
Now you usually know back propagation as the thing that"}, {"start": 279.88, "end": 285.48, "text": " will optimize these weights if we have given an image and a label right but"}, {"start": 285.48, "end": 292.71999999999997, "text": " right now you have to so and we optimize the weights here sorry I should theta"}, {"start": 292.71999999999997, "end": 298.64, "text": " right but right now you have to rethink what we have given is the label and"}, {"start": 298.64, "end": 303.71999999999997, "text": " the weights we keep them constant but we optimize the input image to"}, {"start": 303.72, "end": 310.68, "text": " maximize this label as much as possible. So we ask of xx please please update"}, {"start": 310.68, "end": 317.40000000000003, "text": " yourself such that the output is as much cat as possible right and then we we"}, {"start": 317.40000000000003, "end": 326.28000000000003, "text": " just optimize for that and so we hope that this picture would turn out to be as"}, {"start": 326.28, "end": 334.67999999999995, "text": " much cat as we like and usually here I don't know exactly which class they have"}, {"start": 334.67999999999995, "end": 340.88, "text": " here but you won't get a cat sorry over here you won't get a cat or here if you"}, {"start": 340.88, "end": 345.4, "text": " optimize the log it's the same just you do the same thing before the softmax"}, {"start": 345.4, "end": 351.52, "text": " what you will get is some real trippy thing so in our classifier you might get"}, {"start": 351.52, "end": 359.2, "text": " something like there's a cat here but there's also one here right because two"}, {"start": 359.2, "end": 363.79999999999995, "text": " cats are more cat than one cat and there's like a giant cat head here and"}, {"start": 363.79999999999995, "end": 368.91999999999996, "text": " inside the cat head there is another cat head and there's like a cat tail"}, {"start": 368.91999999999996, "end": 374.0, "text": " somewhere here and again this there's a cat eye right here so you get like"}, {"start": 374.0, "end": 382.88, "text": " something super trippy that is as much cat as possible all right now so this is"}, {"start": 382.88, "end": 386.6, "text": " somewhat somewhat interesting because you can find out what what does the"}, {"start": 386.6, "end": 390.36, "text": " network think is the most cat-like thing there is and the most dog-like thing"}, {"start": 390.36, "end": 396.32, "text": " there is but what you can also do is you can see what do the intermediate layers"}, {"start": 396.32, "end": 402.48, "text": " of the networks what do they get excited by so what you can do is for example"}, {"start": 402.48, "end": 407.6, "text": " you can take an individual neuron in one of the layers right so this here"}, {"start": 407.6, "end": 412.56, "text": " might be a convolutional layer with its convolutional filters right and these"}, {"start": 412.56, "end": 417.84000000000003, "text": " are the different channels of the convolutional layer and then this thing here"}, {"start": 417.84000000000003, "end": 426.24, "text": " is just a single neuron there and what you can do is you can you can say"}, {"start": 426.24, "end": 438.64, "text": " okay x we have now again we input the image x and we optimize the image x such"}, {"start": 438.64, "end": 445.8, "text": " that with a given weights this particular neuron let's call it n and n is"}, {"start": 445.8, "end": 451.36, "text": " and is maybe this neuron right here right this particular neuron is as much"}, {"start": 451.36, 
"end": 456.44, "text": " activated as possible right so we no longer optimize for a label but we"}, {"start": 456.44, "end": 462.04, "text": " optimize for a particular neuron to be activated as much as possible and then"}, {"start": 462.04, "end": 468.6, "text": " we can sort of see what is an image that activates this neuron as much as"}, {"start": 468.6, "end": 474.6, "text": " possible in this case that thing you can actually do the same thing with an"}, {"start": 474.6, "end": 479.40000000000003, "text": " entire channel if you simply say when is the channel as a whole activated as"}, {"start": 479.4, "end": 484.12, "text": " much as possible and something like a deep dream did the same thing but with an"}, {"start": 484.12, "end": 491.08, "text": " entire layer in the neural network like what is this layer activated by they can"}, {"start": 491.08, "end": 495.2, "text": " imagine there's not only one image that activates it the most but probably"}, {"start": 495.2, "end": 503.23999999999995, "text": " depending on your start here it's it's it's a you'll get different different"}, {"start": 503.23999999999995, "end": 509.23999999999995, "text": " results for depending on where you start and we'll go into that later as well so"}, {"start": 509.24, "end": 513.84, "text": " these are the kinds of things you can do to investigate these neural networks"}, {"start": 513.84, "end": 520.72, "text": " and see what they what they pay attention to in each of the layers right so"}, {"start": 520.72, "end": 531.48, "text": " let's go on here and so they say they say okay if we optimize by optimization you"}, {"start": 531.48, "end": 539.64, "text": " get something like this here on the bottom row looks pretty funky right but what"}, {"start": 539.64, "end": 544.16, "text": " you could also do is you can visualize by what there's called data set"}, {"start": 544.16, "end": 550.0, "text": " examples and date with data set examples you don't do this thing you don't do"}, {"start": 550.0, "end": 555.36, "text": " the optimization procedure but you go into your into your database into your"}, {"start": 555.36, "end": 563.52, "text": " data set and you find the images that activate so the x i from your data set"}, {"start": 563.52, "end": 568.32, "text": " that activate a particular neuron the most so you simply sort all the images"}, {"start": 568.32, "end": 572.96, "text": " in your database and you just pick the 10 or so that activate that particular"}, {"start": 572.96, "end": 576.44, "text": " neuron the most and that would also give you an understanding it's actually a"}, {"start": 576.44, "end": 583.44, "text": " valid thing and the AI microscope combines both of them so on the bottom you"}, {"start": 583.44, "end": 591.0, "text": " see what particular neurons are most excited by and at the top you see that"}, {"start": 591.0, "end": 594.96, "text": " the data set examples they're most excited by so you can see that there is a"}, {"start": 594.96, "end": 600.0, "text": " there's a healthy diversity in the data set examples but they also all kind of"}, {"start": 600.0, "end": 608.5600000000001, "text": " up to the to the what it's most excited by it's pretty cool the last point"}, {"start": 608.56, "end": 614.0799999999999, "text": " they make in this article is that of diversity with the data set examples you do"}, {"start": 614.0799999999999, "end": 621.1999999999999, "text": " get naturally sort of a diverse set of images of course where you can guess you"}, {"start": 621.1999999999999, "end": 
625.52, "text": " can also say okay whether the maximum activations and only slightly positive"}, {"start": 625.52, "end": 630.88, "text": " you can even give negative examples but with the positive you can only either"}, {"start": 630.88, "end": 636.56, "text": " maximize fully or you can you know take the negative and you can say okay what"}, {"start": 636.56, "end": 643.04, "text": " is this neuron not excited by at all but you you won't get kind of a spectrum of"}, {"start": 643.04, "end": 649.7199999999999, "text": " what the neuron is excited by or the unit or the layer and what they're doing"}, {"start": 649.7199999999999, "end": 656.16, "text": " is simply the added diversity term they say that works best so what does that"}, {"start": 656.16, "end": 664.28, "text": " mean it means that here if we optimize x and y right let's go up here if we"}, {"start": 664.28, "end": 672.76, "text": " optimize x in that right and try to maximize the activation of n we don't"}, {"start": 672.76, "end": 679.6, "text": " want to do x by itself but what we want to do is we want to do an entire set of"}, {"start": 679.6, "end": 687.92, "text": " x i right so we feed an entire mini batch in there and we want to maximize n but"}, {"start": 687.92, "end": 699.4799999999999, "text": " also maximize a diversity term let's call that d between the x1 to x b"}, {"start": 699.4799999999999, "end": 706.0799999999999, "text": " right so you want to maximize the activation of the neuron but at the same time"}, {"start": 706.0799999999999, "end": 710.0, "text": " in your loss function that you optimize you also have this diversity term where"}, {"start": 710.0, "end": 716.16, "text": " you say the images that you produce should be far apart from each other or"}, {"start": 716.16, "end": 723.24, "text": " kind of apart from each other and thus you do get diverse samples okay the"}, {"start": 723.24, "end": 727.92, "text": " printing again doesn't work so here you see that if you just simply"}, {"start": 727.92, "end": 732.8, "text": " optimize you get the thing on the left but if you have a batch of things and you"}, {"start": 732.8, "end": 738.12, "text": " optimize them to be diverse from each other but also activate the layer or the"}, {"start": 738.12, "end": 745.0799999999999, "text": " unit you get a variety of high activations and you can see here that this is"}, {"start": 745.08, "end": 749.88, "text": " some sort of curve this could be a beak of a bird but on the right here this could"}, {"start": 749.88, "end": 755.76, "text": " also be kind of a snout of a monkey or something okay so it's just curvy"}, {"start": 755.76, "end": 761.6800000000001, "text": " curvy things that is activated by here they give another example you can"}, {"start": 761.6800000000001, "end": 766.32, "text": " clearly see that there is like some sort of i in the picture but then if you"}, {"start": 766.32, "end": 772.0400000000001, "text": " optimize with diversity you can see that some of them do not have this i thing"}, {"start": 772.04, "end": 779.88, "text": " and in fact also some of the dataset examples do not have this i thing so it"}, {"start": 779.88, "end": 785.24, "text": " might be interesting to to optimize here with diversity and even in they say in"}, {"start": 785.24, "end": 794.16, "text": " higher layers it gets even more diverse with what you achieve with this"}, {"start": 794.16, "end": 802.52, "text": " ball detector they also say they research interactions between neurons where"}, {"start": 802.52, "end": 
807.76, "text": " they can interpolate between them right so you can here have two different"}, {"start": 807.76, "end": 813.28, "text": " new units let's say this top left one here is the thing that we've just seen"}, {"start": 813.28, "end": 818.68, "text": " with the curvature activator and then on the right we can select this thing"}, {"start": 818.68, "end": 823.8399999999999, "text": " here that appears to be activated by these bird-like bird-like things and if you"}, {"start": 823.84, "end": 828.24, "text": " optimize an image that activates both of them you get the thing on the bottom"}, {"start": 828.24, "end": 833.08, "text": " left and this is very good for understanding how neural networks work because"}, {"start": 833.08, "end": 837.0400000000001, "text": " what a neural network will do is exactly it will take the thing on the top left"}, {"start": 837.0400000000001, "end": 842.2, "text": " and the top right from lower layers and it will combine them to form a"}, {"start": 842.2, "end": 847.4, "text": " features of higher layers so while the top right thing looks like generic birds"}, {"start": 847.4, "end": 853.0400000000001, "text": " the bottom left thing looks much more like birds with let's say long necks and"}, {"start": 853.04, "end": 859.12, "text": " then curved necks or more stork-ish birds because we've added in this"}, {"start": 859.12, "end": 865.4399999999999, "text": " curvature thing so this is very very cool to play around you can also here"}, {"start": 865.4399999999999, "end": 870.28, "text": " interpolate between between neurons like you would interpolate in a in a"}, {"start": 870.28, "end": 878.88, "text": " gan and yeah so they do make a point of regularization I don't want to go into"}, {"start": 878.88, "end": 883.56, "text": " that particularly but you have to be careful you can just apply the optimization"}, {"start": 883.56, "end": 889.76, "text": " procedure as I said right now you actually have to do some regularization and"}, {"start": 889.76, "end": 894.92, "text": " to get to get rid of what are essentially adversarial examples in this"}, {"start": 894.92, "end": 900.72, "text": " process because if you just straight up optimize you will get pretty high"}, {"start": 900.72, "end": 908.48, "text": " frequency crappy results I actually want to jump over now to this opening"}, {"start": 908.48, "end": 914.32, "text": " microscope so this is a tool that lets you explore these visualizations so at"}, {"start": 914.32, "end": 918.36, "text": " the beginning you can pick one of these models and I'll pick inception V1 just"}, {"start": 918.36, "end": 924.44, "text": " because some of the other ones they don't have everything done quite yet all"}, {"start": 924.44, "end": 931.16, "text": " the all the all the visualizations so on the right here you can actually see"}, {"start": 931.16, "end": 935.9200000000001, "text": " the architecture of the network if you know what an inception network is so"}, {"start": 935.92, "end": 940.7199999999999, "text": " this looks like and you you would be able from here to select one of these"}, {"start": 940.7199999999999, "end": 948.36, "text": " units straight away but I'm gonna sorry we're gonna go to the left here so you"}, {"start": 948.36, "end": 958.3199999999999, "text": " have deep dream activated which means the entire layer is optimized for so per"}, {"start": 958.3199999999999, "end": 965.12, "text": " layer on the right side you have an image here and you can already see that if"}, {"start": 965.12, "end": 
970.28, "text": " we go from the bottom what we saw before we get patterns that become more and"}, {"start": 970.28, "end": 976.64, "text": " more complex as you go up the layers and then more and more until you finally"}, {"start": 976.64, "end": 982.52, "text": " have what the network appears to be most activated by is mostly dogs which is"}, {"start": 982.52, "end": 989.72, "text": " okay because image net is dominated by dogs so you can click on any of these"}, {"start": 989.72, "end": 998.4, "text": " right here like this one and now you'll be able to to inspect the individual"}, {"start": 998.4, "end": 1002.52, "text": " nodes in this layer so before we had the whole layer right the whole layer was"}, {"start": 1002.52, "end": 1008.08, "text": " activated by something but these layers they have different channels and also"}, {"start": 1008.08, "end": 1013.24, "text": " different neurons within the channel so you can select this here you can go"}, {"start": 1013.24, "end": 1021.64, "text": " neuron activation or channel activation and these are the images that these"}, {"start": 1021.64, "end": 1027.2, "text": " channels are excited by the most you see you get pretty funky pattern if we"}, {"start": 1027.2, "end": 1034.72, "text": " select one interesting one maybe this this one right here you can see on the"}, {"start": 1034.72, "end": 1038.44, "text": " left this is the channel optimizing optimization on the right this is the"}, {"start": 1038.44, "end": 1043.0, "text": " neuron optimization and here you get the data set examples that are"}, {"start": 1043.0, "end": 1053.44, "text": " most activated that mostly activate this particular channel or neuron so sorry"}, {"start": 1053.44, "end": 1060.08, "text": " this particular yeah channel so you can see this is pretty similar to the"}, {"start": 1060.08, "end": 1066.88, "text": " thing I drew where except for it being a cat it's some sort of a fox dog thing"}, {"start": 1066.88, "end": 1075.0, "text": " right and and you can explore the neural network in this fashion so you can go"}, {"start": 1075.0, "end": 1084.92, "text": " through the layer here and look at that you need this seems to be whiskers"}, {"start": 1084.92, "end": 1094.44, "text": " classifier and lo and behold things with whiskers will activate the neuron and"}, {"start": 1094.44, "end": 1098.3200000000002, "text": " as you go up the layer and this is the the cool thing right so we're right"}, {"start": 1098.3200000000002, "end": 1103.4, "text": " now we're here in this layer for as you go up you will see more and more"}, {"start": 1103.4, "end": 1112.92, "text": " intricate patterns of activations I I could play around this for very very long"}, {"start": 1112.92, "end": 1119.88, "text": " time but I won't I won't waste your time too much they there is a disk sorry"}, {"start": 1119.88, "end": 1124.88, "text": " there is a a slack workspace where people discuss interesting patterns"}, {"start": 1124.88, "end": 1138.5200000000002, "text": " what is this okay yes this is a some sort of temple temple"}, {"start": 1138.5200000000002, "end": 1145.2800000000002, "text": " constructor very cool there is a slack workspace where people discuss"}, {"start": 1145.28, "end": 1151.84, "text": " interesting things for example they discuss how the car detector that you see"}, {"start": 1151.84, "end": 1156.32, "text": " right here is one of the units there is literally endless units to look at"}, {"start": 1156.32, "end": 1162.08, "text": " that text cars can be clearly seen to be 
built from lower-level features"}, {"start": 1162.08, "end": 1169.36, "text": " such as this wheel detector you see this wheel detector here is the unit 3 3"}, {"start": 1169.36, "end": 1180.6399999999999, "text": " 7 in the mixed four layer and this car hood detector right here is unit 2 3"}, {"start": 1180.6399999999999, "end": 1186.84, "text": " 7 also in the in one of the layer fours right so these are both from layer"}, {"start": 1186.84, "end": 1193.1999999999998, "text": " 4 and then the car detector I actually look this up is in layer 4 as well"}, {"start": 1193.2, "end": 1200.72, "text": " but let's check it out this is in layer 4b this is in layer 4b and this is in"}, {"start": 1200.72, "end": 1211.3600000000001, "text": " layer 4c so you see I this was a risk the car detector is built from lower"}, {"start": 1211.3600000000001, "end": 1216.8, "text": " level features of car hood and car wheel right the car wheel right here"}, {"start": 1216.8, "end": 1224.04, "text": " detects wheels and the car hood detector detects hoods and then the car"}, {"start": 1224.04, "end": 1230.56, "text": " detector detects cars so there are very like I really invite you to go look at"}, {"start": 1230.56, "end": 1237.96, "text": " it check out what people find and explore these models all of this is based on"}, {"start": 1237.96, "end": 1243.28, "text": " this lucid library right here also invite you to check that out where you can"}, {"start": 1243.28, "end": 1251.44, "text": " perform such optimizations yourself I will link to that and with that bye bye"}] |
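The segments above walk through activation maximization: freeze the classifier's weights, start from a noise image, and optimize the image itself so that a chosen class logit, neuron, or channel fires as strongly as possible. Below is a minimal PyTorch sketch of that idea against torchvision's GoogLeNet (an InceptionV1-style network); the layer choice, channel index, step count, and the small L2 penalty standing in for the stronger regularizers the article discusses are illustrative assumptions, not the Lucid library's actual settings.

```python
import torch
import torchvision.models as models

# Freeze an InceptionV1-style classifier; only the input image will be optimized.
model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate "mixed" block via a forward hook.
activations = {}
model.inception4b.register_forward_hook(
    lambda module, inputs, output: activations.update(target=output))

img = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from random noise
opt = torch.optim.Adam([img], lr=0.05)
channel = 237                                            # illustrative channel index

for step in range(256):
    opt.zero_grad()
    model(img)
    # Gradient ascent on the mean activation of the chosen channel; the small L2
    # term is a crude stand-in for the stronger regularization the article says
    # is needed to avoid high-frequency, adversarial-looking results.
    loss = -activations["target"][0, channel].mean() + 1e-4 * img.pow(2).sum()
    loss.backward()
    opt.step()
```

The diversity term mentioned in the transcript would amount to optimizing a small batch of such images jointly and adding a penalty that pushes the batch members apart from each other.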
Yannic Kilcher | https://www.youtube.com/watch?v=-h1KB8ps11A | Datasets for Data-Driven Reinforcement Learning | Offline Reinforcement Learning has recently come into focus in domains where classic on-policy RL algorithms are infeasible to train, such as safety-critical tasks or learning from expert demonstrations. This paper presents an extensive benchmark for evaluating offline RL algorithms in a variety of settings.
Paper: https://arxiv.org/abs/2004.07219
Code: https://github.com/rail-berkeley/offline_rl
Abstract:
The offline reinforcement learning (RL) problem, also referred to as batch RL, refers to the setting where a policy must be learned from a dataset of previously collected data, without additional online data collection. In supervised learning, large datasets and complex deep neural networks have fueled impressive progress, but in contrast, conventional RL algorithms must collect large amounts of on-policy data and have had little success leveraging previously collected datasets. As a result, existing RL benchmarks are not well-suited for the offline setting, making progress in this area difficult to measure. To design a benchmark tailored to offline RL, we start by outlining key properties of datasets relevant to applications of offline RL. Based on these properties, we design a set of benchmark tasks and datasets that evaluate offline RL algorithms under these conditions. Examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multi-objective datasets, where an agent can perform different tasks in the same environment, and datasets consisting of a heterogeneous mix of high-quality and low-quality trajectories. By designing the benchmark tasks and datasets to reflect properties of real-world offline RL problems, our benchmark will focus research effort on methods that drive substantial improvements not just on simulated benchmarks, but ultimately on the kinds of real-world problems where offline RL will have the largest impact.
Authors: Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, Sergey Levine
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at data sets for data driven reinforcement learning by Justin Fuh, a Viral Kumar, Ophir Nachum, George Tucker, and Sergei Levine. So this is a, what you would call, a data set paper or a benchmark paper. And the main point or the main area of the paper is, what's called offline reinforcement learning. So offline reinforcement learning, usually in reinforcement learning, you have this task, right? You have the agent and you have the environment. And the agent gets some sort of observation and has to come up with an action in response to that observation. And then it gets back a reward and another observation. And again, has to come up with an action. And the goal is to maximize the rewards over time that the agent gets while interacting with the environment. So usually this is organized in what are called episodes, which basically means if you have some sort of environment, right? And here is the agent. And here is the goal. Right? The goal is a inverted triangle. And there are a bunch of walls right here. Right? So it looks kind of a maze that the agent has to navigate. Then one episode is could be the agent moving around until it either finds the target or hits a wall or just kind of goes around and around. And then at some point, you say, all right, that's enough over game over. And usually in reinforcement learning, you perform many of these episodes and then you learn from them. So you perform episodes and each episode gets into usually some sort of replay buffer. Right? Let's call this replay buffer. And you do this many times and at the same time that you're doing this, you're using the things that you stored here in order to learn. Right? So the agent learns from these things. Right? So it acts with the environment in this loop in this fashion. Then once it has done an episode, it puts it into the replay buffer. And then it learns from the actions it has performed. This is what is usually called online reinforcement learning. Right? So this loop is online. Online means because the agent learns from its own actions. Right? Now in contrast to this, there is offline reinforcement learning. So in offline reinforcement learning, the agent has to learn from someone else's actions. Right? So this connection here is severed. Instead, you have other agents. Let's call these agent one, agent two, multiple agents, agent three, they all have their own interaction with the environment. Right? Environment, environment, interactions. And they feed their experience into, they perform these episodes. They feed their experience into the replay buffer. And then the agent just has to learn from that. So whatever happened here, this was previous, right? And now the agent has to learn how to maximize its reward just from the experience that is in the replay buffer from these other agents. This is what's called offline reinforcement learning. It means the agent learns from someone else's actions. Um, usually the, the power of reinforcement learning, of course, comes from the fact that you learn from your own actions. It means that, for example, if you already have some successful trajectories here, right? You found the target. You can try to replicate that because you know which actions you performed. And if you don't, you know, change anything, you're probably going to find the target again just by randomness. All right? Because you've done it already once. And so on. 
So you, you kind of know all the intrinsic of your own algorithm that led you to reach the target. Um, now this is an entirely different case with all of these other agents. You have no clue how they were acting, why they were acting. Right? You just know, okay, they did a series of actions and that gave them some kind of reward. And you have no idea what their reasoning was or anything. All you really can learn from is their sequence of actions. Now why is that problematic? Right? So if all of the agents, for example, if this is, if this is a, an actual platform and this really, right, steep here, where this is, uh, all of here is, is really steep cliffs, right? And you, you can actually fall off, but the agents, they're, they're humans, right? So they don't want to fall off. So what they're going to do is they're just going to take steps that are maybe like this or maybe like this, but they're, they're humans. They're smart. They're never, they're never going to, they're never going to fall off here, right? Why is this a problem if you're not trying to learn from this experience and your policy by some chance because you might have some entropy in there or something, you do know what happens if you make a move like this and you also know what happens if you make a move like this, right? Already two humans have done these moves, but what happens if you make a move like this? You just don't know, right? In classic reinforcement learning, you would get a negative reward and you could learn from that to not do this action anymore, but in, in this case, you simply don't have any data to, uh, to, to tell you what happens when you go off there. So you see the, there's a, there's a problem if you are not able to learn from your own experience, but you have to learn from something or someone else's experience. The distribution of experience that you have available to you might be on like not, not fully specific, specific of the, of the environment. It might be very different from what you would do and it might be very not conducive to what you want to do with it. So the task of offline reinforcement learning is harder than online reinforcement learning, but it also has, uh, many, many applications. Sometimes it's just not possible to do online reinforcement learning. When, for example, in medical field, right? Think of, think of the medical field where you want a robot to perform a surgery. You can't just do reinforcement learning, uh, in with our online techniques because they're just gonna try a bunch of things and see what works and you don't, you don't, like maybe you want that. I don't want that. Uh, so necessarily you're going to be left with let's have this robot learn from human experts, right? So that's a task for offline reinforcement learning. Um, there are many more tasks, for example, if you think of, uh, search engine, you will have many, many, many logs from human searching things and you simply store them. You simply have them in a buffer. Now you want to maybe, uh, train a reinforcement learning agent that, I don't know, serves the best possible ads or something like this. Um, you want to do this in a way that you can use all of that data, even though that data wasn't collected by that particular agent, right? So there's the crucial difference to supervised learning again is that you have this, this interactive structure, right? This multi-step interactive structure because in a supervised learning, you also have, you also have this buffer here, right? 
In supervised learning, you simply have your labeled data set. Um, but the difference is in supervised learning, you always know what the right action is currently, right? Because you have the labels in offline reinforcement learning. You don't know you, right? You might be here and there are three actions available, right? And, uh, if you, all you know is that the, the demonstrator a, these actors here, one of them has done this and then this and then this and then got a two. You have, you have no clue what happens if you do this and then this and then this, right? All you know is that this action here might eventually lead to it too, right? And it also can't try it out because, uh, you can't try out this path because you don't get a reward here. You have to find, and this is the, the task here, you'll have to find some other example or stitch together. They make a good example here. So this paper basically proposes a benchmark for offline RL algorithms. So what they do is they have a bunch of data sets, right? They have a bunch of these replay buffers around for different tasks. They are a collection of this that they collected with various techniques. So there is human demonstration, there is other agents and so on. They have that and you're supposed to take one of them, learn something, learn an agent and then evaluate it on an environment, right? And they propose which ones are suitable for this. They give you the data and they give you the environment to evaluate it on. In the end, you'll get a score and you can compare your offline RL algorithm with others. They also provide some benchmark implementations for algorithms that already do this and they show that they don't really work well. So one of the, one of the tasks is this maze here. In this maze, you will, the task is you are somewhere, let's say here and you need to go somewhere, let's say here and you need to find your way, right? And the demonstrations you have, the data in your replay buffer is such that this is the same task but never the same start and endpoints like you are tasked to. So you might have one in your replay buffer, you might have one trajectory, one episode that went like this from one to two, right? And you'll be able to see the reward of that. And you might have one trajectory that was from two to three like this, right? So both of these things actually give you really higher reward. So if you were an agent, right? And you had to learn and now the task is please go from one to three. What you could do is you could simply say, ah, I know, I know the green thing gave a pretty high reward and the yellow thing gave a pretty high reward. So I know the green thing started at one and I know the yellow thing ended at three and I know they both have this common location. So what I might do just is I might go to that common location and then go on on the different path, right? So you have to somehow stitch together experience from other agents in order to make your task work. Now this is a very explicit example. Of course, what we want to do is we want to do this in a more implicit deep learning way, ideally, and not manually stitch together other trajectories. Though I'm pretty sure that would not, that would not be so so dumb, right? I'm pretty sure there's a lot of data augmentation you could do during training, simply by stitching together other trajectories, right? 
So from this trajectory, you could actually, not only could you make other gold conditioned ways, for example, from here to here or from here to here, you could make from here to here anywhere where you have shared points, you could go, you could train a policy that goes there and then goes further or something like this. I'm pretty sure there's already an algorithm that does things like this, but I'm just thinking allowed here. All right, so this is one of the tasks and you see that the that you'll have to learn a policy to go as fast as possible from any point to any other point. And you're all you're given is a database of experience that already exists from some other agent, but never will probably, never the exact route that you need to learn right now. All right, so the goal is how fast or how efficiently can you do this? This is one task in this data set. The next task is very similar, is this grid world here where there is this red square, a red triangle, that's your agent, and then there is the green square, that's your goal or vice versa. And so you're basically tasked to not hit the walls here and go about your your way finding the target. There are more elaborate things like this mojo co environment here or the Ant Maze where you have this little ant with, you know, the spider legs. So this is no longer you can just move in either direction, you have to actually control the legs. And there's also this arm, this robotic arm. So you see there is a white diversity of tasks. And also there is a white, white diversity of how the replay buffer was constructed. So in some cases, the replay buffer is actually constructed by a human performing in this environment. So in this hand, in this hand manipulation task, you'll have demonstrations from humans. You see, it's not particularly many samples here. It's a 5000 samples, which I guess are, is a chopped up version of I'm not really sure how the human things were constructed, but you can clearly guess that the degrees of freedom that you have in a robotic hand is much, much higher than you could learn just from these 5000 samples. If you were to, you know, an online RL algorithm that just does random exploration will need much more than these 5000 samples. And the 5000 samples won't be IID distributed with all the degrees of freedom. It will just be here's what a human does, right? And so you can think of algorithms like inverse reinforcement learning or something like this. But here in inverse reinforcement learning, usually you assume that the expert, the expert is kind of trying to achieve the same reward as you do. But this is not necessarily the case here. You have a given reward structure, but you are tasked to simply learn from these demonstrations. You can see, it's also possible that there is, this is constructed by a policy. And that usually means that they, so either it's constructed by, let's say, a reinforcement learning algorithm that was trained in an online fashion, but maybe not as well. But also, I think they have behavior cloning policy that they got from human demonstration, I think, so that there are many ways. Also, sometimes you have a planner, which is, can you imagine? It's an algorithm that wasn't machine learned. So I know almost unthinkable, but in these kind of mazes, you can actually do planning algorithms that can sort of, so I know this is crazy and crazy talk, niche topic, but there exist things like A-star search, where you can construct the kind of shortest path through these mazes and things like this. 
So, yeah, that's, I know, I know, that is very niche, but you can construct policies like this, and then you can use those as your replay buffer filling, and you can already see that this also will be a massively different distribution of data than you would get with an online RL algorithm. So, in conclusion, they do test other algorithms on this. In conclusion, they say that most offline RL algorithms nowadays, they don't work well on these data sets. The only data sets where they do work well is where the replay buffer was generated by some sort of, like here, by some sort of policy, by some sort of reinforcement learning policy. So what they would do is they would train an online policy, and the experience generated by that online policy, while it learns, will make up the replay buffer. And if you use that replay buffer for offline learning, then they say it tends to work okay. But if you have other methods of collecting the data that are very different from this offline, sorry, from a reinforcement learning collection approach, then it tends not to work as well. All right, so if you are interested in offline RL, please check out this paper. All their code is available right here. Note that the link in the paper doesn't seem to work. The true link is here. I'll also put it in the description. And with that, I wish you a good day. Bye. | [{"start": 0.0, "end": 6.44, "text": " Hi there. Today we're looking at data sets for data driven reinforcement learning by Justin Fuh,"}, {"start": 6.44, "end": 15.120000000000001, "text": " a Viral Kumar, Ophir Nachum, George Tucker, and Sergei Levine. So this is a, what you would call,"}, {"start": 15.120000000000001, "end": 22.16, "text": " a data set paper or a benchmark paper. And the main point or the main area of the paper is,"}, {"start": 22.16, "end": 30.72, "text": " what's called offline reinforcement learning. So offline reinforcement learning, usually in reinforcement"}, {"start": 30.72, "end": 37.519999999999996, "text": " learning, you have this task, right? You have the agent and you have the environment. And the agent"}, {"start": 37.519999999999996, "end": 44.400000000000006, "text": " gets some sort of observation and has to come up with an action in response to that observation."}, {"start": 44.4, "end": 52.48, "text": " And then it gets back a reward and another observation. And again, has to come up with an action."}, {"start": 52.48, "end": 60.879999999999995, "text": " And the goal is to maximize the rewards over time that the agent gets while interacting with the environment."}, {"start": 62.16, "end": 68.16, "text": " So usually this is organized in what are called episodes, which basically means if you have"}, {"start": 68.16, "end": 76.72, "text": " some sort of environment, right? And here is the agent. And here is the goal. Right? The goal is a"}, {"start": 77.6, "end": 85.44, "text": " inverted triangle. And there are a bunch of walls right here. Right? So it looks kind of a maze"}, {"start": 85.44, "end": 95.03999999999999, "text": " that the agent has to navigate. Then one episode is could be the agent moving around until it either"}, {"start": 95.04, "end": 103.04, "text": " finds the target or hits a wall or just kind of goes around and around. And then at some point,"}, {"start": 103.04, "end": 111.84, "text": " you say, all right, that's enough over game over. And usually in reinforcement learning, you perform"}, {"start": 111.84, "end": 117.76, "text": " many of these episodes and then you learn from them. 
So you perform episodes and each episode"}, {"start": 117.76, "end": 127.2, "text": " gets into usually some sort of replay buffer. Right? Let's call this replay buffer. And"}, {"start": 129.04, "end": 135.36, "text": " you do this many times and at the same time that you're doing this, you're using the things that"}, {"start": 135.36, "end": 143.52, "text": " you stored here in order to learn. Right? So the agent learns from these things. Right? So it acts"}, {"start": 143.52, "end": 150.56, "text": " with the environment in this loop in this fashion. Then once it has done an episode, it puts it"}, {"start": 150.56, "end": 157.44, "text": " into the replay buffer. And then it learns from the actions it has performed. This is what is"}, {"start": 157.44, "end": 162.64000000000001, "text": " usually called online reinforcement learning. Right? So this loop is online."}, {"start": 162.64, "end": 172.64, "text": " Online means because the agent learns from its own actions. Right? Now in contrast to this,"}, {"start": 172.64, "end": 179.6, "text": " there is offline reinforcement learning. So in offline reinforcement learning,"}, {"start": 181.6, "end": 191.44, "text": " the agent has to learn from someone else's actions. Right? So this connection here is severed."}, {"start": 191.44, "end": 202.56, "text": " Instead, you have other agents. Let's call these agent one, agent two, multiple agents, agent three,"}, {"start": 202.56, "end": 209.44, "text": " they all have their own interaction with the environment. Right? Environment, environment,"}, {"start": 209.44, "end": 220.48, "text": " interactions. And they feed their experience into, they perform these episodes. They feed their"}, {"start": 220.48, "end": 226.79999999999998, "text": " experience into the replay buffer. And then the agent just has to learn from that. So whatever happened"}, {"start": 226.79999999999998, "end": 236.0, "text": " here, this was previous, right? And now the agent has to learn how to maximize its reward just"}, {"start": 236.0, "end": 241.92, "text": " from the experience that is in the replay buffer from these other agents. This is what's called"}, {"start": 241.92, "end": 247.35999999999999, "text": " offline reinforcement learning. It means the agent learns from someone else's actions."}, {"start": 247.36, "end": 253.28, "text": " Um, usually the, the power of reinforcement learning, of course, comes from the fact that you"}, {"start": 253.28, "end": 261.84000000000003, "text": " learn from your own actions. It means that, for example, if you already have some successful"}, {"start": 261.84000000000003, "end": 268.08000000000004, "text": " trajectories here, right? You found the target. You can try to replicate that because you know"}, {"start": 268.64, "end": 274.16, "text": " which actions you performed. And if you don't, you know, change anything, you're probably going"}, {"start": 274.16, "end": 280.24, "text": " to find the target again just by randomness. All right? Because you've done it already once."}, {"start": 280.24, "end": 285.84000000000003, "text": " And so on. So you, you kind of know all the intrinsic of your own algorithm that led you to reach"}, {"start": 285.84000000000003, "end": 292.64000000000004, "text": " the target. Um, now this is an entirely different case with all of these other agents. You have no clue"}, {"start": 292.64000000000004, "end": 299.12, "text": " how they were acting, why they were acting. Right? 
You just know, okay, they did a series of actions"}, {"start": 299.12, "end": 306.8, "text": " and that gave them some kind of reward. And you have no idea what their reasoning was or anything."}, {"start": 306.8, "end": 313.04, "text": " All you really can learn from is their sequence of actions. Now why is that problematic? Right?"}, {"start": 313.04, "end": 321.36, "text": " So if all of the agents, for example, if this is, if this is a, an actual platform and this"}, {"start": 321.36, "end": 329.12, "text": " really, right, steep here, where this is, uh, all of here is, is really steep cliffs, right? And you,"}, {"start": 329.12, "end": 334.72, "text": " you can actually fall off, but the agents, they're, they're humans, right? So they don't want to fall"}, {"start": 334.72, "end": 341.36, "text": " off. So what they're going to do is they're just going to take steps that are maybe like this or"}, {"start": 341.36, "end": 346.72, "text": " maybe like this, but they're, they're humans. They're smart. They're never, they're never going to,"}, {"start": 346.72, "end": 352.64000000000004, "text": " they're never going to fall off here, right? Why is this a problem if you're not trying to learn"}, {"start": 352.64000000000004, "end": 361.12, "text": " from this experience and your policy by some chance because you might have some entropy in there"}, {"start": 361.12, "end": 367.84000000000003, "text": " or something, you do know what happens if you make a move like this and you also know what happens"}, {"start": 367.84000000000003, "end": 372.8, "text": " if you make a move like this, right? Already two humans have done these moves, but what happens if"}, {"start": 372.8, "end": 378.32, "text": " you make a move like this? You just don't know, right? In classic reinforcement learning, you would"}, {"start": 378.32, "end": 385.2, "text": " get a negative reward and you could learn from that to not do this action anymore, but in, in this"}, {"start": 385.2, "end": 392.56, "text": " case, you simply don't have any data to, uh, to, to tell you what happens when you go off there. So"}, {"start": 393.28000000000003, "end": 398.16, "text": " you see the, there's a, there's a problem if you are not able to learn from your own experience,"}, {"start": 398.16, "end": 405.12, "text": " but you have to learn from something or someone else's experience. The distribution of experience"}, {"start": 405.12, "end": 414.72, "text": " that you have available to you might be on like not, not fully specific, specific of the,"}, {"start": 414.72, "end": 420.48, "text": " of the environment. It might be very different from what you would do and it might be very not"}, {"start": 420.48, "end": 426.96000000000004, "text": " conducive to what you want to do with it. So the task of offline reinforcement learning is harder"}, {"start": 426.96, "end": 435.03999999999996, "text": " than online reinforcement learning, but it also has, uh, many, many applications. Sometimes it's"}, {"start": 435.03999999999996, "end": 443.2, "text": " just not possible to do online reinforcement learning. 
When, for example, in medical field, right?"}, {"start": 444.0, "end": 449.67999999999995, "text": " Think of, think of the medical field where you want a robot to perform a surgery."}, {"start": 449.68, "end": 456.40000000000003, "text": " You can't just do reinforcement learning, uh, in with our online techniques because they're just"}, {"start": 456.40000000000003, "end": 461.92, "text": " gonna try a bunch of things and see what works and you don't, you don't, like maybe you want that."}, {"start": 461.92, "end": 468.64, "text": " I don't want that. Uh, so necessarily you're going to be left with let's have this robot"}, {"start": 468.64, "end": 474.64, "text": " learn from human experts, right? So that's a task for offline reinforcement learning."}, {"start": 474.64, "end": 480.56, "text": " Um, there are many more tasks, for example, if you think of, uh, search engine, you will have"}, {"start": 480.56, "end": 487.91999999999996, "text": " many, many, many logs from human searching things and you simply store them. You simply have"}, {"start": 487.91999999999996, "end": 493.28, "text": " them in a buffer. Now you want to maybe, uh, train a reinforcement learning agent that,"}, {"start": 493.84, "end": 499.36, "text": " I don't know, serves the best possible ads or something like this. Um, you want to do this in a"}, {"start": 499.36, "end": 506.8, "text": " way that you can use all of that data, even though that data wasn't collected by that particular"}, {"start": 506.8, "end": 512.5600000000001, "text": " agent, right? So there's the crucial difference to supervised learning again is that you have this,"}, {"start": 512.5600000000001, "end": 519.2, "text": " this interactive structure, right? This multi-step interactive structure because in a supervised"}, {"start": 519.2, "end": 524.16, "text": " learning, you also have, you also have this buffer here, right? In supervised learning, you simply"}, {"start": 524.16, "end": 530.9599999999999, "text": " have your labeled data set. Um, but the difference is in supervised learning, you always know what"}, {"start": 530.9599999999999, "end": 537.12, "text": " the right action is currently, right? Because you have the labels in offline reinforcement learning."}, {"start": 537.12, "end": 544.7199999999999, "text": " You don't know you, right? You might be here and there are three actions available, right? And, uh,"}, {"start": 544.72, "end": 554.96, "text": " if you, all you know is that the, the demonstrator a, these actors here, one of them has done this"}, {"start": 554.96, "end": 561.76, "text": " and then this and then this and then got a two. You have, you have no clue what happens if you do"}, {"start": 563.0400000000001, "end": 570.88, "text": " this and then this and then this, right? All you know is that this action here might eventually"}, {"start": 570.88, "end": 578.32, "text": " lead to it too, right? And it also can't try it out because, uh, you can't try out this path because"}, {"start": 578.32, "end": 583.92, "text": " you don't get a reward here. You have to find, and this is the, the task here, you'll have to find"}, {"start": 584.88, "end": 591.4399999999999, "text": " some other example or stitch together. They make a good example here. So this paper basically"}, {"start": 591.4399999999999, "end": 599.04, "text": " proposes a benchmark for offline RL algorithms. So what they do is they have a bunch of data sets,"}, {"start": 599.04, "end": 605.92, "text": " right? 
They have a bunch of these replay buffers around for different tasks. They are a collection"}, {"start": 605.92, "end": 611.4399999999999, "text": " of this that they collected with various techniques. So there is human demonstration,"}, {"start": 611.4399999999999, "end": 618.56, "text": " there is other agents and so on. They have that and you're supposed to take one of them,"}, {"start": 619.36, "end": 628.0799999999999, "text": " learn something, learn an agent and then evaluate it on an environment, right? And they propose"}, {"start": 628.08, "end": 636.64, "text": " which ones are suitable for this. They give you the data and they give you the environment to"}, {"start": 636.64, "end": 642.32, "text": " evaluate it on. In the end, you'll get a score and you can compare your offline RL algorithm with"}, {"start": 642.32, "end": 649.6800000000001, "text": " others. They also provide some benchmark implementations for algorithms that already do this and"}, {"start": 649.68, "end": 659.8399999999999, "text": " they show that they don't really work well. So one of the, one of the tasks is this maze here."}, {"start": 661.3599999999999, "end": 668.4, "text": " In this maze, you will, the task is you are somewhere, let's say here and you need to go somewhere,"}, {"start": 668.4, "end": 675.5999999999999, "text": " let's say here and you need to find your way, right? And the demonstrations you have, the data in"}, {"start": 675.6, "end": 683.76, "text": " your replay buffer is such that this is the same task but never the same start and endpoints like"}, {"start": 683.76, "end": 691.0400000000001, "text": " you are tasked to. So you might have one in your replay buffer, you might have one trajectory,"}, {"start": 691.0400000000001, "end": 698.4, "text": " one episode that went like this from one to two, right? And you'll be able to see the reward of"}, {"start": 698.4, "end": 707.36, "text": " that. And you might have one trajectory that was from two to three like this, right? So both of"}, {"start": 707.36, "end": 714.56, "text": " these things actually give you really higher reward. So if you were an agent, right? And you had to"}, {"start": 714.56, "end": 722.3199999999999, "text": " learn and now the task is please go from one to three. What you could do is you could simply say,"}, {"start": 722.32, "end": 728.24, "text": " ah, I know, I know the green thing gave a pretty high reward and the yellow thing gave a pretty high"}, {"start": 728.24, "end": 734.48, "text": " reward. So I know the green thing started at one and I know the yellow thing ended at three and I know"}, {"start": 735.2, "end": 743.9200000000001, "text": " they both have this common location. So what I might do just is I might go to that common location"}, {"start": 744.8000000000001, "end": 752.08, "text": " and then go on on the different path, right? So you have to somehow stitch together experience"}, {"start": 752.08, "end": 758.0, "text": " from other agents in order to make your task work. Now this is a very explicit example. Of course,"}, {"start": 758.0, "end": 765.0400000000001, "text": " what we want to do is we want to do this in a more implicit deep learning way, ideally, and not"}, {"start": 765.5200000000001, "end": 772.0, "text": " manually stitch together other trajectories. Though I'm pretty sure that would not, that would not be so"}, {"start": 773.36, "end": 780.24, "text": " so dumb, right? 
I'm pretty sure there's a lot of data augmentation you could do during training,"}, {"start": 780.24, "end": 788.64, "text": " simply by stitching together other trajectories, right? So from this trajectory, you could actually,"}, {"start": 788.64, "end": 794.8, "text": " not only could you make other gold conditioned ways, for example, from here to here or from here to here,"}, {"start": 794.8, "end": 800.88, "text": " you could make from here to here anywhere where you have shared points, you could go,"}, {"start": 802.96, "end": 807.52, "text": " you could train a policy that goes there and then goes further or something like this. I'm"}, {"start": 807.52, "end": 812.8, "text": " pretty sure there's already an algorithm that does things like this, but I'm just thinking allowed"}, {"start": 812.8, "end": 821.52, "text": " here. All right, so this is one of the tasks and you see that the that you'll have to learn"}, {"start": 821.52, "end": 828.0799999999999, "text": " a policy to go as fast as possible from any point to any other point. And you're all you're given"}, {"start": 828.0799999999999, "end": 836.0799999999999, "text": " is a database of experience that already exists from some other agent, but never will probably,"}, {"start": 836.08, "end": 844.96, "text": " never the exact route that you need to learn right now. All right, so the goal is how fast or how"}, {"start": 844.96, "end": 852.08, "text": " efficiently can you do this? This is one task in this data set. The next task is very similar,"}, {"start": 852.08, "end": 858.88, "text": " is this grid world here where there is this red square, a red triangle, that's your agent,"}, {"start": 858.88, "end": 866.8, "text": " and then there is the green square, that's your goal or vice versa. And so you're basically tasked"}, {"start": 866.8, "end": 876.8, "text": " to not hit the walls here and go about your your way finding the target. There are more elaborate"}, {"start": 876.8, "end": 884.8, "text": " things like this mojo co environment here or the Ant Maze where you have this little ant with,"}, {"start": 884.8, "end": 889.52, "text": " you know, the spider legs. So this is no longer you can just move in either direction, you have to"}, {"start": 889.52, "end": 900.7199999999999, "text": " actually control the legs. And there's also this arm, this robotic arm. So you see there is a white"}, {"start": 900.7199999999999, "end": 910.7199999999999, "text": " diversity of tasks. And also there is a white, white diversity of how the replay buffer was constructed."}, {"start": 910.72, "end": 918.96, "text": " So in some cases, the replay buffer is actually constructed by a human performing in this environment."}, {"start": 918.96, "end": 926.24, "text": " So in this hand, in this hand manipulation task, you'll have demonstrations from humans. You see,"}, {"start": 926.24, "end": 935.12, "text": " it's not particularly many samples here. It's a 5000 samples, which I guess are, is a chopped up"}, {"start": 935.12, "end": 943.68, "text": " version of I'm not really sure how the human things were constructed, but you can clearly guess"}, {"start": 943.68, "end": 950.24, "text": " that the degrees of freedom that you have in a robotic hand is much, much higher than you could"}, {"start": 950.24, "end": 956.16, "text": " learn just from these 5000 samples. If you were to, you know, an online RL algorithm that just"}, {"start": 956.16, "end": 962.8, "text": " does random exploration will need much more than these 5000 samples. 
And the 5000 samples won't"}, {"start": 962.8, "end": 968.64, "text": " be IID distributed with all the degrees of freedom. It will just be here's what a human does, right?"}, {"start": 969.3599999999999, "end": 977.1999999999999, "text": " And so you can think of algorithms like inverse reinforcement learning or something like this."}, {"start": 977.1999999999999, "end": 982.64, "text": " But here in inverse reinforcement learning, usually you assume that the expert,"}, {"start": 984.88, "end": 992.24, "text": " the expert is kind of trying to achieve the same reward as you do. But this is not"}, {"start": 992.24, "end": 998.48, "text": " necessarily the case here. You have a given reward structure, but you are"}, {"start": 999.2, "end": 1008.88, "text": " tasked to simply learn from these demonstrations. You can see, it's also possible that there is,"}, {"start": 1008.88, "end": 1018.16, "text": " this is constructed by a policy. And that usually means that they, so either it's constructed"}, {"start": 1018.16, "end": 1023.76, "text": " by, let's say, a reinforcement learning algorithm that was trained in an online fashion,"}, {"start": 1023.76, "end": 1030.6399999999999, "text": " but maybe not as well. But also, I think they have behavior cloning policy that they got from"}, {"start": 1030.6399999999999, "end": 1036.56, "text": " human demonstration, I think, so that there are many ways. Also, sometimes you have a planner,"}, {"start": 1036.56, "end": 1046.08, "text": " which is, can you imagine? It's an algorithm that wasn't machine learned. So I know almost"}, {"start": 1046.08, "end": 1057.28, "text": " unthinkable, but in these kind of mazes, you can actually do planning algorithms that can sort of,"}, {"start": 1057.28, "end": 1065.28, "text": " so I know this is crazy and crazy talk, niche topic, but there exist things like A-star search,"}, {"start": 1065.28, "end": 1073.36, "text": " where you can construct the kind of shortest path through these mazes and things like this."}, {"start": 1073.36, "end": 1085.52, "text": " So, yeah, that's, I know, I know, that is very niche, but you can construct policies like this,"}, {"start": 1085.52, "end": 1091.9199999999998, "text": " and then you can use those as your replay buffer filling, and you can already see that this also"}, {"start": 1091.9199999999998, "end": 1098.32, "text": " will be a massively different distribution of data than you would get with an online RL algorithm."}, {"start": 1098.32, "end": 1111.28, "text": " So, in conclusion, they do test other algorithms on this. In conclusion, they say that most"}, {"start": 1111.28, "end": 1120.8, "text": " offline RL algorithms nowadays, they don't work well on these data sets. The only data sets where"}, {"start": 1120.8, "end": 1131.68, "text": " they do work well is where the replay buffer was generated by some sort of, like here, by some"}, {"start": 1131.68, "end": 1136.1599999999999, "text": " sort of policy, by some sort of reinforcement learning policy. So what they would do is they would"}, {"start": 1136.1599999999999, "end": 1144.32, "text": " train an online policy, and the experience generated by that online policy, while it learns,"}, {"start": 1144.32, "end": 1151.2, "text": " will make up the replay buffer. And if you use that replay buffer for offline learning,"}, {"start": 1151.2, "end": 1158.1599999999999, "text": " then they say it tends to work okay. 
But if you have other methods of collecting the data"}, {"start": 1159.12, "end": 1167.4399999999998, "text": " that are very different from this offline, sorry, from a reinforcement learning collection"}, {"start": 1167.44, "end": 1176.0800000000002, "text": " approach, then it tends not to work as well. All right, so if you are interested in offline RL,"}, {"start": 1176.0800000000002, "end": 1182.16, "text": " please check out this paper. All their code is available right here. Note that the link in the"}, {"start": 1182.16, "end": 1190.56, "text": " paper doesn't seem to work. The true link is here. I'll also put it in the description. And"}, {"start": 1190.56, "end": 1199.6799999999998, "text": " with that, I wish you a good day. Bye."}] |
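As an aside to the row above: the transcript floats the idea of augmenting a fixed offline dataset by stitching trajectories together and relabeling goals with states the trajectory actually passes through. A rough sketch of that idea, in the spirit of hindsight experience replay, is given below. It is illustrative only: the trajectory layout, the function name, and the sparse reward rule (reward 1 when the relabeled goal is reached on the next step) are assumptions, not part of the benchmark or the paper.

```python
import numpy as np

def relabel_goals(trajectory, num_aug=4, seed=0):
    """Hindsight-style relabeling: treat states reached later in a stored
    trajectory as if they had been the commanded goal, so a fixed offline
    dataset yields many more goal-conditioned training tuples.

    Assumes len(trajectory["states"]) == len(trajectory["actions"]) + 1.
    """
    rng = np.random.default_rng(seed)
    states, actions = trajectory["states"], trajectory["actions"]
    augmented = []
    for t in range(len(actions)):
        # sample future time steps of the same trajectory as substitute goals
        future = rng.integers(low=t + 1, high=len(states), size=num_aug)
        for g in future:
            augmented.append({
                "state": states[t],
                "action": actions[t],
                "goal": states[g],            # relabeled goal
                "reward": float(g == t + 1),  # reached the goal on this step?
                "next_state": states[t + 1],
            })
    return augmented
```

This only makes sense for the goal-reaching tasks in the suite (maze navigation and the like), where any visited state can serve as a goal; for tasks with a fixed extrinsic reward it would not apply.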
Yannic Kilcher | https://www.youtube.com/watch?v=eYgPJ_7BkEw | FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence | FixMatch is a simple, yet surprisingly effective approach to semi-supervised learning. It combines two previous methods in a clever way and achieves state-of-the-art in regimes with few and very few labeled examples.
Paper: https://arxiv.org/abs/2001.07685
Code: https://github.com/google-research/fixmatch
Abstract:
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are most important to FixMatch's success. We make our code available at this https URL.
Authors: Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi, today we're looking at FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence by Kihyuk Sohn, David Berthelot and others of Google Research. So this paper concerns semi-supervised learning. So what does semi-supervised learning mean? In semi-supervised learning, you have a data set of labeled samples. So you have this data set of x's and corresponding y labels. But this data set sometimes is very small. Now you have a much bigger data set of unlabeled examples, just x's with no labels. So you don't know what the labels of the unlabeled examples are. But what you would like to do is use this really large data set in order to help you with learning the association between the data points and the labels. So for example, in this case, you would have something like an image classification data set. And I'm going to take the example here of medical data. So you have pictures of lungs, let's draw a lung here, that is an ugly lung. You have pictures of lungs and whether or not they have a tumor in them. So medical data is very hard to get, especially labeled medical data. First of all, you need the data itself. Then you also need at least one, but ideally three radiologists to look at whether or not this is a good or a bad image and label it. So it's usually very expensive to collect that data. But you might have plenty of unlabeled data. You might just be able to go through some database and find anonymized, undiagnosed lung scans somewhere lying around. The same goes for other kinds of images: labeling images is pretty human intensive, but the internet contains a whole bunch of unlabeled images. So the task of semi-supervised learning is: how do you use this unlabeled data set in order to make your classification on the labeled data set easier? And FixMatch combines two approaches to this in a smart way, namely a consistency approach and a confidence approach. We'll jump right into the method. So basically what you want to do is say: the loss that I optimize consists of two parts, namely a supervised loss, which is your classic classification loss, plus an unsupervised loss. And then you have some sort of a trade-off parameter in front. Now your supervised loss here is just the cross entropy, let's call it H, between your predicted labels and the actual true labels. And the predicted labels can be, you know, kind of a distribution over labels. Now the magic of course is in the unsupervised loss, and this unsupervised loss is what's described here in this part. So the unsupervised loss is going to be this H between p and q, and we'll see what p and q are. For the unsupervised loss, you of course want to start with an unlabeled example, and then you have the same sample go into two different pipelines. In the first pipeline up here, all you do is the so-called weak augmentation. So we're dealing with images, so we have to talk about image augmentation. Image augmentation has long been used in supervised learning; it's kind of a cheat to give you more training data. So if you have an image, let's say of our famous cat, you can obtain more training data, for example, by random cropping. 
So you can random crop, let's say we just take this bottom right corner here and then we enlarge it to the original size, right, then it is still sort of a cat, but it's just a part of a cat, right. But usually that helps because you say, okay, my image data set is just pictures of animals, right, it's entirely conceivable that someone held the camera like this or like this, right. So technically in terms of generalizing to a test set, these both data points should be valid. So I'm just going to add both to my training data. So you can see how from one training data point, you can get many training data points just by doing this cropping. What you can also do is you can flip it left, right, you just in swap the pixels left, right. And usually these kind of so a cat that has a little dark spot here is still a cat when it has the little dark spot over there, right, but to your classifier, those are two different samples. So you can do many of those things and they have two kind of augmentations, they have what they call weekly augmented and strongly augmented, right. So in the weekly augmented pipeline, I think they just they crop and they shift and they rotate or something like this. You can see here this this horsey here, it is something like it's cropped here about then it is turned slightly to the left and then, yeah, I think that's it. So they crop, they rotate and then they also flip horizontally at random in like 50% of the time. So this is what's called weekly augmented, the goal here is just to kind of obtain a bit more training data. So you run this through your model, through your classification model, as you would a regular sample and you get a prediction. Now from your prediction, you can take the highest prediction here and that is going to be your pseudo label. So this is P of Y, this is your distribution that you estimate, right. So and this and this, if you just take the max, this is going to be your Y hat, right. And this is what they call a pseudo label. Sorry, you'll see why it is called a pseudo label. So the other pipeline here is the strong augmentation pipeline. Now in week augmentation, we just wanted to get some more training data in strong augmentation. Now the goal is to really screw up that picture to the point where it's still, you know, you could recognize in the same class, but you can see here the augmentations they go wild. So you play around with the color with the hue, you play around with the light intensity, right. With the contrast, you can do many, many things. You can see this, this image looks basically nothing like this image, but it you can still kind of recognize it as a, as a horse. But the strongly augmented data is much more distorted than the weekly augmented data. And that's the point. So also you send the strongly augmented data through the model. And again, you get a prediction, right. And now is the trick is you take the label from here and you, you take that as if it were the true label, right. You take that as if it were the true label. And you form a loss from this prediction being the model prediction as if this thing here that also comes from the model as if that was the true label, right. That's why it's called a pseudo label because it is a label that you produce from the model itself. Now, of course, if these were to be the same picture, it would be kind of pointless, right. That's why you see there needs to be a weekly and a strongly augmented pipe line. 
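To make the weak/strong split concrete, here is roughly what the two branches could look like with torchvision, for 32x32 images. This is a sketch, not the authors' exact pipeline: the paper's strong branch uses RandAugment or CTAugment followed by Cutout, and here RandomErasing stands in for Cutout; the crop size, padding, and magnitudes are placeholder choices.

```python
import torchvision.transforms as T

# Weak augmentation: only a flip and a small random shift.
weak_augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
    T.ToTensor(),
])

# Strong augmentation: the same base, plus heavy color/geometry distortions
# and an erased patch standing in for Cutout.
strong_augment = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4, padding_mode="reflect"),
    T.RandAugment(num_ops=2, magnitude=10),
    T.ToTensor(),
    T.RandomErasing(p=1.0, scale=(0.02, 0.25)),
])
```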
I am pretty sure if you want a more basic version of this, make this one just the clean image, so no augmentation, and make this one augmented. That's how you can think of it. The fact that there is weak and here strong augmentation, I think, is just your classic trick to get more training data. But in essence, you can think of it as: this here is the clean thing, you just want to produce a label, and then you want that an augmented version of the image has the same label. Now, think about it for a moment: what does this model learn? I think the important thing is always to remember that there are two components here. There is first the supervised loss. This is the important one, ultimately, because we have the true labels. And then second, there is the unsupervised loss, which is just an auxiliary loss that is supposed to kind of tune our model. So don't forget that this down here just concerns the unsupervised part of that loss. So if you think about what the model actually learns whenever you train it like this, it basically learns to revert this strong augmentation. It basically says, hey model, whenever I give you an image and I distort it heavily, I want the label to be the same. So the model at the end of the training will be able to map any strongly augmented picture to the same class as a weakly augmented picture if it comes from the same source. The model basically learns to ignore these kinds of augmentations. That's what this loss over here does. It basically says: these sorts of augmentations, these sorts of distortions of images, please ignore those, because I always want you to output the same label in the prediction here as if I had not distorted or only weakly distorted the image. So that's what you have to keep in mind: this loss is designed to make the model not distinguish between differently augmented versions of the same image. And interestingly, that really seems to help with the supervised loss. My hypothesis is that all these methods, what they're kind of trying to do, is to just tune the neural network to, let's say, the orders of magnitude of the input data and also to the kinds of augmentations that the humans come up with. And that's a very important point. So the augmentations, and here we said it's kind of a rotation and a crop, the kind of augmentation really seems to play a role. So this paper finds that on CIFAR-10, where the state of the art, I believe, is something like 96-97% accuracy, with just 250 labeled examples (the usual data set size is about 50,000) it goes to 94.93%. So almost 95% accuracy, with the state of the art being like 97%. This is incredible with just 250 labeled examples. Crazy, right. And with only four labels per class, it gets 88.6%. So that's just 40 images with labels. They get 88.6% accuracy compared to the 97% that you get with like 50,000 images. That is pretty, pretty cool, right. Simply by having all other images not labeled, but pseudo-labeled and consistency-regularized. So the two things that are combined by FixMatch, again, are consistency regularization, which basically means that the model should output similar predictions when fed perturbed versions of the same image. 
This they they're really forthcoming that they are not the ones who invented this. They just combine the consistency regularization with the pseudo labeling. Now the pseudo labeling, they have also not invented the pseudo labeling. Leverages, the idea that we should use the model itself to obtain artificial labels for unlabeled data. We've seen a lot of papers in the last few months or years where it's like the teacher teaches the student and then the student teaches the teacher model again and so on. So that they simply combine the two methods in a clever way. They have one last thing that is not in this drawing. Namely, they only use the pseudo label. They have a break right here and they only use the pseudo label if if the confidence. So if this P of Y here is above a certain threshold. So they don't take all the pseudo labels, but they only take the labels where the model is fairly sure about right. So they haven't actually an ablation study where they show that this is reasonably reasonably important. And if you go down here where they say ablation or is it ablation, ablation study. Oh yeah, something I also find cool. If you just give one image per class, one image per class, 10 images that are labeled, it still gets like 78% accuracy. I think the images are chosen as good representations of their class, but still one image per class, pretty, pretty cool. An important part of this is the ablation study where they say, okay, we want to tease apart why this algorithm, why this unsemisupervised learning technique works so well. And they find several important factors. They find, for example, that their augmentation strategy is extremely important. So how they augment the images is very important. You see here the error of this 4.8% on the 250 label split. If you change up the, if you change up the, the augmentation strategies, your error gets higher, right. And so they, they say we use this cutout and we measure the effect of cutout. We find that both cutout and CT augment are required to obtain the best performance. Removing either results in a comparable increase in error rate. Now you've seen before, for example, they went, they went from this 93, sorry, 93.0% to 94.0% from the previous state of the art semi-supervised learning. And here they find that simply changing the augmentation strategy changes the error by more than a percent. So you can just see this in context of what's important here, right. They say, again, the ratio of unlabeled data seems pretty important. We observe a significant decrease in error rates by losing using a large amounts of unlabeled data, right. Then the optimizer and learning rate schedule seems to be very important as well in that they use this, they say, SGD with momentum works much better than Adam. And then they use this decreasing learning rate schedule, this cosine learning rate schedule. There seem to be a lot of things, a lot of hyperparameters that are fairly important here. And you can see that the gains are substantial sometimes, but they aren't like through the roof substantial, where you can make a good argument that it is unclear how much really comes from this clever combination. That fixed match proposes and how much also just comes from whether or not you set the hyperparameters correctly and exactly how much computation are you able to throw at selecting your hyperparameters. So that seems to be a bit of a pain point for me. They also say we find that tuning the weight decay is exceptionally important for low-label regimes, right. 
Choosing a value that is just one order of magnitude larger or smaller than optimal can cost 10% points or more. And so that all of that seems to me that this kind of research, where you're nibbling for half or single percentage points in accuracy, while a single misstep in a choice of hyperparameter might cost you 10 times that gain is a bit sketchy. Now I recognize they get numbers like no one else has gotten before, but where exactly the gains come from and if the gains really come from this architecture or actually just more from throwing computers at it, I don't know. Alright, with that, I hope you enjoyed this and I invite you to check out the paper. Bye-bye. | [{"start": 0.0, "end": 7.0, "text": " Hi, today we're looking at fixed match simplifying semi-supervised learning with consistency"}, {"start": 7.0, "end": 15.32, "text": " and confidence by Kyuk-Son, David Perthalot and others of Google Research."}, {"start": 15.32, "end": 19.32, "text": " So this paper concerns semi-supervised learning."}, {"start": 19.32, "end": 21.76, "text": " So what does semi-supervised learning mean?"}, {"start": 21.76, "end": 27.2, "text": " In semi-supervised learning, you have a data set of labeled samples."}, {"start": 27.2, "end": 34.2, "text": " So you have this data set of x's and corresponding y labels."}, {"start": 34.2, "end": 37.2, "text": " But this data set sometimes is very small."}, {"start": 37.2, "end": 47.2, "text": " Now you have a much bigger data set of unlabeled examples, just x's with no labels."}, {"start": 47.2, "end": 53.2, "text": " So you don't know what the labels of the unlabeled examples are."}, {"start": 53.2, "end": 59.2, "text": " But what you would like to do is you would like to use this really large data set"}, {"start": 59.2, "end": 66.2, "text": " in order to help you with learning the association between the data points and the labels."}, {"start": 66.2, "end": 74.2, "text": " So for example, in this case, you would have something like an image classification data set."}, {"start": 74.2, "end": 77.2, "text": " And I'm going to take the example here of medical data."}, {"start": 77.2, "end": 84.2, "text": " So you have a pictures of lungs, let's draw a lung here, that is an ugly lung."}, {"start": 84.2, "end": 91.2, "text": " You have pictures of lungs and whether or not they have a tumor in them."}, {"start": 91.2, "end": 96.2, "text": " So medical data is very hard to get, especially labeled medical data."}, {"start": 96.2, "end": 100.2, "text": " First of all, you need the data itself."}, {"start": 100.2, "end": 113.2, "text": " Then you also need at least one, but ideally three radiologists to look at whether or not this is a good or a bad image and label it."}, {"start": 113.2, "end": 116.2, "text": " So it's usually very expensive to collect that data."}, {"start": 116.2, "end": 119.2, "text": " But you might have plenty of unlabeled data."}, {"start": 119.2, "end": 129.2, "text": " You might just be able to go through some database and find anonymized, undiagnosed, long scans somewhere lying around."}, {"start": 129.2, "end": 134.2, "text": " The same with images like other images."}, {"start": 134.2, "end": 140.2, "text": " So labeling images is pretty human intensive, but the internet contains like a whole bunch of unlabeled images."}, {"start": 140.2, "end": 151.2, "text": " So the task of semi supervised learning is how do you use this unlabeled data set in order to make your classification on the labeled data set easier."}, {"start": 151.2, "end": 
160.2, "text": " And fixed match combines two approaches to this in a smart way, namely consistency and confidence approach."}, {"start": 160.2, "end": 167.2, "text": " So what does, what does, we'll jump right into into the method."}, {"start": 167.2, "end": 176.2, "text": " So basically what you want to do is you want to say my loss that I optimize, right, this is my loss."}, {"start": 176.2, "end": 187.2, "text": " It consists of two parts, namely a supervised loss, which is your classic classification loss, right, plus an unsupervised loss, right."}, {"start": 187.2, "end": 191.2, "text": " And then you have like some sort of a trade of parameter in front."}, {"start": 191.2, "end": 203.2, "text": " Now your supervised loss here, this is where this is just the cross entropy, let's call it age, between your predicted labels and the actual true labels, right."}, {"start": 203.2, "end": 209.2, "text": " And the predicted labels, they can be, you know, kind of a distribution over labels."}, {"start": 209.2, "end": 220.2, "text": " Now the magic of course is here in the unsupervised loss and this unsupervised loss, this is what's described here in this part, right."}, {"start": 220.2, "end": 228.2, "text": " So the unsupervised loss is going to be this age between P and Q, and we'll see what P and Q is."}, {"start": 228.2, "end": 248.2, "text": " So if for the unsupervised loss, you of course want to start with an unlabeled example, then you have the same sample go into two different pipelines in the first pipeline up here, or you do is, you so called weekly augmented."}, {"start": 248.2, "end": 262.2, "text": " So we're dealing with images, so we have to talk about image augmentation. So image augmentation has long been used in supervised learning to kind of give you more, it's kind of a cheat to give you more training data."}, {"start": 262.2, "end": 279.2, "text": " So if you have an image, right, of let's say our famous cat, you can obtain more training data if you, for example, by random cropping."}, {"start": 279.2, "end": 295.2, "text": " So you can random crop, let's say we just take this bottom right corner here and then we enlarge it to the original size, right, then it is still sort of a cat, but it's just a part of a cat, right."}, {"start": 295.2, "end": 307.2, "text": " But usually that helps because you say, okay, my image data set is just pictures of animals, right, it's entirely conceivable that someone held the camera like this or like this, right."}, {"start": 307.2, "end": 314.2, "text": " So technically in terms of generalizing to a test set, these both data points should be valid."}, {"start": 314.2, "end": 328.2, "text": " So I'm just going to add both to my training data. So you can see how from one training data point, you can get many training data points just by doing this cropping. What you can also do is you can flip it left, right, you just in swap the pixels left, right."}, {"start": 328.2, "end": 344.2, "text": " And usually these kind of so a cat that has a little dark spot here is still a cat when it has the little dark spot over there, right, but to your classifier, those are two different samples."}, {"start": 344.2, "end": 364.2, "text": " So you can do many of those things and they have two kind of augmentations, they have what they call weekly augmented and strongly augmented, right. 
So in the weekly augmented pipeline, I think they just they crop and they shift and they rotate or something like this."}, {"start": 364.2, "end": 380.2, "text": " You can see here this this horsey here, it is something like it's cropped here about then it is turned slightly to the left and then, yeah, I think that's it."}, {"start": 380.2, "end": 396.2, "text": " So they crop, they rotate and then they also flip horizontally at random in like 50% of the time. So this is what's called weekly augmented, the goal here is just to kind of obtain a bit more training data."}, {"start": 396.2, "end": 410.2, "text": " So you run this through your model, through your classification model, as you would a regular sample and you get a prediction. Now from your prediction, you can take the highest prediction here and that is going to be your pseudo label."}, {"start": 410.2, "end": 428.2, "text": " So this is P of Y, this is your distribution that you estimate, right. So and this and this, if you just take the max, this is going to be your Y hat, right. And this is what they call a pseudo label."}, {"start": 428.2, "end": 442.2, "text": " Sorry, you'll see why it is called a pseudo label. So the other pipeline here is the strong augmentation pipeline. Now in week augmentation, we just wanted to get some more training data in strong augmentation."}, {"start": 442.2, "end": 461.2, "text": " Now the goal is to really screw up that picture to the point where it's still, you know, you could recognize in the same class, but you can see here the augmentations they go wild. So you play around with the color with the hue, you play around with the light intensity, right."}, {"start": 461.2, "end": 477.2, "text": " With the contrast, you can do many, many things. You can see this, this image looks basically nothing like this image, but it you can still kind of recognize it as a, as a horse."}, {"start": 477.2, "end": 492.2, "text": " But the strongly augmented data is much more distorted than the weekly augmented data. And that's the point. So also you send the strongly augmented data through the model. And again, you get a prediction, right."}, {"start": 492.2, "end": 507.2, "text": " And now is the trick is you take the label from here and you, you take that as if it were the true label, right. You take that as if it were the true label."}, {"start": 507.2, "end": 528.2, "text": " And you form a loss from this prediction being the model prediction as if this thing here that also comes from the model as if that was the true label, right. That's why it's called a pseudo label because it is a label that you produce from the model itself."}, {"start": 528.2, "end": 540.2, "text": " Now, of course, if these were to be the same picture, it would be kind of pointless, right. That's why you see there needs to be a weekly and a strongly augmented pipe line."}, {"start": 540.2, "end": 552.2, "text": " I am pretty sure if you want a more basic version of this, make this just clean. So no augmentation and make this augmented."}, {"start": 552.2, "end": 567.2, "text": " Right. That's that's how you can think of it. The fact that there is weak and here strong augmentation, I think is just a your classic trick to get more training data. But in essence, you can think of it as this is here the clean thing. 
You just want to produce a label."}, {"start": 567.2, "end": 573.2, "text": " And then you want that, that an augmented version of the image has the same label."}, {"start": 573.2, "end": 584.2, "text": " Now, you can think of it shortly. What does this model learn? If you just have this, you remember, I think the important thing is always to remember that there are two components here, right."}, {"start": 584.2, "end": 602.2, "text": " There is first the supervised loss. This is the important one, ultimately, because we have the true labels, right. And then second, there is the unsupervised loss, which is just an auxiliary loss that is supposed to just kind of tune our model to the name."}, {"start": 602.2, "end": 624.2, "text": " So don't forget that this down here just concerns the unsupervised part of that loss. So if you think what does the model actually learn when whenever you train it like this, it basically learns to revert this strong augmentation."}, {"start": 624.2, "end": 640.2, "text": " So it basically says, hey model, whenever I give you a weak augmented image and I distort it heavily. Whenever I give you an image and I distort it heavily, I want the label to be the same."}, {"start": 640.2, "end": 667.2, "text": " So the model basically learns that whatever the image, whatever the image, the model at the end of the training will be able to basically map any strongly augmented picture to the same class as a weekly augmented picture if it comes from the same source."}, {"start": 667.2, "end": 696.2, "text": " So the model basically learns to ignore these kinds of augmentations. That's what this loss over here does. It basically says these sorts of augmentations, these sorts of distortions of images, please ignore those because I always want you to output the same label here in the prediction here as if I had not distorted or just weekly distorted the image."}, {"start": 696.2, "end": 710.2, "text": " So that's that's what you have to keep in mind that this this loss is designed to make the model not distinguish between differently augmented versions of the same image."}, {"start": 710.2, "end": 735.2, "text": " And interestingly, that really seems to help with the supervised loss. My hypothesis is that all these methods, what they're kind of trying to do is to just tune the neural network to the, let's say the orders of magnitude of the input data and also to the kinds of augmentations that the humans come up with."}, {"start": 735.2, "end": 749.2, "text": " And that's a very important point. So the augmentations and here we said, you know, it's kind of a rotation and the crop, the kind of augmentation really seem to play a role."}, {"start": 749.2, "end": 764.2, "text": " So this paper finds that on C for 10, where the state of the R, I believe is something like 96 97% accuracy on C for 10 with just 250 labeled examples."}, {"start": 764.2, "end": 784.2, "text": " Right. Now the usual data set size is about 50,000. It goes to 94.4.9%. So almost 95% accuracy with the state of the R being like 97. This is incredible with just 250 labeled examples."}, {"start": 784.2, "end": 812.2, "text": " Crazy, right. And with only four labels per class, it gets 88.6%. So that's just 40 images with labels. They get 88.6% of the of accuracy compared to the 97% that you get with like 50,000 images."}, {"start": 812.2, "end": 823.2, "text": " That is pretty, pretty cool, right. Simply by having all other images not labeled, but pseudo labeled and consistency regularized. 
Right."}, {"start": 823.2, "end": 841.2, "text": " So the two, the two things that are combined by fixed match again are consistency regularization, which basically it means that the model should output similar predictions when fed perturbed versions of the same image."}, {"start": 841.2, "end": 858.2, "text": " Right. This they they're really forthcoming that they are not the ones who invented this. They just combine the consistency regularization with the pseudo labeling. Now the pseudo labeling, they have also not invented the pseudo labeling."}, {"start": 858.2, "end": 875.2, "text": " Leverages, the idea that we should use the model itself to obtain artificial labels for unlabeled data. We've seen a lot of papers in the last few months or years where it's like the teacher teaches the student and then the student teaches the teacher model again and so on."}, {"start": 875.2, "end": 895.2, "text": " So that they simply combine the two methods in a clever way. They have one last thing that is not in this drawing. Namely, they only use the pseudo label. They have a break right here and they only use the pseudo label if if the confidence."}, {"start": 895.2, "end": 909.2, "text": " So if this P of Y here is above a certain threshold. So they don't take all the pseudo labels, but they only take the labels where the model is fairly sure about right."}, {"start": 909.2, "end": 915.2, "text": " So they haven't actually an ablation study where they show that this is reasonably reasonably important."}, {"start": 915.2, "end": 927.2, "text": " And if you go down here where they say ablation or is it ablation, ablation study. Oh yeah, something I also find cool."}, {"start": 927.2, "end": 938.2, "text": " If you just give one image per class, one image per class, 10 images that are labeled, it still gets like 78% accuracy."}, {"start": 938.2, "end": 950.2, "text": " I think the images are chosen as good representations of their class, but still one image per class, pretty, pretty cool."}, {"start": 950.2, "end": 963.2, "text": " An important part of this is the ablation study where they say, okay, we want to tease apart why this algorithm, why this unsemisupervised learning technique works so well."}, {"start": 963.2, "end": 975.2, "text": " And they find several important factors. They find, for example, that their augmentation strategy is extremely important. So how they augment the images is very important."}, {"start": 975.2, "end": 985.2, "text": " You see here the error of this 4.8% on the 250 label split."}, {"start": 985.2, "end": 997.2, "text": " If you change up the, if you change up the, the augmentation strategies, your error gets higher, right."}, {"start": 997.2, "end": 1010.2, "text": " And so they, they say we use this cutout and we measure the effect of cutout."}, {"start": 1010.2, "end": 1022.2, "text": " We find that both cutout and CT augment are required to obtain the best performance. 
Removing either results in a comparable increase in error rate."}, {"start": 1022.2, "end": 1037.2, "text": " Now you've seen before, for example, they went, they went from this 93, sorry, 93.0% to 94.0% from the previous state of the art semi-supervised learning."}, {"start": 1037.2, "end": 1045.2, "text": " And here they find that simply changing the augmentation strategy changes the error by more than a percent."}, {"start": 1045.2, "end": 1052.2, "text": " So you can just see this in context of what's important here, right."}, {"start": 1052.2, "end": 1060.2, "text": " They say, again, the ratio of unlabeled data seems pretty important."}, {"start": 1060.2, "end": 1067.2, "text": " We observe a significant decrease in error rates by losing using a large amounts of unlabeled data, right."}, {"start": 1067.2, "end": 1076.2, "text": " Then the optimizer and learning rate schedule seems to be very important as well in that they use this, they say,"}, {"start": 1076.2, "end": 1087.2, "text": " SGD with momentum works much better than Adam. And then they use this decreasing learning rate schedule, this cosine learning rate schedule."}, {"start": 1087.2, "end": 1094.2, "text": " There seem to be a lot of things, a lot of hyperparameters that are fairly important here."}, {"start": 1094.2, "end": 1116.2, "text": " And you can see that the gains are substantial sometimes, but they aren't like through the roof substantial, where you can make a good argument that it is unclear how much really comes from this clever combination."}, {"start": 1116.2, "end": 1135.2, "text": " That fixed match proposes and how much also just comes from whether or not you set the hyperparameters correctly and exactly how much computation are you able to throw at selecting your hyperparameters."}, {"start": 1135.2, "end": 1153.2, "text": " So that seems to be a bit of a pain point for me. They also say we find that tuning the weight decay is exceptionally important for low-label regimes, right."}, {"start": 1153.2, "end": 1177.2, "text": " Choosing a value that is just one order of magnitude larger or smaller than optimal can cost 10% points or more. And so that all of that seems to me that this kind of research, where you're nibbling for half or single percentage points in accuracy,"}, {"start": 1177.2, "end": 1188.2, "text": " while a single misstep in a choice of hyperparameter might cost you 10 times that gain is a bit sketchy."}, {"start": 1188.2, "end": 1204.2, "text": " Now I recognize they get numbers like no one else has gotten before, but where exactly the gains come from and if the gains really come from this architecture or actually just more from throwing computers at it, I don't know."}, {"start": 1204.2, "end": 1211.2, "text": " Alright, with that, I hope you enjoyed this and I invite you to check out the paper. Bye-bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=AU30czb4iQA | Imputer: Sequence Modelling via Imputation and Dynamic Programming | The imputer is a sequence-to-sequence model that strikes a balance between fully autoregressive models with long inference times and fully non-autoregressive models with fast inference. The imputer achieves constant decoding time independent of sequence length by exploiting dynamic programming.
https://arxiv.org/abs/2002.08926
Abstract:
This paper presents the Imputer, a neural sequence model that generates output sequences iteratively via imputations. The Imputer is an iterative generative model, requiring only a constant number of generation steps independent of the number of input or output tokens. The Imputer can be trained to approximately marginalize over all possible alignments between the input and output sequences, and all possible generation orders. We present a tractable dynamic programming training algorithm, which yields a lower bound on the log marginal likelihood. When applied to end-to-end speech recognition, the Imputer outperforms prior non-autoregressive models and achieves competitive results to autoregressive models. On LibriSpeech test-other, the Imputer achieves 11.1 WER, outperforming CTC at 13.0 WER and seq2seq at 12.5 WER.
Authors: William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi, Navdeep Jaitly
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at the Imputer: Sequence Modelling via Imputation and Dynamic Programming by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi and Navdeep Jaitly. So this is a model to perform sequence-to-sequence tasks. Now sequence-to-sequence tasks are very, very common in NLP, but in this case it's kind of a subset of sequence-to-sequence tasks. A classic sequence-to-sequence task is machine translation. Here for example the sentence 'I like you': if you want to translate it to German, that would become 'Ich mag dich'. And you see that the input is a sequence and the output is a sequence. Now the Imputer deals with a very special kind of sequence-to-sequence task. Namely, it deals with sequence-to-sequence tasks where there is a monotonic alignment. So you see that this is given here: the first word corresponds to the first word here, the second to the second and the third to the third. This is not always the case in machine translation. You know, different languages have different sentence structures. For example in French this would be 'Je t'aime'. And you can see that the first word is still the first word. However, the third word has become the second, and the verb goes to the end. So the Imputer would not be able to deal with this task very well. A different task that the Imputer would be useful for would be something like speech recognition. So if someone were to speak the words 'I like you' and you would measure the waveform of that, it would look something like: I... like... you. So if you have this waveform, let's actually make some chunk samples out of this. Let's say this is a sample right here, and here is a break, here and here. So we have five samples on the bottom. And you can see pretty easily that this sample here is the 'I', this is silence, this is the 'like', this is silence, and this is the 'you'. So the Imputer deals with these kinds of sequence-to-sequence tasks where, first of all, there is a monotonic alignment. And second of all, and this is an engineering constraint, the length of the input sequence X is larger than or equal to the length of the output sequence Y, and you'll see why: mainly because we rely on being able to compute this alignment here, the alignment of input samples to output samples. You can see that the monotonic alignment is actually given fairly well in speech recognition, because if something is later down here, it is also later in the sequence up here; that is a monotonic alignment. And also, usually we have more wave samples than we have words in the output sequence. So that would be a task for the Imputer. Now let's think about how we would do something like this. So let's put X at the top here, and we said X has five tokens in it. And let's put Y at the bottom. Y actually has three tokens. So this here is 'I like you', this is the waveform, and we want the 'I like you' at the bottom. So what could we do? First of all, what the Imputer does is it represents 'I like you' not as this form right here, but as a form that has the same length as X, divided into the same number of chunks. And then it does the following. So this here is an example. This is how it would represent Y. It would say: I have as many chunks on the top as on the bottom; I know this chunk here corresponds to this token, then this here to this, and this here to this. 
And then these are these intermediate ones. Right. So you can see these correspond to those. These are silence right here. Now it doesn't always need to be that there is always one token and a silence, then the token and a silence. The task of the imputer is actually to see whether this is more likely than for example, I like and then silence silence and then you. Right. So the task of the imputer is to distinguish these two from each other. And then of course also produce the actual tokens. Now if you think about how would you go about taking X and producing something like Y. So this is Y. Let's call it tilde. This is the actual Y. Right. But you can see that this here is a deterministic function in one way. It's actually not a deterministic function in the other way. And that becomes interesting when you have to compute your loss for this. But how would we go about doing this? Right. What we could do is we could just take a big transformer and a bird. Right. That's actually a draw an arrow. We could just take bird. And we could simply. So in in bird, you have in if you construct it correctly, you have as many input tokens as output tokens. So what we could simply say is for each of the outputs that we get, we simply make this as a softmax classifier over our vocabulary with the silence being one special token. And we simply classify each of the outputs into this vocabulary. This would be one step. Right. So we could do one step bird, bang bang input to outputs. And there is more. There are more sophisticated approaches to doing this in one step like a CTC. But ultimately we could just do one step. But then you'd have the same problem like for example, X L net. If you haven't seen my X L net video, I recommend it. They exactly take the problem. If you do this, right. Then at the moment where you decode the word like, you have no idea that there is an eye over here. All you know is the the vector you have here that you sample the eye from, right. But this could be a distribution where I is pretty high. But some other word is also pretty high. So this process over here that samples the word like has no idea which of the two here you actually would sample. So it cannot condition on it. So it is the assumption here is that the sampling of the word like is independent of the sampling of the word I. And of course that's not the case. You need to know what word is there if you want to sample the word like. Otherwise you can end up with some very confusing sentences. So this one step process is pretty quick, but it has the drawback that there are these conditional independence assumptions. And again, I invite you to watch the X L net video if you want to dive more into this problem. The second thing we could do is we could just decode one after another. So we could say, all right, I'll make, sorry, I'll make my five slots here. And I just leave the M. T for now. And I'm just going to decode the one that I am most sure about. And let's say the speech at the back here is very clear. And you say, I know this is a U, right. So I'm on a fill in U right here, right, and make this alignment. That this goes here. This is the U, right. I still don't know what the others are, but now what I do a second step. And in the second step, I get as an input, not only the original input like this, this thing here, but I also get the fact that I already decoded the word U to here, right, in this step. So now I say given that I already decoded the word U, which one am I now most sure about? 
And I might be most sure about to say, I'm most sure about this now being an I because there's a U at the end. And this kind of sounds like an I. So an I here, right. It goes to the next step. And then the next step, it already has the information that it decoded I and you. And now it's a might say, ah, okay, given these, so probably this thing, so I here, probably the thing here, the thing right here is silence, right. Makes the most sense. I kind of hear some noise, but there's already a word after. So now I'm pretty sure that this here is a silent token, right. And you go this until the end, until you're actually at this. So this here would be N step decoding. This here would be N steps of decoding, which now no longer has the problem of these conditional independence assumptions, but of course, now you have the problem that you need N steps, right. The imputer does something in the middle of this. The imputer will, as you can see here, it will form this in two blocks, right. Blocks of size B. And this is the empty symbol here. Um, right. And what it will do is it will make a step where in each block, for each block, it will condition on the previous alignment and condition on the input, it will decode whatever it feels it is most certain about in each block. And then it does this for as long as there are still empty tokens, right. You can see here the first block. And then in the second step, it will decode this, this, this, and this. So the imputer can trade off between the conditional independence assumption of the one step bird and the full conditional independence assumption of the N step decoding. Right. So it will compute this alignment and the actual tokens at the same time in this process. So how many steps does this take? This takes now B steps. And this is pretty cool because B is the block size. So this is independent of the sequence length. So it is able to compute this alignment and output in a constant number of steps. Right. So you're by modulating this B, you're now able to trade off speed versus, let's say, performance in the imputer. And this is pretty cool. So I think, actually, I think the bigger point to understand here is how to actually use the assumption that there is a monotonic alignment, right. Because if there is a monotonic alignment, and if this thing is given here, then you can do this, you can do this representation right here with the silence tokens. And that allows you to basically represent the output in a form that is of the same length as the input and do this kind of token by token decoding while still allowing you to have variable lengths output as long as they're smaller in length than the input. So that's pretty cool. And then the next pretty cool thing is the fact that they do this in blocks. Now, of course, my issue with this, so this is how the system works. My issue with this is how the system is trained. So if you think about how you train this, you must train this. First of all, the loss function, right, has to revert this. And how they do it is they marginalize, you see this down here, you want to marginalize over all the possible alignments right here. And so this is how you train. You sample an alignment from the alignment policy. And this alignment policy is, I think they have some heuristics of how they construct the alignments during training or you have experts actually giving you this alignment. I think they use in the speech recognition, they use something like CTC to give you the alignments from the alignment policy. 
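A rough sketch of the block decoding loop described at this point, assuming a model that takes the input features plus the current partial alignment and returns per-position logits over the vocabulary including a blank token. The helper names, the greedy one-slot-per-block rule, and the final blank-stripping step are illustrative assumptions, not the authors' code.

```python
import torch

BLANK = 0   # index of the blank / silence token (assumed)
EMPTY = -1  # marker for a not-yet-decoded alignment slot

def imputer_decode(model, features, block_size):
    """Greedy block decoding: one token per block per step, block_size steps."""
    length = features.shape[0]
    alignment = torch.full((length,), EMPTY, dtype=torch.long)

    for _ in range(block_size):                     # constant number of steps
        logits = model(features, alignment)         # shape (length, vocab)
        conf, tokens = torch.softmax(logits, dim=-1).max(dim=-1)

        for start in range(0, length, block_size):  # fill one slot per block
            block = slice(start, start + block_size)
            open_slots = (alignment[block] == EMPTY).nonzero(as_tuple=True)[0]
            if len(open_slots) == 0:
                continue
            best = open_slots[conf[block][open_slots].argmax()]
            alignment[start + best] = tokens[block][best]

    # Collapse the alignment: drop blanks to recover the output sequence.
    return [int(t) for t in alignment if int(t) != BLANK]
```

The number of outer iterations is the block size B, not the sequence length, which is what gives the constant number of generation steps discussed here.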
And then you have a masking policy. And I think they also just do random masking. And then they use that for training and then they marginalize over the alignments. This I'm pretty sure is not the same distribution as the decoding procedure I just described. Right. So the decoding procedure, if you do this in B steps, right, that means each of the step is dependent on the step before. So that means the distribution of whatever you, whatever the imputer sees is actually dependent on itself. While these people are proposing a training framework where you have here, you have a heuristic in order to come up with the training sample alignments. And here you have a random, I think a random masking policy that comes up with the, with where the empty tokens are. So this is not the same distribution. And then also it marginalizes over all compatible alignments, which I'm pretty sure this is not the same distribution. This is not the correct loss distribution. They have some math to show that in expectation it's the same. But yeah, this is, this is over there over there rolling policy and role and exks are and marginalization. This, I don't want to go too deep into this. I've given it some thought, but it will make this video too long and boring if I actually go into the details here. Suffice to say, I invite you to look at the loss computation and ask yourself if you think that is the correct way to produce the data set for training given how you do the inference later. The architecture of the imputer is actually pretty similar to burped in that. First of all, well, okay, you're dealing with audio in the input. So you're going to have some convolutional network here. And you also need to take as an input the prior alignment that you've already produced. So this you embed and then you simply do an attention network a transformer, which is pretty close to the burped example we've made. So I mean, they stress that their loss is actually a lower bound on the loss. So I shouldn't be I shouldn't be too hard when I say it's not the correct distribution. They do minimize something, some loss that actually makes sense. But yeah, I mainly wanted to go over the over the how the imputer works and how the it is structured. And I think it's pretty cool. And it lends itself very well to these tasks. And most of all, I like the fact that it exploits the these assumptions here. So not all tasks fit these assumptions, but if a task does fit the assumption, then I think it should be, you know, it should be fairly obvious that one should exploit that in order to perform better. All right, that was it from me. Thanks. | [{"start": 0.0, "end": 6.48, "text": " Hi there. Today we're looking at the imputed sequence modeling via imputation and"}, {"start": 6.48, "end": 12.8, "text": " dynamic programming by William Chan, Chitwan Saria, Jeffrey Hinton, Mohamed"}, {"start": 12.8, "end": 19.48, "text": " Nuruzie and Navdeep Jaitley. So this is a model to perform sequence to sequence"}, {"start": 19.48, "end": 28.32, "text": " tasks. Now sequence to sequence tasks are very very common in NLP but in this"}, {"start": 28.32, "end": 33.96, "text": " case it's kind of a subset of sequence to sequence tasks. So a classic sequence"}, {"start": 33.96, "end": 39.24, "text": " to sequence task is machine translation. 
Here for example the sentence I like"}, {"start": 39.24, "end": 46.2, "text": " you if you want to translate it to German, sorry, you if you want to translate it"}, {"start": 46.2, "end": 57.28, "text": " to German that would become a mark day. And you see that the input is a sequence"}, {"start": 57.28, "end": 63.4, "text": " right and the output is a sequence. Now the imputer deals with very special"}, {"start": 63.4, "end": 67.84, "text": " kind of sequence to sequence tasks. Namely it deals with sequence to sequence"}, {"start": 67.84, "end": 73.0, "text": " tasks where there is a monotonic alignment. So you see that this is given here."}, {"start": 73.0, "end": 77.24000000000001, "text": " The first word is corresponding to the first word here, the second to the"}, {"start": 77.24000000000001, "end": 83.28, "text": " second and the third to the third. This is not always the case in machine"}, {"start": 83.28, "end": 87.56, "text": " translation. You know different languages have different sentence structures. So"}, {"start": 87.56, "end": 94.84, "text": " for example in French this would be J'aim. And you can see that the first word is"}, {"start": 94.84, "end": 100.4, "text": " still the first word. However the third word has become the second and the"}, {"start": 100.4, "end": 105.68, "text": " when the verb goes to the end. So the imputer would not be able to deal with"}, {"start": 105.68, "end": 112.2, "text": " this task very well. A different task where the imputer would be useful for"}, {"start": 112.2, "end": 117.60000000000001, "text": " would be something like speech recognition. So if someone were to speak the"}, {"start": 117.60000000000001, "end": 121.64, "text": " words I like you and you would measure the waveform of that it would look"}, {"start": 121.64, "end": 129.32, "text": " something like I like you. Right. So if you have this waveform let's"}, {"start": 129.32, "end": 134.92000000000002, "text": " actually make some chunk samples out of this. So let's say this is a sample"}, {"start": 134.92, "end": 142.51999999999998, "text": " right here and here is a break here and here. Right. So we have five samples on"}, {"start": 142.51999999999998, "end": 149.79999999999998, "text": " the bottom. Right. And you can see pretty easily that this sample here this is"}, {"start": 149.79999999999998, "end": 156.6, "text": " the I and this is silence. This is the like this is silence and this is the U."}, {"start": 156.6, "end": 161.11999999999998, "text": " So the imputer deals with these kind of sequence to sequence task where first"}, {"start": 161.12, "end": 167.72, "text": " of all there is a monotonic alignment. Sorry monotonic alignment. And second of"}, {"start": 167.72, "end": 173.48000000000002, "text": " all this is an engineering constraint where the length of the input sequence"}, {"start": 173.48000000000002, "end": 178.92000000000002, "text": " X is larger or equal to the length of the input sequence Y and you'll you'll"}, {"start": 178.92000000000002, "end": 185.04000000000002, "text": " see why mainly because we rely on being able to compute this alignment here."}, {"start": 185.04, "end": 194.04, "text": " The alignment of input samples to output samples. Right. 
You can see that the"}, {"start": 194.04, "end": 198.35999999999999, "text": " monotonic alignment is actually given fairly well in speech recognition because"}, {"start": 198.35999999999999, "end": 203.32, "text": " if something is later down here it is also later in the sequence up here that"}, {"start": 203.32, "end": 211.56, "text": " is a monotonic alignment. And also usually we have more wave samples it then we"}, {"start": 211.56, "end": 219.6, "text": " have words in the output sequence. So that would be a task for the imputer. Now"}, {"start": 219.6, "end": 226.24, "text": " let's think about how we would do something like this. So let's put X at the"}, {"start": 226.24, "end": 234.28, "text": " top here and we said X has five tokens in it. And let's put Y at the at the"}, {"start": 234.28, "end": 246.68, "text": " bottom. Right. Y actually has three tokens. Right. So this here is I like you. This"}, {"start": 246.68, "end": 252.2, "text": " is the waveform. Right. And we want the I like you at the bottom. So what could"}, {"start": 252.2, "end": 258.4, "text": " we do? Right. First of all, what the imputer does is it represents I like you. Not"}, {"start": 258.4, "end": 267.76, "text": " as this form right here, but as a form where you have same length as X divided"}, {"start": 267.76, "end": 276.08, "text": " into the same amount of things. And then it does the following. So for this, this"}, {"start": 276.08, "end": 286.59999999999997, "text": " is an example. Right. This is how it would represent Y. Right."}, {"start": 286.6, "end": 294.0, "text": " It would say I have as many chunks on the top as on the bottom, I know this chunk"}, {"start": 294.0, "end": 301.24, "text": " here corresponds to this token, then this here to this and this here to this. And then"}, {"start": 301.24, "end": 308.12, "text": " these are these intermediate ones. Right. So you can see these correspond to those."}, {"start": 308.12, "end": 314.32000000000005, "text": " These are silence right here. Now it doesn't always need to be that there is always one"}, {"start": 314.32, "end": 319.04, "text": " token and a silence, then the token and a silence. The task of the imputer is actually"}, {"start": 319.04, "end": 330.36, "text": " to see whether this is more likely than for example, I like and then silence silence"}, {"start": 330.36, "end": 336.2, "text": " and then you. Right. So the task of the imputer is to distinguish these two from each"}, {"start": 336.2, "end": 341.48, "text": " other. And then of course also produce the actual tokens. Now if you think about how"}, {"start": 341.48, "end": 348.84000000000003, "text": " would you go about taking X and producing something like Y. So this is Y. Let's call it"}, {"start": 348.84000000000003, "end": 353.96000000000004, "text": " tilde. This is the actual Y. Right. But you can see that this here is a deterministic"}, {"start": 353.96000000000004, "end": 359.44, "text": " function in one way. It's actually not a deterministic function in the other way."}, {"start": 359.44, "end": 363.36, "text": " And that becomes interesting when you have to compute your loss for this. But how would"}, {"start": 363.36, "end": 368.16, "text": " we go about doing this? Right. What we could do is we could just take a big transformer"}, {"start": 368.16, "end": 375.76000000000005, "text": " and a bird. Right. That's actually a draw an arrow. We could just take bird. And we could"}, {"start": 375.76000000000005, "end": 385.0, "text": " simply. 
So in in bird, you have in if you construct it correctly, you have as many input tokens"}, {"start": 385.0, "end": 390.40000000000003, "text": " as output tokens. So what we could simply say is for each of the outputs that we get,"}, {"start": 390.40000000000003, "end": 397.64000000000004, "text": " we simply make this as a softmax classifier over our vocabulary with the silence being"}, {"start": 397.64, "end": 404.8, "text": " one special token. And we simply classify each of the outputs into this vocabulary. This"}, {"start": 404.8, "end": 414.47999999999996, "text": " would be one step. Right. So we could do one step bird, bang bang input to outputs."}, {"start": 414.47999999999996, "end": 419.52, "text": " And there is more. There are more sophisticated approaches to doing this in one step like"}, {"start": 419.52, "end": 425.56, "text": " a CTC. But ultimately we could just do one step. But then you'd have the same problem"}, {"start": 425.56, "end": 432.68, "text": " like for example, X L net. If you haven't seen my X L net video, I recommend it. They exactly"}, {"start": 432.68, "end": 440.16, "text": " take the problem. If you do this, right. Then at the moment where you decode the word like,"}, {"start": 440.16, "end": 446.16, "text": " you have no idea that there is an eye over here. All you know is the the vector you have"}, {"start": 446.16, "end": 453.64, "text": " here that you sample the eye from, right. But this could be a distribution where I is"}, {"start": 453.64, "end": 458.96, "text": " pretty high. But some other word is also pretty high. So this process over here that samples"}, {"start": 458.96, "end": 466.12, "text": " the word like has no idea which of the two here you actually would sample. So it cannot"}, {"start": 466.12, "end": 472.03999999999996, "text": " condition on it. So it is the assumption here is that the sampling of the word like is"}, {"start": 472.03999999999996, "end": 477.88, "text": " independent of the sampling of the word I. And of course that's not the case. You need"}, {"start": 477.88, "end": 484.88, "text": " to know what word is there if you want to sample the word like. Otherwise you can end up"}, {"start": 484.88, "end": 492.56, "text": " with some very confusing sentences. So this one step process is pretty quick, but it has"}, {"start": 492.56, "end": 497.08, "text": " the drawback that there are these conditional independence assumptions. And again, I invite"}, {"start": 497.08, "end": 503.56, "text": " you to watch the X L net video if you want to dive more into this problem. The second thing"}, {"start": 503.56, "end": 514.24, "text": " we could do is we could just decode one after another. So we could say, all right, I'll"}, {"start": 514.24, "end": 519.92, "text": " make, sorry, I'll make my five slots here. And I just leave the M. T for now. And I'm"}, {"start": 519.92, "end": 526.12, "text": " just going to decode the one that I am most sure about. And let's say the speech at the"}, {"start": 526.12, "end": 532.12, "text": " back here is very clear. And you say, I know this is a U, right. So I'm on a fill in"}, {"start": 532.12, "end": 540.88, "text": " U right here, right, and make this alignment. That this goes here. This is the U, right."}, {"start": 540.88, "end": 547.88, "text": " I still don't know what the others are, but now what I do a second step. 
And in the"}, {"start": 547.88, "end": 558.08, "text": " second step, I get as an input, not only the original input like this, this thing here,"}, {"start": 558.08, "end": 564.8000000000001, "text": " but I also get the fact that I already decoded the word U to here, right, in this step."}, {"start": 564.8000000000001, "end": 571.1600000000001, "text": " So now I say given that I already decoded the word U, which one am I now most sure about?"}, {"start": 571.1600000000001, "end": 577.12, "text": " And I might be most sure about to say, I'm most sure about this now being an I because"}, {"start": 577.12, "end": 582.24, "text": " there's a U at the end. And this kind of sounds like an I. So an I here, right. It goes"}, {"start": 582.24, "end": 587.6800000000001, "text": " to the next step. And then the next step, it already has the information that it decoded"}, {"start": 587.68, "end": 597.04, "text": " I and you. And now it's a might say, ah, okay, given these, so probably this thing,"}, {"start": 597.04, "end": 604.3599999999999, "text": " so I here, probably the thing here, the thing right here is silence, right. Makes the"}, {"start": 604.3599999999999, "end": 608.4399999999999, "text": " most sense. I kind of hear some noise, but there's already a word after. So now I'm"}, {"start": 608.4399999999999, "end": 614.56, "text": " pretty sure that this here is a silent token, right. And you go this until the end, until"}, {"start": 614.56, "end": 623.76, "text": " you're actually at this. So this here would be N step decoding. This here would be N steps"}, {"start": 623.76, "end": 630.52, "text": " of decoding, which now no longer has the problem of these conditional independence assumptions,"}, {"start": 630.52, "end": 637.88, "text": " but of course, now you have the problem that you need N steps, right. The imputer does"}, {"start": 637.88, "end": 645.92, "text": " something in the middle of this. The imputer will, as you can see here, it will form this"}, {"start": 645.92, "end": 653.8, "text": " in two blocks, right. Blocks of size B. And this is the empty symbol here. Um, right."}, {"start": 653.8, "end": 660.52, "text": " And what it will do is it will make a step where in each block, for each block, it will"}, {"start": 660.52, "end": 666.44, "text": " condition on the previous alignment and condition on the input, it will decode whatever it feels"}, {"start": 666.44, "end": 675.44, "text": " it is most certain about in each block. And then it does this for as long as there are"}, {"start": 675.44, "end": 679.9200000000001, "text": " still empty tokens, right. You can see here the first block. And then in the second step,"}, {"start": 679.9200000000001, "end": 688.36, "text": " it will decode this, this, this, and this. So the imputer can trade off between the conditional"}, {"start": 688.36, "end": 696.6, "text": " independence assumption of the one step bird and the full conditional independence assumption"}, {"start": 696.6, "end": 704.64, "text": " of the N step decoding. Right. So it will compute this alignment and the actual tokens"}, {"start": 704.64, "end": 715.76, "text": " at the same time in this process. So how many steps does this take? This takes now B steps."}, {"start": 715.76, "end": 722.24, "text": " And this is pretty cool because B is the block size. So this is independent of the sequence"}, {"start": 722.24, "end": 729.76, "text": " length. 
So it is able to compute this alignment and output in a constant number of steps."}, {"start": 729.76, "end": 737.28, "text": " Right. So you're by modulating this B, you're now able to trade off speed versus, let's"}, {"start": 737.28, "end": 744.48, "text": " say, performance in the imputer. And this is pretty cool. So I think, actually, I think"}, {"start": 744.48, "end": 752.04, "text": " the bigger point to understand here is how to actually use the assumption that there is"}, {"start": 752.04, "end": 756.88, "text": " a monotonic alignment, right. Because if there is a monotonic alignment, and if this thing"}, {"start": 756.88, "end": 765.84, "text": " is given here, then you can do this, you can do this representation right here with the"}, {"start": 765.84, "end": 775.08, "text": " silence tokens. And that allows you to basically represent the output in a form that is of"}, {"start": 775.08, "end": 781.32, "text": " the same length as the input and do this kind of token by token decoding while still"}, {"start": 781.32, "end": 787.24, "text": " allowing you to have variable lengths output as long as they're smaller in length than"}, {"start": 787.24, "end": 794.8000000000001, "text": " the input. So that's pretty cool. And then the next pretty cool thing is the fact that"}, {"start": 794.8, "end": 803.5999999999999, "text": " they do this in blocks. Now, of course, my issue with this, so this is how the system works."}, {"start": 803.5999999999999, "end": 811.68, "text": " My issue with this is how the system is trained. So if you think about how you train this,"}, {"start": 811.68, "end": 820.4399999999999, "text": " you must train this. First of all, the loss function, right, has to revert this. And how"}, {"start": 820.44, "end": 830.32, "text": " they do it is they marginalize, you see this down here, you want to marginalize over all"}, {"start": 830.32, "end": 839.6400000000001, "text": " the possible alignments right here. And so this is how you train. You sample an alignment"}, {"start": 839.6400000000001, "end": 849.2800000000001, "text": " from the alignment policy. And this alignment policy is, I think they have some heuristics"}, {"start": 849.28, "end": 856.1999999999999, "text": " of how they construct the alignments during training or you have experts actually giving"}, {"start": 856.1999999999999, "end": 861.48, "text": " you this alignment. I think they use in the speech recognition, they use something like"}, {"start": 861.48, "end": 872.0799999999999, "text": " CTC to give you the alignments from the alignment policy. And then you have a masking policy."}, {"start": 872.0799999999999, "end": 878.48, "text": " And I think they also just do random masking. And then they use that for training and then"}, {"start": 878.48, "end": 888.44, "text": " they marginalize over the alignments. This I'm pretty sure is not the same distribution"}, {"start": 888.44, "end": 898.04, "text": " as the decoding procedure I just described. Right. So the decoding procedure, if you do"}, {"start": 898.04, "end": 905.96, "text": " this in B steps, right, that means each of the step is dependent on the step before."}, {"start": 905.96, "end": 911.5600000000001, "text": " So that means the distribution of whatever you, whatever the imputer sees is actually"}, {"start": 911.5600000000001, "end": 919.44, "text": " dependent on itself. 
While these people are proposing a training framework where you"}, {"start": 919.44, "end": 927.12, "text": " have here, you have a heuristic in order to come up with the training sample alignments."}, {"start": 927.12, "end": 935.8, "text": " And here you have a random, I think a random masking policy that comes up with the, with"}, {"start": 935.8, "end": 942.36, "text": " where the empty tokens are. So this is not the same distribution. And then also it marginalizes"}, {"start": 942.36, "end": 950.28, "text": " over all compatible alignments, which I'm pretty sure this is not the same distribution."}, {"start": 950.28, "end": 956.16, "text": " This is not the correct loss distribution. They have some math to show that in expectation"}, {"start": 956.16, "end": 964.8399999999999, "text": " it's the same. But yeah, this is, this is over there over there rolling policy and role"}, {"start": 964.8399999999999, "end": 973.92, "text": " and exks are and marginalization. This, I don't want to go too deep into this. I've given"}, {"start": 973.92, "end": 979.3199999999999, "text": " it some thought, but it will make this video too long and boring if I actually go into"}, {"start": 979.3199999999999, "end": 985.4399999999999, "text": " the details here. Suffice to say, I invite you to look at the loss computation and ask"}, {"start": 985.44, "end": 994.2, "text": " yourself if you think that is the correct way to produce the data set for training given"}, {"start": 994.2, "end": 1002.6400000000001, "text": " how you do the inference later. The architecture of the imputer is actually pretty similar to"}, {"start": 1002.6400000000001, "end": 1008.8800000000001, "text": " burped in that. First of all, well, okay, you're dealing with audio in the input. So you're"}, {"start": 1008.8800000000001, "end": 1014.2, "text": " going to have some convolutional network here. And you also need to take as an input the"}, {"start": 1014.2, "end": 1020.72, "text": " prior alignment that you've already produced. So this you embed and then you simply do"}, {"start": 1020.72, "end": 1029.8, "text": " an attention network a transformer, which is pretty close to the burped example we've"}, {"start": 1029.8, "end": 1042.0, "text": " made. So I mean, they stress that their loss is actually a lower bound on the loss. So"}, {"start": 1042.0, "end": 1046.8, "text": " I shouldn't be I shouldn't be too hard when I say it's not the correct distribution."}, {"start": 1046.8, "end": 1056.76, "text": " They do minimize something, some loss that actually makes sense. But yeah, I mainly wanted"}, {"start": 1056.76, "end": 1063.72, "text": " to go over the over the how the imputer works and how the it is structured. And I think"}, {"start": 1063.72, "end": 1072.28, "text": " it's pretty cool. And it lends itself very well to these tasks. And most of all, I like the"}, {"start": 1072.28, "end": 1080.72, "text": " fact that it exploits the these assumptions here. So not all tasks fit these assumptions,"}, {"start": 1080.72, "end": 1087.64, "text": " but if a task does fit the assumption, then I think it should be, you know, it should be"}, {"start": 1087.64, "end": 1092.64, "text": " fairly obvious that one should exploit that in order to perform better. All right, that"}, {"start": 1092.64, "end": 1093.8400000000001, "text": " was it from me. Thanks."}] |
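The block-parallel decoding walked through in the Imputer transcript above lends itself to a short illustration. Below is a minimal sketch, not the paper's code: `score_fn` is a hypothetical stand-in for a model that, given the partially decoded alignment, returns a confidence for every token (including the silence/blank symbol) at every input position. Each pass commits the single most confident still-empty slot inside every block of size B, so the whole alignment is filled after exactly B passes, independent of the sequence length.

```python
import numpy as np

def block_decode(score_fn, num_positions, block_size, mask_id=-1):
    """score_fn(alignment) -> (num_positions, vocab_size) array of confidences,
    conditioned on the partially decoded alignment. Hypothetical interface."""
    alignment = np.full(num_positions, mask_id, dtype=int)
    for _ in range(block_size):                      # exactly B passes in total
        scores = score_fn(alignment)                 # condition on what is already decoded
        for start in range(0, num_positions, block_size):
            idx = np.arange(start, min(start + block_size, num_positions))
            empty = idx[alignment[idx] == mask_id]   # still-undecoded slots in this block
            if empty.size == 0:
                continue
            conf = scores[empty]                     # (num_empty, vocab_size)
            j = int(conf.max(axis=1).argmax())       # most certain empty slot in the block
            alignment[empty[j]] = int(conf[j].argmax())  # commit its best token (or silence)
    return alignment
```

With block_size = 1 this reduces to the single-step parallel decoder with its conditional-independence assumption; with block_size equal to the full length it becomes the N-step one-token-at-a-time decoding; intermediate values trade the two off, which is the point made in the video.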
Yannic Kilcher | https://www.youtube.com/watch?v=ZVVnvZdUMUk | The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks | Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance.
https://arxiv.org/abs/1803.03635
Abstract:
Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective.
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.
Authors: Jonathan Frankle, Michael Carbin
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at the lottery ticket hypothesis finding sparse trainable neural networks by Jonathan Frankel and Michael Carben. So this paper is sort of an empirical paper into what makes neural networks train successfully. And it comes out of the literature of pruning. So they say neural network pruning techniques, right? They have been around for a while. They can reduce the parameter counts of trained networks by over 90% decreasing storage requirements and improving computational performance or inference without compromising accuracy, right? So what does this mean? If you have a neural network, let's say you just have three nodes each layer. You have two layers here. You have a neural network here. If you have a fully connected neural network, every node is going to be connected with every node in the next layer, right? And these are these connections are your weights, your theta's. And you're going to train them, which means you have a, you have a number of steps in this direction. And let's say you have a test set accuracy, right? Here. So here is steps. You're going to train them. And if you train them, your accuracy will reach a certain point, right? I'm just going to draw the end point here. Let's say you reach a 90% test accuracy. So your network generalizes pretty well. That's pretty good. So people have been wondering, this, these networks, they require quite a lot of storage. You know, this is nine connections right here. So three times three. And this is also nine connections. Can we make it smaller, but still retain the accuracy? And this is where pruning comes in. So with pruning, people would go and after you train them. So the first step is train the full network, right? And then the second step is prune. Now when you prune, you basically select among the weights that you have, that you have trained, you select the best ones in some form or another. In this case, people just select the ones with the largest magnitudes. But there are multiple techniques to do this. And this is very related to things like quantization or distillation. So with pruning, you just leave away some of the weights or most of the weights. And you hope that you still retain a pretty good accuracy right here, right? Sorry, actually, we don't need these steps thing. So you leave away weights and you retain a good accuracy. So pruning methods have been deployed successfully to make networks useless space or be faster to evaluate because of course with less numbers, you need to do less calculations, right? So this paper builds on top of this. And it basically says, all right, if if we do the following, if if we now take this network that we identified after training and we, we just take this network and we train it from the beginning, only this sub network, right? So three is retrain. Then it will also perform pretty well or even better under one condition, right? So if you only train this thing, it will perform well under one condition. And the condition is that you transfer over the initial weights. So right, the question is, can we train just the small network from the beginning so that we don't have to train the big network? Right? And the paper identifies that this works if your initial weights, theta zero of the small network are equal to the initial weights of the large network, right? Just so just the ones where you port them over. But basically, the short answer is no. 
And the reason is if you only want to train this small network, you need to know the good initialization of these of these weights all here. And the good initialization you only know after you've trained the large network and actually identified which of these connections makes sense. So you can just take a smaller network from the beginning, you have to train the larger one, then you know which weights and which initializations make sense. So this is the winning lottery ticket hypothesis, basically. It states and we can read it out in full. The lottery ticket hypothesis is a randomly initialized dense neural network contains a sub network that is initialized such that when trained in isolation, it can match the test accuracy of the original network after training for at most the same number of iterations, right? Now the important part here is that it contains a sub network, right? That is initialized such that when trained in isolation. So two things are important. It is important. The structure of the network of the sub network, but it is also important. What are the initialization of the connections? So the paper kind of hints at the neural networks work at all. And the reason why neural networks work is because we've often thought of pain neural networks have so many parameters. How can they even generalize? The reason is the following. If we have a neural network, we throw so many parameters at it, some of the parameters, one subset of the parameters, namely the red ones here, are going to be initialized in such a way, in such a beneficial way that training will perform, will make them the network perform well, right? So it's initialization plus SGD on that sub network, right? So it is actually only a very small sub network that is responsible for the performance of the neural network, but that sub network needs to be initialized at the correct position. And by over parameterizing these neural networks so much, we actually give it combinatorically many sub networks to choose from where the initialization could be well. So because of this combinatorics, it means that if we over parameterize by some margin, then there's almost guaranteed to be a good sub network in there that can then perform well, right? So I hope this makes sense. It is basically, it is not a way, it is not a magic thing where you now we can, we now can train the smaller networks. It is, it is an explanation of why the over parameterization in neural networks makes sense. Because by over parameterizing, we allow the neural networks to combinator, to exploit the combinatorics to find a good, well initialized sub network that will perform well. And the evidence for this is exactly the fact that if we transfer over the sub network, it by itself will reach the same performance or actually exceed the performance. But only if we initialize it at the same point as it was initialized in the original network. So here is how these sub networks are identified. We've already hinted at that, but here is how the paper does it, right? So it says identifying winning tickets. First, randomly initialize a neural network. This is the full neural network, right? Then train the network for j iterations arriving at some parameters, right? These are the trained parameters. Perune, p percent of the parameters, right? So of these parameters, prune some, right? And this is for in order to know which ones you prune, you need to have first the trained, the full neural network, right? So this is the catch here. 
You need to train the full neural network to know which ones you must prune. And thereby you create a mask M, right? And then they say reset the remaining parameters to their value in theta zero. Actually, you don't need to say remaining. You can just say reset the parameters to their values in theta zero. Now this is also important. This is the same theta zero as it was at the beginning of the training, right? So you need to actually set them back to those exact values. And thereby you create the winning ticket. This, okay, actually, if you just want to end up with a trained network, then you then this, this, this, this remaining thing here is important. But if you then want to retrain, you can only, you can set everything back and only train the masked version of the network, right? And they, they say this, this will identify these winning tickets and it will actually work better if you don't do this in what they call one shot. But if you do this iterative pruning, that means it repeatedly trains prunes and resets the network over n rounds, each round prunes p to the one over n percent of the weights that survive the previous round. Now, why might that be? It might be and this is, this is, I think, some valid hypothesis that I myself put forth here. It might be that if you prune some of the weights, right? Let's say you prune this one and this one. What you'll do is you'll put the responsibility of these weights onto other weights. So maybe on this one and this one. So as we said, they prune by by looking at which weights are large. So let's say here we have the weights of the layer and these are the magnitudes of the of the weights. Right. So, okay. So you would prune, let's say you only want to keep two of those around. So you would prune this one and this one because these are pretty small. Right. Here's the magnitude and you would also prune that one. Right. If you just do this one shot and then you would retrain and maybe these weights would end up somewhat different. But if you do this in multiple rounds, right. Let's say you first prune one of them. You only prune the smallest one. Right. This one here and then you retrain and then your weights actually change. And all of the responsibility that this weight carried before is now transferred onto this. Right. So your new your new weights look like this and you prune another one like this. And again, all the responsibility of this would in my hypothetical example fall on this one. Right. And now if you prune a third one, you would actually prune this one because you realize, ah, this weight here in absence of these two other weights is actually important. So you would prune this one as well. Right. So I think that is why this kind of iterative pruning method might work a bit better than the one shot pruning method that they say here. So they do a lot of empirical investigation. And I just want to highlight a very few of them. But so that you get the gist and then the paper goes into a lot of detail and a lot of different architectures that you can check out yourself. All right. So here we've a plot sorry that that deals with percent of weights remaining. So as you go to the right here, they drop on more and more weights and realize this is a log plot. Right. So if the dash lines here are random pruning, which means you just drop out a certain number of weights and then you you retrain. Right. 
And you can see that the dash line here, it it starts dropping and just becomes worse as you as you have less and less weights remaining, which is exactly what's expected, right. You prune the network, you make it smaller, you make it less performant and the more made weights you take away, the less performing it is. But interestingly enough, right. If you do this pruning that they suggest and then retrain with the correct initialization, which only do you retain the same level of accuracy for very long. You see here this is 2.9 or 1.2% of weights remaining, but you actually go higher. Right. So you can see here when you have 16% of weights remaining, you there's actually a significant difference between the full network and the prune network. And that's only by simply training this winning hypothesis. So this I find very, very fascinating. And again, this is not a magic bullet that you can do from the beginning, but it does give a clue that if you could train, sorry, if you could train these from the beginning, then then you might actually end up at a better point. So it does actually give a practical application. Also you see they train faster. So the blue line here is the full network over the course of training. Sorry, this should be blue. So here is training iterations and this is test accuracy. So you see the full network does something like this. Now if you prune to 20% of the weights, actually train faster and you go higher. And even if you have 7% of the weights, you go almost as high. So this is very interesting. Only when you go to like 1.9% of the weights, does your performance degrade again and eventually actually go lower than the original network? So that is pretty, pretty, pretty cool, I think. Now they do, as I said, they do a lot of investigation. And I think one of the, one of the main takeaways is that it is not only the structure of the winning hypothesis. So it's not only the structure of the sub network that makes it to be a winning hypothesis. It is actually the initialization. Here I want to show one of these plots. They have lots of plots. But you can see here, for example, sorry, this is from my own annotations. Again, this is percent of weights remaining and this is test accuracy at the final iteration. And if we initialize the sub network at its original position, like this method suggests, you see we first increase the accuracy and then decrease it after a long time. If we take the same sub network, right, but we randomly re-initialize it, then it drops much faster and actually immediately drops. So it really is about not only the structure of the sub network, but about its initialization. I think that is, that is the core of the hypothesis here. A very interesting related finding that I just want to mention, I find to be that they actually discover that the weights, so if you have a weight of the, so if you have two kinds of weights, let's actually go up to my original drawing here. If you have, if you compare how fast or how far do the weights travel in optimization space, right? So you can, you can basically look at how far weights travel during optimization. So you take the full neural network here and you look at a parameter that ends up being in the winning hypothesis theta, theta zero and it goes to theta end, which let's say theta final. And you also look at parameters that don't end up in the winning hypothesis. Let's call this theta one to theta also final prime, not too good at labeling. And you look at how far they travel. 
You'll find that the weights that end up in the winning hypothesis, they during optimization, they travel much further in optimization space, than weights that are not in the winning hypothesis, right? They just stay round much more. So it's not that the kind of good network is already contained in initialization. It's much more than the good network lends itself very favorably to be initialized by SGD, right? Because it travels further. It means as G has a bigger, bigger pull on it, right? I think there is a lot of things that are yet to be explored in this space. And I think this paper is a very cool contribution to our understanding of how neural networks work. All right, I invite you to check out all the experiments. They do a very thorough job. And with that, I say bye-bye. | [{"start": 0.0, "end": 5.16, "text": " Hi there. Today we're looking at the lottery ticket hypothesis finding sparse"}, {"start": 5.16, "end": 12.52, "text": " trainable neural networks by Jonathan Frankel and Michael Carben. So this paper is"}, {"start": 12.52, "end": 20.16, "text": " sort of an empirical paper into what makes neural networks train successfully."}, {"start": 20.16, "end": 25.88, "text": " And it comes out of the literature of pruning. So they say neural network"}, {"start": 25.88, "end": 31.2, "text": " pruning techniques, right? They have been around for a while. They can reduce the"}, {"start": 31.2, "end": 37.68, "text": " parameter counts of trained networks by over 90% decreasing storage requirements"}, {"start": 37.68, "end": 42.56, "text": " and improving computational performance or inference without compromising"}, {"start": 42.56, "end": 51.08, "text": " accuracy, right? So what does this mean? If you have a neural network, let's say"}, {"start": 51.08, "end": 58.62, "text": " you just have three nodes each layer. You have two layers here. You have a"}, {"start": 58.62, "end": 64.88, "text": " neural network here. If you have a fully connected neural network, every node is"}, {"start": 64.88, "end": 70.0, "text": " going to be connected with every node in the next layer, right? And these are"}, {"start": 70.0, "end": 75.96, "text": " these connections are your weights, your theta's. And you're going to train them,"}, {"start": 75.96, "end": 85.39999999999999, "text": " which means you have a, you have a number of steps in this direction. And let's say"}, {"start": 85.39999999999999, "end": 93.44, "text": " you have a test set accuracy, right? Here. So here is steps. You're going to train"}, {"start": 93.44, "end": 100.24, "text": " them. And if you train them, your accuracy will reach a certain point, right?"}, {"start": 100.24, "end": 105.11999999999999, "text": " I'm just going to draw the end point here. Let's say you reach a 90% test"}, {"start": 105.12, "end": 110.36, "text": " accuracy. So your network generalizes pretty well. That's pretty good. So people"}, {"start": 110.36, "end": 115.76, "text": " have been wondering, this, these networks, they require quite a lot of storage."}, {"start": 115.76, "end": 119.68, "text": " You know, this is nine connections right here. So three times three. And this is"}, {"start": 119.68, "end": 126.28, "text": " also nine connections. Can we make it smaller, but still retain the accuracy? And"}, {"start": 126.28, "end": 130.72, "text": " this is where pruning comes in. So with pruning, people would go and after you"}, {"start": 130.72, "end": 137.48, "text": " train them. So the first step is train the full network, right? 
And then the second"}, {"start": 137.48, "end": 146.72, "text": " step is prune. Now when you prune, you basically select among the weights that you"}, {"start": 146.72, "end": 155.44, "text": " have, that you have trained, you select the best ones in some form or another. In"}, {"start": 155.44, "end": 160.32, "text": " this case, people just select the ones with the largest magnitudes. But there"}, {"start": 160.32, "end": 165.35999999999999, "text": " are multiple techniques to do this. And this is very related to things like quantization"}, {"start": 165.35999999999999, "end": 171.76, "text": " or distillation. So with pruning, you just leave away some of the weights or most of the"}, {"start": 171.76, "end": 179.79999999999998, "text": " weights. And you hope that you still retain a pretty good accuracy right here, right?"}, {"start": 179.79999999999998, "end": 187.92, "text": " Sorry, actually, we don't need these steps thing. So you leave away weights and you retain"}, {"start": 187.92, "end": 192.32, "text": " a good accuracy. So pruning methods have been deployed successfully to make networks"}, {"start": 192.32, "end": 198.0, "text": " useless space or be faster to evaluate because of course with less numbers, you need to"}, {"start": 198.0, "end": 206.11999999999998, "text": " do less calculations, right? So this paper builds on top of this. And it basically says,"}, {"start": 206.11999999999998, "end": 213.64, "text": " all right, if if we do the following, if if we now take this network that we identified"}, {"start": 213.64, "end": 225.64, "text": " after training and we, we just take this network and we train it from the beginning, only"}, {"start": 225.64, "end": 239.16, "text": " this sub network, right? So three is retrain. Then it will also perform pretty well or even"}, {"start": 239.16, "end": 245.72, "text": " better under one condition, right? So if you only train this thing, it will perform"}, {"start": 245.72, "end": 254.4, "text": " well under one condition. And the condition is that you transfer over the initial weights."}, {"start": 254.4, "end": 260.4, "text": " So right, the question is, can we train just the small network from the beginning so that"}, {"start": 260.4, "end": 266.72, "text": " we don't have to train the big network? Right? And the paper identifies that this works"}, {"start": 266.72, "end": 276.72, "text": " if your initial weights, theta zero of the small network are equal to the initial weights"}, {"start": 276.72, "end": 283.52000000000004, "text": " of the large network, right? Just so just the ones where you port them over. But basically,"}, {"start": 283.52000000000004, "end": 290.04, "text": " the short answer is no. And the reason is if you only want to train this small network,"}, {"start": 290.04, "end": 298.08000000000004, "text": " you need to know the good initialization of these of these weights all here. And the good"}, {"start": 298.08000000000004, "end": 305.0, "text": " initialization you only know after you've trained the large network and actually identified"}, {"start": 305.0, "end": 309.40000000000003, "text": " which of these connections makes sense. So you can just take a smaller network from the"}, {"start": 309.40000000000003, "end": 314.76, "text": " beginning, you have to train the larger one, then you know which weights and which initializations"}, {"start": 314.76, "end": 321.0, "text": " make sense. So this is the winning lottery ticket hypothesis, basically. 
It states and"}, {"start": 321.0, "end": 330.32, "text": " we can read it out in full. The lottery ticket hypothesis is a randomly initialized dense"}, {"start": 330.32, "end": 337.24, "text": " neural network contains a sub network that is initialized such that when trained in isolation,"}, {"start": 337.24, "end": 343.48, "text": " it can match the test accuracy of the original network after training for at most the same"}, {"start": 343.48, "end": 351.0, "text": " number of iterations, right? Now the important part here is that it contains a sub network,"}, {"start": 351.0, "end": 359.88, "text": " right? That is initialized such that when trained in isolation. So two things are important."}, {"start": 359.88, "end": 369.28000000000003, "text": " It is important. The structure of the network of the sub network, but it is also important."}, {"start": 369.28, "end": 377.67999999999995, "text": " What are the initialization of the connections? So the paper kind of hints at the neural networks"}, {"start": 377.67999999999995, "end": 383.84, "text": " work at all. And the reason why neural networks work is because we've often thought of"}, {"start": 383.84, "end": 388.4, "text": " pain neural networks have so many parameters. How can they even generalize? The reason is"}, {"start": 388.4, "end": 395.23999999999995, "text": " the following. If we have a neural network, we throw so many parameters at it, some of"}, {"start": 395.24, "end": 400.88, "text": " the parameters, one subset of the parameters, namely the red ones here, are going to be"}, {"start": 400.88, "end": 409.84000000000003, "text": " initialized in such a way, in such a beneficial way that training will perform, will make"}, {"start": 409.84000000000003, "end": 421.12, "text": " them the network perform well, right? So it's initialization plus SGD on that sub network,"}, {"start": 421.12, "end": 426.88, "text": " right? So it is actually only a very small sub network that is responsible for the performance"}, {"start": 426.88, "end": 435.52, "text": " of the neural network, but that sub network needs to be initialized at the correct position."}, {"start": 435.52, "end": 443.44, "text": " And by over parameterizing these neural networks so much, we actually give it combinatorically"}, {"start": 443.44, "end": 449.28000000000003, "text": " many sub networks to choose from where the initialization could be well. So because of"}, {"start": 449.28, "end": 455.96, "text": " this combinatorics, it means that if we over parameterize by some margin, then there's"}, {"start": 455.96, "end": 462.52, "text": " almost guaranteed to be a good sub network in there that can then perform well, right?"}, {"start": 462.52, "end": 469.03999999999996, "text": " So I hope this makes sense. It is basically, it is not a way, it is not a magic thing where"}, {"start": 469.03999999999996, "end": 475.15999999999997, "text": " you now we can, we now can train the smaller networks. It is, it is an explanation of why"}, {"start": 475.16, "end": 481.32000000000005, "text": " the over parameterization in neural networks makes sense. Because by over parameterizing,"}, {"start": 481.32000000000005, "end": 489.84000000000003, "text": " we allow the neural networks to combinator, to exploit the combinatorics to find a good,"}, {"start": 489.84000000000003, "end": 495.8, "text": " well initialized sub network that will perform well. 
And the evidence for this is exactly"}, {"start": 495.8, "end": 504.16, "text": " the fact that if we transfer over the sub network, it by itself will reach the same performance"}, {"start": 504.16, "end": 511.0, "text": " or actually exceed the performance. But only if we initialize it at the same point as it"}, {"start": 511.0, "end": 520.24, "text": " was initialized in the original network. So here is how these sub networks are identified."}, {"start": 520.24, "end": 526.32, "text": " We've already hinted at that, but here is how the paper does it, right? So it says identifying"}, {"start": 526.32, "end": 531.28, "text": " winning tickets. First, randomly initialize a neural network. This is the full neural network,"}, {"start": 531.28, "end": 537.8, "text": " right? Then train the network for j iterations arriving at some parameters, right? These"}, {"start": 537.8, "end": 547.4, "text": " are the trained parameters. Perune, p percent of the parameters, right? So of these parameters,"}, {"start": 547.4, "end": 553.88, "text": " prune some, right? And this is for in order to know which ones you prune, you need to have"}, {"start": 553.88, "end": 559.92, "text": " first the trained, the full neural network, right? So this is the catch here. You need to"}, {"start": 559.92, "end": 565.9599999999999, "text": " train the full neural network to know which ones you must prune. And thereby you create a"}, {"start": 565.9599999999999, "end": 574.16, "text": " mask M, right? And then they say reset the remaining parameters to their value in theta zero."}, {"start": 574.16, "end": 578.7199999999999, "text": " Actually, you don't need to say remaining. You can just say reset the parameters to their"}, {"start": 578.7199999999999, "end": 586.0, "text": " values in theta zero. Now this is also important. This is the same theta zero as it was at the"}, {"start": 586.0, "end": 591.92, "text": " beginning of the training, right? So you need to actually set them back to those exact values."}, {"start": 591.92, "end": 600.16, "text": " And thereby you create the winning ticket. This, okay, actually, if you just want to end up"}, {"start": 600.16, "end": 606.68, "text": " with a trained network, then you then this, this, this, this remaining thing here is important."}, {"start": 606.68, "end": 612.52, "text": " But if you then want to retrain, you can only, you can set everything back and only train"}, {"start": 612.52, "end": 619.72, "text": " the masked version of the network, right? And they, they say this, this will identify these"}, {"start": 619.72, "end": 624.64, "text": " winning tickets and it will actually work better if you don't do this in what they call one"}, {"start": 624.64, "end": 633.16, "text": " shot. But if you do this iterative pruning, that means it repeatedly trains prunes and resets"}, {"start": 633.16, "end": 639.16, "text": " the network over n rounds, each round prunes p to the one over n percent of the weights"}, {"start": 639.16, "end": 649.0799999999999, "text": " that survive the previous round. Now, why might that be? It might be and this is, this is,"}, {"start": 649.0799999999999, "end": 657.36, "text": " I think, some valid hypothesis that I myself put forth here. It might be that if you prune"}, {"start": 657.36, "end": 666.36, "text": " some of the weights, right? Let's say you prune this one and this one. What you'll do is"}, {"start": 666.36, "end": 672.36, "text": " you'll put the responsibility of these weights onto other weights. 
So maybe on this one and"}, {"start": 672.36, "end": 679.92, "text": " this one. So as we said, they prune by by looking at which weights are large. So let's say"}, {"start": 679.92, "end": 688.92, "text": " here we have the weights of the layer and these are the magnitudes of the of the weights."}, {"start": 688.92, "end": 699.04, "text": " Right. So, okay. So you would prune, let's say you only want to keep two of those around."}, {"start": 699.04, "end": 703.88, "text": " So you would prune this one and this one because these are pretty small. Right. Here's the"}, {"start": 703.88, "end": 711.9599999999999, "text": " magnitude and you would also prune that one. Right. If you just do this one shot and then"}, {"start": 711.9599999999999, "end": 717.76, "text": " you would retrain and maybe these weights would end up somewhat different. But if you do"}, {"start": 717.76, "end": 723.84, "text": " this in multiple rounds, right. Let's say you first prune one of them. You only prune"}, {"start": 723.84, "end": 731.0, "text": " the smallest one. Right. This one here and then you retrain and then your weights actually"}, {"start": 731.0, "end": 737.84, "text": " change. And all of the responsibility that this weight carried before is now transferred"}, {"start": 737.84, "end": 746.24, "text": " onto this. Right. So your new your new weights look like this and you prune another one like"}, {"start": 746.24, "end": 751.96, "text": " this. And again, all the responsibility of this would in my hypothetical example fall on"}, {"start": 751.96, "end": 757.16, "text": " this one. Right. And now if you prune a third one, you would actually prune this one because"}, {"start": 757.16, "end": 763.16, "text": " you realize, ah, this weight here in absence of these two other weights is actually important."}, {"start": 763.16, "end": 768.5600000000001, "text": " So you would prune this one as well. Right. So I think that is why this kind of iterative"}, {"start": 768.5600000000001, "end": 774.96, "text": " pruning method might work a bit better than the one shot pruning method that they say"}, {"start": 774.96, "end": 782.72, "text": " here. So they do a lot of empirical investigation. And I just want to highlight a very few of"}, {"start": 782.72, "end": 789.08, "text": " them. But so that you get the gist and then the paper goes into a lot of detail and a lot"}, {"start": 789.08, "end": 795.2800000000001, "text": " of different architectures that you can check out yourself. All right. So here we've a plot"}, {"start": 795.2800000000001, "end": 802.44, "text": " sorry that that deals with percent of weights remaining. So as you go to the right here,"}, {"start": 802.44, "end": 809.24, "text": " they drop on more and more weights and realize this is a log plot. Right. So if the dash lines"}, {"start": 809.24, "end": 815.8800000000001, "text": " here are random pruning, which means you just drop out a certain number of weights and then"}, {"start": 815.8800000000001, "end": 826.2800000000001, "text": " you you retrain. Right. And you can see that the dash line here, it it starts dropping and"}, {"start": 826.28, "end": 832.88, "text": " just becomes worse as you as you have less and less weights remaining, which is exactly"}, {"start": 832.88, "end": 837.92, "text": " what's expected, right. You prune the network, you make it smaller, you make it less performant"}, {"start": 837.92, "end": 845.3199999999999, "text": " and the more made weights you take away, the less performing it is. 
But interestingly enough,"}, {"start": 845.3199999999999, "end": 854.76, "text": " right. If you do this pruning that they suggest and then retrain with the correct initialization,"}, {"start": 854.76, "end": 860.92, "text": " which only do you retain the same level of accuracy for very long. You see here this is 2.9"}, {"start": 860.92, "end": 870.12, "text": " or 1.2% of weights remaining, but you actually go higher. Right. So you can see here when you"}, {"start": 870.12, "end": 876.16, "text": " have 16% of weights remaining, you there's actually a significant difference between the"}, {"start": 876.16, "end": 884.88, "text": " full network and the prune network. And that's only by simply training this winning hypothesis."}, {"start": 884.88, "end": 891.28, "text": " So this I find very, very fascinating. And again, this is not a magic bullet that you can"}, {"start": 891.28, "end": 898.16, "text": " do from the beginning, but it does give a clue that if you could train, sorry, if you could"}, {"start": 898.16, "end": 904.16, "text": " train these from the beginning, then then you might actually end up at a better point."}, {"start": 904.16, "end": 909.28, "text": " So it does actually give a practical application. Also you see they train faster. So the blue"}, {"start": 909.28, "end": 916.7199999999999, "text": " line here is the full network over the course of training. Sorry, this should be blue. So here"}, {"start": 916.7199999999999, "end": 921.4, "text": " is training iterations and this is test accuracy. So you see the full network does something"}, {"start": 921.4, "end": 929.24, "text": " like this. Now if you prune to 20% of the weights, actually train faster and you go higher."}, {"start": 929.24, "end": 937.52, "text": " And even if you have 7% of the weights, you go almost as high. So this is very interesting."}, {"start": 937.52, "end": 944.28, "text": " Only when you go to like 1.9% of the weights, does your performance degrade again and eventually"}, {"start": 944.28, "end": 954.6, "text": " actually go lower than the original network? So that is pretty, pretty, pretty cool, I think."}, {"start": 954.6, "end": 961.2, "text": " Now they do, as I said, they do a lot of investigation. And I think one of the, one of the main"}, {"start": 961.2, "end": 965.96, "text": " takeaways is that it is not only the structure of the winning hypothesis. So it's not only"}, {"start": 965.96, "end": 972.8000000000001, "text": " the structure of the sub network that makes it to be a winning hypothesis. It is actually"}, {"start": 972.8000000000001, "end": 981.0400000000001, "text": " the initialization. Here I want to show one of these plots. They have lots of plots. But"}, {"start": 981.04, "end": 988.1999999999999, "text": " you can see here, for example, sorry, this is from my own annotations. Again, this is"}, {"start": 988.1999999999999, "end": 995.8, "text": " percent of weights remaining and this is test accuracy at the final iteration. And if"}, {"start": 995.8, "end": 1001.76, "text": " we initialize the sub network at its original position, like this method suggests, you see"}, {"start": 1001.76, "end": 1009.0799999999999, "text": " we first increase the accuracy and then decrease it after a long time. If we take the same"}, {"start": 1009.08, "end": 1017.44, "text": " sub network, right, but we randomly re-initialize it, then it drops much faster and actually immediately"}, {"start": 1017.44, "end": 1025.2, "text": " drops. 
So it really is about not only the structure of the sub network, but about its initialization."}, {"start": 1025.2, "end": 1033.96, "text": " I think that is, that is the core of the hypothesis here. A very interesting related finding that"}, {"start": 1033.96, "end": 1040.16, "text": " I just want to mention, I find to be that they actually discover that the weights, so if"}, {"start": 1040.16, "end": 1045.6000000000001, "text": " you have a weight of the, so if you have two kinds of weights, let's actually go up to"}, {"start": 1045.6000000000001, "end": 1054.1200000000001, "text": " my original drawing here. If you have, if you compare how fast or how far do the weights"}, {"start": 1054.1200000000001, "end": 1061.16, "text": " travel in optimization space, right? So you can, you can basically look at how far weights"}, {"start": 1061.16, "end": 1068.16, "text": " travel during optimization. So you take the full neural network here and you look at a parameter"}, {"start": 1068.16, "end": 1078.16, "text": " that ends up being in the winning hypothesis theta, theta zero and it goes to theta end,"}, {"start": 1078.16, "end": 1084.6000000000001, "text": " which let's say theta final. And you also look at parameters that don't end up in the"}, {"start": 1084.6, "end": 1093.8799999999999, "text": " winning hypothesis. Let's call this theta one to theta also final prime, not too good at labeling."}, {"start": 1093.8799999999999, "end": 1099.6799999999998, "text": " And you look at how far they travel. You'll find that the weights that end up in the winning"}, {"start": 1099.6799999999998, "end": 1106.56, "text": " hypothesis, they during optimization, they travel much further in optimization space,"}, {"start": 1106.56, "end": 1112.48, "text": " than weights that are not in the winning hypothesis, right? They just stay round much more. So"}, {"start": 1112.48, "end": 1118.08, "text": " it's not that the kind of good network is already contained in initialization. It's"}, {"start": 1118.08, "end": 1129.48, "text": " much more than the good network lends itself very favorably to be initialized by SGD,"}, {"start": 1129.48, "end": 1136.88, "text": " right? Because it travels further. It means as G has a bigger, bigger pull on it, right?"}, {"start": 1136.88, "end": 1142.8000000000002, "text": " I think there is a lot of things that are yet to be explored in this space. And I think"}, {"start": 1142.8000000000002, "end": 1148.48, "text": " this paper is a very cool contribution to our understanding of how neural networks work."}, {"start": 1148.48, "end": 1152.6000000000001, "text": " All right, I invite you to check out all the experiments. They do a very thorough job."}, {"start": 1152.6, "end": 1182.56, "text": " And with that, I say bye-bye."}] |
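The winning-ticket procedure described in the transcript above (train the dense network, prune the smallest-magnitude weights, reset the survivors to their original initialization theta_0, repeat) can be sketched in a few lines. This is an illustrative outline under stated assumptions, not the authors' code: `train` stands in for whatever "train the masked network for j iterations" loop you actually use, weights are plain arrays keyed by layer name, and the per-round pruning rate is left as a simple parameter rather than the paper's exact p^(1/n) schedule.

```python
import numpy as np

def find_winning_ticket(theta0, train, prune_per_round=0.2, rounds=5):
    """theta0: dict of layer name -> initial weights (the saved theta_0).
    train(masked_init, mask) -> trained weights; placeholder training loop."""
    mask = {k: np.ones_like(w, dtype=bool) for k, w in theta0.items()}
    for _ in range(rounds):
        # always restart from the ORIGINAL initialization, with pruned weights zeroed out
        trained = train({k: theta0[k] * mask[k] for k in theta0}, mask)
        for k, w in trained.items():
            alive = np.abs(w[mask[k]])               # magnitudes of surviving weights
            if alive.size == 0:
                continue
            cutoff = np.quantile(alive, prune_per_round)  # smallest-magnitude survivors go
            mask[k] &= np.abs(w) > cutoff
    # the winning ticket: the surviving sparse structure, reset to its initial values
    return {k: theta0[k] * mask[k] for k in theta0}, mask
```

Training only the returned masked initialization is exactly what the experiments in the video compare against the full dense network; re-initializing the same mask randomly is the control that loses the effect.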
Yannic Kilcher | https://www.youtube.com/watch?v=-0aM99dMu_4 | Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery | DDL is an auxiliary task in which an agent learns the distances between states visited within its episodes. These distances can then be used to improve the agent's policy learning procedure.
Paper: https://arxiv.org/abs/1907.08225
Blog: https://sites.google.com/view/dynamical-distance-learning/home
Abstract:
Reinforcement learning requires manual specification of a reward function to learn a task. While in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time-consuming or even infeasible unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. This shaping is difficult to specify by hand, particularly when the task is learned from raw observations, such as images. In this paper, we study how we can automatically learn dynamical distances: a measure of the expected number of time steps to reach a given goal state from any other state. These dynamical distances can be used to provide well-shaped reward functions for reaching new goals, making it possible to learn complex tasks efficiently. We show that dynamical distances can be used in a semi-supervised regime, where unsupervised interaction with the environment is used to learn the dynamical distances, while a small amount of preference supervision is used to determine the task goal, without any manually engineered reward function or goal examples. We evaluate our method both on a real-world robot and in simulation. We show that our method can learn to turn a valve with a real-world 9-DoF hand, using raw image observations and just ten preference labels, without any other supervision. Videos of the learned skills can be found on the project website: this https URL.
Authors: Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. If you look at this robot, this robot has learned to turn this valve by itself. Now, by itself isn't really correct, but it has learned it in a semi-supervised way, with only 10 human inputs along the entire learning trajectory. So only 10 times was there a true reward for this reinforcement learning procedure, and the rest is unsupervised discovery of this skill. And the paper we're going to look at today, and the technique by which this was achieved, is dynamical distance learning for semi-supervised and unsupervised skill discovery by Christian Hartikeinen, Xinjiang Gang, Thomas Harnoia, and Sergei Lavigne. So this is a technique for reinforcement learning. So they claim reinforcement learning requires manual specification of a reward function to learn a task. And they say, while in principle this reward function only needs to specify the task goal, in practice reinforcement learning can be very time consuming or even infeasible, unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome. So what does this mean? Let's look at it. So if we want the robot here to turn the valve to the right, right? Ideally, you simply want to say, so the robot is here, this is the start state, right? Ideally, you just want to say, I want this, right? I want the thing to turn to be at the right, so this is good. All of this, I don't want any of that, right? And the reinforcement learning algorithm, I mean this is enough, this is a reward function, right? All of this is zero, and this is one, right? This is a reward function. And in theory, if you apply any sort of reinforcement learning algorithm with any sort of guarantee, this should get you there. But of course, we all know that it's not that easy, right? There is basically an exploration bottleneck where your robot has these three digits and lots of joints to move around. And the probability that by itself, it discovered that it needs to do this here and get this reward is very, very slim. So what you want to do is in your reward function that you're providing to the robot, you would want to say, okay, so this here, I see the blue thing is a bit more turned, so I'm maybe going to give this a 0.1, and then here it's a bit more turned, so maybe this is 0.2, and this I really like 0.3, here is 0.6, maybe, because it's even more, right? And then one at the end, right? So this is what they would call a smooth gradient in the reward function, where it's kind of the reward function ramps up until the goal is reached. But oftentimes this isn't really possible, because if you already knew how exactly to do the task, which then you could, you can only shape the reward function truly if you know how to perform the task in the first hand, and then why exactly do you do reinforcement learning, except for academic exercise? So the issue this paper has is clear, right? What they want to say is, let's assume that your reward function is actually pretty bad. Can we provide artificially a way that this discovery of these, of these, of these, what they call of these new skills is facilitated as if the reward function had some sort of a gradient. So that's the, the outset. Let's actually go back to the, to this for a second, and they have these mazes as, as a, as a kind of an example. So if you look at these mazes, what we want to keep in mind is, let's actually draw this over here. So let's say you have one of these mazes, right? And always there is a, there is a start state. 
So you're here, and then there is a goal state, let's say over here. You can move up, down, left or right, and the task is to reach the goal. But if the reward function is simply that you get a reward of one when you reach the goal and a reward of zero otherwise, then all the agent can do is explore around until it reaches the goal. Now, a lot of reinforcement learning algorithms, for example Q-learning or policy gradient methods, have some sort of random exploration element: in the absence of knowing what to do, they just wander around, up, up, right, left, right, left, down, that doesn't work, okay, down, left, down, and then up again, and so on. This method takes issue with that, and it says: okay, while the agent is doing its thing, trying to reach the goal, what we can do is learn a distance function between states. We'll reduce the problem for now and just say the task is always to reach the goal state in the smallest number of steps. So let's say the agent does something: it goes here, here, here, here, and then here. That's one rollout of the policy, and then it crashes into a wall, which is bad, so it gets a negative reward. But in addition to that, it has visited all of these intermediate states in between, and this paper wants us to learn a distance function between those states. So this distance function, let's call it d, learns how far apart two states are. You can ask it: this state here, call it state A, and this state here, state B, how far are those away from each other? This is not super well defined yet, but you want to say how far they are away for this particular agent. The agent has some policy pi, which is what it used to create this trajectory, and you ask: under policy pi, how far away are states A and B? And "how far away" is simply going to be the number of steps it takes the agent to go from A to B; in this case, that would be two. You can do this between any two states along the trajectory, and you can also start from this other state and do every pair. So this distance function d actually has a pretty dense training signal, a wealth of information it can learn from. The policy pi, in this case, can't learn much, because it just got a reward of zero since it didn't reach the goal, but the distance function gets a much denser signal, because it can learn distances between any two visited states. Now, let's say we've explored a bunch; we've had many trajectories, some here, some to here, and sometimes we even reach the goal, so we learn the distances between all of the states. Now, if we had a perfect distance function, our task becomes very, very simple.
So, let's assume I'm here, where the green agent is, and I can either go up or down; let's say up leads to x and down leads to y. Which one should I choose? Without even asking my policy per se, I can ask the distance function two different things: what do you think is the distance from x to the goal, and what do you think is the distance from y to the goal? And the distance function, if it has been learned correctly, will tell you the distance from x to the goal is, say, eight steps, and the distance from y to the goal is ten steps. So if you had a good distance function, you could solve the task fairly easily. Now, this by itself isn't super interesting. You will quickly notice that if you are able to learn such a good distance function, especially to the goal state, then you might as well learn a good policy, because that means you've reached the goal a fair number of times. So the information-theoretic signal for d versus the signal for pi, if you just want to reach one fixed goal, seems to me to be the same. The paper tries to talk this up, I feel, but to me, if you are in the situation where you have one fixed goal and that's it, this doesn't seem much more interesting or beneficial than, say, just learning a value function, like you would in a standard value-based or actor-critic method. What is the difference between this and a value function? If the number of steps is your (negative) reward, so you want to reach the goal in the smallest number of steps, then learning a value function is essentially the same thing. The difference is that a value function simply takes a state s for a policy pi, while the distance function takes a state s and a goal state. For the value function the goal state is implicit: it implicitly assumes the goal is always the same. With the distance function, you can technically change your goal, and this is where it becomes interesting. So let's say you've explored, but you haven't quite reached the goal yet. Most of these RL algorithms have some notion of random exploration in order to reach the goal. What if you went from here to here and to here, and you learned the distances fairly well for the trajectories that you were able to produce, but you just haven't been able to go any further? What you can do is go to your replay buffer, your memory of everything you've done, and ask: which of these states has the furthest distance from my starting state? The answer will be: okay, this state here is the furthest. So now you can make this your goal; you just try to reach that state, and once you reach it, you explore from there, because it is the furthest away from your original starting state. That is kind of the frontier of what you know, so if you explore from there, you can probably go even further. Why? Because it is the furthest point you know. It might turn out that from there you can only go back, that's a possibility, but probably you can go even further, and you might reach this new state here.
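As a minimal sketch of this goal-proposal step, assuming we already have some learned distance estimate `dist_fn(a, b)` and a buffer of visited states (this is my own illustration of the idea, not code from the paper):

```python
def propose_frontier_goal(start_state, replay_states, dist_fn):
    """Pick the visited state that the learned distance function judges to be
    furthest from the start state and use it as the next goal to reach.
    dist_fn(a, b) is the learned estimate of how many steps it takes the
    current policy to get from state a to state b."""
    return max(replay_states, key=lambda s: dist_fn(start_state, s))
```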
And again, you ask your replay buffer, and it tells you this state here is the furthest so far. So you take this as your new goal, and now you're just trying to reach that and explore from there. This is extremely similar to an algorithm like Go-Explore, which I already made a video about, where the agent remembers what it did, always travels back to the furthest states it has seen so far, and then tries to go farther from there. So if you can learn a good distance function, that will help you explore the space, and eventually, of course, you might actually reach the goal state: you might go far enough into this maze, explore it enough, that you stumble over the goal state by itself. That is sort of the idea, and it can be used in a number of different ways. Instead of always going for the furthest state, what they did with the robot is they just let the algorithm explore; it explores and explores, as if building a tree of states, and at some point it asks the human which of these states is closest to what you want. The human says "this one", and okay, cool, this is now the new goal: the agent tries to reach that as reliably as possible and then explores from there. So in the case of the robot, the robot simply does some things, it explores in an unsupervised manner, and at some point you ask the human which of the things the robot has done you like the most; that becomes the new intermediate goal state, and the algorithm explores from there. That's the main gist of how you can use this. Now, the actual learning procedure is pretty simple. To learn the distance function, they put it quite formally: if you have two states that were visited one after the other in an episode, at time steps i and j respectively, you can define the distance between them as a discounted sum of a cost function from i to j. But ultimately they consider shortest-path problems, so the per-step cost simply becomes one, the discount becomes one, and the distance simply becomes j minus i: how many steps did it take to reach the state visited at time step j from the state visited at time step i. Then they simply train a parameterized function, for example a neural network, that learns to map a pair of states to how many steps it took to get from one to the other, and you do this by regression with a mean squared loss. As simple as that; that's how you learn the distance function. And then you can use the distance function in the ways we discussed: they provide the (negative) distance to the goal as the reward to improve the shortest-path policy, or you can do this in an unsupervised fashion where you keep proposing these far-away states as goals, or you can do it in the semi-supervised fashion. So there's a bunch of things that they did here.
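A minimal sketch of that regression, assuming states are small feature vectors and using a tiny PyTorch network (my own illustration of the objective described above, not the authors' implementation; the architecture and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM = 8  # assumed toy state dimensionality

dist_net = nn.Sequential(nn.Linear(2 * STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(dist_net.parameters(), lr=1e-3)

def distance_targets(trajectory):
    # From one rollout s_0 ... s_T, every ordered pair (s_i, s_j) with i < j
    # gives a regression target of j - i steps.
    return [(trajectory[i], trajectory[j], j - i)
            for i in range(len(trajectory)) for j in range(i + 1, len(trajectory))]

def train_step(s_i, s_j, steps):
    # s_i, s_j: (batch, STATE_DIM) tensors; steps: (batch,) tensor of j - i.
    pred = dist_net(torch.cat([s_i, s_j], dim=-1)).squeeze(-1)
    loss = F.mse_loss(pred, steps.float())
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def goal_reward(state, goal):
    # The learned negative distance to the current goal can then serve as a
    # dense reward for the policy.
    with torch.no_grad():
        return -dist_net(torch.cat([state, goal], dim=-1)).item()
```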
They have a bunch of videos of things that they trained. This one is from the semi-supervised setting, where the humans were simply selecting the hoppers that went further to the right. You can see that over time it hops to the right, with only very sparse human input. There is also an unsupervised video, where you simply let it run in an unsupervised fashion: it tries to discover states that are as far away as possible from its initial state, and you can see it actually learns to move to the right and to the left, because those are states that are very far from its original state. It's pretty cool that the unsupervised method turns out to discover such states. All right, so what to make of this? If this looks familiar to you, that's quite possible, because I have seen some version of this idea in many, many papers before, and they make some connections in their related work. Take, for example, universal value functions and the like, where it's also an unsupervised scheme: you select two states and say, agent, now try to go from here to here, just try that, and then you select two new states. You basically teach your agent to go between two states that you choose at random, and it's supposed to learn something about the environment in an unsupervised fashion, very similar to what we have here. A bunch of other things, like plain value functions, are also pretty similar, and I think there is a big connection to Go-Explore. So this has been around in one way or another, but possibly not in this specific formulation, and what I think is cool is that it's applied to this specific semi-supervised task.
So if I had to formulate a criticism to this method I would guess that it probably doesn't work when let's say the branching factor of the task is super high you see here you can you can only really turn the valve in one way or another of course the digits and the joints are they have degrees of freedom but if you think if the branching factor is super high right so from a given state here you can go in many many many different ways and then from each of those you can go in many many different ways right then the notion of something being far away right you go to this thing and use what's the farthest away right is almost meaningless because you have so much not explored right so if you have if you are three steps deep here right it will always tell you well this state here is the farthest away but you haven't explored these you know 15 directions here right so it might be that you actually miss so that you go so here's the goal and here's the start and you go a long way but you miss this obvious shortcut here because you always want to go along the longest path around so it seems like there is there there are probably requirements where this works well right but there right but but it appears that if if either the branching factor is super high or if there are maybe this this kind of loops in the game loops between states non obvious combinatorical things it might be somewhat even counterproductive sometimes not not sure about that but it seems to be very specific environments where this would work all right so this was my commentary I invite you to read the paper check it out and bye bye | [{"start": 0.0, "end": 8.0, "text": " Hi there. If you look at this robot, this robot has learned to turn this valve by itself."}, {"start": 8.0, "end": 14.0, "text": " Now, by itself isn't really correct, but it has learned it in a semi-supervised way,"}, {"start": 14.0, "end": 22.0, "text": " with only 10 human inputs along the entire learning trajectory. So only 10 times was there a true reward"}, {"start": 22.0, "end": 29.0, "text": " for this reinforcement learning procedure, and the rest is unsupervised discovery of this skill."}, {"start": 29.0, "end": 34.0, "text": " And the paper we're going to look at today, and the technique by which this was achieved,"}, {"start": 34.0, "end": 42.0, "text": " is dynamical distance learning for semi-supervised and unsupervised skill discovery by Christian Hartikeinen,"}, {"start": 42.0, "end": 51.0, "text": " Xinjiang Gang, Thomas Harnoia, and Sergei Lavigne. So this is a technique for reinforcement learning."}, {"start": 51.0, "end": 60.0, "text": " So they claim reinforcement learning requires manual specification of a reward function to learn a task."}, {"start": 60.0, "end": 67.0, "text": " And they say, while in principle this reward function only needs to specify the task goal,"}, {"start": 67.0, "end": 73.0, "text": " in practice reinforcement learning can be very time consuming or even infeasible,"}, {"start": 73.0, "end": 79.0, "text": " unless the reward function is shaped so as to provide a smooth gradient towards a successful outcome."}, {"start": 79.0, "end": 86.0, "text": " So what does this mean? Let's look at it. So if we want the robot here to turn the valve to the right,"}, {"start": 86.0, "end": 93.0, "text": " right? Ideally, you simply want to say, so the robot is here, this is the start state, right?"}, {"start": 93.0, "end": 100.0, "text": " Ideally, you just want to say, I want this, right? 
I want the thing to turn to be at the right,"}, {"start": 100.0, "end": 104.0, "text": " so this is good. All of this, I don't want any of that, right?"}, {"start": 104.0, "end": 111.0, "text": " And the reinforcement learning algorithm, I mean this is enough, this is a reward function,"}, {"start": 111.0, "end": 117.0, "text": " right? All of this is zero, and this is one, right? This is a reward function."}, {"start": 117.0, "end": 123.0, "text": " And in theory, if you apply any sort of reinforcement learning algorithm with any sort of guarantee,"}, {"start": 123.0, "end": 128.0, "text": " this should get you there. But of course, we all know that it's not that easy, right?"}, {"start": 128.0, "end": 137.0, "text": " There is basically an exploration bottleneck where your robot has these three digits"}, {"start": 137.0, "end": 144.0, "text": " and lots of joints to move around. And the probability that by itself,"}, {"start": 144.0, "end": 149.0, "text": " it discovered that it needs to do this here and get this reward is very, very slim."}, {"start": 149.0, "end": 154.0, "text": " So what you want to do is in your reward function that you're providing to the robot,"}, {"start": 154.0, "end": 162.0, "text": " you would want to say, okay, so this here, I see the blue thing is a bit more turned,"}, {"start": 162.0, "end": 167.0, "text": " so I'm maybe going to give this a 0.1, and then here it's a bit more turned,"}, {"start": 167.0, "end": 173.0, "text": " so maybe this is 0.2, and this I really like 0.3, here is 0.6, maybe,"}, {"start": 173.0, "end": 178.0, "text": " because it's even more, right? And then one at the end, right?"}, {"start": 178.0, "end": 184.0, "text": " So this is what they would call a smooth gradient in the reward function,"}, {"start": 184.0, "end": 189.0, "text": " where it's kind of the reward function ramps up until the goal is reached."}, {"start": 189.0, "end": 198.0, "text": " But oftentimes this isn't really possible, because if you already knew how exactly to do the task,"}, {"start": 198.0, "end": 204.0, "text": " which then you could, you can only shape the reward function truly if you know how to perform the task"}, {"start": 204.0, "end": 209.0, "text": " in the first hand, and then why exactly do you do reinforcement learning,"}, {"start": 209.0, "end": 216.0, "text": " except for academic exercise? So the issue this paper has is clear, right?"}, {"start": 216.0, "end": 222.0, "text": " What they want to say is, let's assume that your reward function is actually pretty bad."}, {"start": 222.0, "end": 229.0, "text": " Can we provide artificially a way that this discovery of these,"}, {"start": 229.0, "end": 238.0, "text": " of these, of these, what they call of these new skills is facilitated as if the reward function had some sort of a gradient."}, {"start": 238.0, "end": 244.0, "text": " So that's the, the outset. Let's actually go back to the, to this for a second,"}, {"start": 244.0, "end": 252.0, "text": " and they have these mazes as, as a, as a kind of an example."}, {"start": 252.0, "end": 260.0, "text": " So if you look at these mazes, what we want to keep in mind is, let's actually draw this over here."}, {"start": 260.0, "end": 270.0, "text": " So let's say you have one of these mazes, right? And always there is a, there is a start state."}, {"start": 270.0, "end": 275.0, "text": " So you're here, and then there is a goal state, right? 
Let's say over here."}, {"start": 275.0, "end": 283.0, "text": " And the task is, you, you, you, you, you can move up, down, left, right? And the task is to reach the goal, right?"}, {"start": 283.0, "end": 287.0, "text": " But if the reward function is simply that if you reach the goal, you get a reward of one,"}, {"start": 287.0, "end": 296.0, "text": " and otherwise you get a reward of zero, then all the agent can do is kind of explore around, right, until it reaches the goal."}, {"start": 296.0, "end": 302.0, "text": " Now, if you do random exploration, like a lot of, a lot of reinforcement learning algorithms,"}, {"start": 302.0, "end": 309.0, "text": " for example, Q learning or policy grading, they'll have some sort of a, just of a random exploration element,"}, {"start": 309.0, "end": 316.0, "text": " where they, if they don't, if they don't, absence of what they, of the, when they know what to do, they just kind of,"}, {"start": 316.0, "end": 322.0, "text": " uh, boogal around, like up, up, up, right, left, right, left, down, right, up,"}, {"start": 322.0, "end": 333.0, "text": " uh, that doesn't work, okay, down, down, uh, left, down, so it's sort of, and then up again, right, and then they just kind of wonk around."}, {"start": 333.0, "end": 344.0, "text": " Um, so this, this method takes issue with that, and it says, okay, while the agent is doing its thing, trying to reach the goal, right?"}, {"start": 344.0, "end": 353.0, "text": " What we can do is we can learn a distance function between states. Now, we'll reduce the problem for now, and just say,"}, {"start": 353.0, "end": 361.0, "text": " the task is always that the goal state is reached in the shortest amount of steps, right?"}, {"start": 361.0, "end": 370.0, "text": " So, let's say the, the agent does something, right? It goes here, here, here, here, and then here, right?"}, {"start": 370.0, "end": 375.0, "text": " Right? That's, that's one rollout of the policy, and then it crashes into all, okay, that's bad."}, {"start": 375.0, "end": 385.0, "text": " Um, so it gets a negative reward, right? But in addition to that, we can, we can learn, so it has visited all of these states here in between, right?"}, {"start": 385.0, "end": 394.0, "text": " Um, these are intermediate states. This paper wants us now to, to learn a distance function between the states."}, {"start": 394.0, "end": 403.0, "text": " So, this distance function, let's call it D. It learns how far two states are away."}, {"start": 403.0, "end": 414.0, "text": " So, it'll, you can, you can tell it, okay, this state here, let's call that state A, and this state here, state B, how far are those away?"}, {"start": 414.0, "end": 432.0, "text": " Now, this is not super well defined yet, but you want to say how far are they away for this agent here? So, the agent has maybe policy pi, like that's what it used to create this trajectory, under policy pi, how far away are states A and B?"}, {"start": 432.0, "end": 447.0, "text": " And how far away is simply going to be the amount of steps that it takes the agent to go from A to B. So, in this case, that would be two, right?"}, {"start": 447.0, "end": 475.0, "text": " So, the, the, and you can do this between any two states, right? This and this, this and this, the right here, here, these, all of these states, you can also start from this state, right? 
Let's do it in a different color and do every, so the, the, this distance function D can actually has a pretty tight reward signal, like a pretty wealth of information that it can learn these things from, right?"}, {"start": 475.0, "end": 496.0, "text": " So, the policy pi, in this case, can't learn much because it just got a reward of zero or something, because it didn't reach the goal, but the distance function has very, very big reward, or a big reward, it has a very denser word signal, where it can learn distances between two states, right?"}, {"start": 496.0, "end": 516.0, "text": " Now, let's say we've explored a bunch, right? A bunch, we've had many trajectories, some here, like to here, and then here, sometimes we even reach the goal, right? So, so sometimes we actually reach the goal, so we learn the, the distances between all of the, the states."}, {"start": 516.0, "end": 543.0, "text": " Now, if we had a perfect distance function, let's assume we have a perfect distance function, our task now becomes very, very simple. So, let's assume, let's assume I'm here, where the green agent is, and I have these, I can either go up or down, and let's go, let's say that's x and the down is y, right?"}, {"start": 543.0, "end": 554.0, "text": " Which one should I choose? Now, without even asking my policy, per se, what I can do is I can ask, hey, a distance function."}, {"start": 554.0, "end": 565.0, "text": " So, I can ask a distance function, two different things. So, first, let's do it like this."}, {"start": 565.0, "end": 587.0, "text": " So, what do you think of the distance between x to the goal, and what do you think of the distance from y to the goal? And the distance function, if it's learned correctly, it will tell you the distance of x to the goal is whatever, maybe you need eight steps, the distance of y to the goal is 10 steps, right?"}, {"start": 587.0, "end": 616.0, "text": " So, if you had a good distance function, then you could solve the task fairly, fairly easily. Now, this by itself isn't super interesting. You will quickly notice that if you are able to learn such a good distance function, especially with the goal state here, then you might as well learn a good policy because that means you've reached the goal a fair number of times, right?"}, {"start": 616.0, "end": 644.0, "text": " So, the kind of information theoretical signal of d versus the signal on pi, if you just want to reach the same goal, to me, it seems the same. This paper, it tries to talk this up, I feel, but to me, if you are in the situation where you have a fixed goal, and that's it, then this doesn't seem too interesting or to beneficial with"}, {"start": 644.0, "end": 670.0, "text": " compared to, let's say, just learning a value function, right? Like you would do it in a three-seater or something. The difference between this and a value function. So, if the number of steps is actually your reward, so your negative reward you want to reach the goal and the shortest amount of time, then learning a value function is the same."}, {"start": 670.0, "end": 689.0, "text": " The difference is, for a value function, the value function simply takes a state s, right, and a policy pi. While the distance function takes a state s and a goal state for the policy pi, the goal state for the value function is implicit, right?"}, {"start": 689.0, "end": 702.0, "text": " So, it implicitly has the goal state because you assume that the goal is always the same. 
With the distance function, you can technically change your goal, and this is where it becomes interesting."}, {"start": 702.0, "end": 720.0, "text": " So, let's say you've explored, but you haven't quite reached the goal yet, right? But we said, okay, most of these RL algorithms, they have some sort of some notion of random exploration, right, in order to reach the goal."}, {"start": 720.0, "end": 733.0, "text": " What if you went from here to here and to here and to here, and you learned the distances fairly well for the trajectories that you can do, but you just haven't been able to go any further."}, {"start": 733.0, "end": 746.0, "text": " What you can say is, you can go to your replay, but for right, your memory of everything you've done, and you can ask which of these states has the furthest distance from my starting state."}, {"start": 746.0, "end": 760.0, "text": " And the answer will be, okay, this state here has the furthest distance. So now what you can do is you can make this your goal, right? You can just try to reach that state, right?"}, {"start": 760.0, "end": 769.0, "text": " And once you reach the state, you can explore from that state, right, because this is the furthest away from your original starting state."}, {"start": 769.0, "end": 780.0, "text": " That probably means that, you know, if you, that's kind of the frontier of what you know. So if you explore from here, you can go even further. Notice what? Because it is the furthest that you know."}, {"start": 780.0, "end": 789.0, "text": " So it might turn out that from here, you can only go back, right? That's a possibility, but probably you could go even further, right?"}, {"start": 789.0, "end": 799.0, "text": " So then you go further and you might reach this state here, right? And again, you ask your replay buffer and tells you this state here is the furthest so far."}, {"start": 799.0, "end": 805.0, "text": " So you take this as your new goal, and now you're just trying to reach that and explore from here."}, {"start": 805.0, "end": 819.0, "text": " This is extremely similar to an algorithm like go explore that I already made a video about where it remembers what it did, and then it will always travel to the furthest states it has seen so far."}, {"start": 819.0, "end": 823.0, "text": " And then from there, try to go farther, right?"}, {"start": 823.0, "end": 837.0, "text": " So this, this, if you, if you can learn a good distance function here, that will help you in exploring the space and eventually, of course, you think you might actually reach this goal set."}, {"start": 837.0, "end": 847.0, "text": " So you might go far enough into in this maze, you might explore it enough such that you, you stumble over the goal state by itself, right?"}, {"start": 847.0, "end": 861.0, "text": " So this is, this is sort of the goal. This can be used in a number of different ways. Now instead of always going for the furthest what they did in the robot is they just let the algorithm explore, right?"}, {"start": 861.0, "end": 877.0, "text": " We explore explore explore if this is like a state tree, and then at some point it, it asked the human which one is closest to what you want, and then the human says this one, and then this, okay, cool."}, {"start": 877.0, "end": 891.0, "text": " This is now the new goal, right? So we'll try to reach this as much as possible and then explore from here, right? 
So this in the case of the robot, the robot simply just like does some things."}, {"start": 891.0, "end": 906.0, "text": " It explores in the unsupervised manner, and then at some point you ask the human which of these things that the robot has done you like the most, and then that becomes the new intermediate goal state and the algorithm explores from there."}, {"start": 906.0, "end": 924.0, "text": " Right? So that's the the main gist and how you can use this. Now the entire learning thing is actually pretty simple. So what they propose is simply to, to learn the distance function, they put it pretty formal here."}, {"start": 924.0, "end": 942.0, "text": " They say, okay, if you're two states that were visited after one another in an episode, then you can define the distance function as this sum from i to j if they were visited at time steps i and j respectively."}, {"start": 942.0, "end": 960.0, "text": " This discounted cost function across this, but ultimately they consider problems where it's shortest path problems. So the cost function simply becomes how many steps does it take you to reach to reach the goal. So the cost function."}, {"start": 960.0, "end": 982.0, "text": " So this becomes this, this becomes the identity, I guess you can you can set it to to one and this you can also set to one. So this simply becomes j minus i. How many steps does it take you to reach state state in time step j from the state you visited in time step i."}, {"start": 982.0, "end": 1003.0, "text": " And then they simply train a neural network or not even sure if it's a neural network, but you train a bunch of a parameterized function that learns to map the distance between these states to how many steps it took you from one to the other."}, {"start": 1003.0, "end": 1015.0, "text": " And you do this simply by having a by regressing so mean squared regression means squared loss regression."}, {"start": 1015.0, "end": 1030.0, "text": " So simple as that and that's how you learn the distance function and then you can use the distance function in the ways we discussed to either to improve your shortest path policy by giving it by providing it."}, {"start": 1030.0, "end": 1053.0, "text": " So what you want to do is you want to provide the distance function as the negative reward, right. So they say they they provide the distance function as a negative reward for this or you can do this in an unsupervised fashion where you always propose to throw this away goals or you can do this in the semi supervised fashion."}, {"start": 1053.0, "end": 1074.0, "text": " So there's a bunch of things that they did here. They have a bunch of videos of things that they trained this is from the human sorry from the semi supervised where the humans were simply selecting the hoppers that went further to the right."}, {"start": 1074.0, "end": 1098.0, "text": " You can see over time this hops to the right with very, very sparse input only. 
So this is semi supervised right and then it goes to the right and it also has an unsupervised video where you simply let it perform and it on an unsupervised fashion."}, {"start": 1098.0, "end": 1116.0, "text": " So it's a very supervised to discover states that are as far away as possible from its initial states and you can see it actually learns to move to the right and to the left because these are these reach states that are very far from its original state."}, {"start": 1116.0, "end": 1141.0, "text": " So that's it's pretty cool that it turns out that the unsupervised method will discover such states. All right. So what to make of this this if you recognize this already it's very possible because I had seen this some sort of this idea in many, many papers before."}, {"start": 1141.0, "end": 1169.0, "text": " So and they make some connections in their related work so if you know for example universal value functions sorry universal value estimation universal value functions and so on where basically it's also an unsupervised way where you always just you select two states you say this and this agent now try try to go from here to here right just try that."}, {"start": 1169.0, "end": 1195.0, "text": " And so it is and then you select two new states so you basically teach your agent to go between two states that you choose at random and it's supposed to in an unsupervised fashion learn something about the environment very similar to what we have here right also a bunch of other a bunch of other things like just pure value functions are also pretty similar."}, {"start": 1195.0, "end": 1214.0, "text": " I think to this go explore there's a big connection to go explore so this has been around in one way or the other but possibly not in this specific formulation and what I think is cool applied to this specific semi supervised task."}, {"start": 1214.0, "end": 1238.0, "text": " So if I had to formulate a criticism to this method I would guess that it probably doesn't work when let's say the branching factor of the task is super high you see here you can you can only really turn the valve in one way or another of course the digits and the joints are"}, {"start": 1238.0, "end": 1267.0, "text": " they have degrees of freedom but if you think if the branching factor is super high right so from a given state here you can go in many many many different ways and then from each of those you can go in many many different ways right then the notion of something being far away right you go to this thing and use what's the farthest away right is almost meaningless"}, {"start": 1267.0, "end": 1291.0, "text": " because you have so much not explored right so if you have if you are three steps deep here right it will always tell you well this state here is the farthest away but you haven't explored these you know 15 directions here right so it might be that you actually miss so that you"}, {"start": 1291.0, "end": 1310.0, "text": " go so here's the goal and here's the start and you go a long way but you miss this obvious shortcut here because you always want to go along the longest path around so it seems like there is there there are probably"}, {"start": 1310.0, "end": 1335.0, "text": " requirements where this works well right but there right but but it appears that if if either the branching factor is super high or if there are maybe this this kind of loops in the game loops between states non obvious combinatorical things it might be"}, {"start": 1335.0, "end": 1350.0, "text": " somewhat even 
counterproductive sometimes not not sure about that but it seems to be very specific environments where this would work all right so this was my commentary I invite you to"}, {"start": 1350.0, "end": 1365.0, "text": " read the paper check it out and bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=hg2Q_O5b9w4 | CURL: Contrastive Unsupervised Representations for Reinforcement Learning | Contrastive Learning has been an established method in NLP and Image classification. The authors show that with relatively minor adjustments, CL can be used to augment and improve RL dramatically.
Paper: https://arxiv.org/abs/2004.04136
Code: https://github.com/MishaLaskin/curl
Abstract:
We present CURL: Contrastive Unsupervised Representations for Reinforcement Learning. CURL extracts high-level features from raw pixels using contrastive learning and performs off-policy control on top of the extracted features. CURL outperforms prior pixel-based methods, both model-based and model-free, on complex tasks in the DeepMind Control Suite and Atari Games showing 2.8x and 1.6x performance gains respectively at the 100K interaction steps benchmark. On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency and performance of methods that use state-based features.
Authors: Aravind Srinivas, Michael Laskin, Pieter Abbeel
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're going to look at CURL: Contrastive Unsupervised Representations for Reinforcement Learning, by Aravind Srinivas, Michael Laskin and Pieter Abbeel. This is a general framework for unsupervised representation learning for RL. Let's untangle the title a little bit. It is for reinforcement learning; if you don't know what reinforcement learning is, I've done a bunch of videos on the RL framework. It is for general reinforcement learning, which means it can be paired with almost any RL algorithm out there, so we're not going to dive into specific RL algorithms today. It is unsupervised, which means it doesn't need any sort of labels, and it also doesn't need a reward signal, which is pretty cool because usually entire RL pipelines rely on some sort of reward or auxiliary reward. There is a training objective here, but it doesn't have to do with the RL reward. It is learning representations, which means it learns intermediate representations of the input data that are useful. And finally, it is contrastive, and that is the secret sauce in here: the training objective is what's called contrastive learning, and that's what we're going to spend most of our time on today, exploring what that means. Alright, so here's the general framework; you can see it down here. Reinforcement learning is just a box, meaning we don't care which RL algorithm you use; that's just what comes at the end. What comes at the beginning? The observation O. The observation in an RL algorithm is fundamental. If someone explains reinforcement learning to you, usually they'll say there is some kind of actor and some kind of environment, and the environment gives you an observation O, which is, let's say, an image. In this framework specifically, the examples they give are of image-based reinforcement learning. So take an Atari game where you have a little spaceship here and there are meteorites up here, and you need to shoot them, so there is a little shot here. This is the observation O, and then, as an actor, you have to come up with some sort of action. The actions here can be something like move to the left, move to the right, or press the button that does the shooting. You have to come up with an action somehow, given this observation, and then the environment gives you back a reward along with the next observation, like the next frame of the game. You then have to come up with another action in response to that, the environment gives you back another reward and the next observation, and so on. So what you want to do is find a mapping from observation to action such that your reward is as high as possible. This is the fundamental problem of RL. Usually what people do is take this mapping from observation to action to be some sort of parameterized function, and nowadays, of course, that's often a neural network: you're trying to learn, given the input observation, what output action you should take. You can think of the same thing here: you have this input observation up here.
And down here, after the reinforcement learning, the output is going to be an action. The function we talked about is usually implemented by, like I said, putting the observation into the RL framework, and the RL framework then learns this f-of-theta function to give you an action. Here you can see the pipeline is a bit different: we don't want to shove the observation in directly. What we put into the RL framework instead is this q thing. The q is supposed to be a representation of the observation, and a useful one. So if we think of this Atari game up here, what could be a useful representation? If I had to craft one by hand, how would I construct it? Keep in mind the goal is to have a representation of the observation that is more useful to the RL algorithm than just the raw pixels of the image. So if I had to craft a representation, let's say it has to be a vector. I would probably take the x and y coordinates of the little spaceship and put them in the vector; that's pretty useful. Then I would take the x and y coordinates of the meteorites that are around, say a maximum of two, so x, y, x, y. I would take the angle theta that my spaceship is pointing at, which should be pretty useful because if I shoot, I want to know where I'm shooting. And then maybe the x and y coordinates of the red shot that I fired, if there is one, and maybe its delta-x and delta-y, something like this. You can see that if I handcraft something like this, I can pretty much guarantee that putting this representation into the RL algorithm would give a better RL agent that learns faster than putting in the original observation, the pixel image of the game. Because, of course, in order to play the game well, you need to extract exactly this information: there is something like a spaceship, there are things like meteorites. These are all things the RL algorithm doesn't know per se and would have to learn from the pixels. But if I already give it the useful information, it can learn much faster. So if I handcraft a good representation, it's pretty easy for the RL algorithm to improve. Now we want a framework that comes up with a good representation automatically, so that it alleviates the reinforcement learning algorithm from having to learn a good representation. It is already burdened with learning what a good action is in any given situation; we want to relieve it of the burden of also extracting useful information from the observation space. So how do we do this? This q here is supposed to be exactly that: a good representation, but not one that we handcrafted, rather one obtained with a technique that can be employed pretty much everywhere. And the secret sauce here is this contrastive loss thing.
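To make the hand-crafted representation above concrete, here is a toy sketch; the `ship`, `meteors`, and `shot` objects and their fields are assumptions for illustration, not anything the paper defines:

```python
import numpy as np

def handcrafted_representation(ship, meteors, shot):
    # Toy hand-crafted state vector for the shooting game described above.
    feats = [ship.x, ship.y, ship.angle]          # spaceship position and heading
    for m in meteors[:2]:                         # up to two meteorites
        feats += [m.x, m.y]
    feats += [shot.x, shot.y, shot.dx, shot.dy] if shot else [0.0] * 4
    return np.array(feats, dtype=np.float32)
```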
This contrastive learning is the kind of magic thing that will give us good representations. So what is contrastive learning? I'm going to explain it for this kind of image-based reinforcement learning, or really just for image-based neural networks: how can we come up with a contrastive loss? You can see there's a two-pipeline thing going on here, this one and this one, and one of them is going to produce the good encoding. All right, let's check it out. Let's say we have the image from before; I'll draw it again, the little spaceship, the meteorites and the shot. What we need to do is produce three different things from it: an anchor, a positive sample, and negative samples; let's just go with one negative sample for now. The goal is to come up with a task where we produce our own labels. Since we're training an encoder, and the encoder is a parameterized neural network, we need some sort of loss function. So the goal is to come up with a method where we create our own labels for a task, but construct the task in such a way that the neural network has no choice but to learn something meaningful, even though we made the task up ourselves. I hope that's kind of clear. So how are we going to do this? Our method of choice here is going to be random cropping. Random cropping means I just take an image and crop a smaller piece from it, a view inside the image. I'm going to draw the same picture here a couple of times, bear with me, and with the negative sample I'm going to leave it empty for now: two meteorites, a shot, and so on. For the anchor, we're actually not going to random crop but center crop; we take the center of the image. The assumption is that if I center crop, I won't lose too much of the image, and I can actually make the crop big enough that almost everything of the image is somewhat contained in it. So this is going to be my anchor. The positive sample is going to be a random crop of the same image: I randomly select a section of the same size from that image, let's say up here. And the negative sample is going to be a random crop from a different image. The different image might be from the same game, but maybe there is a meteorite here and no shot because I didn't shoot, and I take a random crop from this, say here; let's put a meteorite there as well, just for fun. So these are going to be our three samples. And now the question is going to be: I give the anchor to the neural network, and I also give it these two crops, but nothing else, just whatever I cropped. And I ask the neural network: here is the anchor; which one of these two crops comes from the same image?
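As a rough sketch of this cropping scheme with NumPy (my own illustration following the description above; the crop size and whether the anchor is a center crop or just another random crop are implementation choices, not things I'm asserting about the paper):

```python
import numpy as np

def random_crop(img, size, rng):
    h, w = img.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return img[top:top + size, left:left + size]

def center_crop(img, size):
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def make_contrastive_samples(obs, other_obs, size=64, seed=0):
    # Anchor: center crop of obs; positive: random crop of the same obs;
    # negatives: random crops of other observations.
    rng = np.random.default_rng(seed)
    anchor = center_crop(obs, size)
    positive = random_crop(obs, size, rng)
    negatives = [random_crop(o, size, rng) for o in other_obs]
    return anchor, positive, negatives
```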
As a human, you look at this. If you just see the center crop, you see: okay, down here there's the tip of this thing, and there's the shot, and in relation to the shot there is a meteorite here. Then you look at the second crop and you say: okay, I don't see the spaceship, but there's the same relation here from the shot to the meteorite, and I can kind of see the meteorite up here, so this fits, and the spaceship must be down here somewhere. Then I go over to the other one and try to do the same thing: okay, here's a meteorite; it might be somewhere over here in the original image, that's possible, I just don't see it. But then there should be a shot somewhere here, further up, and I'm pretty sure of that because there's one over here, and I don't see it. So I am fairly sure, Mr. Task-Asker, that this image here is the positive sample while this image here is the negative sample. So that is the task you ask of the neural network: give it the anchor and ask which of these two comes from the same image. This is called contrastive learning. Now, it's a bit more complicated than that, because of course you encode these things using neural networks. You encode the anchor and the crops; the encoded anchor becomes the query, and the encoded crops become the keys, key one and key two. Then you always feed two of them into a bilinear product, which you can think of as an inner product in a learned, transformed space. So these two go into q^T W k_1, and these two go into q^T W k_2, where W is a learnable parameter, so you have some freedom. Then you basically take whichever of the two is highest: this one might be this high and this one only this high, and you say, aha, this one's higher, so this one must be the positive. And you train W specifically to make the positive ones higher and the negative ones lower. So this is effectively a supervised learning task, where these bilinear products are the logits and you pick the highest one in a softmax way, and they put this in the paper. If we go down here, the objective they use for the contrastive learning is this one: as you can see, it's a softmax, like in multiclass classification, of the bilinear product with the positive sample over the bilinear product with the positive sample plus the bilinear products with all of the negative samples; so in practice you use more than one negative sample. The only thing left is that the encoding, how you get from image space to this representation space, is going to be slightly different depending on whether you're encoding the anchor or what are called the keys, the things you compare to. And this is done for stability reasons.
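Written out, the objective described above is (up to notation) an InfoNCE-style loss with a bilinear similarity, where $q$ is the encoded anchor, $k_+$ the encoded positive key, $\{k_i\}$ the encoded negative keys, and $W$ the learned matrix:

$$\mathcal{L} = -\log \frac{\exp(q^\top W k_+)}{\exp(q^\top W k_+) + \sum_{i} \exp(q^\top W k_i)}$$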
You may know something like double Q-learning or similar tricks: sometimes, when you train against your own outputs, you end up using the same neural network twice in your setup and comparing its outputs to each other, which leads to instability. In our case we use the encoder multiple times for the same objective: on the two sides of this bilinear product we have two things encoded by the same network, and if we literally used the same network for both, that tends to be somewhat unstable. So we have two networks: one that encodes the query, which is f_q, and one that encodes the keys, f_k. Now, we don't really want to learn two neural networks, so there's a compromise: it is essentially the same network, but only f_q is the one we actually train, and at each step we transfer the parameters over to f_k as an exponentially moving average with the momentum encoder's parameters from the step before. So the momentum encoder's parameters are a moving average of the query encoder's parameters. That way you get kind of the best of both worlds: you don't have to learn a second network, yet your second network is not exactly the same as your first; it lags behind a bit, but it performs almost as well. I don't know if that makes sense, but it is the best I can explain it. So, to recap: take your observation; crop the anchor, which gives you the query; then random-crop your keys, from the same observation for the positive and from different observations for the negatives. Push these through the encoders for the query and for the keys respectively, and you end up with q, the encoded anchor, and the k's, the encoded positive and negative samples. Then you update the query encoder using the contrastive loss, and at the same time you feed q into the reinforcement learning algorithm and train it as usual: instead of having the observation directly as input, it now has q as input. That is it. The reinforcement learning works exactly the same, except that instead of the pixel input o, it now gets the representation q, and you don't have to change anything else about the RL algorithm. This whole thing can run in parallel or beforehand, off-policy or on-policy; it is fairly modular how you fit it in. It simply comes up with a good representation, and that is basically the deal here: you hope that the contrastive learning procedure gives you a representation q of the anchor that is a good basis for the RL algorithm.
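The momentum update described here is a standard exponential moving average; with a momentum coefficient $m$ (a hyperparameter, typically close to 1), the key-encoder parameters $\theta_k$ track the query-encoder parameters $\theta_q$ as:

$$\theta_k \leftarrow m\,\theta_k + (1 - m)\,\theta_q$$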
And it turns out that, at least in their experiments, it is. Here you see the same thing; they actually do something more, because in RL you usually deal with a stack of observations, not just a single observation. In Atari, for example, people always concatenate something like the four last frames. Their point is: if we have this stack and we do the crop augmentation, we need to do it consistently. We need to crop every single image of the stack at the same point for the query, and if we do a random crop, say down here, we need to apply that same random crop to the whole stack of images. That is the additional thing they introduce for RL with stacked time frames, but it's basically the same diagram as above. They then explain the RL algorithms they use and the details of their method. Here you can see that the anchor is a crop, the positive sample is a random crop from the same image, which would be up here somewhere while the anchor is cropped from the middle, and the negative is a random crop from a different image or a different stack of images. They have pseudocode for this, and it's pretty simple, so let's go through it quickly. You start off with f_q and f_k, the encoders for the query and the keys, initialized to be the same. Then you go through your data loader and do this random augmentation to get the query and the keys. I'm actually not sure whether the augmentation needs to be a center crop for the anchor, or whether two different random crops from the same image would work just as well; I guess that's a choice, and I don't know what's best. Then I forward the query through f_q and the keys through f_k. Then, importantly, I detach the keys so that I don't train f_k; I only want to train f_q. Then I compute the bilinear products with W, these are the logits, and put all of this into a cross-entropy loss. In the end, I update my f_q and my W, and I do the exponentially moving average update for my key encoder. They test on two different suites, the DeepMind control tasks and Atari, and they always evaluate at 100k time steps, because their big point is data efficiency: they claim they can learn useful representations with not much data. So the task is: how good are you after 100k time steps? You don't optimize until the end; you get 100k time steps and then the question is how good you are. CURL outperforms all of the baselines handily on the DeepMind control tasks, and it also outperforms a lot of the baselines on the Atari tasks. If you look at the results, it doesn't outperform everything, but, for example, here the red is CURL and the dashed gray is state SAC. The important thing to note about state SAC is that it has access to the underlying state, whereas CURL only works from pixels. So, as I said before, if I had to craft the useful representation by hand, state SAC basically has access to that, and you see that in many of the tasks CURL comes close to or matches state SAC, which is pretty impressive, especially if you compare with pixel SAC, which is the same algorithm but does not have access to the state, just the pixels, and often fails terribly.
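Putting the pseudocode walk-through above into one place, here is a minimal, self-contained sketch in PyTorch; the toy linear encoder, the feature dimension, the momentum value, and treating the other keys in the batch as the negatives are all assumptions for illustration, not the paper's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 50
f_q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, FEAT_DIM))  # toy query encoder
f_k = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, FEAT_DIM))  # toy key encoder
f_k.load_state_dict(f_q.state_dict())                                # start them off the same
W = nn.Parameter(0.01 * torch.randn(FEAT_DIM, FEAT_DIM))             # bilinear matrix
opt = torch.optim.Adam(list(f_q.parameters()) + [W], lr=1e-3)

def curl_update(x_q, x_k, m=0.95):
    """x_q, x_k: two differently cropped views of the same batch of observations,
    shape (B, 3, 64, 64). The positive key for sample i is key i; the other keys
    in the batch act as the negatives."""
    q = f_q(x_q)                          # queries: trained
    with torch.no_grad():
        k = f_k(x_k)                      # keys: momentum encoder, not trained (detached)
    logits = q @ W @ k.t()                # bilinear similarities, shape (B, B)
    labels = torch.arange(q.size(0))      # diagonal entries are the positives
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                 # exponential moving average of the key encoder
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.mul_(m).add_((1.0 - m) * p_q)
    return loss.item(), q.detach()        # q.detach() is what would be fed to the RL algorithm
```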
So that is pretty interesting to see. And even to me, it's pretty interesting to see that this kind of this kind of algorithm, this kind of self labeled algorithm comes up with such useful representations. All right. So I hope I have explained this satisfactorily. And um, check out the paper for more experiments, ablation studies, and just a general reading. And I wish you a good day. | [{"start": 0.0, "end": 7.08, "text": " Hi there. Today we're going to look at curl contrastive unsupervised representations for reinforcement"}, {"start": 7.08, "end": 15.4, "text": " learning by Ravind Shrinivas, Michael Laskin and Pieter Abil. So this is a general framework"}, {"start": 15.4, "end": 23.28, "text": " for unsupervised representation learning for RL. So let's untangle the title a little bit."}, {"start": 23.28, "end": 28.82, "text": " It is for reinforcement learning, which if you don't know what reinforcement learning"}, {"start": 28.82, "end": 35.16, "text": " is, I've done a bunch of videos on RL framework. So it's for general reinforcement learning."}, {"start": 35.16, "end": 42.64, "text": " That means it can be paired with almost any RL algorithm out there. So we're not going"}, {"start": 42.64, "end": 50.46, "text": " to dive into specific RL algorithms today. It is unsupervised, which means it doesn't"}, {"start": 50.46, "end": 57.96, "text": " need any sort of labels and it also doesn't need a reward signal for RL, which is pretty"}, {"start": 57.96, "end": 64.48, "text": " cool because usually the entire RL pipelines rely on some sort of a reward or auxiliary"}, {"start": 64.48, "end": 70.04, "text": " reward. So now there is a training objective here, but it doesn't have to do with the RL"}, {"start": 70.04, "end": 79.64, "text": " reward. And then in the it is learning representations, which means it learns it learns intermediate"}, {"start": 79.64, "end": 86.08, "text": " representations of the input data that is useful. And in the end it is contrastive. And"}, {"start": 86.08, "end": 90.96, "text": " that is the kind of secret sauce in here. The training objective is what's called contrastive"}, {"start": 90.96, "end": 95.8, "text": " learning. And that's what we're going to spend most of our time on today exploring what"}, {"start": 95.8, "end": 104.44, "text": " that means. Alright, so here's the general framework. You can see it down here. Sorry about"}, {"start": 104.44, "end": 114.32, "text": " that. So you can see that reinforcement learning is just a box, which is we don't care about"}, {"start": 114.32, "end": 120.52, "text": " the RL algorithm you use. That's just, you know, what comes at the end. What comes at the"}, {"start": 120.52, "end": 126.83999999999999, "text": " beginning? Oh, here is the observation. So the observation in an RL algorithm is kind"}, {"start": 126.83999999999999, "end": 132.48, "text": " of fundamental. Now, if someone explains RL to you, reinforcement learning, usually what"}, {"start": 132.48, "end": 138.0, "text": " they'll say is there is some kind of actor and there is some kind of environment. Right?"}, {"start": 138.0, "end": 146.68, "text": " And the environment will give you an observation, right? Observation. Oh, which is some sort"}, {"start": 146.68, "end": 154.6, "text": " of, let's say here is an image, right? So in this in this RL framework, specifically,"}, {"start": 154.6, "end": 159.48, "text": " the examples they give are of image-based reinforcement learning. 
So let's say the Atari"}, {"start": 159.48, "end": 168.2, "text": " game where you have this little spaceship here and there are meteorites up here. And you"}, {"start": 168.2, "end": 174.0, "text": " need to shoot them. So there is a little shot here, right? You need to shoot those meteorites."}, {"start": 174.0, "end": 179.67999999999998, "text": " Right? So this is the observation. Oh, and then as an age, as an actor, you have to come"}, {"start": 179.67999999999998, "end": 183.92, "text": " up with some sort of action. And the actions here can be something like move to the left,"}, {"start": 183.92, "end": 191.6, "text": " move to the right, press the button that does the shooting. So you have to come up with"}, {"start": 191.6, "end": 197.48, "text": " an action somehow, given this observation. And then the environment will give you back"}, {"start": 197.48, "end": 202.44, "text": " a reward along with the next observation, like the next frame of the game. And you're going"}, {"start": 202.44, "end": 206.88, "text": " to have to come up with another action in response to that. And the environment is going"}, {"start": 206.88, "end": 211.95999999999998, "text": " to give you back another reward and the next observation and so on. So what you want to"}, {"start": 211.96, "end": 221.12, "text": " do is you want to find a mapping from observation to action such that your reward is going to"}, {"start": 221.12, "end": 227.44, "text": " be as high as possible, right? This is the fundamental problem of RL. And usually what people"}, {"start": 227.44, "end": 233.72, "text": " do is they take this act, this mapping here from observation to action to be some sort"}, {"start": 233.72, "end": 240.64000000000001, "text": " of function, some sort of function that is parameterized maybe. And nowadays, of course,"}, {"start": 240.64, "end": 246.51999999999998, "text": " that's often a neural network. But you're trying to learn, given the input observation,"}, {"start": 246.51999999999998, "end": 251.83999999999997, "text": " what output action you need to do. And you can think of the same here. So you have this"}, {"start": 251.83999999999997, "end": 258.24, "text": " input observation up here. And down here, after the reinforcement learning, the output is"}, {"start": 258.24, "end": 265.64, "text": " going to be an action, right? And so this function we talked about up here is usually"}, {"start": 265.64, "end": 270.64, "text": " implemented, sorry, is usually implement like I said, put the observation into the RL"}, {"start": 270.64, "end": 276.76, "text": " framework. And then the RL framework learns this f of theta function to give you an action."}, {"start": 276.76, "end": 281.59999999999997, "text": " Now here you can see the pipeline is a bit different. We don't want to shove the observation"}, {"start": 281.59999999999997, "end": 289.32, "text": " in directly, right? We don't want the observation directly, but what we put into the RL framework"}, {"start": 289.32, "end": 296.68, "text": " is this Q thing. Now the Q is supposed to be a representation of the observation and"}, {"start": 296.68, "end": 303.24, "text": " a useful representation. So if we think of this of this game here, of this Atari game up"}, {"start": 303.24, "end": 309.76, "text": " here, what could be the what could be a useful representation? If if I had to craft one"}, {"start": 309.76, "end": 317.08, "text": " by hand, how would I construct a useful representation? 
Keep in mind the representation, the goal"}, {"start": 317.08, "end": 324.96, "text": " is to have a representation of the observation that is more useful to the RL algorithm than"}, {"start": 324.96, "end": 330.24, "text": " just the pure pixels of the image, right? So if I had to craft a representation, let's"}, {"start": 330.24, "end": 337.12, "text": " say it's a vector, right? Let's say our representations need to be vectors. What I would"}, {"start": 337.12, "end": 344.71999999999997, "text": " do is I would probably take the X and Y coordinates of the little spaceship, right? X and Y"}, {"start": 344.72, "end": 351.84000000000003, "text": " put it in the vector. That's pretty useful. And then I would probably take the X and Y coordinates"}, {"start": 351.84000000000003, "end": 358.52000000000004, "text": " of the meteorites that are around, right? Let's say there are maximum of two, so X, Y,"}, {"start": 358.52000000000004, "end": 370.0, "text": " X, Y here. I would probably take the angle, right? The angle where my spaceship is pointing"}, {"start": 370.0, "end": 375.48, "text": " to, that should be pretty useful because if I shoot, I want to know where I shoot, right?"}, {"start": 375.48, "end": 383.04, "text": " So theta here, and then probably maybe the X and Y coordinate of the of the shot here"}, {"start": 383.04, "end": 387.76, "text": " of the red shot that I fired if there is one, right? Also, I'm going to put that into"}, {"start": 387.76, "end": 396.84, "text": " my representation. So X and Y and maybe Delta X Delta Y, something like this, right?"}, {"start": 396.84, "end": 403.2, "text": " So you can see if I had to hand craft something, if I, I can pretty much guarantee that if"}, {"start": 403.2, "end": 409.91999999999996, "text": " I put in this representation right here into the RL algorithm, right? If I put this in"}, {"start": 409.91999999999996, "end": 417.4, "text": " here, it would turn out guarantee, it would turn out to be a better RL agent that learns"}, {"start": 417.4, "end": 425.4, "text": " faster than if I put in the original observation, which is the pixel image of the game, right?"}, {"start": 425.4, "end": 433.71999999999997, "text": " Because, of course, in order to play the game correctly, in order to play the game to win,"}, {"start": 433.71999999999997, "end": 439.15999999999997, "text": " you need to extract this information, right? You need to get, oh, there's something"}, {"start": 439.15999999999997, "end": 442.79999999999995, "text": " like a spaceship, there's something like meteorites, this is all things that the RL algorithm"}, {"start": 442.79999999999995, "end": 449.44, "text": " doesn't know per se and would have to learn from the pixels, right? But if I already give"}, {"start": 449.44, "end": 455.2, "text": " it the information that is useful, it can learn much faster, right? So you can see if I"}, {"start": 455.2, "end": 462.24, "text": " hand craft a good representation, it's pretty easy for the RL algorithm to improve. Now"}, {"start": 462.24, "end": 467.64, "text": " we want to come up with a framework that automatically comes up with a good representation,"}, {"start": 467.64, "end": 474.48, "text": " right? So it alleviates the RL algorithm here, the reinforcement learning. It alleviates"}, {"start": 474.48, "end": 483.28000000000003, "text": " that from having to learn a good representation, right? 
It already is burdened with learning"}, {"start": 483.28000000000003, "end": 489.20000000000005, "text": " the what a good action is in any given situation, right? We want to alleviate it of the burden"}, {"start": 489.20000000000005, "end": 500.56, "text": " to also extract useful information from the observation space, right? So how do we do this?"}, {"start": 500.56, "end": 507.36, "text": " This Q here is supposed to be exactly that, is supposed to be a good representation, but"}, {"start": 507.36, "end": 514.16, "text": " not one that we handcrafted, but a used with a technique that can be a point employed"}, {"start": 514.16, "end": 520.48, "text": " pretty much everywhere. And the goal, sorry, the secret sauce here is this contrastive"}, {"start": 520.48, "end": 528.4, "text": " loss thing, okay? This bombed contrastive learning is this kind of magic thing that will"}, {"start": 528.4, "end": 536.4, "text": " make us good representations. So what is contrastive learning? In this case, I'm going to explain"}, {"start": 536.4, "end": 548.16, "text": " it in this case for this kind of image based for image based reinforcement learning, but"}, {"start": 548.16, "end": 554.24, "text": " just for image based neural networks, how can we come up with a contrastive loss? So you"}, {"start": 554.24, "end": 560.4, "text": " see there's kind of a two pipeline thing going on here, there is like this and this and then"}, {"start": 560.4, "end": 571.6, "text": " one of them is going to be the good encoding. All right, so let's check it out. Let's say we have"}, {"start": 571.6, "end": 579.12, "text": " this image that we had before, right? I'll draw it again. This little spaceship,"}, {"start": 579.12, "end": 591.84, "text": " this and this and shall, right? And we want to, we want to do this. What we need to do is we need"}, {"start": 591.84, "end": 598.88, "text": " to produce three different things from it. We need to produce an anchor, what's called an anchor."}, {"start": 598.88, "end": 609.92, "text": " So we need to produce a positive sample, positive sample, and we need to produce negative samples."}, {"start": 610.64, "end": 616.48, "text": " Let's just go with one negative sample for now, right? So the goal is to come up with a task"}, {"start": 617.2, "end": 624.24, "text": " that where we produce our own labels, right? So we want, since we're training a encoder,"}, {"start": 624.24, "end": 628.4, "text": " and the encoder is a neural network that is parameterized, we need some sort of loss function."}, {"start": 628.4, "end": 634.56, "text": " So the goal is to come up with a method where we can create our own labels to a task,"}, {"start": 634.56, "end": 640.4, "text": " but that we construct the task in a way such that the neural network has no choice but learn something"}, {"start": 640.4, "end": 647.1999999999999, "text": " meaningful, even though we made the task up ourselves. All right, I hope this was kind of clear."}, {"start": 648.88, "end": 654.4, "text": " So how are we going to do this? Our method of choice here is going to be random cropping."}, {"start": 654.4, "end": 663.92, "text": " Now random cropping means that I just take an image, right? 
And I crop a piece from it."}, {"start": 663.92, "end": 668.4, "text": " So a smaller piece from the image, I just take a view inside the image."}, {"start": 669.4399999999999, "end": 676.3199999999999, "text": " So in case of the anchor, right, I'm going to draw the same picture here, bear with me."}, {"start": 676.3199999999999, "end": 680.72, "text": " I'm going to draw the same picture here a couple of times. This is also supposed to be the same"}, {"start": 680.72, "end": 687.0400000000001, "text": " picture. And with the negative sample, I'm just going to leave it empty for now."}, {"start": 688.72, "end": 692.0, "text": " Tada, two meteorites, two meteorites,"}, {"start": 693.84, "end": 700.88, "text": " shot, shot, right? So for the anchor, we're going to actually not random crop, but center crop,"}, {"start": 701.52, "end": 705.84, "text": " right? So we're going to take here the center image,"}, {"start": 705.84, "end": 715.44, "text": " right? So the assumption is kind of that if I center, if I center crop, I won't lose, you know,"}, {"start": 715.44, "end": 719.6800000000001, "text": " too much of the image. I can actually make the crop bigger such that almost everything of the"}, {"start": 719.6800000000001, "end": 727.52, "text": " image is somewhat contained in this. And the, yeah, all right. So this is going to be my anchor,"}, {"start": 727.52, "end": 735.2800000000001, "text": " then the positive sample is going to be a random crop of the same image. So I'm just randomly"}, {"start": 735.28, "end": 744.9599999999999, "text": " going to select a same size, same size section from that image. Let's say this is up right here."}, {"start": 745.68, "end": 752.9599999999999, "text": " All right. And the negative sample is going to be a random crop from a different image, right?"}, {"start": 752.9599999999999, "end": 760.8, "text": " So a different image might be from the same game, right? But might be there is a meteorite here,"}, {"start": 760.8, "end": 767.4399999999999, "text": " right? And there is no shot. I don't, I don't shoot. And I'm going to take a random crop from this."}, {"start": 767.4399999999999, "end": 775.4399999999999, "text": " Let's say I'm going to take a random crop here. Let's put a meteorite here as well, just for fun."}, {"start": 777.8399999999999, "end": 788.0799999999999, "text": " All right. So these are going to do be our three samples. And now the question is going to be,"}, {"start": 788.08, "end": 795.9200000000001, "text": " if I give the anchor to the neural network, I'm going to say I give you the anchor, right?"}, {"start": 796.88, "end": 803.2800000000001, "text": " But I'm also going to give you this and this thing. And I'm not going to give any of this."}, {"start": 803.2800000000001, "end": 807.9200000000001, "text": " I'm just going to give whatever I cropped, right?"}, {"start": 807.92, "end": 821.92, "text": " So just just these things. So I asked the neural network, I give you the anchor. Now which one of"}, {"start": 821.92, "end": 830.9599999999999, "text": " these two, which one of these two crops comes from the same image, right? So as human, you look at this."}, {"start": 830.9599999999999, "end": 836.9599999999999, "text": " And if you just see the center crop, you see, okay, down here, there's this, this tip of this thing."}, {"start": 836.96, "end": 841.84, "text": " And then there's the shot right in relation to the shot. 
There is a meteor here, right?"}, {"start": 842.48, "end": 847.6800000000001, "text": " And then you look at the second one and you say, okay, I don't see the spaceship, but there's"}, {"start": 847.6800000000001, "end": 854.08, "text": " the same relation here from the shot to the meteor. And I can kind of see the meteor up here."}, {"start": 854.08, "end": 860.72, "text": " And this also fits with that, right? And the spaceship must be, you know, down here somewhere."}, {"start": 861.52, "end": 866.08, "text": " And then I go over here and I try to do the same thing. It's okay, here's the meteor."}, {"start": 866.08, "end": 874.5600000000001, "text": " And you know, it might be, it might be in the original image, it might be over here somewhere."}, {"start": 874.5600000000001, "end": 880.48, "text": " So that's possible. I don't see it, right? That's possible. But then there should be,"}, {"start": 881.44, "end": 888.0, "text": " there should be a shot right somewhere here, or sorry, further up, oops, the,"}, {"start": 888.0, "end": 896.08, "text": " there should be a shot somewhere here, right? I'm pretty sure because there's, there's one over here."}, {"start": 896.08, "end": 903.84, "text": " And I don't see it, right? So I am fairly sure, Mr. Task Asker, that this image here is the"}, {"start": 903.84, "end": 911.68, "text": " positive sample while this image here is the negative sample, right? So this is the task that you"}, {"start": 911.68, "end": 919.3599999999999, "text": " ask of the neural network. Give it the anchor and you ask which one of the, of these two comes"}, {"start": 919.3599999999999, "end": 927.52, "text": " from the same image, right? This is called contrastive learning. Now, there is a bit more complicated"}, {"start": 927.52, "end": 935.76, "text": " in that, of course, what you do is you encode these things using neural networks. And then,"}, {"start": 935.76, "end": 943.68, "text": " so each of the things you encode, so the anchor, you're going to encode all of these things"}, {"start": 944.16, "end": 951.4399999999999, "text": " using neural network, right? And then this is what's going to become the query,"}, {"start": 952.24, "end": 958.4, "text": " and these are becoming the keys, so key one or key two. And then you're going to feed,"}, {"start": 959.2, "end": 965.12, "text": " eat always two of them into a bilinear product, right? And a bilinear product is simply,"}, {"start": 965.12, "end": 971.76, "text": " you can think of it as an inner product in a perturbed space that you can learn. So you're going to have"}, {"start": 971.76, "end": 983.36, "text": " this, have these two here, these go into QWK1, and then these two here, sorry, this and this,"}, {"start": 983.36, "end": 992.5600000000001, "text": " go into QWK2. Now, W here is a learnable parameter, right? So you have some freedom, and then"}, {"start": 992.56, "end": 999.92, "text": " you basically take whichever one of those two is highest, right? So this might be"}, {"start": 1001.1999999999999, "end": 1008.16, "text": " this high, and this might only be this high, and then you say, ah-ha, cool, this one's higher,"}, {"start": 1008.16, "end": 1014.0799999999999, "text": " so this one must be the positive, right? And you train the W specifically to make this higher,"}, {"start": 1014.0799999999999, "end": 1022.16, "text": " to make the positive ones higher, and the negative ones lower. So this is a supervised learning task,"}, {"start": 1022.16, "end": 1031.76, "text": " right? 
Where these things here are going to be the logits, or the, so they're inner products,"}, {"start": 1031.76, "end": 1038.3999999999999, "text": " but you basically then pick the one that is highest, as a in a softmax way, and they put this in"}, {"start": 1038.3999999999999, "end": 1046.6399999999999, "text": " the paper. So if we go down here, the objective that they use to do the contrastive learning is"}, {"start": 1046.64, "end": 1056.64, "text": " this one. So as you can see, it's a softmax, like in multiclass classification, of the inner product,"}, {"start": 1056.64, "end": 1064.0, "text": " the bilinear product with the positive samples, over the bilinear product with the positive samples,"}, {"start": 1064.0, "end": 1068.0, "text": " plus the bilinear product with all of the negative samples. So you're going to come up with more"}, {"start": 1068.0, "end": 1074.16, "text": " than one negative sample. All right, now the only thing left that we don't have here is that"}, {"start": 1074.16, "end": 1086.8000000000002, "text": " the encoding, how you're going to come from the image space to this space here, is going to be"}, {"start": 1086.8000000000002, "end": 1093.1200000000001, "text": " slightly different, and depending on whether you're talking on the anchor, or on the, what,"}, {"start": 1093.1200000000001, "end": 1099.28, "text": " what are called the keys, the things you compare to. And this is out of a kind of a stability"}, {"start": 1099.28, "end": 1105.12, "text": " criterion. You already, maybe you don't know like something like double Q learning, or things"}, {"start": 1105.12, "end": 1114.96, "text": " like this, it sometimes when you train with your own thing, so in Q learning, you're kind of"}, {"start": 1114.96, "end": 1121.92, "text": " trying to come up with an actor and a critic, or it's not the same thing, but you're kind of using"}, {"start": 1121.92, "end": 1133.52, "text": " the same neural network twice in your setup. And then you compare the outputs to each other,"}, {"start": 1133.52, "end": 1143.68, "text": " which isn't, you know, it leads to instability. So in our case, we took it three times here,"}, {"start": 1143.68, "end": 1150.4, "text": " or multiple times, especially for the same objective here, we have twice something that was encoded"}, {"start": 1150.4, "end": 1155.44, "text": " by the same neural network, and it is on the two sides of this bilinear product. So if we were to"}, {"start": 1155.44, "end": 1162.8000000000002, "text": " use the same neural network, that tends to be somewhat unstable. So we have different neural networks,"}, {"start": 1162.8000000000002, "end": 1171.52, "text": " one that will encode the query, which is this fq, and one which will encode the keys, sorry, fk."}, {"start": 1173.1200000000001, "end": 1177.8400000000001, "text": " Now we don't want to learn to neural networks, and that's why there's a bit of a compromise,"}, {"start": 1177.84, "end": 1188.0, "text": " where we say it is the same neural network, but, but, um, basically this one is the one we learn."}, {"start": 1188.0, "end": 1197.36, "text": " And then we always, every now and then, we transfer over the parameters to that one. And in fact,"}, {"start": 1197.36, "end": 1205.4399999999998, "text": " each step we transfer over the parameters and do an exponentially moving average with the parameters"}, {"start": 1205.44, "end": 1213.92, "text": " of this momentum encoder from the step before. 
So the momentum encoder parameters are a moving"}, {"start": 1213.92, "end": 1221.1200000000001, "text": " average of the parameters of the query encoder. And that is, so you get kind of get the best of both"}, {"start": 1221.1200000000001, "end": 1228.64, "text": " worlds. You don't have to learn a second neural network, but your second neural network is not the"}, {"start": 1228.64, "end": 1237.6000000000001, "text": " same as your first neural network, but it kind of lags behind, but it is also, it is also performing"}, {"start": 1237.6000000000001, "end": 1246.16, "text": " almost as well. So that is, um, I don't know if that makes sense, but it is the best I can explain it."}, {"start": 1246.16, "end": 1258.4, "text": " So to recap, take your observation. You encode it as a query, sorry, you crop, crop here for your"}, {"start": 1258.4, "end": 1267.52, "text": " anchor that gets your query. And then you random crop for your keys, right, into positive and"}, {"start": 1267.52, "end": 1273.8400000000001, "text": " negative samples, right? So you random crop from the same observation or from different"}, {"start": 1273.8400000000001, "end": 1280.48, "text": " observations, right? These become your positive and negative samples. Then you take, you take,"}, {"start": 1280.48, "end": 1288.88, "text": " you push these through your encoders for the query and for the keys respectively, you end up with"}, {"start": 1288.88, "end": 1294.56, "text": " the queue, which is the encoded anchor and the case, which are the encoded positive and negative"}, {"start": 1294.56, "end": 1306.24, "text": " samples. And then you learn, you update this encoder here using the contrastive loss, right?"}, {"start": 1306.24, "end": 1315.52, "text": " And at the same time, you feed the queue, you feed the queue here into the reinforcement learning"}, {"start": 1315.52, "end": 1323.84, "text": " algorithm. And you learn your reinforcement learning algorithm. Instead of having the observation"}, {"start": 1323.84, "end": 1332.8, "text": " directly as an input here, you now have the queue here as an input, right? That is it. The"}, {"start": 1332.8, "end": 1339.2, "text": " reinforcement learning works exactly the same, but except having the pixel input, oh, you now"}, {"start": 1339.2, "end": 1346.08, "text": " have the representation input queue. And you don't have to worry about anything else in terms of the"}, {"start": 1346.08, "end": 1352.6399999999999, "text": " reinforcement learning algorithm. It stays exactly the same, right? The this whole thing here can"}, {"start": 1352.6399999999999, "end": 1359.04, "text": " actually run either in parallel or you can think of it before, you can think of it off policy on"}, {"start": 1359.04, "end": 1365.6, "text": " policy. It is sort of modular how you how you fit this in. It simply comes up with good representation."}, {"start": 1365.6, "end": 1373.76, "text": " So that is that is basically the deal here, right? And you hope that the whole procedure of this"}, {"start": 1373.76, "end": 1382.6399999999999, "text": " contrastive learning then gives you good representation of this anchor thing here. If you encode that"}, {"start": 1382.64, "end": 1389.76, "text": " to the queue, you hope that this representation now is a good representation as a basis for the RL"}, {"start": 1389.76, "end": 1397.0400000000002, "text": " algorithm. And it turns out at least in their experiments, it is. 
So here you see the same thing."}, {"start": 1397.0400000000002, "end": 1403.92, "text": " They actually they do something more where in RL, you usually deal with a stack of observations,"}, {"start": 1403.92, "end": 1409.92, "text": " not just a single observation, because so for example, in Atari, people always concatenate"}, {"start": 1409.92, "end": 1416.48, "text": " something like four, the four last frames, right? And their point is, okay, if we have the stack here,"}, {"start": 1416.48, "end": 1422.0, "text": " if we do this data augmentation, you know, these crops, we kind of need to do them consistently,"}, {"start": 1422.0, "end": 1430.16, "text": " right? We need to crop every single image at the same point for the query. And also, if we do a"}, {"start": 1430.16, "end": 1436.72, "text": " random crop, let's say a random crop down here, we need to do this same random crop for all of the"}, {"start": 1436.72, "end": 1445.84, "text": " of the stack of images here, right? So that is kind of the additional thing they introduce with"}, {"start": 1445.84, "end": 1459.3600000000001, "text": " respect to RL that deals with with stacked time frames. But it's kind of the same diagram as above"}, {"start": 1459.36, "end": 1468.4799999999998, "text": " here. Right, so they explain the the RL algorithms they use and exactly their thing. And here you can"}, {"start": 1468.4799999999998, "end": 1475.6, "text": " see that the anchor is a crop. And the positive sample is a random crop from the same image. This"}, {"start": 1475.6, "end": 1481.28, "text": " would be up here somewhere. The anchor is cropped from the middle. And then the negative would be a"}, {"start": 1481.28, "end": 1488.08, "text": " random crop from a different image or a different stack of images. They have a pseudo code here where"}, {"start": 1488.08, "end": 1497.04, "text": " so it's pretty simple. We'll just go through it quickly. Right, you start off with FQ and FK."}, {"start": 1497.04, "end": 1504.0, "text": " These are the encoders for the query and keys. You start them off same. Then you go through your"}, {"start": 1504.0, "end": 1512.3999999999999, "text": " data loader. You do this random augmentation of query and your keys. And I don't not even sure if"}, {"start": 1512.4, "end": 1518.3200000000002, "text": " they're random augmentation needs actually to be a central crop for the anchor or just two different"}, {"start": 1519.2800000000002, "end": 1528.0800000000002, "text": " two different crops from the same image. That might be as well. So you know, just I guess it's a"}, {"start": 1528.0800000000002, "end": 1533.3600000000001, "text": " thing you could choose. I don't know what exactly is the best thing. All right, then I forward the"}, {"start": 1533.36, "end": 1543.36, "text": " query through the FQ and I forward the keys through the FK. Then important I detach this so I"}, {"start": 1543.36, "end": 1551.84, "text": " don't train. I don't want to train the FK. I only want to train the FQ. Right. Then I do the"}, {"start": 1551.84, "end": 1562.6399999999999, "text": " bilinear product here with the W. These are the bilinear product. And then I put this all of this"}, {"start": 1562.64, "end": 1574.0800000000002, "text": " into a cross entropy loss. Right. In the end, I update my FQ and my W and I do this exponentially"}, {"start": 1574.0800000000002, "end": 1582.4, "text": " moving average for my key encoder. And they test on two different things. 
They test on the deep"}, {"start": 1582.4, "end": 1593.8400000000001, "text": " mind control tasks and they always test 100k time steps. So their big point is data efficiency."}, {"start": 1594.96, "end": 1602.48, "text": " They claim they can use learn useful representations with not much data. So the task is here. How good"}, {"start": 1602.48, "end": 1610.72, "text": " are you at 100k step time steps? So you don't optimize until the end, you just you get 100k time steps"}, {"start": 1610.72, "end": 1618.96, "text": " and then the question is how how good are you? And the curl here outperforms all of the bass lines"}, {"start": 1620.0, "end": 1627.52, "text": " handily in the deep mind control tasks. And it also outperforms a lot of the bass lines in the"}, {"start": 1628.48, "end": 1639.28, "text": " Atari tasks. And it actually if you look at the results, it doesn't outperform everything. But for"}, {"start": 1639.28, "end": 1647.44, "text": " example, here the red is curl and the dashed gray is state SAC. Now state SAC, the important thing"}, {"start": 1647.44, "end": 1654.56, "text": " to note here is it has access to the state, whereas curl only works from pixels. Right. So what I"}, {"start": 1654.56, "end": 1660.16, "text": " said before, like if I had to craft the use for representation, basically state SAC has access to"}, {"start": 1660.16, "end": 1670.48, "text": " that. And you see that in many of the tasks that the curl comes close or performs equally well to"}, {"start": 1670.48, "end": 1679.68, "text": " the state SAC. Right. So that's pretty impressive. Especially if you look at pixel SAC, sorry,"}, {"start": 1680.4, "end": 1686.96, "text": " which is the same algorithm, but does not have access to the state, just the pixels. It often fails"}, {"start": 1686.96, "end": 1695.2, "text": " terribly. Right. So that is pretty interesting to see. And even to me, it's pretty interesting to see"}, {"start": 1695.2, "end": 1702.4, "text": " that this kind of this kind of algorithm, this kind of self labeled algorithm comes up with such"}, {"start": 1702.4, "end": 1712.16, "text": " useful representations. All right. So I hope I have explained this satisfactorily. And"}, {"start": 1712.16, "end": 1721.28, "text": " um, check out the paper for more experiments, ablation studies, and just a general reading. And I wish"}, {"start": 1721.28, "end": 1751.12, "text": " you a good day."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=gbG1X8Xq-T8 | Enhanced POET: Open-Ended RL through Unbounded Invention of Learning Challenges and their Solutions | The enhanced POET makes some substantial and well-crafted improvements over the original POET algorithm and excels at open-ended learning like no system before.
https://arxiv.org/abs/2003.08536
https://youtu.be/RX0sKDRq400
Abstract:
Creating open-ended algorithms, which generate their own never-ending stream of novel and appropriately challenging learning opportunities, could help to automate and accelerate progress in machine learning. A recent step in this direction is the Paired Open-Ended Trailblazer (POET), an algorithm that generates and solves its own challenges, and allows solutions to goal-switch between challenges to avoid local optima. However, the original POET was unable to demonstrate its full creative potential because of limitations of the algorithm itself and because of external issues including a limited problem space and lack of a universal progress measure. Importantly, both limitations pose impediments not only for POET, but for the pursuit of open-endedness in general. Here we introduce and empirically validate two new innovations to the original algorithm, as well as two external innovations designed to help elucidate its full potential. Together, these four advances enable the most open-ended algorithmic demonstration to date. The algorithmic innovations are (1) a domain-general measure of how meaningfully novel new challenges are, enabling the system to potentially create and solve interesting challenges endlessly, and (2) an efficient heuristic for determining when agents should goal-switch from one problem to another (helping open-ended search better scale). Outside the algorithm itself, to enable a more definitive demonstration of open-endedness, we introduce (3) a novel, more flexible way to encode environmental challenges, and (4) a generic measure of the extent to which a system continues to exhibit open-ended innovation. Enhanced POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved through other means.
Authors: Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeff Clune, Kenneth O. Stanley
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | There, before we jump into today's paper, I just want to give a shout out to Machine Learning Street Talk, where every week we talk about current or big trends or topics in machine learning. The first discussion that we launched is actually on today's paper, The Enhanced Poet. So if you like the following video, you might want to jump over to Machine Learning Street Talk and check out our discussion about it. It's very interesting. All right, have fun. Hi there. What you're seeing here are many different environments from a single run of a system that's called The Enhanced Poet. Last time we've taken a look at a system called Poet, and the Enhanced Poet is kind of an improvement over the original Poet fixing some of its shortcomings. And you see here that the agent is able to solve this very, very diverse set of environments. And the notable thing is this is from a single run of this algorithm. So one run will produce all these different environments and will produce agents that are able to solve all the different environments at the same time in parallel. So it's a population-based method. If you haven't seen the video I did on Poet, I suggest you go and see that now. This is simply an enhancement to it, and I expect people to know kind of what I'm talking about. All right, it's going to be a short video, but I think it is a good addendum to Poet. So it's the Enhanced Poet, Open-ended Reinforcement Learning through unbounded invention of learning challenges and their solutions by Ruiwang-Jui-Lemon Aditarwal, Jial-Chi, Julin Li, Jeff Kloon, and Kenneth O'Stanley. So we'll jump right in. They make a number of improvements to the original Poet, and I simply want to discuss the most important ones. So you know, they have a nice graphic down here of what happens in Poet. Poet builds up this tree of environments and to each environment it has an agent that it trains to solve that environment at the same time. So at the same time it will kind of start out here, it will generate offspring, it will continuously generate offspring, and then it will also continuously train agents in each environment that it produced in order to solve that environment. And it keeps doing that while producing more and more offspring. And then once in a while it does what is called a transfer. So that means that, for example, you see here, the offspring produced here from this environment. You kind of see that the lineage here kind of focuses on squiggly environments, right? You see that there's a bit of squiggle here and a bit of a squiggle here. And then the offspring, all of a sudden, is a bit more smooth but has this little step here. And then this offspring of this environment has this large step here that agents that come from here have kind of been optimized to solve the squigglyness problem. But here, over here, this lineage has specified or specialized more and more in kind of like these kind of large drops or steep hills. So the agent that was trained over here was found to be very effective in this environment and therefore can be transferred. So this kind of population branching out into the different trees and then transferring solutions between the parts of the trees. That's what makes poet very, very powerful mechanism to solve these kind of tasks. All right. So how does this improve? Now, the first thing that poet does is it generates these environments and it always wants to generate new environments. So it always generates offspring to each environment. 
So let's say we are here, it will generate offspring to each environment here, right? Each that we have, let's see, we have only seen so far. And then it only picks the most novel ones, the ones that are most novel, which is this, probably this, then there are other criteria namely that it can be solved by some agents, but it cannot be solved by others. It's not too difficult, but also not too hard. But one of the aspects is it must be novel, right? So we're not seeing any here, which means that those weren't novel enough. How does it measure novel? In the original implementation of poet, you had this environment generator, which was like a level generator, which made these gaps here and the stumps here. And you could specify, I believe, five numbers. So there was a five point scale in which you could specify how high the stumps were, right? You get this kind of pentagon here. How high the stumps were and how deep the gaps were and how rough the terrain was and the level generator would generate this level. And so you're basically your distance metric between environments was a vector of size five, right? This is environment one. And you had environment two, which if it's more, it has higher stumps, right? Then this particular number here maybe would be higher than this number here. So it was kind of limited to taking the Euclidean distance between two environment encodings in order to measure the distance between environments. This is very, very domain specific and the authors here argue what we should rather do is have a general environment agnostic distance metric, right? So here's what they propose. They propose the following. Why don't we, if we have a new environment, right? Let's say we have a new environment, we measure all of the agents, the current agents and the ones we've already seen, right? We measure all the agents in our database on this new environment. That's this. And they come up with scores, right? Each of them gets a score. And then we, you know, clip and bound the score. So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So we evaluate them and then we rank them from best to worst. And then we normalize, which simply means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5. And now this vector here, this is now used to compare environments. So if we have another environment, right? Right here, we have E2. And that gets a different ordering, right? So maybe agent one is now the best agent two is really bad and so on. Right? That gets a different ordering. Then the resulting vector here will be very, very different from this vector right here. And this is very agnostic, so no matter which environment it is, if the ordering of agents in it, the score they get, the order of it is the same. The environments aren't really different from each other. They authorize argue. But if the scores are very differently ranked, right? So imagine the environment is harder, but essentially the same than the scores would be lower, but still the agents would be ranked the same. So you can argue, well, that's just kind of the same environment. It but it's except a step like this now has a super steep step, right? It's not very different. But if, you know, if instead of that, you get an environment that is like this, like you say, wow, that's qualitatively different. 
And you would expect from this one to this one that the agents performance would roughly stay the same, but you would expect from the middle one to this one that an entirely different set of agents might perform well, right? In this one. So that's how novelty is measured. And I think it's pretty cool. Cool, cool way. I don't have coronavirus by the way. Maybe who knows? Huh. No, I just have a dry throat. All right. So this is the first enhancement they make is that they now measure novelty in this domain agnostic way. Pretty cool so far. And what this allows them to do, this allows them to actually not rely on this level generator with the five, with, you know, the five parameters in order to generate these levels. But these levels can now be produced however they want with different generators. And that's exactly what they do. They now employ neural networks. Not well, it's kind of a proto typical. It's a it's called a CPNN that generates these things. You might have seen in the examples, the enhanced poet doesn't have these gaps and stumps anymore. It simply has these landscapes that are super diverse, but they're still just their landscapes. And what it does is it evolves neural networks at the same time as it evolves the population. It evolves these. So the architecture of these networks isn't fixed. It's actually evolving along with the agent to make the challenges harder and harder. So you see there are like co-signs and signs in here and you can add them and subtract and so on. And that will give you a mapping from X, which is the X coordinate here to Y, which is the Y coordinate. All right. And that will give you kind of a continuous landscape depending on the architecture here and on the internal parameters. Of course, I guess there would also be a node. Some here like times a lambda factor. And then the lambda would also be a thing that is evolved. So pretty cool. Of course, the internals of this now aren't just described by a fixed vector anymore, but you don't need that anymore because we have a method to compare environments even if they come from completely different architectures of generators. Right. So it's pretty cool that the the agnostic comparison of environments allows you to now have a much more general level generator. And of course, now produce much more diverse environments. And that's what they exactly what they see. Of course, you see here the environments get super crazy. So they also propose kind of a novel metric to measure novelty. Sorry, to measure progress. So the question is how do we measure progress in these algorithms in these open-ended algorithms? And what they propose is this annex score, which is I have to go and look it up. The annex score I think is the number of new environments that are solved. Right. Yes. So exactly. The question is whether a system continues to generate interesting new things. And the way they measure it is by the accumulated number of novel environments created and solved. Right. So the question here is accumulated. That means over the entire run, they count up how many environments that they've seen that are novel. And we've already had the definition of novel. And in this case, basically means that it must pass the minimal criterion. It's neither too hard nor too easy. We've already seen this in how the offspring of environments is generated. Right. There's a minimal criterion. And it must be eventually solved. So that means the novel environment's created and solved. Right. 
So how many new environments are created and then at a later point solved? You can see the difference to the original poet in this graph. So the original poet eventually runs out of new environments because its generator is just not powerful enough. It can only modify these five variables. And eventually the environment aren't substantially novel from the old environments. Whereas the enhanced poet you can see even after this run. And I'm sure they have large infrastructure to do these experiments. It just continues to innovate new, more elaborate environments continuously. So this I think are the main things. They also do some improvement to the transfers and so on. I don't want to go into that. I just wanted to show these improvements so that you have the complete picture of how such an algorithm runs. My criticism to this is that if you just look at their thing is that with the leaving out of the gaps and the stumps and so on, in a weird way, of course the levels are diverse but they have become even more similar it seems. Like you're really relying on the availability to kind of continuously create these levels kind of like a a gan for levels. And you're relying on your ability to smoothly increase the difficulty of the levels to actually have a diversity in your level generator but also a kind of a smoothness with regard to the difficulty in order to build this whole curriculum. And I think even though the environments look more diverse, it might be going into a direction where you kind of engineer yourself into a corner where you are now even more and more relying on these evolving and parameterizable generators. Nonetheless the ideas I think are pretty cool and that's all I have to say about it. Bye bye. | [{"start": 0.0, "end": 5.12, "text": " There, before we jump into today's paper, I just want to give a shout out to Machine Learning"}, {"start": 5.12, "end": 11.76, "text": " Street Talk, where every week we talk about current or big trends or topics in machine learning."}, {"start": 12.48, "end": 18.56, "text": " The first discussion that we launched is actually on today's paper, The Enhanced Poet."}, {"start": 18.56, "end": 23.76, "text": " So if you like the following video, you might want to jump over to Machine Learning Street Talk"}, {"start": 23.76, "end": 29.68, "text": " and check out our discussion about it. It's very interesting. All right, have fun."}, {"start": 30.080000000000002, "end": 36.96, "text": " Hi there. What you're seeing here are many different environments from a single run of a system"}, {"start": 36.96, "end": 44.160000000000004, "text": " that's called The Enhanced Poet. Last time we've taken a look at a system called Poet, and the"}, {"start": 44.160000000000004, "end": 51.44, "text": " Enhanced Poet is kind of an improvement over the original Poet fixing some of its shortcomings."}, {"start": 51.44, "end": 62.64, "text": " And you see here that the agent is able to solve this very, very diverse set of environments."}, {"start": 63.44, "end": 71.36, "text": " And the notable thing is this is from a single run of this algorithm. So one run will produce all"}, {"start": 71.36, "end": 76.4, "text": " these different environments and will produce agents that are able to solve all the different"}, {"start": 76.4, "end": 82.48, "text": " environments at the same time in parallel. So it's a population-based method. If you haven't seen"}, {"start": 82.48, "end": 89.44000000000001, "text": " the video I did on Poet, I suggest you go and see that now. 
This is simply an enhancement to it,"}, {"start": 89.44000000000001, "end": 95.12, "text": " and I expect people to know kind of what I'm talking about. All right, it's going to be a short"}, {"start": 95.12, "end": 102.24000000000001, "text": " video, but I think it is a good addendum to Poet. So it's the Enhanced Poet, Open-ended Reinforcement"}, {"start": 102.24, "end": 109.52, "text": " Learning through unbounded invention of learning challenges and their solutions by Ruiwang-Jui-Lemon"}, {"start": 109.52, "end": 119.84, "text": " Aditarwal, Jial-Chi, Julin Li, Jeff Kloon, and Kenneth O'Stanley. So we'll jump right in. They"}, {"start": 119.84, "end": 126.88, "text": " make a number of improvements to the original Poet, and I simply want to discuss the most"}, {"start": 126.88, "end": 133.44, "text": " important ones. So you know, they have a nice graphic down here of what happens in Poet."}, {"start": 134.48, "end": 141.6, "text": " Poet builds up this tree of environments and to each environment it has an agent that it trains"}, {"start": 141.6, "end": 147.04, "text": " to solve that environment at the same time. So at the same time it will kind of start out here,"}, {"start": 147.04, "end": 154.4, "text": " it will generate offspring, it will continuously generate offspring, and then it will also continuously"}, {"start": 154.4, "end": 160.96, "text": " train agents in each environment that it produced in order to solve that environment. And it keeps"}, {"start": 160.96, "end": 168.72, "text": " doing that while producing more and more offspring. And then once in a while it does what is called"}, {"start": 168.72, "end": 175.28, "text": " a transfer. So that means that, for example, you see here, the offspring produced here"}, {"start": 176.0, "end": 183.44, "text": " from this environment. You kind of see that the lineage here kind of focuses on"}, {"start": 183.44, "end": 188.56, "text": " squiggly environments, right? You see that there's a bit of squiggle here and a bit of a squiggle"}, {"start": 188.56, "end": 194.0, "text": " here. And then the offspring, all of a sudden, is a bit more smooth but has this little step here."}, {"start": 194.32, "end": 200.32, "text": " And then this offspring of this environment has this large step here that agents that come from"}, {"start": 200.32, "end": 209.84, "text": " here have kind of been optimized to solve the squigglyness problem. But here, over here, this lineage"}, {"start": 209.84, "end": 218.16, "text": " has specified or specialized more and more in kind of like these kind of large drops or steep hills."}, {"start": 218.16, "end": 225.76, "text": " So the agent that was trained over here was found to be very effective in this environment and"}, {"start": 225.76, "end": 233.6, "text": " therefore can be transferred. So this kind of population branching out into the different trees"}, {"start": 233.6, "end": 241.12, "text": " and then transferring solutions between the parts of the trees. That's what makes poet very,"}, {"start": 241.12, "end": 252.07999999999998, "text": " very powerful mechanism to solve these kind of tasks. All right. So how does this improve?"}, {"start": 252.07999999999998, "end": 259.68, "text": " Now, the first thing that poet does is it generates these environments and it always wants to generate"}, {"start": 259.68, "end": 266.40000000000003, "text": " new environments. So it always generates offspring to each environment. 
So let's say we are here,"}, {"start": 266.40000000000003, "end": 272.08, "text": " it will generate offspring to each environment here, right? Each that we have, let's see,"}, {"start": 272.08, "end": 279.6, "text": " we have only seen so far. And then it only picks the most novel ones, the ones that are most novel,"}, {"start": 279.6, "end": 285.84000000000003, "text": " which is this, probably this, then there are other criteria namely that it can be solved by some"}, {"start": 285.84, "end": 292.15999999999997, "text": " agents, but it cannot be solved by others. It's not too difficult, but also not too hard. But one of"}, {"start": 292.15999999999997, "end": 298.79999999999995, "text": " the aspects is it must be novel, right? So we're not seeing any here, which means that those weren't"}, {"start": 298.79999999999995, "end": 305.76, "text": " novel enough. How does it measure novel? In the original implementation of poet, you had this"}, {"start": 305.76, "end": 312.96, "text": " environment generator, which was like a level generator, which made these gaps here and the"}, {"start": 312.96, "end": 321.12, "text": " stumps here. And you could specify, I believe, five numbers. So there was a five point scale in which"}, {"start": 321.12, "end": 327.35999999999996, "text": " you could specify how high the stumps were, right? You get this kind of pentagon here. How high the"}, {"start": 327.35999999999996, "end": 334.08, "text": " stumps were and how deep the gaps were and how rough the terrain was and the level generator would"}, {"start": 334.08, "end": 342.71999999999997, "text": " generate this level. And so you're basically your distance metric between environments was a vector"}, {"start": 342.72, "end": 349.52000000000004, "text": " of size five, right? This is environment one. And you had environment two, which if it's more,"}, {"start": 349.52000000000004, "end": 355.36, "text": " it has higher stumps, right? Then this particular number here maybe would be higher than this"}, {"start": 355.36, "end": 362.96000000000004, "text": " number here. So it was kind of limited to taking the Euclidean distance between two"}, {"start": 362.96000000000004, "end": 369.12, "text": " environment encodings in order to measure the distance between environments."}, {"start": 369.12, "end": 376.64, "text": " This is very, very domain specific and the authors here argue what we should rather do"}, {"start": 378.96, "end": 387.36, "text": " is have a general environment agnostic distance metric, right? So here's what they propose."}, {"start": 388.48, "end": 396.08, "text": " They propose the following. Why don't we, if we have a new environment, right? Let's say we have a"}, {"start": 396.08, "end": 403.52, "text": " new environment, we measure all of the agents, the current agents and the ones we've already seen,"}, {"start": 403.52, "end": 410.0, "text": " right? We measure all the agents in our database on this new environment. That's this. And they come"}, {"start": 410.0, "end": 416.64, "text": " up with scores, right? Each of them gets a score. And then we, you know, clip and bound the score."}, {"start": 416.64, "end": 424.24, "text": " So the max here is 300 and the minimum is 50. But in any case, we then rank them, right? So we"}, {"start": 424.24, "end": 432.08, "text": " evaluate them and then we rank them from best to worst. 
And then we normalize, which simply"}, {"start": 432.08, "end": 442.08, "text": " means that the best one gets a score of 0.5 and the worst one gets a score of negative 0.5."}, {"start": 443.04, "end": 449.04, "text": " And now this vector here, this is now used to compare environments. So if we have another"}, {"start": 449.04, "end": 459.04, "text": " environment, right? Right here, we have E2. And that gets a different ordering, right? So maybe"}, {"start": 459.84000000000003, "end": 466.0, "text": " agent one is now the best agent two is really bad and so on. Right? That gets a different ordering."}, {"start": 466.0, "end": 473.20000000000005, "text": " Then the resulting vector here will be very, very different from this vector right here."}, {"start": 473.2, "end": 483.52, "text": " And this is very agnostic, so no matter which environment it is, if the ordering of agents in it,"}, {"start": 483.52, "end": 489.28, "text": " the score they get, the order of it is the same. The environments aren't really different from"}, {"start": 489.28, "end": 497.44, "text": " each other. They authorize argue. But if the scores are very differently ranked, right? So imagine"}, {"start": 497.44, "end": 503.92, "text": " the environment is harder, but essentially the same than the scores would be lower, but still"}, {"start": 503.92, "end": 508.96, "text": " the agents would be ranked the same. So you can argue, well, that's just kind of the same"}, {"start": 508.96, "end": 514.24, "text": " environment. It but it's except a step like this now has a super steep step, right? It's not"}, {"start": 515.12, "end": 522.4, "text": " very different. But if, you know, if instead of that, you get an environment that"}, {"start": 522.4, "end": 530.3199999999999, "text": " is like this, like you say, wow, that's qualitatively different. And you would expect from this one"}, {"start": 530.3199999999999, "end": 535.76, "text": " to this one that the agents performance would roughly stay the same, but you would expect"}, {"start": 536.3199999999999, "end": 542.24, "text": " from the middle one to this one that an entirely different set of agents might perform well,"}, {"start": 542.24, "end": 547.4399999999999, "text": " right? In this one. So that's how novelty is measured. And I think it's pretty cool."}, {"start": 547.44, "end": 554.0, "text": " Cool, cool way. I don't have coronavirus by the way. Maybe who knows? Huh."}, {"start": 556.6400000000001, "end": 566.0, "text": " No, I just have a dry throat. All right. So this is the first enhancement they make is that they"}, {"start": 566.0, "end": 573.0400000000001, "text": " now measure novelty in this domain agnostic way. Pretty cool so far. And what this allows them to do,"}, {"start": 573.04, "end": 580.88, "text": " this allows them to actually not rely on this level generator with the five, with, you know, the five"}, {"start": 582.0, "end": 587.68, "text": " parameters in order to generate these levels. But these levels can now be produced however they"}, {"start": 587.68, "end": 595.1999999999999, "text": " want with different generators. And that's exactly what they do. They now employ neural networks."}, {"start": 595.2, "end": 603.84, "text": " Not well, it's kind of a proto typical. 
It's a it's called a CPNN that generates these things."}, {"start": 603.84, "end": 609.6800000000001, "text": " You might have seen in the examples, the enhanced poet doesn't have these gaps and stumps anymore."}, {"start": 609.6800000000001, "end": 615.84, "text": " It simply has these landscapes that are super diverse, but they're still just their landscapes."}, {"start": 617.76, "end": 624.4000000000001, "text": " And what it does is it evolves neural networks at the same time as it evolves the population."}, {"start": 624.4, "end": 630.24, "text": " It evolves these. So the architecture of these networks isn't fixed. It's actually evolving"}, {"start": 631.1999999999999, "end": 637.1999999999999, "text": " along with the agent to make the challenges harder and harder. So you see there are like co-signs"}, {"start": 637.1999999999999, "end": 642.8, "text": " and signs in here and you can add them and subtract and so on. And that will give you a mapping"}, {"start": 642.8, "end": 650.24, "text": " from X, which is the X coordinate here to Y, which is the Y coordinate. All right. And that will"}, {"start": 650.24, "end": 658.0, "text": " give you kind of a continuous landscape depending on the architecture here and on the internal"}, {"start": 658.0, "end": 664.5600000000001, "text": " parameters. Of course, I guess there would also be a node. Some here like times a lambda factor."}, {"start": 664.5600000000001, "end": 671.2, "text": " And then the lambda would also be a thing that is evolved. So pretty cool. Of course,"}, {"start": 671.76, "end": 677.28, "text": " the internals of this now aren't just described by a fixed vector anymore, but you don't need that"}, {"start": 677.28, "end": 682.48, "text": " anymore because we have a method to compare environments even if they come from completely"}, {"start": 682.48, "end": 689.68, "text": " different architectures of generators. Right. So it's pretty cool that the the"}, {"start": 690.24, "end": 697.4399999999999, "text": " agnostic comparison of environments allows you to now have a much more general level generator."}, {"start": 697.4399999999999, "end": 703.8399999999999, "text": " And of course, now produce much more diverse environments. And that's what they exactly what they"}, {"start": 703.84, "end": 709.76, "text": " see. Of course, you see here the environments get super crazy. So they also"}, {"start": 711.12, "end": 719.6, "text": " propose kind of a novel metric to measure novelty. Sorry, to measure progress. So the question is"}, {"start": 719.6, "end": 726.32, "text": " how do we measure progress in these algorithms in these open-ended algorithms? And what they propose"}, {"start": 726.32, "end": 737.0400000000001, "text": " is this annex score, which is I have to go and look it up. The annex score I think is the number"}, {"start": 737.0400000000001, "end": 751.2, "text": " of new environments that are solved. Right. Yes. So exactly. The question is whether a system"}, {"start": 751.2, "end": 757.5200000000001, "text": " continues to generate interesting new things. And the way they measure it is by the"}, {"start": 758.5600000000001, "end": 767.2800000000001, "text": " accumulated number of novel environments created and solved. Right. So the question here is"}, {"start": 767.2800000000001, "end": 774.72, "text": " accumulated. That means over the entire run, they count up how many environments that they've seen"}, {"start": 774.72, "end": 781.44, "text": " that are novel. And we've already had the definition of novel. 
And in this case,"}, {"start": 782.08, "end": 789.36, "text": " basically means that it must pass the minimal criterion. It's neither too hard nor too easy."}, {"start": 789.36, "end": 795.0400000000001, "text": " We've already seen this in how the offspring of environments is generated. Right. There's a minimal"}, {"start": 795.0400000000001, "end": 804.5600000000001, "text": " criterion. And it must be eventually solved. So that means the novel environment's created"}, {"start": 804.56, "end": 811.04, "text": " and solved. Right. So how many new environments are created and then at a later point solved?"}, {"start": 811.5999999999999, "end": 819.4399999999999, "text": " You can see the difference to the original poet in this graph. So the original poet eventually"}, {"start": 820.16, "end": 826.9599999999999, "text": " runs out of new environments because its generator is just not powerful enough. It can only"}, {"start": 826.9599999999999, "end": 833.52, "text": " modify these five variables. And eventually the environment aren't substantially novel"}, {"start": 833.52, "end": 839.4399999999999, "text": " from the old environments. Whereas the enhanced poet you can see even after this run. And I'm sure"}, {"start": 839.4399999999999, "end": 846.48, "text": " they have large infrastructure to do these experiments. It just continues to innovate new,"}, {"start": 846.48, "end": 856.56, "text": " more elaborate environments continuously. So this I think are the main things. They also do some"}, {"start": 856.56, "end": 861.1999999999999, "text": " improvement to the transfers and so on. I don't want to go into that. I just wanted to show these"}, {"start": 861.2, "end": 867.6, "text": " improvements so that you have the complete picture of how such an algorithm runs. My criticism to"}, {"start": 867.6, "end": 877.0400000000001, "text": " this is that if you just look at their thing is that with the leaving out of the gaps and the"}, {"start": 877.0400000000001, "end": 883.2, "text": " stumps and so on, in a weird way, of course the levels are diverse but they have become even more"}, {"start": 883.2, "end": 890.1600000000001, "text": " similar it seems. Like you're really relying on the availability to kind of continuously"}, {"start": 890.16, "end": 899.28, "text": " create these levels kind of like a a gan for levels. And you're relying on your ability to"}, {"start": 899.28, "end": 908.3199999999999, "text": " smoothly increase the difficulty of the levels to actually have a diversity in your level generator"}, {"start": 908.3199999999999, "end": 915.68, "text": " but also a kind of a smoothness with regard to the difficulty in order to build this whole curriculum."}, {"start": 915.68, "end": 922.8, "text": " And I think even though the environments look more diverse, it might be going into a direction where"}, {"start": 922.8, "end": 930.3199999999999, "text": " you kind of engineer yourself into a corner where you are now even more and more relying on these"}, {"start": 930.3199999999999, "end": 937.68, "text": " evolving and parameterizable generators. Nonetheless the ideas I think are pretty cool and that's"}, {"start": 937.68, "end": 947.68, "text": " all I have to say about it. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=klPuEHCKG9M | Evolving Normalization-Activation Layers | Normalization and activation layers have seen a long history of hand-crafted variants with various results. This paper proposes an evolutionary search to determine the ultimate, final and best combined normalization-activation layer... in a very specific setting.
https://arxiv.org/abs/2004.02967
Abstract:
Normalization layers and activation functions are critical components in deep neural networks that frequently co-locate with each other. Instead of designing them separately, we unify them into a single computation graph, and evolve its structure starting from low-level primitives. Our layer search algorithm leads to the discovery of EvoNorms, a set of new normalization-activation layers that go beyond existing design patterns. Several of these layers enjoy the property of being independent from the batch statistics. Our experiments show that EvoNorms not only excel on a variety of image classification models including ResNets, MobileNets and EfficientNets, but also transfer well to Mask R-CNN for instance segmentation and BigGAN for image synthesis, outperforming BatchNorm and GroupNorm based layers by a significant margin in many cases.
Authors: Hanxiao Liu, Andrew Brock, Karen Simonyan, Quoc V. Le
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at evolving normalization-activation layers by Hanxiao Liu, Andrew Brock, Karen Simonyan and Quoc V. Le. These are people from Google Brain and Google DeepMind. The topic of this paper is, as you can see, normalization and activation layers, and we want to evolve them. I think the title says a lot, but let's go down here and see what this is about. So we'll look at image neural networks, and current architectures are kind of focused around the same principles. What they'll have is, ever since ResNet, these neural networks will be composed of these kinds of blocks that come one after another. So there'll be a block up here, then the signal will propagate, and there'll be another block down here. Now these blocks usually consist of what's called a skip connection out here. This is the fundamental ingredient of ResNet; what made ResNet so effective seems to be the introduction of the skip connection. You see all of these have the skip connection here. So these are variants on ResNet, and then we see that these are always mixed between convolutional layers and other things that here are called EvoNorm. Now in a classic ResNet, you would have something like a convolutional layer, then you would have a batch normalization, and then you would have a non-linearity, for example a ReLU, and then you would go on to the next convolutional layer. So you see that the paper mainly cares about these two layers here, the batch norm and the ReLU, and it combines them into what it calls an EvoNorm. So the EvoNorm layers here are supposed to replace the normalization and the activation layers, combine them, and make them better. And how does it do that? Through evolutionary search. Alright, so these three models here are the ResNet, MobileNet and EfficientNet architectures. They're all image classifier architectures. Alright, so let's see how it does that. What it does is it evolves these layers from single primitives. Now if you've seen the batch normalization paper, then you know that batch normalization is just a formula you can write down, and the same goes for these other normalization methods. So people have developed other ones than batch norm. For example, this is group norm with a ReLU activation function, and you can write these two layers down as this mathematical expression. So it features things like: this is the input signal, I think this is the mean across some groups, this is a bias term that you can train, this is the standard deviation across the same groups, and so on. And this here is the ReLU term. So you can kind of write this down as a combination of these primitives, and so you can write it as a graph. Now this graph here is actually an activation function that this paper has found. It's called EvoNorm-S0, and the mathematical equation is the thing down here. It's not that different, you can see, from previous activations. It also has the input signal, it has this variance or standard deviation across groups, it has a nonlinearity here. And the graph here is simply a graph of mathematical operations made out of primitives. Right? So this paper takes all of the primitives that they can think of and puts them in this table. They say, okay, our search space for these layers, so we want to evolve these layers, our search space is a combination of any of these primitives. So you can see here you have something like addition, multiplication, negation.
So you can subtract things, you can take the log of things, you can take the square root, and so on. So here we have a max, which, if you put zero as one of the arguments, gives you the ReLU activation function, and you have the sigmoid, which is another nonlinearity. Then you can also do something like: I want to compute the batch mean, or I want to compute a group standard deviation. Pretty much anything that current, handcrafted activation and normalization functions use is available as a primitive in this search. So how does this method search? This is the process of how the method searches. It does this in an evolutionary way. An evolutionary protocol means that you don't develop one layer, like you would if you were doing something like gradient descent; you develop a whole population of layers. So these can be maybe a couple of hundred or a couple of thousand different layer architectures that you develop at the same time. What you do each time is you put them into a tournament, which basically means you sample a couple of them. I don't think they do all of them at the same time; they sample a couple of them. And then they train on what they call a proxy task. Their proxy task is CIFAR-10. CIFAR-10 is a fairly small image classification task, and they train on CIFAR-10 because it's pretty fast: you can train something on CIFAR-10 in a couple of minutes or an hour or so and get a pretty good feeling for how good the final accuracy will be. So this is a fast classification task, and it needs to be fast because they need to do this a lot; the population is large and they need to repeat this over time. In any case, they take a sample, they train on CIFAR-10, and then the winner, the winning layer, is picked from this sample. And only the winning layer is allowed to mutate. So the winning layer is mutated then. And mutation means you change it a bit. Now you don't change it in an informed way, you just change it at random, and then you put the mutated layers back into the population. Of course the hope is that, by repeating this process over and over, simply picking the winning layers again and again acts as a selective pressure, such that through the random mutations, but with the tournament-style evaluation and picking of the winner, over time the best performing models in your population, the best scoring models, get better and better. So the assumption is that this isn't a pure combinatorial optimization or a pure random function: if I take something that works well, there are ways I can perturb it that make it work even better. So even if most of the perturbations are worse, there are some that are better, and the tournament style will always find those for me. And then I can modify these again at random, and among those I can again find the ones that perform even better. So that is the method. So the question is, how do you mutate a layer? And mutation, I believe, is done in a few different ways here. But if you look at this expression here, what you have here is the input, this signal here. And you always start out, I believe, with the input, with a layer that just emits the number one, a layer that just emits the number zero, and then you have two trainable vectors that you can include. So you just start out with these four things, and then every time you mutate, you add one of these blocks.
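Before the mutation details continue, here is a minimal sketch of the tournament loop just described: sample a few candidates from the population, train each briefly on the proxy task, keep a winner, mutate it, and put the mutant back. All names are illustrative stand-ins, not the paper's code; for brevity the winner is picked from a single proxy score, whereas the paper uses a Pareto frontier over three anchor architectures, as sketched further below.

```python
import random

def layer_search(population, num_rounds, tournament_size,
                 train_proxy, mutate, passes_rejection):
    """Tournament-style evolution over a population of candidate layers.

    train_proxy(layer)      -> a quick proxy score, e.g. accuracy after a short CIFAR-10 run
    mutate(layer)           -> a randomly perturbed copy of the layer's computation graph
    passes_rejection(layer) -> False for useless or numerically unstable mutants
    """
    for _ in range(num_rounds):
        tournament = random.sample(population, tournament_size)
        scored = [(train_proxy(layer), layer) for layer in tournament]
        _, winner = max(scored, key=lambda pair: pair[0])  # only the winner reproduces
        child = mutate(winner)
        if passes_rejection(child):          # see the rejection step discussed below
            population.append(child)         # selection pressure comes from who reproduces
    return population
```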
And I believe there's also a mechanism, like a random chance, of changing the individual nodes or of actually starting from scratch again, which is pretty important, otherwise you just grow bigger and bigger monsters. But the way you mutate is the following. You add a new block, let's say I add one here, and you decide on one of the primitives from the table. Here I'm going to simply decide on a minus operation, so a subtraction operation. And then once you've done that, you choose two parents, as we'll see; you choose two parents because the minus operation needs two parents. You choose two of the existing nodes at random here as parents. So I'm going to choose this thing here to be a parent, and I'm going to choose this thing here to be a parent, at random. And then this new node will become the new output of the layer. So you see that this was the previous output here, this multiplication node between this and this. Now this is no longer the output, now this is obsolete, this is no longer part of the final mathematical expression here. So you see all the gray nodes here are actually sort of obsolete nodes, but they are still kept, because in subsequent steps you can choose them as parents, and then they become part of the expression again. You can see here this tanh node: it was just a node that was sort of a dead end in the expression before, but now, with a new mutation, it is again included in the expression because I randomly selected it as a parent. But then this node here, and, as we said, this node here, they are now obsolete nodes because they are no longer part of the expression. The expression in this case would go from here to here, including this node, and it would go from here over here. So these nodes are now part of the expression. So this is how you mutate, and as I said, you can also mutate such that you start from scratch. So the second part in this thing is how exactly you determine the winner, and what the tournament is. So how do you do that? The tournament is exactly what we've seen before when we looked at the different layers. We said we train on CIFAR-10, and what we do is we train these three architectures on CIFAR-10, so the ResNet, the MobileNet, and the EfficientNet. We train these three architectures on CIFAR-10 with the EvoNorm layer instantiated by that particular sample from the population, and then we look at their accuracies. And we determine what is called the Pareto frontier of the population. I think this is further up. Oh, right here. Okay. So the dots here, the red and the gray dots, would be our sample, so all of this would be our samples, and their performance. It's actually shown on two models here, but in practice we have three; it's just two to graph it better. So we put them here and we determine the Pareto frontier. Now here, A, B, and C are part of the Pareto frontier because A outperforms everything else on Model 1, C outperforms everything else on Model 2, and B outperforms C on Model 1 but also outperforms A on Model 2. This is what's called the Pareto frontier, and we pick one of those as the winner. So they are all kind of one-third winners here. So this is how you do the tournament: you pick the winner like this, and then you allow the winner to mutate. The last part, which is not drawn in here but is actually somewhere here-ish, is called the rejection step.
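Before moving on to the rejection step, here is a sketch of the two mechanics just described: mutating the expression graph by wiring a random primitive to two random existing nodes, and picking winners from the Pareto frontier over the anchor architectures. The node representation, the tiny primitive list, and the accuracy numbers are all illustrative assumptions, not taken from the paper's code.

```python
import random

PRIMITIVES = ["add", "sub", "mul", "max"]   # tiny stand-in; the paper's table is much larger

class Node:
    def __init__(self, op=None, parents=()):
        self.op = op                  # None for leaves: input x, constants, trainable vectors
        self.parents = list(parents)

def mutate_graph(nodes):
    """Add one random primitive node; it becomes the new output of the layer.

    `nodes` keeps every node ever created, including currently obsolete dead ends,
    so later mutations can pick them up again as parents.
    """
    new_node = Node(op=random.choice(PRIMITIVES),
                    parents=random.sample(nodes, 2))   # binary ops need two parents
    nodes.append(new_node)
    return new_node   # whatever is unreachable from this output is now obsolete

graph = [Node(), Node(), Node(), Node()]   # input, a constant, two trainable vectors
output = mutate_graph(graph)

def pareto_frontier(candidates):
    """candidates: name -> tuple of accuracies, one entry per anchor architecture.
    Returns every candidate that no other candidate dominates."""
    def dominates(a, b):
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
    return [name for name, accs in candidates.items()
            if not any(dominates(other, accs)
                       for other_name, other in candidates.items() if other_name != name)]

# the A/B/C picture from the discussion, with two models for readability:
print(pareto_frontier({"A": (0.94, 0.80), "B": (0.92, 0.88),
                       "C": (0.85, 0.93), "D": (0.84, 0.79)}))
# -> ['A', 'B', 'C']; D is dominated and never gets to reproduce
```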
So the rejection step is important because, as they say, we have these mutated layers, but some of the mutations are probably going to be just terrible, like destroying everything, untrainable layers, just horrible, such that the layer is useless. They don't want to put those back into the population, because that might either deteriorate or severely slow down this progress here. So they want to stop them, and only the good ones, only the ones that are somewhat fairly okay, get back into the population. They don't always have to improve, but they have to be at least minimally useful. So the rejection step they describe down here. In the rejection protocols, they have two criteria for rejecting mutated architectures. First, they have a quality criterion: they say we discard layers that achieve less than 20% validation accuracy in 100 training steps on any of the three anchor architectures. The reasoning behind this is that if you have 100 training steps and you achieve less than 20% validation accuracy, you're not going anywhere, because 10% is already random performance. If you are at less than 20% after 100 steps, your layer is pretty useless and can be discarded. So they say this simple mechanism ensures the compute resources concentrate on the full training process of a small subset of promising candidates. The 100 training steps are of course not enough to train fully, but you can see after 100 training steps whether or not the layer even does something, so you reject those. This makes pretty much sense. The second criterion is what they call stability. They say we reject layers that are subject to numerical instability. How do they find numerical instability? They define it like this. What they do is they take the parameters, so the layers; this is an architecture, so the model, and these convolutional weights are the theta. And G is the computation graph, which is the EvoNorm in this case, and there is a loss defined across them; of course, this is the loss of the neural network on the samples. So these are the convolutional layers, and these are the normalization layers. Now what we want to do is see how this loss changes when we change the convolutional layers. So you have to imagine: here are the convolutional layers, then there are these weird normalization layers, and then again there are the convolutional layers. We want to see how the loss changes if we change the weights of the convolution by a little bit, right? We just change it a little bit and see how the loss changes. This is basically the gradient with respect to the weights; this is how you train, this entire thing here is how you train the neural network, right? So you want to see how large this gradient is, and you kind of want to do this in an adversarial way, so you want to find the maximum perturbation you can achieve, right? You say, okay, if I change this a little bit in the worst direction I possibly can, how large is the perturbation going to be? And that's how they define numerical instability. So it basically means that if this is very high, then the network might be doing well right where it is, but changing it just a little bit will make it terrible, right? So they say we ascend the loss in this direction for 100 steps, and layers with a worst-case gradient norm greater than 10 to the eighth are rejected.
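These two tests can be written down roughly as follows, fleshing out the `passes_rejection` placeholder from the earlier loop sketch with precomputed evaluation numbers rather than the layer itself. The 20% threshold after 100 steps and the 1e8 worst-case gradient norm are the figures quoted in the transcript; how the adversarial ascent on the weights is carried out is assumed to happen elsewhere and is not implemented here.

```python
def passes_quality(proxy_accuracies, min_val_acc=0.20):
    """proxy_accuracies: validation accuracy after 100 training steps on each
    of the three anchor architectures; near-random layers are discarded."""
    return all(acc >= min_val_acc for acc in proxy_accuracies)

def passes_stability(worst_case_grad_norm, max_grad_norm=1e8):
    """worst_case_grad_norm: gradient norm after ~100 steps of adversarially
    pushing the convolutional weights in the direction that inflates it most
    (that computation is assumed to be done elsewhere)."""
    return worst_case_grad_norm <= max_grad_norm

def passes_rejection(proxy_accuracies, worst_case_grad_norm):
    # a mutated layer only re-enters the population if it passes both tests
    return passes_quality(proxy_accuracies) and passes_stability(worst_case_grad_norm)

# example: decent proxy accuracy with tame gradients passes; a near-random layer does not
print(passes_rejection([0.34, 0.29, 0.31], 2.3e3))   # True
print(passes_rejection([0.12, 0.29, 0.31], 2.3e3))   # False
```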
Now, the reasoning they give for this second criterion seems a bit strange to me. The quality criterion made sense, but for the stability criterion they argue that the two tests are complementary with each other: for example, we found a layer like this that is able to achieve reasonable accuracies on CIFAR-10 across all the anchor architectures, so it passes the quality criterion above, but its gradients quickly explode on ImageNet, possibly due to the absence of normalization operations. And then you say, aha, okay, so what probably happened is the following: they did their experiment without this, right? Just with the quality criterion, which I guess makes sense. They trained on CIFAR-10, that's how they do their evolutionary search, then they took their best performing layers, among them this one, and they went to ImageNet and said, oh, let's test these now on ImageNet classifiers, right? We found these new architectures, let's see. And then they got exploding gradients, right? And then they went back into their original problem formulation and said, okay, what can we build into the evolution such that this won't happen? And here you already see kind of the problem with this. What you would like to have is an algorithm that is general, such that it doesn't depend on the architectures and so on that are used. But you see already here that the authors, they don't direct the search, right? The search is evolution, but the evolution is very much guided by what these rejection protocols are. And you see here the authors tailoring their rejection protocols to the specific datasets and architectures that they use and to the specific problems they experienced when trying to apply their method. And that, I think, weakens the application of this method a bit, because it seems that this particular form of rejection protocols is very much tailored to this setup of doing these three architectures on CIFAR-10 and then going to ImageNet. And that tells me, if I want to do this in a very different domain, it's not very clear that I could just plop in whatever they found works and have it outperform the others as much as in their experiment. It tells me that there is probably a somewhat large dependence on the specifics here. But that being said, these are the rejection criteria, so at each step here they reject the worst ones, the rest go back into the population, and then that process repeats and repeats, and at the end you hopefully end up with some very good normalization layers. Now let's see: if you compare these found normalization layers with the classic variant, so the classic thing here is this red line, this is batch norm and ReLU, the classic activation-normalization combo you put in a network, you see that these found methods outperform it on a pretty stable basis. So that's pretty cool, but that is, as we said, on CIFAR-10, that is on the exact thing that they search over, so it's not really a surprise that if I search over a bunch of combinations and always keep the best ones, I would outperform any single one of them. The interesting thing is what happens if we take what we found and put it into a different architecture or a different data set. Now here the architecture isn't really different, because it's kind of the same, but they do evaluate it on ImageNet.
ImageNet is a different data set than CIFAR-10, much larger, and so they put their architectures, here with EvoNorm, onto ImageNet and evaluated them. You can see that it has fairly competitive results across the board. I find it fairly cool that the best performing ones on CIFAR-10 would also perform better than the corresponding baselines on ImageNet. You already see as well that the margin is not super high. So the difference here, I would say it is improving, but sometimes it's the same, sometimes it's actually worse. To me those kinds of things are not super convincing, especially because this is the paper that suggests these methods, so they're naturally going to present them in the best possible way. So it seems like the massive outperformance on CIFAR-10 translates only marginally to ImageNet, and these are the same architectures, the ResNet-50 and MobileNet and EfficientNet; these were already the architectures that they searched over. So my trust that this new normalization layer works when put into an actually different architecture is lower still. They do actually do some experiments on that as well, but these are just my thoughts when reading this. As well, and this I find very interesting, this column here is random search. So if you just do a random search, which means you just produce random layers, then it doesn't work at all: you take the best of the random ones you found and it doesn't transfer at all. But interestingly, if you do random search plus rejection, so the same rejection that they do, just without this tournament evolution mutation style, you simply search at random and then do rejection, that gives you fairly competitive numbers, right? And in some cases, you can even see here, it outperforms some of the classic methods. So just that will give you fairly decent results, right? And that, to me, seems to be even more a sign that what this method is mostly doing is just searching like mad for something that works on these particular architectures. And of course you can find things that work better if you search like mad, but then what do you do with it? What does it mean? Can we generalize? Now they do two additional tasks to show that it does generalize to other architectures and tasks. So first of all, they do object detection and instance segmentation on COCO. So this is a very different task. This is a Mask R-CNN, right? And they just put their layer in there. And you can see here that they generally outperform the baseline. I can't speak to how large this outperformance is; it seems like the numbers are fairly close together, but they are consistently better. And again, I don't necessarily trust these kinds of experiments too much, because who knows how much effort you can spend on making your method better. But in any case, they show that they are better, which is already something. But again, here the R50 indicates that we're again dealing with ResNet-50, ResNet-101 architectures, which are fairly similar to the ones that the method was searching over. The second thing is they say we generalize to GAN training. So they take a BigGAN, a BigGAN-deep, and they show that their method will outperform these other methods on the IS and FID metrics, that is, Inception Score and Fréchet Inception Distance. Yeah. So it will outperform them, but in kind of a weird way; okay, here it outperforms them consistently.
But then in the inception score, this batch and R1 plus RLU still seems to be like a lot higher than this EVO norm B0. And then this thing here that was performing worse in the image net is now performing somewhat better. So it is a cool result and definitely cool that you can pop this in here. I just think that the things that turn out here that they are tuned to very specific architectures to very specific tasks. So I think the big GAN deep, the kind of architectures will always be kind of the same. It will always be kind of ResNet-ish style neural networks. And the tasks here will always be sort of see for image net style things. And therefore I believe with the results we've seen, the fact that it outperforms so much on C410. But then the GANs on ImageNet become more marginal. I think that indicates that the GANs here most probably don't translate the further away you go. So I'm not sure that the EVO norm that they find, like that this particular thing here will remain the best thing. Across tasks. I think they just found this to work well in their particular setting here. And if I run the same thing with the slightly different architectures and slightly different tasks, I will come up with a different best thing. Yeah. All right. So these were my comments. They do some interesting experiments where they show that if they just do random layers, it's not as performant. Which I can believe if you just jumble these things randomly. It's probably not as good. So you need some kind of search criterion. And yeah, that was my thoughts on this paper. I invite you to read it. Look at it. Look at the existing experiment. It is a very good evaluated paper. And that bye bye. | [{"start": 0.0, "end": 5.9, "text": " Hi there. Today we're looking at evolving normalization activation layers by"}, {"start": 5.9, "end": 13.56, "text": " Hangiao Liu and Drew Brock, Karen Simone and Guo Vili. These are people from"}, {"start": 13.56, "end": 21.3, "text": " Google Brain and Google Deep Mind. The topic of this paper is, as you can see, it's"}, {"start": 21.3, "end": 27.28, "text": " about normalization activation layers and we want to evolve them. I think the"}, {"start": 27.28, "end": 32.980000000000004, "text": " print title says a lot, but let's go down here and see what this is about. So we'll"}, {"start": 32.980000000000004, "end": 42.82, "text": " look at image neural networks and current architectures are kind of focused"}, {"start": 42.82, "end": 49.02, "text": " around the same principles. What they'll have is, so ever since ResNet, these"}, {"start": 49.02, "end": 52.94, "text": " neural networks will be composed of these kind of blocks that come one after"}, {"start": 52.94, "end": 57.44, "text": " another. So there'll be a block up here and then the signal will propagate and"}, {"start": 57.44, "end": 63.199999999999996, "text": " there'll be another block down here. Now these blocks usually consist of what's"}, {"start": 63.199999999999996, "end": 68.66, "text": " called a skip connection out here. This is the fundamental ingredient of ResNet"}, {"start": 68.66, "end": 75.86, "text": " that made ResNet. So effective seem to be the introduction of the skip"}, {"start": 75.86, "end": 80.6, "text": " connection. You see all of these have the skip connection here. 
So these are"}, {"start": 80.6, "end": 87.16, "text": " variants on ResNet and then we see that these are always mixed between"}, {"start": 87.16, "end": 92.96, "text": " convolutional layers and then other things that here are called EVO norm. Now in"}, {"start": 92.96, "end": 98.19999999999999, "text": " a classic ResNet, you would have something like a convolutional layer, then you"}, {"start": 98.19999999999999, "end": 103.52, "text": " would have a batch normalization and then you would have a non-linearity for"}, {"start": 103.52, "end": 109.03999999999999, "text": " example, a relu and then you would go on to the next convolutional layer. So you"}, {"start": 109.04, "end": 115.28, "text": " see that the paper mainly cares about these two layers here. The batch norm and"}, {"start": 115.28, "end": 121.84, "text": " the relu and it combines them into what it's called, a calls an EVO norm. So the"}, {"start": 121.84, "end": 128.6, "text": " EVO norm layers here are supposed to replace the normalization and the"}, {"start": 128.6, "end": 135.20000000000002, "text": " activation layers, combine them and make them better. And how does it do that?"}, {"start": 135.2, "end": 143.44, "text": " Through evolutionary search. Alright, so these three models here are the ResNet,"}, {"start": 143.44, "end": 148.6, "text": " mobile net and efficient net architectures. They're all image classifier"}, {"start": 148.6, "end": 156.39999999999998, "text": " architectures. Alright, so let's see how it does that. What it does is it"}, {"start": 156.39999999999998, "end": 162.44, "text": " evolves these layers from single primitives. Now if you've seen the batch"}, {"start": 162.44, "end": 171.88, "text": " normalization paper, then you know that the batch normalization is just kind of"}, {"start": 171.88, "end": 177.64, "text": " a formula you can write down. Alright? And these other normalization methods. So"}, {"start": 177.64, "end": 181.56, "text": " people have developed other ones than batch norm. For example, this is group"}, {"start": 181.56, "end": 187.56, "text": " norm with a relu activation function and you can write these two layers down as"}, {"start": 187.56, "end": 192.36, "text": " this mathematical expression. So it features things like this is the input signal."}, {"start": 192.36, "end": 198.6, "text": " I think this is the mean across some groups. This is a bias term that you can"}, {"start": 198.6, "end": 204.68, "text": " train. This is the standard deviation across the same groups and so on. So and"}, {"start": 204.68, "end": 210.6, "text": " this here is the relu term. So you can you can kind of write this down as a"}, {"start": 210.6, "end": 218.48, "text": " combination of these primitives. And so you can write it in a graph. Now this"}, {"start": 218.48, "end": 224.92, "text": " graph here is actually an activation function that this paper has found."}, {"start": 224.92, "end": 232.76, "text": " It's called Evo norm s0 and the mathematical equation is the thing down here."}, {"start": 232.76, "end": 237.48, "text": " It's not that different. You can see from previous activations. It also has the"}, {"start": 237.48, "end": 244.28, "text": " input signal. It has the this variance or standard deviation across groups. It"}, {"start": 244.28, "end": 252.28, "text": " has a nonlinearity here. And the graph here is is is simply a graph of"}, {"start": 252.28, "end": 259.32, "text": " mathematical operations made out of primitives. Right? 
So this paper takes all"}, {"start": 259.32, "end": 264.36, "text": " of the primitives that they can think of and puts them in this table. They say"}, {"start": 264.36, "end": 269.72, "text": " okay, our search space for these layers. So we want to evolve these layers. Our"}, {"start": 269.72, "end": 275.16, "text": " search space is a combination of any of these primitives. So you can see here"}, {"start": 276.04, "end": 283.08000000000004, "text": " you have something like addition, multiplication, negation. So you can you can"}, {"start": 283.08000000000004, "end": 288.92, "text": " subtract things. You can take the log of things. You can take the square root and so on."}, {"start": 288.92, "end": 296.12, "text": " So here we have a max, which is the reloo activation function. But if you put"}, {"start": 296.12, "end": 300.84000000000003, "text": " zero as one of them, you have the sigmoid, which is another nonlinearity."}, {"start": 300.84000000000003, "end": 305.8, "text": " Then you can also do something like I want to compute the batch mean or I want to"}, {"start": 305.8, "end": 311.16, "text": " compute a group standard deviation. Pretty much anything that current activation"}, {"start": 311.16, "end": 316.44, "text": " functions that have been handcrafted use are available as primitives across"}, {"start": 316.44, "end": 323.4, "text": " this search. So how does this method search? This is the process of how the method"}, {"start": 323.4, "end": 329.48, "text": " searches. It does this in an evolutionary way. And evolutionary protocols, it means that you don't"}, {"start": 330.2, "end": 334.84, "text": " develop one layer like you would do if you were to do something like gradient descent."}, {"start": 335.64, "end": 341.72, "text": " You develop a whole population of layers. So these can be maybe a couple of hundred or a couple"}, {"start": 341.72, "end": 347.48, "text": " of thousands different layer architectures that you develop at the same time."}, {"start": 348.6, "end": 353.88000000000005, "text": " What you do each time you put them into a tournament, which basically means you want a sample,"}, {"start": 356.6, "end": 360.76000000000005, "text": " a couple of them. I don't think they do all at the same time. They sample a couple of them."}, {"start": 361.88000000000005, "end": 367.72, "text": " And then they train on what they call a proxy task. Their proxy task is C410."}, {"start": 367.72, "end": 377.8, "text": " So C410 is a fairly small image classification task. And they train on C410 because it's pretty"}, {"start": 377.8, "end": 384.44000000000005, "text": " fast. You can train something on C410 in like a couple of minutes or an hour or so. And get a"}, {"start": 384.44000000000005, "end": 392.92, "text": " pretty good feeling for how good the final accuracy will be. You can get that pretty fast."}, {"start": 392.92, "end": 397.56, "text": " So this is a fast classification task because they need to do this a lot. The population is"}, {"start": 397.56, "end": 404.68, "text": " large and they need to repeat this over time. In any case, they take a sample. They train on C410"}, {"start": 404.68, "end": 411.56, "text": " and then the winner, the winning layer is picked from this sample. And only the winning layer"}, {"start": 411.56, "end": 418.2, "text": " is allowed to mutate. So the winning layer is mutated then. And mutation means you kind of change"}, {"start": 418.2, "end": 424.6, "text": " it a bit. Now you don't change it in an informed way. 
You just change it at random. And of course the"}, {"start": 424.6, "end": 433.56, "text": "... and then you put the mutated layers back into the population. Of course the hope is that by"}, {"start": 433.56, "end": 440.52000000000004, "text": " repeating this process, you repeat and repeat and repeat that simply by picking the winning layers"}, {"start": 440.52000000000004, "end": 446.12, "text": " over and over and over again is a selective pressure such that through the random mutations,"}, {"start": 446.12, "end": 455.0, "text": " but the tournament style evaluation and picking of the winner, that over time, the best performing"}, {"start": 455.0, "end": 460.2, "text": " models in your population, the best scoring model here will get better and better and better."}, {"start": 461.08, "end": 466.44, "text": " So the assumption is that this isn't like a pure combinatorical optimization or like a pure"}, {"start": 467.0, "end": 474.36, "text": " random function is that if I take something that works well, there are ways that I can perturb it"}, {"start": 474.36, "end": 481.08000000000004, "text": " that make it work even better. So even if most of the perturbations are worse, there are some"}, {"start": 481.08000000000004, "end": 487.72, "text": " and the tournament style will always find these ones for me that perform better. And then I can"}, {"start": 487.72, "end": 495.16, "text": " modify these again at random and then among these I can again find the ones that perform even better."}, {"start": 495.16, "end": 507.64000000000004, "text": " So that is the method. So the question is how do you mutate a layer? And mutation I believe is"}, {"start": 507.64000000000004, "end": 515.4, "text": " done in different ways here. But if you look at this here, at this expression, so what you have"}, {"start": 515.4, "end": 525.0, "text": " here is the input is this signal here. And you always start out, I believe, with the input with a"}, {"start": 526.76, "end": 533.24, "text": " layer that just emits the number one with a layer that just emits the number zero or a component."}, {"start": 533.24, "end": 541.0799999999999, "text": " And then you have two trainable vectors that you can include. And you just start out with these"}, {"start": 541.08, "end": 548.6, "text": " four things. And then every time you mutate, you add one of these blocks. And I believe there's"}, {"start": 548.6, "end": 554.44, "text": " also a method like a randomness of changing the individual things or of actually starting from"}, {"start": 554.44, "end": 561.48, "text": " scratch again. It's pretty important. Otherwise you just grow bigger and bigger monsters. But"}, {"start": 561.48, "end": 572.52, "text": " the way you mutate is the following. You add a new block. Let's say I add one here. And you decide"}, {"start": 572.52, "end": 578.28, "text": " on one of the primitives from the table. Here I'm going to simply decide on a minus operation."}, {"start": 578.28, "end": 587.64, "text": " So a subtraction operation. And then once you've done that, you choose two children. Sorry,"}, {"start": 587.64, "end": 593.4, "text": " two parents. I will see it. You choose two parents because the minus operation needs two parents."}, {"start": 593.4, "end": 602.4399999999999, "text": " You choose two of the parents at random here. So I'm going to choose this thing here to be a parent."}, {"start": 602.4399999999999, "end": 610.6, "text": " And I'm going to choose this thing here to be a parent at random. 
And then this new node will"}, {"start": 610.6, "end": 618.76, "text": " become the new output of the layer. So you see that this was the previous output here. This multiplication"}, {"start": 618.76, "end": 625.8000000000001, "text": " node between this and this. Now this is no longer the output. Now this is obsolete. This is no longer"}, {"start": 625.8000000000001, "end": 633.4, "text": " part of the final mathematical expression here. So you see all the gray nodes here were actually"}, {"start": 633.4, "end": 640.68, "text": " sort of obsolete nodes, but they are still kept because in subsequent steps, you can choose them"}, {"start": 640.68, "end": 646.68, "text": " as parents. And then they become part of the expression again. You can see here this 10H node."}, {"start": 647.3199999999999, "end": 656.68, "text": " It was just a node that was sort of a dead end in the expression before. But now with a new"}, {"start": 656.68, "end": 663.56, "text": " mutation, it is again included in the expression because I randomly selected it as a parent. But"}, {"start": 663.56, "end": 669.56, "text": " then this node here, and that was we said this node here, they are now obsolete nodes because they"}, {"start": 669.56, "end": 677.0, "text": " are no longer part of the expression. The expression in this case would go from here to here,"}, {"start": 677.0, "end": 688.68, "text": " including this node. And it would go from here over here. So these nodes are now part of the"}, {"start": 688.68, "end": 693.4, "text": " the expression. So this is how you mutate. And as I said, you can also mutate such that you start"}, {"start": 693.4, "end": 704.6, "text": " from scratch. And so that's how you take the second part in this thing is how do you exactly"}, {"start": 704.6, "end": 711.48, "text": " determine the winner and what is the tournament. So how do you do that? The tournament exactly"}, {"start": 711.48, "end": 717.96, "text": " is what we've seen before when we looked at the different layers. So we said we train on C410."}, {"start": 717.96, "end": 725.24, "text": " And what we do is we train these three architectures on C410. So the resonant, the mobile net,"}, {"start": 725.24, "end": 733.1600000000001, "text": " and the efficient net. We train these three architectures on C410 with the EvoNorm layer,"}, {"start": 733.16, "end": 740.28, "text": " instantiated by that particular sample from the population. And then we look at their accuracies."}, {"start": 740.8399999999999, "end": 749.9599999999999, "text": " And we do, we determine what is called the Pareto Frontier of the population. So I think this is"}, {"start": 749.9599999999999, "end": 758.76, "text": " further up. Oh, right here. Okay. So the dots here, the red and the gray dots would be our sample."}, {"start": 758.76, "end": 765.64, "text": " So all of this would be our samples. And their performance here, it's on actually on two models,"}, {"start": 765.64, "end": 772.04, "text": " but in practice, we have three just to graph it better. So we put, I'm here and we determine"}, {"start": 772.04, "end": 778.2, "text": " the Pareto Frontier. Now here, A, B, and C are part of the Pareto Frontier because A performs,"}, {"start": 778.2, "end": 786.52, "text": " outperforms everything else on Model 1, C outperforms everything else on Model 2, and B outperforms"}, {"start": 786.52, "end": 793.48, "text": " C on Model 1, but also outperforms A on Model 2. 
This is what's called the Pareto Frontier."}, {"start": 793.48, "end": 800.76, "text": " And we pick one of those as the winners. So they all are kind of one third winners here."}, {"start": 802.36, "end": 809.96, "text": " So this is how you do the tournament. You pick the winner like this. And then you allow the winner"}, {"start": 809.96, "end": 821.08, "text": " to mutate the last part that is not drawn in here actually is somewhere here ish, which is called"}, {"start": 821.08, "end": 830.6800000000001, "text": " the rejection step. So the rejection step is important because what they want to do is they say"}, {"start": 830.6800000000001, "end": 838.12, "text": " how we have these mutated layers, but some of the mutations are probably going to be just terrible,"}, {"start": 838.12, "end": 848.12, "text": " like destroying everything, not trainable layers. Just horrible, horrible, such that the"}, {"start": 848.12, "end": 855.32, "text": " layer is useless. They don't want to keep, they don't want to put them back here into the population"}, {"start": 855.32, "end": 863.72, "text": " because that might either deteriorate or severely slow down this progress here. So they want to"}, {"start": 863.72, "end": 871.72, "text": " stop, stop them and only the good ones, only the ones that are somewhat fairly okay,"}, {"start": 871.72, "end": 876.84, "text": " get back to the population. They don't always have to improve, but they have to be at minimally"}, {"start": 877.4, "end": 887.72, "text": " useful. So the rejection step, they describe down here. In the rejection protocols, they have two"}, {"start": 887.72, "end": 894.44, "text": " criterion for rejecting mutated architectures. First, they have a quality criterion."}, {"start": 895.24, "end": 902.12, "text": " Say we discard layers that achieve less than 20% validation accuracy in 100 training steps on"}, {"start": 902.12, "end": 909.4, "text": " any of the three anchor architectures. So the reasoning behind this is if you have 100 training"}, {"start": 909.4, "end": 918.28, "text": " steps and you achieve less than 20% validation accuracy, you're not going anywhere, because"}, {"start": 918.28, "end": 925.64, "text": " 10% is already random performance. If you are less than 20% after 100 steps, your layer is"}, {"start": 925.64, "end": 933.72, "text": " pretty useless and can be discarded. So they say this simple mechanism ensures the compute"}, {"start": 933.72, "end": 939.5600000000001, "text": " resources to concentrate on the full training process of a small subset of promising candidates."}, {"start": 939.5600000000001, "end": 947.96, "text": " So the 100 training steps of course is not enough to train fully, but you can see after 100"}, {"start": 947.96, "end": 953.4, "text": " training steps whether or not the layer even does something. So you reject those. So this makes"}, {"start": 953.4, "end": 961.8000000000001, "text": " pretty much sense. The second criterion is what they call stability. They say we reject layers"}, {"start": 961.8, "end": 970.52, "text": " that are subject to numerical instability. How do they find numerical instability? They define it"}, {"start": 970.52, "end": 983.4799999999999, "text": " like this. So what they do is they take the parameters, so the layers, and this is an architecture."}, {"start": 983.48, "end": 998.36, "text": " So the model, these are the convolutional weights are the theta. 
And the G is the computation graph,"}, {"start": 998.36, "end": 1005.16, "text": " which is the evo norm in this case, and there is a loss defined across them. Of course,"}, {"start": 1005.16, "end": 1012.36, "text": " this is the loss of the neural network on the samples. So these are the convolutional layers,"}, {"start": 1012.36, "end": 1017.64, "text": " and these are the normalization layers. Now what we want to do is we want to see how does this"}, {"start": 1017.64, "end": 1026.04, "text": " loss change when we change the convolutional layers. So you have to imagine here are the convolutional"}, {"start": 1026.04, "end": 1031.16, "text": " layers, and then there are these weird normalization layers, and then again, there are the convolutional"}, {"start": 1031.16, "end": 1043.48, "text": " layers. Now we want to see how does the loss change if we change the weights of the convolution by"}, {"start": 1043.48, "end": 1047.88, "text": " a little bit, right? We just change it a little bit and see how does the loss change? This is the"}, {"start": 1047.88, "end": 1056.68, "text": " gradient of the weights basically. This is how you train. This entire thing here is how you train"}, {"start": 1056.68, "end": 1063.4, "text": " the neural network, right? So you want to see how large is this gradient, and you kind of want to"}, {"start": 1063.4, "end": 1070.44, "text": " do this in an adversarial way, so you want to find the maximum perturbation you can achieve, right?"}, {"start": 1071.8, "end": 1079.3200000000002, "text": " You say, okay, if I change this a little bit in the worst direction I possibly can, how large"}, {"start": 1079.32, "end": 1091.08, "text": " is the perturbation going to be? And that's how they define numerical instability. So it basically"}, {"start": 1091.08, "end": 1098.36, "text": " means if this is very high, then the network might be doing well right where it is, but just a little"}, {"start": 1098.36, "end": 1110.52, "text": " bit changing it will make it terrible, right? So they say we ascend the value on this direction"}, {"start": 1110.52, "end": 1115.9599999999998, "text": " for 100 steps, and layer with the worst case gradient norm greater than 10 to the eighth are"}, {"start": 1115.9599999999998, "end": 1123.08, "text": " rejected. In addition, so as a reason, there seems pretty strange, right? This quality criterion,"}, {"start": 1123.08, "end": 1130.52, "text": " it made sense, but this stability criterion, it kind of seems, I mean, reasonable, but strange"}, {"start": 1130.52, "end": 1138.84, "text": " in here, the reason now. So do tests are complementary with each other. For example, we found a layer"}, {"start": 1138.84, "end": 1145.8799999999999, "text": " like this is able to achieve reasonable accuracies on c410 across all the anchor architectures,"}, {"start": 1145.88, "end": 1154.5200000000002, "text": " so it passes the quality criterion above, but it's gradients quickly explode on ImageNet"}, {"start": 1154.5200000000002, "end": 1160.68, "text": " possibly due to the absence of normalization operations. And then you say, aha, okay, so what"}, {"start": 1160.68, "end": 1166.44, "text": " probably happened is the following, they did their experiment without this, right? Just with this"}, {"start": 1166.44, "end": 1172.3600000000001, "text": " quality criterion, which I guess makes sense. They did this, right? They trained on c410,"}, {"start": 1172.36, "end": 1177.56, "text": " that's how they do their evolutionary research. 
Then they took their best performing things among"}, {"start": 1177.56, "end": 1184.84, "text": " them is this one, and they went to ImageNet, and they said, oh, let's test these now on ImageNet"}, {"start": 1184.84, "end": 1191.08, "text": " clusters. Right? We found these new architectures, let's see. And then they got exploding gradients,"}, {"start": 1191.08, "end": 1197.1599999999999, "text": " right? And then they went back into their original problem formulation, say, okay, what can we build"}, {"start": 1197.16, "end": 1203.64, "text": " in to the evolution such that this won't happen? And here you already see kind of the problem with"}, {"start": 1203.64, "end": 1209.4, "text": " this. What you would like to have is kind of an algorithm that is general, such as to not"}, {"start": 1209.4, "end": 1216.76, "text": " depend on the architectures and so on, that is used. But you see already here that the authors,"}, {"start": 1217.5600000000002, "end": 1223.64, "text": " they don't direct the search, right? The search is evolution, but they guide, the evolution is very"}, {"start": 1223.64, "end": 1229.96, "text": " much guided by what these rejection protocols are. And you see here the authors tailoring their"}, {"start": 1229.96, "end": 1236.5200000000002, "text": " rejection protocols to the specific data sets and architectures that they use and the specific"}, {"start": 1236.5200000000002, "end": 1245.5600000000002, "text": " problems they experienced when trying to apply their method. And that I think weakens a bit the"}, {"start": 1245.56, "end": 1254.36, "text": " application of this method because it seems that this particular form of protocols, of this"}, {"start": 1254.36, "end": 1259.72, "text": " particular form of rejection protocols is very much tailored to this, let's do these three"}, {"start": 1259.72, "end": 1268.6799999999998, "text": " architectures on C410 and then go to ImageNet. And that tells me if I want to do this in a very"}, {"start": 1268.68, "end": 1277.48, "text": " different domain that I would have to couldn't, it's not very clear that I could just to plop"}, {"start": 1277.48, "end": 1283.48, "text": " whatever they found works in and it would just work just as outperformingly of the others"}, {"start": 1284.04, "end": 1292.6000000000001, "text": " as in their experiment. It tells me that there is probably like a somewhat large dependence on"}, {"start": 1292.6, "end": 1305.32, "text": " the specifics here. But that being said, these are the rejection criteria, so they reject each step"}, {"start": 1305.32, "end": 1312.12, "text": " here, the worst ones, and they go back into the population and then that process repeats and"}, {"start": 1312.12, "end": 1318.28, "text": " repeats and repeats and then at the end you hopefully end up with some very good normalization layers."}, {"start": 1318.28, "end": 1332.92, "text": " Now I have to see here, if you compare now these found normalization layers with the classic"}, {"start": 1332.92, "end": 1339.6399999999999, "text": " variant, so the classic thing here is this red line, this is batch norm and reloot, this is a classic"}, {"start": 1340.2, "end": 1346.2, "text": " activation normalization combo you put in a network and you see that these methods outperform"}, {"start": 1346.2, "end": 1357.96, "text": " this on a very kind of a stable basis. 
So that's pretty cool, but that is as we said on c410,"}, {"start": 1357.96, "end": 1364.1200000000001, "text": " that is on the exact thing that they search over, so it's not really a surprise that if I"}, {"start": 1364.1200000000001, "end": 1372.6000000000001, "text": " search a bunch of combinations and always get the best ones, I would perform just one of them."}, {"start": 1372.6, "end": 1381.3999999999999, "text": " The interesting thing is what happens now if we take what we found and put them into a different"}, {"start": 1381.3999999999999, "end": 1387.56, "text": " architecture for a different data set. Now here the architecture isn't really different because"}, {"start": 1388.1999999999998, "end": 1395.24, "text": " it's kind of the same, but they do evaluate it on ImageNet. ImageNet different data set than c410,"}, {"start": 1395.24, "end": 1405.48, "text": " much larger and so they put their architectures which here, EvoNorm into ImageNet and devaluated."}, {"start": 1405.48, "end": 1417.08, "text": " You can see that it has fairly competitive results across. I find that to be fairly cool,"}, {"start": 1417.08, "end": 1425.8, "text": " that the best performing ones on c410 would also perform better than the corresponding ones on ImageNet."}, {"start": 1427.0, "end": 1437.32, "text": " You already see as well that it's not super high. So the difference is here or I would say"}, {"start": 1438.1999999999998, "end": 1443.32, "text": " it is improving, but sometimes it's the same, sometimes it's actually worse."}, {"start": 1443.32, "end": 1453.56, "text": " It doesn't appear to, to me those kind of things are not super convincing, especially because"}, {"start": 1453.56, "end": 1458.9199999999998, "text": " this is the paper that suggests these methods, so they're naturally going to present them in the best"}, {"start": 1458.9199999999998, "end": 1470.2, "text": " possible way. So it seems like the massive outperformance on c4 translates only marginally to"}, {"start": 1470.2, "end": 1475.56, "text": " ImageNet and these are the same architectures, the ResNet50 and MobileNet and Fission. These"}, {"start": 1475.56, "end": 1481.0, "text": " were already the architectures that they searched over. So my trust that this new normalization"}, {"start": 1481.0, "end": 1489.64, "text": " layer put into an actual different architecture is less still. They do actually do some experiments"}, {"start": 1489.64, "end": 1497.56, "text": " on that as well, but I just, this is my thoughts when reading this. And as well, and this I find very"}, {"start": 1497.56, "end": 1504.6799999999998, "text": " interesting, this column here are random search. So if you just do a random search, which means"}, {"start": 1504.6799999999998, "end": 1511.56, "text": " you just produce random layers, then it doesn't work at all. So you take the best ones of the random"}, {"start": 1511.56, "end": 1520.9199999999998, "text": " ones you found and it doesn't transfer at all. But interestingly, if you do random search plus"}, {"start": 1520.92, "end": 1528.6000000000001, "text": " rejection, so the same rejection that they do, just you don't do this tournament evolution"}, {"start": 1531.24, "end": 1538.1200000000001, "text": " mutation style, you simply random search and then do rejection. That gives you fairly competitive"}, {"start": 1538.1200000000001, "end": 1549.64, "text": " numbers, right? 
And in some cases, even see here, it does, it outperforms some of the classic"}, {"start": 1549.64, "end": 1559.24, "text": " methods. So just that will give you fairly decent results, right? And that is to me, that seems to be"}, {"start": 1560.68, "end": 1569.48, "text": " even more a sign of, okay, this, what this method is mostly doing is just searching like mad for"}, {"start": 1569.48, "end": 1577.0, "text": " something that works on these particular architectures. And of course, you can find things that work"}, {"start": 1577.0, "end": 1583.16, "text": " better if you search like mad, but then what do you do with it? Like what does it mean? It"}, {"start": 1584.68, "end": 1592.6, "text": " can we generalize? Now they do two additional tasks to show that it does generalize two other"}, {"start": 1592.6, "end": 1599.88, "text": " architecture and tasks. So first of all, they do object detection and instance segmentation,"}, {"start": 1599.88, "end": 1609.16, "text": " right? On Coco. So this is a very different task. This is a mask or CNN, right? And they just"}, {"start": 1609.16, "end": 1616.7600000000002, "text": " put in their layer there. And you can see here that they generally outperform the baseline. I"}, {"start": 1616.7600000000002, "end": 1623.5600000000002, "text": " don't, I can't speak to how much this is, this outperformance is here. It seems like the numbers"}, {"start": 1623.56, "end": 1630.12, "text": " are fairly close together, but they are consistently better. And again, I don't, I don't, I don't"}, {"start": 1630.12, "end": 1637.1599999999999, "text": " necessarily trust these kind of experiments too much because who knows how much effort you can"}, {"start": 1637.1599999999999, "end": 1641.8, "text": " spend on making your method better. But in any case, they show that they are better, which is"}, {"start": 1641.8, "end": 1648.2, "text": " already something. But again, here the R50 indicates that we're again dealing with like ResNet 50,"}, {"start": 1648.2, "end": 1656.3600000000001, "text": " ResNet 101 architectures, which are fairly similar to the ones that we that the method was"}, {"start": 1656.3600000000001, "end": 1665.0, "text": " searching over. So the second thing is they say we generalize to GAN training. So they take a big GAN"}, {"start": 1667.16, "end": 1675.16, "text": " a big GAN deep and they show that their method will outperform these other methods"}, {"start": 1675.16, "end": 1685.3200000000002, "text": " on the IS and FID metrics. I don't even know what inception score and fresh let inception"}, {"start": 1685.3200000000002, "end": 1695.5600000000002, "text": " distance. Yeah. So it will outperform them. But in kind of a weird way, okay, here it outperforms"}, {"start": 1695.5600000000002, "end": 1703.88, "text": " them consistently. But then in the inception score, this batch and R1 plus RLU still seems to be"}, {"start": 1703.88, "end": 1715.3200000000002, "text": " like a lot higher than this EVO norm B0. And then this thing here that was performing worse"}, {"start": 1715.3200000000002, "end": 1726.68, "text": " in the image net is now performing somewhat better. So it is a cool result and definitely cool"}, {"start": 1726.68, "end": 1733.64, "text": " that you can pop this in here. I just think that the things that turn out here that they are"}, {"start": 1734.52, "end": 1741.48, "text": " tuned to very specific architectures to very specific tasks. 
So I think the big GAN deep,"}, {"start": 1741.48, "end": 1746.8400000000001, "text": " the kind of architectures will always be kind of the same. It will always be kind of ResNet-ish"}, {"start": 1746.8400000000001, "end": 1754.3600000000001, "text": " style neural networks. And the tasks here will always be sort of see for image net style things."}, {"start": 1754.36, "end": 1762.1999999999998, "text": " And therefore I believe with the results we've seen, the fact that it outperforms so much on C410."}, {"start": 1762.1999999999998, "end": 1769.8, "text": " But then the GANs on ImageNet become more marginal. I think that indicates that the GANs here"}, {"start": 1769.8, "end": 1777.56, "text": " most probably don't translate the further away you go. So I'm not sure that the EVO norm"}, {"start": 1777.56, "end": 1785.96, "text": " that they find, like that this particular thing here will remain the best thing."}, {"start": 1787.96, "end": 1795.1599999999999, "text": " Across tasks. I think they just found this to work well in their particular setting here."}, {"start": 1795.72, "end": 1799.96, "text": " And if I run the same thing with the slightly different architectures and slightly different tasks,"}, {"start": 1799.96, "end": 1803.8, "text": " I will come up with a different best thing. Yeah."}, {"start": 1803.8, "end": 1809.96, "text": " All right. So these were my comments. They do some interesting experiments where they show that"}, {"start": 1809.96, "end": 1817.8, "text": " if they just do random layers, it's not as performant. Which I can believe if you just"}, {"start": 1817.8, "end": 1823.24, "text": " jumble these things randomly. It's probably not as good. So you need some kind of search criterion."}, {"start": 1823.96, "end": 1831.1599999999999, "text": " And yeah, that was my thoughts on this paper. I invite you to read it. Look at it. Look at the"}, {"start": 1831.16, "end": 1837.3200000000002, "text": " existing experiment. It is a very good evaluated paper. And that bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=DRy_Mr732yA | [Drama] Who invented Contrast Sets? | Funny Twitter spat between researchers arguing over who was first to invent an idea that has probably been around since 1990 :D
References:
https://arxiv.org/abs/2004.02709
https://twitter.com/nlpmattg/status/1247326213296672768
https://arxiv.org/abs/1909.12434
https://twitter.com/zacharylipton/status/1247357810410762240
https://twitter.com/nlpmattg/status/1247373386839252992
https://twitter.com/zacharylipton/status/1247383141075083267
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | I love me some good Twitter drama look at this this is awesome so after this contrast set paper appeared and I've done a video on that the author of it tweeted it out with one of the these long Twitter threads with screenshots and all this seems to be the new marketing tool of academics and as you know I'm not a fan of this paper I think that the number that comes out of such a contrast set is very either useless or counterproductive and you can see my video on that in any case there was another researcher Zachary Lipton who felt like he needed to jump in here saying before the media plits and retweet party gets out of control this idea exists has been published it has a name and a clear justification is called counterfactually augmented data this is amazing look at that and here's the published paper of course and if we look at the published paper this is it right here of course Zach Lipton is an author on that paper and so let's just just read the abstract I haven't read the paper but let's just read the abstract it so I have it classically I have it here my nifty thing here so we can analyze it so this paper if you read the abstract it does sound similar right despite alarm over the reliance of million non-examines blah blah blah blasperious correlation so it talks about the same problems now what do they say given documents in their initial labels we task humans with revise in each document so that it occurs with a counterfactual target label retains internal coherence and avoids unnecessary changes right so this sounds very similar to what these contrast sets do so the counterfactual target label would be the necessary of the contrast set to change the label retains internal coherence which is the in the contrast at this simply given by it supposed to conform to the intent of the data set makers which the intent probably includes internal coherence and it avoids unnecessary changes that conforms to the contrast set only searching in the local environment of a test set sample so you see the definition of these is pretty similar then we go on and say they say class first trained on original data fail on their counterfactual revised count ports and vice versa this experiment was also done by the contrast at paper and then they say class first trained on combined data sets perform remarkably well just shy of those specialized in either domain so immediately we see some differences as well right the main difference I see is they say we task humans and then they train on the they train on the counterfactual revised count parts which probably means they use some mechanical Turks here they say humans because if you want to create a training data set you need lots of data so they probably take a data set and run its training data set again through something like mechanical Turk to get annotations this is exactly what the people of the of the contrast sets claim is wrong with the current pipeline and they so here we have this this thing counterfactually augmented stuff so the contrast sets what they say is we actually need the experts to do this that this the these humans are exactly the wrong people to make the data sets so it has the CFA has some elements correctly the same namely how they construct these labels but who construct the labels and for what reason so here it's experts and for what reason it's for testing it's they say the experts that make the data set should provide an additional contrast test set so this is I mean if if this is just my 
opinion if this is the same idea of course it's very similar but if this counts as the same idea then 95% of all research counts as the same idea as something that Yergen Schmidhuber has done in the 1990s which of course Yergen Schmidhuber will eloquently argue exactly he invented Gans basically the same thing so yeah so if this is it's not the same like I have to say this it's very close but it's not saying as I understand they even cited the other ones so then the big ring starts and this is funny unlike this this is just funny to me so Zach Lippenjump see here it says this has been published has a name and a clearer justification it's called Confraction of the Data here is the published paper we just looked at this right and then Matt Gardner answers and he says Zach and Divionne's work is sorry and Divionne's work is excellent recommend you all go look at it I work provides a different concurrent take on similar issues right and I think here's someone comments that so he says it is in the related works section although mischaracterized and misattributed as contemporary work so position is really that it is kind of a stolen idea and they were apparently in contact with each other during that so this Matt Gardner here says what the differences are he says we take a geometrical view we demonstrate such a wide variety I mean for all intents and purposes if you go through any of the research go to computer vision go to NLP you'll find like the exact I have like I have I review two papers each here that want to produce data that better defines the decision boundary like these people here I mean this is this idea is just get rehashed over and over in slightly different form these two are particularly close but and then see how they pick our paper was finished two months after theirs and then they say we started the project well before and so on why do we feel defensive and then he answers again with this is absolutely false our paper was drafted in July your paper was finished the night before the ACL deadline this is not two months ago but a half a year it is nothing to do it's why do you presume to know when we started drop the nonsense we did this work in May 2019 present the public results in July post it as if there were dropped the posturing so much of what you're doing here is the very cancer in the system I mean I agree just you know slightly refining ideas that previously were there is very bad problem in academia so this is actually a correct point out but I don't think that this particular instance is particularly bad and then says I'm afraid you're simply mistaken I have a history of publishing similar so I've I've something like the last thing I'm gonna say I just invite you to read this is beautiful the last thing to say here if if this counterfactually augmented data if this is in fact the first instance of this general idea to produce counterfactually augmented data that that that does actually fulfill these criteria I would be extremely surprised because this is nothing to do with deep learning right and the real novelty in our field is mostly deep learning so I'm pretty sure someone must have thought of something like this when everyone was just doing grammars and manual features and things like this so I'm I would be extremely surprised if this hasn't been there in one form or another and why the authors of that shouldn't make exactly the same argument that being said it is fairly close like the the fun part here is that it is actually fairly similar idea except after so 
the idea itself is fairly similar but here the focus is on different things and it's also on different data sets and I believe yeah as I said 95 percent of research falls into exactly this category so much fun check it out yeah bye bye | [{"start": 0.0, "end": 7.36, "text": " I love me some good Twitter drama look at this this is awesome so after this"}, {"start": 7.36, "end": 13.52, "text": " contrast set paper appeared and I've done a video on that the author of it"}, {"start": 13.52, "end": 19.68, "text": " tweeted it out with one of the these long Twitter threads with screenshots and"}, {"start": 19.68, "end": 25.8, "text": " all this seems to be the new marketing tool of academics and as you know I'm"}, {"start": 25.8, "end": 30.52, "text": " not a fan of this paper I think that the number that comes out of such a contrast"}, {"start": 30.52, "end": 36.120000000000005, "text": " set is very either useless or counterproductive and you can see my video on"}, {"start": 36.120000000000005, "end": 45.84, "text": " that in any case there was another researcher Zachary Lipton who felt like he"}, {"start": 45.84, "end": 51.36, "text": " needed to jump in here saying before the media plits and retweet party gets out"}, {"start": 51.36, "end": 57.36, "text": " of control this idea exists has been published it has a name and a clear"}, {"start": 57.36, "end": 64.8, "text": " justification is called counterfactually augmented data this is amazing look"}, {"start": 64.8, "end": 70.68, "text": " at that and here's the published paper of course and if we look at the"}, {"start": 70.68, "end": 77.32, "text": " published paper this is it right here of course Zach Lipton is an author on"}, {"start": 77.32, "end": 83.19999999999999, "text": " that paper and so let's just just read the abstract I haven't read the paper"}, {"start": 83.19999999999999, "end": 89.8, "text": " but let's just read the abstract it so I have it classically I have it here my"}, {"start": 89.8, "end": 98.75999999999999, "text": " nifty thing here so we can analyze it so this paper if you read the abstract"}, {"start": 98.75999999999999, "end": 105.28, "text": " it does sound similar right despite alarm over the reliance of million"}, {"start": 105.28, "end": 108.72, "text": " non-examines blah blah blah blasperious correlation so it talks about the same"}, {"start": 108.72, "end": 114.84, "text": " problems now what do they say given documents in their initial labels we task"}, {"start": 114.84, "end": 119.88, "text": " humans with revise in each document so that it occurs with a counterfactual"}, {"start": 119.88, "end": 124.8, "text": " target label retains internal coherence and avoids unnecessary changes"}, {"start": 124.8, "end": 131.12, "text": " right so this sounds very similar to what these contrast sets do so the"}, {"start": 131.12, "end": 137.16, "text": " counterfactual target label would be the necessary of the contrast set to"}, {"start": 137.16, "end": 144.0, "text": " change the label retains internal coherence which is the in the contrast at"}, {"start": 144.0, "end": 149.92000000000002, "text": " this simply given by it supposed to conform to the intent of the data set"}, {"start": 149.92000000000002, "end": 155.48000000000002, "text": " makers which the intent probably includes internal coherence and it avoids"}, {"start": 155.48, "end": 161.44, "text": " unnecessary changes that conforms to the contrast set only searching in the local"}, {"start": 161.44, "end": 168.44, "text": " environment of a test set 
sample so you see the definition of these is pretty"}, {"start": 168.44, "end": 175.2, "text": " similar then we go on and say they say class first trained on original data"}, {"start": 175.2, "end": 179.28, "text": " fail on their counterfactual revised count ports and vice versa this experiment"}, {"start": 179.28, "end": 185.2, "text": " was also done by the contrast at paper and then they say class first"}, {"start": 185.2, "end": 189.76, "text": " trained on combined data sets perform remarkably well just shy of those"}, {"start": 189.76, "end": 196.16, "text": " specialized in either domain so immediately we see some differences as well"}, {"start": 196.16, "end": 201.79999999999998, "text": " right the main difference I see is they say we task humans and then they train"}, {"start": 201.79999999999998, "end": 207.56, "text": " on the they train on the counterfactual revised count parts which probably"}, {"start": 207.56, "end": 212.6, "text": " means they use some mechanical Turks here they say humans because if you want"}, {"start": 212.6, "end": 218.07999999999998, "text": " to create a training data set you need lots of data so they probably take a"}, {"start": 218.07999999999998, "end": 222.0, "text": " data set and run its training data set again through something like mechanical"}, {"start": 222.0, "end": 230.04, "text": " Turk to get annotations this is exactly what the people of the of the"}, {"start": 230.04, "end": 236.44, "text": " contrast sets claim is wrong with the current pipeline and they so here we"}, {"start": 236.44, "end": 241.92, "text": " have this this thing counterfactually augmented stuff so the contrast sets"}, {"start": 241.92, "end": 247.2, "text": " what they say is we actually need the experts to do this that this the these"}, {"start": 247.2, "end": 253.79999999999998, "text": " humans are exactly the wrong people to make the data sets so it has the CFA has"}, {"start": 253.79999999999998, "end": 260.32, "text": " some elements correctly the same namely how they construct these labels but who"}, {"start": 260.32, "end": 265.91999999999996, "text": " construct the labels and for what reason so here it's experts and for what reason"}, {"start": 265.92, "end": 271.76, "text": " it's for testing it's they say the experts that make the data set should provide"}, {"start": 271.76, "end": 280.36, "text": " an additional contrast test set so this is I mean if if this is just my opinion"}, {"start": 280.36, "end": 285.12, "text": " if this is the same idea of course it's very similar but if this counts as the"}, {"start": 285.12, "end": 290.40000000000003, "text": " same idea then 95% of all research counts as the same idea as something that"}, {"start": 290.4, "end": 295.96, "text": " Yergen Schmidhuber has done in the 1990s which of course Yergen Schmidhuber will"}, {"start": 295.96, "end": 305.91999999999996, "text": " eloquently argue exactly he invented Gans basically the same thing so yeah so"}, {"start": 305.91999999999996, "end": 311.56, "text": " if this is it's not the same like I have to say this it's very close but it's"}, {"start": 311.56, "end": 316.96, "text": " not saying as I understand they even cited the other ones so then the"}, {"start": 316.96, "end": 321.2, "text": " big ring starts and this is funny unlike this this is just funny to me so"}, {"start": 321.2, "end": 326.15999999999997, "text": " Zach Lippenjump see here it says this has been published has a name and a"}, {"start": 326.15999999999997, "end": 330.24, "text": " clearer 
justification it's called Confraction of the Data here is the"}, {"start": 330.24, "end": 336.2, "text": " published paper we just looked at this right and then Matt Gardner answers and he"}, {"start": 336.2, "end": 343.28, "text": " says Zach and Divionne's work is sorry and Divionne's work is excellent"}, {"start": 343.28, "end": 347.88, "text": " recommend you all go look at it I work provides a different concurrent take on"}, {"start": 347.88, "end": 355.15999999999997, "text": " similar issues right and I think here's someone comments that so he says it"}, {"start": 355.15999999999997, "end": 359.76, "text": " is in the related works section although mischaracterized and misattributed as"}, {"start": 359.76, "end": 366.76, "text": " contemporary work so position is really that it is kind of a stolen idea and"}, {"start": 366.76, "end": 372.52, "text": " they were apparently in contact with each other during that so this Matt"}, {"start": 372.52, "end": 377.88, "text": " Gardner here says what the differences are he says we take a geometrical view we"}, {"start": 377.88, "end": 381.71999999999997, "text": " demonstrate such a wide variety I mean for all intents and purposes if you go"}, {"start": 381.71999999999997, "end": 386.08, "text": " through any of the research go to computer vision go to NLP you'll find like"}, {"start": 386.08, "end": 394.88, "text": " the exact I have like I have I review two papers each here that want to produce"}, {"start": 394.88, "end": 399.44, "text": " data that better defines the decision boundary like these people here I mean"}, {"start": 399.44, "end": 405.64, "text": " this is this idea is just get rehashed over and over in slightly different"}, {"start": 405.64, "end": 413.72, "text": " form these two are particularly close but and then see how they pick our paper"}, {"start": 413.72, "end": 422.0, "text": " was finished two months after theirs and then they say we started the project"}, {"start": 422.0, "end": 432.2, "text": " well before and so on why do we feel defensive and then he answers again with"}, {"start": 432.2, "end": 437.84, "text": " this is absolutely false our paper was drafted in July your paper was finished"}, {"start": 437.84, "end": 442.64, "text": " the night before the ACL deadline this is not two months ago but a half a year"}, {"start": 442.64, "end": 448.64, "text": " it is nothing to do it's why do you presume to know when we started drop the"}, {"start": 448.64, "end": 453.52, "text": " nonsense we did this work in May 2019 present the public results in July"}, {"start": 453.52, "end": 458.96, "text": " post it as if there were dropped the posturing so much of what you're doing"}, {"start": 458.96, "end": 466.56, "text": " here is the very cancer in the system I mean I agree just you know slightly"}, {"start": 466.56, "end": 472.64, "text": " refining ideas that previously were there is very bad problem in academia so"}, {"start": 472.64, "end": 476.12, "text": " this is actually a correct point out but I don't think that this particular"}, {"start": 476.12, "end": 480.32, "text": " instance is particularly bad and then says I'm afraid you're simply mistaken I"}, {"start": 480.32, "end": 484.64, "text": " have a history of publishing similar so I've I've something like the last thing"}, {"start": 484.64, "end": 488.64, "text": " I'm gonna say I just invite you to read this is beautiful the last thing to say"}, {"start": 488.64, "end": 498.4, "text": " here if if this counterfactually augmented data if this is in fact the 
first"}, {"start": 498.4, "end": 504.92, "text": " instance of this general idea to produce counterfactually augmented data"}, {"start": 504.92, "end": 512.08, "text": " that that that does actually fulfill these criteria I would be extremely"}, {"start": 512.08, "end": 517.28, "text": " surprised because this is nothing to do with deep learning right and the real"}, {"start": 517.28, "end": 522.96, "text": " novelty in our field is mostly deep learning so I'm pretty sure someone must"}, {"start": 522.96, "end": 527.48, "text": " have thought of something like this when everyone was just doing grammars and"}, {"start": 527.48, "end": 534.88, "text": " manual features and things like this so I'm I would be extremely surprised"}, {"start": 534.88, "end": 540.12, "text": " if this hasn't been there in one form or another and why the authors of that"}, {"start": 540.12, "end": 544.4, "text": " shouldn't make exactly the same argument that being said it is fairly close"}, {"start": 544.4, "end": 549.48, "text": " like the the fun part here is that it is actually fairly similar idea except"}, {"start": 549.48, "end": 556.0, "text": " after so the idea itself is fairly similar but here the focus is on different"}, {"start": 556.0, "end": 562.12, "text": " things and it's also on different data sets and I believe yeah as I said 95"}, {"start": 562.12, "end": 566.68, "text": " percent of research falls into exactly this category so much fun check it out"}, {"start": 566.68, "end": 596.64, "text": " yeah bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=qeEO2GECQk0 | Evaluating NLP Models via Contrast Sets | Current NLP models are often "cheating" on supervised learning tasks by exploiting correlations that arise from the particularities of the dataset. Therefore, they often fail to learn the original intent of the dataset creators. This paper argues that NLP models should be evaluated on Contrast Sets, which are hand-crafted perturbations by the dataset authors that capture their intent in a meaningful way.
https://arxiv.org/abs/2004.02709
Abstract:
Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.
Authors: Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there, today we're looking at evaluating NLP models via contrast sets and these are too many authors from too many places for me to read out so we'll just jump right into the problem. So what is the problem or let's jump into the solution? Here you see a visual question answering task, visual question answering in this case you have two pictures right here, picture one, picture two, and a sentence. Two similarly colored and similarly post-chou dogs are face to face in one image. And I guess the task here is to see to have the system answer is this correct or incorrect. And as you see here I believe that's a correct statement. Or you're maybe tasked to ask which is the image that this applies to, right? Is it image one or image two? And of course here it's image one. Now the problem with such systems is that there are a lot of easy things that the models can do that will usually get them the answer. What we like to imagine is that the model will look at this and recognize that this is a dog here, right? This is a dog, here is its face and this is a dog and here is its face and it will see a ha, there's a count, there's two of them, right? There's two of them and there's a notion of face and there's a notion of pose and so on. But usually there are tricks that the models can do to get this easier. For example I know that in a particular visual question answering system whenever there is a question of what is the ground covered in or something like this? The answer is always snow, right? You don't even have to look at the image. And similarly there are a lot of these kind of tricks that the models learn and the authors recognize correctly that this is mostly a data set problem, right? So the data set usually what you do in these data sets is you have an image, right? You have an image that you scrape from the web or something and it has some mountains and maybe, right? And then there's snow, right? On the mountains, on the ground and maybe it has. And you give this to a bunch of mechanical turks or someone like a raider and you instruct them, you produce a question to this image. You give them a couple of examples, right? And they're usually kind of lazy and they will just look at it and be like, what questions could I ask? Okay, I'll ask. You know, you need to ask something. Usually the instructions are it must be visual and it must maybe be answerable with a one word answer or something like this, right? Or it must, you know, it must be a multiple choice question. So there are these number of instructions will usually be like, okay, what's kind of special about this picture? Okay, there's snow. I'm going to ask about that snow, right? So the problem is mainly the process of data set generation, right? That is that will lead to kind of biases and easy solutions for the models where the models they will simply kind of learn statistical correlations between things and the intention. So we have a big divergent divergence between the intention of what the data set creators want, right? The intention is in this case is visual understanding visual of the world. And there's a big, big difference between this and between how the data set is really constructed. So the authors are trying to address this with what they call contrast sets. Now they say, okay, you get out of this process here, you get a data set, right? You get a training data set and a test data set, maybe here a smaller test data set. 
What they say is what we should do is we should additionally have these things called contrast sets, right? So this is train, this is test. And usually these two come from the same distribution, right? You simply make them and then you split them somehow and you take the test from the train, but these here are not from the same distribution. This is the contrast. What they argue is that the authors of the data set should create the contrast set, right? So you see that there's a split here where the data set comes from. They argue that the authors of the data set with knowing what intention they have, they should create the contrast data set manually by hand in order to make kind of really hard examples that show what they want out of the system. They capture this here in their example. So if we go back to the example, here are things and they suggest to do this via perturbations. So what they would do is they would start at this example up here, right? They would start and they would perturb it textually or via image. So they would perturb it to make it change the label, the gold label. This is different from adversarial examples. In adversarial examples, you want to perturb a sample such that it is still the same, but to the classifier, it's different. Here you have the opposite goal. You want to make something that is means kind of the opposite, but you want to test whether your classifier can pick up on that. So in this case, the one example would be two similarly colored and similarly post cats instead of dogs, right? Or face-to-face in one image. That would change the label, right? Whereas before the answer was yes, that's a correct sentence. Now it's no, that's not a correct sentence. There are no cats in these images. Also, here three similarly colored dogs. So the intention of the authors, you have to view it through this lens. The intention here is that the system can recognize the species, right? Of the entities in the images, the system can count and the system can compare, right? Compare in this case colors. So you want to kind of make perturbations on these attributes from a test image, right? You can also think about image perturbations where you keep the sentence, but you modify the image, such that there are still two dogs and they're still facing each other. But they're not similarly colored anymore, right? So the similarly colored here would be the attribute that, where before it was true, now it's false with the new image, right? So you get the gist that the people that created the dataset that know their intention will create manually these samples. And the authors, they propose a new metric to track this, but essentially the authors propose how well the models do on these contrast sets will be a reflection. It should be, it should be kind of an additional thing that people do with their NLP models. All right, so you get the picture. That is, I believe the entire gist of this paper. And I have some problems. First of all, here they say, all right, let's give a toy example in two dimensions. Say you have this dataset, right? And the red one is the correct decision boundary, right? And you want to capture that. But because you only have limited training date and because you in this generation processes, you have systematic biases. So if we had non-systematic biases, we would think that, okay, we maybe pick this and this one and this one here and this one here and this one here, right? We don't get all of them, but we kind of get an IID sample, right? 
That wouldn't be so much of a problem. You could still kind of recover the decision boundary, but because we have systematic biases, the authors argue, we actually introduce biases. So the systematic bias here is that we would, of the blue ones, we would only capture things on this layer up here and of the red ones, orange ones, we don't really capture things of the level down here. And thereby we introduce the kind of dataset, the bias. It doesn't require this complex decision boundary anymore. Right? And if we now, the problem is if we collect the dataset like this and we simply say, well, these ones are the test set and these ones are the train set, right? It will generalize well to the test set, but it will not generalize well to what we actually want. And therefore the author say, well, if we introduce these contrast sets here, then you see that the decision boundary that we found will not perform well on these contrast sets, right? So they would say we take one example of the test set, right? This is, you can see this is this example right here. And we would perturb it to make it change its label, or in this case, one of them where the label remains that we would kind of perturb it meaningfully. And yeah, so as I said, I have multiple problems with this. First, 2D toy examples, very, very bad for NLP models. First of all, low-dimensional link tuition does not generalize to high-dimensional link tuition, like very, very little. Second of all, even worse, usually these NLP models have so many parameters, much more parameters than you have dataset, which means that your decision boundary is incidentally going to be simple, even if you had all the data you could possibly want. It's just a very different kind of problem. And then the next problem is if even by doing this contrast set, and you already see it here, right? You already see it, you can only kind of bicker about the data. Okay, but with the contrast set, you only really capture this one aspect. So if that was actually well adhered to, you could measure very locally whether or not this would work or not. And the ability to come up with meaningful contrast sets to ever capture what the model is doing is almost impossible because you have to create them manually. And then you suggest that the authors themselves make these contrast sets. Remember, the authors are the ones that gave these instructions, right? These instructions right here, the authors provided them to the dataset annotators. So the authors will probably be even more biased if they have to do their own, right? If they have to now create their own contrast examples, they will probably, even though they know their intention, they will probably be like more biased than if you at least this year, at least this year is a distributed process across people, right? So you get things that you wouldn't have thought of. But if just the three authors of the date of the paper make the contrast examples, I would argue that's that's even more biased measure often. So all of this it just strikes me as the paper is basically saying let's try on a few things. And I think the fundamental problem is much, much deeper and it goes with this intention part. Like I get it, the visual question answering dataset doesn't capture the, doesn't capture what you want. It doesn't make the model suddenly understand that there are dogs and there is species of animal and so on. It simply makes it correlate things. But that's what deep learning and especially NLP does. 
So, right, it's like saying you build a build an image in that classifier and it can't fly. And if I try it on my test that it requires my computer to fly and my image net model can do this, then it doesn't serve my intention, right? And I mean it's a cross example, but ultimately you the correct approach should be to better encapsulate your intention into the dataset generating process and then correctly interpreting the results. That mean, okay, on this dataset as far as we can tell the way we created it, this is the performance of the model. It doesn't, the model will never learn to fulfill your intention. And I get it, that's what you're saying, but still even with this contrast that I think it's a really bad measure to formally propose. It's, I think you should much more propose how is the dataset generating process different from what you want and what are the limitations there, right? And so that's, that, I think that will lead to much more meaningful, meaningful results than simply the authors providing a few manually put examples that they feel capture their intention. It will not, will not, the reason we do deep learning instead of straight forward, if else programming is because we cannot capture even our intentions. And therefore dataset generation is the only, is the only method we have, so to say. All right, so ultimately I believe this, this whole NLP, especially the visual question answering and so on the natural language, understanding part needs to have a grounding. So ultimately I think grounding grounded NLP, it means basically that you're not only doing NLP, which is simply you take text and you take images and you correlate them somehow, right? You just make a statistical connection. Grounded NLP models is the hope that you could build something that actually understands the world, understands that there's entities that is interacted, there's something like a pose that there is something like what the color means, right? What a dog is and so on. And as entities, I think we're not there yet. And I think that will be the ultimate solution to these kind of tasks, not, not any sort of local, very local, very low dimensional perturbation. I mean, yeah, let's say you create a contrast set, you will be able to capture one tiny little bit of your intention, one tiny little bit, even though you know your intention, you will capture tiny little bit, all of the thousand other degrees of freedom of your own intention, you won't be able to capture in the contrast set. I guarantee you. All right, that was my course with that. I invite you to read the whole paper. They actually do this for NLP data sets. It's a lot of work and they show that the models perform much worse on their contrast sets. And interestingly, the human stones, the humans are able to solve the contrast set, of course, of course, because you tell the humans what the task is, right? That's like human succeed on contrast set, like how surprising what you should do is you should just provide the humans with the data set, not tell them what the task is, not even worse, just provide them with the encoded data set, like not the text itself, but actually the token IDs, right? And then and then make them do the thing. And the humans will just as well make a statistical correlation between the tokens and the images or whatnot. And the humans will fail just as well on the test on these contrast sets because the humans, maybe they'll figure out what the task is, but probably not. 
So human succeed on contrast set, how surprising you tell them the intention while you don't tell it to the model. Yes, I see a bit critical, but yeah, please read the paper. It's an interesting paper. And with that, goodbye. | [{"start": 0.0, "end": 6.04, "text": " Hi there, today we're looking at evaluating NLP models via contrast sets and"}, {"start": 6.04, "end": 13.56, "text": " these are too many authors from too many places for me to read out so we'll"}, {"start": 13.56, "end": 22.64, "text": " just jump right into the problem. So what is the problem or let's jump into the"}, {"start": 22.64, "end": 29.44, "text": " solution? Here you see a visual question answering task, visual question answering"}, {"start": 29.44, "end": 35.72, "text": " in this case you have two pictures right here, picture one, picture two, and a"}, {"start": 35.72, "end": 43.92, "text": " sentence. Two similarly colored and similarly post-chou dogs are face to face in"}, {"start": 43.92, "end": 53.040000000000006, "text": " one image. And I guess the task here is to see to have the system answer is this"}, {"start": 53.04, "end": 61.44, "text": " correct or incorrect. And as you see here I believe that's a correct statement. Or"}, {"start": 61.44, "end": 68.28, "text": " you're maybe tasked to ask which is the image that this applies to, right? Is it"}, {"start": 68.28, "end": 76.12, "text": " image one or image two? And of course here it's image one. Now the problem with"}, {"start": 76.12, "end": 83.08, "text": " such systems is that there are a lot of easy things that the models can do that"}, {"start": 83.08, "end": 87.60000000000001, "text": " will usually get them the answer. What we like to imagine is that the model will"}, {"start": 87.60000000000001, "end": 92.52000000000001, "text": " look at this and recognize that this is a dog here, right? This is a dog, here is"}, {"start": 92.52000000000001, "end": 98.44, "text": " its face and this is a dog and here is its face and it will see a ha, there's a"}, {"start": 98.44, "end": 105.8, "text": " count, there's two of them, right? There's two of them and there's a notion of"}, {"start": 105.8, "end": 115.88, "text": " face and there's a notion of pose and so on. But usually there are tricks that the"}, {"start": 115.88, "end": 120.84, "text": " models can do to get this easier. For example I know that in a particular"}, {"start": 120.84, "end": 127.52, "text": " visual question answering system whenever there is a question of what is the"}, {"start": 127.52, "end": 139.64, "text": " ground covered in or something like this? The answer is always snow, right? You"}, {"start": 139.64, "end": 145.04, "text": " don't even have to look at the image. And similarly there are a lot of these"}, {"start": 145.04, "end": 151.28, "text": " kind of tricks that the models learn and the authors recognize correctly that"}, {"start": 151.28, "end": 158.32, "text": " this is mostly a data set problem, right? So the data set usually what you do in"}, {"start": 158.32, "end": 162.64, "text": " these data sets is you have an image, right? You have an image that you scrape"}, {"start": 162.64, "end": 167.72, "text": " from the web or something and it has some mountains and maybe, right? And then"}, {"start": 167.72, "end": 175.0, "text": " there's snow, right? On the mountains, on the ground and maybe it has. 
And you"}, {"start": 175.0, "end": 181.08, "text": " give this to a bunch of mechanical turks or someone like a raider and you"}, {"start": 181.08, "end": 186.48000000000002, "text": " instruct them, you produce a question to this image. You give them a couple of"}, {"start": 186.48000000000002, "end": 190.76000000000002, "text": " examples, right? And they're usually kind of lazy and they will just look at it"}, {"start": 190.76000000000002, "end": 196.56, "text": " and be like, what questions could I ask? Okay, I'll ask. You know, you need to ask"}, {"start": 196.56, "end": 203.12, "text": " something. Usually the instructions are it must be visual and it must maybe be"}, {"start": 203.12, "end": 210.12, "text": " answerable with a one word answer or something like this, right? Or it must, you"}, {"start": 210.12, "end": 214.0, "text": " know, it must be a multiple choice question. So there are these number of"}, {"start": 214.0, "end": 217.52, "text": " instructions will usually be like, okay, what's kind of special about this"}, {"start": 217.52, "end": 224.68, "text": " picture? Okay, there's snow. I'm going to ask about that snow, right? So the"}, {"start": 224.68, "end": 232.4, "text": " problem is mainly the process of data set generation, right? That is that will"}, {"start": 232.4, "end": 238.08, "text": " lead to kind of biases and easy solutions for the models where the models"}, {"start": 238.08, "end": 243.8, "text": " they will simply kind of learn statistical correlations between things and the"}, {"start": 243.8, "end": 254.44, "text": " intention. So we have a big divergent divergence between the intention of what"}, {"start": 254.44, "end": 262.92, "text": " the data set creators want, right? The intention is in this case is visual"}, {"start": 262.92, "end": 273.04, "text": " understanding visual of the world. And there's a big, big difference between"}, {"start": 273.04, "end": 280.20000000000005, "text": " this and between how the data set is really constructed. So the authors are"}, {"start": 280.20000000000005, "end": 285.64, "text": " trying to address this with what they call contrast sets. Now they say, okay,"}, {"start": 285.64, "end": 290.88, "text": " you get out of this process here, you get a data set, right? You get a training"}, {"start": 290.88, "end": 297.36, "text": " data set and a test data set, maybe here a smaller test data set. What they say is"}, {"start": 297.36, "end": 303.64, "text": " what we should do is we should additionally have these things called contrast"}, {"start": 303.64, "end": 312.28, "text": " sets, right? So this is train, this is test. And usually these two come from the"}, {"start": 312.28, "end": 316.44, "text": " same distribution, right? You simply make them and then you split them somehow"}, {"start": 316.44, "end": 322.2, "text": " and you take the test from the train, but these here are not from the same"}, {"start": 322.2, "end": 330.32, "text": " distribution. This is the contrast. What they argue is that the authors of the"}, {"start": 330.32, "end": 338.4, "text": " data set should create the contrast set, right? So you see that there's a split"}, {"start": 338.4, "end": 343.24, "text": " here where the data set comes from. 
They argue that the authors of the data set"}, {"start": 343.24, "end": 347.8, "text": " with knowing what intention they have, they should create the contrast data set"}, {"start": 347.8, "end": 355.52, "text": " manually by hand in order to make kind of really hard examples that show what"}, {"start": 355.52, "end": 362.16, "text": " they want out of the system. They capture this here in their example. So if we go"}, {"start": 362.16, "end": 370.56, "text": " back to the example, here are things and they suggest to do this via perturbations."}, {"start": 370.56, "end": 376.12, "text": " So what they would do is they would start at this example up here, right? They"}, {"start": 376.12, "end": 383.24, "text": " would start and they would perturb it textually or via image. So they would"}, {"start": 383.24, "end": 390.0, "text": " perturb it to make it change the label, the gold label. This is different from"}, {"start": 390.0, "end": 395.64, "text": " adversarial examples. In adversarial examples, you want to perturb a sample such"}, {"start": 395.64, "end": 400.84, "text": " that it is still the same, but to the classifier, it's different. Here you have"}, {"start": 400.84, "end": 407.4, "text": " the opposite goal. You want to make something that is means kind of the opposite,"}, {"start": 407.4, "end": 411.88, "text": " but you want to test whether your classifier can pick up on that. So in this"}, {"start": 411.88, "end": 416.64, "text": " case, the one example would be two similarly colored and similarly post cats"}, {"start": 416.64, "end": 421.47999999999996, "text": " instead of dogs, right? Or face-to-face in one image. That would change the"}, {"start": 421.48, "end": 427.8, "text": " label, right? Whereas before the answer was yes, that's a correct sentence. Now"}, {"start": 427.8, "end": 433.28000000000003, "text": " it's no, that's not a correct sentence. There are no cats in these images. Also,"}, {"start": 433.28000000000003, "end": 439.68, "text": " here three similarly colored dogs. So the intention of the authors, you have to"}, {"start": 439.68, "end": 445.36, "text": " view it through this lens. The intention here is that the system can recognize"}, {"start": 445.36, "end": 452.68, "text": " the species, right? Of the entities in the images, the system can count and the"}, {"start": 452.68, "end": 458.56, "text": " system can compare, right? Compare in this case colors. So you want to kind of"}, {"start": 458.56, "end": 464.56, "text": " make perturbations on these attributes from a test image, right? You can also"}, {"start": 464.56, "end": 470.28000000000003, "text": " think about image perturbations where you keep the sentence, but you modify the"}, {"start": 470.28000000000003, "end": 475.0, "text": " image, such that there are still two dogs and they're still facing each other."}, {"start": 475.0, "end": 482.4, "text": " But they're not similarly colored anymore, right? So the similarly colored here"}, {"start": 482.4, "end": 489.76, "text": " would be the attribute that, where before it was true, now it's false with the"}, {"start": 489.76, "end": 495.36, "text": " new image, right? So you get the gist that the people that created the"}, {"start": 495.36, "end": 503.32, "text": " dataset that know their intention will create manually these samples. 
And the"}, {"start": 503.32, "end": 508.76, "text": " authors, they propose a new metric to track this, but essentially the authors"}, {"start": 508.76, "end": 513.92, "text": " propose how well the models do on these contrast sets will be a reflection."}, {"start": 513.92, "end": 520.6, "text": " It should be, it should be kind of an additional thing that people do with"}, {"start": 520.6, "end": 528.0, "text": " their NLP models. All right, so you get the picture. That is, I believe the"}, {"start": 528.0, "end": 537.88, "text": " entire gist of this paper. And I have some problems. First of all, here they say,"}, {"start": 537.88, "end": 542.8, "text": " all right, let's give a toy example in two dimensions. Say you have this"}, {"start": 542.8, "end": 547.56, "text": " dataset, right? And the red one is the correct decision boundary, right? And you"}, {"start": 547.56, "end": 553.08, "text": " want to capture that. But because you only have limited training date and because"}, {"start": 553.08, "end": 561.6, "text": " you in this generation processes, you have systematic biases. So if we had"}, {"start": 561.6, "end": 567.84, "text": " non-systematic biases, we would think that, okay, we maybe pick this and this one"}, {"start": 567.84, "end": 572.64, "text": " and this one here and this one here and this one here, right? We don't get all of"}, {"start": 572.64, "end": 576.88, "text": " them, but we kind of get an IID sample, right? That wouldn't be so much of a"}, {"start": 576.88, "end": 580.88, "text": " problem. You could still kind of recover the decision boundary, but because we"}, {"start": 580.88, "end": 589.04, "text": " have systematic biases, the authors argue, we actually introduce biases. So the"}, {"start": 589.04, "end": 594.4, "text": " systematic bias here is that we would, of the blue ones, we would only capture"}, {"start": 594.4, "end": 603.64, "text": " things on this layer up here and of the red ones, orange ones, we don't"}, {"start": 603.64, "end": 611.04, "text": " really capture things of the level down here. And thereby we introduce the kind of"}, {"start": 611.04, "end": 618.24, "text": " dataset, the bias. It doesn't require this complex decision boundary anymore."}, {"start": 618.24, "end": 624.4399999999999, "text": " Right? And if we now, the problem is if we collect the dataset like this and we"}, {"start": 624.4399999999999, "end": 629.48, "text": " simply say, well, these ones are the test set and these ones are the train"}, {"start": 629.48, "end": 634.28, "text": " set, right? It will generalize well to the test set, but it will not generalize"}, {"start": 634.28, "end": 641.48, "text": " well to what we actually want. And therefore the author say, well, if we introduce"}, {"start": 641.48, "end": 647.6800000000001, "text": " these contrast sets here, then you see that the decision boundary that we found"}, {"start": 647.6800000000001, "end": 654.96, "text": " will not perform well on these contrast sets, right? So they would say we take"}, {"start": 654.96, "end": 660.36, "text": " one example of the test set, right? This is, you can see this is this example right"}, {"start": 660.36, "end": 667.5600000000001, "text": " here. And we would perturb it to make it change its label, or in this case, one of"}, {"start": 667.5600000000001, "end": 674.96, "text": " them where the label remains that we would kind of perturb it meaningfully. And"}, {"start": 674.96, "end": 681.96, "text": " yeah, so as I said, I have multiple problems with this. 
First, 2D toy examples,"}, {"start": 681.96, "end": 688.24, "text": " very, very bad for NLP models. First of all, low-dimensional link tuition does not"}, {"start": 688.24, "end": 694.96, "text": " generalize to high-dimensional link tuition, like very, very little. Second of all,"}, {"start": 694.96, "end": 700.64, "text": " even worse, usually these NLP models have so many parameters, much more"}, {"start": 700.64, "end": 705.48, "text": " parameters than you have dataset, which means that your decision boundary is"}, {"start": 705.48, "end": 711.48, "text": " incidentally going to be simple, even if you had all the data you could possibly"}, {"start": 711.48, "end": 723.48, "text": " want. It's just a very different kind of problem. And then the next problem is if"}, {"start": 723.48, "end": 729.48, "text": " even by doing this contrast set, and you already see it here, right? You already see it,"}, {"start": 729.48, "end": 734.98, "text": " you can only kind of bicker about the data. Okay, but with the contrast set, you only"}, {"start": 734.98, "end": 743.6800000000001, "text": " really capture this one aspect. So if that was actually well adhered to, you could"}, {"start": 743.6800000000001, "end": 751.08, "text": " measure very locally whether or not this would work or not. And the ability to"}, {"start": 751.08, "end": 757.28, "text": " come up with meaningful contrast sets to ever capture what the model is doing is"}, {"start": 757.28, "end": 762.48, "text": " almost impossible because you have to create them manually. And then you suggest"}, {"start": 762.48, "end": 768.98, "text": " that the authors themselves make these contrast sets. Remember, the authors are"}, {"start": 768.98, "end": 773.98, "text": " the ones that gave these instructions, right? These instructions right here, the"}, {"start": 773.98, "end": 782.98, "text": " authors provided them to the dataset annotators. So the authors will probably be"}, {"start": 782.98, "end": 788.48, "text": " even more biased if they have to do their own, right? If they have to now create"}, {"start": 788.48, "end": 793.48, "text": " their own contrast examples, they will probably, even though they know their"}, {"start": 793.48, "end": 799.48, "text": " intention, they will probably be like more biased than if you at least this"}, {"start": 799.48, "end": 803.98, "text": " year, at least this year is a distributed process across people, right? So you get"}, {"start": 803.98, "end": 807.98, "text": " things that you wouldn't have thought of. But if just the three authors of the"}, {"start": 807.98, "end": 811.98, "text": " date of the paper make the contrast examples, I would argue that's that's"}, {"start": 811.98, "end": 820.98, "text": " even more biased measure often. So all of this it just strikes me as the paper"}, {"start": 820.98, "end": 826.48, "text": " is basically saying let's try on a few things. And I think the fundamental"}, {"start": 826.48, "end": 832.48, "text": " problem is much, much deeper and it goes with this intention part. Like I get it,"}, {"start": 832.48, "end": 841.48, "text": " the visual question answering dataset doesn't capture the, doesn't capture"}, {"start": 841.48, "end": 845.98, "text": " what you want. It doesn't make the model suddenly understand that there are"}, {"start": 845.98, "end": 849.98, "text": " dogs and there is species of animal and so on. It simply makes it correlate"}, {"start": 849.98, "end": 856.48, "text": " things. But that's what deep learning and especially NLP does. 
So, right, it's"}, {"start": 856.48, "end": 864.98, "text": " like saying you build a build an image in that classifier and it can't fly."}, {"start": 864.98, "end": 871.48, "text": " And if I try it on my test that it requires my computer to fly and my"}, {"start": 871.48, "end": 876.98, "text": " image net model can do this, then it doesn't serve my intention, right?"}, {"start": 876.98, "end": 884.48, "text": " And I mean it's a cross example, but ultimately you the correct approach"}, {"start": 884.48, "end": 890.98, "text": " should be to better encapsulate your intention into the dataset generating"}, {"start": 890.98, "end": 895.48, "text": " process and then correctly interpreting the results. That mean, okay, on this"}, {"start": 895.48, "end": 900.98, "text": " dataset as far as we can tell the way we created it, this is the performance of"}, {"start": 900.98, "end": 906.98, "text": " the model. It doesn't, the model will never learn to fulfill your intention."}, {"start": 906.98, "end": 910.98, "text": " And I get it, that's what you're saying, but still even with this contrast"}, {"start": 910.98, "end": 918.98, "text": " that I think it's a really bad measure to formally propose. It's, I think you"}, {"start": 918.98, "end": 923.98, "text": " should much more propose how is the dataset generating process different from"}, {"start": 923.98, "end": 931.98, "text": " what you want and what are the limitations there, right? And so that's, that,"}, {"start": 931.98, "end": 937.98, "text": " I think that will lead to much more meaningful, meaningful results than simply"}, {"start": 937.98, "end": 944.98, "text": " the authors providing a few manually put examples that they feel capture their"}, {"start": 944.98, "end": 949.98, "text": " intention. It will not, will not, the reason we do deep learning instead of straight"}, {"start": 949.98, "end": 955.98, "text": " forward, if else programming is because we cannot capture even our"}, {"start": 955.98, "end": 963.98, "text": " intentions. And therefore dataset generation is the only, is the only method we"}, {"start": 963.98, "end": 970.98, "text": " have, so to say. All right, so ultimately I believe this, this whole NLP,"}, {"start": 970.98, "end": 974.98, "text": " especially the visual question answering and so on the natural language,"}, {"start": 974.98, "end": 980.98, "text": " understanding part needs to have a grounding. So ultimately I think"}, {"start": 980.98, "end": 988.98, "text": " grounding grounded NLP, it means basically that you're not only doing NLP,"}, {"start": 988.98, "end": 992.98, "text": " which is simply you take text and you take images and you correlate them"}, {"start": 992.98, "end": 997.98, "text": " somehow, right? You just make a statistical connection. Grounded NLP models"}, {"start": 997.98, "end": 1001.98, "text": " is the hope that you could build something that actually understands the world,"}, {"start": 1001.98, "end": 1005.98, "text": " understands that there's entities that is interacted, there's something like a"}, {"start": 1005.98, "end": 1010.98, "text": " pose that there is something like what the color means, right? What a dog is"}, {"start": 1010.98, "end": 1016.98, "text": " and so on. And as entities, I think we're not there yet. And I think that"}, {"start": 1016.98, "end": 1023.98, "text": " will be the ultimate solution to these kind of tasks, not, not any sort of"}, {"start": 1023.98, "end": 1030.98, "text": " local, very local, very low dimensional perturbation. 
I mean, yeah,"}, {"start": 1030.98, "end": 1036.98, "text": " let's say you create a contrast set, you will be able to capture one tiny little"}, {"start": 1036.98, "end": 1041.98, "text": " bit of your intention, one tiny little bit, even though you know your intention,"}, {"start": 1041.98, "end": 1046.98, "text": " you will capture tiny little bit, all of the thousand other degrees of freedom"}, {"start": 1046.98, "end": 1051.98, "text": " of your own intention, you won't be able to capture in the contrast set. I guarantee"}, {"start": 1051.98, "end": 1056.98, "text": " you. All right, that was my course with that. I invite you to read the whole"}, {"start": 1056.98, "end": 1063.98, "text": " paper. They actually do this for NLP data sets. It's a lot of work and they show"}, {"start": 1063.98, "end": 1068.98, "text": " that the models perform much worse on their contrast sets. And interestingly,"}, {"start": 1068.98, "end": 1072.98, "text": " the human stones, the humans are able to solve the contrast set, of course,"}, {"start": 1072.98, "end": 1079.98, "text": " of course, because you tell the humans what the task is, right? That's like human"}, {"start": 1079.98, "end": 1086.98, "text": " succeed on contrast set, like how surprising what you should do is you should"}, {"start": 1086.98, "end": 1091.98, "text": " just provide the humans with the data set, not tell them what the task is,"}, {"start": 1091.98, "end": 1095.98, "text": " not even worse, just provide them with the encoded data set, like not the text"}, {"start": 1095.98, "end": 1101.98, "text": " itself, but actually the token IDs, right? And then and then make them do the"}, {"start": 1101.98, "end": 1106.98, "text": " thing. And the humans will just as well make a statistical correlation between"}, {"start": 1106.98, "end": 1112.98, "text": " the tokens and the images or whatnot. And the humans will fail just as well on"}, {"start": 1112.98, "end": 1118.98, "text": " the test on these contrast sets because the humans, maybe they'll figure out what"}, {"start": 1118.98, "end": 1123.98, "text": " the task is, but probably not. So human succeed on contrast set, how surprising"}, {"start": 1123.98, "end": 1130.98, "text": " you tell them the intention while you don't tell it to the model. Yes, I see"}, {"start": 1130.98, "end": 1135.98, "text": " a bit critical, but yeah, please read the paper. It's an interesting paper. And"}, {"start": 1135.98, "end": 1137.98, "text": " with that, goodbye."}] |
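To make the contrast-set evaluation discussed in the row above a bit more concrete, here is a minimal, self-contained sketch (plain Python, no NLP library) of how one might score a model on original test examples together with their manually written contrast variants. The `predict` function, the toy data, and the consistency-style metric below are illustrative placeholders of my own, not the paper's released code or its exact metric.

```python
from typing import Callable, Dict, List, Tuple

Example = Tuple[str, str]  # (input text, gold label)

def contrast_set_report(
    predict: Callable[[str], str],
    test_set: List[Example],
    contrasts: Dict[int, List[Example]],  # test-example index -> its perturbed variants
) -> Dict[str, float]:
    """Score a model on the original test set and on hand-written contrast examples.

    contrasts[i] holds small, meaningful perturbations of test_set[i]; each variant
    carries its own (possibly changed) gold label, as described in the transcript.
    """
    orig_correct = sum(predict(x) == y for x, y in test_set)

    contrast_correct, contrast_total, consistent = 0, 0, 0
    for i, variants in contrasts.items():
        hits = [predict(x) == y for x, y in variants]
        contrast_correct += sum(hits)
        contrast_total += len(hits)
        # Stricter, more local probe of the decision boundary: the model has to get
        # the original example *and* every one of its contrast variants right.
        if predict(test_set[i][0]) == test_set[i][1] and all(hits):
            consistent += 1

    return {
        "original_accuracy": orig_correct / len(test_set),
        "contrast_accuracy": contrast_correct / max(contrast_total, 1),
        "per_example_consistency": consistent / max(len(contrasts), 1),
    }

# Toy usage with a deliberately brittle "model" that keys on a single word.
toy_predict = lambda text: "positive" if "great" in text else "negative"
test = [("the movie was great", "positive")]
contrast = {0: [("the movie was not great", "negative"),
                ("the movie was great fun", "positive")]}
print(contrast_set_report(toy_predict, test, contrast))
```

The point of the stricter per-example check is exactly the one made in the transcript: a model that merely exploits annotation artifacts can score well on the original test set while failing on small, meaning-changing perturbations around each example.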
Yannic Kilcher | https://www.youtube.com/watch?v=8wkgDnNxiVs | POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions | From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning.
https://arxiv.org/abs/1901.01753
Abstract:
While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that learn their solutions, an important question is whether the problems themselves can be generated by the algorithm at the same time as they are being solved. Such a process would in effect build its own diverse and expanding curricula, and the solutions to problems at various stages would become stepping stones towards solving even more challenging problems later in the process. The Paired Open-Ended Trailblazer (POET) algorithm introduced in this paper does just that: it pairs the generation of environmental challenges and the optimization of agents to solve those challenges. It simultaneously explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems if better, catalyzing innovation. The term open-ended signifies the intriguing potential for algorithms like POET to continue to create novel and increasingly complex capabilities without bound. Our results show that POET produces a diverse range of sophisticated behaviors that solve a wide range of environmental challenges, many of which cannot be solved by direct optimization alone, or even through a direct-path curriculum-building control algorithm introduced to highlight the critical role of open-endedness in solving ambitious challenges. The ability to transfer solutions from one environment to another proves essential to unlocking the full potential of the system as a whole, demonstrating the unpredictable nature of fortuitous stepping stones. We hope that POET will inspire a new push towards open-ended discovery across many domains, where algorithms like POET can blaze a trail through their interesting possible manifestations and solutions.
Authors: Rui Wang, Joel Lehman, Jeff Clune, Kenneth O. Stanley
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called POET. So as you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles. And it is and remains a challenging reinforcement learning problem to have an agent learn to overcome various obstacles and walk well in different environments. So the paper we're going to look at is called POET; it's by Uber Engineering. And the full title is the Paired Open-Ended Trailblazer: endlessly generating increasingly complex and diverse learning environments and their solutions, by Rui Wang, Joel Lehman, Jeff Clune, and Kenneth O. Stanley, as I said, from Uber AI Labs. So as you already saw, the challenge they take on is this bipedal walker problem. Now their method is very general and not limited to this problem, but this is the problem that they focus on. I'll jump over some of the explanations here and dig right into the problem. As you can see, the problem is the following: you have this thing here, which is the walker, and it has two legs and specifically it has four joints. So the four joints are two here and two here, and you can give torque on all of the four joints. So it's basically a four-output problem, and you do have sensors as input. So the input, I believe, is a lidar. The lidar is this red line you see here; I think it has 16 of those in various angles. It also has pressure detection on the feet, I believe, to see whether or not they are in contact with the ground. And it might also have a kind of angle sensor, a gyroscope, that tells you which angle the head is at with respect to the ground. So you have various sensors on these things, and you're able to basically control what the legs are doing. And your goal is to make this go as far to the right and as fast as possible. So you see the reward down here is negative 100 if the robot falls over, that means if the head hits the ground, and then it is 130 times delta x, that's how far you go to the right, minus the hull angle, and the hull angle, as I said, is this angle here. You want to keep it as stable as possible, because if there's a difference in the angle per step, then you get penalized. And also you get penalized for each torque you apply. So you want to apply minimal force on the joints in order to go very far. But by far the most important point is to go to the right as far and as fast as you can. There is an end here somewhere, and if you reach it, you get a score that is above 230, and they choose the limit of 230 here to determine whether the environment is solved. So if the agent gets 230 or more, then it has solved the environment. That's what they claim; that's from experience. So as you see, the environment has various obstacles. There are holes that you can fall into, that you need to jump or step over. There are these kinds of stumps; they can be of various heights, so this is a bit shorter and this is a bit longer. And the general terrain has a roughness, and this can go from very smooth to very rough. So this is a parameterized environment, and obviously they are able to generate these environments from parameters. And the goal now is to have an agent that walks well in any environment that you can think of. Right, so here on the left you see this is very challenging, down the stairs. This also isn't too easy because there's a gap here.
And there are five parameters to these environments. So there is the general roughness of the terrain, that means kind of how many hills it has and how fast they're coming. There is the stump lower bound and stump upper bound, I believe, so how high the stumps are, and also how long the gaps are. And with these parameters you control how difficult an environment is. So the straightforward thing to do is simply to, let's say, sample environments and have a reinforcement learning approach to this. And that usually doesn't work. You can already see this here, without having talked about what the algorithm is. This is the approach where you try this thing called evolution strategies, but you can think of it as just a straightforward optimization procedure: there's an agent and there's an environment, and you're trying to solve the environment using just straightforward optimization. Now the evolution strategies are not your classic RL algorithm, but you can compare it to one; I have the feeling they like the more esoteric learning algorithms. In any case, you see in these environments, large gap, rough surface and so on, these are supposed to be the ones plotted in figure three, so these two environments and also these environments here. The evolution strategy, so the classic approach where you just straightforwardly optimize, gets very low scores on average, whereas POET gets very high scores here, above the 230 threshold. So what's happening? If you're trying to just solve these environments from scratch, you basically don't really have a big chance of solving them, because let's say you're here and you're trying to move to the right. You know, you might learn how to do this, and you see this from-scratch solution actually manages to get to the right, but then as soon as you reach this, you're in this gap and you just fall down the gap, because all you've learned so far is how to move right. Right, so what you would need to do is, you would need to plan ahead like what POET does: you need to see that there's a gap and plan ahead, and already lift up the leg in order to then step over the gap here and then do a little jump right here. And this sequence of actions, this kind of planning ahead, is very difficult to learn for a classic RL algorithm, because you basically get reward for everything you do. So initially you get reward for moving to the right, so that's 10 if you reach here, another 10 if you reach here, another 10 if you reach here, and another 10 if you reach here, whereas if you lift up your leg that's like minus five, because now you've changed this angle, and we saw this gives negative reward. Right, so a classic optimization algorithm will always fall into the hole, because that is where you get the immediate reward, whereas you'd have to do a sequence of actions that doesn't give you a reward right now but gives you more reward later, and in order to learn this you need a better algorithm than just straightforward optimization. So maybe I can explain this: if you have a maze, and here is the start and here is the goal, and there are walls, and the walls are something like this, then what you need to do is go around here. But what a classic optimization algorithm does is, it always goes here, because that's ever closer to the goal, and then it just gets stuck because it can't go any further. So it needs to go further away from the goal before it can get closer.
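Before moving on, here is a compact code restatement of the environment knobs and the per-step reward just described, as I understand them from the video. The class name, the field names and the penalty coefficients are my own illustrative choices, not values taken from the POET code or from the Gym walker environment.

```python
from dataclasses import dataclass

@dataclass
class EnvParams:
    """One walker environment, described by (roughly) the five knobs from the video:
    overall terrain roughness plus lower/upper bounds on stump height and gap width."""
    roughness: float
    stump_height_low: float
    stump_height_high: float
    gap_width_low: float
    gap_width_high: float

SOLVED_THRESHOLD = 230.0  # an episode return above this counts as "solved"

def step_reward(dx: float, hull_angle_change: float, torques, fell: bool) -> float:
    """Per-step reward as described above: -100 for falling on the head, a big bonus
    for moving right, and small penalties for tilting the hull and for applying torque
    (the penalty coefficients here are illustrative, not the environment's exact values)."""
    if fell:
        return -100.0
    return 130.0 * dx - 5.0 * abs(hull_angle_change) - 0.001 * sum(abs(t) for t in torques)

flat_easy = EnvParams(0.0, 0.0, 0.0, 0.0, 0.0)  # the easy, flat environment POET starts from
print(step_reward(dx=0.01, hull_angle_change=0.02, torques=[0.3, -0.2, 0.1, 0.4], fell=False))
```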
So these people we've talked about this before in like open ended learning novelty search what you would want to do is you would want to gradually build up solutions that can explore the space. And basically build up these solutions and there are two components to what this poet algorithm does so the first component is curriculum learning. Curriculum learning. What does curriculum learning mean curriculum learning means that you start off with easy tasks and you increasingly build up more and more and more complex tasks. So let's say I have an environment here and I'm going to try to draw and at the beginning we just kind of start off with this flat surface right and here is our little walker right here and we'll just train it to move right on that and that should be doable with kind of with a classic approach. And then we gradually move to more difficult environments so maybe we'll make it a bit more rough right and an agent that can already walk to the right already kind of has think of it as a pre training in like NLP. You can then get more and more challenging and then maybe at some point you can build in you can build in a gap right so you build in one of these gaps and I would already knows how to move to the right and now I might actually it might might learn to jump a small gap right if you make it small at the beginning not like this one down here. It's a very large gap but if you make it small by accident it might stumble over it and then learn and continuously how to master the gap. So this is the curriculum learning approach it means that from environment to environment you get harder harder and harder challenges so first flat then more rough then more rough with a gap and so on. The second approach, the second ingredient to poet is what I what they call stepping stone learning or transfer learning or things like this and that's where you kind of have to think of this not as a single agent optimizing but as a population of agents. So let's say you do this curriculum learning right and you're getting fairly well here at rough terrains right more and more rough terrains but you in parallel you also have a second optimization procedure you also start out kind of flat but with this thing you go as we said before small gap. You keep it flat but you just increase the number of gaps here right gap. Whereas over here you just keep making the terrain rougher and rougher right so what these what the philosophy is that an agent that might you know be able to master this rougher terrain it might actually that skill because here you this kind of this kind of. This kind of this kind of looks like a gap here the skill of hopping over this gap here might actually transfer to the environment over here where you do have a proper you know gap in the environment or the skill that you learn from an environment where you have one of these stumps right so here let's draw in one of these stumps where you have to go over. And if you have a walker that can successfully walk over this that skill now might transfer over here in order to get over this over this picky terrain here so the idea of poet is to start off with a generic flat very easy environment and this is a very easy environment. And then spawn new ones so you want to spawn new environments in kind of a hereditary way so this one might get a bit rougher this one might include this and this one might include a gap or something like this and then again you want to spawn new environments and more rough. 
More rough more rough with a stump here and this one retains the gap sorry and this one now gets two gaps and so on and you want to continuously train these and then always you want to check whether or not the skill that you learn over here might actually transfer to anyone over here. So you get this three of this continuous three of solutions and once you improve on one branch this might actually be good on another branch right they always make the comparison to let's say biological evolution where a strategy that works over here for birds is all of a sudden can be cross adopted by mammals. It for an entirely different problem but the same skill might be valuable. So this this is basically the two ingredients of poet and now I want to show you the complete poet algorithm. So what does it do you start off with an initial environment right and in poet every environment is paired with an agent so there is one agent per environment right so for the time steps what you do is first of all you go through your environments and you mutate them and we already seen these environments they can be generated from a parameter value. So we have five numbers right how rough how stumpy and how how wide the gaps are let's say we have three numbers to two and this might be one this might be two this might be five right so what you want to do is you want to mutate them right you want to spawn children and each of these parameters has a chance of mutating this might be. One three five and this environment might be one four six and this one might be two. Two five right you spawn new ones. You already see that the requirement here is that you can actually have environments that are procedurally generated and mutated like this where a small mutation probably is going to lead to a small change in the environment in any case you mutate them. And then you want to optimize your age each agent so each of these environments is paired with a new agent that always tries to solve that particular environment so now within one environment you simply do your classic optimization. We already saw here the evolution strategy is akin to a classic optimization algorithm from reinforcement learning. So each agent you optimize for a couple of steps right not fully every time but for a couple of steps so each agent including the one in the original environment each agent is continuously trained on its environment throughout the process. Of course you have to be you have bounded computation so you need to drop out the very old ones but in principle continuously as all of this goes on all the agents are always trained on their environments so the agent here this walker will always try to solve this particular environment and the walker here that is now newly generated when the environment is generated will only try to solve this particular environment throughout the whole algorithm right and then all right so you do mutations you spawn new ones and then you do a couple of steps in optimization right and ES step and then you do this transfer attempts right what you want to do is you want to evaluate all the candidates on all the environments in principle you can you can cut this down. 
But in principle you want to go through the environments and say okay this environment right here when I evaluate all of the other agents in this environment you can do this in a couple of different ways where you just straight up try them or try to optimize them for a few steps to see whether they can be adapted easily to that environment but ultimately you have to come up with a query. So you can go back with a criterion to say for each agent is the agent better or worse than the agent that is continuously trained on this environment if it's worse than you keep this one if if anyone is better then you transfer that better one to replace this one right and you basically copy it over to this new environment and that's where this transfer learning comes in. So you continuously trying all the agents on all the environments and if they are better you transfer them right so here you say if the environment score is better than the one that you have transfer it. Now there is a lot hidden here for example in this mutate environment step they do check whether or not the new mutated environments are not too hard and not too easy and that basically means whether or not the agents can solve them but not solve them too easily. They also check whether the environments are enough novel so you need a couple of checks here you solvable and that means not too easy and not too hard right so they need to pass like a certain score but they need to be kind of solvable to an okay score so there's a score range and also novel. They check whether or not the mutated environments are novel enough and I believe they just do this by calculating the distance between two environments in terms of their parameter vectors so to determine whether or not these are novel. I don't mean the distance just between two but the distance of all of the ones you've seen so far so if we go to a very beautiful drawing here where is the tree. If you create a new environment let's say you create a new environment right here then you want to check it against all environments you've seen so far to determine whether or not it is new or not so you want to create the distance to all of these and if you have enough distance to your nearest neighbors then you are novel and that's kind of how they determine whether environment is new. Alright so that's basically the poet algorithm you continuously create new environments by mutation you ensure that they are solvable not hard enough sorry not too hard but hard enough ensure that they are novel and then you optimize each agent for its own environment continuously as the process goes on. So it's not I want to stress this it's not only the frontier so you're not only looking at the newest generation but you're always looking at all of the generation of the because the older ones while the environments are easier they have been optimized for longer on this environment so the skills might be very handy so you always want to look at your entire population. And then you do crucially do this these transfer attempts so that's the poet algorithm there is a lot hidden here and I can't want to stress that just if you just look at the amount of hyper parameters there's so many hyper parameters in this how much you transfer how much you mutate how many steps you do each of these sub routines here has a billion. So that's a billion hyper parameters and learning rates and and so on. 
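For readers who prefer code to the verbal walkthrough, here is a heavily simplified but runnable toy sketch of the outer loop just described: mutate environments, keep children that are neither too easy nor too hard and that are novel (measured by parameter-vector distance to everything seen so far), keep optimizing every paired agent, and attempt transfers. Everything here is a stand-in I made up for illustration: environments and agents are plain vectors, the score function replaces an actual walker episode, the thresholds are arbitrary, and the per-agent optimizer is a trivial local search instead of the ES steps the video describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: an "environment" is just its parameter vector (the five knobs from
# the video), an "agent" is a policy parameter vector, and score() replaces running
# an actual walker episode.
def score(agent, env):
    return float(-np.sum((agent - env) ** 2))  # toy: the agent should match its environment

def optimize_a_little(agent, env, tries=8, sigma=0.1):
    """Stand-in for "a couple of ES steps": keep a random perturbation if it scores better."""
    for _ in range(tries):
        candidate = agent + sigma * rng.normal(size=agent.size)
        if score(candidate, env) > score(agent, env):
            agent = candidate
    return agent

def mutate(env, p=0.8, scale=0.3):
    child = env.copy()
    mask = rng.random(env.size) < p  # each parameter mutates with some probability
    child[mask] += rng.normal(scale=scale, size=int(mask.sum()))
    return child

D, MAX_ACTIVE = 5, 10
envs, agents = [np.zeros(D)], [rng.normal(size=D)]  # start from the flat, easy environment
archive = [envs[0].copy()]                           # every environment ever accepted
MIN_SCORE, MAX_SCORE, NOVELTY = -5.0, -0.1, 0.2      # "not too hard, not too easy, novel enough"

for iteration in range(50):
    # 1) mutate environments; keep children that are solvable-but-hard and novel enough
    for env, agent in list(zip(envs, agents)):
        child = mutate(env)
        s = score(agent, child)
        novel = min(np.linalg.norm(child - e) for e in archive) > NOVELTY
        if MIN_SCORE < s < MAX_SCORE and novel:
            envs.append(child)
            agents.append(agent.copy())   # the child starts out with its parent's agent
            archive.append(child.copy())
    envs, agents = envs[-MAX_ACTIVE:], agents[-MAX_ACTIVE:]  # bounded compute: drop the oldest pairs

    # 2) every agent keeps training on its own paired environment
    agents = [optimize_a_little(a, e) for a, e in zip(agents, envs)]

    # 3) transfer attempts: if another agent does better on an environment, copy it over
    for i, env in enumerate(envs):
        best = max(range(len(agents)), key=lambda j: score(agents[j], env))
        if score(agents[best], env) > score(agents[i], env):
            agents[i] = agents[best].copy()

print(f"{len(archive)} environments created, best paired score: "
      f"{max(score(a, e) for a, e in zip(agents, envs)):.3f}")
```

The real system differs in many ways (neural-network policies, a proper ES optimizer with many workers, eligibility rules for which environments may reproduce, capped transfer evaluations), but the control flow has this shape.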
So to me, if I look at this algorithm, I'm very scared: if I attempted to do something like this myself, it is going to be a long and hard thing to evaluate all of these different hyperparameters that you have to tune. So let me shortly dip into what the evolution strategy does, just so you know, because you might just be familiar with your classic REINFORCE algorithm. So in policy gradient methods, what you do is you scale the gradient of the parameters of your network, which is your policy network here, according to your reward. So in classical reinforcement learning, this here would be the reward you got, which basically means if you did an action and you got higher reward, you want to make your network do that action more. Right, here in evolution strategies, what you do, it's a different way of doing the same thing basically, is you spawn different agents. So you have your current parameters, and you want to spawn a number of noisy versions of those parameters, and then you want to evaluate each one, and now you want to adjust your parameters in the direction of the ones that did well. So basically, you are here with your parameters, you create a bunch of noisy versions of it, and let's say these two performed really well, then you want to adjust your parameters in the direction of those two. That's basically what this says: this is the noisy version, and then this is the noise that produced the noisy version, so if this number here is high, then you will adjust your parameters in that direction. It's a fairly cool way, especially if you can't backprop through your policy; it is a pretty neat thing. So this is the ES step algorithm, but you can think of it just as an RL algorithm (there is a short code sketch of this update after this video's row). All right, so they do various experiments to show that this actually has merits. I've already shown you: if you take the same environments and try to solve them directly by this evolution strategy, then it will not succeed, because of the problems we've discussed before. Now the comparison is a bit unfair, because of course, for POET, the problem here is you can't have it solve a particular environment, because the environments constantly change. Right, you constantly mutate the environments, you never know where it's going, it's not directed. So if your goal is to solve a particular environment, you cannot do it with POET; you can hope that the agent that comes out will perform well, right, and do something like this. But I believe that these environments that they test on here are ones that appeared during the POET run. Right, so it's kind of an unfair comparison, I feel, to do this on an environment where, you know, this POET agent actually comes from an environment that POET has generated in its whole mutation-tree curriculum while building it up, and then the poor ES algorithm is simply tasked with solving that particular environment from scratch. So yes, always keep in mind: this one can have a goal, this one doesn't have a goal, right? That's kind of the drawback. But as you can see, POET does get super high scores, whereas ES, the classic algorithm, completely fails. And they also investigate the importance of transfer learning, so they compare to, like, classic curriculum learning algorithms. There are curriculum learning algorithms where you continuously try to build up the difficulty of these environments, but
you also do it in a goal directed way so as I said if you have an environment that has like a gap and then a stump a high stump or two high stumps you want to start out flat and then maybe building a small gap and a small stump and so on until you're here it's very much goal directed but it doesn't have this kind of population with transfer learning aspect of poet so if they compare this you can see here the red the red the red one sorry colored it blue stupidly the red one is whatever poet was able to solve now this is the five dimensions of the parameters and the more on the outside it is the harder the environment and for the same for the same environment the blue one is what the curriculum learning algorithm has managed so it's the best environment the curriculum learning algorithm has been able to solve while trying to build up to the so if we take this here is the environment that poet solved again the comparison is kind of unfair because we're starting out from an environment that poet has already solved and then we're trying to build our way up to it with the classic algorithm by basically again this is it's comparing a non goal directed thing something that just happened to a goal directed process that needs to get this particular environment to work in any case at some point this curriculum learning algorithm will fail like let's say that's here that's the environment that has somewhat of a gap but no stump right and that would be the the blue line here they do like five runs and they plot them here and you can see every time the classic curriculum learning algorithm manages to only solve a much much less challenging environment than the poet algorithm achieved even though it's it's trying to reach exactly that right and so here they show the difference so if you just the classified environments if it's just challenging than the classic algorithm the curriculum learning algorithm can solve it somewhat so the distance is close to zero but as you go more and more challenging the distance between poet and the classic algorithm becomes larger and larger they do give some examples of what this transfer learning does so they have this parent environment that just kind of slouches forward on the ground and then the child environment has a mutation that has now little stumps in it right so you can't get over it right now but the child environment because it's it's a small stump so it might stumble across stumble across learns to lift its leg here and it transfers this back to the parent right at a later iteration which is pretty cool and then the parent gets even better as a result of that transfer so we have two transfer learning events here that mutually help these agents remember both the parent and the child are continuously trained as the process goes on all right and they do some more things where they do actual poet not a classic algorithm but poet without transfer learning and they see that okay the poet without transfer is able to solve some of the very challenging problems but never reaches the extremely challenging stage and that's kind of their argument why the transfer learning is necessary so in total I would say this is a cool algorithm it has many many many many many many hyper parameters and these experimental results with that many hyper parameters you need to take it with the grain of salt because it's always possible that they just haven't put as much effort into the comparisons as they have into their own thing to get it to work all right with that I wish you a nice 
day and check out the paper have lots of descriptions check out the blog post where they have animations and the YouTube video and with that bye bye | [{"start": 0.0, "end": 9.0, "text": " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called Poet."}, {"start": 9.0, "end": 21.0, "text": " So as you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles."}, {"start": 21.0, "end": 35.0, "text": " And it is and remains a challenging reinforcement learning problem to have an agent learn to overcome various obstacles and walk well in different environments."}, {"start": 35.0, "end": 45.0, "text": " So the paper we're going to look at is called Poet. It's called it's by Uber Engineering."}, {"start": 45.0, "end": 64.0, "text": " And the full pronunciation is the paired open ended trail blazer endlessly generating increasingly complex and diverse learning environments and their solutions by Roy Wong, Joel, Aiman, Jeff, Clune, and Kenneth, oh, Stanley, as I said from Uber AI labs."}, {"start": 64.0, "end": 77.0, "text": " So as you already saw the challenge, they take on is this bipedal walker problem. Now their method is very general and not limited to this problem, but this is the problem that they focus on."}, {"start": 77.0, "end": 83.0, "text": " And a jump sum of the explanations here and dig right into the problem."}, {"start": 83.0, "end": 102.0, "text": " As you can see, the problem is the following you have this thing here, which is the walker and it has two legs and specifically it has four joints. So the four joints are here to and here to and you can give torque on all of the four joints."}, {"start": 102.0, "end": 119.0, "text": " So it's a it's basically a four output problem and you do have sensors as input. So the inputs, I believe, is a lidar. So the lidar is this red line you see here."}, {"start": 119.0, "end": 133.0, "text": " I think it has 16 of those in various angles and also it has pressure detection on the feet, I believe, to see whether or not they are in contact with the ground."}, {"start": 133.0, "end": 147.0, "text": " And it might also have a kind of an angle, so a gyroscope in that tells you which angle with respect to the ground the head is."}, {"start": 147.0, "end": 162.0, "text": " And you have various sensors on these things and you're able to basically control what the legs are doing. And your goal is to make this go as far to the right and as fast as possible."}, {"start": 162.0, "end": 188.0, "text": " So you see the reward down here is negative 100 if the robot falls over. That means if the head hits the ground and then it is 130 times delta X. That's how far you go to the right minus the whole angle and the whole angle, as I said, is this angle here."}, {"start": 188.0, "end": 197.0, "text": " Keep it as stable as possible because if there if there's a difference in the angle per step, then you get penalized."}, {"start": 197.0, "end": 209.0, "text": " And also you get penalized for each torque you apply. So you want to kind of apply minimal force on the joints in order to go very far."}, {"start": 209.0, "end": 232.0, "text": " But by by far the most important point is to go to the right as far and as fast as you can. There is an end here somewhere. 
And if you reach it, you get a score that is above 230 and they choose the limit of 230 here to determine."}, {"start": 232.0, "end": 242.0, "text": " So if the agent gets 230 or more than it has solved the environment. That's what they they claim that's from experience."}, {"start": 242.0, "end": 253.0, "text": " So as you see the environment has various obstacles here. There are holes that you can fall into that you need to jump or step over. There are these kind of stumps."}, {"start": 253.0, "end": 268.0, "text": " Here they can be of various height. So this is a bit shorter and this is a bit longer. And the general terrain has a roughness and this can go to very rough from very smooth."}, {"start": 268.0, "end": 290.0, "text": " So this is a parameterized environment and they're obviously they are able to generate these environments from parameters. And the the goal now is to have an agent that walks well in any environment that you can think of."}, {"start": 290.0, "end": 301.0, "text": " Right so here on the left you see this is very challenging down the stairs. This also isn't too easy because there's a gap here."}, {"start": 301.0, "end": 314.0, "text": " And there are five parameters of these of these environments. So there is the general roughness of the terrain. That means kind of how many hills it has and how fast they're coming."}, {"start": 314.0, "end": 326.0, "text": " There is the stump lower bound stump upper bound. I believe so how high the stumps are and also how long the gaps are."}, {"start": 326.0, "end": 332.0, "text": " And with these parameters you control how difficult an environment is."}, {"start": 332.0, "end": 347.0, "text": " So the straightforward thing to do is simply to let's say sample environments and have a reinforcement learn approach to this. And that usually doesn't work."}, {"start": 347.0, "end": 365.0, "text": " I already want to see this without having talked about what the algorithm is. This is the approach where you try this thing is called evolution strategies. But you can think of it as just an straightforward optimization procedure."}, {"start": 365.0, "end": 382.0, "text": " There's an agent and there's an environment and you're trying to solve the environment using just straightforward optimization. Now the evolution strategies are not your classic or algorithm but you can compare it to it."}, {"start": 382.0, "end": 399.0, "text": " I have the feeling they like the more esoteric learning algorithms. In any case, so you see in these environments large gap rough surface and so on."}, {"start": 399.0, "end": 418.0, "text": " These are supposed to be the plane figures three. So these two environments and also these environments here. The evolution strategy so the classic approach if you just straight forward optimize they get very low scores on average."}, {"start": 418.0, "end": 446.0, "text": " Whereas poet gets here very high scores above the 230 threshold. 
So what's happening if you're trying to just solve these environments from scratch you basically don't really have a big chance of solving them because let's say you're here and you're trying to move to the right you know you might learn how to do this and you see this from scratch solution actually."}, {"start": 446.0, "end": 475.0, "text": " Manages to get to the right but then as soon as you reach this you're in this gap and you just fall down the gap because all you've learned so far is how to move right right so what you would need to do is you would need to plan ahead like what poet does you need to see that there's a gap into plan ahead and already lift up the leg in order to then step over the gap here and then do a little jump right here."}, {"start": 475.0, "end": 495.0, "text": " And this sequence of action this kind of planning ahead it is very difficult to learn this for a classic or a lot of reason because you basically get reward for everything you do so in initially you get reward for moving to the right so that's 10 if you reach here another 10 if you reach here and so"}, {"start": 495.0, "end": 524.0, "text": " there is another 10 if you reach here and another 10 if you reach here whereas if you lift up your leg that's like minus five because now this you've changed this angle and we saw this is negative reward right so a classic optimization algorithm will always fall into the whole because that is where you get the immediate reward whereas you have to you'd have to do a sequence of action that doesn't give you a reward right now."}, {"start": 524.0, "end": 548.0, "text": " But it gives you more reward later and in order to learn this you need a better algorithm that's just straight forward optimization so maybe I can explain this if you have a maze and here is the start and here is the goal and there is like walls and the walls are something like this."}, {"start": 548.0, "end": 561.0, "text": " So what you need to do is go around here but what a classic optimization algorithm does is always like it goes here because that's ever so closer to the goal and then it just gets stuck because it can't"}, {"start": 561.0, "end": 585.0, "text": " get stuck. So it needs to go further away before it gets closer. 
So these people we've talked about this before in like open ended learning novelty search what you would want to do is you would want to gradually build up solutions that can explore the space."}, {"start": 585.0, "end": 600.0, "text": " And basically build up these solutions and there are two components to what this poet algorithm does so the first component is curriculum learning."}, {"start": 600.0, "end": 606.0, "text": " Curriculum learning."}, {"start": 606.0, "end": 620.0, "text": " What does curriculum learning mean curriculum learning means that you start off with easy tasks and you increasingly build up more and more and more complex tasks."}, {"start": 620.0, "end": 645.0, "text": " So let's say I have an environment here and I'm going to try to draw and at the beginning we just kind of start off with this flat surface right and here is our little walker right here and we'll just train it to move right on that and that should be doable with kind of with a classic approach."}, {"start": 645.0, "end": 660.0, "text": " And then we gradually move to more difficult environments so maybe we'll make it a bit more rough right and an agent that can already walk to the right already kind of has think of it as a pre training in like NLP."}, {"start": 660.0, "end": 684.0, "text": " You can then get more and more challenging and then maybe at some point you can build in you can build in a gap right so you build in one of these gaps and I would already knows how to move to the right and now I might actually it might might learn to jump a small gap right if you make it small at the beginning not like this one down here."}, {"start": 684.0, "end": 694.0, "text": " It's a very large gap but if you make it small by accident it might stumble over it and then learn and continuously how to master the gap."}, {"start": 694.0, "end": 709.0, "text": " So this is the curriculum learning approach it means that from environment to environment you get harder harder and harder challenges so first flat then more rough then more rough with a gap and so on."}, {"start": 709.0, "end": 734.0, "text": " The second approach, the second ingredient to poet is what I what they call stepping stone learning or transfer learning or things like this and that's where you kind of have to think of this not as a single agent optimizing but as a population of agents."}, {"start": 734.0, "end": 761.0, "text": " So let's say you do this curriculum learning right and you're getting fairly well here at rough terrains right more and more rough terrains but you in parallel you also have a second optimization procedure you also start out kind of flat but with this thing you go as we said before small gap."}, {"start": 761.0, "end": 769.0, "text": " You keep it flat but you just increase the number of gaps here right gap."}, {"start": 769.0, "end": 790.0, "text": " Whereas over here you just keep making the terrain rougher and rougher right so what these what the philosophy is that an agent that might you know be able to master this rougher terrain it might actually that skill because here you this kind of this kind of."}, {"start": 790.0, "end": 819.0, "text": " This kind of this kind of looks like a gap here the skill of hopping over this gap here might actually transfer to the environment over here where you do have a proper you know gap in the environment or the skill that you learn from an environment where you have one of these stumps right so here let's draw in one of these stumps where you have to go over."}, {"start": 819.0, 
"end": 847.0, "text": " And if you have a walker that can successfully walk over this that skill now might transfer over here in order to get over this over this picky terrain here so the idea of poet is to start off with a generic flat very easy environment and this is a very easy environment."}, {"start": 847.0, "end": 876.0, "text": " And then spawn new ones so you want to spawn new environments in kind of a hereditary way so this one might get a bit rougher this one might include this and this one might include a gap or something like this and then again you want to spawn new environments and more rough."}, {"start": 876.0, "end": 905.0, "text": " More rough more rough with a stump here and this one retains the gap sorry and this one now gets two gaps and so on and you want to continuously train these and then always you want to check whether or not the skill that you learn over here might actually transfer to anyone over here."}, {"start": 905.0, "end": 932.0, "text": " So you get this three of this continuous three of solutions and once you improve on one branch this might actually be good on another branch right they always make the comparison to let's say biological evolution where a strategy that works over here for birds is all of a sudden can be cross adopted by mammals."}, {"start": 932.0, "end": 939.0, "text": " It for an entirely different problem but the same skill might be valuable."}, {"start": 939.0, "end": 951.0, "text": " So this this is basically the two ingredients of poet and now I want to show you the complete poet algorithm."}, {"start": 951.0, "end": 980.0, "text": " So what does it do you start off with an initial environment right and in poet every environment is paired with an agent so there is one agent per environment right so for the time steps what you do is first of all you go through your environments and you mutate them and we already seen these environments they can be generated from a parameter value."}, {"start": 980.0, "end": 1009.0, "text": " So we have five numbers right how rough how stumpy and how how wide the gaps are let's say we have three numbers to two and this might be one this might be two this might be five right so what you want to do is you want to mutate them right you want to spawn children and each of these parameters has a chance of mutating this might be."}, {"start": 1009.0, "end": 1019.0, "text": " One three five and this environment might be one four six and this one might be two."}, {"start": 1019.0, "end": 1025.0, "text": " Two five right you spawn new ones."}, {"start": 1025.0, "end": 1045.0, "text": " You already see that the requirement here is that you can actually have environments that are procedurally generated and mutated like this where a small mutation probably is going to lead to a small change in the environment in any case you mutate them."}, {"start": 1045.0, "end": 1074.0, "text": " And then you want to optimize your age each agent so each of these environments is paired with a new agent that always tries to solve that particular environment so now within one environment you simply do your classic optimization."}, {"start": 1074.0, "end": 1084.0, "text": " We already saw here the evolution strategy is akin to a classic optimization algorithm from reinforcement learning."}, {"start": 1084.0, "end": 1103.0, "text": " So each agent you optimize for a couple of steps right not fully every time but for a couple of steps so each agent including the one in the original environment each agent is continuously trained on 
its environment throughout the process."}, {"start": 1103.0, "end": 1132.0, "text": " Of course you have to be you have bounded computation so you need to drop out the very old ones but in principle continuously as all of this goes on all the agents are always trained on their environments so the agent here this walker will always try to solve this particular environment and the walker here that is now newly generated when the environment is generated will only try to solve this particular environment"}, {"start": 1132.0, "end": 1161.0, "text": " throughout the whole algorithm right and then all right so you do mutations you spawn new ones and then you do a couple of steps in optimization right and ES step and then you do this transfer attempts right what you want to do is you want to evaluate all the candidates on all the environments in principle you can you can cut this down."}, {"start": 1161.0, "end": 1190.0, "text": " But in principle you want to go through the environments and say okay this environment right here when I evaluate all of the other agents in this environment you can do this in a couple of different ways where you just straight up try them or try to optimize them for a few steps to see whether they can be adapted easily to that environment but ultimately you have to come up with a query."}, {"start": 1190.0, "end": 1219.0, "text": " So you can go back with a criterion to say for each agent is the agent better or worse than the agent that is continuously trained on this environment if it's worse than you keep this one if if anyone is better then you transfer that better one to replace this one right and you basically copy it over to this new environment and that's where this transfer learning comes in."}, {"start": 1219.0, "end": 1239.0, "text": " So you continuously trying all the agents on all the environments and if they are better you transfer them right so here you say if the environment score is better than the one that you have transfer it."}, {"start": 1239.0, "end": 1263.0, "text": " Now there is a lot hidden here for example in this mutate environment step they do check whether or not the new mutated environments are not too hard and not too easy and that basically means whether or not the agents can solve them but not solve them too easily."}, {"start": 1263.0, "end": 1289.0, "text": " They also check whether the environments are enough novel so you need a couple of checks here you solvable and that means not too easy and not too hard right so they need to pass like a certain score but they need to be kind of solvable to an okay score so there's a score range and also novel."}, {"start": 1289.0, "end": 1309.0, "text": " They check whether or not the mutated environments are novel enough and I believe they just do this by calculating the distance between two environments in terms of their parameter vectors so to determine whether or not these are novel."}, {"start": 1309.0, "end": 1325.0, "text": " I don't mean the distance just between two but the distance of all of the ones you've seen so far so if we go to a very beautiful drawing here where is the tree."}, {"start": 1325.0, "end": 1353.0, "text": " If you create a new environment let's say you create a new environment right here then you want to check it against all environments you've seen so far to determine whether or not it is new or not so you want to create the distance to all of these and if you have enough distance to your nearest neighbors then you are novel and that's kind of how they determine 
whether environment is new."}, {"start": 1353.0, "end": 1379.0, "text": " Alright so that's basically the poet algorithm you continuously create new environments by mutation you ensure that they are solvable not hard enough sorry not too hard but hard enough ensure that they are novel and then you optimize each agent for its own environment continuously as the process goes on."}, {"start": 1379.0, "end": 1405.0, "text": " So it's not I want to stress this it's not only the frontier so you're not only looking at the newest generation but you're always looking at all of the generation of the because the older ones while the environments are easier they have been optimized for longer on this environment so the skills might be very handy so you always want to look at your entire population."}, {"start": 1405.0, "end": 1434.0, "text": " And then you do crucially do this these transfer attempts so that's the poet algorithm there is a lot hidden here and I can't want to stress that just if you just look at the amount of hyper parameters there's so many hyper parameters in this how much you transfer how much you mutate how many steps you do each of these sub routines here has a billion."}, {"start": 1434.0, "end": 1441.0, "text": " So that's a billion hyper parameters and learning rates and and so on."}, {"start": 1441.0, "end": 1462.0, "text": " So to me that's a that is kind of a if I look at this algorithm I'm very scared if I attempt to do something like this myself it's is going to be a long and hard thing to evaluate all of these different hyper parameters that you have to do."}, {"start": 1462.0, "end": 1489.0, "text": " So shortly when a dip into what the evolution strategy does just so you know because you just might be familiar with your classic your classic reinforce algorithm so in policy gradient methods what you do is you scale your parameters of your network which is you can"}, {"start": 1489.0, "end": 1518.0, "text": " this is your policy then your policy network here you want to scale the gradient according to your reward so in classical reinforcement learning this here would be the reward you got which basically means if you did an action and you got higher reward you want to make your network do that action more right here in evolution strategies what you do is you spawn"}, {"start": 1518.0, "end": 1546.0, "text": " it's a different way of of doing the same thing basically you spawn different environments and then sorry you spawn you spawn different agents so you have your current parameters and you want to spawn a number of noisy versions of those parameters and then you want to evaluate each one right and now you want to adjust your parameters"}, {"start": 1546.0, "end": 1575.0, "text": " your parameters into the direction of that particular so basically you are here with your parameters you create a bunch of noisy versions of it right and let's say these two performed really well you want to adjust your parameters into the direction of those two right that's basically what this says so"}, {"start": 1575.0, "end": 1598.0, "text": " this is the noisy version and then this is the noise that produced the noisy version so if if this is high if this number here is high then you will adjust your parameters into that direction it's a fairly cool way if you especially if you can't"}, {"start": 1598.0, "end": 1612.0, "text": " crop through your policy is is pretty neat thing so this is the ES step algorithm but you can think of it just as a oral algorithm"}, {"start": 1612.0, "end": 
1639.0, "text": " all right so they do various experiments to show that this actually has merits I already shown you if you're trying if you take the same environments and try to solve them directly by this evolution step then it will not succeed because of the problems we've discussed before now the comparison is a bit unfair because of course these environments for poet"}, {"start": 1639.0, "end": 1656.0, "text": " poet the problem here is you can't have it solve a particular environments because the environments they constantly change right you constantly mutate the environments you never know where it's going it's not directed so if your goal is to solve a particular environment you cannot do it with poet"}, {"start": 1656.0, "end": 1683.0, "text": " you can hope that the agent that comes out will perform well writing do something like this but I believe I believe that these environments that they test on here are ones that appeared during the poet run right so it's kind of an unfair comparison I feel to do this on an environment that you know this"}, {"start": 1683.0, "end": 1704.0, "text": " poet agent actually comes from an environment that poet has generated in its all mutation tree curriculum while building it up and then the poor ES algorithm is simply tasked with solving that particular environment from scratch so yes always keep in mind this is this can have a goal"}, {"start": 1704.0, "end": 1724.0, "text": " this doesn't have a goal right that's kind of the drawback but as you can see poet does get super high scores whereas ES the classic algorithm completely fails and they also investigate the importance of transfer learning"}, {"start": 1724.0, "end": 1746.0, "text": " so they compare to like a classic classic curriculum learning algorithms there are curriculum learning algorithms where you can continuously try to build up the difficulties of these environments but you also do it in a goal directed way so as I said if you have an environment that has like a gap"}, {"start": 1746.0, "end": 1770.0, "text": " and then a stump a high stump or two high stumps you want to start out flat and then maybe building a small gap and a small stump and so on until you're here it's very much goal directed but it doesn't have this kind of population with transfer learning aspect of poet so if they compare this"}, {"start": 1770.0, "end": 1798.0, "text": " you can see here the red the red the red one sorry colored it blue stupidly the red one is whatever poet was able to solve now this is the five dimensions of the parameters and the more on the outside it is the harder the environment and for the same"}, {"start": 1798.0, "end": 1822.0, "text": " for the same environment the blue one is what the curriculum learning algorithm has managed so it's the best environment the curriculum learning algorithm has been able to solve while trying to build up to the so if we take this here is the environment that poet solved again the comparison is kind of unfair"}, {"start": 1822.0, "end": 1846.0, "text": " because we're starting out from an environment that poet has already solved and then we're trying to build our way up to it with the classic algorithm by basically again this is it's comparing a non goal directed thing something that just happened to a goal directed process that needs to get this particular environment to work"}, {"start": 1846.0, "end": 1875.0, "text": " in any case at some point this curriculum learning algorithm will fail like let's say that's here that's the environment that has 
somewhat of a gap but no stump right and that would be the the blue line here they do like five runs and they plot them here and you can see every time the classic curriculum learning algorithm manages to only solve a much much less challenging environment than the poet algorithm"}, {"start": 1875.0, "end": 1904.0, "text": " achieved even though it's it's trying to reach exactly that right and so here they show the difference so if you just the classified environments if it's just challenging than the classic algorithm the curriculum learning algorithm can solve it somewhat so the distance is close to zero but as you go more and more challenging the distance between poet and the classic"}, {"start": 1904.0, "end": 1933.0, "text": " algorithm becomes larger and larger they do give some examples of what this transfer learning does so they have this parent environment that just kind of slouches forward on the ground and then the child environment has a mutation that has now little stumps in it right so you can't get over it right now but the child environment because it's it's a small stump so it might stumble across"}, {"start": 1933.0, "end": 1960.0, "text": " stumble across learns to lift its leg here and it transfers this back to the parent right at a later iteration which is pretty cool and then the parent gets even better as a result of that transfer so we have two transfer learning events here that mutually help these agents remember both the parent and the child are continuously trained as the process goes on"}, {"start": 1960.0, "end": 1986.0, "text": " all right and they do some more things where they do actual poet not a classic algorithm but poet without transfer learning and they see that okay the poet without transfer is able to solve some of the very challenging problems but never reaches the extremely challenging stage and that's kind of their argument why the transfer learning is necessary"}, {"start": 1986.0, "end": 2013.0, "text": " so in total I would say this is a cool algorithm it has many many many many many many hyper parameters and these experimental results with that many hyper parameters you need to take it with the grain of salt because it's always possible that they just haven't put as much effort into the comparisons as they have into their own thing to get it to work"}, {"start": 2013.0, "end": 2028.0, "text": " all right with that I wish you a nice day and check out the paper have lots of descriptions check out the blog post where they have animations and the YouTube video and with that bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=awyuuJoHawo | Dream to Control: Learning Behaviors by Latent Imagination | Dreamer is a new RL agent by DeepMind that learns a continuous control task through forward-imagination in latent space.
https://arxiv.org/abs/1912.01603
Videos: https://dreamrl.github.io/
Abstract:
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
Authors: Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at Dream to Control: Learning Behaviors by Latent Imagination by Danijar Hafner, Timothy Lillicrap, Timmy, sorry, Jimmy Ba and Mohammad Norouzi. This is a reinforcement learning paper that iterates on a series of previous papers where the goal is to learn a policy. In this case they want to learn policies for these kinds of continuous control tasks. So these physics-based robots, these hopper or walker types of tasks where you have to control this robot, these joints, in order to move forward. And so the goal is that you have multiple observations, as you do in reinforcement learning, and from each observation you need to somehow come up with an action of what to do. And then that will give you the next observation as well as a reward. A reward for, you know, if your goal is to move this spider, maybe the reward is proportional to how far you move. So your goal is to collect the maximum reward, which should mean you have to move the spider as far as possible simply by doing the correct actions. The goal of this paper now is to do this by learning to sort of plan ahead in this latent space. So as you can see here, the way they do it is they take the observation and they feed it through an encoder. Now you can think of this as maybe a convolutional neural network or anything that can take an image as an input and give you a hidden representation. So now this here is the hidden representation. From this hidden representation, you can determine what the next action is going to be. And then you get a new observation and then again you can feed that along with the last hidden state into a new hidden state. So this is something previous models already do a lot, right? You encode your observation and you have a sort of, let's say, a recurrent neural network that incorporates all of the observations into a hidden state along with the actions you take. And then you always decide on a next action to do. So what does this model do differently? This model wants to do this all in hidden space. So what this model wants to do is it wants to say, okay, I am here. I have this observation. Now my encoder tells me that this is going to give me this hidden state. And now what it wants to do is it wants to take in the action that it is doing. And without seeing the next observation, right, it wants to predict it. Right, it wants to say, well, if I am here and I do this action, what might the action be? The action might be to put the joystick to the right. It will learn the hidden state corresponding to the spider being a bit more to the right. Right. So this is a bit more to the right than it is right now. And it will need to do so a number of time steps into the future. And it will kind of learn from its own imagination. So this is what it will imagine into the future, how the hidden states look. And then it will learn from that instead of having to really do the actions in the real world. Now we've already looked at a number of papers, including something like MuZero or I2A or something like this. This now is only slightly different. So you can see. So what's different here? What is different is that in MuZero, we used this latent model in order to plan ahead, like in order to do our decision-tree planning ahead and so on. This model doesn't do this. This model still wants to come up with a single policy where you encode your state. Right.
This is on the right is the final result. You encode your state gets you to a hidden representation. And then from that, you determine what your action is going to be. And you have your next state and so on. So the final goal is simply going to be a policy like a single shot policy without any Monte Carlo tree expansion and so on. But what it wants to do is it wants to learn this policy not by interacting in the real world like here on the left, but actually by interacting only in the dream world right here. So the crucial part if you want to learn from your dreams, right, is to make sure that your dreams are an accurate representation of the real world, right. We already saw this in a paper called World Models by Eurgen Schmidhooper, I believe. And in that paper, what they did was they first collected experience such like experience like this one. And then they learned from the one observation to predict the next ones. And I will or to predict the next hidden states, right. They did so by basically moving in the world at random. So they have this little spider thingy and they just do random movements, right. They randomly move around and thus they collect these trajectories and then they learn from the random trajectories. The difference that this paper does is it does these steps iteratively. So it will not learn from random policy, but it will actually first, yeah, it'll start out learning this random learning a good policy for its environment model. Then acting going back and using that policy in order to learn a better environment model. And then again, learn using the better environment model in order to learn a better policy. If this wasn't clear enough, we'll jump to the algorithm. The algorithm isn't actually too complicated. It's as I said, it's I think it's a relatively minor iteration on previous research, but it appears to work and it works in these kind of continuous control tasks. So you see you have three models here that you need to learn. And that's what you see over here. There is representation, transition and reward, and you'll see they all have the same parameters. So that gives you an indication that these things are a single model. Now what are what is the model representation, transition and reward. So let me this is the the thing on the left here. In this part of the algorithm, you assume that you have a policy. You already know what action you do. Or you can even assume that you have some experience, right? You have your agent is running with a given policy and you simply collect that. And now you're trying to learn. So let me scratch all of this. What do you have given? Given is the observation sequence and the actions you took. Right? And the rewards you got that's also given. So each action gives you reward. Right? So these things are are given provided to you. And now what do you want to learn? You want to learn a representation and a transition. And let's say a reward. So you also want to predict the next reward. This thing, this thing, right? So as we already said, you can do this by encoding the state using, for example, CNN and then using an LSTM in order to incorporate this over time. So what you learn is the transition from one hidden state to the next hidden state. And you also learn the how the observation goes into the hidden state. And thirdly, you learn that if I'm in this hidden state and I take this particular action, I will get this reward in the future. Right? 
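Just to make this a bit more concrete, here is a minimal sketch of what those three learned components could look like in code. This is my own illustration and not the authors' implementation: the module names and layer sizes are assumptions, the real model is a recurrent state-space model with stochastic states, and for image observations the encoder would of course be a convolutional network as mentioned above.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Sketch of the three jointly trained components (assumed architecture)."""
    def __init__(self, obs_dim=64, action_dim=6, hidden_dim=200):
        super().__init__()
        # representation model: observation -> hidden state
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 256), nn.ELU(),
                                     nn.Linear(256, hidden_dim))
        # transition model: (hidden state, action) -> next hidden state,
        # without ever seeing the next observation
        self.transition = nn.GRUCell(action_dim, hidden_dim)
        # reward model: hidden state -> predicted scalar reward
        self.reward_head = nn.Sequential(nn.Linear(hidden_dim, 256), nn.ELU(),
                                         nn.Linear(256, 1))

    def observe(self, obs):
        # used during dynamics learning, when real observations are available
        return self.encoder(obs)

    def imagine_step(self, h, action):
        # used during behavior learning: step forward purely in latent space
        h_next = self.transition(action, h)
        r_next = self.reward_head(h_next)
        return h_next, r_next
```

In the paper all three parts share the same parameters theta, which is why they count as a single model; in this sketch they are kept as separate modules only for readability.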
You can learn this from just a set of pre-computed or from a set of experience that you have in your, let's say your replay buffer. Right? This is one model. And you learn this here in this first step in this called Dynamics Learning section. Right? So you see while not converged. So you do Dynamics Learning, you draw data sequences from your experience. Right? Then you compute the model states. These are the hidden states. And then you update this parameter theta using representation learning. Now they don't really specify what representation learning is, but they do give examples of what you can do. I think their point is whatever you need to do in order to learn this representation. And one example is actually drawn here. One example is you can learn a model that reconstructs the next state or actually sorry reconstructs the same state. So you can learn a model that predicts. So if you give the observation as an input, it goes through the hidden state. You can learn a decoder that reconstructs that observation. This is usually done in things like variational auto encoders in order to produce generative models. So the this part here would be the generator. And that would be kind of the thing of interest if you were doing a variational auto encoder. But of course here, our quantity of interest is this encoder model because we want a good representation of the state. But it comes down to the same thing. If you can learn a model that learns to accurately reconstruct the observation, then your representation here in the middle is probably an informative one. Because you learn the same model across multiple observations, that means it can accurately encode what makes one observation different from another one. So this is how you learn the theta parameters. Now the other models here are the action and the value parameters. And this is here in the step called behavior learning. So in the behavior learning, what they say is imagine trajectories from each of the states that you have. So what you're going to do is from each of the observations here, you're going to obtain the hidden states, right? This these hidden states. Now from each of the hidden states here. So here is an observation from its hidden state. You're going to use the model that you learned here through the LSTM. Sorry. Well, this is terrible. Through the LSTM, you're going to use that model to imagine future trajectories, right, of hidden states. So you have given, sorry, given, or now is the observation here, and the hidden state. And you're going to imagine future hidden states. You're also going to imagine future rewards, right? And you are going to use your policy kind of to, you're going to use your policy in order to determine which actions you're going to take, right? And the ultimate goal here is to learn a good policy. So a policy that will give you better rewards in the future, as you would do. So this is regular reinforcement learning, except that the difference is in regular reinforcement learning. I have my observation. I encode it. And then I determine what action I want to take. Then I feed that action back into the environment, which would give me the next observation. And then I'd use that to determine maybe in conjunction with the last hidden state, the next action. In this thing, since we learned a dynamics model of the hidden states, we can simply determine the action, and then simply compute what the probable next hidden state is going to be. And then use that to determine an action again. And so on. 
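As a rough picture of that imagination step, here is a short sketch, again my own with assumed helper names, of rolling the policy forward purely in the latent space of such a world model, without ever querying the environment:

```python
def imagine_trajectories(world_model, policy, start_states, horizon=15):
    """Roll out latent trajectories from real starting states (illustrative only)."""
    h = start_states                           # hidden states obtained from real observations
    states, actions, rewards = [h], [], []
    for _ in range(horizon):
        a = policy(h)                          # action chosen from the latent state alone
        h, r = world_model.imagine_step(h, a)  # predicted next latent state and reward
        states.append(h)
        actions.append(a)
        rewards.append(r)
    return states, actions, rewards
```

The horizon of 15 steps here is an assumption on my part; everything the actor and the value function are trained on comes out of this loop.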
So there's no need to go through the environment, which means potentially we can learn much, much faster without having to expensively interact with the environment. So that allows us to basically also these models here, they might be quite large. So our back prop now only needs to happen through this path, basically, if we want to. Or through this path here, in case we have discrete actions. Yes, so that's in that will be the dynamics learning. It's done here. As you can see, we predict the rewards and the values and compute value estimates. And then we update these parameters using. So what we have is here a value function. See, the value function is dependent on this psi here. And this we update using a gradient of its output minus the true value. So this this here is an estimate of the value. And as you know, a value function is supposed to tell you the complete future reward given a state. And it's important for us that we have a function that can estimate that because of course, then we can take actions if we can make this function go high. And this is an accurate function. That means we get a lot of reward in the future. So it's important to learn this function. And here you can see we adjusted into the direction of matching this quantity better. Now we'll get to this quantity in a second. You can also see we update this parameter, which is the action model. So here you see that action model depends on this. This is this is our policy, right? This thing here determines which action we take. And we update it into the direction. This is a gradient with respect to this value function, right? So we train the policy to maximize the value, which is all the future rewards that we get. Of course, we can do this because we can now back propagate through all of these time steps because we have this transition model. We can back propagate through all of this, which is pretty cool. I think, in my opinion, the kind of workhorse of this paper might be this quantity here. So what how exactly do you compute the value of a state? Especially in these continuous control tasks, you sometimes have a lot of steps. So these trajectories might be pretty long and they might be longer than what you can back propagate here reasonably from time step to time step, right? Even an LSTM might only be able to back prop through, let's say, a couple of dozen or maybe a few hundred steps in time. And maybe you have longer trajectories here. So it's pretty, I think this value estimate here is a main component of extending that range. So they say this is according to equation six. And this is what it does. Again, this is my opinion that this here is kind of the workhorse of the method. So it's a three step process actually. It's pretty, pretty heavy. So you see this is the quantity they estimate with the value function. It is set between an average over, so H is the time horizon, right? That you're looking for. It is set between these two things across the sum over the time horizon. Now each of those things, again, here is a sum over this tau, this tau here, which is this tau, and the H minus one. And H here is the minimum of tables K and tables horizon. So this goes, this looks, this quantity looks K steps into the future. So for each step to the horizon, we look K steps into the future. And, and for each step, we look into the future, we sum again across these quantities here. And these quantities here, what is that? It's a mixture of the reward you get in that particular step. 
Plus your own, your estimate of the value function at the horizon step discounted by that. So it's a pretty, so if you imagine you have like a time number of steps that you took and each time you get a reward, right? This is a very complicated way of summing of going into the future, summing up the rewards, going more steps, summing up the rewards again in different fashion, and then mixing these, these individual quantities. So this one, this one, this one that you got from accumulating all of these in a weird fashion. And that allows you to look way beyond. Especially, you see here, your estimate of the value function will actually include your own value function that again, probably looks into the future. So what you accumulate from the last step in your time horizon already includes information from all the future steps, because you take your own value estimate into account. This is, I think it's very convoluted. But again, I think this is, this complicated value estimate allows you to have a better value estimate foreign to the future. They do show some, some kind of samples here of what they can do. I haven't found any videos of it, unfortunately, but it appears to work pretty well. They have a discussion of different representation learning methods and different experiments and ablations and so on. So I invite you to look at this paper and I hope this was somewhat clear. Bye bye. | [{"start": 0.0, "end": 6.0600000000000005, "text": " Hi there. Today we're looking at Dream to Control Learning Behaviors by latent"}, {"start": 6.0600000000000005, "end": 13.24, "text": " imagination by Donniger Hoffner Timothy Lilly Krup, Timmy sorry, Jimmy Bach and"}, {"start": 13.24, "end": 22.240000000000002, "text": " Muhammad Norosi. This is a reinforcement learning paper that iterates on a"}, {"start": 22.24, "end": 32.08, "text": " series of previous papers where the goal is to learn a policy. In this case they"}, {"start": 32.08, "end": 36.2, "text": " want to learn policies for these kind of continuous control tasks. So these"}, {"start": 36.2, "end": 43.2, "text": " physics-based robots, these hopper or walker types of tasks where you have to"}, {"start": 43.2, "end": 54.28, "text": " control this robot, these joints in order to move forward. And so the goal is that"}, {"start": 54.28, "end": 59.72, "text": " you have multiple observations as you do in reinforcement learning and from"}, {"start": 59.72, "end": 66.36, "text": " each observation you need to somehow come up with an action of what to do. And"}, {"start": 66.36, "end": 76.44, "text": " then that will give you the next observation as well as a reward. A reward for, you"}, {"start": 76.44, "end": 81.52, "text": " know, if your goal is to move this spider, maybe the reward is proportional to"}, {"start": 81.52, "end": 86.6, "text": " how far you move. So your goal is to collect the maximum reward, which should"}, {"start": 86.6, "end": 92.88, "text": " mean you have to move the spider as far as possible simply by doing the correct"}, {"start": 92.88, "end": 103.72, "text": " actions. The goal of this paper now is to do this by learning how the, by"}, {"start": 103.72, "end": 111.36, "text": " learning to sort of plan ahead in this latent space. 
So as you can see here, the"}, {"start": 111.36, "end": 115.8, "text": " way they do it is they take the observation and they feed it through an encoder."}, {"start": 115.8, "end": 121.16, "text": " Now you can think of this as maybe a convolutional neural network or"}, {"start": 121.16, "end": 126.2, "text": " something anything that can work that can take an image as an input and give you"}, {"start": 126.2, "end": 133.0, "text": " a hidden representation. So now this here is the hidden representation. From this"}, {"start": 133.0, "end": 138.16, "text": " hidden representation, you can determine what the next action is going to be. And"}, {"start": 138.16, "end": 144.72, "text": " then you get a new observation and then again you can feed that along with the"}, {"start": 144.72, "end": 151.16, "text": " last hidden state into a new hidden state. So this, this is already previous, previous"}, {"start": 151.16, "end": 156.72, "text": " models do this a lot, right? You encode your observation and you have a sort of"}, {"start": 156.72, "end": 162.92, "text": " an, let's say, a recurrent neural network that incorporates all of the"}, {"start": 162.92, "end": 167.76, "text": " observations into hidden state along with the actions you take. And then you"}, {"start": 167.76, "end": 175.64, "text": " always decide on a next action to do. So what does this model do differently? This"}, {"start": 175.64, "end": 186.0, "text": " model wants to do this all in hidden space. So this model wants to do is it wants to"}, {"start": 186.0, "end": 192.76, "text": " say, okay, I am here. I have this observation. Now my encoder tells me that this"}, {"start": 192.76, "end": 197.32, "text": " is going to give me this hidden state. And now what it wants to do is it wants to"}, {"start": 197.32, "end": 203.67999999999998, "text": " take in the action that is doing. And without seeing the next observation, right,"}, {"start": 203.67999999999998, "end": 211.0, "text": " it wants to predict it. Right, it wants to say, well, if I am here and I do this"}, {"start": 211.0, "end": 214.56, "text": " action, what might the action be? The action might be to put the joystick to the"}, {"start": 214.56, "end": 221.04, "text": " right. It will learn the hidden state corresponding to the spider being a bit"}, {"start": 221.04, "end": 226.28, "text": " more to the right. Right. So this is a bit more to the right than it is right"}, {"start": 226.28, "end": 232.92, "text": " now. And it will you need to do so a number of time steps into the future. And it"}, {"start": 232.92, "end": 240.04, "text": " will kind of learn from its own imagination. So this this is what it will"}, {"start": 240.04, "end": 246.72, "text": " imagine into the future, how the hidden states look. And then it will learn from"}, {"start": 246.72, "end": 252.48, "text": " that instead of having to really do the actions in the real world. Now we've"}, {"start": 252.48, "end": 258.0, "text": " already looked at a number of papers, including something like mu zero or I2A"}, {"start": 258.0, "end": 266.56, "text": " or something like this. This now is only is slightly different. So you can see."}, {"start": 266.56, "end": 273.68, "text": " So what's different here? What is different is in mu zero, we had this we use"}, {"start": 273.68, "end": 278.56, "text": " this latent model in order to plan ahead like in order to do our decision three"}, {"start": 278.56, "end": 283.40000000000003, "text": " planning ahead and so on. This model doesn't do this. 
This model still wants to"}, {"start": 283.40000000000003, "end": 289.8, "text": " come up with a single policy where you encode your state. Right. This is on the"}, {"start": 289.8, "end": 293.76, "text": " right is the final result. You encode your state gets you to a hidden"}, {"start": 293.76, "end": 298.32, "text": " representation. And then from that, you determine what your action is going to be."}, {"start": 298.32, "end": 305.04, "text": " And you have your next state and so on. So the final goal is simply going to be a"}, {"start": 305.04, "end": 314.28, "text": " policy like a single shot policy without any Monte Carlo tree expansion and so on."}, {"start": 314.28, "end": 320.15999999999997, "text": " But what it wants to do is it wants to learn this policy not by interacting in"}, {"start": 320.15999999999997, "end": 328.0, "text": " the real world like here on the left, but actually by interacting only in the"}, {"start": 328.0, "end": 333.48, "text": " dream world right here. So the crucial part if you want to learn from your"}, {"start": 333.48, "end": 340.36, "text": " dreams, right, is to make sure that your dreams are an accurate representation of"}, {"start": 340.36, "end": 350.04, "text": " the real world, right. We already saw this in a paper called World Models by"}, {"start": 350.04, "end": 357.6, "text": " Eurgen Schmidhooper, I believe. And in that paper, what they did was they first"}, {"start": 357.6, "end": 364.48, "text": " collected experience such like experience like this one. And then they learned"}, {"start": 364.48, "end": 372.72, "text": " from the one observation to predict the next ones. And I will or to predict the"}, {"start": 372.72, "end": 381.48, "text": " next hidden states, right. They did so by basically moving in the world at"}, {"start": 381.48, "end": 387.68, "text": " random. So they have this little spider thingy and they just do random"}, {"start": 387.68, "end": 392.28000000000003, "text": " movements, right. They randomly move around and thus they collect these"}, {"start": 392.28000000000003, "end": 397.56, "text": " trajectories and then they learn from the random trajectories. The difference"}, {"start": 397.56, "end": 403.4, "text": " that this paper does is it does these steps iteratively. So it will not learn"}, {"start": 403.4, "end": 409.0, "text": " from random policy, but it will actually first, yeah, it'll start out learning"}, {"start": 409.0, "end": 415.52, "text": " this random learning a good policy for its environment model. Then acting going"}, {"start": 415.52, "end": 422.96, "text": " back and using that policy in order to learn a better environment model. And then"}, {"start": 422.96, "end": 428.52, "text": " again, learn using the better environment model in order to learn a better"}, {"start": 428.52, "end": 435.44, "text": " policy. If this wasn't clear enough, we'll jump to the algorithm. The algorithm"}, {"start": 435.44, "end": 444.0, "text": " isn't actually too complicated. It's as I said, it's I think it's a relatively"}, {"start": 444.0, "end": 450.71999999999997, "text": " minor iteration on previous research, but it appears to work and it works in"}, {"start": 450.72, "end": 455.84000000000003, "text": " these kind of continuous control tasks. So you see you have three models here"}, {"start": 455.84000000000003, "end": 461.0, "text": " that you need to learn. And that's what you see over here. 
There is representation,"}, {"start": 461.0, "end": 466.36, "text": " transition and reward, and you'll see they all have the same parameters. So that"}, {"start": 466.36, "end": 472.84000000000003, "text": " gives you an indication that these things are a single model. Now what are what"}, {"start": 472.84, "end": 480.67999999999995, "text": " is the model representation, transition and reward. So let me this is the the"}, {"start": 480.67999999999995, "end": 490.03999999999996, "text": " thing on the left here. In this part of the algorithm, you assume that you have"}, {"start": 490.03999999999996, "end": 497.47999999999996, "text": " a policy. You already know what action you do. Or you can even assume that you"}, {"start": 497.47999999999996, "end": 502.23999999999995, "text": " have some experience, right? You have your agent is running with a given"}, {"start": 502.24, "end": 510.08, "text": " policy and you simply collect that. And now you're trying to learn. So let me"}, {"start": 510.08, "end": 522.28, "text": " scratch all of this. What do you have given? Given is the observation sequence"}, {"start": 522.28, "end": 533.8, "text": " and the actions you took. Right? And the rewards you got that's also given. So each"}, {"start": 533.8, "end": 541.0, "text": " action gives you reward. Right? So these things are are given provided to you. And"}, {"start": 541.0, "end": 550.24, "text": " now what do you want to learn? You want to learn a representation and a"}, {"start": 550.24, "end": 560.6, "text": " transition. And let's say a reward. So you also want to predict the next"}, {"start": 560.6, "end": 569.0, "text": " reward. This thing, this thing, right? So as we already said, you can do this by"}, {"start": 569.0, "end": 578.88, "text": " encoding the state using, for example, CNN and then using an LSTM in order to"}, {"start": 578.88, "end": 585.52, "text": " incorporate this over time. So what you learn is the transition from one hidden"}, {"start": 585.52, "end": 593.56, "text": " state to the next hidden state. And you also learn the how the observation goes"}, {"start": 593.56, "end": 600.24, "text": " into the hidden state. And thirdly, you learn that if I'm in this hidden state"}, {"start": 600.24, "end": 606.4, "text": " and I take this particular action, I will get this reward in the future."}, {"start": 606.4, "end": 613.4399999999999, "text": " Right? You can learn this from just a set of pre-computed or from a set of"}, {"start": 613.4399999999999, "end": 618.64, "text": " experience that you have in your, let's say your replay buffer. Right? This is one"}, {"start": 618.64, "end": 624.48, "text": " model. And you learn this here in this first step in this called Dynamics"}, {"start": 624.48, "end": 634.0, "text": " Learning section. Right? So you see while not converged. So you do Dynamics"}, {"start": 634.0, "end": 641.68, "text": " Learning, you draw data sequences from your experience. Right? Then you compute"}, {"start": 641.68, "end": 649.12, "text": " the model states. These are the hidden states. And then you update this parameter"}, {"start": 649.12, "end": 654.8, "text": " theta using representation learning. Now they don't really specify what"}, {"start": 654.8, "end": 660.56, "text": " representation learning is, but they do give examples of what you can do. 
I think"}, {"start": 660.56, "end": 667.68, "text": " their point is whatever you need to do in order to learn this representation."}, {"start": 667.68, "end": 677.52, "text": " And one example is actually drawn here. One example is you can learn a model"}, {"start": 677.52, "end": 684.2399999999999, "text": " that reconstructs the next state or actually sorry reconstructs the same state."}, {"start": 684.24, "end": 691.36, "text": " So you can learn a model that predicts. So if you give the observation as an input,"}, {"start": 691.36, "end": 699.84, "text": " it goes through the hidden state. You can learn a decoder that reconstructs that"}, {"start": 699.84, "end": 703.44, "text": " observation. This is usually done in things like"}, {"start": 703.44, "end": 709.12, "text": " variational auto encoders in order to produce generative models. So the"}, {"start": 709.12, "end": 713.28, "text": " this part here would be the generator. And that would be kind of the thing of"}, {"start": 713.28, "end": 717.76, "text": " interest if you were doing a variational auto encoder. But of course here,"}, {"start": 717.76, "end": 723.8399999999999, "text": " our quantity of interest is this encoder model because we want a good"}, {"start": 723.8399999999999, "end": 731.4399999999999, "text": " representation of the state. But it comes down to the same thing."}, {"start": 731.4399999999999, "end": 738.3199999999999, "text": " If you can learn a model that learns to accurately reconstruct the observation,"}, {"start": 738.32, "end": 744.4000000000001, "text": " then your representation here in the middle is probably an informative one."}, {"start": 744.4000000000001, "end": 750.0, "text": " Because you learn the same model across multiple observations,"}, {"start": 750.0, "end": 754.6400000000001, "text": " that means it can accurately encode what makes one observation different from"}, {"start": 754.6400000000001, "end": 761.84, "text": " another one. So this is how you learn the theta parameters."}, {"start": 761.84, "end": 770.5600000000001, "text": " Now the other models here are the action and the value parameters."}, {"start": 770.5600000000001, "end": 775.6800000000001, "text": " And this is here in the step called behavior learning. So in the behavior"}, {"start": 775.6800000000001, "end": 780.96, "text": " learning, what they say is imagine trajectories from each of the states that"}, {"start": 780.96, "end": 786.0, "text": " you have. So what you're going to do is from each of the observations here,"}, {"start": 786.0, "end": 790.88, "text": " you're going to obtain the hidden states, right? This these hidden states."}, {"start": 790.88, "end": 796.88, "text": " Now from each of the hidden states here. So here is an observation from its"}, {"start": 796.88, "end": 800.24, "text": " hidden state. You're going to use the model that you"}, {"start": 800.24, "end": 805.04, "text": " learned here through the LSTM. Sorry."}, {"start": 805.92, "end": 810.48, "text": " Well, this is terrible. Through the LSTM, you're going to use that model"}, {"start": 810.48, "end": 818.4, "text": " to imagine future trajectories, right, of hidden states. So you have given,"}, {"start": 818.4, "end": 824.0799999999999, "text": " sorry, given, or now is the observation here, and the hidden state."}, {"start": 824.0799999999999, "end": 831.04, "text": " And you're going to imagine future hidden states. You're also going to imagine future rewards,"}, {"start": 831.04, "end": 840.9599999999999, "text": " right? 
And you are going to use your policy kind of to,"}, {"start": 841.92, "end": 847.76, "text": " you're going to use your policy in order to determine which actions you're going to take,"}, {"start": 847.76, "end": 855.68, "text": " right? And the ultimate goal here is to learn a good policy. So a policy that will give you"}, {"start": 855.68, "end": 860.56, "text": " better rewards in the future, as you would do. So this is regular reinforcement learning,"}, {"start": 860.56, "end": 870.3199999999999, "text": " except that the difference is in regular reinforcement learning. I have my observation."}, {"start": 871.28, "end": 877.6, "text": " I encode it. And then I determine what action I want to take. Then I feed that action back"}, {"start": 877.6, "end": 883.28, "text": " into the environment, which would give me the next observation. And then I'd use that to"}, {"start": 883.28, "end": 889.0400000000001, "text": " determine maybe in conjunction with the last hidden state, the next action. In this thing,"}, {"start": 889.0400000000001, "end": 895.6, "text": " since we learned a dynamics model of the hidden states, we can simply determine the action,"}, {"start": 895.6, "end": 902.5600000000001, "text": " and then simply compute what the probable next hidden state is going to be. And then use that"}, {"start": 902.56, "end": 908.7199999999999, "text": " to determine an action again. And so on. So there's no need to go through the environment,"}, {"start": 908.7199999999999, "end": 914.0, "text": " which means potentially we can learn much, much faster without having to"}, {"start": 914.8, "end": 921.28, "text": " expensively interact with the environment. So that allows us to basically"}, {"start": 923.8399999999999, "end": 930.8, "text": " also these models here, they might be quite large. So our back prop now only needs to happen"}, {"start": 930.8, "end": 939.52, "text": " through this path, basically, if we want to. Or through this path here, in case we have discrete"}, {"start": 939.52, "end": 949.76, "text": " actions. Yes, so that's in that will be the dynamics learning. It's done here."}, {"start": 952.4, "end": 960.24, "text": " As you can see, we predict the rewards and the values and compute value estimates. And then we"}, {"start": 960.24, "end": 968.96, "text": " update these parameters using. So what we have is here a value function. See, the value function"}, {"start": 968.96, "end": 980.24, "text": " is dependent on this psi here. And this we update using a gradient of its output minus the true"}, {"start": 980.88, "end": 986.0, "text": " value. So this this here is an estimate of the value. And as you know, a value function is"}, {"start": 986.0, "end": 994.96, "text": " supposed to tell you the complete future reward given a state. And it's important for us that we"}, {"start": 994.96, "end": 1001.28, "text": " have a function that can estimate that because of course, then we can take actions if we can make"}, {"start": 1001.28, "end": 1008.24, "text": " this function go high. And this is an accurate function. That means we get a lot of reward in the"}, {"start": 1008.24, "end": 1015.6, "text": " future. So it's important to learn this function. And here you can see we adjusted into the direction"}, {"start": 1015.6, "end": 1022.72, "text": " of matching this quantity better. Now we'll get to this quantity in a second. You can also see"}, {"start": 1022.72, "end": 1032.0, "text": " we update this parameter, which is the action model. 
So here you see that action model depends on"}, {"start": 1032.0, "end": 1040.88, "text": " this. This is this is our policy, right? This thing here determines which action we take. And we"}, {"start": 1040.88, "end": 1047.0400000000002, "text": " update it into the direction. This is a gradient with respect to this value function, right? So we"}, {"start": 1047.0400000000002, "end": 1055.2, "text": " train the policy to maximize the value, which is all the future rewards that we get. Of course,"}, {"start": 1055.2, "end": 1062.88, "text": " we can do this because we can now back propagate through all of these time steps because we have this"}, {"start": 1062.88, "end": 1073.1200000000001, "text": " transition model. We can back propagate through all of this, which is pretty cool. I think, in my"}, {"start": 1073.1200000000001, "end": 1082.3200000000002, "text": " opinion, the kind of workhorse of this paper might be this quantity here. So what how exactly do"}, {"start": 1082.3200000000002, "end": 1090.5600000000002, "text": " you compute the value of a state? Especially in these continuous control tasks, you sometimes"}, {"start": 1090.56, "end": 1100.56, "text": " have a lot of steps. So these trajectories might be pretty long and they might be longer than"}, {"start": 1100.56, "end": 1111.2, "text": " what you can back propagate here reasonably from time step to time step, right? Even an LSTM might"}, {"start": 1111.2, "end": 1118.1599999999999, "text": " only be able to back prop through, let's say, a couple of dozen or maybe a few hundred steps in time."}, {"start": 1118.16, "end": 1127.28, "text": " And maybe you have longer trajectories here. So it's pretty, I think this value estimate here"}, {"start": 1128.0, "end": 1134.88, "text": " is a main component of extending that range. So they say this is according to equation six."}, {"start": 1134.88, "end": 1143.1200000000001, "text": " And this is what it does. Again, this is my opinion that this here is kind of the workhorse of"}, {"start": 1143.12, "end": 1151.12, "text": " the method. So it's a three step process actually. It's pretty, pretty heavy. So you see this is"}, {"start": 1151.12, "end": 1163.1999999999998, "text": " the quantity they estimate with the value function. It is set between an average over, so H is the"}, {"start": 1163.2, "end": 1174.32, "text": " time horizon, right? That you're looking for. It is set between these two things across the sum"}, {"start": 1174.32, "end": 1184.24, "text": " over the time horizon. Now each of those things, again, here is a sum over this tau,"}, {"start": 1184.24, "end": 1198.0, "text": " this tau here, which is this tau, and the H minus one. And H here is the minimum of"}, {"start": 1198.0, "end": 1205.28, "text": " tables K and tables horizon. So this goes, this looks, this quantity looks K steps into the future."}, {"start": 1205.28, "end": 1219.68, "text": " So for each step to the horizon, we look K steps into the future. And, and for each step,"}, {"start": 1219.68, "end": 1227.04, "text": " we look into the future, we sum again across these quantities here. And these quantities here,"}, {"start": 1227.04, "end": 1235.44, "text": " what is that? It's a mixture of the reward you get in that particular step. Plus your own, your"}, {"start": 1235.44, "end": 1245.36, "text": " estimate of the value function at the horizon step discounted by that. 
So it's a pretty,"}, {"start": 1245.36, "end": 1252.48, "text": " so if you imagine you have like a time number of steps that you took and each time you get a reward,"}, {"start": 1252.48, "end": 1259.6, "text": " right? This is a very complicated way of summing of going into the future, summing up the"}, {"start": 1259.6, "end": 1265.92, "text": " rewards, going more steps, summing up the rewards again in different fashion, and then mixing"}, {"start": 1265.92, "end": 1272.0, "text": " these, these individual quantities. So this one, this one, this one that you got from accumulating"}, {"start": 1272.0, "end": 1282.08, "text": " all of these in a weird fashion. And that allows you to look way beyond. Especially, you see here,"}, {"start": 1282.08, "end": 1292.3999999999999, "text": " your estimate of the value function will actually include your own value function that again,"}, {"start": 1292.3999999999999, "end": 1300.6399999999999, "text": " probably looks into the future. So what you accumulate from the last step in your time horizon"}, {"start": 1300.6399999999999, "end": 1308.8, "text": " already includes information from all the future steps, because you take your own value estimate"}, {"start": 1308.8, "end": 1318.8799999999999, "text": " into account. This is, I think it's very convoluted. But again, I think this is, this complicated"}, {"start": 1318.8799999999999, "end": 1325.12, "text": " value estimate allows you to have a better value estimate foreign to the future."}, {"start": 1325.12, "end": 1338.0, "text": " They do show some, some kind of samples here of what they can do. I haven't found any videos of it,"}, {"start": 1338.8, "end": 1344.4799999999998, "text": " unfortunately, but it appears to work pretty well. They have a discussion of different"}, {"start": 1344.4799999999998, "end": 1350.2399999999998, "text": " representation learning methods and different experiments and ablations and so on. So I invite you"}, {"start": 1350.24, "end": 1357.84, "text": " to look at this paper and I hope this was somewhat clear. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=XdpF9ZixIbI | Can we Contain Covid-19 without Locking-down the Economy? | My thoughts on the let-the-young-get-infected argument.
https://medium.com/amnon-shashua/can-we-contain-covid-19-without-locking-down-the-economy-2a134a71873f
Abstract:
In this article, we present an analysis of a risk-based selective quarantine model where the population is divided into low and high-risk groups. The high-risk group is quarantined until the low-risk group achieves herd-immunity. We tackle the question of whether this model is safe, in the sense that the health system can contain the number of low-risk people that require severe ICU care (such as life support systems).
Authors: Shai Shalev-Shwartz, Amnon Shashua
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Can we contain COVID-19 without locking down the economy? This is a question and I do care about this article because Shai Shalef Schwartz is one of the bigger names in machine learning theory. So it was interesting for me to see what he and his collaborator here had to say about the kind of outbreak and the strategy to contain it. So, contain maybe isn't the right word, maybe they asked, I think the way they asked the question is how are we going to survive this the best? And so this in no means is an endorsement by me, I'm not a medical professional, please just view this as a commentary and an explanation of what they are saying. I'll give my opinions along the way of course. So they identify three different models for handling the spread of COVID-19 and we'll start with the third one because they argue for the first one and this builds more suspense. So they say there is country-wide lockdown, right, until the spread of the virus is under control. They say it could take anywhere from weeks to months, it is the safest route but it does not prevent a second wave from occurring. Now of course if you have people, let's say these are people, right, then the thing is everyone just stays in their stay in your house, right, everybody. Until it's kind of gone. Now they say correctly there is a risk of a second wave because only a single infected person because there's no immunity still has a potential of creating another epicenter. So they don't consider this option. The next option is called containment-based selective quarantine, which means find all the positive cases and put them in quarantine. So let's say we go here and we let you roam around freely but we know we can test people and we know that some of them are positive. So we simply tell them to stay at home, right, and this depends a lot on how well you can test people or and it also depends on what they claim the contagious time interval. We know that there are people that are contagious without showing symptoms so unless you can test every single person all the time, this is likely to not really help a lot. Now there's various data from various countries that actually shows it can reduce the load but they basically argue against that because there are these contagious people and you can never test fast enough or accurate or thoroughly enough. And then they say there is risk-based selective quarantine, which means what? It means that so some of these people are going to be at risk and in this case we obviously mean old people. So old people, I'm going to draw them with a cane, not because old people aren't fit, just because they have better tastes in canes. So and then there are young people and they run a smartphone with TikTok and what we're going to say is that you youngsters, you're not really at risk from this. So you go out, you sneeze on each other, you go about your life normally and you old people basically stay at home until all the young people have immunity. So we ramp up the cases and then it flattens out eventually in the low risk population and at that point there is enough herd immunity, right? All these people are now immune so that the old person here, even if they now go out again, they won't catch it because everyone's already had it. So they argue for this particular strategy and or at least they analyze this particular strategy. 
Now I have to say at the beginning that the core assumption here is that this quarantine of the high risk people, you can do basically in a perfect way. So the assumption here is that you are able to perfectly quarantine all the high risk people and that's the level of infection in the low risk population has no influence on the level of infection in the high risk population. And in my opinion, I simply don't believe that I simply don't believe you can build this quarantine. I think even these old people, they need food sometimes, the nursing home need staff. So even if they can reduce their contact to the outside world, they cannot fully be sheltered and that means the more infections you have in the low risk population, the more infections you will have in the high risk population. So I think the fundamental core assumption of this model is quite flawed. That being said, let's analyze it. So we assume that all the high risk people, none of them is going to get sick because they all stay at home. So the math in this paper is actually pretty basic. So we'll go through it a bit more detailed. So we will understand the core argument. So they introduced the following quantities, M here, M is the low risk population, right? This is the population size. V or new, let's call it new. New here is the probability. So that's the probability that if you are sick, you need to go to the ICU. So sick means simply you have the virus and ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease. So if we multiply the population size by the probability that if you get sick, you need to go to the ICU, what do we get? We get a worst case scenario. So basically the author is here and I find this is the good part of this analysis. They don't rely on kind of pandemic dynamics, epidemiology, exponential growth and so on. They simply consider the worst case. So MD here, if you multiply these two numbers, what does that mean? That is the number of severe cases, severe meaning you need ICU cases. If everybody gets sick, if all get sick, if everybody gets sick at the same time, right? So let's say we all go out, the lower is population and we all sneeze in each other's faces as much as we can and we just all get sick at the same time. And this here is the number of people going to the ICU. Right. And if this, so they introduce this quantity B here, B is the number of beds in the ICU. And the number of beds in the ICU is larger than the worst case, severe cases, right? Then we are safe. So that's the argument. Basically, it's not that we are safe. It is no one will die from lack of an ICU bed, right? Which is kind of the lever we have as a population. If you assume everyone's going to get sick anyway and so on. So if the number of beds is larger than the worst case number of ICU patients, we are safe. That's at least how they define safe. All right. So that's their premise. Now, what are they going to do? They're going to find a quantity where they can bound this thing. And they are going to find a bound an upper bound on the number of severe cases. And if this upper bound is lower than the number of beds, then they can they can say with, we're safe with this method. So they say, this is the worst case analysis under their assumptions. All right. So I said they don't resort to any kind of epidemiological dynamics. They simply estimate this thing from current numbers. Then to do two more quantities here, P star and K. Now K is the current number of severe cases. 
Right. So this is kind of an analog to this thing here. So these two are connected. This is the current number of severe cases. And this up here is the total possible like the worst case number of severe cases in the future. Likewise, P star here is the percentage of people. The percent of people that are sick. Right. And they claim correctly, this is unknown. So if we could test everybody, who is sick, not severe, just sick. And up here, this has no connection. That because of course you can imagine here another factor times, let's call this P plus or something, which is the number of people who are sick in the worst case, which of course in our worst case scenario is one. So that's why they don't include it here. So this is the current percentage of sick people. So this here is a percentage. And this here is an actual number. Keep that in mind. All right. Now if we do some basic reformulation here, if we take this P star and multiply that by, you see them in this corner here, multiply it by the total size of the population. Right. We get the number of people who are currently sick. Sick. Right. This is the percentage of current sequence. This is the total size of the population to get the number of people who are currently sick. We take that in the denominator and put K here, which is the number of people who are currently severe. Then we get an estimate of this quantity new. Remember what new is new is the probability. If you are sick, you go to the ICU. That's so the ICU means your severe. Right. So these are the current number of sick people. And these are the current number of severe people. This gives you an estimate of if you are sick, right. What's the probability that you're severe. Now they argue that this number, it doesn't change. It's independent of so that this this quantity here is a constant. So the probability that if you are sick from this virus, you go to the ICU, it doesn't change over time. So we can estimate it with current numbers, right. Which is a pretty, you know, a reasonable thing to assume that this stays constant, unless the virus mutates or something. So we know the total size of the population. We know the current number of severe cases. You can make an argument about that. So do we really know the current number of severe cases because there's an exponential growth involved. This might be difficult to estimate. And they say the same thing. So they say this is the only time where they reference the dynamics of the situation grows at an exponential rate. So what we can do is we can take a worst case upper bounds. They say to be on the safe side, perform a worst case analysis. So they instead of taking K, they add this confidence interval on it that is based on concentration inequalities. So they don't use K. They use this K to the here, which has two additional summons here that is supposed to be an upper bound with confidence, at least at least one over delta here. And this you can set, for example, to be 0.05 that gives you a 95% sorry, a 95% confidence that this is a upper bound on that. Now this comes from some concentration bounds and there are certain assumptions behind this upper bound here, which I don't know enough about to critique them here. I'm going to assume they are reasonable. So if they are not, then of course that is an additional point of criticism of this work. All right. So instead of using K here, we are saying we're on the safe side and we use this K tilde. So we know this as well. 
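To put rough numbers on that estimate, here is a tiny sketch; all figures are invented for illustration and are not from the article, and p_star in particular is exactly the quantity we do not actually know:

```python
M = 5_000_000      # assumed size of the low-risk population
p_star = 0.002     # assumed fraction of that population currently infected (unknown in reality)
K_tilde = 300      # assumed high-confidence upper bound on current severe cases

currently_sick = p_star * M          # number of currently infected people: 10,000
nu_hat = K_tilde / currently_sick    # estimated P(severe | sick), on the safe side: 0.03
worst_case_severe = nu_hat * M       # = K_tilde / p_star: severe cases if everyone got sick: 150,000
print(nu_hat, worst_case_severe)
```

Note how a larger p_star would shrink that worst-case number, which is what the rest of the argument hinges on.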
The unknown quantity, of course, is this thing here, P star, what is the percentage of people that are currently sick, right? Yeah, so the goal is now to find to find that. So they say, okay, if we plug in this upper bound of K tilde, then with this probability, we can upper bound this quantity new, which is exactly what we wanted because we need to upper bound Md. And that's what they say here. So since at the top, we saw that M times new equals Md and we want to upper bound Md. So we can rearrange this thing. So if we plug in these two together, we see that the M cancels out and we can upper bound Md by this quantity here, the upper bound on the current severe cases divided by the percentage of the current, by the percentage of the currently sick people. So again, they reformulate and they plug in this, of course, needs to be smaller than the number beds. So they plug this in here and they say, now what we have to do to see is if this quantity is larger than this quantity of two quantities, we know, then we are safe. Now again, our goal is going to be to find the quantity that lower bounds, P star, but up, but is larger than this quantity here. And they do this via hypothesis testing, they call this quantity here, they call it P tilde, and they do a hypothesis test for classic statistics, where they ask is P star significantly larger than P tilde. If that's the case, then we're safe, if not, we're not. And how do they do that? They say, okay, we have the population. Right, I did draw this at one point. Let's go back there. We have the population here, right? And what we can do is we can just go out and uniformly, uniformly test people. Like just randomly select people. Now this is an old person, though, people stay at home. So we randomly select people to test, and their test results come back. And this one, this one's healthy, this one's healthy, this one's healthy, this one's not healthy. Right, and so we have four tests and out of the four, one was positive. Can we work out a hypothesis test from that? So can we decide whether P star is probably much larger than P tilde or not? And the answer is yes, because this is a uniform sample, you can work out using classic statistical tools, whether or not you can reject an all hypothesis or not. And they actually work this out, and they do give a number here. And that's this. So they say if we test N, which is four, let's say four and a half times this quantity, be divided by K. So the number of beds divided by the upper bound on the current severe cases. So we test 4.5 times this many people. Right. Then if we find at least 10 positive cases or more, then with a probability of 95%. We know that the risk based model is safe. Right. So the more, of course, the more infect people you find in this case, the better because that means because a number of severe cases stays constant at any given time. It means that a lot more people are infected. That means the probability that you are going to become severe is lower. That why that's why it says at least. So again, you go out, you test N people and according to this formula, plug in the numbers here for your current situation. If you find at least 10 people, then with a probability of at least 95%. You know that this model is safe. Cool. And this is done using, you know, classic statistical testing hypothesis testing literature. So I think that is a pretty cool result. But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine. 
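Written out as a check, the decision rule quoted above looks roughly like this; the constants 4.5, 10 and the 95% level are the ones given in the article, while the bed and severe-case numbers are made up for illustration:

```python
import math

def risk_based_model_looks_safe(num_beds, k_tilde, positives_found, num_tested):
    """True if the uniform-sample test supports the safety claim (illustrative sketch)."""
    required_tests = math.ceil(4.5 * num_beds / k_tilde)
    if num_tested < required_tests:
        raise ValueError(f"need at least {required_tests} uniformly sampled tests")
    # at least 10 positives among the sample means p_star is large enough, with
    # roughly 95% confidence, that the worst-case severe cases fit into the ICU
    return positives_found >= 10

# e.g. 3000 ICU beds and an upper bound of 300 current severe cases
# -> about 4.5 * 3000 / 300 = 45 uniformly sampled tests are needed
print(risk_based_model_looks_safe(num_beds=3000, k_tilde=300,
                                  positives_found=12, num_tested=45))
```

But as said, all of this is conditional on the quarantine of the high-risk group actually being enforceable.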
Of course, if you can't, it means that there is a direct correlation between the number of sick people in your low risk population, the number of sick people in your high risk population, which means that more of the high risk population are going to get infected as well. Which again, means that your number B of ICU beds is going to drop severely because they have a higher hospitalization rate, which makes your entire model that we developed down there. Less valid because now this used to be a constant in the model. It's now no longer a constant. It's sinking. And the worse it gets, the more it's sinking. And yes, so that that may make what you initially thought was a safe model into a very non safe model very quickly. And that doesn't include all the high risk people that are going to be in danger, additionally, because you can't enforce the quarantine. Alright, so this was my take on that. Take it for what it's worth. And I wish you a healthy pandemic. Bye bye. | [{"start": 0.0, "end": 5.76, "text": " Can we contain COVID-19 without locking down the economy?"}, {"start": 5.76, "end": 16.6, "text": " This is a question and I do care about this article because Shai Shalef Schwartz is one of the bigger names in machine learning theory."}, {"start": 16.6, "end": 27.8, "text": " So it was interesting for me to see what he and his collaborator here had to say about the kind of outbreak and the strategy to contain it."}, {"start": 27.8, "end": 43.8, "text": " So, contain maybe isn't the right word, maybe they asked, I think the way they asked the question is how are we going to survive this the best?"}, {"start": 43.8, "end": 58.8, "text": " And so this in no means is an endorsement by me, I'm not a medical professional, please just view this as a commentary and an explanation of what they are saying."}, {"start": 58.8, "end": 62.8, "text": " I'll give my opinions along the way of course."}, {"start": 62.8, "end": 77.8, "text": " So they identify three different models for handling the spread of COVID-19 and we'll start with the third one because they argue for the first one and this builds more suspense."}, {"start": 77.8, "end": 85.8, "text": " So they say there is country-wide lockdown, right, until the spread of the virus is under control."}, {"start": 85.8, "end": 96.8, "text": " They say it could take anywhere from weeks to months, it is the safest route but it does not prevent a second wave from occurring."}, {"start": 96.8, "end": 110.8, "text": " Now of course if you have people, let's say these are people, right, then the thing is everyone just stays in their stay in your house, right, everybody."}, {"start": 110.8, "end": 115.8, "text": " Until it's kind of gone."}, {"start": 115.8, "end": 128.8, "text": " Now they say correctly there is a risk of a second wave because only a single infected person because there's no immunity still has a potential of creating another epicenter."}, {"start": 128.8, "end": 132.8, "text": " So they don't consider this option."}, {"start": 132.8, "end": 143.8, "text": " The next option is called containment-based selective quarantine, which means find all the positive cases and put them in quarantine."}, {"start": 143.8, "end": 157.8, "text": " So let's say we go here and we let you roam around freely but we know we can test people and we know that some of them are positive."}, {"start": 157.8, "end": 172.8, "text": " So we simply tell them to stay at home, right, and this depends a lot on how well you can test people or and it also depends on what they 
claim the contagious time interval."}, {"start": 172.8, "end": 185.8, "text": " We know that there are people that are contagious without showing symptoms so unless you can test every single person all the time, this is likely to not really help a lot."}, {"start": 185.8, "end": 205.8, "text": " Now there's various data from various countries that actually shows it can reduce the load but they basically argue against that because there are these contagious people and you can never test fast enough or accurate or thoroughly enough."}, {"start": 205.8, "end": 222.8, "text": " And then they say there is risk-based selective quarantine, which means what? It means that so some of these people are going to be at risk and in this case we obviously mean old people."}, {"start": 222.8, "end": 234.8, "text": " So old people, I'm going to draw them with a cane, not because old people aren't fit, just because they have better tastes in canes."}, {"start": 234.8, "end": 261.8, "text": " So and then there are young people and they run a smartphone with TikTok and what we're going to say is that you youngsters, you're not really at risk from this. So you go out, you sneeze on each other, you go about your life normally and you old people basically stay at home until all the young people have immunity."}, {"start": 261.8, "end": 283.8, "text": " So we ramp up the cases and then it flattens out eventually in the low risk population and at that point there is enough herd immunity, right? All these people are now immune so that the old person here, even if they now go out again, they won't catch it because everyone's already had it."}, {"start": 283.8, "end": 308.8, "text": " So they argue for this particular strategy and or at least they analyze this particular strategy. Now I have to say at the beginning that the core assumption here is that this quarantine of the high risk people, you can do basically in a perfect way."}, {"start": 308.8, "end": 327.8, "text": " So the assumption here is that you are able to perfectly quarantine all the high risk people and that's the level of infection in the low risk population has no influence on the level of infection in the high risk population."}, {"start": 327.8, "end": 342.8, "text": " And in my opinion, I simply don't believe that I simply don't believe you can build this quarantine. I think even these old people, they need food sometimes, the nursing home need staff."}, {"start": 342.8, "end": 359.8, "text": " So even if they can reduce their contact to the outside world, they cannot fully be sheltered and that means the more infections you have in the low risk population, the more infections you will have in the high risk population."}, {"start": 359.8, "end": 365.8, "text": " So I think the fundamental core assumption of this model is quite flawed."}, {"start": 365.8, "end": 378.8, "text": " That being said, let's analyze it. So we assume that all the high risk people, none of them is going to get sick because they all stay at home."}, {"start": 378.8, "end": 388.8, "text": " So the math in this paper is actually pretty basic. So we'll go through it a bit more detailed. So we will understand the core argument."}, {"start": 388.8, "end": 400.8, "text": " So they introduced the following quantities, M here, M is the low risk population, right? This is the population size."}, {"start": 400.8, "end": 420.8, "text": " V or new, let's call it new. New here is the probability. 
So that's the probability that if you are sick, you need to go to the ICU."}, {"start": 420.8, "end": 434.8, "text": " So sick means simply you have the virus and ICU means that the symptoms are so bad that you need help from the medical system in order to overcome the disease."}, {"start": 434.8, "end": 447.8, "text": " So if we multiply the population size by the probability that if you get sick, you need to go to the ICU, what do we get? We get a worst case scenario."}, {"start": 447.8, "end": 463.8, "text": " So basically the author is here and I find this is the good part of this analysis. They don't rely on kind of pandemic dynamics, epidemiology, exponential growth and so on."}, {"start": 463.8, "end": 481.8, "text": " They simply consider the worst case. So MD here, if you multiply these two numbers, what does that mean? That is the number of severe cases, severe meaning you need ICU cases."}, {"start": 481.8, "end": 497.8, "text": " If everybody gets sick, if all get sick, if everybody gets sick at the same time, right?"}, {"start": 497.8, "end": 512.8, "text": " So let's say we all go out, the lower is population and we all sneeze in each other's faces as much as we can and we just all get sick at the same time."}, {"start": 512.8, "end": 529.8, "text": " And this here is the number of people going to the ICU. Right. And if this, so they introduce this quantity B here, B is the number of beds in the ICU."}, {"start": 529.8, "end": 544.8, "text": " And the number of beds in the ICU is larger than the worst case, severe cases, right? Then we are safe. So that's the argument. Basically, it's not that we are safe."}, {"start": 544.8, "end": 556.8, "text": " It is no one will die from lack of an ICU bed, right? Which is kind of the lever we have as a population. If you assume everyone's going to get sick anyway and so on."}, {"start": 556.8, "end": 579.8, "text": " So if the number of beds is larger than the worst case number of ICU patients, we are safe. That's at least how they define safe. All right. So that's their premise. Now, what are they going to do? They're going to find a quantity where they can bound this thing."}, {"start": 579.8, "end": 593.8, "text": " And they are going to find a bound an upper bound on the number of severe cases. And if this upper bound is lower than the number of beds, then they can they can say with, we're safe with this method."}, {"start": 593.8, "end": 614.8, "text": " So they say, this is the worst case analysis under their assumptions. All right. So I said they don't resort to any kind of epidemiological dynamics. They simply estimate this thing from current numbers. Then to do two more quantities here, P star and K."}, {"start": 614.8, "end": 633.8, "text": " Now K is the current number of severe cases. Right. So this is kind of an analog to this thing here. So these two are connected."}, {"start": 633.8, "end": 654.8, "text": " This is the current number of severe cases. And this up here is the total possible like the worst case number of severe cases in the future. Likewise, P star here is the percentage of people."}, {"start": 654.8, "end": 672.8, "text": " The percent of people that are sick. Right. And they claim correctly, this is unknown. So if we could test everybody, who is sick, not severe, just sick."}, {"start": 672.8, "end": 693.8, "text": " And up here, this has no connection. 
That because of course you can imagine here another factor times, let's call this P plus or something, which is the number of people who are sick in the worst case, which of course in our worst case scenario is one."}, {"start": 693.8, "end": 708.8, "text": " So that's why they don't include it here. So this is the current percentage of sick people. So this here is a percentage. And this here is an actual number. Keep that in mind."}, {"start": 708.8, "end": 731.8, "text": " All right. Now if we do some basic reformulation here, if we take this P star and multiply that by, you see them in this corner here, multiply it by the total size of the population. Right."}, {"start": 731.8, "end": 746.8, "text": " We get the number of people who are currently sick. Sick. Right. This is the percentage of current sequence. This is the total size of the population to get the number of people who are currently sick."}, {"start": 746.8, "end": 764.8, "text": " We take that in the denominator and put K here, which is the number of people who are currently severe. Then we get an estimate of this quantity new."}, {"start": 764.8, "end": 782.8, "text": " Remember what new is new is the probability. If you are sick, you go to the ICU. That's so the ICU means your severe. Right. So these are the current number of sick people. And these are the current number of severe people."}, {"start": 782.8, "end": 799.8, "text": " This gives you an estimate of if you are sick, right. What's the probability that you're severe. Now they argue that this number, it doesn't change. It's independent of so that this this quantity here is a constant."}, {"start": 799.8, "end": 816.8, "text": " So the probability that if you are sick from this virus, you go to the ICU, it doesn't change over time. So we can estimate it with current numbers, right. Which is a pretty, you know, a reasonable thing to assume that this stays constant, unless the virus mutates or something."}, {"start": 816.8, "end": 836.8, "text": " So we know the total size of the population. We know the current number of severe cases. You can make an argument about that. So do we really know the current number of severe cases because there's an exponential growth involved. This might be difficult to estimate."}, {"start": 836.8, "end": 856.8, "text": " And they say the same thing. So they say this is the only time where they reference the dynamics of the situation grows at an exponential rate. So what we can do is we can take a worst case upper bounds. They say to be on the safe side, perform a worst case analysis."}, {"start": 856.8, "end": 884.8, "text": " So they instead of taking K, they add this confidence interval on it that is based on concentration inequalities. So they don't use K. They use this K to the here, which has two additional summons here that is supposed to be an upper bound with confidence, at least at least one over delta here."}, {"start": 884.8, "end": 905.8, "text": " And this you can set, for example, to be 0.05 that gives you a 95% sorry, a 95% confidence that this is a upper bound on that."}, {"start": 905.8, "end": 922.8, "text": " Now this comes from some concentration bounds and there are certain assumptions behind this upper bound here, which I don't know enough about to critique them here. I'm going to assume they are reasonable."}, {"start": 922.8, "end": 940.8, "text": " So if they are not, then of course that is an additional point of criticism of this work. All right. 
So instead of using K here, we are saying we're on the safe side and we use this K tilde. So we know this as well."}, {"start": 940.8, "end": 955.8, "text": " The unknown quantity, of course, is this thing here, P star, what is the percentage of people that are currently sick, right?"}, {"start": 955.8, "end": 976.8, "text": " Yeah, so the goal is now to find to find that. So they say, okay, if we plug in this upper bound of K tilde, then with this probability, we can upper bound this quantity new, which is exactly what we wanted because we need to upper bound Md."}, {"start": 976.8, "end": 994.8, "text": " And that's what they say here. So since at the top, we saw that M times new equals Md and we want to upper bound Md. So we can rearrange this thing."}, {"start": 994.8, "end": 1019.8, "text": " So if we plug in these two together, we see that the M cancels out and we can upper bound Md by this quantity here, the upper bound on the current severe cases divided by the percentage of the current, by the percentage of the currently sick people."}, {"start": 1019.8, "end": 1043.8, "text": " So again, they reformulate and they plug in this, of course, needs to be smaller than the number beds. So they plug this in here and they say, now what we have to do to see is if this quantity is larger than this quantity of two quantities, we know, then we are safe."}, {"start": 1043.8, "end": 1056.8, "text": " Now again, our goal is going to be to find the quantity that lower bounds, P star, but up, but is larger than this quantity here."}, {"start": 1056.8, "end": 1074.8, "text": " And they do this via hypothesis testing, they call this quantity here, they call it P tilde, and they do a hypothesis test for classic statistics, where they ask is P star significantly larger than P tilde."}, {"start": 1074.8, "end": 1086.8, "text": " If that's the case, then we're safe, if not, we're not. And how do they do that? They say, okay, we have the population."}, {"start": 1086.8, "end": 1104.8, "text": " Right, I did draw this at one point. Let's go back there. We have the population here, right? And what we can do is we can just go out and uniformly, uniformly test people. Like just randomly select people."}, {"start": 1104.8, "end": 1119.8, "text": " Now this is an old person, though, people stay at home. So we randomly select people to test, and their test results come back. And this one, this one's healthy, this one's healthy, this one's healthy, this one's not healthy."}, {"start": 1119.8, "end": 1138.8, "text": " Right, and so we have four tests and out of the four, one was positive. Can we work out a hypothesis test from that? So can we decide whether P star is probably much larger than P tilde or not?"}, {"start": 1138.8, "end": 1151.8, "text": " And the answer is yes, because this is a uniform sample, you can work out using classic statistical tools, whether or not you can reject an all hypothesis or not."}, {"start": 1151.8, "end": 1161.8, "text": " And they actually work this out, and they do give a number here. And that's this."}, {"start": 1161.8, "end": 1177.8, "text": " So they say if we test N, which is four, let's say four and a half times this quantity, be divided by K. So the number of beds divided by the upper bound on the current severe cases."}, {"start": 1177.8, "end": 1195.8, "text": " So we test 4.5 times this many people. Right. 
Then if we find at least 10 positive cases or more, then with a probability of 95%."}, {"start": 1195.8, "end": 1212.8, "text": " We know that the risk based model is safe. Right. So the more, of course, the more infect people you find in this case, the better because that means because a number of severe cases stays constant at any given time."}, {"start": 1212.8, "end": 1222.8, "text": " It means that a lot more people are infected. That means the probability that you are going to become severe is lower. That why that's why it says at least."}, {"start": 1222.8, "end": 1237.8, "text": " So again, you go out, you test N people and according to this formula, plug in the numbers here for your current situation. If you find at least 10 people, then with a probability of at least 95%. You know that this model is safe."}, {"start": 1237.8, "end": 1248.8, "text": " Cool. And this is done using, you know, classic statistical testing hypothesis testing literature."}, {"start": 1248.8, "end": 1277.8, "text": " So I think that is a pretty cool result. But I do severely criticize the underlying assumption, which is that you can perfectly enforce this quarantine. Of course, if you can't, it means that there is a direct correlation between the number of sick people in your low risk population, the number of sick people in your high risk population, which means that more of the high risk population are going to get infected as well."}, {"start": 1277.8, "end": 1292.8, "text": " Which again, means that your number B of ICU beds is going to drop severely because they have a higher hospitalization rate, which makes your entire model that we developed down there."}, {"start": 1292.8, "end": 1303.8, "text": " Less valid because now this used to be a constant in the model. It's now no longer a constant. It's sinking. And the worse it gets, the more it's sinking."}, {"start": 1303.8, "end": 1315.8, "text": " And yes, so that that may make what you initially thought was a safe model into a very non safe model very quickly."}, {"start": 1315.8, "end": 1324.8, "text": " And that doesn't include all the high risk people that are going to be in danger, additionally, because you can't enforce the quarantine."}, {"start": 1324.8, "end": 1334.8, "text": " Alright, so this was my take on that. Take it for what it's worth. And I wish you a healthy pandemic. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=lqtlua-Ylts | State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication | Peer Review is outdated and ineffective. SOAR is a new and revolutionary way to distribute scientific reviewing and scale to the new age of faster, better and more significant research.
https://arxiv.org/abs/2003.14415
Abstract:
Peer review forms the backbone of modern scientific manuscript evaluation. But after two hundred and eighty-nine years of egalitarian service to the scientific community, does this protocol remain fit for purpose in 2020? In this work, we answer this question in the negative (strong reject, high confidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric reviewing pipeline that serves as a 'plug-and-play' replacement for peer review. At the heart of our approach is an interpretation of the review process as a multi-objective, massively distributed and extremely-high-latency optimisation, which we scalarise and solve efficiently for PAC and CMT-optimal solutions. We make the following contributions: (1) We propose a highly scalable, fully automatic methodology for review, drawing inspiration from best-practices from premier computer vision and machine learning conferences; (2) We explore several instantiations of our approach and demonstrate that SOAR can be used to both review prints and pre-review pre-prints; (3) We wander listlessly in vain search of catharsis from our latest rounds of savage CVPR rejections.
Authors: Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication. So this has been on my mind for a while. The review process for modern science, especially machine learning, is just broken. I've spoken numerous times about the fact that we need to replace it with a better system. And now Samuel Albany at all have actually come up with such a system and we're going to explore it today. I am a big fan of this work and I'm 100% on board with this. So they basically say peer review forms the backbone of modern scientific manuscript evaluation. If you don't know what peer review is, in machine learning right now if you have some genius idea, so here is your idea. That's a light bulb. By the way, you write it up into an eighth page PDF. Yes, it must be PDF and yes, it must be eight pages. You submit it to a conference which is some kind of, so you submitted to be accepted in a conference proceeding. And if the conference organizers, of course, there are just a bunch of people that can't review these 1000 million applications that come by themselves. So what they do is they recruit experts which are called peers. So peers are other people. These are called peers and they have usually written up their own papers and they can critique each other's paper and they decide what gets accepted and what doesn't. Now I've spoken numerous times of how this is super noisy right now. They're way, way, they're not enough peers, they're not experienced enough. So whether or not your particular idea gets accepted is extremely dependent on on probability on a coin flip usually. And it's just overloaded and just makes no sense. And they asked the same question, is this fit for purpose in 2020? And we need to replace it and I support. So they, you can already see they kind of want to automate this away with their state of the art review score and the score will be an out of 10 score that can be integrated into something like archive. And you know, display right away. So they have some requirements to this new system. What should it be done? It should have the ability to scale. Very important. Our current review system doesn't have this. Right. Our current review system relies on other humans reviewing your paper. And that means that the reviewers need to scale with the amount of papers which just isn't the case currently. So a new review system must have the ability to scale. Right. And them, you know, automating the reviews away or scaling it up in a distributed fashion does this speed. Yes, because right now if I submit my manuscript for review, it takes the months to review it. And our science progress is faster than that. So a speedy, more speedy version of peer review is definitely required. And then consistency. And this is the most shocking part. Right. There is a, the ground 2014 NURRIPS experiment which concluded that 57% of papers accepted by one committee were rejected by another committee and vice versa reviewing the exact same paper, different committees came to completely different conclusions to an astounding degree. So basically you're flipping a coin of whether or not your paper gets accepted or not. And I think this is just not acceptable. And so they propose these three things speed scale consistency. And their new methods certainly has this. Now let's jump down here where they introduce this state of the art reviewing SOAR. 
So they say, okay, the quality of a scientific work can be judged along three axes: efficacy, significance and novelty. So there are these three pillars, right? Efficacy means kind of how effective your work is in achieving its goal; in machine learning that's usually to train some good classifier or something like this. Then the other one is significance. Right. So significance is how relevant what you've done is to the field. Right. And the third one is novelty. So, you know, your scientific work should be an original contribution to the knowledge of mankind, and therefore it should be novel. Right. So the more of these three things you have, of course, the higher your score should be, and here in the middle is where the highest scores should be. So imagine this is kind of a landscape. And so you want to grade papers along these three axes, and they have a pretty good method of assessing these in an automated fashion. So first of all, assessing efficacy. So efficacy, they say, is best assessed by determining if the proposed method achieves a new state of the art. I don't think you can really doubt this. I mean, this is kind of the gold standard of whether your paper is effective: whether or not it achieves state of the art. It might be a bit of a controversial opinion, but if a paper doesn't achieve a state of the art, you know, why do you even care? Like no one cares. So they say, from an implementation perspective, they can kind of abuse a fact of the current research environment: you don't actually have to review this yourself, the authors themselves can be relied upon to state this repeatedly in the text. And this is important. So the authors will state that they have state of the art many, many times in the text if they have actually achieved it; if they haven't achieved it, or are not so sure about it, they probably won't repeat it as many times. This can kind of be abused now to distribute the work: imagine, all of these reviewers don't have to do this work anymore, they can just distribute it to the authors of the papers themselves. Because the authors put it in the text; by the way, this is kind of an NLP approach to reviewing, kind of NLP mixed with game theory, right? So the authors themselves, if they have state of the art, will put that into the text a lot. So it's a bit controversial, but the authors here propose to simply count the number of word occurrences of state of the art, case insensitive, very important, in the text, right? It stands to reason that a higher state of the art count is preferable, of course. All right, so the second thing, and this might be a bit controversial, the second thing is significance, and they now make the claim that significance is measured by efficacy. So they simply reuse the efficacy term: if your paper is effective at achieving its goal, you can also say it's significant for the community, because again, if you have state of the art, then your paper is significant. If you do not have state of the art, then your paper is obviously not significant, because why should it matter if you don't have state of the art in a given task? It's useless. All right, so we weigh it twice. That's pretty good. And then novelty. Now here they take much of the same approach.
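To make the counting idea concrete, here is a toy sketch in Python. The novelty term and the exact combination into an out-of-ten score are described just below; the regular expressions and the scaling here are my own reading of that description, not the paper's official implementation.

import re

def sota_count(text: str) -> int:
    # efficacy (and, weighted twice, significance): case-insensitive occurrences
    # of "state of the art", with or without hyphens
    return len(re.findall(r"state[\s-]of[\s-]the[\s-]art", text, flags=re.IGNORECASE))

def novelty_count(text: str, related_work: str = "") -> int:
    # occurrences of "novel" (and "novelty"), excluding the related-work section
    body = text.replace(related_work, "") if related_work else text
    return len(re.findall(r"\bnovel", body, flags=re.IGNORECASE))

def soar_score(text: str, related_work: str = "") -> float:
    s, n = sota_count(text), novelty_count(text, related_work)
    # geometric mean with the state-of-the-art term counted twice, then divided by ten
    return (s * s * n) ** (1.0 / 3.0) / 10.0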
They say the authors probably state this, so how much they used the word novel in their manuscript will dictate. So here they say, okay, they novel, wow, this is failing me. How much they used the word novel in the text will probably be an indication. I don't think so, though they do do the smart thing of the include. They include the works, they include the related work section from this. Sorry, they exclude the related work section. They say we make the key observation that individuals best played to make the judgments or the authors themselves, since they have likely read at least one of the works cited in bibliography. I don't agree here. I think a better method would be to simply count the number of references and the lower the amount of references to relate to work, the higher the novelty, right? Because if you think, right, if this, these are current papers, right? And you place and your paper is here, right? You'll have a lot of related work, right? So it's not as novel. If you're like way out here, you'll have maybe one or two related works. So it's way more novel. If you have less references, so this would be my kind of criticism of this, so this this novelty thing here, I think this term should be replaced by kind of a graph centrality measure or something the account of how many references you have would be enough probably. All right, so they define their score, their score as we saw is the SOTA term weighted twice a geometric mean between that and the novelty term, which I've criticized. They add the suffix out of 10 because out of 10 score is pretty interpretable. So they divide by 10 here. So, sorry, yeah, say they say that here. We attach a suffix out of 10. Because that's easy to interpret. And as you saw in the Indie kind of archive implementation right here. Sorry, this this will be an easy to integrate right here. So they even give code, right, they give code in the paper themselves of how to implement this. It's pretty pretty easy. And I think, yeah, even though it's quite a short paper. It's it's thorough and it's a good new method. And I think this could revolutionize publishing and they even so as a bit of a bonus, they give the official pronunciation of state of the art reviewing, which is something like state of the art reviewing for you smooth. Yeah, with that, I hope you enjoyed this. And if the authors could just be a little more subtle next time, that would be great. I guess you have to go. Yeah, nothing more. Bye. | [{"start": 0.0, "end": 6.0, "text": " Alright, hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal"}, {"start": 6.0, "end": 13.0, "text": " to improve scientific publication. So this has been on my mind for a while. The review"}, {"start": 13.0, "end": 20.0, "text": " process for modern science, especially machine learning, is just broken. I've spoken numerous"}, {"start": 20.0, "end": 25.0, "text": " times about the fact that we need to replace it with a better system. And now Samuel Albany"}, {"start": 25.0, "end": 31.0, "text": " at all have actually come up with such a system and we're going to explore it today. I am a big fan"}, {"start": 31.0, "end": 40.0, "text": " of this work and I'm 100% on board with this. So they basically say peer review forms the"}, {"start": 40.0, "end": 44.0, "text": " backbone of modern scientific manuscript evaluation. If you don't know what peer review is,"}, {"start": 44.0, "end": 50.0, "text": " in machine learning right now if you have some genius idea, so here is your idea. 
That's a light bulb."}, {"start": 50.0, "end": 56.0, "text": " By the way, you write it up into an eighth page PDF. Yes, it must be PDF and yes, it must be"}, {"start": 56.0, "end": 66.0, "text": " eight pages. You submit it to a conference which is some kind of, so you submitted to be"}, {"start": 66.0, "end": 72.0, "text": " accepted in a conference proceeding. And if the conference organizers, of course,"}, {"start": 72.0, "end": 79.0, "text": " there are just a bunch of people that can't review these 1000 million applications that come by themselves."}, {"start": 79.0, "end": 86.0, "text": " So what they do is they recruit experts which are called peers. So peers are other people."}, {"start": 86.0, "end": 93.0, "text": " These are called peers and they have usually written up their own papers and they can critique"}, {"start": 93.0, "end": 98.0, "text": " each other's paper and they decide what gets accepted and what doesn't. Now I've spoken"}, {"start": 98.0, "end": 104.0, "text": " numerous times of how this is super noisy right now. They're way, way, they're not"}, {"start": 104.0, "end": 110.0, "text": " enough peers, they're not experienced enough. So whether or not your particular idea gets"}, {"start": 110.0, "end": 119.0, "text": " accepted is extremely dependent on on probability on a coin flip usually. And it's just overloaded"}, {"start": 119.0, "end": 125.0, "text": " and just makes no sense. And they asked the same question, is this fit for purpose in 2020?"}, {"start": 125.0, "end": 136.0, "text": " And we need to replace it and I support. So they, you can already see they kind of want to automate"}, {"start": 136.0, "end": 142.0, "text": " this away with their state of the art review score and the score will be an out of 10 score"}, {"start": 142.0, "end": 151.0, "text": " that can be integrated into something like archive. And you know, display right away."}, {"start": 151.0, "end": 157.0, "text": " So they have some requirements to this new system. What should it be done?"}, {"start": 157.0, "end": 163.0, "text": " It should have the ability to scale. Very important. Our current review system doesn't have this."}, {"start": 163.0, "end": 171.0, "text": " Right. Our current review system relies on other humans reviewing your paper."}, {"start": 171.0, "end": 177.0, "text": " And that means that the reviewers need to scale with the amount of papers which just isn't the case currently."}, {"start": 177.0, "end": 186.0, "text": " So a new review system must have the ability to scale. Right. And them, you know, automating the reviews away"}, {"start": 186.0, "end": 194.0, "text": " or scaling it up in a distributed fashion does this speed. Yes, because right now if I submit my manuscript"}, {"start": 194.0, "end": 201.0, "text": " for review, it takes the months to review it. And our science progress is faster than that."}, {"start": 201.0, "end": 207.0, "text": " So a speedy, more speedy version of peer review is definitely required."}, {"start": 207.0, "end": 215.0, "text": " And then consistency. And this is the most shocking part. Right. There is a, the ground 2014 NURRIPS experiment"}, {"start": 215.0, "end": 225.0, "text": " which concluded that 57% of papers accepted by one committee were rejected by another committee"}, {"start": 225.0, "end": 232.0, "text": " and vice versa reviewing the exact same paper, different committees came to completely different conclusions"}, {"start": 232.0, "end": 239.0, "text": " to an astounding degree. 
So basically you're flipping a coin of whether or not your paper gets accepted or not."}, {"start": 239.0, "end": 248.0, "text": " And I think this is just not acceptable. And so they propose these three things speed scale consistency."}, {"start": 248.0, "end": 260.0, "text": " And their new methods certainly has this. Now let's jump down here where they introduce this state of the art reviewing SOAR."}, {"start": 260.0, "end": 267.0, "text": " So they say, okay, the quality of a scientific work can be judged along three axes."}, {"start": 267.0, "end": 286.0, "text": " Fxc, significance and novelty. So there are these three pillars. Right. Fxc, which means is kind of how effective is your work in achieving the goal in machine learning"}, {"start": 286.0, "end": 300.0, "text": " that's usually to train some good classifier or something like this. Then the other one, sorry, is significance. Right."}, {"start": 300.0, "end": 318.0, "text": " So significance is how relevant is what you've done to the field. Right. And the third one is novelty. So you know, in your scientific work"}, {"start": 318.0, "end": 324.0, "text": " should be an original contribution to the knowledge of mankind. And therefore it should be novel. Right."}, {"start": 324.0, "end": 336.0, "text": " So the more of these three things you have, of course, the higher your score should be. And here in the middle is where the highest scores should be. So imagine this is kind of a landscape."}, {"start": 336.0, "end": 348.0, "text": " And so you want to grade papers along these three axes. But I have a pretty good method of of assessing these in an automated fashion."}, {"start": 348.0, "end": 362.0, "text": " So first of all, assessing efficacy. So efficacy, they say, is best assessed by determining if the proposed method achieves a new state of the art."}, {"start": 362.0, "end": 377.0, "text": " I think that's not really. I don't think you can really doubt this. I mean, this is this is kind of the gold standard of whether your paper is effective is whether or not it achieves state of the art."}, {"start": 377.0, "end": 389.0, "text": " I it might be a bit of a controversial opinion, but if a paper doesn't achieve a state of the art, it's, you know, why? Why do you even care? Like no one cares."}, {"start": 389.0, "end": 409.0, "text": " So from, they say, from an implementation perspective, they can use, they can kind of abuse a fact of the current research environment is that you don't actually have to review this yourself, but the authors themselves can be relied upon to state this repeatedly in the text."}, {"start": 409.0, "end": 424.0, "text": " And this, this is important. So the authors will state that they have state of the art many, many times in the text, if they have actually achieved it, if they haven't achieved it or not so sure about it, they probably won't repeat it as many times."}, {"start": 424.0, "end": 445.0, "text": " This is can be can kind of abuse now to distribute it basically, you know, imagine now these, these all of these reviewers, they don't they don't have to do this work anymore. They can just distribute to all the authors of their own papers, right?"}, {"start": 445.0, "end": 466.0, "text": " Because the authors in the text, by the way, the text is structures is kind of an NLP approach to reviewing. Kind of NLP mixed with game theory, right? 
So the author authors themselves, if they have state of the art, you have to do some standing and stuff, but they will put that into the text a lot."}, {"start": 466.0, "end": 487.0, "text": " So it's a bit controversial, but the authors here propose to simply count the number of word occurrences of state of the art, in the case in sensitive, very important in the text, right? It stands to reason that a higher state of the art count is preferable, of course."}, {"start": 487.0, "end": 500.0, "text": " All right, so the second thing, so this might be a bit controversial, the second thing significance, and they now make the claim significance is measured by efficacy."}, {"start": 500.0, "end": 514.0, "text": " So they simply the efficacy term, so if your paper is effective at achieving its goal, you can also say it's significant for the community, because again, significance, like if you have state of the art,"}, {"start": 514.0, "end": 531.0, "text": " then your paper is significant. If you do not have state of the art, then your paper is obviously not significant, because why should it matter if you don't have state of the art in a given task? It's useless."}, {"start": 531.0, "end": 549.0, "text": " All right, so we weigh it twice. That's pretty good. And then novelty. Now here they take much of the same approach. They say the authors probably state this, so how much they used the word novel in their manuscript will dictate."}, {"start": 549.0, "end": 571.0, "text": " So here they say, okay, they novel, wow, this is failing me. How much they used the word novel in the text will probably be an indication. I don't think so, though they do do the smart thing of the include."}, {"start": 571.0, "end": 584.0, "text": " They include the works, they include the related work section from this. Sorry, they exclude the related work section."}, {"start": 584.0, "end": 594.0, "text": " They say we make the key observation that individuals best played to make the judgments or the authors themselves, since they have likely read at least one of the works cited in bibliography."}, {"start": 594.0, "end": 608.0, "text": " I don't agree here. I think a better method would be to simply count the number of references and the lower the amount of references to relate to work, the higher the novelty, right?"}, {"start": 608.0, "end": 629.0, "text": " Because if you think, right, if this, these are current papers, right? And you place and your paper is here, right? You'll have a lot of related work, right? So it's not as novel. If you're like way out here, you'll have maybe one or two related works. So it's way more novel."}, {"start": 629.0, "end": 646.0, "text": " If you have less references, so this would be my kind of criticism of this, so this this novelty thing here, I think this term should be replaced by kind of a graph centrality measure or something the account of how many references you have would be enough probably."}, {"start": 646.0, "end": 661.0, "text": " All right, so they define their score, their score as we saw is the SOTA term weighted twice a geometric mean between that and the novelty term, which I've criticized."}, {"start": 661.0, "end": 681.0, "text": " They add the suffix out of 10 because out of 10 score is pretty interpretable. So they divide by 10 here. So, sorry, yeah, say they say that here. We attach a suffix out of 10."}, {"start": 681.0, "end": 696.0, "text": " Because that's easy to interpret. And as you saw in the Indie kind of archive implementation right here. 
Sorry, this this will be an easy to integrate right here."}, {"start": 696.0, "end": 716.0, "text": " So they even give code, right, they give code in the paper themselves of how to implement this. It's pretty pretty easy. And I think, yeah, even though it's quite a short paper."}, {"start": 716.0, "end": 736.0, "text": " It's it's thorough and it's a good new method. And I think this could revolutionize publishing and they even so as a bit of a bonus, they give the official pronunciation of state of the art reviewing, which is something like state of the art reviewing for you smooth."}, {"start": 736.0, "end": 759.0, "text": " Yeah, with that, I hope you enjoyed this. And if the authors could just be a little more subtle next time, that would be great. I guess you have to go. Yeah, nothing more. Bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=U3zmekzQ8WQ | Agent57: Outperforming the Atari Human Benchmark | DeepMind's Agent57 is the first RL agent to outperform humans in all 57 Atari benchmark games. It extends previous algorithms like Never Give Up and R2D2 by meta-learning the exploration-exploitation tradeoff controls.
https://arxiv.org/abs/2003.13350
https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark
Abstract:
Atari games have been a long-standing benchmark in the reinforcement learning (RL) community for the past decade. This benchmark was proposed to test general competency of RL algorithms. Previous work has achieved good average performance by doing outstandingly well on many games of the set, but very poorly in several of the most challenging games. We propose Agent57, the first deep RL agent that outperforms the standard human benchmark on all 57 Atari games. To achieve this result, we train a neural network which parameterizes a family of policies ranging from very exploratory to purely exploitative. We propose an adaptive mechanism to choose which policy to prioritize throughout the training process. Additionally, we utilize a novel parameterization of the architecture that allows for more consistent and stable learning.
Authors: Adrià Puigdomènech Badia, Bilal Piot, Steven Kapturowski, Pablo Sprechmann, Alex Vitvitskyi, Daniel Guo, Charles Blundell
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has been one of the hardest games for reinforcement learning agents to solve. What you're seeing is Agent 57, which is a new agent by DeepMind, that is the first one to beat all of the 57 games in the Atari suite to a superhuman performance. Some of these games have been pretty easy for RL agents, but some of them look at this one here, have been pretty hard, mainly because of the reward structure. So you see on the top edge the reward, it's not going up for a long time. This kind of game, where the reward doesn't go up for a long time, is very hard for RL agents. So Agent 57 builds on a number of previous improvements to the original DeepQ networks of DeepMind. And today we'll look into this. It's called Agent 57, as I said, because it beats all of these 57 games. They're quite diverse, and it's a cool thing that a single system can beat it. So they go into this. This is a printout of the website, so I can scribble on it. And this here, it's been cut off, but it should say, DeQN here. DeQN from 2015. 2015. All right. So this DeQN paper of 2015 was kind of the original paper that popularized this Atari benchmark and introduced neural networks to reinforcement learning, basically, that made it work. Since then there have been a number of improvements. So maybe we'll just go into what DeepQ learning is. So in reinforcement learning usually, you have an agent and agent here, and it, you have an environment over here, right? And the environment will give you an observation. Now the observation in our case would be something like the frame of a game, right? And you're here. You're a little rocket, and there is a bunch of meteors, right? And then the agent needs to somehow give back an action. So an action and the actions in the Atari benchmark are always defined. So you can, you know, in Atari, you used to have this kind of a joystick thing. You can put it up, down, left, right, or you can put it like upright, up, left, and so on. And also you have a button. I think one or two buttons. I don't actually remember, but you do, you can press at least one button. So these are the actions. Let's say there is something like 20 different actions. So all of the directions here, and then you can always press or not press a button with it. So you have to give, you set, you send back this, this action here, you say I want to, you know, put the joystick up and I want to press the button at the same time. And then the environment will give you back a, it will say, okay, we'll give you back a new observation, which would be the next frame of the game, you know, you've pressed up so you're a little rocket is a bit more forward. You've pressed the button so you fired a shot and the meteors are still here. And it will also give you back a reward. So the reward, different games give different rewards. For example, in Pac-Man, you know, every time your Pac-Man eats a little one of these dots you get a reward. But in other games, most famously games like Montezuma's Revenge, you're in this room and there are these platforms and ladders and stuff, and you're here and their opponents rolling around and there's a door over here. You like to need to go down, jump over here, get up, get some key and then go to the door. And only then will you get a reward. So games vary many, in many ways, in how you get this reward. So that's kind of the intrinsic problem here. So deep Q learning is the following. 
We have a neural network taking in the observation. So we have a neural network. Let's designate it as this. And the observation goes in here. This is the observation goes in here. And also the action goes in here. Now let's call this AI because we have different actions. And O observation at step T. And it will give you a Q value. So the Q value for the observation at time T and action I. Now you do this for every single action. So you put observation with action a J in the same network. You get an output that is the Q value for observation, the same observation with that's different action. You do this for every action. And wherever the Q value is the highest, wherever that's the highest, that's the action you go with. So what you have to do is you have to train this neural network to predict the Q value as accurate as possible. And the Q value is basically the reward that you expect from now until the end of the episode by performing this action in this situation. Right? That's Q learning. Simply predicting if I do action I right now, how much reward am I going to get? And from now until the end of the episode. That's basically it. That's deep Q and deep Q learning simply because you have a neural network doing the learning. So that was deep Q networks and they worked pretty well but they don't work for these long time horizons because you always just learn, you just see one observation, right? You kind of learn one step at a time and you rely on this Q value is propagating through from your experience. It just, it doesn't work very well for these long credit assignments. Now a significant improvement on that is this R2D2 algorithm that incorporated LSTMs or and GRUs which are recurrent neural networks. And not only does your observation go into the neural network, right? Now your history of observations. So what you what happened before is not only the current game state, right? But here you have the observation from step one, the action you did at that step, then the observation time two, the action you did at time two and so on. They now all so this is encoded and this is encoded and then you have a recurrent neural network that incorporates all of these things that happened previously, right? Into your current representation. So now not only does the agency what is happening right now, it also gets the information of what happened previously, what did it do previously and it can also now back propagate through these things and kind of learn a longer range credit assignment. Credit assignment means it gets to figure out which actions had actually an influence on the final reward. If you have if you incorporate the history, right, you can have a direct gradient flow across that history. So notably these these LSTMs or GRUs, you can let them compute over maybe 10 or 100 steps, and then you get a pretty good idea of which of these actions within those 100 steps led to which rewards. And the other thing on R2D2 is of course it is now more distributed. So this this was already here improvements to DQN, but the R2D2 agent is also distributed meaning that you have like a central instance. So this is now engineering, right? You have a central instance that is called the learner and the learner has the main weights which I'm going to designate with theta here and it just takes in experience from all of these workers. So there's worker one, worker two, worker three, or four and so on. 
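As a rough sketch of the Q-value action selection described at the start of this passage: the explanation above feeds each (observation, action) pair into the network separately, while the sketch below uses the equivalent and more common formulation where the network outputs one Q-value per action in a single pass and the agent acts greedily on the largest one. The layer sizes and the 20-action count are placeholders taken from the spoken example, not DeepMind's actual architecture, and the epsilon-greedy variant discussed a bit further down simply replaces the greedy pick with a random action a small fraction of the time.

import random
import torch
import torch.nn as nn

NUM_ACTIONS = 20  # "something like 20 different actions" in the Atari example

class QNetwork(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int = NUM_ACTIONS):
        super().__init__()
        # maps an observation to one Q-value per action, i.e. Q(o_t, a_i) for every i
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # shape: (batch, num_actions)

def epsilon_greedy_action(q_net: QNetwork, obs: torch.Tensor, epsilon: float = 0.05) -> int:
    # with small probability act randomly (explore), otherwise take the action
    # whose predicted future reward (Q-value) is highest (exploit)
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        q_values = q_net(obs.unsqueeze(0))
    return int(q_values.argmax(dim=-1).item())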
And these they will all just run episodes, they will all do work, work, work, work, work, work, work, work, work independently of each other and then send back their experience to the learner. And every now and then the learner sinks out the weights of the neural networks to the workers. So that's kind of distributed RL in this in this sense. Do you have a central learner? Then you have many, many workers doing the actual interaction with the environment. So the one of the main pitfalls of R2D2 is still it has a poor exploration exploitation strategy, which I believe it is still just kind of epsilon greedy. What does it mean? So in order to understand this, maybe consider again our screen here, right? So let's say you're here with your space ship, right? And there are, there's a meteor coming right here and one right here and there is a gold coin right here. Let's make this gold, right? So let's say you get a reward for collecting the coin, but you also get a reward for shooting the meteors, right? So what happens if you shoot right now? So if you shoot, then let's say you shoot and this meteor explodes, right? So you get a reward. Yeah, so one reward, but then the meteor right behind it will hit you, right? It's coming toward you. You'll have no way, no time to get out of the way. So one reward and then death, right? Let's make a little arrow here, right? So in total, you get one reward. Now what happens instead if you move to the right? So move, right? So the next in the next frame, the meteor will fly past you. You are over here, right? Put the gold coin is here. Now this has given you so far zero reward, right? Oops. This has given you zero reward, but then in the next frame, you know, these meteors have passed now and you are going to get that gold coin and that gives you one reward and no death, right? So you can technically go on here and maybe you'll get five more rewards down the line. So the, this is, here's the exploration exploitation dilemma. If an agent has for some reason learned that the shooting action in this situation will give it a one reward and the move action will give it zero reward, but has not learned to look past us. So this is kind of nebulous here. It has only experienced, it has only experienced one frame. It will say, wait a minute, shoot here appears to be like really good. It gives me one reward and move gives me zero reward. So from now on, I'll just always do shoot, right? Shoot, shoot, shoot. Now what you would like to do, so this is called exploitation, right? Exploitation. It has learned something that gives it a reward. So it will just do that over and over again. Whereas here you could say, I might go this way, even though it's zero reward because I can hope, right? I don't know yet, but I can hope that I will get more reward down here. This is exploration. And the question in reinforcement learning is always how to trade off these two, right? Ideally, you would want your agent to collect maximum reward that speaks for exploitation of what it has already learned. But also, you never want to discard the possibility that down the line of things that you don't yet know, there might be even more reward. And that speaks for exploration. I'll just, both are abbreviated, same, exploit, explore. This was done. So in the original deep QN formulation, and I believe also in R2D2, this is done with Epsilon greedy, which is surprisingly performing well. So in Epsilon greedy, you simply say, I'm going to have a constant Epsilon. This is Epsilon. 
This is maybe 5% or something. I'm going to simply do something at random. And the other one minus Epsilon, I'm just going to go with the thing I have already learned. And this performs pretty well, but you might imagine that there is something smarter to do. So never give up this algorithm. It kind of goes into this exploration mode where it tries to get to smarter ways to do exploration. And the keywords here are things like intrinsic motivation. So intrinsic motivation and curiosity refer to the fact that it is so in addition to the reward you get from the environment here, right? This reward right here. You can also interject at this point and say, ah, I'm going to give some, are prime, some reward of myself, right, to kind of encourage some behavior in the agent. And this here we call intrinsic intrinsic. So that means you add to the reward of the environment, you add some reward of your own that has nothing to do with the environment or not much, but just encourages certain behavior in the agent that is now also trying to maximize this. Intrinsic reward. And in curiosity and intrinsic motivation, formulations, usually you are rewarded for novelty, novelty, which means the agent is rewarded for finding things that it has not yet seen. So you, in this situation over here, you might see why this encourages the agent to go this route here. Because it says, wait a minute, there's a bunch of stuff like here, I just die, right? But there's a bunch of stuff I haven't seen yet down here. So I might want to go explore that and we give it extra intrinsic reward or prime for seeing things it hasn't seen yet. So it will learn if I do things that I have never done, I will get this sweet intrinsic reward. And then it will go explore. Now of course, it's a big engineering question of how exactly to set this intrinsic reward and there are many, many different formulations of that that fall under this term of, let's say, curiosity or something like this. Nevertheless, this never give up has has improved overall to do to using ideas like that. And now agent 57 improves again. Now how does agent 57 improve again? And it is mainly, it is mainly in the, in this what I just said. So how exactly do you apply this intrinsic reward? How exactly do you navigate the exploration, exploitation trade off? That's where agent 57 comes in. Because what they've realized is that for these different Atari games right here, some are very easy. Some you don't need much exploration. Some you need a lot. Some you needed over a large time scale. And simply one agent, one never give up agent with the same settings of this curiosity of how long it looks into the future is not going to solve all the games. So agent 57 learns how to to modulate this exploration, exploitation trade off. So let's jump into the paper a bit more. I encourage you to read the blog post that is quite thorough and the paper is a bit more technical. Sorry, let me switch over. This is the paper agent 57 up forming the Atari human benchmark by Google DeepMind. And here they say improvements to NG to never give up. So the first improvement they do is, so we've already talked about how this is classic Q learning, right? So you're trying to learn this function that gives you the Q value of an action and the state. Now since we're going to deal with intrinsic reward, in addition to extrinsic reward, it makes sense, that's what they argue, to split the Q learning function into two different parts. One part that learns the extrinsic reward. 
And then you have a parameter beta in front of it. Now beta in this case is the trade-off: how much do you want to value this intrinsic reward? Here we see our first lever on the exploitation-exploration trade-off. If an agent gets lots of reward for exploring, right, it might never exploit, and exploiting might actually be a good option in the game that you're in. So you might want to set beta small, but in other games you might want to encourage exploration to the max and therefore set beta very high. Another constant that they modulate along with that is the discount factor, which is called gamma here. So you already see here this beta, which we've already seen, and they also modulate this gamma. What does gamma do? If I have my states and actions, as we already said, so here is observation one and I do action one, and that gives me observation two and I do action two, and that gives me observation three and I do action three. And each time I get a reward, right, an extrinsic reward and an intrinsic reward. So reward one, reward two, reward three and so on. Usually an RL agent will look at these rewards, and let's say you are here, you are at observation one and you're trying to estimate your future rewards. What will be most important will be the reward that you're getting right now, right, because that's the most certain. Because this reward here that you might get two steps from now, you know, a lot of things could happen, right. You are pretty sure that if you do action one you're going to get to this state, but you're not entirely sure; you could also get to another state, and therefore you'd have to do another action, and therefore this reward here could be something different. So these algorithms have what's known as a discount factor. That means the value of a state s is going to be the sum from k equals t, that's the time t, up until some horizon, I think they call it H in the paper (you could also think of this as infinity), of the reward at step k, but discounted by this factor gamma raised to the power of k minus t. So basically this means: it's the reward at this time step, plus, let's say gamma here is 0.99, plus 0.99 times the reward at the next time step, plus 0.99 squared times the reward after that, and you see that the further into the future you look, the less value these rewards have. The little bars here indicate that you're going to value future rewards less and less. This is called a discount factor right here, and how to set it is very important, because if you set it very low, let's say you set it to 0.1, that means all you want to do is maximize the reward that you're getting in, like, the next and next-next step. You're not really looking into the future. This is very good for games that give you immediate reward for good actions, but if you set it very high, let's say 0.99, that means a reward 100 steps from now is almost the same to you as a reward one step from now, and this is very valuable for games that don't give you a reward immediately or that kind of try to trick you, as we saw before. Like if you shoot the meteor now then you get one reward, but if you don't and pass on the opportunity you might get much more later. So the discount factor is also very important to set and really depends on the game. So we have two quantities here that really depend on what kind of game it is, and also, they argue, it depends on where in the learning process you are.
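In formula form, the discounted value being described is V(s_t) = sum over k from t to H of gamma^(k-t) * r_k. A tiny worked example (the reward values are made up) shows how gamma changes what the agent cares about:

```python
def discounted_return(rewards, gamma):
    # V = r_0 + gamma * r_1 + gamma^2 * r_2 + ...
    return sum((gamma ** k) * r for k, r in enumerate(rewards))

rewards = [0.0, 0.0, 1.0, 0.0, 1.0]  # made-up reward sequence

# A myopic agent (low gamma) barely sees the later rewards...
print(discounted_return(rewards, gamma=0.1))   # ~0.0101
# ...while a far-sighted agent (high gamma) values them almost fully.
print(discounted_return(rewards, gamma=0.99))  # ~1.9407
```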
So if you're at the very beginning of the learning process you might want to have a very high intrinsic reward to go explore, and you might want to have a very low discount factor in order to learn a good immediate value function. But then as time goes on you might want to bring down the intrinsic reward, because your end goal is really to maximize the extrinsic reward, and you want to up this discount factor to look more into the future, now that you have already learned the immediate values very well. So if I had to summarize and simplify, what Agent 57 does is it builds a neural network that adjusts these two quantities across training. So it adjusts the beta and gamma across training, and it does this in a so-called bandit setting. Now there is no real good picture in this paper that I can show you, so I'm just going to have to draw. So you have an agent, right? It interacts with this environment here and it always gets these rewards. Now what you have here is a meta-controller, right? So the agent has two parameters, it has this beta and this gamma, and the meta-controller observes this interaction and it outputs values for these two constants, and it does this dynamically as the training progresses, right? So the agent will kind of change its behavior over time. Now this is actually implemented in a slightly different way, in that the meta-controller doesn't control the values directly, but it has kind of options. So what you do is you define a bunch of possibilities for beta and gamma. So you say I have strategy one; strategy one has beta at 0.1 and gamma at 0.9. Strategy two has beta at 0.2 and gamma at 0.8, and so on, right? And now the meta-controller has to choose between one of these, in this case six, different strategies across training. So it might start off, as we said, with a high beta, which might be over here, and then transition to the lower ends. And it can do so depending on the game and depending on the progress in the game. So this is dynamic, and this is the improvement over Never Give Up, over this other agent, because this other agent simply had these strategies and trained them all at the same time. And now this meta-controller here controls which strategy is currently trained and which one is used to generate the experience. So this is basically, I mean, of course they also say, well, we also increase the window of, let me go back. So this LSTM, these things I've shown you here, that incorporate experience over time: they also say, well, we increase the time window of how much experience the LSTM incorporates, and they do a bunch of other things, which I always find kind of annoying because it's always really, really hard to see where the improvements come from that they claim they made. But you know, long story short, basically they built this meta-controller to choose the strategies for the agent over time. Now of course, this meta-controller again is trained by the rewards that you get back from the environment. So the meta-controller as an action has the choice of strategy, right, and the reward it gets back comes from the agent-environment interaction, right. So in itself, it is a reinforcement learning problem. Now, to me it seems this just shifts the problem of exploration-exploitation one level higher.
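To make the meta-controller idea concrete, here is a minimal sketch of a bandit choosing among a fixed set of (beta, gamma) strategies based on recent episode returns. The strategy values, the window length, and the epsilon-greedy selection rule are illustrative assumptions; the paper itself uses a sliding-window UCB-style bandit, which the next paragraph touches on:

```python
import random

# Illustrative (beta, gamma) strategy table, from exploratory to exploitative.
STRATEGIES = [(0.30, 0.97), (0.20, 0.98), (0.10, 0.99),
              (0.05, 0.995), (0.01, 0.997), (0.00, 0.999)]

class MetaController:
    """Picks which (beta, gamma) strategy the actor should use next."""

    def __init__(self, window=50, epsilon=0.1):
        self.window = window      # how many recent episodes to score each arm on
        self.epsilon = epsilon
        self.returns = {i: [] for i in range(len(STRATEGIES))}

    def choose(self):
        untried = [i for i, r in self.returns.items() if not r]
        if untried:                              # try every strategy at least once
            return random.choice(untried)
        if random.random() < self.epsilon:       # occasionally explore a random arm
            return random.randrange(len(STRATEGIES))
        # Otherwise exploit the arm with the best recent mean extrinsic return.
        def recent_mean(i):
            recent = self.returns[i][-self.window:]
            return sum(recent) / len(recent)
        return max(self.returns, key=recent_mean)

    def update(self, strategy_index, episode_return):
        self.returns[strategy_index].append(episode_return)
```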
They use a sliding window bandit algorithm to do this, but again, you have hyper parameters there like how long is the sliding window and how does the bandit algorithm do the exploration exploitation trade off. So it seems to me you're just shifting it one level higher and it also seems like we're getting into the region of where we are meta over engineering our approaches to the specifics of this Tory benchmark because we're kind of observing all the K of these agents do this wrong, these agents do this wrong. So let's just build an agent that can do both sort of and then the kind of audastic thing I find that they open with how to measure artificial general intelligence, which I mean, come on, you're just, it's kind of a mystery right now, you're just kind of over and over and over fitting on this one benchmark. There's not really a need to make this into a story on artificial general intelligence. All right, so this was my two cents to this. I hope you enjoyed this and bye bye. | [{"start": 0.0, "end": 9.24, "text": " Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has"}, {"start": 9.24, "end": 14.84, "text": " been one of the hardest games for reinforcement learning agents to solve."}, {"start": 14.84, "end": 21.52, "text": " What you're seeing is Agent 57, which is a new agent by DeepMind, that is the first"}, {"start": 21.52, "end": 32.28, "text": " one to beat all of the 57 games in the Atari suite to a superhuman performance."}, {"start": 32.28, "end": 38.6, "text": " Some of these games have been pretty easy for RL agents, but some of them look at this one"}, {"start": 38.6, "end": 43.0, "text": " here, have been pretty hard, mainly because of the reward structure."}, {"start": 43.0, "end": 52.56, "text": " So you see on the top edge the reward, it's not going up for a long time."}, {"start": 52.56, "end": 57.88, "text": " This kind of game, where the reward doesn't go up for a long time, is very hard for RL"}, {"start": 57.88, "end": 58.88, "text": " agents."}, {"start": 58.88, "end": 66.52, "text": " So Agent 57 builds on a number of previous improvements to the original DeepQ networks"}, {"start": 66.52, "end": 68.48, "text": " of DeepMind."}, {"start": 68.48, "end": 71.4, "text": " And today we'll look into this."}, {"start": 71.4, "end": 76.48, "text": " It's called Agent 57, as I said, because it beats all of these 57 games."}, {"start": 76.48, "end": 84.36000000000001, "text": " They're quite diverse, and it's a cool thing that a single system can beat it."}, {"start": 84.36000000000001, "end": 86.08000000000001, "text": " So they go into this."}, {"start": 86.08000000000001, "end": 91.12, "text": " This is a printout of the website, so I can scribble on it."}, {"start": 91.12, "end": 95.68, "text": " And this here, it's been cut off, but it should say, DeQN here."}, {"start": 95.68, "end": 98.92, "text": " DeQN from 2015."}, {"start": 98.92, "end": 99.92, "text": " 2015."}, {"start": 99.92, "end": 102.60000000000001, "text": " All right."}, {"start": 102.60000000000001, "end": 108.8, "text": " So this DeQN paper of 2015 was kind of the original paper that popularized this Atari"}, {"start": 108.8, "end": 116.48, "text": " benchmark and introduced neural networks to reinforcement learning, basically, that made"}, {"start": 116.48, "end": 119.36, "text": " it work."}, {"start": 119.36, "end": 121.32000000000001, "text": " Since then there have been a number of improvements."}, {"start": 121.32000000000001, "end": 125.88, "text": " So maybe 
we'll just go into what DeepQ learning is."}, {"start": 125.88, "end": 133.07999999999998, "text": " So in reinforcement learning usually, you have an agent and agent here, and it, you have"}, {"start": 133.07999999999998, "end": 136.2, "text": " an environment over here, right?"}, {"start": 136.2, "end": 140.12, "text": " And the environment will give you an observation."}, {"start": 140.12, "end": 146.07999999999998, "text": " Now the observation in our case would be something like the frame of a game, right?"}, {"start": 146.07999999999998, "end": 147.07999999999998, "text": " And you're here."}, {"start": 147.07999999999998, "end": 150.6, "text": " You're a little rocket, and there is a bunch of meteors, right?"}, {"start": 150.6, "end": 156.28, "text": " And then the agent needs to somehow give back an action."}, {"start": 156.28, "end": 163.12, "text": " So an action and the actions in the Atari benchmark are always defined."}, {"start": 163.12, "end": 169.28, "text": " So you can, you know, in Atari, you used to have this kind of a joystick thing."}, {"start": 169.28, "end": 176.35999999999999, "text": " You can put it up, down, left, right, or you can put it like upright, up, left, and so"}, {"start": 176.35999999999999, "end": 177.35999999999999, "text": " on."}, {"start": 177.35999999999999, "end": 179.72, "text": " And also you have a button."}, {"start": 179.72, "end": 182.0, "text": " I think one or two buttons."}, {"start": 182.0, "end": 187.4, "text": " I don't actually remember, but you do, you can press at least one button."}, {"start": 187.4, "end": 188.64, "text": " So these are the actions."}, {"start": 188.64, "end": 192.88, "text": " Let's say there is something like 20 different actions."}, {"start": 192.88, "end": 198.4, "text": " So all of the directions here, and then you can always press or not press a button with"}, {"start": 198.4, "end": 201.24, "text": " it."}, {"start": 201.24, "end": 206.12, "text": " So you have to give, you set, you send back this, this action here, you say I want to,"}, {"start": 206.12, "end": 210.56, "text": " you know, put the joystick up and I want to press the button at the same time."}, {"start": 210.56, "end": 216.12, "text": " And then the environment will give you back a, it will say, okay, we'll give you back"}, {"start": 216.12, "end": 221.48000000000002, "text": " a new observation, which would be the next frame of the game, you know, you've pressed"}, {"start": 221.48000000000002, "end": 224.16, "text": " up so you're a little rocket is a bit more forward."}, {"start": 224.16, "end": 229.32, "text": " You've pressed the button so you fired a shot and the meteors are still here."}, {"start": 229.32, "end": 232.6, "text": " And it will also give you back a reward."}, {"start": 232.6, "end": 238.64, "text": " So the reward, different games give different rewards."}, {"start": 238.64, "end": 246.24, "text": " For example, in Pac-Man, you know, every time your Pac-Man eats a little one of these dots"}, {"start": 246.24, "end": 247.92, "text": " you get a reward."}, {"start": 247.92, "end": 253.79999999999998, "text": " But in other games, most famously games like Montezuma's Revenge, you're in this room"}, {"start": 253.79999999999998, "end": 259.48, "text": " and there are these platforms and ladders and stuff, and you're here and their opponents"}, {"start": 259.48, "end": 262.08, "text": " rolling around and there's a door over here."}, {"start": 262.08, "end": 267.35999999999996, "text": " You like to need to go down, jump 
over here, get up, get some key and then go to the door."}, {"start": 267.35999999999996, "end": 271.2, "text": " And only then will you get a reward."}, {"start": 271.2, "end": 277.59999999999997, "text": " So games vary many, in many ways, in how you get this reward."}, {"start": 277.59999999999997, "end": 281.03999999999996, "text": " So that's kind of the intrinsic problem here."}, {"start": 281.03999999999996, "end": 284.59999999999997, "text": " So deep Q learning is the following."}, {"start": 284.59999999999997, "end": 288.36, "text": " We have a neural network taking in the observation."}, {"start": 288.36, "end": 289.59999999999997, "text": " So we have a neural network."}, {"start": 289.59999999999997, "end": 291.47999999999996, "text": " Let's designate it as this."}, {"start": 291.48, "end": 293.96000000000004, "text": " And the observation goes in here."}, {"start": 293.96000000000004, "end": 296.68, "text": " This is the observation goes in here."}, {"start": 296.68, "end": 299.52000000000004, "text": " And also the action goes in here."}, {"start": 299.52000000000004, "end": 302.32, "text": " Now let's call this AI because we have different actions."}, {"start": 302.32, "end": 306.28000000000003, "text": " And O observation at step T."}, {"start": 306.28000000000003, "end": 310.04, "text": " And it will give you a Q value."}, {"start": 310.04, "end": 315.32, "text": " So the Q value for the observation at time T and action I."}, {"start": 315.32, "end": 318.24, "text": " Now you do this for every single action."}, {"start": 318.24, "end": 325.0, "text": " So you put observation with action a J in the same network."}, {"start": 325.0, "end": 331.40000000000003, "text": " You get an output that is the Q value for observation, the same observation with that's"}, {"start": 331.40000000000003, "end": 332.40000000000003, "text": " different action."}, {"start": 332.40000000000003, "end": 334.40000000000003, "text": " You do this for every action."}, {"start": 334.40000000000003, "end": 340.72, "text": " And wherever the Q value is the highest, wherever that's the highest, that's the action"}, {"start": 340.72, "end": 342.0, "text": " you go with."}, {"start": 342.0, "end": 348.12, "text": " So what you have to do is you have to train this neural network to predict the Q value as"}, {"start": 348.12, "end": 349.12, "text": " accurate as possible."}, {"start": 349.12, "end": 356.84, "text": " And the Q value is basically the reward that you expect from now until the end of the"}, {"start": 356.84, "end": 361.24, "text": " episode by performing this action in this situation."}, {"start": 361.24, "end": 362.24, "text": " Right?"}, {"start": 362.24, "end": 364.44, "text": " That's Q learning."}, {"start": 364.44, "end": 371.96, "text": " Simply predicting if I do action I right now, how much reward am I going to get?"}, {"start": 371.96, "end": 375.28, "text": " And from now until the end of the episode."}, {"start": 375.28, "end": 379.84, "text": " That's basically it."}, {"start": 379.84, "end": 385.79999999999995, "text": " That's deep Q and deep Q learning simply because you have a neural network doing the learning."}, {"start": 385.79999999999995, "end": 390.56, "text": " So that was deep Q networks and they worked pretty well but they don't work for these long"}, {"start": 390.56, "end": 396.12, "text": " time horizons because you always just learn, you just see one observation, right?"}, {"start": 396.12, "end": 403.12, "text": " You kind of learn one step at a time and you 
rely on this Q value is propagating through"}, {"start": 403.12, "end": 404.44, "text": " from your experience."}, {"start": 404.44, "end": 408.48, "text": " It just, it doesn't work very well for these long credit assignments."}, {"start": 408.48, "end": 417.88, "text": " Now a significant improvement on that is this R2D2 algorithm that incorporated LSTMs or"}, {"start": 417.88, "end": 420.96, "text": " and GRUs which are recurrent neural networks."}, {"start": 420.96, "end": 427.32, "text": " And not only does your observation go into the neural network, right?"}, {"start": 427.32, "end": 430.71999999999997, "text": " Now your history of observations."}, {"start": 430.71999999999997, "end": 435.15999999999997, "text": " So what you what happened before is not only the current game state, right?"}, {"start": 435.15999999999997, "end": 441.24, "text": " But here you have the observation from step one, the action you did at that step, then"}, {"start": 441.24, "end": 446.91999999999996, "text": " the observation time two, the action you did at time two and so on."}, {"start": 446.92, "end": 454.48, "text": " They now all so this is encoded and this is encoded and then you have a recurrent neural"}, {"start": 454.48, "end": 461.68, "text": " network that incorporates all of these things that happened previously, right?"}, {"start": 461.68, "end": 464.40000000000003, "text": " Into your current representation."}, {"start": 464.40000000000003, "end": 473.20000000000005, "text": " So now not only does the agency what is happening right now, it also gets the information of"}, {"start": 473.2, "end": 481.8, "text": " what happened previously, what did it do previously and it can also now back propagate through"}, {"start": 481.8, "end": 486.32, "text": " these things and kind of learn a longer range credit assignment."}, {"start": 486.32, "end": 493.96, "text": " Credit assignment means it gets to figure out which actions had actually an influence on"}, {"start": 493.96, "end": 496.88, "text": " the final reward."}, {"start": 496.88, "end": 503.44, "text": " If you have if you incorporate the history, right, you can have a direct gradient flow across"}, {"start": 503.44, "end": 504.64, "text": " that history."}, {"start": 504.64, "end": 514.16, "text": " So notably these these LSTMs or GRUs, you can let them compute over maybe 10 or 100 steps,"}, {"start": 514.16, "end": 520.12, "text": " and then you get a pretty good idea of which of these actions within those 100 steps led"}, {"start": 520.12, "end": 523.8, "text": " to which rewards."}, {"start": 523.8, "end": 531.16, "text": " And the other thing on R2D2 is of course it is now more distributed."}, {"start": 531.16, "end": 537.3199999999999, "text": " So this this was already here improvements to DQN, but the R2D2 agent is also distributed"}, {"start": 537.3199999999999, "end": 540.24, "text": " meaning that you have like a central instance."}, {"start": 540.24, "end": 541.4799999999999, "text": " So this is now engineering, right?"}, {"start": 541.4799999999999, "end": 549.1999999999999, "text": " You have a central instance that is called the learner and the learner has the main weights"}, {"start": 549.2, "end": 555.88, "text": " which I'm going to designate with theta here and it just takes in experience from all of"}, {"start": 555.88, "end": 557.1600000000001, "text": " these workers."}, {"start": 557.1600000000001, "end": 561.84, "text": " So there's worker one, worker two, worker three, or four and so on."}, {"start": 561.84, "end": 
568.24, "text": " And these they will all just run episodes, they will all do work, work, work, work, work,"}, {"start": 568.24, "end": 572.1600000000001, "text": " work, work, work, work independently of each other and then send back their experience"}, {"start": 572.1600000000001, "end": 573.44, "text": " to the learner."}, {"start": 573.44, "end": 579.1600000000001, "text": " And every now and then the learner sinks out the weights of the neural networks to the"}, {"start": 579.1600000000001, "end": 580.1600000000001, "text": " workers."}, {"start": 580.1600000000001, "end": 583.5200000000001, "text": " So that's kind of distributed RL in this in this sense."}, {"start": 583.5200000000001, "end": 584.7600000000001, "text": " Do you have a central learner?"}, {"start": 584.7600000000001, "end": 593.96, "text": " Then you have many, many workers doing the actual interaction with the environment."}, {"start": 593.96, "end": 607.64, "text": " So the one of the main pitfalls of R2D2 is still it has a poor exploration exploitation"}, {"start": 607.64, "end": 613.2, "text": " strategy, which I believe it is still just kind of epsilon greedy."}, {"start": 613.2, "end": 614.36, "text": " What does it mean?"}, {"start": 614.36, "end": 623.9200000000001, "text": " So in order to understand this, maybe consider again our screen here, right?"}, {"start": 623.92, "end": 628.7199999999999, "text": " So let's say you're here with your space ship, right?"}, {"start": 628.7199999999999, "end": 636.52, "text": " And there are, there's a meteor coming right here and one right here and there is a gold"}, {"start": 636.52, "end": 638.24, "text": " coin right here."}, {"start": 638.24, "end": 641.36, "text": " Let's make this gold, right?"}, {"start": 641.36, "end": 646.12, "text": " So let's say you get a reward for collecting the coin, but you also get a reward for shooting"}, {"start": 646.12, "end": 648.9599999999999, "text": " the meteors, right?"}, {"start": 648.9599999999999, "end": 652.3199999999999, "text": " So what happens if you shoot right now?"}, {"start": 652.32, "end": 664.8000000000001, "text": " So if you shoot, then let's say you shoot and this meteor explodes, right?"}, {"start": 664.8000000000001, "end": 666.12, "text": " So you get a reward."}, {"start": 666.12, "end": 672.6400000000001, "text": " Yeah, so one reward, but then the meteor right behind it will hit you, right?"}, {"start": 672.6400000000001, "end": 673.8000000000001, "text": " It's coming toward you."}, {"start": 673.8000000000001, "end": 676.36, "text": " You'll have no way, no time to get out of the way."}, {"start": 676.36, "end": 680.1600000000001, "text": " So one reward and then death, right?"}, {"start": 680.16, "end": 683.56, "text": " Let's make a little arrow here, right?"}, {"start": 683.56, "end": 687.4, "text": " So in total, you get one reward."}, {"start": 687.4, "end": 692.3199999999999, "text": " Now what happens instead if you move to the right?"}, {"start": 692.3199999999999, "end": 694.48, "text": " So move, right?"}, {"start": 694.48, "end": 699.92, "text": " So the next in the next frame, the meteor will fly past you."}, {"start": 699.92, "end": 701.88, "text": " You are over here, right?"}, {"start": 701.88, "end": 704.0799999999999, "text": " Put the gold coin is here."}, {"start": 704.0799999999999, "end": 707.48, "text": " Now this has given you so far zero reward, right?"}, {"start": 707.48, "end": 708.56, "text": " Oops."}, {"start": 708.56, "end": 718.0, "text": " This has given you zero reward, 
but then in the next frame, you know, these meteors have"}, {"start": 718.0, "end": 727.0799999999999, "text": " passed now and you are going to get that gold coin and that gives you one reward and no"}, {"start": 727.0799999999999, "end": 728.0799999999999, "text": " death, right?"}, {"start": 728.0799999999999, "end": 734.52, "text": " So you can technically go on here and maybe you'll get five more rewards down the line."}, {"start": 734.52, "end": 740.16, "text": " So the, this is, here's the exploration exploitation dilemma."}, {"start": 740.16, "end": 746.64, "text": " If an agent has for some reason learned that the shooting action in this situation will"}, {"start": 746.64, "end": 753.48, "text": " give it a one reward and the move action will give it zero reward, but has not learned"}, {"start": 753.48, "end": 755.0799999999999, "text": " to look past us."}, {"start": 755.0799999999999, "end": 757.4399999999999, "text": " So this is kind of nebulous here."}, {"start": 757.4399999999999, "end": 764.4399999999999, "text": " It has only experienced, it has only experienced one frame."}, {"start": 764.44, "end": 773.6800000000001, "text": " It will say, wait a minute, shoot here appears to be like really good."}, {"start": 773.6800000000001, "end": 777.1600000000001, "text": " It gives me one reward and move gives me zero reward."}, {"start": 777.1600000000001, "end": 781.0400000000001, "text": " So from now on, I'll just always do shoot, right?"}, {"start": 781.0400000000001, "end": 783.5200000000001, "text": " Shoot, shoot, shoot."}, {"start": 783.5200000000001, "end": 788.08, "text": " Now what you would like to do, so this is called exploitation, right?"}, {"start": 788.08, "end": 790.32, "text": " Exploitation."}, {"start": 790.32, "end": 794.32, "text": " It has learned something that gives it a reward."}, {"start": 794.32, "end": 798.12, "text": " So it will just do that over and over again."}, {"start": 798.12, "end": 806.2800000000001, "text": " Whereas here you could say, I might go this way, even though it's zero reward because I"}, {"start": 806.2800000000001, "end": 807.88, "text": " can hope, right?"}, {"start": 807.88, "end": 813.36, "text": " I don't know yet, but I can hope that I will get more reward down here."}, {"start": 813.36, "end": 816.0, "text": " This is exploration."}, {"start": 816.0, "end": 822.0400000000001, "text": " And the question in reinforcement learning is always how to trade off these two, right?"}, {"start": 822.04, "end": 829.4399999999999, "text": " Ideally, you would want your agent to collect maximum reward that speaks for exploitation"}, {"start": 829.4399999999999, "end": 831.68, "text": " of what it has already learned."}, {"start": 831.68, "end": 839.0799999999999, "text": " But also, you never want to discard the possibility that down the line of things that you don't"}, {"start": 839.0799999999999, "end": 843.64, "text": " yet know, there might be even more reward."}, {"start": 843.64, "end": 845.64, "text": " And that speaks for exploration."}, {"start": 845.64, "end": 853.0, "text": " I'll just, both are abbreviated, same, exploit, explore."}, {"start": 853.0, "end": 855.84, "text": " This was done."}, {"start": 855.84, "end": 863.0, "text": " So in the original deep QN formulation, and I believe also in R2D2, this is done with"}, {"start": 863.0, "end": 870.0, "text": " Epsilon greedy, which is surprisingly performing well."}, {"start": 870.0, "end": 875.8, "text": " So in Epsilon greedy, you simply say, I'm going to have a constant 
Epsilon."}, {"start": 875.8, "end": 879.08, "text": " This is Epsilon."}, {"start": 879.08, "end": 882.96, "text": " This is maybe 5% or something."}, {"start": 882.96, "end": 886.96, "text": " I'm going to simply do something at random."}, {"start": 886.96, "end": 895.32, "text": " And the other one minus Epsilon, I'm just going to go with the thing I have already learned."}, {"start": 895.32, "end": 903.0, "text": " And this performs pretty well, but you might imagine that there is something smarter to do."}, {"start": 903.0, "end": 907.88, "text": " So never give up this algorithm."}, {"start": 907.88, "end": 916.72, "text": " It kind of goes into this exploration mode where it tries to get to smarter ways to do"}, {"start": 916.72, "end": 918.2, "text": " exploration."}, {"start": 918.2, "end": 924.08, "text": " And the keywords here are things like intrinsic motivation."}, {"start": 924.08, "end": 934.24, "text": " So intrinsic motivation and curiosity refer to the fact that it is so in addition to the"}, {"start": 934.24, "end": 938.4000000000001, "text": " reward you get from the environment here, right?"}, {"start": 938.4000000000001, "end": 941.08, "text": " This reward right here."}, {"start": 941.08, "end": 948.9200000000001, "text": " You can also interject at this point and say, ah, I'm going to give some, are prime, some"}, {"start": 948.92, "end": 955.1999999999999, "text": " reward of myself, right, to kind of encourage some behavior in the agent."}, {"start": 955.1999999999999, "end": 962.56, "text": " And this here we call intrinsic intrinsic."}, {"start": 962.56, "end": 968.92, "text": " So that means you add to the reward of the environment, you add some reward of your own"}, {"start": 968.92, "end": 975.28, "text": " that has nothing to do with the environment or not much, but just encourages certain behavior"}, {"start": 975.28, "end": 978.8, "text": " in the agent that is now also trying to maximize this."}, {"start": 978.8, "end": 982.16, "text": " Intrinsic reward."}, {"start": 982.16, "end": 989.92, "text": " And in curiosity and intrinsic motivation, formulations, usually you are rewarded for novelty,"}, {"start": 989.92, "end": 998.76, "text": " novelty, which means the agent is rewarded for finding things that it has not yet seen."}, {"start": 998.76, "end": 1004.76, "text": " So you, in this situation over here, you might see why this encourages the agent to go"}, {"start": 1004.76, "end": 1006.56, "text": " this route here."}, {"start": 1006.56, "end": 1010.92, "text": " Because it says, wait a minute, there's a bunch of stuff like here, I just die, right?"}, {"start": 1010.92, "end": 1014.3199999999999, "text": " But there's a bunch of stuff I haven't seen yet down here."}, {"start": 1014.3199999999999, "end": 1022.64, "text": " So I might want to go explore that and we give it extra intrinsic reward or prime for seeing"}, {"start": 1022.64, "end": 1024.8799999999999, "text": " things it hasn't seen yet."}, {"start": 1024.8799999999999, "end": 1030.6399999999999, "text": " So it will learn if I do things that I have never done, I will get this sweet intrinsic"}, {"start": 1030.6399999999999, "end": 1032.04, "text": " reward."}, {"start": 1032.04, "end": 1034.2, "text": " And then it will go explore."}, {"start": 1034.2, "end": 1041.96, "text": " Now of course, it's a big engineering question of how exactly to set this intrinsic reward"}, {"start": 1041.96, "end": 1048.2, "text": " and there are many, many different formulations of that that fall under this 
term of, let's"}, {"start": 1048.2, "end": 1052.64, "text": " say, curiosity or something like this."}, {"start": 1052.64, "end": 1061.32, "text": " Nevertheless, this never give up has has improved overall to do to using ideas like that."}, {"start": 1061.32, "end": 1065.24, "text": " And now agent 57 improves again."}, {"start": 1065.24, "end": 1069.8, "text": " Now how does agent 57 improve again?"}, {"start": 1069.8, "end": 1079.24, "text": " And it is mainly, it is mainly in the, in this what I just said."}, {"start": 1079.24, "end": 1082.6799999999998, "text": " So how exactly do you apply this intrinsic reward?"}, {"start": 1082.6799999999998, "end": 1087.8, "text": " How exactly do you navigate the exploration, exploitation trade off?"}, {"start": 1087.8, "end": 1090.36, "text": " That's where agent 57 comes in."}, {"start": 1090.36, "end": 1096.1599999999999, "text": " Because what they've realized is that for these different Atari games right here, some are"}, {"start": 1096.1599999999999, "end": 1097.8799999999999, "text": " very easy."}, {"start": 1097.8799999999999, "end": 1099.9199999999998, "text": " Some you don't need much exploration."}, {"start": 1099.9199999999998, "end": 1101.76, "text": " Some you need a lot."}, {"start": 1101.76, "end": 1104.6, "text": " Some you needed over a large time scale."}, {"start": 1104.6, "end": 1112.3999999999999, "text": " And simply one agent, one never give up agent with the same settings of this curiosity of"}, {"start": 1112.3999999999999, "end": 1117.08, "text": " how long it looks into the future is not going to solve all the games."}, {"start": 1117.08, "end": 1128.0, "text": " So agent 57 learns how to to modulate this exploration, exploitation trade off."}, {"start": 1128.0, "end": 1131.04, "text": " So let's jump into the paper a bit more."}, {"start": 1131.04, "end": 1139.6, "text": " I encourage you to read the blog post that is quite thorough and the paper is a bit more"}, {"start": 1139.6, "end": 1140.6, "text": " technical."}, {"start": 1140.6, "end": 1143.52, "text": " Sorry, let me switch over."}, {"start": 1143.52, "end": 1150.76, "text": " This is the paper agent 57 up forming the Atari human benchmark by Google DeepMind."}, {"start": 1150.76, "end": 1161.0, "text": " And here they say improvements to NG to never give up."}, {"start": 1161.0, "end": 1167.56, "text": " So the first improvement they do is, so we've already talked about how this is classic"}, {"start": 1167.56, "end": 1169.52, "text": " Q learning, right?"}, {"start": 1169.52, "end": 1176.32, "text": " So you're trying to learn this function that gives you the Q value of an action and the"}, {"start": 1176.32, "end": 1178.16, "text": " state."}, {"start": 1178.16, "end": 1186.04, "text": " Now since we're going to deal with intrinsic reward, in addition to extrinsic reward, it"}, {"start": 1186.04, "end": 1192.28, "text": " makes sense, that's what they argue, to split the Q learning function into two different"}, {"start": 1192.28, "end": 1193.44, "text": " parts."}, {"start": 1193.44, "end": 1196.56, "text": " One part that learns the extrinsic reward."}, {"start": 1196.56, "end": 1203.56, "text": " And then you have a parameter beta in front of it."}, {"start": 1203.56, "end": 1211.12, "text": " Now beta in this case is the trade off."}, {"start": 1211.12, "end": 1217.6799999999998, "text": " How much do you want to value this intrinsic reward?"}, {"start": 1217.6799999999998, "end": 1221.52, "text": " Here we see our first lever on the exploitation 
exploration trade off."}, {"start": 1221.52, "end": 1230.08, "text": " If an agent gets lots of reward for exploring, right, it might never exploit and exploiting"}, {"start": 1230.08, "end": 1234.16, "text": " might actually be a good option in the game that you're in."}, {"start": 1234.16, "end": 1241.44, "text": " So you might want to set beta small, but in other games you might want to encourage exploration"}, {"start": 1241.44, "end": 1248.28, "text": " to the max and therefore set beta very high."}, {"start": 1248.28, "end": 1263.28, "text": " After constant along with that that they modulate is the discount factor, which is called this"}, {"start": 1263.28, "end": 1265.72, "text": " gamma here."}, {"start": 1265.72, "end": 1272.36, "text": " So you already see here this beta we've already seen and they also modulate this gamma."}, {"start": 1272.36, "end": 1280.04, "text": " What does gamma do if I have my state and action we already said."}, {"start": 1280.04, "end": 1287.3999999999999, "text": " So here is an observation one and I do action one and that gives me observation two and I"}, {"start": 1287.3999999999999, "end": 1294.04, "text": " do action two and that gives me observation three and I do action three."}, {"start": 1294.04, "end": 1299.76, "text": " And each time I get a reward, right, an extrinsic reward and an intrinsic reward."}, {"start": 1299.76, "end": 1307.08, "text": " So reward one, reward two, reward three and so on."}, {"start": 1307.08, "end": 1317.52, "text": " Usually in our L agent we'll look at these rewards and let's say you are here, you are at observation"}, {"start": 1317.52, "end": 1323.44, "text": " one and you're trying to estimate your future rewards."}, {"start": 1323.44, "end": 1327.8, "text": " What will be most important will be the reward that you're getting right now, right, because"}, {"start": 1327.8, "end": 1334.84, "text": " that's the most sure because this reward here that you might get two steps from now, you"}, {"start": 1334.84, "end": 1337.32, "text": " know a lot of things could happen, right."}, {"start": 1337.32, "end": 1341.44, "text": " You are pretty sure that if you do action one you're going to get to this state but you're"}, {"start": 1341.44, "end": 1346.48, "text": " not entirely sure you could also get to another state and therefore you had to do another"}, {"start": 1346.48, "end": 1351.72, "text": " action and therefore this reward here could be something different."}, {"start": 1351.72, "end": 1359.2, "text": " So these algorithms are having what's known as a discount factor that means the value"}, {"start": 1359.2, "end": 1370.08, "text": " of a state of a state s is going to be the sum from time zero, let's say k equals t,"}, {"start": 1370.08, "end": 1378.0, "text": " that's state that time t up until some horizon, I think they call it h in the paper."}, {"start": 1378.0, "end": 1385.84, "text": " You could also think of this as infinity of the reward at step k but discounted by this"}, {"start": 1385.84, "end": 1400.84, "text": " factor and you raise it to the power of k usually or t minus k minus t."}, {"start": 1400.84, "end": 1412.9199999999998, "text": " So basically means that this is if t is one, so it's the reward at this time step plus,"}, {"start": 1412.9199999999998, "end": 1427.6399999999999, "text": " let's say gamma here is 0.99, plus 0.99 the reward at the next time step plus 0.99 squared"}, {"start": 1427.64, "end": 1434.0, "text": " the reward of that after that and you see that the more the more 
into the future you look"}, {"start": 1434.0, "end": 1437.5600000000002, "text": " the less value these rewards have."}, {"start": 1437.5600000000002, "end": 1444.88, "text": " So little bars here indicate that you're going to value future rewards less and less."}, {"start": 1444.88, "end": 1452.2, "text": " This is called a discount factor right here and it's how to set it is very important"}, {"start": 1452.2, "end": 1459.2, "text": " because if you set it very low, let's say you set it to 0.1 that means all that you"}, {"start": 1459.2, "end": 1465.2, "text": " want to do is maximize the reward that you're getting in the like the next and next next"}, {"start": 1465.2, "end": 1466.2, "text": " step."}, {"start": 1466.2, "end": 1470.1200000000001, "text": " You're not really looking into the future."}, {"start": 1470.1200000000001, "end": 1479.0, "text": " This is very good for games that give you immediate reward for good actions but if you"}, {"start": 1479.0, "end": 1489.68, "text": " set it very high, let's say 0.99, that means a reward 100 steps from now, it's almost"}, {"start": 1489.68, "end": 1496.08, "text": " the same to you as a reward one step from now and this is very valuable for games that"}, {"start": 1496.08, "end": 1502.72, "text": " don't give you a reward immediately or that kind of trying to trick you as we saw before."}, {"start": 1502.72, "end": 1509.24, "text": " Like if you shoot the meteor now then you get one reward but if you don't and pass on"}, {"start": 1509.24, "end": 1512.48, "text": " the opportunity you might get much more later."}, {"start": 1512.48, "end": 1519.56, "text": " So the modulation of the discount factor is also very important to set and really depends"}, {"start": 1519.56, "end": 1520.56, "text": " on the game."}, {"start": 1520.56, "end": 1527.04, "text": " So we have two quantities here that really depend on what kind of game it is and also"}, {"start": 1527.04, "end": 1532.8, "text": " they argue it also depends where in the learning process you are."}, {"start": 1532.8, "end": 1539.6399999999999, "text": " So if you're at the very beginning of the learning process you might want to have a very"}, {"start": 1539.6399999999999, "end": 1547.52, "text": " high goal, the high intrinsic reward to go explore and you might want to get have a very"}, {"start": 1547.52, "end": 1555.36, "text": " low discount factor in order to learn a good immediate value function but then as time"}, {"start": 1555.36, "end": 1562.6799999999998, "text": " goes on you might want to bring down the intrinsic reward because now you really want actually"}, {"start": 1562.6799999999998, "end": 1567.32, "text": " because your end goal is to maximize the extrinsic reward and you want to up this discount"}, {"start": 1567.32, "end": 1573.6399999999999, "text": " factor to look more into the future now that you have already learned the immediate values"}, {"start": 1573.6399999999999, "end": 1575.8, "text": " very well."}, {"start": 1575.8, "end": 1589.76, "text": " So if I had to summarize and simplify what Agent 57 does is it builds a neural network that"}, {"start": 1589.76, "end": 1597.36, "text": " adjusts these two quantities across the training."}, {"start": 1597.36, "end": 1605.36, "text": " So it adjusts the beta and gamma across the training and it does this in a so called"}, {"start": 1605.36, "end": 1608.4399999999998, "text": " bandit setting."}, {"start": 1608.4399999999998, "end": 1614.9199999999998, "text": " Now there is no real good picture in this paper 
that I can show you so I'm just going"}, {"start": 1614.9199999999998, "end": 1616.6, "text": " to have to draw."}, {"start": 1616.6, "end": 1618.9199999999998, "text": " So you have an agent, right?"}, {"start": 1618.9199999999998, "end": 1626.1599999999999, "text": " It interacts with this environment here and it always gets these rewards."}, {"start": 1626.1599999999999, "end": 1630.56, "text": " Now what you have here is a meta controller, right?"}, {"start": 1630.56, "end": 1638.2, "text": " So the agent has two parameters, it has this beta and this gamma and the meta controller."}, {"start": 1638.2, "end": 1648.12, "text": " Now it observes this, it observes this interaction and it outputs values for these two constants"}, {"start": 1648.12, "end": 1652.96, "text": " and it does this dynamically as the training progresses, right?"}, {"start": 1652.96, "end": 1663.48, "text": " So the agent will kind of learn, the agent will change its behavior over time."}, {"start": 1663.48, "end": 1669.1200000000001, "text": " Now this is actually implemented in a slightly different way in that the meta controller"}, {"start": 1669.1200000000001, "end": 1673.72, "text": " doesn't control the values directly but it has kind of options."}, {"start": 1673.72, "end": 1680.64, "text": " So what you do is you define a bunch of possibilities for beta and gamma."}, {"start": 1680.64, "end": 1687.2, "text": " So you say I have strategy one, strategy one has beta at 0.1 and gamma at 0.9."}, {"start": 1687.2, "end": 1692.5600000000002, "text": " Strategy two has beta at 0.2 and gamma at 0.8 and so on, right?"}, {"start": 1692.5600000000002, "end": 1700.72, "text": " And now the meta controller has to choose between one of these in this case six different"}, {"start": 1700.72, "end": 1702.88, "text": " strategies across training."}, {"start": 1702.88, "end": 1709.2, "text": " So it might start off as we said with a high beta which might be over here, 0.9.1."}, {"start": 1709.2, "end": 1717.24, "text": " It might start off with a high beta and then transition to the lower ends."}, {"start": 1717.24, "end": 1723.72, "text": " And it can do so depending on the game and depending on the progress in the game."}, {"start": 1723.72, "end": 1729.68, "text": " So this is this is dynamic and this is the improvement over never give up over this"}, {"start": 1729.68, "end": 1734.72, "text": " other agent because this other agent simply had these strategies and trained them at the"}, {"start": 1734.72, "end": 1736.64, "text": " same time."}, {"start": 1736.64, "end": 1743.64, "text": " And now this meta controller here controls which strategy is currently trained and which"}, {"start": 1743.64, "end": 1748.4, "text": " one is used to generate the experience."}, {"start": 1748.4, "end": 1759.2, "text": " So this is basically, I mean there's a, they also, of course they also say, well we also"}, {"start": 1759.2, "end": 1764.92, "text": " increase the window of, let me go back."}, {"start": 1764.92, "end": 1771.8000000000002, "text": " So this LSTM, this, this I've shown you these things here, that incorporate experience"}, {"start": 1771.8000000000002, "end": 1772.8000000000002, "text": " over time."}, {"start": 1772.8000000000002, "end": 1779.0, "text": " They also say, well we increase the, the window of how long the LSTM, the time window"}, {"start": 1779.0, "end": 1786.1200000000001, "text": " of how much experience is incorporated and they do a bunch of other things, which I always"}, {"start": 1786.1200000000001, 
"end": 1791.64, "text": " find kind of annoying because it's always really, really hard to see where the improvements"}, {"start": 1791.64, "end": 1795.5200000000002, "text": " come from that they claim they made."}, {"start": 1795.5200000000002, "end": 1802.64, "text": " So but you know, boring that basically they built this meta controller to choose the"}, {"start": 1802.64, "end": 1807.16, "text": " strategies for the agent over time."}, {"start": 1807.16, "end": 1816.1200000000001, "text": " Now of course, this meta controller again is trained by the rewards that you get back"}, {"start": 1816.12, "end": 1822.3999999999999, "text": " from the environment."}, {"start": 1822.3999999999999, "end": 1827.1599999999999, "text": " So the meta controller as an action has the choice of strategy, right, and the reward it"}, {"start": 1827.1599999999999, "end": 1831.6399999999999, "text": " gets back from the agent environment interaction, right."}, {"start": 1831.6399999999999, "end": 1835.6799999999998, "text": " So in itself, it is a reinforcement learning problem."}, {"start": 1835.6799999999998, "end": 1846.0, "text": " Now why, like, to me it seems just shifts the, it just shifts the problem."}, {"start": 1846.0, "end": 1850.72, "text": " Some of exploration exploitation, one level higher."}, {"start": 1850.72, "end": 1856.8, "text": " They use a sliding window bandit algorithm to do this, but again, you have hyper parameters"}, {"start": 1856.8, "end": 1863.04, "text": " there like how long is the sliding window and how does the bandit algorithm do the exploration"}, {"start": 1863.04, "end": 1864.04, "text": " exploitation trade off."}, {"start": 1864.04, "end": 1869.2, "text": " So it seems to me you're just shifting it one level higher and it also seems like we're"}, {"start": 1869.2, "end": 1878.8400000000001, "text": " getting into the region of where we are meta over engineering our approaches to the specifics"}, {"start": 1878.8400000000001, "end": 1887.0800000000002, "text": " of this Tory benchmark because we're kind of observing all the K of these agents do this"}, {"start": 1887.0800000000002, "end": 1888.56, "text": " wrong, these agents do this wrong."}, {"start": 1888.56, "end": 1897.8, "text": " So let's just build an agent that can do both sort of and then the kind of audastic thing"}, {"start": 1897.8, "end": 1904.3999999999999, "text": " I find that they open with how to measure artificial general intelligence, which I mean,"}, {"start": 1904.3999999999999, "end": 1908.96, "text": " come on, you're just, it's kind of a mystery right now, you're just kind of over and over"}, {"start": 1908.96, "end": 1912.56, "text": " and over fitting on this one benchmark."}, {"start": 1912.56, "end": 1921.44, "text": " There's not really a need to make this into a story on artificial general intelligence."}, {"start": 1921.44, "end": 1924.76, "text": " All right, so this was my two cents to this."}, {"start": 1924.76, "end": 1927.4, "text": " I hope you enjoyed this and bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=lmAj0SU_bW0 | Axial Attention & MetNet: A Neural Weather Model for Precipitation Forecasting | MetNet is a predictive neural network model for weather prediction. It uses axial attention to capture long-range dependencies. Axial attention decomposes attention layers over images into row-attention and column-attention in order to save memory and computation.
https://ai.googleblog.com/2020/03/a-neural-weather-model-for-eight-hour.html
https://arxiv.org/abs/1912.12180
Abstract:
Weather forecasting is a long standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at the high spatial resolution of 1 km2 and at the temporal resolution of 2 minutes with a latency in the order of seconds. MetNet takes as input radar and satellite data and forecast lead time and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate the global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States.
Authors: Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, Nal Kalchbrenner
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. So what you're looking at here is a weather forecast model. Specifically, the very top row is a new weather forecast model called MetNet by Google Research. So the goal of weather prediction is pretty simple: you want to know what the weather is going to be in the future. Specifically here, you want to know precipitation rates. And so this is a new work that uses a neural network instead of physical models in order to predict precipitation. So in the middle here, you see the ground truth of what really happened at that particular time. You see precipitation rates in red here moving across the country. Now the bottom there is a physical model. And as far as I understand it, physical models have been used so far to make weather predictions, which basically means that you simulate these rain clouds and their movement across the country. You do a physical simulation, like a particle simulation type of thing, and then that allows you to predict, and then you run that maybe multiple times, and you get an idea of the kind of distribution you're going to get. Now what MetNet does is it simply uses a neural network to predict the outcome directly. So there is no physical simulation involved. Right. There is just a neural network that takes as input what the situation is now, and maybe over a stretch of time, and then you ask it: please make a prediction eight hours from now, or something. And then MetNet will make that prediction and it will just output it, like, you know, snap, no physical simulation needed. And you also see here that MetNet outputs things in kind of a cloudy way, in a probabilistic way, in one forward pass; you don't need to run it multiple times. But we'll get to that. On the bottom here, you see the measurement. So the axis is F1. F1 is kind of the overlap of how well you're able to predict precipitation. And you see here that MetNet is above the HRRR baseline for most of this time, up to 480 minutes into the future. Right. Which is eight hours. All right. So the paper is the following. It's called MetNet, a neural weather model for precipitation forecasting. And I'm not going to read all the names here. The main corresponding authors are Casper Kaae Sønderby and Nal Kalchbrenner, and it's a team of Google Research. So specifically, they use the input of these two things here. So one is this GOES-16 satellite data, which is what you see here on the left, and the precipitation rates are depicted here on the right. So you want to take these things as input into your model. Now how do you do that? Of course, we're going to build a neural network, and this is the architecture they come up with. So on the bottom here, they feed in the data, and they feed in the data in 15-minute intervals from 90 minutes into the past. So you have to imagine it like this. So there's a timeline. Let me use a little bit of a finer pen. So there's a timeline, and you, let's say, are here. This is now. And then here in the future, this is maybe one hour into the future. This is your target. Right. This is where you're looking out to. You would like to know what the precipitation is going to be one hour from now. What MetNet does is it takes an input, and specifically, it takes the last 90 minutes before now as an input, and it samples it at a frequency of 15-minute intervals. Right. So each one of these is going to be 15 minutes, and each 15 minutes, you get like a snapshot of this entire input region.
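As quick arithmetic on the input sampling just described: 90 minutes at 15-minute steps gives seven snapshots if the current frame is included (how the endpoints are handled is an assumption here, not something stated precisely in the talk):

```python
snapshot_offsets = list(range(-90, 1, 15))  # minutes relative to "now"
print(snapshot_offsets)       # [-90, -75, -60, -45, -30, -15, 0]
print(len(snapshot_offsets))  # 7 input snapshots
```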
Now the input region, if I can jump back here to the website for a second, they show it. What the input region is: if you want to predict in the middle of the small square, the input region is actually the entire 1,024 square kilometers around it. So it's a very big input, though the actual region you consider is the inner 64 square kilometers. But you take in information from the big region. And the main point of the paper, I believe, is how to do that. Right. So each 15 minutes, you take a snapshot, and these are these snapshots here on the bottom. So you have to imagine that for every 15 minutes, there's a stack of these inputs. So what are these inputs? These inputs are some kind of features that you have. So there is the target time, which in this case would be this one hour here. There is the month, day, and hour, which is important for weather prediction, right, so the time of year, time of day, and so on. Longitude and latitude are probably pretty important. An elevation map is probably pretty important. So you can see these are all maps. Now this is how you encode things here: since it's a neural network, you know, all of these things must be of the same dimensions. So if you have 256 dimensions here, and probably 256 dimensions here, then all of these things must be of the same dimension. And if you want to give a feature such as the target time, which in this case, let's say, is one hour, you just put here: one hour. What's one hour? Let's say 60 minutes. So you just put the number 60 here, 60, 60, 60, 60, 60, 256 times 256 times. So this is how you encode features. It's pretty primitive, but it turns out it works best if you do it this way. All right. So you have these planes, and some, as I said, are just features such as the target time, month, day, and hour, and so on. Elevation, I guess, is a map, like an elevation map of the region you consider. And this corresponds now to these 64 kilometers times 64 kilometers here. And that's exactly what these center crops are here. So this center crop thing, this plane here, is this 64 by 64 region. That's this plane here. And also, that's for the precipitation; the GOES data, that's this thing here. Now we also have these downsampled things, which are the 1,024 kilometers. So this here and this here, these are the 1,024 square kilometer patches, but they are downsampled. So everything is downsampled, I guess, to 256 by 256 pixels. So you don't really take into account every nuance of that very big input, but you do downsample it so you can get the big picture of the outer frame, and in the inner frame, you take it in a much higher resolution in order to get the details. All right, so you stack all of this up into a big tensor and then you feed it into a spatial downsampler, which, as I have read, is just a convolutional neural network. Right. So this is your typical image processing pipeline. You do this for each of these stacks, right, and then what you get out of it is a lower-size representation right here. So you get these representations, and then you let a temporal encoder run over them. What does a temporal encoder do? This in particular is a convolutional LSTM. And if you already know what an LSTM is, a convolutional LSTM is nothing more than an LSTM that has convolutional layers as intermediate layers.
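Looping back to the "feature as a constant plane" encoding described above before continuing with the temporal encoder, here is a minimal NumPy sketch. The sizes, the particular feature values, and the use of random arrays as stand-ins for real radar and satellite data are all illustrative assumptions, not the authors' code:

```python
import numpy as np

H = W = 256  # spatial resolution of each input plane

def scalar_plane(value):
    # A scalar feature (e.g. target lead time in minutes) becomes a constant H x W plane.
    return np.full((H, W), float(value), dtype=np.float32)

# Stand-ins for the real spatial inputs of one 15-minute time slot:
precip_center_crop = np.random.rand(H, W).astype(np.float32)  # high-resolution center crop
goes_downsampled   = np.random.rand(H, W).astype(np.float32)  # downsampled wide-context patch
elevation_map      = np.random.rand(H, W).astype(np.float32)

one_time_slot = np.stack([
    precip_center_crop,
    goes_downsampled,
    elevation_map,
    scalar_plane(60),   # target time: predict 60 minutes ahead
    scalar_plane(4),    # month
    scalar_plane(14),   # hour of day
], axis=0)              # shape (channels, H, W); one such stack per 15-minute slot

print(one_time_slot.shape)  # (6, 256, 256)
```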
So it's pretty well suited to, for example, videos or any sort of image processing that goes over time, like this one. So the temporal encoder simply starts out here with an initial state. My pens are screwing me today. So it starts out here with an initial state. And then it simply takes each of these representations one by one and runs across time, right, each time producing a new intermediate representation of the input, until it finally reaches this final representation here. So this thing here is a single final representation of all of this input, right, of this entire time span, of all of these stacks here. Yeah, so you compress this into a single representation, first with a convolutional network to downsample each time point individually, and then with a recurrent neural network, an LSTM, to integrate the information over time; you end up with a single piece here. And then, what you do: you still retain kind of an image sort of thing here. So this representation here, you can see it in the background, maybe I should get rid of my scribbles here. This here is still sort of an image tensor, though I guess it's a hidden representation. So you couldn't really look at it, but it still has the dimensions of an image. So this here is still, I think, the same as or corresponding to these dimensions here. So this still has some spatial information, where this might be north-south here and this axis might be east-west, right. And then these are just the hidden channels, the channels of the hidden representations. Right. So what you would like to do now is basically encode information from the space around you. Let's look at one of these big pictures. What you would like to do in weather prediction, let's say you are right here, what's a good color, you are right here, right. Now, if you want to know whether this particular cloud over here is going to move in your direction, what you want to know is, for example, is there a mountain range here, right. Because then it's more probable that this cloud is maybe going to move up there. You would also want to know how this cloud here moves, right. If this cloud here moves somewhere around here, then probably this cloud down here might be pulled along with it, or something like this. So you're there, and you very much kind of want to look out in each of the directions here. And you want to incorporate kind of what's happening across the space. We're already used to convolutional networks kind of being able to do this, but in here the authors use attention to do that. So if you don't know what attention is, my most popular video is about attention, and you can do attention for images. So the way that works is that you have a series of stacked blocks of a neural network over the image. Let me draw this here. So you have an image here. And let's say it has just four pixels, right. So you have the next layer of these four pixels, right. So you have layers of this. So the next layer's four pixels, they all emit what are called queries. And queries are just vectors. So each pixel emits a single vector. Let's say this, that, that, this, right. And each of the lower layer's pixels emits what is called a key. This, this, this, this, this. And now the keys and the queries are routed together based on their inner product. So these two would be routed together. This would probably be routed here. This as well. This would probably be routed here.
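Here is a minimal sketch of that query/key routing written out as plain dot-product attention over pixels. The projection matrices, the dimensions and the tiny four-pixel image are all made up for illustration; this is the generic mechanism, not the exact layer used in MetNet.

```python
import torch
import torch.nn.functional as F

def pixel_attention(x: torch.Tensor, d: int = 32) -> torch.Tensor:
    """Full self-attention over pixels. x has shape (N, C): N pixels, C channels each."""
    N, C = x.shape
    Wq, Wk, Wv = (torch.randn(C, d) for _ in range(3))   # stand-in projection matrices
    q, k, v = x @ Wq, x @ Wk, x @ Wv                      # queries, keys, values: (N, d) each
    scores = q @ k.T / d ** 0.5                           # (N, N): every pixel against every pixel
    routing = F.softmax(scores, dim=-1)                   # where each query sends its attention
    return routing @ v                                    # (N, d): aggregated information per pixel

img = torch.randn(4, 8)        # the toy "four pixel" image from the explanation, 8 channels each
out = pixel_attention(img)
print(out.shape)               # torch.Size([4, 32])
```

Note the (N, N) score matrix: that is exactly the quadratic cost the explanation runs into next, once N is the number of pixels of a real image.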
So in effect, each of the pixels of the higher layer can look at specific pixels of the lower layer. Now you can imagine this is exactly what we want here, in that if there is a mountain range here, we might be interested in that. So we'll be able, from our point here, to specifically attend to that location using attention, right. So the authors here basically build a stacked model of attention layers. That's what's happening in the third part here. This attention is there in order to incorporate long-range dependencies. As in the example with the mountain range, this might be far away, but it might actually influence your weather very much. So the attention is there to incorporate these long-range dependencies, but the problem with attention is, as you saw in the example, each of these pixels can attend to each of the pixels in the lower layer. So what you'd end up with: each can attend to that, this can attend to each, this can attend to each. You'll end up with 16 connections; you can't even draw them all. So you'll end up with 16 connections here, and in general, if you have D pixels, you will end up with a D squared number of things you need to calculate, right. And now of course we have images, so generally we'll think of D by D pixels. Now we have D by D pixels, and that quantity squared is the number of things we need to calculate. This quickly gets too much. So in, for example, MNIST, you have 28 by 28 pixel images. That is 784 pixels or something, I don't quite remember. But you'll have to calculate this squared many connections between things. This simply gets impossible pretty quickly, especially if you scale up the images and then have some channels in here as well. So attention for image processing has been lagging a bit compared to natural language processing. In natural language processing you usually have maybe 500 tokens or something; in images you have many more. So attention is much more expensive, so you can't really do it on current hardware. Now this paper uses something called axial attention. Axial attention is kind of the trick of how to make this attention happen for images. And for that, I want to switch over to this paper. It's called Axial Attention in Multidimensional Transformers, by some of the same authors, so Jonathan Ho and Nal Kalchbrenner, also of Google Brain and UC Berkeley, and they proposed this axial transformer. Now they originally proposed axial attention for auto-regressive models. If you know transformers, they also started by making auto-regressive models, so language modeling, and so on. But we can decouple the axial attention from the auto-regressivity of these models. So I'm not going to talk about auto-regressive models here, just axial attention. So what is axial attention? It's pretty simple actually. And I want to start by talking about convolutions. So what does a convolution do? Let's just take a one-dimensional image, which is pretty boring, but let's say it has these eight pixels here. So this is an image. It just has one row of eight pixels. What do I do when I run a convolutional filter across that? This is the lower layer, and now this is the next layer that is produced by a convolution. So for each of the pixels in the next layer, what I can do with a convolutional layer is look at its neighbors in the lower layer, right? So these three would be part of that. And then I go on to this one, and again, I look at its neighbors, these three, right?
I might have done this in a different color. And then I look at this one, and it can look at itself and its neighbors, right? So a convolution is pretty smart, and then of course in the next layer, that repeats. Now, if you think about it, what's the difference between doing this and a fully connected layer? So if I have a fully connected layer, you know, a classic neural network fully connected layer, then this pixel here would incorporate information from all of the pixels here, right? And this pixel here would incorporate information from all the pixels. Now, why might this be better? Because, you know, the information that I want here for this pixel might depend on this pixel over here. So I might benefit from a connection over there, or it might benefit from, sorry, from this pixel here, right? Which you can't reach. And with a convolutional network, I can't do that. Why, then, are convolutional networks preferable? Because the convolutional network can do the same thing across multiple layers, right? So let's assume, again, that this pixel here needs information from this pixel right here. And as you can see, in just one layer, it can only get information from those, right? But now take the next layer. So the same pixel here, it can attend to these three, right? Now, these three can each in turn attend to their neighbors, right? And I'm not going to draw everything, but the receptive field for this pixel here will end up being all of this, right? Now, we still don't have our desired pixel in here, but if we just go one layer more, ta-da, then we reach this pixel right here, in a different color, this pixel right here, right? The receptive field increases across the layers, because it's always incorporating information from downstream, and the downstream, again, incorporates information from its downstream. So eventually, you can aggregate the same information. So instead of having a single layer with all of these connections, we have convolutional layers, which seem like a worse idea because they can only attend to fewer things, but across the layers, they actually can do the same thing, right? And that turns out to be a huge advantage of these convolutional layers, and that's why convolutional layers are used for image processing and not multi-layer perceptrons. So the same exact thing happens with axial attention, just in a different form. It is a bit poorly drawn here, I believe, but this is how you have to imagine it. As before, right? This pixel, the red pixel here, if I just have a normal transformer layer, the red pixel can attend to all of the other pixels in the image, right? That's basically it, and each of the pixels can do that. So that's your D-squared computation right here. Now, what we want to do is, in a convolutional layer, what we would do is, okay, you can only attend to your neighbors, and then in the next layer, the neighbors can attend to their neighbors, and thereby you go out and out. In axial attention, you say, okay, this thing can only attend to its row and its column, right? That's it. You can only do attention over your row and your column, and I believe they don't even do it at the same time. So in one layer, you can attend to the row you're in, and in the other, you can attend to the column you're in. Now, let's see how the same thing happens as for a convolutional layer. Basically, if the red pixel needs access to information in this green pixel, how does it do that?
So in the first layer, it can attend to its row and its column, right? And so can every other pixel, including, of course, this square here, let's say, which can also attend to its row and its column, and its row happens to include the green one, right? So in layer one, this red square here gets information from the green square via row attention, right? And then in layer two, our red square of interest can now row-attend to this other red square here. So they get connected in layer two. Sorry, I didn't want to draw that. So you see that within just two layers, we've transferred information from the green square via this red square to that red square. So, in the same way as with a convolution, you can replace the long-range arbitrary dependencies between pixels by simply having multiple layers of restricted dependencies. The same goes for this axial attention. So you can replace the arbitrary attention, right? You can replace that by a two-step process, where you first transfer information via the column and then transfer it via the row. It's a bit like, you know, in chess, you have a queen that can move in any direction, including diagonally. And then if you just have a rook, you kind of need to do two moves. So the queen is like the full attention, and the rook is the multi-layer axial attention. They can achieve the same thing, you just need more layers. But as a trade-off, you get a huge saving in the required memory and computation, right? So they stress that, you know, you can represent the same distributions with the axial attention, and the trade-off is you just have to do multiple layers of it. Right? So this is axial attention. And they are now able to incorporate this into their model right here. So they have, I believe, eight blocks: four row-attention blocks, you see this right here, and four column-attention blocks in their model. And finally, they output this distribution here across their region of interest. Now this again is, I believe, at this 64 by 64 resolution. So you can see how they aggregate information across the 64 by 64 grid using this axial attention, and then that makes their prediction for this one hour. So this is this. All right. So this was a long way. So, recap. They have 15-minute snapshots of this input data, along with some features. They use a spatial downsampler, which is a CNN, on each of them individually. Then they use a convolutional LSTM to encode this across time, to end up with a single representation here at the end. Then they use axial attention in order to aggregate information across the spatial dimensions. They do this in multiple stages. And at the end, they make a precipitation prediction, which is a distribution, as you can see here. So as an output, you directly get a distribution of results, which is also cool, because the physical simulation you have to let run many, many times in order to get a distribution of results, and this neural network can simply give you a distribution right away. That's what they say right here. So they go a bit into the architecture compared to baselines. I want to get back to what I've shown you at the beginning. And this here is just kind of the picture-book example: on the left is the ground truth, in the middle is MetNet, and on the right is a baseline method. This here is, as you can see, at two hours, at four, six, and eight.
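Before going on to the results figure, here is a minimal sketch consolidating the axial-attention idea just recapped: one block attends only along rows, the next only along columns, and alternating them lets information reach any position in the grid at a cost of roughly H·W·(H+W) instead of (H·W)². Using PyTorch's built-in multi-head attention and these particular sizes are my assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AxialBlock(nn.Module):
    """Self-attention restricted to one axis of a (B, C, H, W) feature map."""
    def __init__(self, channels: int, heads: int = 4, axis: str = "row"):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.axis = axis

    def forward(self, x):
        B, C, H, W = x.shape
        if self.axis == "row":      # every row becomes an independent sequence of length W
            seq = x.permute(0, 2, 3, 1).reshape(B * H, W, C)
        else:                       # every column becomes a sequence of length H
            seq = x.permute(0, 3, 2, 1).reshape(B * W, H, C)
        out, _ = self.attn(seq, seq, seq)
        if self.axis == "row":
            out = out.reshape(B, H, W, C).permute(0, 3, 1, 2)
        else:
            out = out.reshape(B, W, H, C).permute(0, 3, 2, 1)
        return x + out              # residual connection back onto the feature map

# Alternate row and column attention, echoing the eight blocks mentioned above.
x = torch.randn(1, 64, 64, 64)      # (B, C, H, W) hidden representation
blocks = nn.Sequential(*[AxialBlock(64, axis=a) for a in ("row", "column") * 4])
y = blocks(x)
print(y.shape)                      # torch.Size([1, 64, 64, 64])
```

Two consecutive blocks are the "rook moves" from the chess analogy: the first hop travels along a row, the second along a column, so any pair of pixels gets connected after one row/column pair of layers.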
So you can see that MetNet gives you a distribution as an output. What I find interesting, for example, is this sample two right here. So in this sample one, you can see there is a consistent difference. And this is the forecast time, so how much in advance you want to know; this would be, in our example, one hour, but it can go up to eight hours. Here is a consistent gap in F1, which means MetNet does it better across this span of time, which is for the top sample right here. For the bottom sample though, you can see here there is a big gap at the beginning again. There's a big gap at the beginning, and then this gap gets smaller and smaller and smaller. And this, I think, might give you an indication of, let's say, the weakness of this approach, doing it with neural networks. So with neural networks you kind of rely on regularities, you kind of rely on broad-scale patterns that you can learn from the data. And this might work well as long as things are regular, which of course, across shorter time spans, things tend to be more regular, right. But if you go for longer time spans, I believe there is more of a chaos element to it; weather can be very dependent on very subtle things. And the physics simulation, which is really, you know, taking into account the actual physics, might be able to account for that much, much better. And that's why, I believe, across time here, you see that the two models get closer together. That said, MetNet of course is still on top here. But it would be interesting to forecast for longer, even though I haven't actually dug through their numerical results. But you can do that if you want. All right, so this was it for MetNet and axial attention. I hope you liked this, and bye bye. | [{"start": 0.0, "end": 8.6, "text": " Hi there. So what you're looking at here is a weather forecast model. Specifically, the"}, {"start": 8.6, "end": 15.44, "text": " very top row is a new weather forecast model called NetNet by Google Research. So the"}, {"start": 15.44, "end": 19.92, "text": " goal of weather prediction is pretty simple. You want to know what the weather is going"}, {"start": 19.92, "end": 27.6, "text": " to be in the future. Specifically here, you want to know precipitation rates. And so this"}, {"start": 27.6, "end": 35.160000000000004, "text": " is a new work that uses neural network instead of physical models in order to predict precipitation."}, {"start": 35.160000000000004, "end": 42.24, "text": " So in the middle here, you see this is the ground truth of what really happened at that particular"}, {"start": 42.24, "end": 50.480000000000004, "text": " time. You see precipitation rates in red here moving across the country. Now the bottom there is a"}, {"start": 50.480000000000004, "end": 56.760000000000005, "text": " physical model. And as far as I understand that physical models have been used so far to make"}, {"start": 56.76, "end": 64.2, "text": " weather predictions, which basically means that you simulate these these rain clouds and the"}, {"start": 64.2, "end": 70.67999999999999, "text": " movement of them across the country. And you do a physical simulation like a particle simulation"}, {"start": 71.32, "end": 76.68, "text": " type of thing. And then that allows you to predict and then you run that maybe multiple times."}, {"start": 76.68, "end": 84.6, "text": " And you get an idea of the kind of distribution you're going to get. 
Now what NetNet does is it"}, {"start": 84.6, "end": 92.28, "text": " simply uses a neural network to predict the outcome directly. So there is no physical simulation"}, {"start": 92.28, "end": 99.72, "text": " involved. Right. There is just a neural network that takes as input what's the situation now and"}, {"start": 99.72, "end": 105.88, "text": " maybe over a stretch of time. And then you ask it, please make a prediction in eight hours or"}, {"start": 105.88, "end": 115.24, "text": " something. And then the NetNet will make that prediction and it will just output it like, you"}, {"start": 115.24, "end": 122.28, "text": " know, snap no physical simulation needed. And you also see here that NetNet outputs things in"}, {"start": 122.28, "end": 129.48, "text": " kind of a cloud way in a probabilistic way. In one forward pass, you don't need to run it"}, {"start": 129.48, "end": 137.39999999999998, "text": " multiple times. But we'll get to that. On the bottom here, you see the measurement. So the axis is"}, {"start": 137.39999999999998, "end": 147.0, "text": " F1. F1 is kind of the the overlap of the how well you're able to predict a precipitation. And you"}, {"start": 147.0, "end": 158.44, "text": " see here the NetNet is above the HRR baseline for most of this time up to 480 minutes into the"}, {"start": 158.44, "end": 167.72, "text": " future. Right. Which is eight hours, I believe. All right. So the paper is the following. It's"}, {"start": 167.72, "end": 174.28, "text": " called NetNet, a neural weather model for precipitation forecasting. And I'm not going to read all"}, {"start": 174.28, "end": 181.0, "text": " the names here. The main corresponding authors are Casper Case Underby and now Calc Brenner."}, {"start": 181.0, "end": 193.64, "text": " And it's a team of Google research. So specifically, they use the input of these two things here."}, {"start": 193.64, "end": 203.96, "text": " So one is this goes 16, which is what you see here on the left. And the precipitation rates are"}, {"start": 203.96, "end": 213.72, "text": " here depicted on the right. So you want to take these things as input into your model. Now how do"}, {"start": 213.72, "end": 220.28, "text": " you do that? Of course, we're going to build a neural network. And this is the architecture they come"}, {"start": 220.28, "end": 229.24, "text": " up with. So on the bottom here, they feed in the data. And they feed in the data in 15 minute"}, {"start": 229.24, "end": 234.68, "text": " interval from 90 minutes into the past. So you have to imagine it like this. So there's a timeline."}, {"start": 235.88, "end": 242.28, "text": " Use a little bit of a finer thing. So there's a timeline. And do you let's say are here? This is now."}, {"start": 243.24, "end": 251.16000000000003, "text": " And then here in the future, this is maybe one hour into the future. This is your target. Right."}, {"start": 251.16000000000003, "end": 255.8, "text": " This is your here and you're looking out. You would like to know what's the"}, {"start": 255.8, "end": 263.40000000000003, "text": " the precipitation going to be in one hour from now. What magnet does is it takes a so-called"}, {"start": 264.28000000000003, "end": 273.16, "text": " takes an input. And specifically, it takes the last 19 minutes before now as an input. And it"}, {"start": 273.16, "end": 281.8, "text": " samples it in frequencies of 15 minute intervals. Right. 
So each one of these is going to be 15 minutes."}, {"start": 281.8, "end": 293.88, "text": " And each 15 minutes, you get like a snapshot of this entire of the input region. Now the input"}, {"start": 293.88, "end": 303.08000000000004, "text": " region, if I can jump back here to the website for a second, they show it. What the input region"}, {"start": 303.08000000000004, "end": 308.76, "text": " is, the input region, if you want to predict in the middle of the small square, the input region"}, {"start": 308.76, "end": 317.32, "text": " is actually the entire 1224 square kilometers around it. So it's very big input. Though the"}, {"start": 317.32, "end": 325.64, "text": " actual region you consider is the inside 64 square kilometers. But you take an information"}, {"start": 325.64, "end": 330.36, "text": " from the big region. And the main point of the paper, I believe, is how to do that."}, {"start": 330.36, "end": 338.76, "text": " Right. So each 15 minutes, you take a snapshot. And these are these snapshots here on the bottom."}, {"start": 338.76, "end": 345.64, "text": " So these are, and you have to imagine in here every 15 minutes, there's a stack of these inputs."}, {"start": 345.64, "end": 352.52000000000004, "text": " So what are these inputs? These inputs are some kind of features that you have. So there is the"}, {"start": 352.52, "end": 361.71999999999997, "text": " target time, which in this case would be this one hour here. There is the monthday hour, which is"}, {"start": 361.71999999999997, "end": 367.0, "text": " important for weather prediction, right. So the time of year, time of day, and so on, on std."}, {"start": 367.0, "end": 373.79999999999995, "text": " latitude is probably pretty important. Elevation map is probably pretty important. So these, you can see"}, {"start": 374.44, "end": 381.96, "text": " these are all maps. Now sometimes this is how you encode things in these. Since it's a neural"}, {"start": 381.96, "end": 387.56, "text": " network, you know, all of these things must be of the same dimensions here. So if you have 256"}, {"start": 387.56, "end": 394.91999999999996, "text": " dimensions here, and probably 256 dimensions here, then all of these things must be of the same"}, {"start": 394.91999999999996, "end": 400.59999999999997, "text": " dimension. And if you want to give a feature such as the target time, which in this case,"}, {"start": 400.59999999999997, "end": 407.71999999999997, "text": " let's say it's one hour, so you just put here one hour, what's one hour? Let's say 60 minutes."}, {"start": 407.72, "end": 415.96000000000004, "text": " So you just put the number 60 here, 60, 60, 60, 60, 60, 256 times, and 256 times, 256,"}, {"start": 415.96000000000004, "end": 428.6, "text": " times 256, sorry, 265 times. Now 56. I'm confusing with German. So this is how you encode features."}, {"start": 428.6, "end": 435.0, "text": " It's pretty primitive, but it turns out it works the best if you do it this way. All right. So you"}, {"start": 435.0, "end": 440.12, "text": " have these planes. And some, as I said, are just features such as the target time, month, day,"}, {"start": 440.12, "end": 448.44, "text": " and hour, and so on. Elevation, I guess, is a map. Is like an elevation map of the region you"}, {"start": 448.44, "end": 456.52, "text": " consider. And this corresponds now to this, these 64 kilometers times 64 kilometers here."}, {"start": 458.52, "end": 462.84, "text": " And that's exactly what these center crops are here. 
So this center crop thing,"}, {"start": 462.84, "end": 474.44, "text": " that now this thing here, this plane, sorry, is this 64 by 64 region. That's this plane here."}, {"start": 475.47999999999996, "end": 483.64, "text": " And also the, that's for the precipitation, the GOES, that's this thing here. Now we also have"}, {"start": 483.64, "end": 497.24, "text": " these down sampled things, which these are the 1,024 kilometers. So this here and this here,"}, {"start": 498.36, "end": 506.52, "text": " these are the 1,024 square kilometer patches, but they are down sampled. So everything is"}, {"start": 506.52, "end": 515.56, "text": " down sampled, I guess, to 256 by 256 pixels. So you don't really take into account every nuance of"}, {"start": 515.56, "end": 522.76, "text": " that very big of that very big input, but you do down sample it. So you can get the big picture"}, {"start": 522.76, "end": 529.88, "text": " of the outer frame. And in the inner frame, you take it in a much higher resolution in order to"}, {"start": 529.88, "end": 537.64, "text": " get the details. All right, so you stack all of this up into a big tensor and then you feed it"}, {"start": 537.64, "end": 547.4, "text": " into here into a spatial down sampler, which I guess, no, I have read, is a, some just a convolutional"}, {"start": 547.4, "end": 553.8, "text": " neural network. Right. So this is your typical image processing pipeline. So you do this for each"}, {"start": 553.8, "end": 563.9599999999999, "text": " of these stacks, right. And then what you get out of it is a, is a, is a lower size representation"}, {"start": 563.9599999999999, "end": 569.88, "text": " right here. So you get these representation. And then you let a temporal encoder run over it."}, {"start": 569.88, "end": 576.5999999999999, "text": " What does a temporal encoder do? This in particular is a convolutional LSTM. And if you already know"}, {"start": 576.6, "end": 585.4, "text": " what an LSTM is, a convolutional LSTM is nothing more than an LSTM that has as intermediate layers,"}, {"start": 585.4, "end": 593.5600000000001, "text": " convolutional layers. So it's pretty suited to do, for example, videos or any sort of image processing"}, {"start": 593.5600000000001, "end": 601.88, "text": " that goes over time like this one. So the temporal encoder simply starts out here with an initial state."}, {"start": 601.88, "end": 609.0, "text": " My pens are screwing me today. So it starts out here with an initial state. And then it simply"}, {"start": 609.0, "end": 617.8, "text": " inputs each of these representations takes them one by one runs across time, right. And each time"}, {"start": 617.8, "end": 625.88, "text": " producing a new intermediate representation of the, of the input until it finally reaches this"}, {"start": 625.88, "end": 635.64, "text": " here, final representation. So this thing here is a single final representation of all of this"}, {"start": 635.64, "end": 642.6, "text": " input, right, of, of this entire, of this entire time span of all of these stacks here."}, {"start": 645.64, "end": 652.6, "text": " Yeah, so you compress this into a single input with first a convolutional network to down sample"}, {"start": 652.6, "end": 659.96, "text": " each time point individually. And then with a recurrent neural network, an LSTM to integrate"}, {"start": 659.96, "end": 666.52, "text": " the information over time, you end up with a single piece here. 
And then what you do,"}, {"start": 667.48, "end": 673.88, "text": " so you still, here you still retain kind of an image sort of thing. So this representation here,"}, {"start": 675.0, "end": 679.64, "text": " you can see it in the background, maybe I get a thama scribbles here."}, {"start": 679.64, "end": 687.48, "text": " This here is still sort of an image tensor, though I guess it's a hidden representation."}, {"start": 687.48, "end": 695.48, "text": " So you couldn't really look at it, but it is, it still has dimensions of images. So this here"}, {"start": 696.52, "end": 704.84, "text": " is still I, I think the same or corresponding to these dimensions here. So this has still"}, {"start": 704.84, "end": 712.52, "text": " has some spatial information where this might be north south here in this axis might be east and"}, {"start": 712.52, "end": 720.12, "text": " west, right. And then these are just the hidden channels, the channels of the hidden representations."}, {"start": 720.12, "end": 733.72, "text": " Right. So what you would like to do now is, is to basically encode information from the space"}, {"start": 733.72, "end": 742.84, "text": " around you. If you look at, let's look at one of these, one of the big pictures. What you would"}, {"start": 742.84, "end": 750.84, "text": " like to do in weather prediction, let's say you are right here, what's a good color, you are right"}, {"start": 750.84, "end": 758.6, "text": " here, right. Now, if you want to know if this particular cloud over here is going to move to your"}, {"start": 758.6, "end": 768.6800000000001, "text": " direction, what you want to know is, for example, is there a mountain range here, right. Because then"}, {"start": 768.68, "end": 775.56, "text": " it's more probable that this cloud is going maybe to move up there. You would also want to know"}, {"start": 775.56, "end": 784.04, "text": " how this cloud here moves, right. If this cloud here moves somewhere here around, then it's"}, {"start": 784.04, "end": 791.0799999999999, "text": " probably this cloud down here might be pulled with it or something like this. So you're very much,"}, {"start": 791.08, "end": 799.64, "text": " sorry, you're there. You're very much kind of want to look out into each of the directions here."}, {"start": 800.2800000000001, "end": 807.8000000000001, "text": " And you want to incorporate kind of what's happening across the space. We're already used to"}, {"start": 807.8000000000001, "end": 816.0400000000001, "text": " kind of convolutional networks being able to do this, but in here the authors use attention to"}, {"start": 816.04, "end": 822.68, "text": " do that. So if you don't know what attention is, my most popular video is about attention and you"}, {"start": 822.68, "end": 832.12, "text": " can do attention for images. So the way that works is that you have a series of images of stacked"}, {"start": 832.12, "end": 839.0799999999999, "text": " blocks of an neural network. Let me draw this here. So you have an image here. And let's say it has"}, {"start": 839.08, "end": 846.9200000000001, "text": " just four pixels, right. So you have the next layer of these four pixels, right. So you have layers"}, {"start": 846.9200000000001, "end": 853.72, "text": " of this. So the next layers of the four pixels, they all emit what are called queries. And queries"}, {"start": 853.72, "end": 864.36, "text": " are just vectors. So each pixel emits a single vector. 
Let's say this, that, that, this, right."}, {"start": 864.36, "end": 873.8000000000001, "text": " And each of the lower layers emits what is called a key. This, this, this, this, this. And now the"}, {"start": 873.8000000000001, "end": 879.48, "text": " keys and the queries are routed together based on their inner product. So these two would be"}, {"start": 879.48, "end": 884.6800000000001, "text": " routed together. This would probably be routed here. This is well. This would probably route it"}, {"start": 884.6800000000001, "end": 894.12, "text": " here. So what in effect each of the pixels of the higher layer can look at specific pixels of"}, {"start": 894.12, "end": 901.16, "text": " the lower layer. Now you can imagine this is exactly what we want here in that. If there is a"}, {"start": 901.16, "end": 909.4, "text": " mountain range here, and we might be interested in that. So we'll be able from our, from our point"}, {"start": 909.4, "end": 918.6800000000001, "text": " here to specifically attend to that location using, using attention, right. So the authors here"}, {"start": 918.68, "end": 926.04, "text": " build basically a stacked model of attention layers. That's what's happening in the third part here."}, {"start": 927.16, "end": 935.8, "text": " This is the attention is in order to incorporate long range dependencies. As I made the example"}, {"start": 935.8, "end": 941.8, "text": " with the mountain range, this might be far away, but it might actually influence your weather very"}, {"start": 941.8, "end": 948.76, "text": " much. So the attention is to incorporate these long range dependencies, but the problem with attention"}, {"start": 948.76, "end": 958.8399999999999, "text": " is, is, as you saw in the example, each of these pixels can attend to each of the pixels"}, {"start": 958.8399999999999, "end": 966.52, "text": " in the lower layer. So what you'd end up with, so each can attend to that. This can attend to each,"}, {"start": 966.52, "end": 971.56, "text": " this can attend to each. You'll see you'll end up with 16 connections, you can't even draw them. So"}, {"start": 971.56, "end": 981.3199999999999, "text": " you'll end up with 16 connections in general. If you have D here, you will end up with a D"}, {"start": 981.3199999999999, "end": 988.4399999999999, "text": " squared number of things you need to calculate, right. So if this here, and now of course we have"}, {"start": 988.44, "end": 1000.12, "text": " images, so generally we'll think of D by D pixels. Now we have D by D pixels, and that thing squared"}, {"start": 1000.7600000000001, "end": 1008.6, "text": " number of things we need to calculate. This quickly gets too much. So in, for example, M-ness,"}, {"start": 1008.6, "end": 1023.8000000000001, "text": " you have a 28 by 28, F-28 by 28 pixel images. This is 780, so two or something. I don't quite remember."}, {"start": 1025.16, "end": 1033.72, "text": " But you'll have to calculate this squared many connections between things. This is simply"}, {"start": 1033.72, "end": 1040.2, "text": " gets impossible pretty quickly, especially if you scale up the images and then have some channels"}, {"start": 1040.2, "end": 1049.32, "text": " in here as well. So attention for image processing has been a bit lagging compared to natural"}, {"start": 1049.32, "end": 1056.04, "text": " language processing. Natural language processing usually have maybe 500 tokens or something. Images"}, {"start": 1056.04, "end": 1061.4, "text": " you have much more. 
So attention is much more expensive, so you can't really do it on current hardware."}, {"start": 1061.4, "end": 1069.4, "text": " Now this paper uses something called axial attention. Axial attention is kind of the trick of how to"}, {"start": 1070.0400000000002, "end": 1076.2800000000002, "text": " make these, this tension happen for images. And for that, I want to switch over to this paper."}, {"start": 1076.2800000000002, "end": 1082.8400000000001, "text": " It's called axial attention in multi-dimensional transformers. I, some of the same authors,"}, {"start": 1082.84, "end": 1092.04, "text": " so Jonathan Ho and Nell Koutbrenner, also of Google Brain and UC Berkeley, and they proposed this"}, {"start": 1092.04, "end": 1100.12, "text": " axial transformer. Now they originally proposed axial attention for auto-regressive models."}, {"start": 1100.12, "end": 1107.3999999999999, "text": " If you know transformers, they also started by making auto-regressive models, so language modeling,"}, {"start": 1107.4, "end": 1116.6000000000001, "text": " and so on. But we can decouple the axial attention from the auto-regressivity of these models."}, {"start": 1116.6000000000001, "end": 1122.2, "text": " So I'm not going to talk about auto-regressive models here, just axial attention. So what is axial"}, {"start": 1122.2, "end": 1129.16, "text": " attention? It's pretty simple actually. And I want to start by talking about convolutions."}, {"start": 1129.16, "end": 1137.96, "text": " So what does a convolution do? Let's just take a one-dimensional image, which is pretty boring,"}, {"start": 1137.96, "end": 1145.4, "text": " but let's say it has these eight pixels here. So this is an image. It just has one row of eight"}, {"start": 1145.4, "end": 1153.5600000000002, "text": " pixels. What do I do when I run a convolutional filter across that? This is the lower layer,"}, {"start": 1153.56, "end": 1163.96, "text": " and now this is the next layer that is produced by a convolution. So for each of the pixels in the"}, {"start": 1163.96, "end": 1172.04, "text": " next layer, what I can do with a convolutional layer, I can look at its neighbors in the lower"}, {"start": 1172.04, "end": 1180.04, "text": " layer, right? So these three would be part of that. And then I go on to this, and again, I look at"}, {"start": 1180.04, "end": 1188.12, "text": " its neighbors at these three, right? I might have done this in a different color. And then I look at"}, {"start": 1188.12, "end": 1198.12, "text": " this, and it can look at itself and its neighbors, right? So a convolution is pretty smart,"}, {"start": 1198.12, "end": 1209.1599999999999, "text": " and then of course in the next layer, that repeats. Now, if you think what's the difference"}, {"start": 1209.16, "end": 1213.88, "text": " between doing this and a fully connected layer? So if I have a fully connected layer,"}, {"start": 1214.76, "end": 1223.96, "text": " you know, classic neural network, fully connected layer, then this pixel here would incorporate"}, {"start": 1223.96, "end": 1233.4, "text": " information from all of the pixels here, right? And this pixel here would incorporate information"}, {"start": 1233.4, "end": 1239.88, "text": " from all the pixels. Now, why might this be better? Because, you know, the information that I want"}, {"start": 1240.3600000000001, "end": 1248.68, "text": " here for this pixel might depend on this pixel over here. 
So I might benefit from a connection"}, {"start": 1248.68, "end": 1254.76, "text": " over there, or it might benefit from, sorry, from this pixel here, right? Which you can't reach."}, {"start": 1255.4, "end": 1262.0400000000002, "text": " And with a convolutional network, I can't do that. Why are then convolutional networks preferable?"}, {"start": 1262.04, "end": 1268.44, "text": " Because the convolutional network can do the same thing across multiple layers, right?"}, {"start": 1268.44, "end": 1277.6399999999999, "text": " So let's assume, again, that this pixel here needs information from this pixel right here."}, {"start": 1278.52, "end": 1286.52, "text": " And as you can see in just one layer, it can only get information from those, right?"}, {"start": 1286.52, "end": 1297.24, "text": " But now take the next layer. So the same pixel here, it can attend to these three, right? Now,"}, {"start": 1298.6, "end": 1307.56, "text": " these three can each turn attend to it their neighbors, right? And I'm not going to draw everything,"}, {"start": 1307.56, "end": 1317.1599999999999, "text": " but the resolution field for this pixel here will end up being all of this, right? Now, we still"}, {"start": 1317.1599999999999, "end": 1321.8799999999999, "text": " don't have our desired pixel in here, but if we just go one layer more,"}, {"start": 1323.56, "end": 1332.52, "text": " ta-da-da-da-da, then this pixel right here, a different color, this pixel right here,"}, {"start": 1332.52, "end": 1337.6399999999999, "text": " right? The resolution field across the layers"}, {"start": 1341.24, "end": 1348.12, "text": " increases because it's always incorporating information from downstream and the downstream,"}, {"start": 1348.12, "end": 1353.0, "text": " again, incorporates information from the downstream. So eventually, you can aggregate the same"}, {"start": 1353.0, "end": 1360.04, "text": " information. So instead of having a single layer with all of these connections, we have convolutional"}, {"start": 1360.04, "end": 1366.84, "text": " layers which seem like it worse idea because they can only do less things, attend to less things,"}, {"start": 1366.84, "end": 1376.12, "text": " but across the layers, they actually can do the same thing, right? And that turns out to be a"}, {"start": 1376.12, "end": 1380.6, "text": " huge advantage of these convolutional layers and that's why convolutional layers are used for"}, {"start": 1380.6, "end": 1389.8, "text": " image processing and not the multi-layer perceptrons. So the same exact thing happens with axial"}, {"start": 1389.8, "end": 1397.8, "text": " attention just in a different form. It is a bit poorly drawn here, I believe, but this is how"}, {"start": 1397.8, "end": 1409.6399999999999, "text": " you have to imagine it. As before, right? This pixel, the red pixel here, if I just have a normal"}, {"start": 1409.64, "end": 1418.6000000000001, "text": " transformer layer, the red pixel can attend to all of the other pixels in the image, right?"}, {"start": 1419.8000000000002, "end": 1424.2, "text": " That's the, that's basically, and each of the pixels can do that. So that's your"}, {"start": 1424.2, "end": 1431.88, "text": " de-squared computation right here. 
Now, what we want to do is, in a convolutional layer,"}, {"start": 1431.88, "end": 1437.16, "text": " what we would do is, okay, you can only attend to your neighbors and then in the next layer,"}, {"start": 1437.16, "end": 1442.52, "text": " the neighbors can attend to their neighbors and thereby you go out and out. In axial attention,"}, {"start": 1443.16, "end": 1455.5600000000002, "text": " you say, okay, this thing can only attend to its row and its column, right? That's it. You can only"}, {"start": 1455.5600000000002, "end": 1461.88, "text": " do attention to your row and your column and I believe they don't even do it at the same time."}, {"start": 1461.88, "end": 1467.64, "text": " So in one layer, you can attend to the row you're in and in the other, you can attend to the column"}, {"start": 1467.64, "end": 1473.72, "text": " you're in. Now, let's see how the same thing happens as for a convolutional layer. So in the,"}, {"start": 1475.48, "end": 1483.5600000000002, "text": " basically, how then if the red pixel needs access to information in this green pixel, how does it"}, {"start": 1483.56, "end": 1494.52, "text": " do that? So in the first layer, it can attend to its row and its column, right? And so can every"}, {"start": 1494.52, "end": 1506.6799999999998, "text": " other pixel including, sorry, including, of course, the pixel where that, so let's say, this square"}, {"start": 1506.68, "end": 1516.3600000000001, "text": " here can also attend to its row and its column and its row happens to be including the green one,"}, {"start": 1516.3600000000001, "end": 1528.8400000000001, "text": " right? So in layer one, this red square here gets information from the green square via row"}, {"start": 1528.84, "end": 1540.04, "text": " attention, right? And then in layer two now, this our red square of interest now can row attend"}, {"start": 1541.3999999999999, "end": 1549.8799999999999, "text": " to this other red square here. So they get connected in layer two. I'm sorry, I don't want that."}, {"start": 1549.88, "end": 1561.72, "text": " So you see that within just two layers, we've transferred information from the green square via"}, {"start": 1561.72, "end": 1570.44, "text": " this red square to that red square. So we can, in the same way as a convolution, you can replace"}, {"start": 1570.44, "end": 1579.24, "text": " the long range arbitrary dependencies between pixels by simply having multiple layers of"}, {"start": 1579.24, "end": 1588.52, "text": " restricted dependence. Same goes for this axial attention. So you can replace the arbitrary"}, {"start": 1588.52, "end": 1599.56, "text": " attention in layers, right? You can replace that by a two step process where you first"}, {"start": 1599.56, "end": 1608.76, "text": " transfer information via the column and then transfer it via the row. It's a bit like, you know,"}, {"start": 1608.76, "end": 1617.48, "text": " in chess, you can have a queen that can move any direction, especially diagonally. And then if you"}, {"start": 1617.48, "end": 1624.6799999999998, "text": " just have a rook, you kind of need to do two moves. So in the queen is like the full attention."}, {"start": 1624.68, "end": 1632.3600000000001, "text": " And the rook is the multi layer axial attention. They can achieve the same thing. You just need"}, {"start": 1632.3600000000001, "end": 1642.92, "text": " more layers. 
But as a trade off, you get a super, super saving in requirement of memory and"}, {"start": 1642.92, "end": 1650.3600000000001, "text": " computation, right? So they stress that, you know, kind of you can represent the same distributions"}, {"start": 1650.36, "end": 1656.52, "text": " with the axial attention. And you know, the trade off is you just have to do multiple layers of it."}, {"start": 1658.12, "end": 1665.6399999999999, "text": " Right? So this is axial attention. And they are now able to incorporate this into their model"}, {"start": 1665.6399999999999, "end": 1672.84, "text": " right here. So they have, I believe, eight blocks. So four row attention. You see this right here"}, {"start": 1672.84, "end": 1682.84, "text": " and four column attention blocks in their model. And finally, they output a, they output this"}, {"start": 1682.84, "end": 1691.48, "text": " distribution here across their region of interest. Now this again is your, I believe, this 64 by 64"}, {"start": 1693.1599999999999, "end": 1699.1599999999999, "text": " resolution. So you can see how they kind of aggregated information across the 64"}, {"start": 1699.16, "end": 1707.48, "text": " using this axial attention. And then that makes their prediction in in this one hour. So this"}, {"start": 1708.0400000000002, "end": 1716.68, "text": " is this. All right. So this, this was a long way. So recap. They have 15 minute snapshots of"}, {"start": 1716.68, "end": 1723.4, "text": " these, is this input data across along with some features. They use a spatial down sampler,"}, {"start": 1723.4, "end": 1730.68, "text": " which is a CNN on each of them individually. Then they use a convolutional LSTM to encode this"}, {"start": 1730.68, "end": 1739.24, "text": " cross time to end up with a single representation here at the end. Then they use axial attention"}, {"start": 1739.24, "end": 1747.3200000000002, "text": " in order to aggregate information across the spatial dimensions. They do this in multiple stages."}, {"start": 1747.32, "end": 1757.0, "text": " And at the end, they make a participation prediction, which is, is a distribution, as you can see"}, {"start": 1757.0, "end": 1764.9199999999998, "text": " here. So as an output, you directly get a distribution of results, which is also cool because the"}, {"start": 1764.9199999999998, "end": 1770.9199999999998, "text": " physical simulation, you have to let it run many, many times in order to get a distribution of"}, {"start": 1770.92, "end": 1778.8400000000001, "text": " results. And this neural network can simply give you a distribution right away. That's what they say"}, {"start": 1779.5600000000002, "end": 1787.88, "text": " right here. So they go bit into the architecture compared to baseline. I want to get back to what"}, {"start": 1787.88, "end": 1793.24, "text": " I've shown you at the beginning. And this here is just the picture kind of the picture book."}, {"start": 1793.24, "end": 1801.48, "text": " Example also left is the ground truth in the middle is metnet and on the right is a baseline method."}, {"start": 1802.2, "end": 1810.6, "text": " This here is in, as you can see, in two hours in four, six, and eight. So you can see the metnet"}, {"start": 1810.6, "end": 1819.24, "text": " gives you as an output distribution. What I find interesting for example is this sample two right"}, {"start": 1819.24, "end": 1826.52, "text": " here. So in this sample one, you can see there is a consistent difference. 
And this is the forecast"}, {"start": 1826.52, "end": 1831.8, "text": " time. So how much in advance you want to know this would be a was our one hour, but it can go up to"}, {"start": 1831.8, "end": 1841.56, "text": " eight hours. Here is a consistent gap in F1, which means the metnet does it better across this"}, {"start": 1841.56, "end": 1850.9199999999998, "text": " span of time, which is for the top sample right here. For the bottom sample though, you can see here"}, {"start": 1850.9199999999998, "end": 1858.52, "text": " there is a big gap at the beginning again. There's a big gap at the beginning and then this gap gets"}, {"start": 1858.52, "end": 1866.44, "text": " smaller and smaller and smaller. And this I think might give you an indication of let's say the"}, {"start": 1866.44, "end": 1872.52, "text": " weakness of this approach doing it with neural networks. So with neural networks you kind of rely"}, {"start": 1872.52, "end": 1880.52, "text": " on regularities, you kind of rely on broad scale, correct things that you can learn from the data."}, {"start": 1881.56, "end": 1888.92, "text": " And this might work well as as long as things are regular, which of course across shorter time spans"}, {"start": 1888.92, "end": 1896.3600000000001, "text": " things tend to be more regular, right. But if you go for longer time spans, I believe there is more"}, {"start": 1896.3600000000001, "end": 1904.68, "text": " of a chaos element to it, like whether it can be very dependent on very subtle things. And the"}, {"start": 1904.68, "end": 1910.1200000000001, "text": " physics simulation that is really, you know, taking to account the actual physics might be able to"}, {"start": 1910.12, "end": 1918.9199999999998, "text": " much, much better account for that. And that's why I believe across time here, you'll see you see that"}, {"start": 1918.9199999999998, "end": 1925.8799999999999, "text": " the two models get closer together. Nothing said metnet of course, still on top here."}, {"start": 1927.32, "end": 1937.08, "text": " But it will be interesting to forecast for longer even though I haven't actually dig through"}, {"start": 1937.08, "end": 1945.8, "text": " their results through their numerical results. But you can do that if you want. All right, so this was"}, {"start": 1945.8, "end": 1975.72, "text": " it for metnet and axial attention. I hope you like this and bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=wAgO2WZzjn4 | [Rant] coronavirus | A rant about toilet paper and lockdowns.
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | This video is going to be a rant. There is not really a script and I have not really thought this through, but I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus. I am not a medical expert, I don't play one on the internet, and there is absolutely no need to follow any of my advice or take anything I say as advice. I just want to talk, and maybe someone else will get a good idea out of what I say. It is a crazy world we live in. I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home. I have always thought that in a time like this, when the economy is going down, basically the thing of value would be something like Bitcoin or Ethereum or alternative things. But no, everything is going down and the actual new currency of choice is toilet paper. Everyone is going to grab the toilet paper. What a crazy world where the most trusted news source is someone like Tucker Carlson. Yeah, didn't see that one coming. Thanks Tucker for saving us. I don't know what to make of this, and I do know that this is a serious situation and you should definitely do everything you can to take care of yourself and to take care of your community. What I want to talk about is the question of what this is going to do long term. When we think about this, we often think about right now. We have an exponentially increasing number of cases. You have probably seen this. You have probably seen graphics like these where the goal is to flatten the curve. The sense behind this being that if this rises exponentially, of course at some point it will affect the entire population, so it's going to flatten out, and if you look at the number of new cases daily, it might be some curve like this. The problem is that we only have a finite capacity in our healthcare systems. So all these people are basically going to be screwed once we get to this point. Now the goal of flattening the curve is that we can take some measures to keep this curve under or at the capacity of our healthcare system. These measures vary wildly. So it is these measures that I want to talk about a bit. Now these measures range from something like social distancing, where you basically say, no big events, no large groups of people, so social distancing, and just kind of avoid contact with other people. Now of course all the CS departments of the world go like, well, this is business as usual, we've practiced for this our entire lives. So it is mildly inconvenient, but we can keep it up. And the measures go all the way to lockdown. Lockdown also comes in various forms, but in the most drastic sense it is: stay home or you'll get shot or locked up or something like this. And it is this range I want to look at. Of course, the further down this list you go, the more you're theoretically going to flatten this out; the less you do, the higher your peak is going to be. But it's not that easy, I find. If you look at the cases here, of course cases are rising exponentially, but if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out. Now you see it flattening out at something like 100k, and last I checked, China has more people than 100k. So that means not everyone is infected. Now, with a disease that infects and spreads this easily from person to person, as appears to be the case, there are two possibilities.
Either the rest of China, and China is over a billion people while this is 100k, so the entire rest of China, basically almost all of China, is asymptomatic, and the latest numbers here are that maybe 50% of cases are asymptomatic. Or the other possibility is that most of China has yet to be infected. Now, with a virus like this, if you look at the distribution, it has basically arrived everywhere in the world. So there is very, very little hope of snuffing this thing out, of actually making it stop, because what you would have to do is lock every single person down for two to three weeks, and then only a single person that doesn't keep to that can start a new outbreak. So what I fully expect to happen, if these numbers are correct and if China actually has done this successfully, so flattened this curve successfully, is that, let's say the green thing here is China, okay, they get to a point where they feel they have no new cases for a while. So they let the restrictions up, right, they remove the restrictions. There's going to be some person somewhere in some CS department that now goes outside and meets another person. And in that particular person, the virus happens to have an incubation period of 21 days instead of 14. And they're going to transmit that to two, three, four, five people. After these measures, everyone's going to be longing for social contacts and large groups. And we might gradually loosen the restrictions, but still, a new outbreak seems inevitable. So what you'll have again is a spike, and then a country might enact measures again, and so on. But I believe the world we're going to live in, if we really lock down people, if we really enforce these measures, is a world of multiple repeated seasonal peaks of this disease. And that means we are in for the long term. I don't ever want to say that we shouldn't do that, because it of course effectively reduces the number of deaths, which should be our ultimate goal here. But just know that flattening the curve once, like in these graphics here, is a bit misleading, I believe. We need to be thinking about a long-term plan here. And we're going long term; with long term, I mean months, I mean multiple years. The problem here is the people, and I want to elaborate on that. So the largest problem is the people. People aren't just machines that you can command around; people are individuals who have their own ideas. They have their own goals that they want to fulfill. Right. At some point I want to go on a vacation. This is an island with a tree. So let's talk about lockdown. Lockdown appears to be a thing that is necessary in some parts, if you ask some people. Again, I don't want to give advice on this, I just want to give some thoughts. So what you get with lockdown: you get, oh gee, it's happening. And so that's day one. Day three, you get funny YouTube videos. Everyone that is in lockdown will be like, oh, I'm stuck at home, it's so boring, already forgetting that other people have major issues with being locked down. A lot of people sitting on top of each other is going to create a lot of problems. And eventually, more and more people are going to long for this to end. And I'm not saying that, you know, the response to a virus should be fun. But what I'm saying is that people are going to break this. It is inevitable. First, some are going to break. Then more. We have a very delicate balance going on here. Right now, there's a lot of support.
A lot of people are on the side of locking things down in a lockdown. A lot of people are conscientious staying home, avoiding social contact as much as possible. But some are going to be the first ones to go over there. Some are going to break. Some are going to find excuses not to keep to it. And the problem is the harder the measures are the harder you are down here, the stronger the poll is going to be for people to go on this other side. And I guarantee you the people on social media that are shaming others the most that are yelling out the loudest for others to not break the lockdown. Either they have an extremely comfortable living at their own homes, which is an extreme privilege. Or they are the worst ones to break it to themselves to find every excuse they can why they are exempt from it. And people are going to see this. More and more people are going to be over here. And with more people over here, look, they have the sunshine. They're out and about. They're doing their things more like normal. The people over here, they're going to see this. And more and more people will be, hey, why am I keeping to this? Why am I not over there? Why can these people do that? And they'll go. And at some point the scale is going to tip and any lockdown, boring martial law and the threat of being shot, if you go outside will be ineffective. And at that point, wherever you are, the cases are going to spike. And it will be even worse than when you did nothing or as bad. So I believe that is very delicate balance that you have to strike here. Total lockdown, people aren't going to take this for a long time. And you need to think about a long time here. I don't know what the answer is. I don't know where exactly the scale of just keep part to stay home, whatever it takes is. I just think that two harsh measures can also be counterproductive. I'm very fortunate to live in Switzerland. Most of our neighbors have instituted total lockdowns. And the Swiss government has recently decided not to do so at this time. I believe with much of the same reasoning as I'm just laying out, we need to think about this long term. And people are not going to keep to a lockdown long term. And it will be worse if they don't. Now, I believe the best response to something like this is a distributed one. I believe the best response is to go to people in their networks. People usually care about people around them. Enough so that they will take responsibility into the hand. I believe you should give the people the responsibility as much responsibility as you can. And I believe the network of people, each one arranging themselves in the most pro-social way, can be the best response better than any government could do. Governments can do things such as prohibit large gatherings. Sometimes, if you don't do that, even the individual people can't do anything against that. But to actually believe in your citizens and believe in the fundamental goodness of humans and the fundamental care for other humans is a strong suit here. On the other hand, you see other governments, I have read that a city in Norway is thinking about employing a monitoring system where they track everyone's phone. And if more than a certain amount of people are in the same place, they will basically send everyone a text message saying you should disperse. While this is an effective measure, and I believe can definitely help, and it is something that you need to be very careful about. 
As we saw with 9-11, as the UNICE governments get power, they rarely let it go. As Edward Snowden finally demonstrated, if you enact something like this, you must definitely make sure that there is a time limit on it. Any government measure right now, be that spending to help the economy, which is certainly a good thing. Be this measures to increase social distancing, to prohibit public gatherings. I support this, but it must be time-limited. Otherwise, governments aren't going to let this go. Finally, I would like to come to a more global scale of long-term thinking, countries, and other countries. As you go on, you need to think about your economy. Our economies were growing, a fairly good pace until this hit, and now they're plunging. At any point, they're going to be opportunists. They're going to be personal opportunists hoarding toilet paper and hand sanitizer, and trying to sell them for marked up prices. And they're going to be country opportunists. When everything's falling down, if you're the country that locks things down now, your economy is going to fall. Eventually, though, you'll have to get back. The countries that get back sooner will be in an upswing sooner. Basically, the question is, where is the ideal point here? To leave the, to not react anymore, to let people do their thing, to get back on track. I don't know where that is, but I believe you're going to see a cold war-like situation in the world where countries are going to keep a huge other countries of not doing enough or doing too much of not playing fairly of helping to spread the virus. And I believe that it will be the case for the years to come, because what happens over the long time? Of course, right now, you can afford to not fix that pipe under your house that's broken. You can afford to not clean the, to not get the person to clean the chimney. You can afford to not get dental work done. I don't even know how to draw a tooth. Let's say this is a tooth. Probably has some peaks here. Over the long term, though, all of these things are going to break, and we need to get back to normal. And the longer a state keeps up, these measures, the worse it's going to get. Finally, we need to talk about risk people. People at risk tend to be older, tend to be ones with health issues. Think about this, if you're an old person, having health issues. You're looking at a long term. Once you realize this is not going to be over in a few weeks, what do you do? You're old, and the next year or so in lockdown mode is going to be hard for you, and for everyone. But a year, if you're that old and sick, is probably more quality life you have left than after it. So you need to be thinking, either, I'm going to survive this because I bunker in my house. Don't get the virus. But what is it worth because my other diseases will get me afterwards. Otherwise, I could be spending the quality time I have with my family, with my children, with my grandchildren. I could be spending it with my friends. And if I die, I die. It is not an easy question. But I'm absolutely sure there are people right now who are asking themselves this. If you're a government and you think about mandatory lockdowns, I do see that this is in order to save people in order to not have people walking around that spread the virus to vulnerable populations. But you need to be thinking about the people you're trying to help. Some of them would actually be on this side. I don't know what the best response is to everything here. 
I think we're just going to see, and I don't want to give advice. This is just some of the things I think. I wish everyone the absolute healthiest season they can have right now. Take care. Please think about others. Please do not make the problem worse yourself. You're a part of a network, and you can be a powerful force for good during this time. Think about long term if you're asking your government to do things. Think about what's the best situation and how we're going to get there. Thanks and stay healthy. | [{"start": 0.0, "end": 18.0, "text": " This video is going to be a rant. There is not really a script and I have not really thought this through, but I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus."}, {"start": 18.0, "end": 30.0, "text": " I am not a medical expert, I don't play it on the internet and there absolutely is no need to follow any of my advice or take anything as advice that I say."}, {"start": 30.0, "end": 37.0, "text": " I just want to talk and maybe someone else will have a good idea of what I talk."}, {"start": 37.0, "end": 50.0, "text": " It is a crazy world we live in. I would have never thought at the beginning of this year that this would be the year where everyone stays at their house and works from home."}, {"start": 50.0, "end": 65.0, "text": " I have always thought that in a time like this when the economy is going down, basically the thing of value would be something like Bitcoin or Ethereum or alternative things."}, {"start": 65.0, "end": 74.0, "text": " But no, everything is going down and the actual new currency of choice is toilet paper. Everyone is going to grab the toilet paper."}, {"start": 74.0, "end": 84.0, "text": " What a crazy world where the most trusted news source is someone like Tucker Carson."}, {"start": 84.0, "end": 91.0, "text": " Yeah, didn't see that one coming. Thanks Tucker for saving us."}, {"start": 91.0, "end": 108.0, "text": " I don't know what to make of this and I do know that this is a serious situation and you should definitely do everything you can to take care of yourself and to take care of your community."}, {"start": 108.0, "end": 116.0, "text": " What I want to talk about is the question of what is it going to do long term."}, {"start": 116.0, "end": 128.0, "text": " If we think about this, we often think about this right now. We have an exponentially increasing number of cases. You have probably seen this."}, {"start": 128.0, "end": 135.0, "text": " You have probably seen graphics like these where the goal is to flatten the curve."}, {"start": 135.0, "end": 145.0, "text": " The sense behind this being that if this rises exponentially, of course at some point it will affect the entire population."}, {"start": 145.0, "end": 152.0, "text": " So it's going to flatten out and if you look at the number of new cases daily, it might be some curve like this."}, {"start": 152.0, "end": 162.0, "text": " The problem is that we only have a finite capacity of healthcare systems. So all these people are basically going to be screwed once we get to this point."}, {"start": 162.0, "end": 172.0, "text": " Now the goal is to flatten the curve that we can take some measures to keep this curve under or at the capacity of our healthcare system."}, {"start": 172.0, "end": 179.0, "text": " These measures are varying wildly. 
So it is these measures that I want to talk about a bit."}, {"start": 179.0, "end": 197.0, "text": " Now these measures range from something like social distancing where you basically say, no big events, no groups of large people, so social distancing."}, {"start": 197.0, "end": 209.0, "text": " And just kind of avoid contact with other people. Now of course all the CS departments of the world go like, well this is business as usual."}, {"start": 209.0, "end": 223.0, "text": " We've practiced for this our entire lives. So it is mildly inconvenient but we can keep it up all the way to lockdown."}, {"start": 223.0, "end": 233.0, "text": " Lockdown comes also in various forms but the most drastic sense is stay home or you'll get shot or locked up or something like this."}, {"start": 233.0, "end": 243.0, "text": " And it is this discrepancy. Of course the more down on the curve you go, the more you're going to theoretically flatten this out."}, {"start": 243.0, "end": 252.0, "text": " The more the less you do, the higher your peak is going to be. But it's not that easy I find."}, {"start": 252.0, "end": 267.0, "text": " If you look at the cases here, of course we're exponentially rising but if you look at where the outbreak started in China, the orange curve here, you actually see the number of cases flattening out."}, {"start": 267.0, "end": 276.0, "text": " Now you see it flattening out at something like 100k and last I know China has more people than 100k."}, {"start": 276.0, "end": 286.0, "text": " So that means not everyone is infected. Now with the disease that infects this easily and spreads this easily from person to person as it appears to be the case."}, {"start": 286.0, "end": 295.0, "text": " There are two possibilities. Either the rest of China which China is over a billion people and this is 100k."}, {"start": 295.0, "end": 308.0, "text": " So the entire rest of China, basically almost all of China is asymptomatic. Which the latest numbers are here are that maybe 50% of cases are asymptomatic."}, {"start": 308.0, "end": 327.0, "text": " Or the other possibility is that most of China has yet to be infected. Now with a virus like this, if you look at the distribution, it's basically arrived everywhere in the world. So there is very, very little hope of snuffing this thing out, actually making it stop."}, {"start": 327.0, "end": 340.0, "text": " Which what you have to do is you have to lock every single person down for two to three weeks. And now only a single person that doesn't keep to that can start a new outbreak."}, {"start": 340.0, "end": 350.0, "text": " So what I fully expect to happen if these numbers are correct and if China actually has done this successfully."}, {"start": 350.0, "end": 361.0, "text": " So flattened this curve successfully is that let's say the green thing here is China is that okay they get to a point where they feel they have no new cases for a while."}, {"start": 361.0, "end": 374.0, "text": " So they let the restriction up right they remove the restriction. There's going to be some person somewhere in some CS department that now goes outside and meets another person."}, {"start": 374.0, "end": 388.0, "text": " And in that particular person here the virus happens to have an incubation period of 21 days instead of 14. 
And they're going to transmit that to two, two, three, four, five people."}, {"start": 388.0, "end": 393.0, "text": " After these measures everyone's going to be longing for social contacts and large groups."}, {"start": 393.0, "end": 400.0, "text": " And we might gradually loosen the restrictions, but still a new outbreak is inevitable. It seems."}, {"start": 400.0, "end": 423.0, "text": " So what you'll have again is a spike and then a country might enact measures again and so on. But I believe the world we're going to live in if we really lock down people if we really enforce these measures is a world of multiple repeated seasonal peaks of this disease."}, {"start": 423.0, "end": 438.0, "text": " And that means we are in for the long term. I don't I don't want to say ever that we shouldn't do that because it of course effectively reduces the number of deaths, which should be our ultimate goal here."}, {"start": 438.0, "end": 448.0, "text": " But just know that flattening the curve once like these graphics here is a bit misleading, I believe."}, {"start": 448.0, "end": 461.0, "text": " We need to be thinking about a long term plan here. And since we're going long term and with long term, I mean months, I mean multiple years with long term."}, {"start": 461.0, "end": 471.0, "text": " The problem here is the people and I want to elaborate on that. So the the largest problem are the people."}, {"start": 471.0, "end": 482.0, "text": " People aren't just machines that you can command around people are individuals to have their own ideas. They have their own goals that they want to fulfill."}, {"start": 482.0, "end": 490.0, "text": " Right. At some point I want to go on a vacation. This is an island with a tree."}, {"start": 490.0, "end": 503.0, "text": " So let's talk about lockdown lockdown. It appears to be a thing that is necessary in some parts. If you ask some people again, I don't want to give advice on this."}, {"start": 503.0, "end": 511.0, "text": " I just want to give some thoughts. So what you get with lockdown with lockdown, you get."}, {"start": 511.0, "end": 524.0, "text": " Oh, gee, it's happening. And so that's day one day three, you get funny YouTube videos. Everyone that is in lockdown will do."}, {"start": 524.0, "end": 534.0, "text": " Be like, oh, I'm stuck at home. It's so boring. Already forgetting that other people have major issues with being locked down."}, {"start": 534.0, "end": 546.0, "text": " A lot of people sitting on top of each other is going to create a lot of problems. And eventually more and more people are going to long for this to end."}, {"start": 546.0, "end": 558.0, "text": " And I'm not saying that you know that is that response to a virus should be fun. But what I'm saying is that people are going to break this. It is inevitable."}, {"start": 558.0, "end": 567.0, "text": " First, some are going to break. Then more. He have a very delicate balance going on here. Right now, there's a lot of support."}, {"start": 567.0, "end": 577.0, "text": " A lot of people are on the side of locking things down in a lockdown. A lot of people are conscientious staying home, avoiding social contact as much as possible."}, {"start": 577.0, "end": 588.0, "text": " But some are going to be the first ones to go over there. Some are going to break. 
Some are going to find excuses not to keep to it."}, {"start": 588.0, "end": 597.0, "text": " And the problem is the harder the measures are the harder you are down here, the stronger the poll is going to be for people to go on this other side."}, {"start": 597.0, "end": 609.0, "text": " And I guarantee you the people on social media that are shaming others the most that are yelling out the loudest for others to not break the lockdown."}, {"start": 609.0, "end": 615.0, "text": " Either they have an extremely comfortable living at their own homes, which is an extreme privilege."}, {"start": 615.0, "end": 628.0, "text": " Or they are the worst ones to break it to themselves to find every excuse they can why they are exempt from it. And people are going to see this. More and more people are going to be over here."}, {"start": 628.0, "end": 637.0, "text": " And with more people over here, look, they have the sunshine. They're out and about. They're doing their things more like normal."}, {"start": 637.0, "end": 651.0, "text": " The people over here, they're going to see this. And more and more people will be, hey, why am I keeping to this? Why am I not over there? Why can these people do that? And they'll go."}, {"start": 651.0, "end": 668.0, "text": " And at some point the scale is going to tip and any lockdown, boring martial law and the threat of being shot, if you go outside will be ineffective. And at that point, wherever you are, the cases are going to spike."}, {"start": 668.0, "end": 674.0, "text": " And it will be even worse than when you did nothing or as bad."}, {"start": 674.0, "end": 686.0, "text": " So I believe that is very delicate balance that you have to strike here. Total lockdown, people aren't going to take this for a long time. And you need to think about a long time here."}, {"start": 686.0, "end": 698.0, "text": " I don't know what the answer is. I don't know where exactly the scale of just keep part to stay home, whatever it takes is."}, {"start": 698.0, "end": 711.0, "text": " I just think that two harsh measures can also be counterproductive. I'm very fortunate to live in Switzerland. Most of our neighbors have instituted total lockdowns."}, {"start": 711.0, "end": 716.0, "text": " And the Swiss government has recently decided not to do so at this time."}, {"start": 716.0, "end": 730.0, "text": " I believe with much of the same reasoning as I'm just laying out, we need to think about this long term. And people are not going to keep to a lockdown long term. And it will be worse if they don't."}, {"start": 730.0, "end": 741.0, "text": " Now, I believe the best response to something like this is a distributed one. I believe the best response is to go to people in their networks. People usually care about people around them."}, {"start": 741.0, "end": 754.0, "text": " Enough so that they will take responsibility into the hand. I believe you should give the people the responsibility as much responsibility as you can."}, {"start": 754.0, "end": 765.0, "text": " And I believe the network of people, each one arranging themselves in the most pro-social way, can be the best response better than any government could do."}, {"start": 765.0, "end": 777.0, "text": " Governments can do things such as prohibit large gatherings. 
Sometimes, if you don't do that, even the individual people can't do anything against that."}, {"start": 777.0, "end": 791.0, "text": " But to actually believe in your citizens and believe in the fundamental goodness of humans and the fundamental care for other humans is a strong suit here."}, {"start": 791.0, "end": 807.0, "text": " On the other hand, you see other governments, I have read that a city in Norway is thinking about employing a monitoring system where they track everyone's phone."}, {"start": 807.0, "end": 818.0, "text": " And if more than a certain amount of people are in the same place, they will basically send everyone a text message saying you should disperse."}, {"start": 818.0, "end": 827.0, "text": " While this is an effective measure, and I believe can definitely help, and it is something that you need to be very careful about."}, {"start": 827.0, "end": 842.0, "text": " As we saw with 9-11, as the UNICE governments get power, they rarely let it go. As Edward Snowden finally demonstrated, if you enact something like this, you must definitely make sure that there is a time limit on it."}, {"start": 842.0, "end": 854.0, "text": " Any government measure right now, be that spending to help the economy, which is certainly a good thing. Be this measures to increase social distancing, to prohibit public gatherings."}, {"start": 854.0, "end": 863.0, "text": " I support this, but it must be time-limited. Otherwise, governments aren't going to let this go."}, {"start": 863.0, "end": 872.0, "text": " Finally, I would like to come to a more global scale of long-term thinking, countries, and other countries."}, {"start": 872.0, "end": 884.0, "text": " As you go on, you need to think about your economy. Our economies were growing, a fairly good pace until this hit, and now they're plunging."}, {"start": 884.0, "end": 894.0, "text": " At any point, they're going to be opportunists. They're going to be personal opportunists hoarding toilet paper and hand sanitizer, and trying to sell them for marked up prices."}, {"start": 894.0, "end": 907.0, "text": " And they're going to be country opportunists. When everything's falling down, if you're the country that locks things down now, your economy is going to fall."}, {"start": 907.0, "end": 919.0, "text": " Eventually, though, you'll have to get back. The countries that get back sooner will be in an upswing sooner. Basically, the question is, where is the ideal point here?"}, {"start": 919.0, "end": 927.0, "text": " To leave the, to not react anymore, to let people do their thing, to get back on track."}, {"start": 927.0, "end": 945.0, "text": " I don't know where that is, but I believe you're going to see a cold war-like situation in the world where countries are going to keep a huge other countries of not doing enough or doing too much of not playing fairly of helping to spread the virus."}, {"start": 945.0, "end": 959.0, "text": " And I believe that it will be the case for the years to come, because what happens over the long time? Of course, right now, you can afford to not fix that pipe under your house that's broken."}, {"start": 959.0, "end": 970.0, "text": " You can afford to not clean the, to not get the person to clean the chimney. You can afford to not get dental work done. I don't even know how to draw a tooth."}, {"start": 970.0, "end": 982.0, "text": " Let's say this is a tooth. Probably has some peaks here. 
Over the long term, though, all of these things are going to break, and we need to get back to normal."}, {"start": 982.0, "end": 991.0, "text": " And the longer a state keeps up, these measures, the worse it's going to get."}, {"start": 991.0, "end": 1002.0, "text": " Finally, we need to talk about risk people. People at risk tend to be older, tend to be ones with health issues."}, {"start": 1002.0, "end": 1008.0, "text": " Think about this, if you're an old person, having health issues."}, {"start": 1008.0, "end": 1015.0, "text": " You're looking at a long term. Once you realize this is not going to be over in a few weeks, what do you do?"}, {"start": 1015.0, "end": 1025.0, "text": " You're old, and the next year or so in lockdown mode is going to be hard for you, and for everyone."}, {"start": 1025.0, "end": 1036.0, "text": " But a year, if you're that old and sick, is probably more quality life you have left than after it."}, {"start": 1036.0, "end": 1045.0, "text": " So you need to be thinking, either, I'm going to survive this because I bunker in my house. Don't get the virus."}, {"start": 1045.0, "end": 1050.0, "text": " But what is it worth because my other diseases will get me afterwards."}, {"start": 1050.0, "end": 1057.0, "text": " Otherwise, I could be spending the quality time I have with my family, with my children, with my grandchildren."}, {"start": 1057.0, "end": 1063.0, "text": " I could be spending it with my friends. And if I die, I die. It is not an easy question."}, {"start": 1063.0, "end": 1069.0, "text": " But I'm absolutely sure there are people right now who are asking themselves this."}, {"start": 1069.0, "end": 1087.0, "text": " If you're a government and you think about mandatory lockdowns, I do see that this is in order to save people in order to not have people walking around that spread the virus to vulnerable populations."}, {"start": 1087.0, "end": 1098.0, "text": " But you need to be thinking about the people you're trying to help. Some of them would actually be on this side."}, {"start": 1098.0, "end": 1104.0, "text": " I don't know what the best response is to everything here."}, {"start": 1104.0, "end": 1112.0, "text": " I think we're just going to see, and I don't want to give advice. This is just some of the things I think."}, {"start": 1112.0, "end": 1119.0, "text": " I wish everyone the absolute healthiest season they can have right now."}, {"start": 1119.0, "end": 1125.0, "text": " Take care. Please think about others. Please do not make the problem worse yourself."}, {"start": 1125.0, "end": 1132.0, "text": " You're a part of a network, and you can be a powerful force for good during this time."}, {"start": 1132.0, "end": 1144.0, "text": " Think about long term if you're asking your government to do things. Think about what's the best situation and how we're going to get there."}, {"start": 1144.0, "end": 1171.0, "text": " Thanks and stay healthy."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=H3Bhlan0mE0 | Online Education - How I Make My Videos | Just a short overview of tools I use to make my videos.
OneNote - https://www.onenote.com
iSpring Free Cam - https://www.ispringsolutions.com/ispring-cam
Shotcut - https://shotcut.org
Slack - https://slack.com
RocketChat - https://rocket.chat
Zoom - https://zoom.us
Jitsi - https://jitsi.org
GDocs - https://www.google.com/docs/about
Piazza - https://piazza.com
CMT - https://cmt3.research.microsoft.com/About
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. So a lot of people have been asking me how I make these videos and this is of course relevant now that everyone's work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make these online education happen and I think this style of video lends itself to online education. So I've quickly go over the process of how to do this and maybe also how to run a university class online. Right, so the process is pretty simple of how I make my videos. This might not work for everyone, but it works for me. I used the Microsoft one note in order to scribble on papers basically. So the thing is in one note you have this insert thing here and you can print out a PDF onto your notebook here. Right. So the way this looks then is you'll get the PDF in your notebook and you can scribble on it with this using this draw tab here. You can choose a pen and scribble on it. You can highlight things and so on. And I do this while I record the screen. So that's pretty much all there's to it. You can then print out again this notebook and you can distribute those annotated PDF if you want. Now I'm pretty sure this is here inserted as some sort of as a an image. So I don't know about the copy pastability of the resulting product, but here you see this is a paper actually made a video about and that's basically all there's to it. It's one note. It's a free free program from Microsoft in order to do the annotating. I use in last or last last generation Microsoft Surface tablet that I got for cheap at some point. I comes with a nice pen and touch screen. So you can basically zoom around and zip around while you do these things. In order to record the screen, I use this ice spring free cam software. It might not be the best, but it does work for me. Well, and they have a cool pro edition if you need more features, but it works really well for recording your screen. You can record parts of your screen or the full screen. You can record with sound. So I use a microphone. And then I just record the sound from that with the same tool and at the end you get a video file that you can upload to YouTube. Easy as that. If I need to do some editing, which is rarely because I am lazy, I use either I move from Apple, which comes with an Apple operating system. So I have a Macbook that I run I'm on. I move is really easy to edit movies on. I don't know if there's anything on Windows where it's that easy that comes pre packaged. But if I need to do more complicated things, I use shot cut, which is an open source editor. So I believe that's available for all the platforms. And you can do fairly complicated things with shot cut if you ever need to do that. But if I just need to stitch like two or three things together, I use I move. And that's pretty much it for making and recording videos, I believe. Yeah, one note is that then in order to do a class from from online, not all people will just be able to record a video and then upload some of the things you need to do actually live. And a lot of people right now use zoom for live basically teleconferencing, but you can also do this sort of presenter mode where you present and people can do question. Of course, you can do this via YouTube streaming as well. But then it's of course, it's a kind of public on YouTube or link accessible with zoom, I believe you have more control. But of course, zoom is a proprietary solution. And with a free account, you can only get so far. 
So they limit your meetings in length. If you have more than I believe three or four people, an alternative is Jitsi, which is open source video conferencing. And the cool thing here is you can actually run your own server such that you can truly have control over everything. In order to communicate with lots of people, of course, people use Slack, but again, Slack is a proprietary service and an alternative to that would be rocket chat again, where you can run your own server. And it is fairly similar to Slack. In order to collaborate or share just general notes, of course, Google's suite of docs and sheets and so on is excellent. And for classes, especially, Piazza is a good place. You can sign up as a class. You can have the have TAs sign up as TAs. And you can have your students sign up as students. And then the students can ask questions. And then other students or the TAs can answer those questions. Basically, a bit of a forum, but you can also announce things there for your classes. It's pretty cool. And it's really geared towards online classes and it's free. So I know a lot of universities are using that right now. So if you're looking for some sort of announcement or discussion board for your class, Piazza is definitely a good place to be. And lastly, we sometimes have classes where students have to submit projects. And we actually use CMT for this because it's really neat where you can set deadlines and everything students can upload. And then you can assign reviewers to those projects, which in our case are us, the TAs. And you know, you can have meta reviews and so on. So since he's actually very good, maybe a bit of an overkill. If you just run a single class, but it has lots and lots of features. And of course, the big conferences also use CMT. So it's definitely stress tested. All right. So that was it from for my videos, or at least how I make them. I just, you know, print out the PDF, sit down for half an hour and rant about it. And that's pretty much it. And then you throw it on YouTube or distribute the file however you want. And with that, I hope I answered a little bit of these questions. And, um, yep, I always show healthy rest of the Corona season. Bye. | [{"start": 0.0, "end": 8.52, "text": " Hi there. So a lot of people have been asking me how I make these videos and this is of course relevant now that everyone's"}, {"start": 8.8, "end": 16.84, "text": " work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make these online"}, {"start": 18.04, "end": 23.96, "text": " education happen and I think this style of video lends itself to online education."}, {"start": 23.96, "end": 30.72, "text": " So I've quickly go over the process of how to do this and maybe also how to run a university class online."}, {"start": 31.44, "end": 37.68, "text": " Right, so the process is pretty simple of how I make my videos. This might not work for everyone, but it works for me."}, {"start": 37.68, "end": 41.32, "text": " I used the Microsoft one note in order to"}, {"start": 42.400000000000006, "end": 45.72, "text": " scribble on papers basically. So the thing is"}, {"start": 46.84, "end": 53.28, "text": " in one note you have this insert thing here and you can print out a PDF"}, {"start": 53.28, "end": 62.84, "text": " onto your notebook here. Right. 
So the way this looks then is you'll get the PDF in your notebook and you can"}, {"start": 62.84, "end": 68.32000000000001, "text": " scribble on it with this using this draw tab here. You can choose a pen and"}, {"start": 69.0, "end": 76.12, "text": " scribble on it. You can highlight things and so on. And I do this while I record the screen. So that's pretty much all"}, {"start": 76.12, "end": 85.12, "text": " there's to it. You can then print out again this notebook and you can distribute those annotated PDF if you want."}, {"start": 86.52000000000001, "end": 91.32000000000001, "text": " Now I'm pretty sure this is here inserted as some sort of as a"}, {"start": 92.12, "end": 98.04, "text": " an image. So I don't know about the copy pastability of the resulting product, but"}, {"start": 98.64, "end": 105.28, "text": " here you see this is a paper actually made a video about and that's basically all there's to it. It's one note."}, {"start": 105.28, "end": 115.12, "text": " It's a free free program from Microsoft in order to do the annotating. I use in last or last last"}, {"start": 115.12, "end": 122.32, "text": " generation Microsoft Surface tablet that I got for cheap at some point. I comes with a nice pen and touch"}, {"start": 122.32, "end": 129.8, "text": " screen. So you can basically zoom around and zip around while you do these things. In order to record"}, {"start": 129.8, "end": 139.8, "text": " the screen, I use this ice spring free cam software. It might not be the best, but it does work for me."}, {"start": 139.8, "end": 147.04000000000002, "text": " Well, and they have a cool pro edition if you need more features, but it works really well for recording your screen."}, {"start": 147.48000000000002, "end": 155.60000000000002, "text": " You can record parts of your screen or the full screen. You can record with sound. So I use a microphone."}, {"start": 155.6, "end": 162.16, "text": " And then I just record the sound from that with the same tool and at the end you get a video file that you can"}, {"start": 162.16, "end": 172.16, "text": " upload to YouTube. Easy as that. If I need to do some editing, which is rarely because I am lazy, I use either"}, {"start": 172.16, "end": 184.92, "text": " I move from Apple, which comes with an Apple operating system. So I have a Macbook that I run I'm"}, {"start": 184.92, "end": 191.64, "text": " on. I move is really easy to edit movies on. I don't know if there's anything on Windows where it's"}, {"start": 191.64, "end": 198.88, "text": " that easy that comes pre packaged. But if I need to do more complicated things, I use shot cut, which is an"}, {"start": 198.88, "end": 207.51999999999998, "text": " open source editor. So I believe that's available for all the platforms. And you can do fairly complicated"}, {"start": 207.51999999999998, "end": 213.04, "text": " things with shot cut if you ever need to do that. But if I just need to stitch like two or three"}, {"start": 213.04, "end": 224.16, "text": " things together, I use I move. And that's pretty much it for making and recording videos, I believe."}, {"start": 227.0, "end": 238.12, "text": " Yeah, one note is that then in order to do a class from from online, not all people will just be"}, {"start": 238.12, "end": 244.36, "text": " able to record a video and then upload some of the things you need to do actually live. 
And a lot"}, {"start": 244.36, "end": 250.92000000000002, "text": " of people right now use zoom for live basically teleconferencing, but you can also do this sort of"}, {"start": 250.92000000000002, "end": 257.4, "text": " presenter mode where you present and people can do question. Of course, you can do this via YouTube"}, {"start": 257.4, "end": 266.04, "text": " streaming as well. But then it's of course, it's a kind of public on YouTube or link accessible with"}, {"start": 266.04, "end": 273.56, "text": " zoom, I believe you have more control. But of course, zoom is a proprietary solution. And with a free"}, {"start": 273.56, "end": 279.64000000000004, "text": " account, you can only get so far. So they limit your meetings in length. If you have more than I"}, {"start": 279.64000000000004, "end": 287.64000000000004, "text": " believe three or four people, an alternative is Jitsi, which is open source video conferencing. And"}, {"start": 287.64000000000004, "end": 294.44, "text": " the cool thing here is you can actually run your own server such that you can truly have control"}, {"start": 294.44, "end": 303.8, "text": " over everything. In order to communicate with lots of people, of course, people use Slack, but"}, {"start": 303.8, "end": 311.15999999999997, "text": " again, Slack is a proprietary service and an alternative to that would be rocket chat again,"}, {"start": 311.15999999999997, "end": 322.44, "text": " where you can run your own server. And it is fairly similar to Slack. In order to collaborate or share"}, {"start": 322.44, "end": 331.88, "text": " just general notes, of course, Google's suite of docs and sheets and so on is excellent. And"}, {"start": 333.16, "end": 340.04, "text": " for classes, especially, Piazza is a good place. You can sign up as a class. You can have"}, {"start": 340.04, "end": 345.24, "text": " the have TAs sign up as TAs. And you can have your students sign up as students. And then the"}, {"start": 345.24, "end": 351.48, "text": " students can ask questions. And then other students or the TAs can answer those questions."}, {"start": 351.48, "end": 356.68, "text": " Basically, a bit of a forum, but you can also announce things there for your classes. It's"}, {"start": 356.68, "end": 364.20000000000005, "text": " pretty cool. And it's really geared towards online classes and it's free. So I know a lot of"}, {"start": 364.20000000000005, "end": 371.0, "text": " universities are using that right now. So if you're looking for some sort of announcement or"}, {"start": 371.0, "end": 378.20000000000005, "text": " discussion board for your class, Piazza is definitely a good place to be. And lastly, we sometimes"}, {"start": 378.2, "end": 386.36, "text": " have classes where students have to submit projects. And we actually use CMT for this because it's"}, {"start": 386.36, "end": 391.71999999999997, "text": " really neat where you can set deadlines and everything students can upload. And then you can"}, {"start": 391.71999999999997, "end": 398.84, "text": " assign reviewers to those projects, which in our case are us, the TAs. And you know, you can have"}, {"start": 398.84, "end": 407.32, "text": " meta reviews and so on. So since he's actually very good, maybe a bit of an overkill. If you just"}, {"start": 407.32, "end": 415.64, "text": " run a single class, but it has lots and lots of features. And of course, the big conferences"}, {"start": 415.64, "end": 422.84, "text": " also use CMT. So it's definitely stress tested. All right. 
So that was it from for my videos,"}, {"start": 423.56, "end": 429.0, "text": " or at least how I make them. I just, you know, print out the PDF, sit down for half an hour and"}, {"start": 429.0, "end": 433.64, "text": " rant about it. And that's pretty much it. And then you throw it on YouTube or distribute the"}, {"start": 433.64, "end": 442.76, "text": " file however you want. And with that, I hope I answered a little bit of these questions. And,"}, {"start": 442.76, "end": 471.88, "text": " um, yep, I always show healthy rest of the Corona season. Bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=p3sAF3gVMMA | Deep Learning for Symbolic Mathematics | This model solves integrals and ODEs by doing seq2seq!
https://arxiv.org/abs/1912.01412
https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/
Abstract:
Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
Authors: Guillaume Lample, François Charton
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Can you solve this? Well, neither can I, but Wolfram Alpha can. So this is the thing that I probably have most to thank for passing university, especially the math classes in it. If you don't know Wolfram Alpha, it is an engine. It's from the creators of Mathematica, but it is online. And it can do symbolic math. So it can integrate this expression, for example, and you'll get the solution here. And if you have the pro version, it can give you a step-by-step solution of how to get there. So this part of math is an entirely different part than we usually do with computers. Usually we do numeric math, working with actual values. But here it's about symbolic math. It's about manipulating expressions, so in this case, integrating them. So here is a paper by Facebook AI Research called Deep Learning for Symbolic Mathematics by Guillaume Lample and François Charton. These people have basically tackled the task of doing this mathematical reasoning, solving these mathematical problems in a symbolic way, using neural networks. So they start out by saying here that neural networks have a reputation for being better at solving statistical or approximate problems, that's what I meant by numeric, than at performing calculations or working with symbolic data. And in this case, they go about this differently than other people have. So let's look at how they did it. We can express symbolic mathematics in these kinds of trees. So an expression like the one up here would be expressed as this tree. So you would have the plus here, then the two, then the rest. Of course, there's an implicit bracket here. So you'd have this plus right here, the two here and the entire right-hand side here. So you can basically decompose it into trees like this or this or this. Here you also can have the differentiation operator as a symbol in there, just like any other operator. And moreover, you can basically decompose everything they have here into binary and unary nodes in a tree. What that means is, either, like a plus sign, it has two components, so a left and a right-hand side that should be added together. Or, like the cosine, it has one argument, namely the thing that it should take the cosine of, right. So a lot of people have tried going about this problem by working with these trees and basically training neural networks on them. So first they use kind of a parser to decompose such a thing into a tree like this, and then use neural networks, let's say tree-recursive neural networks or so on, to kind of make sense of the tree and solve it in a recursive manner or something like this. But that has its limitations. So what these people from Facebook AI did is they viewed it as a natural language expression problem, right. So they say, no, no, let's actually express these trees as sequences. So you can see that this mathematical expression, for example, is already a sequence, right. It's simply a sequence of tokens. But there are many different ways of expressing this. So you can say two plus three times the parentheses, you can say three times parentheses plus two. You can turn many things around, and these parentheses always make it harder, and so on. So what they do is they say, okay, let's actually go from this thing to a tree. So let's go to a tree representation, and then let's take the tree representation, because the tree representation can be normalized, and then let's put that again into a sequence representation such as this one.
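As an illustrative sketch (not the paper's actual code; the Node class and the "add"/"mul" token names are made up for illustration), the expression-to-tree-to-prefix-sequence step described above might look like this in Python for 2 + 3*(5 + 2):

```python
# Minimal sketch of the tree -> prefix-token-sequence conversion described above.
# The Node class and the token names ("add", "mul") are illustrative, not the paper's code.

class Node:
    def __init__(self, label, children=()):
        self.label = label              # operator name or leaf token, e.g. "add", "2"
        self.children = list(children)

def to_prefix(node):
    """Pre-order walk: emit the operator first, then its arguments."""
    tokens = [node.label]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens

# The expression 2 + 3 * (5 + 2) as a tree of binary "add" / "mul" nodes.
expr = Node("add", [
    Node("2"),
    Node("mul", [
        Node("3"),
        Node("add", [Node("5"), Node("2")]),
    ]),
])

print(to_prefix(expr))
# ['add', '2', 'mul', '3', 'add', '5', '2']  -> "+ 2 * 3 + 5 2", no parentheses needed
```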
This is called reverse Polish notation, and it has multiple advantages over the old expression. So let's keep that on the right-hand side here. This is the same thing, except it's what is called a prefix notation, whereas the thing on the right here is called an infix notation, because the operators, such as the plus, are always between their arguments, right, so their left-hand argument and their right-hand argument. In prefix notation, the operator is always in front of its arguments. So this operator here has its first argument here and, as a second argument, this. Right, now the cool thing is, if you express a tree like this, you can simply go and use a stack machine to solve it. So you can basically go, I would say, you can go from the right here and use, like, the two and the five and the plus. And let's do it by hand actually. This is fun. So we have plus, two, times, three. If you're a boomer like me, you remember you had to use calculators like this and couldn't use the infix notation. So you go from the right, right. You say two, five, plus. Cool, that's seven. So scratch that, put seven here. Right. So your new stack is three, two, times, this. Right. Then you go again from the right and you go seven, three, times. Okay, that's 21. Cool. 21. Scratch this. Now it's 21. Two plus 21 is 23. Fairly sure. That's the solution. Correct me if I'm wrong. But this is how you would go about solving something like this. So it is the same expression as the original one, but it doesn't use any parentheses. And it is derived from the tree, basically. So you can normalize it much more in order to find unique expressions. So what this system does is it transforms any expression into a prefix notation such as this one. Oops. And then it uses a sequence-to-sequence model in order to derive the solution. Now just how crazy is this? Right. So we would go from this thing here, right, from this thing, and the solution is 21, right. And the neural network is simply trained to do sequence to sequence, from this to that, sequence to sequence. That means it basically parses this on a token level, right, and then it outputs these tokens. So during training you simply give it the input here and you give it the output. And it's supposed to learn how to transform one into the other without you giving it any sort of mathematical ability, right. Without you telling it what a plus sign means, without you telling it this algorithm that I just told you now. This by itself is already pretty astounding, that you would try such a thing. It really transforms the string, so not the mathematical question, but the string of this into the string of that. Now they don't do it on numbers. Like, I don't think that would work as well if you were to make it kind of calculate numerical things like this. As we said, this is symbolic. So what it can do is it can, for example, integrate. So if you have an expression like, let's see, some on the bottom here. So if you had an expression such as a polynomial, here, an expression like this, right, you would like to find its integral. That is a problem. That's one of the problems we had at the beginning, right, this integral right here. You can write this in a string like we said and then derive its solution right here, and have the neural network learn to map one to the other, right, to map this to that. So the way it goes is it would map this into its tree representation. It would map this into its prefix notation.
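Before moving on, here is the right-to-left stack evaluation walked through above, written out as a small illustrative Python sketch. This is not the paper's code, and the actual model never evaluates anything numerically; it only reads and writes these tokens as text. The operand order inside the inner plus is assumed here, since either order gives the same 23.

```python
# Evaluate a prefix (operator-first) expression with a stack, scanning right to left,
# as in the walkthrough above. Illustrative only: the trained model treats these
# tokens purely as text and never computes numeric values.

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def eval_prefix(tokens):
    stack = []
    for tok in reversed(tokens):      # scan from the right
        if tok in OPS:
            left = stack.pop()        # first pop is the operator's left argument
            right = stack.pop()
            stack.append(OPS[tok](left, right))
        else:
            stack.append(float(tok))
    assert len(stack) == 1, "malformed prefix expression"
    return stack[0]

print(eval_prefix("+ 2 * 3 + 5 2".split()))   # 23.0, i.e. 2 + 3 * (5 + 2)
```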
Right. It would also map this, let's say in another color here, map this into its tree, then it would map this into its prefix notation. And then that's the training data. The training data is: take this, derive that, right. And at inference time, of course, you won't have this here. You'll simply be asked to output a sequence, just like in normal natural language tasks. You can think of machine translation, right. This thing translates problems into solutions. It's crazy. I mean, it's not technically super challenging. It's crazy that it works, or that it could work. Right. So we'll see how this actually works. They use a transformer model, which is just a classic model. If you don't know what a transformer is, I have a video called Attention Is All You Need about transformers. You can basically use them to do these kinds of tasks, to map one string into another string. So yeah. So they go into detail here of how they construct the data set and how big the problem space is, and so on. Ultimately they compare their system to Mathematica, I think, and Maple and MATLAB, which do the same things. So Mathematica, which is the kind of desktop version of Wolfram Alpha that I've shown you. Yeah, we're here. So integration is the task of integrating, let's say, these symbolic expressions. ODE order one and order two are slightly different tasks, where you're asked to solve an ordinary differential equation, which is also a task in symbolic mathematics. If you compare it to Mathematica here, they give Mathematica a limit of 30 seconds. What Mathematica will do is it will kind of search the manipulations that it knows. So the advantage of this is it can always give you, let's say, a step-by-step solution if it finds a solution, right. It will just start and will do a tree search, manipulating the expression you give it, until it reaches a satisfactory solution. But then, once it has that, it can give you a path through the tree which leads to the solution, which will give you a step-by-step solution so you can understand it. The system that Facebook designs here doesn't do that. It simply takes the input tokens, like this is the input, and it just gives you an output that is learned, so the network per se doesn't understand math. It simply learns from many, many examples how to transform these, to come up with good hypotheses. So if you compare here, Mathematica for example can integrate 84% of the things that they put into it. It's not said whether it gets it wrong or it simply times out in the rest. I would say times out, because probably Mathematica never gets it wrong, because it's an actual symbolic manipulator with defined rules. So I guess in the remaining 16% it simply times out, doesn't find a solution. Whereas this Facebook system, which they say usually finds the solution in less than a second, finds these solutions 98.4% of the time with a beam size of 1. Now what does the beam size mean? At decoding time, if you have a seq2seq model, you can always opt to do what's called beam search. So when you have an input sequence, let's actually give an example: a cat jumps. And you generate an output sequence, let's say the task is simply to continue the sentence. What you can do, and this would be what we call beam size one, or no beam search at all, is generate a single output. Or you can do what's called a beam search, in that at each step you actually generate multiple hypotheses and then keep the best ones in memory.
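The idea just described, generating multiple hypotheses and keeping the best ones, can be sketched in a few lines of Python. This is a toy illustration only: the scoring function is a dummy stand-in for the trained transformer's next-token log-probabilities, and the vocabulary is made up.

```python
# Toy beam search: at each step, extend every hypothesis with every possible next token,
# then keep only the `beam_size` highest-scoring hypotheses. `score_next` is a dummy
# stand-in for a trained model's next-token log-probabilities.

def beam_search(score_next, prefix, vocab, beam_size=3, max_len=5, end="</s>"):
    beams = [([prefix], 0.0)]                        # (token sequence, total log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            if seq[-1] == end:                       # finished hypotheses carry over
                candidates.append((seq, logp))
                continue
            for tok in vocab:
                candidates.append((seq + [tok], logp + score_next(seq, tok)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == end for seq, _ in beams):
            break
    return beams

# Dummy scorer that simply prefers to end the sentence early (illustration only).
def score_next(seq, tok):
    return 0.0 if tok == "</s>" else -1.0

for seq, logp in beam_search(score_next, "a cat jumps", ["over", "between", "swiftly", "</s>"]):
    print(" ".join(seq), round(logp, 1))
```

A beam size of 1 corresponds to the greedy decoding behind the 98.4% figure above; larger beams are what the 50-hypothesis numbers discussed next refer to.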
So with a beam size of 10, you would always consider the 10 most probable solutions, and you would kind of evaluate all 10 and then always keep the 10 best. Let's see how this goes. Let's do a beam size of 3 in our case. So, a cat jumps, and then you can come up with three different things this sentence could continue with: a cat jumps over, a cat jumps between, and a cat jumps swiftly. So these are your three hypotheses. Then we go to the next step and we have to evaluate each of them. So: a cat jumps over the, over a, over me; a cat jumps between the, between two, and a cat jumps between many; the cat jumps swiftly end of sentence, cat jumps swiftly over, cat jumps swiftly and. These are all valid. So of these nine, you would now select again the three that overall have the highest likelihood. Maybe that's the following: cat jumps over the, cat jumps over a, and a cat jumps between two. These three, right. So you just keep these three, and then in the next step you again, for each of these three, generate three hypotheses. So this is what's called a beam search, and if you give it a beam size of 10 or 50, this system tends to improve even more. The way this system works is quite different from Mathematica, in that Mathematica, as I said, is a symbolic solver that never makes mistakes but can fail to give you a solution. This system simply generates an output sequence that is not guaranteed to actually be a solution to the problem; it is just a hypothesis. But then you can quickly check whether the hypothesis is correct. That's the nature of these math problems: with integration you can simply differentiate, with ODEs you can simply plug them in to see if they're a solution. It's kind of like your classic, let's say, NP problems, or like SAT solving, where you can quickly check whether something is a solution. So if you have a system that generates you 50 hypotheses, you can quickly check which one is actually correct. So these numbers here mean that one of these 50 that the system came up with was a correct solution. And if you allow for this many hypotheses, you can see it goes up quite a bit. For example, the ODE solving is almost the same, and here it's even worse: if you take ODEs of order two, it's even worse than Mathematica. But if you allow for larger beam sizes, you see it dramatically goes up. And so it's a different approach. I wouldn't be surprised if Mathematica would actually implement something like this very soon, or just buy this off of Facebook or something, or Facebook buys Mathematica. In whatever way, this clearly is a different approach, and it appears to work better. But there's a caveat. So here's the caveat that I see with this kind of thing: these evaluations are done on data sets, of course, and this paper goes into big detail on how to generate these data sets. So they have to pay attention to many things, like: many solutions are equivalent. For example, here, you know that this solution and this solution to this differential equation are the same, so they have to use a symbolic framework to check whether the solutions are the same, and so on. It is very good work, but they do evaluate on expressions that fit into their data set. So here, in their data set, they say, okay, we evaluate expressions with up to 15 internal nodes, leaf values, these four binary operators, and these 15 unary operators. So the expressions that they train on fall into this data set, right, also just numbers from negative 5 to 5. So it is kind of to be expected that a system that is trained on these
things would make would perform very well on these things as opposed to opposed to Mathematica that is you know a general purpose tool moreover if you look at sorry I think this is further done for example in integration for the integration task they have three different ways of solving of generating data they have the four word way where they simply use a symbolic integrator to generate expressions they have the backward way where they start from the integral and then differentiate it in order to obtain a training pair and they have an integration by parts method so these are three different methods to come up with problems for this system to be trained on and they have very different properties to the effect that if you train with one just one it won't work well on the other so if you train with the forward method it will work very well on data that has been generated with the forward method so this is down here this is what it's trained on and this is what it's evaluated on right you can see the the diagonal is very very strong but if you train with the backward method but you evaluate on data generated with the forward method it is actually very poor that's because in one case generally the solutions are longer than the input in the other case the solutions are shorter not only does this system only work on the particular task here it is actually very attuned to the way that this data was generated right so in fact I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate and again the problem the problem is made kind of worse because they if their evaluation set would also come from their distribution so what they've ultimately shown is that they can do this on a very skewed probably very biased subset of this mathematical problem and on that biased subset they can outperform something like Mathematica right they kind of defeat themselves yeah if you look here they even the different integration data generating methods if you only train on one of them it doesn't generalize if you only train on forward data then if you evaluate on backward generated data it doesn't work so even the integrator can can't really generalize so they have to kind of combine different method and even now we can probably easily find examples that this integrator can't solve so I mean there is a lot of cool things here and they show a number of properties that the model learns just from without them telling it to and it's cool that it works anyway as I said this model has no program the notion of how math works but also it kind of shows the problems if you if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process the claims you can make at the end or limited then to be fair I don't know what claims they made in the press generally so I think there is a pretty cool work check it out and that was it thanks | [{"start": 0.0, "end": 16.0, "text": " Hi there. Can you solve this? Well neither can I, but Wolfram Alpha can. So this is the thing that probably I have most to think for passing university, especially the math classes in it."}, {"start": 16.0, "end": 33.0, "text": " If you don't know Wolfram Alpha, it is an engine. It's from the creators of Mathematica, but it is online. And it can do symbolic math. 
So it can integrate this expression for example, and you'll get the solution here."}, {"start": 33.0, "end": 42.0, "text": " And if you have the pro version, it can give you a step-by-step solution of how to get there."}, {"start": 42.0, "end": 58.0, "text": " So this part of math is an entirely different part than we usually do with computers. Usually we do numeric math working with actual values. But here it's about symbolic math. It's about manipulating expression."}, {"start": 58.0, "end": 75.0, "text": " So in this case, integrating them. So here is a paper by Facebook AI research called Deep Learning for Symbolic Mathematics by Giam Blombe and Fran\u00e7ois-Jerton."}, {"start": 75.0, "end": 95.0, "text": " These people have basically tackled the task of doing these mathematical reasoning, solving these mathematical problems in a symbolic way using neural networks. So they start out by saying here neural networks have a reputation for being better at solving statistical or approximate problems."}, {"start": 95.0, "end": 102.0, "text": " That's what I meant by numeric. Then at performing calculations or working with symbolic data."}, {"start": 102.0, "end": 116.0, "text": " And in this case, they go about this other than other people have. So let's look at how they did it. We can express symbolic mathematics in these kind of trees."}, {"start": 116.0, "end": 138.0, "text": " So an expression like these up here would go would be expressed into this tree. So you would have plus this through two plus three. Of course, there's an implicit bracket here. So you'd have this plus right here, the two here and the entire right hand side here."}, {"start": 138.0, "end": 156.0, "text": " So you can basically decompose it into trees like this or this or this. Here you also can have the differentiation operator as a symbol in there, just like any other operator."}, {"start": 156.0, "end": 173.0, "text": " And moreover, you can basically decompose everything into everything they have here into binary and unary nodes in a tree. What that means is either like a plus sign, it has two components. So a left and the right hand side that should be added together."}, {"start": 173.0, "end": 191.0, "text": " Or like the cosine, it has one argument, namely the thing that it should take the cosine of right. So a lot of people have tried going about this problem by working with these trees and basically training neural networks to."}, {"start": 191.0, "end": 208.0, "text": " So first they use kind of a parser to decompose such a thing into a tree like this and then use neural networks, let's say three recursive neural networks are so on to to kind of make sense of the tree and solving your recursive manner or something like this."}, {"start": 208.0, "end": 212.0, "text": " But that has its limitations."}, {"start": 212.0, "end": 227.0, "text": " So what these people from Facebook, I did is they viewed it as a natural language expression problem, right. So they say no, no, let's actually go with trees as sequences."}, {"start": 227.0, "end": 237.0, "text": " So you can see that this mathematical expression, for example, is already a sequence right. It's simply a sequence of tokens."}, {"start": 237.0, "end": 247.0, "text": " But there are many different ways of expressing this. So you can say two plus three times the parentheses, you can say three times parentheses plus two."}, {"start": 247.0, "end": 260.0, "text": " You can turn many things around and there's always these parentheses make it harder and so on. 
So what they do is they say, okay, let's actually go from this thing to a tree."}, {"start": 260.0, "end": 279.0, "text": " So let's go to three representation and then let's take the tree representation because the tree representation can be normalized and then let's put that again into a sequence representation such as this one."}, {"start": 279.0, "end": 292.0, "text": " This is called reverse Polish notation and it has multiple advantages over the old expression. So let's keep that on the right hand side here."}, {"start": 292.0, "end": 301.0, "text": " This is the same thing except it's a what is called a prefix notation, whereas the thing on the right here is called an infix notation."}, {"start": 301.0, "end": 311.0, "text": " Because the operators such as the plus is always between its arguments, right. So it's left hand argument and it's right hand argument."}, {"start": 311.0, "end": 322.0, "text": " In prefix notation, the operator is always in front of its arguments. So this operator here is has its first argument."}, {"start": 322.0, "end": 334.0, "text": " And as a second argument, this right now the cool thing is if you express a tree like this, you can simply go and use a stack machine to solve it."}, {"start": 334.0, "end": 347.0, "text": " So you can basically go, I would say you can go from the from the right here and use to use like a two and a five plus and let's do it by hand actually. This is fun."}, {"start": 347.0, "end": 359.0, "text": " So we have plus two times three. If you're a boomer like me, you remember you had to use calculators like this and couldn't use the infix notation."}, {"start": 359.0, "end": 374.0, "text": " So you go from the right, right. You say two, five plus. Cool, that's seven. So scratch that, put seven here. Right. So your new stack is three, two times this."}, {"start": 374.0, "end": 385.0, "text": " Right. Then you go again from the right and you go seven, three times. Okay, that's 21. Cool. 21. Scratch this. Now it's 21."}, {"start": 385.0, "end": 392.0, "text": " Two plus 21 is 23. Fairly sure. That's the solution."}, {"start": 392.0, "end": 405.0, "text": " Correct me if I'm wrong. But this is how you would go about solving like this. So it is the same expression as the original one, but it doesn't use any parentheses."}, {"start": 405.0, "end": 420.0, "text": " And it is it is derived from the from the tree basically. So it is you can you can normalize it much more in order to find unique expressions."}, {"start": 420.0, "end": 430.0, "text": " So what this system does is it, it transforms any expression into a prefix notation such as this one."}, {"start": 430.0, "end": 441.0, "text": " Oops, and then it uses a sequence to sequence model in order to derive the solution. Now just how crazy is this? Right."}, {"start": 441.0, "end": 451.0, "text": " So we come would go from this thing here, right. From this thing and the solution is 21. Right."}, {"start": 451.0, "end": 467.0, "text": " And the neural network is simply trained to do sequence to sequence from this to that sequence to sequence. That means it basically parses this as on a token level. Right."}, {"start": 467.0, "end": 480.0, "text": " And then it outputs these tokens without so during training you simply give it the you give it the input here and you give it the output."}, {"start": 480.0, "end": 490.0, "text": " And it's supposed to learn how to transform one into the other without you giving it any sort of mathematical ability. 
Right."}, {"start": 490.0, "end": 498.0, "text": " Without you telling it what does a plus sign mean without you telling it this algorithm that I just told you now."}, {"start": 498.0, "end": 506.0, "text": " This by itself is already pretty astounding that you would try such a thing. It really transforms the string."}, {"start": 506.0, "end": 512.0, "text": " So this is not the mathematical question, but the string of this into the string of that."}, {"start": 512.0, "end": 526.0, "text": " Now they don't do it on numbers like I don't think that would work as well if you were to make it kind of calculate numerical things like this as we said this is symbolic."}, {"start": 526.0, "end": 539.0, "text": " So what it can do is it can for example integrate. So if you have an expression like."}, {"start": 539.0, "end": 548.0, "text": " Let's see some on the bottom here. So if you had an expression such as a polynomial."}, {"start": 548.0, "end": 557.0, "text": " Here an expression like this. Right. You would like to find its integral. That is a problem."}, {"start": 557.0, "end": 561.0, "text": " That's one of the problems we had at the beginning. Right. This integral right here."}, {"start": 561.0, "end": 575.0, "text": " You can write this in a string like we said and then derive its solution right here."}, {"start": 575.0, "end": 582.0, "text": " And have the neural network learn to map one to the other right to map this to that."}, {"start": 582.0, "end": 591.0, "text": " So the way it goes is it would map this into map this into its tree representation."}, {"start": 591.0, "end": 599.0, "text": " Three representation. It would map this into its prefix notation. Right."}, {"start": 599.0, "end": 608.0, "text": " It would also map this to let's say another color here. Map this into its tree."}, {"start": 608.0, "end": 614.0, "text": " Then it would map this into its prefix notation. And then that's the training data."}, {"start": 614.0, "end": 624.0, "text": " The training data is take this derive that right. And at inference time of course you won't have this here."}, {"start": 624.0, "end": 632.0, "text": " You'll simply be asked to output a sequence as a normal natural language like you can think of machine translation right."}, {"start": 632.0, "end": 643.0, "text": " This thing translates problems into solutions. It's crazy. I mean it's not it's not technically super challenging."}, {"start": 643.0, "end": 650.0, "text": " It's crazy that it works or that it could work. Right. So we'll see how they this actually works."}, {"start": 650.0, "end": 654.0, "text": " They use a transformer model which is just which is a classic model."}, {"start": 654.0, "end": 661.0, "text": " If you don't know what a transformer is, I have a video called attention is all you need about transformers."}, {"start": 661.0, "end": 668.0, "text": " You can basically use them to do these kinds of tasks to map one string into another string."}, {"start": 668.0, "end": 680.0, "text": " So yeah. So they go into detail here of how they construct the data set and how big the problem space is."}, {"start": 680.0, "end": 696.0, "text": " And so on. Ultimately they compare their system to Mathematica I think. And maple and math lab which do the same things."}, {"start": 696.0, "end": 703.0, "text": " So Mathematica which is the kind of desktop version of wall from alpha that I've shown you."}, {"start": 703.0, "end": 712.0, "text": " Yeah, we're here. So integration is the task of integrating. 
Let's say these symbolic expressions."}, {"start": 712.0, "end": 724.0, "text": " ODE order one and order two are slightly different tasks where you're asked to solve an ordinary differential equation which is also a task in symbolic mathematics."}, {"start": 724.0, "end": 737.0, "text": " If you compare it to Mathematica here and they give it Mathematica a limit of 30 seconds. What Mathematica will do is it will kind of search the manipulations that it knows."}, {"start": 737.0, "end": 755.0, "text": " So the advantage of this is it can always give you let's say a step by step solution if it finds a solution right it will just start and will do a tree search manipulating the expression you give in until it reaches a satisfactory solution."}, {"start": 755.0, "end": 766.0, "text": " But then once it has that it can give you a path through the tree which leads to the solution which will give you a step by step solution so you can understand it."}, {"start": 766.0, "end": 780.0, "text": " The system that Facebook designs here doesn't do that. It simply takes right it simply takes the input tokens like this is the input and it just gives you an output that is learn so that the network per se doesn't understand math."}, {"start": 780.0, "end": 790.0, "text": " It simply learns from many many examples that to transform to to come up with good hypotheses."}, {"start": 790.0, "end": 800.0, "text": " So if you compare here Mathematica for example it can integrate 84% of the things that they put into it."}, {"start": 800.0, "end": 815.0, "text": " It's not said whether it gets it wrong or it simply times out in the rest I would say times out because probably Mathematica never gets it wrong because it's an actual symbolic manipulator with defined rules."}, {"start": 815.0, "end": 822.0, "text": " So I guess the rest of the rest 16% it simply times out doesn't find a solution."}, {"start": 822.0, "end": 837.0, "text": " Whereas this Facebook system and they say find usually find the solution less than a second finds these solutions in 98.4% of the time with the beam size of 1."}, {"start": 837.0, "end": 846.0, "text": " Now what does the beam size mean at decoding time if you have a seek to seek model you can always opt to do what's called beam search."}, {"start": 846.0, "end": 859.0, "text": " So when you have a input sequence let's actually give an example a cat jumps."}, {"start": 859.0, "end": 876.0, "text": " And you generate an output sequence let's say the task is simply to continue the sentence."}, {"start": 876.0, "end": 893.0, "text": " What you can do is you can this is beam size would we call beam size one or no beam search at all you can do it what's called a beam search in that each step you actually generate multiple hypotheses and then keep the best ones in memory."}, {"start": 893.0, "end": 908.0, "text": " So you in the beam size of 10 you would always consider the 10 most probable solutions and you would kind of evaluate all 10 and then always keep the 10 best."}, {"start": 908.0, "end": 913.0, "text": " Let's see how this goes. Let's do a beam size of 3 in our case."}, {"start": 913.0, "end": 930.0, "text": " So a cat jumps and then you can come up with three different things this sentence could continue cat jumps over a cat jumps between and a cat jumps swiftly."}, {"start": 930.0, "end": 956.0, "text": " So these are your three hypotheses then we go to the next step we have to evaluate each of those each of them. 
So a cat jumps over the over a over me a cat jumps between the between two and a cat jumps between many."}, {"start": 956.0, "end": 970.0, "text": " The cat jumps swiftly end of sentence cat jumps swiftly over cat jumps swiftly and these are all valid."}, {"start": 970.0, "end": 999.0, "text": " So of these nine you would now select again the three that overall have the highest likelihood maybe that's the following cat jumps over the cat jumps over a and a cat jumps between two these three right so you just keep these three and then in the next step you again from these three you would want for each three hypotheses."}, {"start": 999.0, "end": 1022.0, "text": " So this is what's called a beam search and if you give it a beam size of 10 or 50 this system tends to improve even more the way this system works is quite different from Mathematica in that Mathematica as I said is a symbolic solver that never makes mistakes but can fail to give you a solution."}, {"start": 1022.0, "end": 1050.0, "text": " This system simply generates an output sequence is not guaranteed to be actually a solution to the problem is just a hypothesis but then you can quickly check whether the hypothesis is correct so the nature of these math problems with integration you can simply differentiate with ODE you can simply plug them in to see if they're a solution is kind of like your classic let's say"}, {"start": 1050.0, "end": 1067.0, "text": " in PR problems or like the sat solving where you can quickly check whether something is a solution so if you have a system that generates you 50 hypotheses you could quickly check which one is actually correct so these numbers here"}, {"start": 1067.0, "end": 1087.0, "text": " mean that one of these 50 that the system came up with was a correct solution and if you allow for such many hypotheses you can see it goes up quite a bit for example the ODE solving is almost the same and here it's even worse if you take ODE's of order two"}, {"start": 1087.0, "end": 1108.0, "text": " it's even worse than Mathematica but if you allow for larger beam sizes you see it dramatically goes up and so it's a different approach I wouldn't be surprised if Mathematica would actually implement something like this very soon or just buy this off a Facebook or something"}, {"start": 1108.0, "end": 1123.0, "text": " or Facebook buy Mathematica in whatever way this clearly is a different approach and it appears to work better but there's a caveat so here's the caveat that I see with this kind of thing"}, {"start": 1123.0, "end": 1143.0, "text": " these evaluations are done on data sets of course and this paper goes into big detail on how to generate these data sets so they have to pay attention to many things like many solutions are equivalent for example here"}, {"start": 1143.0, "end": 1163.0, "text": " you know that this solution and this solution to this equation to this differential equation are the same so they you know have to use a symbolic framework to check whether the solutions are the same and so on"}, {"start": 1163.0, "end": 1184.0, "text": " this it is very good work but they do evaluate on expressions that fit into their fit into their data set so here in their data set they say okay we evaluate you know expressions with up to 15 internal nodes"}, {"start": 1184.0, "end": 1202.0, "text": " leave values for these four binary operators then these 15 unary operators so the expressions that they train on fall into this data set right also just numbers from negative 5 to 5"}, {"start": 1202.0, 
"end": 1222.0, "text": " so it is it is kind of to be expected that a system that is trained on these things would make would perform very well on these things as opposed to opposed to Mathematica that is you know a general purpose tool"}, {"start": 1222.0, "end": 1246.0, "text": " moreover if you look at sorry I think this is further done for example in integration for the integration task they have three different ways of solving of generating data they have the four word way where they simply use a symbolic integrator to generate expressions"}, {"start": 1246.0, "end": 1262.0, "text": " they have the backward way where they start from the integral and then differentiate it in order to obtain a training pair and they have an integration by parts method so these are three different methods to come up with problems for this system to be trained on"}, {"start": 1262.0, "end": 1272.0, "text": " and they have very different properties to the effect that if you train with one just one it won't work well on the other"}, {"start": 1272.0, "end": 1292.0, "text": " so if you train with the forward method it will work very well on data that has been generated with the forward method so this is down here this is what it's trained on and this is what it's evaluated on right you can see the the diagonal is very very strong"}, {"start": 1292.0, "end": 1312.0, "text": " but if you train with the backward method but you evaluate on data generated with the forward method it is actually very poor that's because in one case generally the solutions are longer than the input in the other case the solutions are shorter"}, {"start": 1312.0, "end": 1338.0, "text": " not only does this system only work on the particular task here it is actually very attuned to the way that this data was generated right so in fact I would postulate that this training data is only probably a very small subset of all of the things that we would like to integrate"}, {"start": 1338.0, "end": 1366.0, "text": " and again the problem the problem is made kind of worse because they if their evaluation set would also come from their distribution so what they've ultimately shown is that they can do this on a very skewed probably very biased subset of this mathematical problem and on that biased subset they can outperform something like Mathematica"}, {"start": 1366.0, "end": 1388.0, "text": " right they kind of defeat themselves yeah if you look here they even the different integration data generating methods if you only train on one of them it doesn't generalize if you only train on forward data then if you evaluate on backward generated data it doesn't work"}, {"start": 1388.0, "end": 1402.0, "text": " so even the integrator can can't really generalize so they have to kind of combine different method and even now we can probably easily find examples that this integrator can't solve"}, {"start": 1402.0, "end": 1415.0, "text": " so I mean there is a lot of cool things here and they show a number of properties that the model learns just from without them telling it to"}, {"start": 1415.0, "end": 1443.0, "text": " and it's cool that it works anyway as I said this model has no program the notion of how math works but also it kind of shows the problems if you if you do this via a training data set in that if your training data set is very skewed and then your evaluation set follows the same generation process the claims you can make at the end or limited"}, {"start": 1443.0, "end": 1455.0, "text": " then to be fair I don't know what claims they 
made in the press generally so I think there is a pretty cool work check it out and that was it thanks"}] |
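To make the beam-search decoding described in the transcript above concrete, here is a minimal, generic sketch. It is not the paper's actual decoder: `step_fn`, `start_token`, and `end_token` are hypothetical placeholders for whatever tokenizer and trained seq2seq model supplies next-token log-probabilities.

```python
def beam_search(step_fn, start_token, end_token, beam_size=10, max_len=50):
    """Generic beam search over an autoregressive model.

    step_fn(prefix) must return a list of (token, log_prob) continuations for the
    given prefix; this is a stand-in for the trained seq2seq decoder.
    At every step only the `beam_size` highest-scoring hypotheses are kept.
    """
    beams = [([start_token], 0.0)]                    # (sequence, summed log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:                  # hypothesis already complete
                candidates.append((seq, score))
                continue
            for token, logp in step_fn(seq):          # expand with possible next tokens
                candidates.append((seq + [token], score + logp))
        # keep only the beam_size best hypotheses (complete or partial)
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(seq[-1] == end_token for seq, _ in beams):
            break
    return sorted(beams, key=lambda c: c[1], reverse=True)
```

For the integration and ODE tasks this pairs naturally with cheap verification: each returned hypothesis can be differentiated (or plugged into the equation) and compared against the input, which is why allowing larger beams raises the reported accuracy so much.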
Yannic Kilcher | https://www.youtube.com/watch?v=JPX_jSZtszY | NeurIPS 2020 Changes to Paper Submission Process | My thoughts on the changes to the paper submission process for NeurIPS 2020.
The main new changes are:
1. ACs can desk reject papers
2. All authors have to be able to review if asked
3. Resubmissions from other conferences must be marked and a summary of changes since the last submission must be provided
4. Broader societal / ethical impact must be discussed
5. Upon acceptance, all papers must link to an explanatory video and the PDFs for slides and poster
https://neurips.cc/Conferences/2020/CallForPapers
https://youtu.be/361h6lHZGDg
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. So I just wanted to give a few quick thoughts about the changes to the new RIP submission process. This year as opposed to last year, they've announced this on the website, on Twitter, with the video and so on and I thought I might share some thoughts on that and maybe some of you haven't heard yet in case you're planning to submit or thinking about it. So desk rejections, ACs, area chairs have the ability to desk reject papers that they feel strongly are not going to be passable to the reviewers. They did an experiment last year where the ACs were simply supposed to mark submissions that they would desk reject and it turned out ACs aren't very good at estimating which submissions are going to be rejected by the reviewers. That might be because there wasn't really anything at stake because it was just kind of let's see how this works. But it is definitely a move to reduce the number of submissions because the field is exploding and we lack reviewing power, reviewing people. So this is a move to reduce the number of people that have to review something because there will be fewer papers. I don't know if this increases the quality overall. If your paper gets desk rejected, there's usually like some obvious reason for it why an AC decided it's not worth it. They probably haven't read it in depth but there might be some kind of overall structural issue that or like the introduction has many typos or you know that like look for the obvious things even though your work might be good. All authors of a paper have to be able to review if asked to do so and again this is a stab at this kind of reviewing crisis. I have mixed feelings about this. I really think this is a move in the wrong direction. It will increase the number of authors because a lot of people have been kind of free riding in that they are submitting papers but they aren't reviewing other papers even though they would be competent researchers simply because reviewing doesn't get you anything. So there's no incentive to do reviews. Maybe you can say you're a reviewer but then there's every incentive to do bad reviews like two line reviews where the first line says you should have compared to my paper reject like fuck you if you review like this. In any case like a lot of times and this hits for example like universities where you maybe work with a master student and the master student does some of the pre-processing of the data and they don't really have a clue about the machine learning but they still contribute it so why shouldn't they be an author on the paper they might even have been have written that section about the data pre-processing and now they're asked to review entire papers about topics where they're not really familiar with or you have some outside collaborators or you know that there are so many so many things wrong I think this attracts the wrong kind of people and by forcing people to do it you encourage even more like all these reviewers that would not have reviewed what will happen is they will give shitty reviews and you will have even worse quality of reviews as a result. I think this is the wrong move to reduce the number of load per reviewer. I'd rather see a bullish peer review completely in computer science in machine learning at least. That's my opinion but that might be a video for another time. I have plans how to replace it another time. 
Resubmissions have to be clearly marked so if your paper is a resubmission of like if you had already submitted it in the last 12 months it's been rejected you have to say it is a resubmission and the changes you made to the paper. Again with a with a with a peer review process that actually works this would make a lot of sense you can say well it got rejected last time and here is how I corrected for what the reviewers criticized but with the review quality right now I mean most of the papers what are they gonna say? It got rejected for nefarious reasons because the reviewer had a bad bowel movement that morning and I didn't really change much so you encourage people to kind of blow out of proportion the changes they made and put a lot of additional unnecessary work on two papers that would actually be already fine right so this all of these things they'll just they are they're forcing people to do things and then the incentives of what we want aren't aligned with what we give right so what you'll end up with is lower quality reviews and lower quality work. So the next two points are of a different nature the first one though I yeah that that will probably I mean even if the ACs aren't perfect you know that that's a bit I like that. The fourth point and the fifth point are a bit different the fourth point is there is a new section in CMT apparently where you have to describe the broader societal impact and ethics around your work like how will your work influence society what are positives and negatives ethical outcomes how can it be used and this is targeted towards things like let's say facial recognition if you develop a new facial recognition algorithm you may be able to argue well this could be better used to identify you know victims in a big crowd if you know there's a mass riot or something and then you don't know who is there is my relative one of the people in the mass right that gets stomped on or you can also say this potentially helps a dictatorial state to govern their people because they can now recognize everyone. 
For most papers it will be a bit shaky like if you're third or the optimizational algorithm and she uses a slightly better convergence rate I'm not sure what's here but what I feel is that this is it's dumb in a way because this just means more work right basically now you have to demonstrate and yeah it says you should discuss positive and negative aspects but in essence everyone will be demonstrating virtue signaling how good their work will be for society and what good can be done and maybe bad but that can be mitigated right and and just pushes it into a more PR world so it goes from the science world into more PR world it means extra work and who are the people that can afford to do extra work it's mostly the big companies right they can just put an additional team member on that maybe even do additional experiments to show the societal impact of the work and who will lose out are probably small universities independent researchers and so on that don't have that that capacity that simply do their research right because it's an interesting research question and for almost every single thing in the world that has an application it will have good and bad applications so yeah mixed feelings so fifth is you are now supposed if your paper gets accepted to make a video about it and upload the the poster basically link to the poster that you would use and also link to slides that you would give the talk with this is to make it more accessible to people people that are not at the conference which again I have mixed feelings about again it pushes into this more PR realm right talks are already live streamed most of them are for most of the large conferences and I feel it just gets people one step more away from the actual paper like it's very so it allows people to grandstand and PR up even more of their work because even people who don't attend the conference now they're not gonna read the paper just gonna watch the video right and in the video you can always leave away those you know things that you would have to like that a reviewer makes you put in the paper right and in the video you can overboys it's camera ready no one reviews the video you can say whatever you want so it's it's just where before if you didn't attend the conference I think many people actually did read the paper watch talks where people could ask questions and now it's it's just one more PR thing and again who has time energy and money to really invest a lot into this it's mainly large companies right if you're small and you're time bound and so on you might not have equipment or time to do that I am not for high-end to do your learnings videos just saying I don't have time to make these videos really as you could see in the stellar quality I think there's a bright glare right here so that was it for my opinions on this and I wish you nice day bye bye | [{"start": 0.0, "end": 9.72, "text": " Hi there. So I just wanted to give a few quick thoughts about the changes to the"}, {"start": 9.72, "end": 15.24, "text": " new RIP submission process. This year as opposed to last year, they've announced"}, {"start": 15.24, "end": 20.64, "text": " this on the website, on Twitter, with the video and so on and I thought I might"}, {"start": 20.64, "end": 25.52, "text": " share some thoughts on that and maybe some of you haven't heard yet in case"}, {"start": 25.52, "end": 34.480000000000004, "text": " you're planning to submit or thinking about it. 
So desk rejections, ACs, area"}, {"start": 34.480000000000004, "end": 40.12, "text": " chairs have the ability to desk reject papers that they feel strongly are not"}, {"start": 40.12, "end": 47.4, "text": " going to be passable to the reviewers. They did an experiment last year where"}, {"start": 47.4, "end": 51.84, "text": " the ACs were simply supposed to mark submissions that they would desk"}, {"start": 51.84, "end": 58.2, "text": " reject and it turned out ACs aren't very good at estimating which submissions"}, {"start": 58.2, "end": 63.24, "text": " are going to be rejected by the reviewers. That might be because there wasn't"}, {"start": 63.24, "end": 67.44, "text": " really anything at stake because it was just kind of let's see how this works."}, {"start": 67.44, "end": 73.64, "text": " But it is definitely a move to reduce the number of submissions because the"}, {"start": 73.64, "end": 81.18, "text": " field is exploding and we lack reviewing power, reviewing people. So this is a"}, {"start": 81.18, "end": 87.76, "text": " move to reduce the number of people that have to review something because there"}, {"start": 87.76, "end": 95.0, "text": " will be fewer papers. I don't know if this increases the quality overall. If your"}, {"start": 95.0, "end": 100.96000000000001, "text": " paper gets desk rejected, there's usually like some obvious reason for it why"}, {"start": 100.96000000000001, "end": 107.16000000000001, "text": " an AC decided it's not worth it. They probably haven't read it in depth but"}, {"start": 107.16, "end": 111.78, "text": " there might be some kind of overall structural issue that or like the"}, {"start": 111.78, "end": 117.72, "text": " introduction has many typos or you know that like look for the obvious"}, {"start": 117.72, "end": 126.12, "text": " things even though your work might be good. All authors of a paper have to be"}, {"start": 126.12, "end": 131.6, "text": " able to review if asked to do so and again this is a stab at this kind of"}, {"start": 131.6, "end": 138.32, "text": " reviewing crisis. I have mixed feelings about this. I really think this is a move"}, {"start": 138.32, "end": 142.76, "text": " in the wrong direction. It will increase the number of authors because a lot of"}, {"start": 142.76, "end": 146.29999999999998, "text": " people have been kind of free riding in that they are submitting papers but"}, {"start": 146.29999999999998, "end": 151.4, "text": " they aren't reviewing other papers even though they would be competent"}, {"start": 151.4, "end": 155.62, "text": " researchers simply because reviewing doesn't get you anything. So there's no"}, {"start": 155.62, "end": 160.16, "text": " incentive to do reviews. Maybe you can say you're a reviewer but then there's"}, {"start": 160.16, "end": 164.35999999999999, "text": " every incentive to do bad reviews like two line reviews where the first line"}, {"start": 164.35999999999999, "end": 170.76, "text": " says you should have compared to my paper reject like fuck you if you review"}, {"start": 170.76, "end": 178.04, "text": " like this. 
In any case like a lot of times and this hits for example like"}, {"start": 178.04, "end": 182.12, "text": " universities where you maybe work with a master student and the master"}, {"start": 182.12, "end": 187.76, "text": " student does some of the pre-processing of the data and they don't really have"}, {"start": 187.76, "end": 190.95999999999998, "text": " a clue about the machine learning but they still contribute it so why shouldn't"}, {"start": 190.95999999999998, "end": 194.12, "text": " they be an author on the paper they might even have been have written that"}, {"start": 194.12, "end": 199.28, "text": " section about the data pre-processing and now they're asked to review entire"}, {"start": 199.28, "end": 203.56, "text": " papers about topics where they're not really familiar with or you have some"}, {"start": 203.56, "end": 208.76, "text": " outside collaborators or you know that there are so many so many things wrong"}, {"start": 208.76, "end": 213.88, "text": " I think this attracts the wrong kind of people and by forcing people to do"}, {"start": 213.88, "end": 218.64, "text": " it you encourage even more like all these reviewers that would not have reviewed"}, {"start": 218.64, "end": 223.48, "text": " what will happen is they will give shitty reviews and you will have even"}, {"start": 223.48, "end": 228.88, "text": " worse quality of reviews as a result. I think this is the wrong move to reduce"}, {"start": 228.88, "end": 236.04, "text": " the number of load per reviewer. I'd rather see a bullish peer review"}, {"start": 236.04, "end": 240.48, "text": " completely in computer science in machine learning at least. That's my"}, {"start": 240.48, "end": 246.92, "text": " opinion but that might be a video for another time. I have plans how to replace"}, {"start": 246.92, "end": 253.92, "text": " it another time. Resubmissions have to be clearly marked so if your paper is a"}, {"start": 253.92, "end": 258.8, "text": " resubmission of like if you had already submitted it in the last 12 months it's"}, {"start": 258.8, "end": 264.59999999999997, "text": " been rejected you have to say it is a resubmission and the changes you made to"}, {"start": 264.59999999999997, "end": 270.15999999999997, "text": " the paper. Again with a with a with a peer review process that actually works this"}, {"start": 270.16, "end": 274.68, "text": " would make a lot of sense you can say well it got rejected last time and here is"}, {"start": 274.68, "end": 279.72, "text": " how I corrected for what the reviewers criticized but with the review quality"}, {"start": 279.72, "end": 286.6, "text": " right now I mean most of the papers what are they gonna say? 
It got rejected for"}, {"start": 286.6, "end": 292.16, "text": " nefarious reasons because the reviewer had a bad bowel movement that morning and"}, {"start": 292.16, "end": 296.76000000000005, "text": " I didn't really change much so you encourage people to kind of blow out of"}, {"start": 296.76, "end": 301.8, "text": " proportion the changes they made and put a lot of additional unnecessary work on"}, {"start": 301.8, "end": 306.92, "text": " two papers that would actually be already fine right so this all of these things"}, {"start": 306.92, "end": 313.96, "text": " they'll just they are they're forcing people to do things and then the"}, {"start": 313.96, "end": 320.8, "text": " incentives of what we want aren't aligned with what we give right so what you'll"}, {"start": 320.8, "end": 326.68, "text": " end up with is lower quality reviews and lower quality work. So the next"}, {"start": 326.68, "end": 332.96, "text": " two points are of a different nature the first one though I yeah that that"}, {"start": 332.96, "end": 337.88, "text": " will probably I mean even if the ACs aren't perfect you know that that's a bit"}, {"start": 337.88, "end": 344.52, "text": " I like that. The fourth point and the fifth point are a bit different the fourth"}, {"start": 344.52, "end": 348.6, "text": " point is there is a new section in CMT apparently where you have to describe"}, {"start": 348.6, "end": 354.76, "text": " the broader societal impact and ethics around your work like how will your"}, {"start": 354.76, "end": 360.64, "text": " work influence society what are positives and negatives ethical outcomes how can"}, {"start": 360.64, "end": 365.48, "text": " it be used and this is targeted towards things like let's say facial"}, {"start": 365.48, "end": 370.52, "text": " recognition if you develop a new facial recognition algorithm you may be able"}, {"start": 370.52, "end": 377.2, "text": " to argue well this could be better used to identify you know victims in a"}, {"start": 377.2, "end": 381.59999999999997, "text": " big crowd if you know there's a mass riot or something and then you don't"}, {"start": 381.6, "end": 387.72, "text": " know who is there is my relative one of the people in the mass right that gets"}, {"start": 387.72, "end": 395.48, "text": " stomped on or you can also say this potentially helps a dictatorial state to"}, {"start": 395.48, "end": 400.92, "text": " govern their people because they can now recognize everyone. 
For most papers it"}, {"start": 400.92, "end": 406.24, "text": " will be a bit shaky like if you're third or the optimizational algorithm and"}, {"start": 406.24, "end": 413.6, "text": " she uses a slightly better convergence rate I'm not sure what's here but what"}, {"start": 413.6, "end": 421.72, "text": " I feel is that this is it's dumb in a way because this just means more work"}, {"start": 421.72, "end": 426.8, "text": " right basically now you have to demonstrate and yeah it says you should"}, {"start": 426.8, "end": 430.76, "text": " discuss positive and negative aspects but in essence everyone will be"}, {"start": 430.76, "end": 436.08, "text": " demonstrating virtue signaling how good their work will be for society"}, {"start": 436.08, "end": 442.2, "text": " and what good can be done and maybe bad but that can be mitigated right and and"}, {"start": 442.2, "end": 448.32, "text": " just pushes it into a more PR world so it goes from the science world into more"}, {"start": 448.32, "end": 453.2, "text": " PR world it means extra work and who are the people that can afford to do"}, {"start": 453.2, "end": 456.88, "text": " extra work it's mostly the big companies right they can just put an additional"}, {"start": 456.88, "end": 461.44, "text": " team member on that maybe even do additional experiments to show the"}, {"start": 461.44, "end": 467.68, "text": " societal impact of the work and who will lose out are probably small universities"}, {"start": 467.68, "end": 473.92, "text": " independent researchers and so on that don't have that that capacity that"}, {"start": 473.92, "end": 478.44, "text": " simply do their research right because it's an interesting research question and"}, {"start": 478.44, "end": 483.28, "text": " for almost every single thing in the world that has an application it will have"}, {"start": 483.28, "end": 490.96, "text": " good and bad applications so yeah mixed feelings so fifth is you are now supposed"}, {"start": 490.96, "end": 497.44, "text": " if your paper gets accepted to make a video about it and upload the the"}, {"start": 497.44, "end": 502.79999999999995, "text": " poster basically link to the poster that you would use and also link to slides"}, {"start": 502.79999999999995, "end": 506.56, "text": " that you would give the talk with this is to make it more accessible to people"}, {"start": 506.56, "end": 512.92, "text": " people that are not at the conference which again I have mixed feelings about"}, {"start": 512.92, "end": 520.04, "text": " again it pushes into this more PR realm right talks are already live streamed"}, {"start": 520.04, "end": 525.4399999999999, "text": " most of them are for most of the large conferences and I feel it just gets"}, {"start": 525.4399999999999, "end": 531.24, "text": " people one step more away from the actual paper like it's very so it allows"}, {"start": 531.24, "end": 538.16, "text": " people to grandstand and PR up even more of their work because even people who"}, {"start": 538.16, "end": 541.1999999999999, "text": " don't attend the conference now they're not gonna read the paper just gonna"}, {"start": 541.1999999999999, "end": 546.0, "text": " watch the video right and in the video you can always leave away those you know"}, {"start": 546.0, "end": 550.12, "text": " things that you would have to like that a reviewer makes you put in the paper"}, {"start": 550.12, "end": 555.24, "text": " right and in the video you can overboys it's camera ready no one reviews the"}, {"start": 555.24, "end": 560.16, 
"text": " video you can say whatever you want so it's it's just where before if you didn't"}, {"start": 560.16, "end": 564.48, "text": " attend the conference I think many people actually did read the paper watch"}, {"start": 564.48, "end": 571.08, "text": " talks where people could ask questions and now it's it's just one more PR thing"}, {"start": 571.08, "end": 578.64, "text": " and again who has time energy and money to really invest a lot into this it's"}, {"start": 578.64, "end": 584.32, "text": " mainly large companies right if you're small and you're time bound and so on you"}, {"start": 584.32, "end": 590.72, "text": " might not have equipment or time to do that I am not for high-end to do your"}, {"start": 590.72, "end": 598.0, "text": " learnings videos just saying I don't have time to make these videos really as you"}, {"start": 598.0, "end": 602.76, "text": " could see in the stellar quality I think there's a bright glare right here so"}, {"start": 602.76, "end": 632.72, "text": " that was it for my opinions on this and I wish you nice day bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=9Kec_7WFyp0 | Growing Neural Cellular Automata | The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive!
https://distill.pub/2020/growing-ca/
https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
Abstract:
Most multicellular organisms begin their life as a single egg cell - a single cell whose progeny reliably self-assemble into highly complex anatomies with many organs and tissues in precisely the same arrangement each time. The ability to build their own bodies is probably the most fundamental skill every living creature possesses. Morphogenesis (the process of an organism’s shape development) is one of the most striking examples of a phenomenon called self-organisation. Cells, the tiny building blocks of bodies, communicate with their neighbors to decide the shape of organs and body plans, where to grow each organ, how to interconnect them, and when to eventually stop. Understanding the interplay of the emergence of complex outcomes from simple rules and homeostatic feedback loops is an active area of research. What is clear is that evolution has learned to exploit the laws of physics and computation to implement the highly robust morphogenetic software that runs on genome-encoded cellular hardware.
This process is extremely robust to perturbations. Even when the organism is fully developed, some species still have the capability to repair damage - a process known as regeneration. Some creatures, such as salamanders, can fully regenerate vital organs, limbs, eyes, or even parts of the brain! Morphogenesis is a surprisingly adaptive process. Sometimes even a very atypical development process can result in a viable organism - for example, when an early mammalian embryo is cut in two, each half will form a complete individual - monozygotic twins!
The biggest puzzle in this field is the question of how the cell collective knows what to build and when to stop. The sciences of genomics and stem cell biology are only part of the puzzle, as they explain the distribution of specific components in each cell, and the establishment of different types of cells. While we know of many genes that are required for the process of regeneration, we still do not know the algorithm that is sufficient for cells to know how to build or remodel complex organs to a very specific anatomical end-goal. Thus, one major lynch-pin of future work in biomedicine is the discovery of the process by which large-scale anatomy is specified within cell collectives, and how we can rewrite this information to have rational control of growth and form. It is also becoming clear that the software of life possesses numerous modules or subroutines, such as “build an eye here”, which can be activated with simple signal triggers. Discovery of such subroutines and a mapping out of the developmental logic is a new field at the intersection of developmental biology and computer science. An important next step is to try to formulate computational models of this process, both to enrich the conceptual toolkit of biologists and to help translate the discoveries of biology into better robotics and computational technology.
Imagine if we could design systems of the same plasticity and robustness as biological life: structures and machines that could grow and repair themselves. Such technology would transform the current efforts in regenerative medicine, where scientists and clinicians seek to discover the inputs or stimuli that could cause cells in the body to build structures on demand as needed. To help crack the puzzle of the morphogenetic code, and also exploit the insights of biology to create self-repairing systems in real life, we try to replicate some of the desired properties in an in silico experiment.
Authors: Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today I thought we would be looking at growing neural cellular automata, which is an article on the still.pub, which I found pretty neat. So this is kind of an interactive article. If you don't know the still.pub, check it out. It is a cool new concept as an alternative to the classical journals or the conference system. So what it allows you to do is to kind of write articles that are a bit more interactive, a bit more engaging and don't have the, like there's no PDFs, there's no pages, there are animations and so on. So I thought we'd be looking at this article today, which is kind of growing neural cellular automata. So if you don't know what cellular automata are, this is a very kind of old concept. The most famous one is called the Game of Life, where you have these cells. Here you can see every pixel is a cell and they follow some kind of update rule. And usually it's the update rule, something like if my neighbor is alive, I'm going to be alive as well in the next time step. And if, or if enough neighbors are alive, and if only there are few neighbors are alive, I'm going to die. So this gives rise to these kind of patterns. And here is the same as done with color. And the update rules are a bit more complicated. So basically, here, a traveler, oh, nice. Okay. So in the Game of Life, if you play it, the most prestigious thing to get is are these kind of travelers. I have not, this is the first time I've managed to do this and this thing. So what does it do? So each pixel here is kind of an autonomous thing that is only allowed to look at its neighbors in order to decide whether or not in the next time step it is going to be alive. Look, it's like incorporating again. So each cell looks at its neighbors and then decides what its next state will be. And here it's not only alive or dead. Dead would be white and alive would be anything else. But it is also, I guess this white is, it is also the color. So each cell decides on what color it should have. And then this is a live thing. So it kind of reproduces. You can see if I start it new, if you double click here, it grows from somewhere else. And this is completely local. So these cells really only look at their neighbors. That's the special part. They don't look at the global structures. Not like again, they can look at the entire picture and decide what's still missing. And what these can also do, if you destroy part of it, they can kind of grow back just again, just out of local update rules at the level of the individual cells and their neighbors. They're trained to do these big structures. So let's look at how they do it. So basically here's how they model a cell. And let's go over here. So each cell, as I said, is made up of 16 channels. And here it's modeled as three by three. But I think each cell is really one pixel. And each cell is allowed to look at its eight neighbors. Right. So each cell is allowed to look at its eight neighbors across 16 different channels. And the 16 channels here mean the first three are RGB. So this is actually color that is seen. Then there is an alive or dead channel. So what they call an alpha channel. So if this channel is high, the cell is considered alive. Otherwise, it is considered dead and not part of the pattern. So a cell can come alive or die depending on its neighbors. And then the rest, the rest 12 channels are what they call hidden channels. So the cell is allowed to encode some hidden state there. 
So there's each cell is represented by the 16-dimensional vector, which is not much right. And then each cell is allowed to look at three things. So from the bottom here, it's allowed to look at its own state, so at its own 16-dimensional vectors. And it is allowed to look at its neighbors. And it does this by doing a convolution with a sobel filter. And the sobel filter is simply a fixed filter that you do a three by three convolution with. As you can see here is basically a gradient filter. So basically measures the difference between what's to the left of the cell and what's to the right of the cell. And here in the sobel y direction, the same in the y direction. So it's basically allowed to look at gradients in states of its neighbors. This is modeled after real cells kind of looking at chemical gradients in their neighborhoods. So this is all this is all that the cell has to decide what it's supposed to do next. Right. And what we want is we want that each individual cell only looking in its neighbors produces in total they will produce these kind of very complex patterns. So the up to draw list of following you convoluted with the sobel filters and you take the cell identity. You put this all into a vector, you put it through a very very small neural network. So this is one dense layer, one relu, and then another dense layer to get the next 16-dimensional vector, which is the next state. And that defines your update rules. That doesn't really define the next state that defines the delta to the next state, kind of like a residual neural network. So basically which cells need to come alive in the next time step which cells need to die and how are they to change their colors. Right. And then you get the output of the next step. Right. So that's that's basically the entire thing. So all that is learned here is the update rule of the neural network. Right. So basically the neural network decides it looks at a cell and its neighbors and decides what the information and the cell in the next step should be. Right. And you do this for multiple time steps. That's I want to actually want to go down here. You do this for multiple time steps. The initial state is simply one cell that is alive here in the middle. Everything else is dead. This cell is alive and black. You do this for many steps. Right. And then at some point you get an output and you compare the output to your desired output. You compute a loss that is differentiable and because your update rule is differentiable and your loss is differentiable, you can backprop through time to the original to the original pattern here and you can basically learn this update rule by backproping through time. This is a bit like an LSTM and if you see in the architecture here, I think this residual connection is really the key to making this work over time because usually I would not expect something like this to easily emerge over time because you have the problem of vanishing and exploding gradients and you have no way of mitigating this problem here in this simple neural network. But in any case they backprop through time here. So each of these update steps which again this isn't one neural network with many layers. This is the same neural network applied over and over and over and over again and then there is a loss computed. So basically the gradients will accumulate over these steps. Basically tell the network what it needs to adjust to go from this one single black pixel to this final desired state. 
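As an illustration of the update rule just described (fixed Sobel "perception" filters, a tiny per-cell dense, ReLU, dense network, and a residual delta update with alive-masking), here is a minimal NumPy sketch. It is a reconstruction under assumptions, not the authors' code: the grid size, the 128-unit hidden layer, the simplified post-update alive mask, and the helper names (`perceive`, `ca_step`) are illustrative, and the stochastic per-cell update and the backprop-through-time training are omitted.

```python
import numpy as np

# Fixed 3x3 Sobel kernels used as the "perception" filters (x and y gradients).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32) / 8.0
SOBEL_Y = SOBEL_X.T

def filter3x3(grid, kernel):
    """Apply the same 3x3 filter to every channel of an (H, W, C) grid, wrap-around borders."""
    out = np.zeros_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * np.roll(grid, shift=(-dy, -dx), axis=(0, 1))
    return out

def perceive(grid):
    # Each cell sees its own state plus Sobel-filtered views of its neighbourhood:
    # (H, W, C) -> (H, W, 3 * C) "perception" vector per cell.
    return np.concatenate([grid, filter3x3(grid, SOBEL_X), filter3x3(grid, SOBEL_Y)], axis=-1)

def ca_step(grid, w1, b1, w2, alive_threshold=0.1):
    """One update: a tiny per-cell MLP (dense -> ReLU -> dense) producing a residual delta."""
    p = perceive(grid)                              # (H, W, 3C)
    hidden = np.maximum(p @ w1 + b1, 0.0)           # the same weights applied at every cell
    delta = hidden @ w2                             # (H, W, C)
    new_grid = grid + delta                         # residual update: the network outputs a delta
    # Simplified alive mask: a cell stays in play if its own or any neighbour's alpha
    # channel (channel 3) exceeds the threshold; everything else is zeroed out.
    alpha = (new_grid[..., 3:4] > alive_threshold).astype(np.float32)
    alive = (filter3x3(alpha, np.ones((3, 3), dtype=np.float32)) > 0).astype(np.float32)
    return new_grid * alive

# Tiny usage example: a 40x40 grid of 16-channel cells, seeded with a single live cell.
H, W, C, HIDDEN = 40, 40, 16, 128
rng = np.random.default_rng(0)
grid = np.zeros((H, W, C), dtype=np.float32)
grid[H // 2, W // 2, 3:] = 1.0                      # alpha + hidden channels of the seed cell
w1 = rng.normal(0.0, 0.1, size=(3 * C, HIDDEN)).astype(np.float32)
b1 = np.zeros(HIDDEN, dtype=np.float32)
w2 = np.zeros((HIDDEN, C), dtype=np.float32)        # zero-init last layer: "do nothing" at first
for _ in range(64):
    grid = ca_step(grid, w1, b1, w2)
```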
If you do this over and over again, you learn things. You learn a update rule that will give rise to that pattern. Hopefully. Now here is a kind of an illustration of this alive and dead thing. So what they do is they consider cells that have an alpha channel. This one of these channels called alpha. They have an alpha channel above 0.1. It's considered alive and part of the loss. Then the neighbors of these cells that are below 0.1 but are neighboring a cell that is mature alive. They're called growing. They're also part of the loss. So simply by being close to someone that is alive, a cell that is alive, you are considered alive as well. But your neighbors are not. Only the neighbors of really alive. So there's really alive kind of alive and then there is dead. Dead, the meaning of dead here at the gray ones, they won't become part of the pattern, part of the loss. They are dead. What will this get you initially? Here is an animation. If they train this just like that, just backprop through time with the target pattern and then they let it run. You see these patterns actually emerge. So that's pretty cool. But then if you let them run for a longer than they've been trained, you basically have no guarantees on what's going to happen. These update rules are simply trained to achieve the pattern within a certain number of steps. If you run for more than that and apply the update rules for longer than that, there's little guarantee what's going to happen. These update rules will simply continue as you can see here and produce some weird stuff. So they are trying to fix this. So what they do is basically they train for longer. But they do it in a kind of different way. So at each step of training and a step, I mean a batch over these number of time steps. So they sample a batch. Initially it's just all black pixels. Let's see how about. And then they optimize for these number of time steps and then they're at the end. So what they do is they don't always start from the black pixel. But sometimes they also start from a previously seen end state. So basically they take the end state of a previous training run and then they just continue from that instead of starting from the initial point. And you see after some training, they get better and better. So initially you see the thing on the left here. The thing on the left here being a starting state and then it progressively gets better. So basically by starting from end states of other things, you learn to. So if the end state of the other thing isn't very good, you basically learn to go to the good pattern to the pattern you want. But of course over time there is going to be more and more of these end states that you train from that are already pretty close to the pattern you want. And so then what that means is you learn to reproduce the pattern. So you are already at a good point. You learn to stay at that good point. And then that enables you to basically learn update rules that if you're not at the pattern you want, they go towards the pattern you want. But also if you run for longer, if you're already or at the pattern you want, then you stay at the pattern you want. So that's what we basically saw in the very initial demonstration where you could, this is a live demonstration like this thing up here. This is a live, this is running. And you see the update rules stated they are continuously applied. They basically stay at the pattern where they are. 
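The "start from previous end states" protocol can be sketched as a small sample-pool loop. The pool and batch sizes are assumptions, and `rollout_and_update` is a stand-in for the actual run-the-CA / compute-loss / backprop-through-time / optimizer step, which is not spelled out here.

```python
import numpy as np

POOL_SIZE, BATCH_SIZE, H, W, C = 1024, 8, 64, 64, 16   # sizes are assumptions

def seed_state():
    s = np.zeros((H, W, C), dtype=np.float32)
    s[H // 2, W // 2, 3:] = 1.0                          # one living cell in the middle
    return s

pool = np.stack([seed_state() for _ in range(POOL_SIZE)])

def training_iteration(pool, rollout_and_update):
    """rollout_and_update(batch) stands in for: run the CA for some number of steps,
    compute the loss against the target image, backprop through time, apply the
    optimizer, and return the final states of the rollout."""
    idx = np.random.choice(POOL_SIZE, BATCH_SIZE, replace=False)
    batch = pool[idx].copy()
    batch[0] = seed_state()                  # keep re-injecting the plain single-pixel seed
    final_states = rollout_and_update(batch)
    pool[idx] = final_states                 # end states go back into the pool for later runs
    return pool

# Trivial smoke test with a do-nothing rollout:
pool = training_iteration(pool, lambda batch: batch)
```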
And that is also that is learned because of this protocol that you train from end states as well as from beginning states. So the next thing is what I'm doing here is I can destroy part of the pattern and it will kind of regrow right? You see that here. So this is also a part. So for now we've also only learned to go from a single pixel like here from a black pixel to the pattern. But now we also want to learn to go to regrow when destroyed because that is this is you can see this is modeled after kind of live tissue. So here you can see the parts are cut away and then the cells try to regrow. So this is I think initially initially when you just train them they exhibit some of that property but not like very satisfying in some cases. So what they do is they train not only do they use end states like we saw before but also some of their training samples are simply the pattern destroyed a bit. So as you can see in some of these samples like these here they in each sample they kind of cut out part of the sample and they train the update rules to regrow that part. That gives you now gives you the ability to if you damage to pretty consistently regrow the pattern as you can see here. And they also train for rotation which is non-trivial if you have these kind of pixel based pixel based models but I want to jump up because I want to keep it kind of short here. So the entire goal of this is to kind of model the behavior of natural cells because the natural cells they don't have an overarching view they only have the view of their neighbors right and they are able to grow into very complex structures. I invite you to give this a try. The distilled.pop journal is very cool it's very interactive you can play around with it you can reproduce things in a collab and yeah shout out to the authors here Alexander Morvinsev, Etoora, Eradazzo, sorry, Randazzo, Evan Nicholson and Michael Levin. Yep that was it for me thanks for watching and bye bye. | [{"start": 0.0, "end": 8.24, "text": " Hi there. Today I thought we would be looking at growing neural cellular automata, which"}, {"start": 8.24, "end": 16.16, "text": " is an article on the still.pub, which I found pretty neat. So this is kind of an interactive"}, {"start": 16.16, "end": 23.04, "text": " article. If you don't know the still.pub, check it out. It is a cool new concept as an alternative"}, {"start": 23.04, "end": 32.08, "text": " to the classical journals or the conference system. So what it allows you to do is to kind of write"}, {"start": 32.08, "end": 43.2, "text": " articles that are a bit more interactive, a bit more engaging and don't have the, like there's no"}, {"start": 43.2, "end": 49.6, "text": " PDFs, there's no pages, there are animations and so on. So I thought we'd be looking at this article"}, {"start": 49.6, "end": 56.480000000000004, "text": " today, which is kind of growing neural cellular automata. So if you don't know what cellular"}, {"start": 56.480000000000004, "end": 63.92, "text": " automata are, this is a very kind of old concept. The most famous one is called the Game of Life,"}, {"start": 63.92, "end": 70.16, "text": " where you have these cells. Here you can see every pixel is a cell and they follow some kind of"}, {"start": 70.16, "end": 76.4, "text": " update rule. And usually it's the update rule, something like if my neighbor is alive,"}, {"start": 76.4, "end": 82.4, "text": " I'm going to be alive as well in the next time step. 
And if, or if enough neighbors are alive,"}, {"start": 82.4, "end": 87.36000000000001, "text": " and if only there are few neighbors are alive, I'm going to die. So this gives rise to these kind"}, {"start": 87.36000000000001, "end": 93.60000000000001, "text": " of patterns. And here is the same as done with color. And the update rules are a bit more complicated."}, {"start": 94.4, "end": 104.08000000000001, "text": " So basically, here, a traveler, oh, nice. Okay. So in the Game of Life, if you play it,"}, {"start": 104.08, "end": 109.44, "text": " the most prestigious thing to get is are these kind of travelers. I have not,"}, {"start": 111.03999999999999, "end": 117.03999999999999, "text": " this is the first time I've managed to do this and this thing. So what does it do? So each pixel"}, {"start": 117.03999999999999, "end": 124.72, "text": " here is kind of an autonomous thing that is only allowed to look at its neighbors in order to"}, {"start": 124.72, "end": 129.52, "text": " decide whether or not in the next time step it is going to be alive. Look, it's like incorporating"}, {"start": 129.52, "end": 139.12, "text": " again. So each cell looks at its neighbors and then decides what its next state will be. And here"}, {"start": 139.12, "end": 147.52, "text": " it's not only alive or dead. Dead would be white and alive would be anything else. But it is also,"}, {"start": 148.16000000000003, "end": 154.0, "text": " I guess this white is, it is also the color. So each cell decides on what color it should have."}, {"start": 154.0, "end": 162.56, "text": " And then this is a live thing. So it kind of reproduces. You can see if I start it new,"}, {"start": 163.12, "end": 169.76, "text": " if you double click here, it grows from somewhere else. And this is completely local. So these cells"}, {"start": 169.76, "end": 175.04, "text": " really only look at their neighbors. That's the special part. They don't look at the global"}, {"start": 175.04, "end": 179.92000000000002, "text": " structures. Not like again, they can look at the entire picture and decide what's still missing."}, {"start": 179.92, "end": 187.92, "text": " And what these can also do, if you destroy part of it, they can kind of grow back just again,"}, {"start": 187.92, "end": 193.83999999999997, "text": " just out of local update rules at the level of the individual cells and their neighbors. They're"}, {"start": 194.39999999999998, "end": 200.79999999999998, "text": " trained to do these big structures. So let's look at how they do it. So basically here's how they"}, {"start": 200.8, "end": 212.56, "text": " model a cell. And let's go over here. So each cell, as I said, is made up of 16 channels. And here"}, {"start": 212.56, "end": 219.76000000000002, "text": " it's modeled as three by three. But I think each cell is really one pixel. And each cell is"}, {"start": 219.76000000000002, "end": 226.88000000000002, "text": " allowed to look at its eight neighbors. Right. So each cell is allowed to look at its eight neighbors"}, {"start": 226.88, "end": 235.6, "text": " across 16 different channels. And the 16 channels here mean the first three are RGB. So this is"}, {"start": 235.6, "end": 241.92, "text": " actually color that is seen. Then there is an alive or dead channel. So what they call an alpha"}, {"start": 241.92, "end": 249.35999999999999, "text": " channel. So if this channel is high, the cell is considered alive. 
Otherwise, it is considered dead"}, {"start": 249.35999999999999, "end": 256.08, "text": " and not part of the pattern. So a cell can come alive or die depending on its neighbors. And then"}, {"start": 256.08, "end": 263.59999999999997, "text": " the rest, the rest 12 channels are what they call hidden channels. So the cell is allowed to encode"}, {"start": 263.59999999999997, "end": 270.0, "text": " some hidden state there. So there's each cell is represented by the 16-dimensional vector,"}, {"start": 270.0, "end": 276.24, "text": " which is not much right. And then each cell is allowed to look at three things. So from the bottom"}, {"start": 276.24, "end": 283.59999999999997, "text": " here, it's allowed to look at its own state, so at its own 16-dimensional vectors. And it is"}, {"start": 283.6, "end": 289.76000000000005, "text": " allowed to look at its neighbors. And it does this by doing a convolution with a sobel filter."}, {"start": 289.76000000000005, "end": 295.6, "text": " And the sobel filter is simply a fixed filter that you do a three by three convolution with."}, {"start": 295.6, "end": 304.40000000000003, "text": " As you can see here is basically a gradient filter. So basically measures the difference between"}, {"start": 305.12, "end": 309.04, "text": " what's to the left of the cell and what's to the right of the cell. And here in the sobel"}, {"start": 309.04, "end": 316.24, "text": " y direction, the same in the y direction. So it's basically allowed to look at gradients in states"}, {"start": 316.24, "end": 324.8, "text": " of its neighbors. This is modeled after real cells kind of looking at chemical gradients in their"}, {"start": 324.8, "end": 332.48, "text": " neighborhoods. So this is all this is all that the cell has to decide what it's supposed to do next."}, {"start": 332.48, "end": 338.64000000000004, "text": " Right. And what we want is we want that each individual cell only looking in its neighbors"}, {"start": 338.64, "end": 343.52, "text": " produces in total they will produce these kind of very complex patterns."}, {"start": 344.8, "end": 349.84, "text": " So the up to draw list of following you convoluted with the sobel filters and you take the cell"}, {"start": 349.84, "end": 356.24, "text": " identity. You put this all into a vector, you put it through a very very small neural network."}, {"start": 356.24, "end": 362.71999999999997, "text": " So this is one dense layer, one relu, and then another dense layer to get the next 16-dimensional"}, {"start": 362.71999999999997, "end": 367.44, "text": " vector, which is the next state. And that defines your update rules. That doesn't really define"}, {"start": 367.44, "end": 373.6, "text": " the next state that defines the delta to the next state, kind of like a residual neural network."}, {"start": 374.32, "end": 380.0, "text": " So basically which cells need to come alive in the next time step which cells need to die and how"}, {"start": 380.0, "end": 388.4, "text": " are they to change their colors. Right. And then you get the output of the next step. Right. So"}, {"start": 388.4, "end": 395.6, "text": " that's that's basically the entire thing. So all that is learned here is the update rule of the"}, {"start": 395.6, "end": 400.88, "text": " neural network. Right. So basically the neural network decides it looks at a cell and its neighbors"}, {"start": 400.88, "end": 407.36, "text": " and decides what the information and the cell in the next step should be. Right. 
And you do this"}, {"start": 407.36, "end": 412.72, "text": " for multiple time steps. That's I want to actually want to go down here. You do this for multiple"}, {"start": 412.72, "end": 418.16, "text": " time steps. The initial state is simply one cell that is alive here in the middle. Everything else"}, {"start": 418.16, "end": 425.20000000000005, "text": " is dead. This cell is alive and black. You do this for many steps. Right. And then at some point you"}, {"start": 425.2, "end": 431.44, "text": " get an output and you compare the output to your desired output. You compute a loss that is"}, {"start": 431.44, "end": 437.12, "text": " differentiable and because your update rule is differentiable and your loss is differentiable,"}, {"start": 437.12, "end": 446.0, "text": " you can backprop through time to the original to the original pattern here and you can basically"}, {"start": 446.0, "end": 452.8, "text": " learn this update rule by backproping through time. This is a bit like an LSTM and if you see in"}, {"start": 452.8, "end": 458.72, "text": " the architecture here, I think this residual connection is really the key to making this work"}, {"start": 458.72, "end": 465.36, "text": " over time because usually I would not expect something like this to easily emerge over time"}, {"start": 465.36, "end": 470.48, "text": " because you have the problem of vanishing and exploding gradients and you have no way of mitigating"}, {"start": 470.48, "end": 481.28000000000003, "text": " this problem here in this simple neural network. But in any case they backprop through time here."}, {"start": 481.28, "end": 488.47999999999996, "text": " So each of these update steps which again this isn't one neural network with many layers. This"}, {"start": 488.47999999999996, "end": 494.23999999999995, "text": " is the same neural network applied over and over and over and over again and then there is a loss"}, {"start": 494.23999999999995, "end": 500.08, "text": " computed. So basically the gradients will accumulate over these steps. Basically tell the network what"}, {"start": 500.08, "end": 508.08, "text": " it needs to adjust to go from this one single black pixel to this final desired state. If you do this"}, {"start": 508.08, "end": 516.64, "text": " over and over again, you learn things. You learn a update rule that will give rise to that pattern."}, {"start": 516.64, "end": 524.0799999999999, "text": " Hopefully. Now here is a kind of an illustration of this alive and dead thing. So what they do is they"}, {"start": 524.0799999999999, "end": 530.64, "text": " consider cells that have an alpha channel. This one of these channels called alpha. They have an alpha"}, {"start": 530.64, "end": 542.56, "text": " channel above 0.1. It's considered alive and part of the loss. Then the neighbors of these cells"}, {"start": 542.56, "end": 551.1999999999999, "text": " that are below 0.1 but are neighboring a cell that is mature alive. They're called growing."}, {"start": 551.1999999999999, "end": 557.04, "text": " They're also part of the loss. So simply by being close to someone that is alive, a cell that"}, {"start": 557.04, "end": 564.8, "text": " is alive, you are considered alive as well. But your neighbors are not. Only the neighbors of"}, {"start": 564.8, "end": 571.36, "text": " really alive. So there's really alive kind of alive and then there is dead. Dead, the meaning of"}, {"start": 571.36, "end": 579.04, "text": " dead here at the gray ones, they won't become part of the pattern, part of the loss. 
They are dead."}, {"start": 579.04, "end": 592.3199999999999, "text": " What will this get you initially? Here is an animation. If they train this just like that,"}, {"start": 592.3199999999999, "end": 597.76, "text": " just backprop through time with the target pattern and then they let it run. You see these patterns"}, {"start": 597.76, "end": 602.88, "text": " actually emerge. So that's pretty cool. But then if you let them run for a longer than they've"}, {"start": 602.88, "end": 609.2, "text": " been trained, you basically have no guarantees on what's going to happen. These update rules are"}, {"start": 609.2, "end": 616.24, "text": " simply trained to achieve the pattern within a certain number of steps. If you run for more than"}, {"start": 616.24, "end": 625.2, "text": " that and apply the update rules for longer than that, there's little guarantee what's going to"}, {"start": 625.2, "end": 631.36, "text": " happen. These update rules will simply continue as you can see here and produce some weird stuff."}, {"start": 631.36, "end": 638.32, "text": " So they are trying to fix this. So what they do is basically they train for longer. But they do it"}, {"start": 638.32, "end": 649.28, "text": " in a kind of different way. So at each step of training and a step, I mean a batch over these"}, {"start": 649.28, "end": 656.5600000000001, "text": " number of time steps. So they sample a batch. Initially it's just all black pixels. Let's see"}, {"start": 656.56, "end": 663.28, "text": " how about. And then they optimize for these number of time steps and then they're at the end."}, {"start": 663.28, "end": 669.3599999999999, "text": " So what they do is they don't always start from the black pixel. But sometimes they also start"}, {"start": 670.0799999999999, "end": 679.8399999999999, "text": " from a previously seen end state. So basically they take the end state of a previous training run"}, {"start": 679.8399999999999, "end": 685.1199999999999, "text": " and then they just continue from that instead of starting from the initial point."}, {"start": 685.12, "end": 694.32, "text": " And you see after some training, they get better and better. So initially you see the thing on the left"}, {"start": 694.32, "end": 703.12, "text": " here. The thing on the left here being a starting state and then it progressively gets better. So"}, {"start": 703.12, "end": 712.88, "text": " basically by starting from end states of other things, you learn to. So if the end state of the"}, {"start": 712.88, "end": 719.6, "text": " other thing isn't very good, you basically learn to go to the good pattern to the pattern you want."}, {"start": 719.6, "end": 725.4399999999999, "text": " But of course over time there is going to be more and more of these end states that you train from"}, {"start": 725.4399999999999, "end": 732.8, "text": " that are already pretty close to the pattern you want. And so then what that means is you learn"}, {"start": 732.8, "end": 739.2, "text": " to reproduce the pattern. So you are already at a good point. You learn to stay at that good point."}, {"start": 739.2, "end": 747.2800000000001, "text": " And then that enables you to basically learn update rules that if you're not at the pattern you"}, {"start": 747.2800000000001, "end": 752.96, "text": " want, they go towards the pattern you want. But also if you run for longer, if you're already"}, {"start": 752.96, "end": 760.32, "text": " or at the pattern you want, then you stay at the pattern you want. 
So that's what we basically saw"}, {"start": 760.88, "end": 766.08, "text": " in the very initial demonstration where you could, this is a live demonstration like this thing"}, {"start": 766.08, "end": 773.6800000000001, "text": " up here. This is a live, this is running. And you see the update rules stated they are continuously"}, {"start": 773.6800000000001, "end": 779.44, "text": " applied. They basically stay at the pattern where they are. And that is also that is learned because"}, {"start": 779.44, "end": 784.24, "text": " of this protocol that you train from end states as well as from beginning states."}, {"start": 786.88, "end": 793.9200000000001, "text": " So the next thing is what I'm doing here is I can destroy part of the pattern and it will"}, {"start": 793.92, "end": 801.28, "text": " kind of regrow right? You see that here. So this is also a part. So for now we've also only learned"}, {"start": 801.28, "end": 807.28, "text": " to go from a single pixel like here from a black pixel to the pattern. But now we also want to learn"}, {"start": 807.28, "end": 815.04, "text": " to go to regrow when destroyed because that is this is you can see this is modeled after kind of"}, {"start": 815.04, "end": 828.7199999999999, "text": " live tissue. So here you can see the parts are cut away and then the cells try to regrow. So this"}, {"start": 828.7199999999999, "end": 839.68, "text": " is I think initially initially when you just train them they exhibit some of that property but not"}, {"start": 839.68, "end": 847.52, "text": " like very satisfying in some cases. So what they do is they train not only do they use end states"}, {"start": 847.52, "end": 855.68, "text": " like we saw before but also some of their training samples are simply the pattern destroyed a bit."}, {"start": 855.68, "end": 862.7199999999999, "text": " So as you can see in some of these samples like these here they in each sample they kind of cut out"}, {"start": 862.72, "end": 872.88, "text": " part of the sample and they train the update rules to regrow that part. That gives you now gives you"}, {"start": 872.88, "end": 879.76, "text": " the ability to if you damage to pretty consistently regrow the pattern as you can see here."}, {"start": 882.4, "end": 888.32, "text": " And they also train for rotation which is non-trivial if you have these kind of pixel based"}, {"start": 888.32, "end": 895.6, "text": " pixel based models but I want to jump up because I want to keep it kind of short here."}, {"start": 897.6800000000001, "end": 905.36, "text": " So the entire goal of this is to kind of model the behavior of natural cells because the natural"}, {"start": 905.36, "end": 911.5200000000001, "text": " cells they don't have an overarching view they only have the view of their neighbors right and they"}, {"start": 911.52, "end": 919.76, "text": " are able to grow into very complex structures. I invite you to give this a try. The distilled.pop"}, {"start": 919.76, "end": 925.28, "text": " journal is very cool it's very interactive you can play around with it you can reproduce things"}, {"start": 925.28, "end": 934.8, "text": " in a collab and yeah shout out to the authors here Alexander Morvinsev, Etoora, Eradazzo,"}, {"start": 934.8, "end": 947.5999999999999, "text": " sorry, Randazzo, Evan Nicholson and Michael Levin. Yep that was it for me thanks for watching and bye bye."}] |
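Before moving on to the next video: the damage-and-regrow training described in the cellular-automata transcript above can be sketched roughly as below. The circular cut-out shape and the choice of how many samples per batch get damaged are assumptions, not details taken from the article.

```python
import numpy as np

def cut_circle(state, rng, radius_frac=0.25):
    """Zero out a random disc of the pattern so the CA has to regrow it."""
    h, w, _ = state.shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    r = int(min(h, w) * radius_frac)
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    damaged = state.copy()
    damaged[mask] = 0.0                      # wipe every channel inside the disc
    return damaged

def damage_some_samples(batch, rng, n_damaged=3):
    """Damage the last few samples of a training batch before the rollout."""
    batch = batch.copy()
    for i in range(1, n_damaged + 1):
        batch[-i] = cut_circle(batch[-i], rng)
    return batch

rng = np.random.default_rng(0)
batch = np.random.rand(8, 64, 64, 16).astype(np.float32)   # placeholder batch of pool states
batch = damage_some_samples(batch, rng)
```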
Yannic Kilcher | https://www.youtube.com/watch?v=tC01FRB0M7w | Turing-NLG, DeepSpeed and the ZeRO optimizer | Microsoft has trained a 17-billion-parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO lets you combine model and data parallelism without a large hit to training speed.
https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/
https://www.microsoft.com/en-us/research/blog/zero-deepspeed-new-system-optimizations-enable-training-models-with-over-100-billion-parameters/
https://github.com/microsoft/DeepSpeed
https://arxiv.org/abs/1910.02054
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi everyone, today we're going to look at Turing NLGA 17 billion parameter language model by Microsoft, the latest and greatest of language modeling by Microsoft. What is this? It is a language model. A language model is basically a model that learns to produce language given language. So if you start a sentence, it's supposed to finish a sentence. If you start a paragraph, it's supposed to finish the paragraph. That's a language model. Ultimately, you can make it do different things like answer questions, have a conversation with you, anything to do with understanding language. The special thing about this one is that it's ginormous. So if you look at the scale of kind of language models, so Bert was quite a large thing in its back in its day, ye old the Bert, you can see here it has about 340 million parameters. Now I have to say all of these language models are transformers. This is kind of the state of the art today. So all of these are kind of our transformer based models. Then GPT2 here, you can see that was the model that was so large, it was too dangerous to be released into the world. That stands at 1.5 billion parameters. Megatron LMBINE video, 8.3 billion and now we are at 17 billion parameters for this language model. And it is a bit better. People just throw more and more and more resources at this language problem. And it really, yeah. So what you can do with it, you can of course do language modeling. So what happens is you take a bunch of text like all of Wikipedia and all of the internet and all of Reddit and so on. And you let the model train on it to understand to basically produce that sort of language. And then you can measure it, for example, it's a perplexity on a validation set. And the nurturing NLG is currently state of the art on that. It can also do, for example, question answering. So you can ask the question and give it a passage about that question. And it will then tell you the answer that it deduced from that passage given the question, as you can see here. What is more interesting is that a usual QA system will point to the passage. So it will point to the words Tristan Prediman. Whereas with a generative model like this one, what you can do is you can make it actually output and answer as a sentence. So it will generate the text Jason Bras was engaged to Tristan Prediman. Sorry, Prediman. If you ask a question without giving it a context and just ask it to generate an answer, it will do so as well. I don't know if these answers are cherry picked, but they call this zero-shot question answering. So if you ask when the World War II end and it can output World War II ended in 1945, simply out of regularities it detected in the training data. So that's what I'm wondering at what point are these models? Do they have so many parameters that they simply reproduce the training data? This clearly, some article from the training data is about World War II or many are. And it simply learned that following a question when the World War II end, it needs to answer the appropriate passage. I'm not sure that is a proper measure of language understanding if you simply can bake more and more of the training data into these many, many parameters. But I'm not the judge of that here. It can do it very well. So what I'm actually more interested in is this thing is called the zero optimizer that they used to train the model. So the model is just a transformer. It's just a big, big transformer model. 
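As a small aside on the perplexity metric mentioned above: it is just the exponentiated average per-token negative log-likelihood on held-out text, so lower is better. The token losses below are made-up numbers for illustration only.

```python
import math

token_nll = [3.1, 2.4, 0.9, 4.2, 1.7]            # per-token negative log-likelihood in nats
perplexity = math.exp(sum(token_nll) / len(token_nll))
print(f"perplexity = {perplexity:.2f}")           # the quantity Turing-NLG is state of the art on
```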
There is nothing really special about the model except that it is larger than the last model and therefore a bit better. What is interesting is that this would have been pretty impossible to train if it weren't for this zero optimizer of this deep speed library. And Microsoft has released this deep speed library. It's compatible for now with PyTorch. You can check this out. I'll put a link into the description. And I want to dive into this a bit. So there's a paper. It's by Samyam Rajbondari. And I'll end by all by Microsoft. Paper describes in detail the optimizer, but it's not very visual. That's why we're going to the blog post. You can see it gives many speed ups over usual, over the previous Megatron LM model that Nvidia just trained using kind of just what Nvidia has. Nvidia has machines that are interconnected within the machine with very fast, very fast buses between GPUs. But this zero optimizer can now also go over the network and make it pretty fast. So let's explore that a bit. I have the copy that's here. So we'll look at zero optimizer works. So usually what you do is you have multiple GPUs. You can do something like this. And this is called data parallelism. What you have is you have a model. And the model in this case fits on your GPU. It fits on a single GPU. So the blue thing here is the model. I'll actually draw this. So the model, let's say, is a somehow neural network. So it has a bunch of layers, layer, layer, layer, layer. And what you want to do is you pass data forward. Here is some loss and then right into the loss function and then backward again. That's basically what you need to do. You need to pass it forward and backward in order to do back propagation training. If this all fits into one box, that's completely fine. If this fits into one machine, cool. We can just put many batches of data through batch one, batch two, batch three, and so on. Train the model. If you want to do a speed up using this, you can do so. If you have lots of data, you can do what's called, and I'm always confused. I think this is called data parallelism. Or is it called model parallelism? Because in any case, what you can do is you can take a second machine or many of those. Replicate the model. Now these models, these two models here are exactly the same. Right? But what you do is you take your data and you split it up. Basically, you take double the amount of data and you put one batch of data through the top part and you put the other through the bottom part. And you do your forward passes on the machines and you do your backward passes. And then what you want to do is you want to sink between the machines, what they learned from the data. So each machine has a different set of data points. Each machine calculates its own up parameter updates. It learns from the data it has. And then they communicate to keep because this here and this here should be the same. It's the same model. So they have to keep in sync. Now this can be done fairly efficiently, especially if these aren't actually two machines, but just two GPUs inside of one large machine. So if this is a large machine, this is GPU zero and this is GPU one. This is pretty standard because especially on Nvidia machines, they have these, whatever I think they call them, infinity band or so or is that. So Nvidia has these connectors that connects the GPUs together really fast. So you can keep these in sync. But now the problem becomes what if you want to train a model that is larger than this. 
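Before moving on to the too-large-model case, here is a bare-bones sketch of the data-parallel gradient sync just described. `grad_fn`, the sizes, and the plain-SGD update are stand-ins; the point is only that every replica averages gradients so all copies stay identical.

```python
import numpy as np

N_WORKERS, N_PARAMS, LR = 2, 10, 0.1
params = np.random.randn(N_PARAMS)                       # identical copy on every worker

def grad_fn(params, batch):                              # stand-in for forward + backward
    return params - batch.mean(axis=0)

batches = [np.random.randn(32, N_PARAMS) for _ in range(N_WORKERS)]
local_grads = [grad_fn(params, b) for b in batches]      # computed independently per worker

avg_grad = np.mean(local_grads, axis=0)                  # the sync step (an all-reduce in practice)
params = params - LR * avg_grad                          # every replica applies the same update
```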
So let's forget about the data parallelism for now if that is what is called and just consider a model that is too large. So a model that is too large will not fit into a machine. So this is a model as a large model. So what you want to do is you want to pack some of the model onto your first machine and then take the other part of the model. Sorry. And pack it onto another machine. You separate the model and put it on different machines. Now if you have a batch of data, what you have to do is you pass it, pass it, pass it, forward propagate as you regularly would, but then you have an intermediate result. You send that to the next machine and you forward propagate that at the end here you have a loss, right? You want to back propagate regularly through this machine. You have an intermediate result of back propagation. Send it over the network and back prop all the way through the model. So that's how you can train a model that is too large for one machine if you have multiple machines. The problem here, of course, is this part. Just as you had to keep and sync the model before, now your communication problem becomes one of... You have to send the intermediate stages to that model and you have to send the intermediate stage of the back propagation back to that part of the model. And while this part is working, this part is kind of idling away and the network overhead, sorry, is just very costly. Actually if your model is so large, you can't even fit into one of these single boxes, right? So this is very problematic here. It's still doable. But what the zero optimizer does is it does both data and model parallelism. So it can train models that are too large for a single machine. And it can do data parallelism at the same time, such that basically everything is working all the time. There is no wasted or much wasted computation. The communication is efficient and so on. So it's really a technical achievement. It's not so much an scientific advance, it's really a technical achievement, this optimizer. And we'll shortly go through. There is a kind of an animation, but it's on the website, but it's super slow. And I think this might be the first time that I will be faster at explaining something than a video. All right. Let's see here. Cool. So what do you do is let's just consider these three GPUs. Before that, it would all fit on one machine. And now let's say you don't actually have that much memory. You don't have these giant empty blocks here. You just have a bit of that. So you have to split your model. The blue parts here are your model. These are model parameters. The orange parts here is memory you need to store gradients. You need as many gradients as you have model parameters. Because you do gradient descent. The green stuff here are what's called optimizer parameters. Now if you just have SGD, these would be nonexistent. But if you have something like add a grad or add them, they have additional parameters for each model parameter that they need to keep track of. So these are stored here. And there can be significant overhead. There's also like a floating point 32, 16 conversion going on here. Don't want to go into that. So you split your model onto these three machines. Let's say that's your entire model. Your model is six blocks wide. And you need to forward propagate now through everything. All right. So here is what zero does. And I think it's pretty cool. So what we need to do is we have these three different batches of data. 
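A quick back-of-the-envelope aside on why those optimizer-state blocks matter so much: with mixed-precision Adam, the ZeRO paper counts roughly 16 bytes of state per parameter, which for a 17-billion-parameter model is far more than any single GPU holds.

```python
n_params = 17e9                          # Turing-NLG's parameter count
bytes_per_param = 2 + 2 + 4 + 4 + 4      # fp16 weights + fp16 grads + fp32 master/momentum/variance
total_gb = n_params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB of model, gradient and optimizer state")   # roughly 272 GB
```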
And we want to forward propagate them all through the model, through the same model at the same time as if the model were actually stored on all these machines. Like if all of these machines had the entire model. And we have can do a bit of communication. So what we do first is this one's easy, right? Data zero through the first two layers here is easy, right? Because we have them, right? So bang, you go through the first, you get an intermediate result here and here, right? Okay. How do we propagate data one through this through the first layer? We can't send data one here. That would be too expensive, right? And that's the whole point would be lost, right? We want to actually compute data one on this GPU at the same time. What we do is before we start, we actually communicate these two blocks here, two GPU one, right? We send these parameters around and fill them in here. I'm actually making some blue, right? We send them here and we also send them here, right? We send the parameters to all the machines, right? And we can actually forward prop data one through this and data three through this, right? So we can do forward prop after we've communicated all the GPUs can be working. Same with layer two, right? So layer two simply can send it's these two here. You can see that these two here to the other machines, right? Now while it's doing that, we've already propagated through the first layer. So we've already propagated here and here through the first layer. So we can actually delete these again, right? We can delete these first layer parameters that we sent around again, right? So here's, you see how we can save memory. We don't keep all the modeling sync on all the machines. We send whatever we need on the other machines and then once the computation is done, they can delete it again, right? Because there's always one machine, right? This one here for the middle parameters that keeps track of the parameters and that can at any point if they're needed, send them again. So that's the big kind of catch. So you can forward prop nav through these two, right? They hear there are already present. And then you can delete those again on the machines where they're not natively stored. And you can send from here, you can send those two also up here, you can send those two. And forward prop your model through to the end, right? Dada, dada, dada, oops. Yeah, that was a mistake. They should be here. And each machine calculates its own loss, right? Loss, loss, loss. And the backward propagation happens in much the same way. So as you can, if you follow so far, you can already imagine, right? What I can do is, so now this is a loss, the loss is different because there's a different batch of data going through each machine, right? So there's a different batch of data going through each machine, but each machine has computed with the same model due to the communication of the zero optimizer. So that's pretty cool. So you get the benefits of data parallelism, right? Lots of data on the different machines. And you also split up the model across the machines, but you don't actually store the model on any of these machines. You don't store the model. You only send, right, from here you send as you need and then you delete again, right? And for the backward propagation, same thing, right? You backward propagation, you calculate gradients, now you calculate gradients here and you send the gradients as needed to the other machines, right? 
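Pausing here for a moment, the forward-pass pattern just described (the owner broadcasts a layer's weights, everyone computes on its own batch, non-owners drop the temporary copy) might look roughly like this single-process sketch. The layer sizes, the ReLU standing in for a real layer, and all names are assumptions.

```python
import numpy as np

N_WORKERS = 3
weights = {0: np.random.randn(16, 16), 1: np.random.randn(16, 16), 2: np.random.randn(16, 16)}
owner = {0: 0, 1: 1, 2: 2}                               # which worker permanently stores which layer

activations = [np.random.randn(4, 16) for _ in range(N_WORKERS)]   # a different mini-batch per worker

for layer_id in sorted(weights):
    w = weights[layer_id]                                # "broadcast": everyone gets a temporary copy
    for worker in range(N_WORKERS):                      # all workers run the same layer on their own data
        activations[worker] = np.maximum(activations[worker] @ w, 0.0)
    # Non-owners would now free their temporary copy of w; only owner[layer_id] keeps it,
    # so no worker ever needs to hold the entire model at once.
```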
You calculate gradients here and here and you send them to the machine where they're actually needed. This is a weird pen, sorry. You send them to that machine. That machine will aggregate all the gradients of all the machines. What is up with this pen? Okay. It will aggregate them and then locally it can compute using these optimizer parameters and so on. All kinds of optimization locally because it has gathered gradients from all the other data. So what you end up with, for example, GPU 2 here, for these two layers, it has effectively broadcast the layers such that much, much more data than it just had itself could run through the layers. It has aggregated gradients from all of that data and now it can use all of these gradients together to make a good update using the optimizer parameters to make a good update to these model parameters. And then in the next iteration, it can go ahead and broadcast the model parameters, the new model parameters again. So it is able to compute with much more data than it can just fit by itself and it is just doing its part. So zero and deep speed, so zero is the, I guess the protocol and deep speed is the actual library. They will do all of this communication and splitting and so on for you over the network in a way that is efficient in a way that everything runs at the same time and the communication overhead is minimal. And you can actually choose which stage you want. So what your trade off of communication and memory saving will be. So this is extremely cool. They say this goes up to whatever 100 billion parameter models if you use item. This is something for you know your average call abuser. This is really something for the for big players. But that being said, I don't think language is solved by simply throwing more parameters at it. I think we're still a bit of a breakthrough. We're still a bit of a breakthrough ahead yet to come in language understanding with newer model architectures. Alright that was it for me. Thanks. 
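To round the explanation off, the backward half can be sketched the same way: every worker produces gradients from its own batch, but only the layer's owner aggregates them, holds the optimizer state, and applies the update. A momentum step stands in for Adam here, and all names and sizes are assumptions.

```python
import numpy as np

N_WORKERS, LR, BETA = 3, 0.01, 0.9
owner = {0: 0, 1: 1, 2: 2}
weights = {lid: np.random.randn(16, 16) for lid in owner}        # each shard lives on its owner only
momentum = {lid: np.zeros((16, 16)) for lid in owner}            # optimizer state, also owner-only

# Pretend every worker produced a gradient for every layer from its own mini-batch:
local_grads = {w: {lid: np.random.randn(16, 16) for lid in owner} for w in range(N_WORKERS)}

for lid in owner:
    # "reduce": the owning worker averages the gradients coming from all workers
    g = np.mean([local_grads[w][lid] for w in range(N_WORKERS)], axis=0)
    momentum[lid] = BETA * momentum[lid] + g                     # state update happens only on the owner
    weights[lid] -= LR * momentum[lid]                           # owner updates its shard and will
    # re-broadcast the fresh weights the next time this layer is needed in a forward pass.
```

In practice all of this partitioning, communication and bookkeeping is handled by the DeepSpeed library rather than written by hand.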
| [{"start": 0.0, "end": 7.4, "text": " Hi everyone, today we're going to look at Turing NLGA 17 billion parameter language model"}, {"start": 7.4, "end": 15.56, "text": " by Microsoft, the latest and greatest of language modeling by Microsoft."}, {"start": 15.56, "end": 16.56, "text": " What is this?"}, {"start": 16.56, "end": 17.56, "text": " It is a language model."}, {"start": 17.56, "end": 24.84, "text": " A language model is basically a model that learns to produce language given language."}, {"start": 24.84, "end": 27.64, "text": " So if you start a sentence, it's supposed to finish a sentence."}, {"start": 27.64, "end": 30.64, "text": " If you start a paragraph, it's supposed to finish the paragraph."}, {"start": 30.64, "end": 32.28, "text": " That's a language model."}, {"start": 32.28, "end": 37.120000000000005, "text": " Ultimately, you can make it do different things like answer questions, have a conversation"}, {"start": 37.120000000000005, "end": 40.2, "text": " with you, anything to do with understanding language."}, {"start": 40.2, "end": 44.120000000000005, "text": " The special thing about this one is that it's ginormous."}, {"start": 44.120000000000005, "end": 52.0, "text": " So if you look at the scale of kind of language models, so Bert was quite a large thing in"}, {"start": 52.0, "end": 60.8, "text": " its back in its day, ye old the Bert, you can see here it has about 340 million parameters."}, {"start": 60.8, "end": 64.96000000000001, "text": " Now I have to say all of these language models are transformers."}, {"start": 64.96000000000001, "end": 67.28, "text": " This is kind of the state of the art today."}, {"start": 67.28, "end": 73.2, "text": " So all of these are kind of our transformer based models."}, {"start": 73.2, "end": 79.68, "text": " Then GPT2 here, you can see that was the model that was so large, it was too dangerous"}, {"start": 79.68, "end": 83.36000000000001, "text": " to be released into the world."}, {"start": 83.36000000000001, "end": 86.44000000000001, "text": " That stands at 1.5 billion parameters."}, {"start": 86.44000000000001, "end": 94.32000000000001, "text": " Megatron LMBINE video, 8.3 billion and now we are at 17 billion parameters for this"}, {"start": 94.32000000000001, "end": 95.76, "text": " language model."}, {"start": 95.76, "end": 101.04, "text": " And it is a bit better."}, {"start": 101.04, "end": 107.32000000000001, "text": " People just throw more and more and more resources at this language problem."}, {"start": 107.32, "end": 109.96, "text": " And it really, yeah."}, {"start": 109.96, "end": 113.8, "text": " So what you can do with it, you can of course do language modeling."}, {"start": 113.8, "end": 119.39999999999999, "text": " So what happens is you take a bunch of text like all of Wikipedia and all of the internet"}, {"start": 119.39999999999999, "end": 122.0, "text": " and all of Reddit and so on."}, {"start": 122.0, "end": 130.04, "text": " And you let the model train on it to understand to basically produce that sort of language."}, {"start": 130.04, "end": 135.0, "text": " And then you can measure it, for example, it's a perplexity on a validation set."}, {"start": 135.0, "end": 142.48, "text": " And the nurturing NLG is currently state of the art on that."}, {"start": 142.48, "end": 144.84, "text": " It can also do, for example, question answering."}, {"start": 144.84, "end": 149.84, "text": " So you can ask the question and give it a passage about that question."}, {"start": 149.84, "end": 156.16, "text": " 
And it will then tell you the answer that it deduced from that passage given the question,"}, {"start": 156.16, "end": 157.56, "text": " as you can see here."}, {"start": 157.56, "end": 164.56, "text": " What is more interesting is that a usual QA system will point to the passage."}, {"start": 164.56, "end": 170.76, "text": " So it will point to the words Tristan Prediman."}, {"start": 170.76, "end": 177.48, "text": " Whereas with a generative model like this one, what you can do is you can make it actually"}, {"start": 177.48, "end": 180.48, "text": " output and answer as a sentence."}, {"start": 180.48, "end": 186.08, "text": " So it will generate the text Jason Bras was engaged to Tristan Prediman."}, {"start": 186.08, "end": 191.84, "text": " Sorry, Prediman."}, {"start": 191.84, "end": 198.96, "text": " If you ask a question without giving it a context and just ask it to generate an answer, it"}, {"start": 198.96, "end": 200.24, "text": " will do so as well."}, {"start": 200.24, "end": 205.64000000000001, "text": " I don't know if these answers are cherry picked, but they call this zero-shot question answering."}, {"start": 205.64000000000001, "end": 212.8, "text": " So if you ask when the World War II end and it can output World War II ended in 1945,"}, {"start": 212.8, "end": 217.96, "text": " simply out of regularities it detected in the training data."}, {"start": 217.96, "end": 224.72, "text": " So that's what I'm wondering at what point are these models?"}, {"start": 224.72, "end": 231.64000000000001, "text": " Do they have so many parameters that they simply reproduce the training data?"}, {"start": 231.64000000000001, "end": 239.04000000000002, "text": " This clearly, some article from the training data is about World War II or many are."}, {"start": 239.04000000000002, "end": 247.76000000000002, "text": " And it simply learned that following a question when the World War II end, it needs to answer"}, {"start": 247.76, "end": 249.39999999999998, "text": " the appropriate passage."}, {"start": 249.39999999999998, "end": 258.44, "text": " I'm not sure that is a proper measure of language understanding if you simply can bake more"}, {"start": 258.44, "end": 264.03999999999996, "text": " and more of the training data into these many, many parameters."}, {"start": 264.03999999999996, "end": 268.68, "text": " But I'm not the judge of that here."}, {"start": 268.68, "end": 270.4, "text": " It can do it very well."}, {"start": 270.4, "end": 278.59999999999997, "text": " So what I'm actually more interested in is this thing is called the zero optimizer that"}, {"start": 278.59999999999997, "end": 280.35999999999996, "text": " they used to train the model."}, {"start": 280.35999999999996, "end": 282.71999999999997, "text": " So the model is just a transformer."}, {"start": 282.71999999999997, "end": 285.15999999999997, "text": " It's just a big, big transformer model."}, {"start": 285.15999999999997, "end": 291.12, "text": " There is nothing really special about the model except that it is larger than the last"}, {"start": 291.12, "end": 294.91999999999996, "text": " model and therefore a bit better."}, {"start": 294.91999999999996, "end": 299.79999999999995, "text": " What is interesting is that this would have been pretty impossible to train if it weren't"}, {"start": 299.8, "end": 304.44, "text": " for this zero optimizer of this deep speed library."}, {"start": 304.44, "end": 307.32, "text": " And Microsoft has released this deep speed library."}, {"start": 307.32, "end": 310.04, 
"text": " It's compatible for now with PyTorch."}, {"start": 310.04, "end": 311.04, "text": " You can check this out."}, {"start": 311.04, "end": 314.72, "text": " I'll put a link into the description."}, {"start": 314.72, "end": 317.68, "text": " And I want to dive into this a bit."}, {"start": 317.68, "end": 318.68, "text": " So there's a paper."}, {"start": 318.68, "end": 322.08000000000004, "text": " It's by Samyam Rajbondari."}, {"start": 322.08000000000004, "end": 327.6, "text": " And I'll end by all by Microsoft."}, {"start": 327.6, "end": 333.92, "text": " Paper describes in detail the optimizer, but it's not very visual."}, {"start": 333.92, "end": 338.6, "text": " That's why we're going to the blog post."}, {"start": 338.6, "end": 348.36, "text": " You can see it gives many speed ups over usual, over the previous Megatron LM model that"}, {"start": 348.36, "end": 353.08000000000004, "text": " Nvidia just trained using kind of just what Nvidia has."}, {"start": 353.08, "end": 361.56, "text": " Nvidia has machines that are interconnected within the machine with very fast, very fast"}, {"start": 361.56, "end": 364.24, "text": " buses between GPUs."}, {"start": 364.24, "end": 372.96, "text": " But this zero optimizer can now also go over the network and make it pretty fast."}, {"start": 372.96, "end": 376.64, "text": " So let's explore that a bit."}, {"start": 376.64, "end": 378.56, "text": " I have the copy that's here."}, {"start": 378.56, "end": 381.32, "text": " So we'll look at zero optimizer works."}, {"start": 381.32, "end": 385.71999999999997, "text": " So usually what you do is you have multiple GPUs."}, {"start": 385.71999999999997, "end": 387.84, "text": " You can do something like this."}, {"start": 387.84, "end": 392.59999999999997, "text": " And this is called data parallelism."}, {"start": 392.59999999999997, "end": 395.2, "text": " What you have is you have a model."}, {"start": 395.2, "end": 398.84, "text": " And the model in this case fits on your GPU."}, {"start": 398.84, "end": 400.76, "text": " It fits on a single GPU."}, {"start": 400.76, "end": 403.2, "text": " So the blue thing here is the model."}, {"start": 403.2, "end": 405.12, "text": " I'll actually draw this."}, {"start": 405.12, "end": 410.0, "text": " So the model, let's say, is a somehow neural network."}, {"start": 410.0, "end": 414.2, "text": " So it has a bunch of layers, layer, layer, layer, layer."}, {"start": 414.2, "end": 418.84, "text": " And what you want to do is you pass data forward."}, {"start": 418.84, "end": 425.0, "text": " Here is some loss and then right into the loss function and then backward again."}, {"start": 425.0, "end": 427.08, "text": " That's basically what you need to do."}, {"start": 427.08, "end": 432.76, "text": " You need to pass it forward and backward in order to do back propagation training."}, {"start": 432.76, "end": 437.24, "text": " If this all fits into one box, that's completely fine."}, {"start": 437.24, "end": 440.8, "text": " If this fits into one machine, cool."}, {"start": 440.8, "end": 446.12, "text": " We can just put many batches of data through batch one, batch two, batch three, and so on."}, {"start": 446.12, "end": 447.12, "text": " Train the model."}, {"start": 447.12, "end": 451.96000000000004, "text": " If you want to do a speed up using this, you can do so."}, {"start": 451.96000000000004, "end": 456.36, "text": " If you have lots of data, you can do what's called, and I'm always confused."}, {"start": 456.36, "end": 458.76, "text": " I think 
this is called data parallelism."}, {"start": 458.76, "end": 460.44, "text": " Or is it called model parallelism?"}, {"start": 460.44, "end": 467.2, "text": " Because in any case, what you can do is you can take a second machine or many of those."}, {"start": 467.2, "end": 468.76, "text": " Replicate the model."}, {"start": 468.76, "end": 474.24, "text": " Now these models, these two models here are exactly the same."}, {"start": 474.24, "end": 475.24, "text": " Right?"}, {"start": 475.24, "end": 479.71999999999997, "text": " But what you do is you take your data and you split it up."}, {"start": 479.71999999999997, "end": 484.32, "text": " Basically, you take double the amount of data and you put one batch of data through the"}, {"start": 484.32, "end": 488.88, "text": " top part and you put the other through the bottom part."}, {"start": 488.88, "end": 494.48, "text": " And you do your forward passes on the machines and you do your backward passes."}, {"start": 494.48, "end": 499.12, "text": " And then what you want to do is you want to sink between the machines, what they learned"}, {"start": 499.12, "end": 500.28000000000003, "text": " from the data."}, {"start": 500.28000000000003, "end": 504.0, "text": " So each machine has a different set of data points."}, {"start": 504.0, "end": 507.92, "text": " Each machine calculates its own up parameter updates."}, {"start": 507.92, "end": 510.24, "text": " It learns from the data it has."}, {"start": 510.24, "end": 517.16, "text": " And then they communicate to keep because this here and this here should be the same."}, {"start": 517.16, "end": 518.64, "text": " It's the same model."}, {"start": 518.64, "end": 521.0, "text": " So they have to keep in sync."}, {"start": 521.0, "end": 527.4, "text": " Now this can be done fairly efficiently, especially if these aren't actually two machines, but"}, {"start": 527.4, "end": 531.48, "text": " just two GPUs inside of one large machine."}, {"start": 531.48, "end": 537.92, "text": " So if this is a large machine, this is GPU zero and this is GPU one."}, {"start": 537.92, "end": 543.8, "text": " This is pretty standard because especially on Nvidia machines, they have these, whatever"}, {"start": 543.8, "end": 548.52, "text": " I think they call them, infinity band or so or is that."}, {"start": 548.52, "end": 555.04, "text": " So Nvidia has these connectors that connects the GPUs together really fast."}, {"start": 555.04, "end": 558.1999999999999, "text": " So you can keep these in sync."}, {"start": 558.1999999999999, "end": 565.0, "text": " But now the problem becomes what if you want to train a model that is larger than this."}, {"start": 565.0, "end": 570.1999999999999, "text": " So let's forget about the data parallelism for now if that is what is called and just"}, {"start": 570.1999999999999, "end": 573.36, "text": " consider a model that is too large."}, {"start": 573.36, "end": 580.32, "text": " So a model that is too large will not fit into a machine."}, {"start": 580.32, "end": 583.6800000000001, "text": " So this is a model as a large model."}, {"start": 583.6800000000001, "end": 592.0, "text": " So what you want to do is you want to pack some of the model onto your first machine and"}, {"start": 592.0, "end": 596.24, "text": " then take the other part of the model."}, {"start": 596.24, "end": 597.24, "text": " Sorry."}, {"start": 597.24, "end": 599.32, "text": " And pack it onto another machine."}, {"start": 599.32, "end": 603.6800000000001, "text": " You separate the model and put it on 
different machines."}, {"start": 603.6800000000001, "end": 608.6800000000001, "text": " Now if you have a batch of data, what you have to do is you pass it, pass it, pass it,"}, {"start": 608.6800000000001, "end": 612.5600000000001, "text": " forward propagate as you regularly would, but then you have an intermediate result."}, {"start": 612.5600000000001, "end": 617.44, "text": " You send that to the next machine and you forward propagate that at the end here you have"}, {"start": 617.44, "end": 620.12, "text": " a loss, right?"}, {"start": 620.12, "end": 624.0400000000001, "text": " You want to back propagate regularly through this machine."}, {"start": 624.0400000000001, "end": 626.84, "text": " You have an intermediate result of back propagation."}, {"start": 626.84, "end": 632.52, "text": " Send it over the network and back prop all the way through the model."}, {"start": 632.52, "end": 638.6, "text": " So that's how you can train a model that is too large for one machine if you have multiple"}, {"start": 638.6, "end": 639.64, "text": " machines."}, {"start": 639.64, "end": 644.8000000000001, "text": " The problem here, of course, is this part."}, {"start": 644.8000000000001, "end": 651.84, "text": " Just as you had to keep and sync the model before, now your communication problem becomes"}, {"start": 651.84, "end": 653.64, "text": " one of..."}, {"start": 653.64, "end": 662.0, "text": " You have to send the intermediate stages to that model and you have to send the intermediate"}, {"start": 662.0, "end": 666.64, "text": " stage of the back propagation back to that part of the model."}, {"start": 666.64, "end": 676.48, "text": " And while this part is working, this part is kind of idling away and the network overhead,"}, {"start": 676.48, "end": 680.2, "text": " sorry, is just very costly."}, {"start": 680.2, "end": 685.72, "text": " Actually if your model is so large, you can't even fit into one of these single boxes,"}, {"start": 685.72, "end": 686.72, "text": " right?"}, {"start": 686.72, "end": 695.36, "text": " So this is very problematic here."}, {"start": 695.36, "end": 697.08, "text": " It's still doable."}, {"start": 697.08, "end": 704.24, "text": " But what the zero optimizer does is it does both data and model parallelism."}, {"start": 704.24, "end": 714.6800000000001, "text": " So it can train models that are too large for a single machine."}, {"start": 714.6800000000001, "end": 721.04, "text": " And it can do data parallelism at the same time, such that basically everything is working"}, {"start": 721.04, "end": 722.04, "text": " all the time."}, {"start": 722.04, "end": 725.72, "text": " There is no wasted or much wasted computation."}, {"start": 725.72, "end": 728.36, "text": " The communication is efficient and so on."}, {"start": 728.36, "end": 730.72, "text": " So it's really a technical achievement."}, {"start": 730.72, "end": 736.64, "text": " It's not so much an scientific advance, it's really a technical achievement, this optimizer."}, {"start": 736.64, "end": 739.2, "text": " And we'll shortly go through."}, {"start": 739.2, "end": 743.76, "text": " There is a kind of an animation, but it's on the website, but it's super slow."}, {"start": 743.76, "end": 748.28, "text": " And I think this might be the first time that I will be faster at explaining something"}, {"start": 748.28, "end": 749.76, "text": " than a video."}, {"start": 749.76, "end": 751.44, "text": " All right."}, {"start": 751.44, "end": 752.44, "text": " Let's see here."}, {"start": 752.44, "end": 
753.44, "text": " Cool."}, {"start": 753.44, "end": 756.76, "text": " So what do you do is let's just consider these three GPUs."}, {"start": 756.76, "end": 758.6800000000001, "text": " Before that, it would all fit on one machine."}, {"start": 758.68, "end": 762.56, "text": " And now let's say you don't actually have that much memory."}, {"start": 762.56, "end": 767.4399999999999, "text": " You don't have these giant empty blocks here."}, {"start": 767.4399999999999, "end": 768.8, "text": " You just have a bit of that."}, {"start": 768.8, "end": 770.88, "text": " So you have to split your model."}, {"start": 770.88, "end": 773.64, "text": " The blue parts here are your model."}, {"start": 773.64, "end": 777.1999999999999, "text": " These are model parameters."}, {"start": 777.1999999999999, "end": 783.8399999999999, "text": " The orange parts here is memory you need to store gradients."}, {"start": 783.8399999999999, "end": 787.7199999999999, "text": " You need as many gradients as you have model parameters."}, {"start": 787.72, "end": 791.0400000000001, "text": " Because you do gradient descent."}, {"start": 791.0400000000001, "end": 794.96, "text": " The green stuff here are what's called optimizer parameters."}, {"start": 794.96, "end": 801.1600000000001, "text": " Now if you just have SGD, these would be nonexistent."}, {"start": 801.1600000000001, "end": 805.28, "text": " But if you have something like add a grad or add them, they have additional parameters"}, {"start": 805.28, "end": 808.6800000000001, "text": " for each model parameter that they need to keep track of."}, {"start": 808.6800000000001, "end": 811.52, "text": " So these are stored here."}, {"start": 811.52, "end": 815.2, "text": " And there can be significant overhead."}, {"start": 815.2, "end": 819.96, "text": " There's also like a floating point 32, 16 conversion going on here."}, {"start": 819.96, "end": 821.32, "text": " Don't want to go into that."}, {"start": 821.32, "end": 824.5200000000001, "text": " So you split your model onto these three machines."}, {"start": 824.5200000000001, "end": 825.76, "text": " Let's say that's your entire model."}, {"start": 825.76, "end": 829.1600000000001, "text": " Your model is six blocks wide."}, {"start": 829.1600000000001, "end": 833.24, "text": " And you need to forward propagate now through everything."}, {"start": 833.24, "end": 833.76, "text": " All right."}, {"start": 833.76, "end": 836.0400000000001, "text": " So here is what zero does."}, {"start": 836.0400000000001, "end": 838.08, "text": " And I think it's pretty cool."}, {"start": 838.08, "end": 841.96, "text": " So what we need to do is we have these three different batches of data."}, {"start": 841.96, "end": 849.5600000000001, "text": " And we want to forward propagate them all through the model, through the same model at the"}, {"start": 849.5600000000001, "end": 855.2, "text": " same time as if the model were actually stored on all these machines."}, {"start": 855.2, "end": 860.44, "text": " Like if all of these machines had the entire model."}, {"start": 860.44, "end": 862.88, "text": " And we have can do a bit of communication."}, {"start": 862.88, "end": 867.2, "text": " So what we do first is this one's easy, right?"}, {"start": 867.2, "end": 871.72, "text": " Data zero through the first two layers here is easy, right?"}, {"start": 871.72, "end": 873.32, "text": " Because we have them, right?"}, {"start": 873.32, "end": 879.48, "text": " So bang, you go through the first, you get an intermediate result 
here and here, right?"}, {"start": 879.48, "end": 880.72, "text": " Okay."}, {"start": 880.72, "end": 890.24, "text": " How do we propagate data one through this through the first layer?"}, {"start": 890.24, "end": 891.9200000000001, "text": " We can't send data one here."}, {"start": 891.9200000000001, "end": 894.12, "text": " That would be too expensive, right?"}, {"start": 894.12, "end": 896.9200000000001, "text": " And that's the whole point would be lost, right?"}, {"start": 896.92, "end": 902.16, "text": " We want to actually compute data one on this GPU at the same time."}, {"start": 902.16, "end": 909.56, "text": " What we do is before we start, we actually communicate these two blocks here, two GPU"}, {"start": 909.56, "end": 911.12, "text": " one, right?"}, {"start": 911.12, "end": 914.9599999999999, "text": " We send these parameters around and fill them in here."}, {"start": 914.9599999999999, "end": 918.0799999999999, "text": " I'm actually making some blue, right?"}, {"start": 918.0799999999999, "end": 921.4399999999999, "text": " We send them here and we also send them here, right?"}, {"start": 921.4399999999999, "end": 925.7199999999999, "text": " We send the parameters to all the machines, right?"}, {"start": 925.72, "end": 933.6, "text": " And we can actually forward prop data one through this and data three through this, right?"}, {"start": 933.6, "end": 939.12, "text": " So we can do forward prop after we've communicated all the GPUs can be working."}, {"start": 939.12, "end": 941.12, "text": " Same with layer two, right?"}, {"start": 941.12, "end": 947.96, "text": " So layer two simply can send it's these two here."}, {"start": 947.96, "end": 954.08, "text": " You can see that these two here to the other machines, right?"}, {"start": 954.08, "end": 958.5200000000001, "text": " Now while it's doing that, we've already propagated through the first layer."}, {"start": 958.5200000000001, "end": 964.36, "text": " So we've already propagated here and here through the first layer."}, {"start": 964.36, "end": 968.24, "text": " So we can actually delete these again, right?"}, {"start": 968.24, "end": 973.88, "text": " We can delete these first layer parameters that we sent around again, right?"}, {"start": 973.88, "end": 976.88, "text": " So here's, you see how we can save memory."}, {"start": 976.88, "end": 980.84, "text": " We don't keep all the modeling sync on all the machines."}, {"start": 980.84, "end": 988.6, "text": " We send whatever we need on the other machines and then once the computation is done, they"}, {"start": 988.6, "end": 990.44, "text": " can delete it again, right?"}, {"start": 990.44, "end": 992.88, "text": " Because there's always one machine, right?"}, {"start": 992.88, "end": 997.52, "text": " This one here for the middle parameters that keeps track of the parameters and that can"}, {"start": 997.52, "end": 1001.0, "text": " at any point if they're needed, send them again."}, {"start": 1001.0, "end": 1003.1600000000001, "text": " So that's the big kind of catch."}, {"start": 1003.1600000000001, "end": 1006.6800000000001, "text": " So you can forward prop nav through these two, right?"}, {"start": 1006.6800000000001, "end": 1008.76, "text": " They hear there are already present."}, {"start": 1008.76, "end": 1014.4, "text": " And then you can delete those again on the machines where they're not natively stored."}, {"start": 1014.4, "end": 1023.0, "text": " And you can send from here, you can send those two also up here, you can send those two."}, 
{"start": 1023.0, "end": 1029.12, "text": " And forward prop your model through to the end, right?"}, {"start": 1029.12, "end": 1030.76, "text": " Dada, dada, dada, oops."}, {"start": 1030.76, "end": 1033.28, "text": " Yeah, that was a mistake."}, {"start": 1033.28, "end": 1036.24, "text": " They should be here."}, {"start": 1036.24, "end": 1040.2, "text": " And each machine calculates its own loss, right?"}, {"start": 1040.2, "end": 1042.48, "text": " Loss, loss, loss."}, {"start": 1042.48, "end": 1046.6, "text": " And the backward propagation happens in much the same way."}, {"start": 1046.6, "end": 1051.6, "text": " So as you can, if you follow so far, you can already imagine, right?"}, {"start": 1051.6, "end": 1057.2, "text": " What I can do is, so now this is a loss, the loss is different because there's a different"}, {"start": 1057.2, "end": 1060.24, "text": " batch of data going through each machine, right?"}, {"start": 1060.24, "end": 1064.6, "text": " So there's a different batch of data going through each machine, but each machine has computed"}, {"start": 1064.6, "end": 1072.1999999999998, "text": " with the same model due to the communication of the zero optimizer."}, {"start": 1072.1999999999998, "end": 1073.6, "text": " So that's pretty cool."}, {"start": 1073.6, "end": 1077.32, "text": " So you get the benefits of data parallelism, right?"}, {"start": 1077.32, "end": 1079.6799999999998, "text": " Lots of data on the different machines."}, {"start": 1079.6799999999998, "end": 1087.1599999999999, "text": " And you also split up the model across the machines, but you don't actually store the"}, {"start": 1087.1599999999999, "end": 1089.08, "text": " model on any of these machines."}, {"start": 1089.08, "end": 1090.52, "text": " You don't store the model."}, {"start": 1090.52, "end": 1098.68, "text": " You only send, right, from here you send as you need and then you delete again, right?"}, {"start": 1098.68, "end": 1102.08, "text": " And for the backward propagation, same thing, right?"}, {"start": 1102.08, "end": 1109.84, "text": " You backward propagation, you calculate gradients, now you calculate gradients here and you send"}, {"start": 1109.84, "end": 1114.32, "text": " the gradients as needed to the other machines, right?"}, {"start": 1114.32, "end": 1121.6, "text": " You calculate gradients here and here and you send them to the machine where they're actually"}, {"start": 1121.6, "end": 1122.6, "text": " needed."}, {"start": 1122.6, "end": 1124.08, "text": " This is a weird pen, sorry."}, {"start": 1124.08, "end": 1125.84, "text": " You send them to that machine."}, {"start": 1125.84, "end": 1129.72, "text": " That machine will aggregate all the gradients of all the machines."}, {"start": 1129.72, "end": 1132.2, "text": " What is up with this pen?"}, {"start": 1132.2, "end": 1134.2, "text": " Okay."}, {"start": 1134.2, "end": 1140.12, "text": " It will aggregate them and then locally it can compute using these optimizer parameters and"}, {"start": 1140.12, "end": 1141.12, "text": " so on."}, {"start": 1141.12, "end": 1146.6, "text": " All kinds of optimization locally because it has gathered gradients from all the other"}, {"start": 1146.6, "end": 1147.6, "text": " data."}, {"start": 1147.6, "end": 1159.32, "text": " So what you end up with, for example, GPU 2 here, for these two layers, it has effectively"}, {"start": 1159.32, "end": 1167.28, "text": " broadcast the layers such that much, much more data than it just had itself could run"}, {"start": 1167.28, "end": 
1168.8799999999999, "text": " through the layers."}, {"start": 1168.88, "end": 1176.8400000000001, "text": " It has aggregated gradients from all of that data and now it can use all of these gradients"}, {"start": 1176.8400000000001, "end": 1184.4, "text": " together to make a good update using the optimizer parameters to make a good update to these"}, {"start": 1184.4, "end": 1186.0400000000002, "text": " model parameters."}, {"start": 1186.0400000000002, "end": 1190.6000000000001, "text": " And then in the next iteration, it can go ahead and broadcast the model parameters, the"}, {"start": 1190.6000000000001, "end": 1192.4, "text": " new model parameters again."}, {"start": 1192.4, "end": 1199.8000000000002, "text": " So it is able to compute with much more data than it can just fit by itself and it is just"}, {"start": 1199.8000000000002, "end": 1201.4, "text": " doing its part."}, {"start": 1201.4, "end": 1207.76, "text": " So zero and deep speed, so zero is the, I guess the protocol and deep speed is the actual"}, {"start": 1207.76, "end": 1208.76, "text": " library."}, {"start": 1208.76, "end": 1215.24, "text": " They will do all of this communication and splitting and so on for you over the network"}, {"start": 1215.24, "end": 1223.32, "text": " in a way that is efficient in a way that everything runs at the same time and the communication"}, {"start": 1223.32, "end": 1225.92, "text": " overhead is minimal."}, {"start": 1225.92, "end": 1229.84, "text": " And you can actually choose which stage you want."}, {"start": 1229.84, "end": 1235.72, "text": " So what your trade off of communication and memory saving will be."}, {"start": 1235.72, "end": 1237.68, "text": " So this is extremely cool."}, {"start": 1237.68, "end": 1247.72, "text": " They say this goes up to whatever 100 billion parameter models if you use item."}, {"start": 1247.72, "end": 1251.4, "text": " This is something for you know your average call abuser."}, {"start": 1251.4, "end": 1255.52, "text": " This is really something for the for big players."}, {"start": 1255.52, "end": 1262.6000000000001, "text": " But that being said, I don't think language is solved by simply throwing more parameters"}, {"start": 1262.6000000000001, "end": 1263.6000000000001, "text": " at it."}, {"start": 1263.6000000000001, "end": 1266.52, "text": " I think we're still a bit of a breakthrough."}, {"start": 1266.52, "end": 1273.52, "text": " We're still a bit of a breakthrough ahead yet to come in language understanding with"}, {"start": 1273.52, "end": 1275.6399999999999, "text": " newer model architectures."}, {"start": 1275.6399999999999, "end": 1276.6399999999999, "text": " Alright that was it for me."}, {"start": 1276.64, "end": 1304.64, "text": " Thanks."}] |
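The transcript segments above describe how the ZeRO optimizer combines data parallelism with parameter sharding: each worker permanently stores only a slice of the model, the owner of a layer broadcasts its parameters to the other workers just long enough for them to compute that layer, the borrowed copies are then freed, and gradients are reduced back to the owning worker, which applies its optimizer state locally. Below is a minimal, hypothetical sketch of that broadcast-then-compute / reduce-to-owner pattern using plain torch.distributed primitives; the two-layer toy model, the gloo backend, and the launch command in the comment are illustrative assumptions, not the actual DeepSpeed implementation.

```python
# Hypothetical sketch of ZeRO-style parameter sharding combined with data parallelism.
# Assumption: launched with `torchrun --nproc_per_node=2 zero_sketch.py` (one process per "GPU").
import torch
import torch.distributed as dist

def main():
    dist.init_process_group("gloo")               # "nccl" would be used on real GPUs
    rank, world = dist.get_rank(), dist.get_world_size()

    # Each rank permanently owns the parameters of exactly one layer (its shard).
    layers = [torch.nn.Linear(16, 16) for _ in range(world)]

    # Each rank also gets its own micro-batch: this is the data-parallel part.
    torch.manual_seed(rank)
    x = torch.randn(8, 16)

    # Forward pass: the owner broadcasts layer i, every rank computes with it,
    # and a real implementation would free the non-owned copy right afterwards.
    out = x
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            dist.broadcast(p.data, src=i)         # temporarily materialise layer i everywhere
        out = layer(out)

    loss = out.pow(2).mean()                      # every rank has its own loss
    loss.backward()

    # Backward pass: gradients for layer i are summed onto the rank that owns it,
    # which can then run its optimizer (e.g. Adam/Adagrad state) locally on its slice.
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            if p.grad is not None:
                dist.reduce(p.grad, dst=i, op=dist.ReduceOp.SUM)

    if rank == 0:
        print("rank 0 loss:", loss.item())

if __name__ == "__main__":
    main()
```

The real library additionally handles prefetching, freeing, mixed precision and the choice of sharding stage; the sketch only illustrates the communication pattern the video describes.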
Yannic Kilcher | https://www.youtube.com/watch?v=vB_hQ5NmtPs | [Interview] Mark Ledwich - Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization | Interview with one of the authors of a widely reported study on YouTube's recommendation engine and where it leads its users.
https://arxiv.org/abs/1912.11211
https://www.recfluence.net/
https://github.com/markledwich2/Recfluence
https://www.patreon.com/ledwich
Abstract:
The role that YouTube and its behind-the-scenes recommendation algorithm plays in encouraging online radicalization has been suggested by both journalists and academics alike. This study directly quantifies these claims by examining the role that YouTube's algorithm plays in suggesting radicalized content. After categorizing nearly 800 political channels, we were able to differentiate between political schemas in order to analyze the algorithm traffic flows out and between each group. After conducting a detailed analysis of recommendations received by each channel type, we refute the popular radicalization claims. To the contrary, these data suggest that YouTube's recommendation algorithm actively discourages viewers from visiting radicalizing or extremist content. Instead, the algorithm is shown to favor mainstream media and cable news content over independent YouTube channels with slant towards left-leaning or politically neutral channels. Our study thus suggests that YouTube's recommendation algorithm fails to promote inflammatory or radicalized content, as previously claimed by several outlets.
Authors: Mark Ledwich, Anna Zaitsev | All right, I'm very pleased to have Mark Ledwich here today. He's one of the authors of this paper that's called Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. So I've done a video about a topic like this before, actually several, and this is basically one in a line of research that examines the recommendation algorithm of YouTube specifically, but also kind of the general social media platforms. So Mark, thanks for being here. Could you, maybe for people who do not know anything about this, explain where your work fits into what's been done before, or also what comes out of the mainstream media about this topic? Because there's been quite a bit of talk. Yeah, so I'm not a researcher by trade, I'm a programmer, and the reason I got into this was because I could see clear bias in the way the YouTube recommendation system has been reported on, and also in the research. There are some narratives, I think it might be because there are a lot of people worried about right-wing radicalization, and this is a way to explain that. They're looking for ways YouTube is radicalizing people and finding evidence for that, whether that's anecdotes or, in some of the studies, quantitative data. But they're only looking to confirm it. So there are really obvious things, I think you covered them in your video: just looking for movement towards alt-right channels through, like, centrist or alt-light channels, as they call them, instead of looking both ways. Just really obvious things like that, calling it an infection; that clearly shows they haven't really looked at it the way a curious person would. So I thought I could easily, as a software engineer, just collect all the data, and without any complicated statistics, just look at the overall flow of recommendations between videos, whatever their political leaning is. Yeah, this was a thing that bugged me about the paper that I made a video about: they claim there's this radicalization pipeline, right, and with pipeline everyone sort of understands a different thing, but I think the general consensus is that the recommendation algorithm itself will steer you towards more extreme content, and in this case towards the alt-right, extremist content. And the paper actually analyzed and said, okay, we found evidence that there is movement in this direction, but they never showed that this is significantly more movement than in the other direction. So in order to justify a pipeline, one would need to show that the movement this way is larger than the movement the other way, in some notion. And I've actually spoken to the author of that paper, and he agrees with that, but obviously doesn't have the energy to go and review everything that comes at them. They've also been exposed to a lot of criticism, let's say, as have you. And I think even more: when your paper came out, for the first few days there was just a giant storm of people attacking your paper, basically, just listing every single thing that's wrong with it and why it isn't valid and things like this. So let's actually jump into what you did specifically. If I can summarize, and you can then maybe correct me, so that we can establish what happened: you basically collected recommendations. So you scraped these videos on YouTube, and you collected these recommendations.
And we can see this. So in your paper, you can then make such diagrams, such as this one, or these, where in the middle the white bar is a channel or a group that you're interested in. Then to the left, you can see where all the impressions of that channel or group come from, so basically where the views come from through the recommendation system. And on the right, you can see, of all the views that channel has received, where do they go to, so what is recommended next. Right? So it basically shows both directions for every group. And then you've also labeled these by multiple methods so that you can establish these groups. And what is pretty cool, you've built this website where you can analyze this. My computer is a bit overloaded at the moment, but I promise this is really interactive. All right, so during the interview my computer crashed, so I'm doing this in post-production, just to show you how this website operates. So what you have here is an overview over all the channels for which recommendations were collected. And they're grouped into groups. For example, here I have the partisan left, the centre-left, social justice, the partisan right and so on. So you can see for each group or channel where recommendations come from and where they go to. For example, the large red one here, I happen to know, is Fox News. You can see that of the daily impressions Fox News received, about 36 million came from itself, and it gives 36 million to itself. These numbers have to agree by the nature of how the data is collected, of course. But you can also see it gets 2.7 million impressions from CNN, 2.6 million from The Next News Network, and so on. And it also gives quite a bit of recommendations to CNN and so on. So you can go, for example, to some individual channel; here's The Daily Wire. The Daily Wire is mainly run by Ben Shapiro, so it's a bit more to the right of Fox News and a bit more in the direction of alternative media. You can see The Daily Wire gets most of its impressions, count-wise, from itself, from The Daily Wire. But it gives most of them to Fox News. So actually, you can see here, itself is a long way down, in whatever sixth or seventh place. So actually, if you were to watch The Daily Wire, the recommendation system would most likely steer you towards something like Fox News, whereas the claim is that the YouTube algorithm would actually steer you towards more radical content. In reality, it seems like it would steer you towards more of this mainstream content. All right, so I actually want to go to this: you can see different groupings here, and the radicalization pathways grouping is from the previous paper we have looked at. So they have all these channels here of this radicalization pathway. And you can see here, the control group gives very, very few impressions to the IDW; the IDW gives much more impressions to the control group. Right? Again, the IDW gives very few impressions to the alt-light compared to the amount of impressions the alt-light gives to the IDW and even to the control group. And if you look at the alt-right, and we're going to zoom in on that here, it's even more so. The alt-right, of course, receives most of its impressions from itself, which you could expect for any kind of group; this is your classic filter bubble situation. But what if we analyze the question of: is there a pipeline?
You can see that next most likely, you are diverted to the IDW and to the control group, much more than you come from the IDW or the control group. Right? Let's look at the so-called alt-light; this is kind of the so-called gateway to the alt-right. You can see here, the alt-light gives most of its impressions, next to itself, to the control group and the IDW. So, de-radicalizing. If you look at its connection to the alt-right, you'll see that it gets about four times as many impressions from the alt-right as it gives to the alt-right. So basically, it's kind of taking the steam out of a quarter of all of these sessions and giving it to either the control group, the IDW, or itself. So this is exactly the opposite of what you would expect if you were to claim that there is a pipeline. You would expect most recommendations to come from more moderate content and go towards more extreme content, but it's exactly the opposite. And again, these are the exact channels that this original paper used. Now, what this paper found, the one that we're discussing: if you go to media type here, what you'll be able to see is the division into mainstream media, YouTube creators, and so-called missing-link media, which we'll leave out for a moment. Let's focus on mainstream versus YouTube creators. You can see the mainstream media gives most recommendations to itself, while giving only very few recommendations to YouTube creators and the missing-link media, while the YouTube creators actually give almost half of their impressions, look at that, they give almost half of their impressions to the mainstream media, which means that there is a big, big push by the algorithm towards the mainstream media and away from YouTube creators. So in general, and I invite you to look at this website, you can pretty much see that the exact opposite of a radicalization pipeline is happening if you look at these recommendations and how they are distributed. Actually, most recommendation pathways are towards moderate, centrist content, and of course this creates filter bubbles, which is a problem by itself, but it is not a radicalization pipeline. Lastly, I want to look at the white identitarians, because it's one of the groups that people are referring to when they claim that there are these radicalization pipelines. Look at that. The white identitarians get most of their impressions, of course, from other white identitarian videos, which would be your filter bubble phenomenon, but they give most, and this is a group, right, the white identitarian channels give most of their recommendations to the partisan right, to the centre and left mainstream media, libertarians, and so on, and themselves are really, really far down. So to claim that there is this radicalization pipeline, if you look at this data, seems not justified to me. And if I look at the other paper, which really left out the important analysis of the backwards direction, it seems that, given everything, the claim is not warranted. All right, back to the interview. Is that about what you've done? Is that a good summary of the data collection and analysis? Yeah, it's a good summary. I can go into detail. Yeah, please. So YouTube doesn't make it easy. So I started this back in November 2018.
I was using the YouTube API, and to get enough quota, because they limit the amount of requests you can actually make to their API, I created multiple keys, which is against their policy, and they also ask you to delete all your data after 30 days, which is also part of their policy. So later, I think it was October 2019, they cut off my access because I was doing that. So I had to move to just scraping websites, and now my collection process actually just loads up the website and gets the recommendations from the actual page, like a user would. And that's difficult because they block access after a couple of hundred requests; they'll stop that machine from actually requesting from the website. So I need to use a proxy service that's fairly expensive, and what they do is they have actual residential connections, through home connections like AT&T, and my requests get tunneled through that, through a variety of locations in the States, to get a representative kind of sample. Cool. So the data collection, would you say that's the hardest part? I feel the labeling of channels is also not so easy, but you've managed to do it half automated and half by collecting things from sources that analyze these channels. But at least for most of the things that I've inspected, I found the labeling to be pretty sane. I think this is always something you can attack; the original paper was also attacked on how they label. I find this to be kind of nitpicky. Mostly I think your labels are pretty good, and the other paper's labels are also mostly pretty okay. So let's go to... Sorry. Yeah, it's quite subjective. I expected the labeling to be what I get my pushback on, but it turns out it was the anonymous collection. So, what you've actually found here, what would you say are your main results? And I can maybe show... So you've analyzed a bit where things come from and where things go to. And I found this part here to be, even though it's pretty simple, one of the core things to say about this: mostly what you found could be summarized as a recommendation algorithm simply working as a recommendation algorithm should, which means it creates your typical filter bubbles. If I watch one makeup tutorial, all of a sudden my feed is filled with makeup tutorials and things like this. But you also found that there is quite some additional push towards what could be considered mainstream media, and there is a bit of a draw away from the smaller, YouTuber-like channels. Is that something that characterizes your work? That's right. Yeah, that's a good way to characterize it. In that chart we're looking at now, if it was a neutral algorithm, the green bars would be the same as the grey ones. So you'd receive the same amount of recommendations as you give; the recommendations would be equivalent to the views you get organically. But we find that it disproportionately recommends mainstream media channels. That's not to say it's consistently doing that, though; you can find exceptions to that. I believe one of the main criticisms of your paper has been that you only use data from 2019 onwards. And I have actually looked at your website, and your website in a lot of places says that the data has been collected from way earlier than that. So is it that you've only used 2019 data in your paper? Or what is it? The paper is just from November and December 2019.
And the reason we did that is that we only had 400 channels before that, and the collection process has changed over time. So this is a clean set of data we could look at, and I thought the most recent was the most relevant: so, what is it doing now? But I've provided, I've got the same analysis over time. So I've got a GIF that I made, which goes through all the months I've been collecting, and you can see that chart for where it goes to. And it has gone through a bunch of changes. So in about April 2019, that's when they really clamped down on conspiracies and other fringe channels. Before that it was much closer to neutral, but it never looked like a rabbit hole; it never favored fringe channels. Yeah, I mean, that has been my experience as a person on YouTube as well. I've joined YouTube very early, or I've watched YouTube from very early on, when young-earth creationism was still active, and these things were kind of completely discredited by simply having people exposed to other points of view. And I even find this now: even though YouTube makes it kind of easy to find, let's say, niche content, it also exposes you to a bunch of different views. And I've always found this to be very optimistic, in the sense that this is probably de-radicalizing many more people than it is radicalizing. But you've received, as I said, a bunch of criticism. So if you could... what was the largest criticism, irrespective of whether it was valid or not? What did you find was what most people were criticizing? Most people were criticizing that we were collecting anonymous recommendations, not the personalized ones. And it's actually... it is a valid limitation. It's the first limitation we talk about in this paper, and it's still an open question how personalization would affect these aggregate results that we've got. But I think it's reasonable to assume it will be quite similar once you average it out. So for any one person it might be different, but you would expect personalization based on someone's history to even out, because the anonymous recommendations are kind of the average of all that. Yeah, I feel this is something... the notion that, because if you're not logged in, the recommendation is like one for a person with only one video of history. So it's the same thing, but there's only one point of history instead of multiple. I find, why should the behavior be qualitatively different if you have multiple points of history? This is a strong claim; you have to really show that there is a qualitative difference, not just more or less accuracy. And I feel the people making this criticism are... it's really on them to show that there is a substantial difference, rather than saying that this is a giant limitation of the work. Yeah, and it's also very hypocritical for a lot of the people saying it, because some of them, like Zeynep, who was mocking it, had used AlgoTransparency in her original New York Times article, which is anonymous as well, but she never looked into that. I think a lot of this is completely motivated reasoning; they don't care about the details. I've seen this one Twitter user, she said something to the effect of: if you see this article, please consult someone that works in this space, please don't read the article yourself, you must get your information through someone. I've read the article; I find it's pretty straightforward. The limitations are clear, but also the results are pretty clear.
It's actually mostly a boring article. I'm sorry, that's not a criticism, this is good. Mostly you find that things work as expected. There is a bit of a push towards the mainstream, which can probably be explained by YouTube wanting to be advertiser-friendly: these mainstream channels already are advertiser-friendly, so they probably get bumped a bit. What would you say is maybe the most valid criticism that you've heard, maybe not the biggest, but the most valid? Where do you say, yeah, this is really something that is... I think there was criticism that I'm over-claiming, not in the paper so much, but in my tweets and on Medium. I guess that's fair, but when I tweet and write on Medium, those are what I believe, in kind of a Bayesian way; I'm not hedging my claims like you would when you're writing a paper. So I guess that's valid. I think a lot of people read into what I was saying more than what I said. So when I say the algorithm has a de-radicalizing influence, I'm just talking about the recommendations, whereas a lot of people take that to be talking about all things considered. So even if it doesn't have a bias towards the fringe, maybe sociologically YouTube radicalizes people; it could be the case, I don't know. But that's what I'm talking about: I'm talking about just the influence through recommendations. And that's all we can hold Google accountable for, or at least it's what probably everyone could agree that Google should be held accountable for with its recommendation system. Yeah, do you expect something to come out of... or have you heard anything come out of YouTube themselves, like the company, any form of official statement on this? Nothing, nothing at all. I heard from one reporter who was complaining that YouTube had sent them this, so I think they've read it, but I have absolutely no contact with them. Okay. Cool. Are you doing anything in follow-up, or do you have plans for more research? Not at this stage; I've just gone back to work. I've applied for a bunch of independent grant money, but I'm not optimistic. So if I don't get that, I'll keep it paddling along. I'll probably reduce the amount of recommendations, because I'm spending about $500 a month at the moment just keeping it running, so I've got to reduce my costs. Yeah. And you do have a Patreon for people to chip in to that, right? Yeah. So if you can link to that, that'd be good. I'm getting something like $22 a month, so it doesn't really cover it. Yeah. All right. So, okay, this has been very pleasant. I think we've looked at a lot of things. Is there anything you would like to add to this, that people should know about the research or about this field? I would just encourage you to have a play digging into the data itself; if you're in this area, the data is free to use, the code is free to use. Just consider this a contribution to knowledge. Cool. Well, thanks a lot, Mark. I wish you a very pleasant evening, for you I guess. And cheers. Thanks. Thanks for having me. Bye. Bye. Bye.
| [{"start": 0.0, "end": 3.0, "text": " All right, I'm very pleased to have Mark Ledowitch here today."}, {"start": 4.0, "end": 8.0, "text": " He's one of the authors of this paper that's called"}, {"start": 8.0, "end": 11.0, "text": " Algorithmic Extremism Examining YouTube's"}, {"start": 11.0, "end": 13.0, "text": " Rhabitole of Radicalization."}, {"start": 13.0, "end": 17.0, "text": " So I've done a video about a topic like this before,"}, {"start": 17.0, "end": 20.0, "text": " actually several, and this is basically one"}, {"start": 20.0, "end": 23.0, "text": " in a line of research that examines"}, {"start": 23.0, "end": 26.0, "text": " the recommendation algorithm of YouTube specifically,"}, {"start": 26.0, "end": 30.0, "text": " but also kind of the general social media platforms."}, {"start": 30.0, "end": 32.0, "text": " So Mark, thanks for being here."}, {"start": 34.0, "end": 38.0, "text": " Could you, maybe for people who do not know anything about this?"}, {"start": 38.0, "end": 42.0, "text": " Could you kind of explain where your work fits into"}, {"start": 42.0, "end": 47.0, "text": " what's been done before or kind of also what comes out of the"}, {"start": 47.0, "end": 51.0, "text": " mainstream media about this topic?"}, {"start": 51.0, "end": 55.0, "text": " Because there's been quite a bit of talk."}, {"start": 55.0, "end": 60.0, "text": " Yeah, so I'm not a researcher by trade-ummer programmer,"}, {"start": 60.0, "end": 65.0, "text": " and the reason I got into this was because I could see"}, {"start": 65.0, "end": 69.0, "text": " clear bias in the way the YouTube recommendation system"}, {"start": 69.0, "end": 72.0, "text": " has been reported on, and also in the research."}, {"start": 72.0, "end": 76.0, "text": " There's some narratives, I think it might be,"}, {"start": 76.0, "end": 79.0, "text": " because there's a lot of people worried about"}, {"start": 79.0, "end": 83.0, "text": " writing properly, and this is a way to explain that."}, {"start": 83.0, "end": 88.0, "text": " They're looking for ways YouTube are radicalising people"}, {"start": 88.0, "end": 93.0, "text": " and finding evidence for that, or, but that could be anecdotes,"}, {"start": 93.0, "end": 97.0, "text": " or in some of the studies, et cetera, quantitative data."}, {"start": 97.0, "end": 100.0, "text": " But they're only looking to confirm it."}, {"start": 100.0, "end": 102.0, "text": " So there's really obvious things."}, {"start": 102.0, "end": 105.0, "text": " I think you covered in your video."}, {"start": 105.0, "end": 109.0, "text": " Just look for movement towards all right channels"}, {"start": 109.0, "end": 113.0, "text": " through, like, Centrist or, oh, light, they call it,"}, {"start": 113.0, "end": 115.0, "text": " instead of looking for both ways."}, {"start": 115.0, "end": 118.0, "text": " Just really obvious things like that, calling it,"}, {"start": 118.0, "end": 121.0, "text": " calling it an infection, that clearly shows"}, {"start": 121.0, "end": 125.0, "text": " that I've really looked at it like a curious person would."}, {"start": 125.0, "end": 127.0, "text": " So I thought I could easily, as a software engineer,"}, {"start": 127.0, "end": 132.0, "text": " just collect all the data, and without any complicated statistics,"}, {"start": 132.0, "end": 136.0, "text": " just looking at the overall flow of recommendations"}, {"start": 136.0, "end": 141.0, "text": " between videos, what their political influence is."}, {"start": 141.0, "end": 145.0, "text": " Yeah, this was a 
thing that bugged me"}, {"start": 145.0, "end": 148.0, "text": " of the paper that I made a video about,"}, {"start": 148.0, "end": 151.0, "text": " is that they claim there's this radicalisation pipeline,"}, {"start": 151.0, "end": 154.0, "text": " right, and with pipeline, everyone sort of understands"}, {"start": 154.0, "end": 157.0, "text": " a different thing, but I think the general consensus"}, {"start": 157.0, "end": 160.0, "text": " is that the recommendation algorithm itself"}, {"start": 160.0, "end": 165.0, "text": " will steer you towards, like, more extreme content."}, {"start": 165.0, "end": 170.0, "text": " And in this case, towards, like, the alt-right extremist content."}, {"start": 170.0, "end": 174.0, "text": " And the paper actually analysed and said,"}, {"start": 174.0, "end": 177.0, "text": " okay, we found evidence that there is movement"}, {"start": 177.0, "end": 181.0, "text": " in this direction, but they never shown"}, {"start": 181.0, "end": 184.0, "text": " that this is significantly more movement"}, {"start": 184.0, "end": 187.0, "text": " than, like, in the other direction."}, {"start": 187.0, "end": 190.0, "text": " So in order to justify a pipeline,"}, {"start": 190.0, "end": 192.0, "text": " one would need to show that the movement,"}, {"start": 192.0, "end": 196.0, "text": " this way, is about larger than this way in some notion."}, {"start": 196.0, "end": 201.0, "text": " And so I've, I've, I've found, I've actually spoken"}, {"start": 201.0, "end": 205.0, "text": " to the author of that paper, and he agrees with that,"}, {"start": 205.0, "end": 210.0, "text": " but obviously doesn't, doesn't have, like, energy"}, {"start": 210.0, "end": 213.0, "text": " to go into every, you know, go,"}, {"start": 213.0, "end": 217.0, "text": " go, review everything that comes at them."}, {"start": 217.0, "end": 219.0, "text": " They've also been a bunch of, like,"}, {"start": 219.0, "end": 222.0, "text": " they've also been exposed to a lot of criticism,"}, {"start": 222.0, "end": 224.0, "text": " let's say, as have you."}, {"start": 224.0, "end": 229.0, "text": " And I think even more, when your paper came out,"}, {"start": 229.0, "end": 233.0, "text": " I think, the four days, there was just a giant storm"}, {"start": 233.0, "end": 238.0, "text": " of people attacking your paper, basically."}, {"start": 238.0, "end": 243.0, "text": " Just listing every single thing that's wrong with it"}, {"start": 243.0, "end": 247.0, "text": " and why this isn't valid and things like this."}, {"start": 247.0, "end": 252.0, "text": " So, so let's actually jump into what you did specifically."}, {"start": 252.0, "end": 256.0, "text": " So, if I'm, if I'm, can summarize,"}, {"start": 256.0, "end": 258.0, "text": " and you can then maybe correct,"}, {"start": 258.0, "end": 262.0, "text": " so that we can establish what happened."}, {"start": 262.0, "end": 264.0, "text": " You basically collected recommendations."}, {"start": 264.0, "end": 267.0, "text": " So you scrape these videos on YouTube,"}, {"start": 267.0, "end": 269.0, "text": " and you collected these recommendations."}, {"start": 269.0, "end": 272.0, "text": " And we can, we can see this."}, {"start": 272.0, "end": 278.0, "text": " So in your paper, you then can make such diagrams,"}, {"start": 278.0, "end": 282.0, "text": " such as this one, or these."}, {"start": 282.0, "end": 287.0, "text": " Where in the middle, the white bar is a channel"}, {"start": 287.0, "end": 289.0, "text": " or a group that you're interested in."}, 
{"start": 289.0, "end": 293.0, "text": " And then to the left, you can see where all the impressions"}, {"start": 293.0, "end": 297.0, "text": " of that channel or group come from."}, {"start": 297.0, "end": 300.0, "text": " So what's, where basically the views come from"}, {"start": 300.0, "end": 302.0, "text": " through the recommendation system."}, {"start": 302.0, "end": 305.0, "text": " And on the right, you can see of all the views"}, {"start": 305.0, "end": 307.0, "text": " that channel has retrieved, where do they go to?"}, {"start": 307.0, "end": 310.0, "text": " So what, what, what is recommended next?"}, {"start": 310.0, "end": 313.0, "text": " Right? So it basically shows both directions for,"}, {"start": 313.0, "end": 314.0, "text": " for every group."}, {"start": 314.0, "end": 318.0, "text": " And then you've also labeled these by multiple methods"}, {"start": 318.0, "end": 321.0, "text": " so that you can kind of establish these groups."}, {"start": 321.0, "end": 325.0, "text": " And what is pretty cool, you've built this website,"}, {"start": 325.0, "end": 327.0, "text": " where you can analyze this."}, {"start": 327.0, "end": 331.0, "text": " And my computer is a bit overloaded at the moment,"}, {"start": 331.0, "end": 334.0, "text": " but I promise this is really interactive."}, {"start": 334.0, "end": 337.0, "text": " All right, so during the interview, my computer crashed."}, {"start": 337.0, "end": 340.0, "text": " So I'm doing this in post-production."}, {"start": 340.0, "end": 342.0, "text": " Just to show you how this website operates."}, {"start": 342.0, "end": 344.0, "text": " So what you have here is an overview"}, {"start": 344.0, "end": 346.0, "text": " over all the channels of what,"}, {"start": 346.0, "end": 348.0, "text": " where recommendations were collected."}, {"start": 348.0, "end": 351.0, "text": " And they're grouped into groups."}, {"start": 351.0, "end": 353.0, "text": " For example, here I have the partisan left,"}, {"start": 353.0, "end": 355.0, "text": " central left, social just,"}, {"start": 355.0, "end": 357.0, "text": " as partisan right and so on."}, {"start": 357.0, "end": 359.0, "text": " So you can see for each group or channel"}, {"start": 359.0, "end": 362.0, "text": " where recommendations come from and where they go to."}, {"start": 362.0, "end": 365.0, "text": " For example, the large red one here,"}, {"start": 365.0, "end": 368.0, "text": " I happen to know, that is Fox News."}, {"start": 368.0, "end": 372.0, "text": " You can see Fox News of the daily impression"}, {"start": 372.0, "end": 377.0, "text": " it received from itself about 36 million impressions"}, {"start": 377.0, "end": 379.0, "text": " and it gives to itself 36 million."}, {"start": 379.0, "end": 381.0, "text": " These numbers have to agree by nature"}, {"start": 381.0, "end": 384.0, "text": " of how the data is collected, of course."}, {"start": 384.0, "end": 387.0, "text": " But you can also see it gets 2.7 million impressions"}, {"start": 387.0, "end": 390.0, "text": " from CNN, 2.6 million from the next news network,"}, {"start": 390.0, "end": 391.0, "text": " and so on."}, {"start": 391.0, "end": 394.0, "text": " And it also gives quite a bit of recommendations"}, {"start": 394.0, "end": 396.0, "text": " to CNN and so on."}, {"start": 396.0, "end": 398.0, "text": " So you can go, for example,"}, {"start": 398.0, "end": 401.0, "text": " at some individual channel,"}, {"start": 401.0, "end": 402.0, "text": " here's the daily wire."}, {"start": 402.0, "end": 406.0, 
"text": " The daily wire is mainly run by Ben Shapiro."}, {"start": 406.0, "end": 408.0, "text": " So it's a bit more to the right of Fox News"}, {"start": 408.0, "end": 412.0, "text": " and a bit more on the direction of alternative media."}, {"start": 412.0, "end": 415.0, "text": " You can see the daily wire gets some"}, {"start": 415.0, "end": 417.0, "text": " most of its impression,"}, {"start": 417.0, "end": 420.0, "text": " countwise from itself, from the daily wire."}, {"start": 420.0, "end": 424.0, "text": " But it gives most of it to Fox News."}, {"start": 424.0, "end": 427.0, "text": " So actually, you can see here,"}, {"start": 427.0, "end": 431.0, "text": " itself is a long way down,"}, {"start": 431.0, "end": 434.0, "text": " like in whatever a 6th or 7th place."}, {"start": 434.0, "end": 437.0, "text": " So actually, if you were to watch the daily wire,"}, {"start": 437.0, "end": 440.0, "text": " the recommendation system would most likely steer you"}, {"start": 440.0, "end": 442.0, "text": " towards something like Fox News."}, {"start": 442.0, "end": 446.0, "text": " Whereas the claim is that the YouTube algorithm"}, {"start": 446.0, "end": 450.0, "text": " would actually steer you towards more radical content."}, {"start": 450.0, "end": 452.0, "text": " Actually, in reality,"}, {"start": 452.0, "end": 455.0, "text": " it seems like it would steer you towards"}, {"start": 455.0, "end": 457.0, "text": " more of these mainstream content."}, {"start": 457.0, "end": 459.0, "text": " All right, so actually, I want to go to this."}, {"start": 459.0, "end": 461.0, "text": " You can see different groupings here."}, {"start": 461.0, "end": 464.0, "text": " And the radicalization pathways"}, {"start": 464.0, "end": 467.0, "text": " is the previous paper we have looked at."}, {"start": 467.0, "end": 470.0, "text": " So they have all these channels here"}, {"start": 470.0, "end": 472.0, "text": " of this radicalization pathway."}, {"start": 472.0, "end": 474.0, "text": " And you can see here,"}, {"start": 474.0, "end": 479.0, "text": " the control group gives very, very, very few impressions"}, {"start": 479.0, "end": 482.0, "text": " to the IDW."}, {"start": 482.0, "end": 486.0, "text": " The IDW gives much more impressions to the control group."}, {"start": 486.0, "end": 487.0, "text": " Right?"}, {"start": 487.0, "end": 491.0, "text": " Again, and the IDW gives very few impressions"}, {"start": 491.0, "end": 495.0, "text": " to the alt light compared to the amount of impressions"}, {"start": 495.0, "end": 499.0, "text": " the alt light gives to the IDW and even to the control group."}, {"start": 499.0, "end": 501.0, "text": " And if you look at the alt right,"}, {"start": 501.0, "end": 503.0, "text": " and we're going to zoom in on that here,"}, {"start": 503.0, "end": 504.0, "text": " it's even more."}, {"start": 504.0, "end": 506.0, "text": " So the alt right, of course,"}, {"start": 506.0, "end": 508.0, "text": " receives most of its impressions from itself,"}, {"start": 508.0, "end": 510.0, "text": " which you could expect for any kind of group."}, {"start": 510.0, "end": 513.0, "text": " This is your classic filter bubble situation."}, {"start": 513.0, "end": 518.0, "text": " But if we analyze the question of, is there a pipeline?"}, {"start": 518.0, "end": 522.0, "text": " You can see that next most likely,"}, {"start": 522.0, "end": 526.0, "text": " you are diverted to the IDW and to the control group"}, {"start": 526.0, "end": 531.0, "text": " much more than you come from the 
IDW or the control group."}, {"start": 531.0, "end": 532.0, "text": " Right?"}, {"start": 532.0, "end": 534.0, "text": " Let's look at the alt light, so-called,"}, {"start": 534.0, "end": 537.0, "text": " this is kind of the so-called gateway to the alt right."}, {"start": 537.0, "end": 541.0, "text": " You can see here, the alt light gives most of its impressions"}, {"start": 541.0, "end": 545.0, "text": " next to itself to the control group and the IDW."}, {"start": 545.0, "end": 547.0, "text": " So de-radicalizing."}, {"start": 547.0, "end": 550.0, "text": " If you look at its way to the alt right,"}, {"start": 550.0, "end": 554.0, "text": " you'll see that it gets about four times as much impressions"}, {"start": 554.0, "end": 558.0, "text": " from the alt right as it gives to the alt right."}, {"start": 558.0, "end": 562.0, "text": " So basically, it's kind of taking the steam out of a quarter"}, {"start": 562.0, "end": 566.0, "text": " of all of these sessions and gives it to either the control group"}, {"start": 566.0, "end": 569.0, "text": " or the IDW or itself."}, {"start": 569.0, "end": 574.0, "text": " So this is exactly the opposite of what you would expect"}, {"start": 574.0, "end": 577.0, "text": " if you were to claim that there is a pipeline."}, {"start": 577.0, "end": 580.0, "text": " You would expect their most recommendations to come"}, {"start": 580.0, "end": 584.0, "text": " from more moderate content and go towards more extreme content."}, {"start": 584.0, "end": 586.0, "text": " But it's exactly opposite."}, {"start": 586.0, "end": 590.0, "text": " And again, these are the exact channels that this original paper used."}, {"start": 590.0, "end": 594.0, "text": " Now, what this paper found, the one that we're discussing,"}, {"start": 594.0, "end": 596.0, "text": " if you go to media type here,"}, {"start": 596.0, "end": 599.0, "text": " what you'll be able to see is the division"}, {"start": 599.0, "end": 601.0, "text": " into mainstream media, YouTube creator,"}, {"start": 601.0, "end": 603.0, "text": " and so-called missing link media,"}, {"start": 603.0, "end": 605.0, "text": " which will leave out for a moment."}, {"start": 605.0, "end": 608.0, "text": " Let's focus on mainstream versus YouTube creators."}, {"start": 608.0, "end": 612.0, "text": " You can see the mainstream media gives most recommendations"}, {"start": 612.0, "end": 617.0, "text": " to itself while giving only very little recommendations"}, {"start": 617.0, "end": 620.0, "text": " to YouTube creators and the missing link media."}, {"start": 620.0, "end": 624.0, "text": " While the YouTube creators actually give almost half"}, {"start": 624.0, "end": 626.0, "text": " of their impressions, look at that."}, {"start": 626.0, "end": 629.0, "text": " They give almost half of their impressions"}, {"start": 629.0, "end": 632.0, "text": " to the mainstream media,"}, {"start": 632.0, "end": 638.0, "text": " which means that there is a big, big push by the algorithm"}, {"start": 638.0, "end": 643.0, "text": " to words these mainstream media away from YouTube creators."}, {"start": 643.0, "end": 648.0, "text": " So in general, and I invite you to look at this website."}, {"start": 648.0, "end": 651.0, "text": " In general, you can pretty much see"}, {"start": 651.0, "end": 656.0, "text": " that the exact opposite of a radicalization pipeline"}, {"start": 656.0, "end": 658.0, "text": " is happening if you, of course,"}, {"start": 658.0, "end": 660.0, "text": " if you look at these recommendations"}, {"start": 
660.0, "end": 663.0, "text": " and how they are distributed, actually,"}, {"start": 663.0, "end": 669.0, "text": " most recommendation pathways are towards moderate centrist content"}, {"start": 669.0, "end": 673.0, "text": " and, of course, creating filter bubbles,"}, {"start": 673.0, "end": 674.0, "text": " which is a problem by itself,"}, {"start": 674.0, "end": 677.0, "text": " but is not a radicalization pipeline."}, {"start": 677.0, "end": 680.0, "text": " Lastly, I want to look at white annotations"}, {"start": 680.0, "end": 684.0, "text": " because it's one of the groups that people are referring to"}, {"start": 684.0, "end": 688.0, "text": " when they claim that there are these radicalization pipelines."}, {"start": 688.0, "end": 689.0, "text": " Look at that."}, {"start": 689.0, "end": 691.0, "text": " So of the white annotatorian,"}, {"start": 691.0, "end": 694.0, "text": " they get most of their impressions,"}, {"start": 694.0, "end": 698.0, "text": " of course, from other white annotatorian videos,"}, {"start": 698.0, "end": 701.0, "text": " which would be your filter bubble phenomenon,"}, {"start": 701.0, "end": 705.0, "text": " but they give most, and this is a group, right?"}, {"start": 705.0, "end": 708.0, "text": " The white annotatorian channels give most"}, {"start": 708.0, "end": 712.0, "text": " of their recommendations to the parsing right,"}, {"start": 712.0, "end": 716.0, "text": " to the central and left mainstream media,"}, {"start": 716.0, "end": 718.0, "text": " libertarians, and so on,"}, {"start": 718.0, "end": 724.0, "text": " and themselves are like really, really, really far down."}, {"start": 724.0, "end": 728.0, "text": " So to claim that there is this radicalization pipeline,"}, {"start": 728.0, "end": 730.0, "text": " if you look at this data,"}, {"start": 730.0, "end": 733.0, "text": " to me seems not justified from this data,"}, {"start": 733.0, "end": 736.0, "text": " and if I look at the other paper"}, {"start": 736.0, "end": 739.0, "text": " that really left out the important analysis"}, {"start": 739.0, "end": 742.0, "text": " of the backwards direction,"}, {"start": 742.0, "end": 745.0, "text": " it seems that given everything,"}, {"start": 745.0, "end": 747.0, "text": " it seems that the claim is not warranted."}, {"start": 747.0, "end": 750.0, "text": " All right, back to the interview."}, {"start": 750.0, "end": 754.0, "text": " Is that about what you've done?"}, {"start": 754.0, "end": 760.0, "text": " Is that a good summary of the data collection and analysis?"}, {"start": 760.0, "end": 763.0, "text": " Yeah, it's a good summary."}, {"start": 763.0, "end": 765.0, "text": " I can go into detail."}, {"start": 765.0, "end": 768.0, "text": " Yeah, please."}, {"start": 768.0, "end": 771.0, "text": " So YouTube doesn't make it easy."}, {"start": 771.0, "end": 775.0, "text": " So I started this back in November in 2018."}, {"start": 775.0, "end": 778.0, "text": " I was using the YouTube API,"}, {"start": 778.0, "end": 781.0, "text": " and to get enough quota,"}, {"start": 781.0, "end": 783.0, "text": " because they limit the amount of requests"}, {"start": 783.0, "end": 785.0, "text": " you can actually make to their API,"}, {"start": 785.0, "end": 786.0, "text": " I created multiple keys,"}, {"start": 786.0, "end": 789.0, "text": " which is against their policy,"}, {"start": 789.0, "end": 791.0, "text": " and they also asked you to delete"}, {"start": 791.0, "end": 793.0, "text": " all your data after 30 days."}, {"start": 793.0, "end": 795.0, 
"text": " That's also part of their policy."}, {"start": 795.0, "end": 802.0, "text": " So later, I think it was October 2019,"}, {"start": 802.0, "end": 803.0, "text": " they cut off my access"}, {"start": 803.0, "end": 805.0, "text": " because I was doing that."}, {"start": 805.0, "end": 808.0, "text": " So I had to move to just scraping websites,"}, {"start": 808.0, "end": 810.0, "text": " and now my collection process"}, {"start": 810.0, "end": 812.0, "text": " actually just loads up the website"}, {"start": 812.0, "end": 813.0, "text": " and gets the recommendations"}, {"start": 813.0, "end": 817.0, "text": " from the actual page, like the usual."}, {"start": 817.0, "end": 820.0, "text": " And that's difficult"}, {"start": 820.0, "end": 824.0, "text": " because they block access after a couple of hundred requests."}, {"start": 824.0, "end": 826.0, "text": " They'll stop you that machine"}, {"start": 826.0, "end": 828.0, "text": " from actually requesting from the website."}, {"start": 828.0, "end": 831.0, "text": " So I need to use a proxy service"}, {"start": 831.0, "end": 834.0, "text": " that's fairly expensive,"}, {"start": 834.0, "end": 836.0, "text": " and what they do is they simulate"}, {"start": 836.0, "end": 838.0, "text": " where they have actual residential connections"}, {"start": 838.0, "end": 841.0, "text": " through your home connection, like AT&T,"}, {"start": 841.0, "end": 845.0, "text": " and my request gets tunneled through that,"}, {"start": 845.0, "end": 848.0, "text": " and like a variety of locations in the States"}, {"start": 848.0, "end": 852.0, "text": " to get a representative kind of sample."}, {"start": 855.0, "end": 858.0, "text": " Cool. So the data collection is..."}, {"start": 858.0, "end": 861.0, "text": " would you say that's the hardest part?"}, {"start": 861.0, "end": 865.0, "text": " I feel the labeling of channels is also not so easy,"}, {"start": 865.0, "end": 869.0, "text": " but you've managed to kind of do it half automated,"}, {"start": 869.0, "end": 872.0, "text": " also half collecting things"}, {"start": 872.0, "end": 874.0, "text": " from kind of sources"}, {"start": 874.0, "end": 876.0, "text": " that don't analyze these channels."}, {"start": 876.0, "end": 880.0, "text": " But at least for most of the things that I've inspected,"}, {"start": 880.0, "end": 883.0, "text": " I found the labeling to be pretty sane."}, {"start": 883.0, "end": 886.0, "text": " I think this is always something you can attack."}, {"start": 886.0, "end": 890.0, "text": " The original paper was also attacked on how they label."}, {"start": 890.0, "end": 894.0, "text": " I find this to be kind of big-rish."}, {"start": 894.0, "end": 897.0, "text": " Mostly I think your labels are pretty good"}, {"start": 897.0, "end": 901.0, "text": " as well. 
The other paper labels are also mostly pretty okay."}, {"start": 901.0, "end": 903.0, "text": " So let's go to..."}, {"start": 903.0, "end": 905.0, "text": " Sorry."}, {"start": 905.0, "end": 907.0, "text": " Yeah, it's quite subjective."}, {"start": 907.0, "end": 911.0, "text": " I expected the labeling to be what I get my push back on,"}, {"start": 911.0, "end": 917.0, "text": " but it turns out it was the anonymous collection."}, {"start": 917.0, "end": 920.0, "text": " So what you've actually found here,"}, {"start": 920.0, "end": 924.0, "text": " what would you say are your main results?"}, {"start": 924.0, "end": 928.0, "text": " And I can maybe show..."}, {"start": 928.0, "end": 932.0, "text": " So you've analyzed a bit where do things come from,"}, {"start": 932.0, "end": 935.0, "text": " where do things go to?"}, {"start": 935.0, "end": 941.0, "text": " And I found this part here to be one of the..."}, {"start": 941.0, "end": 943.0, "text": " Even though it's pretty simple,"}, {"start": 943.0, "end": 947.0, "text": " one of the core things to say about this is that"}, {"start": 947.0, "end": 953.0, "text": " mostly what you found could be said is"}, {"start": 953.0, "end": 956.0, "text": " it's simply a recommendation algorithm"}, {"start": 956.0, "end": 959.0, "text": " working as a recommendation algorithm should,"}, {"start": 959.0, "end": 964.0, "text": " which means it creates your typical filter bubbles."}, {"start": 964.0, "end": 967.0, "text": " If I watch one makeup tutorial,"}, {"start": 967.0, "end": 970.0, "text": " all of a sudden my site is filled with makeup tutorials"}, {"start": 970.0, "end": 972.0, "text": " and things like this."}, {"start": 972.0, "end": 975.0, "text": " But also you found that there is quite some"}, {"start": 975.0, "end": 980.0, "text": " over the top push towards what could be considered mainstream media."}, {"start": 980.0, "end": 985.0, "text": " And there is a bit of a draw away from the smaller"}, {"start": 985.0, "end": 989.0, "text": " YouTuber-like channels."}, {"start": 989.0, "end": 993.0, "text": " Is that something that characterizes your work?"}, {"start": 993.0, "end": 995.0, "text": " That's right."}, {"start": 995.0, "end": 998.0, "text": " Yeah, that's a good way to characterize it."}, {"start": 998.0, "end": 1001.0, "text": " If that chart we're looking at now,"}, {"start": 1001.0, "end": 1004.0, "text": " if it was a neutral algorithm,"}, {"start": 1004.0, "end": 1008.0, "text": " the green bars would be the same as the grey ones."}, {"start": 1008.0, "end": 1012.0, "text": " So you'd receive the same amount of recommendations as you give."}, {"start": 1012.0, "end": 1016.0, "text": " The views you get organically,"}, {"start": 1016.0, "end": 1019.0, "text": " the recommendations should seem to be equivalent to that."}, {"start": 1019.0, "end": 1023.0, "text": " But we find that it disproportionately recommends mainstream media channels."}, {"start": 1023.0, "end": 1026.0, "text": " That's not even though so it's not like..."}, {"start": 1026.0, "end": 1029.0, "text": " It doesn't look like it's consistently doing that."}, {"start": 1029.0, "end": 1032.0, "text": " So you can find exceptions to that."}, {"start": 1032.0, "end": 1039.0, "text": " I believe one of the main criticisms of your paper has been"}, {"start": 1039.0, "end": 1045.0, "text": " that you only use data from 2019 onwards."}, {"start": 1045.0, "end": 1048.0, "text": " And I have actually looked at your website."}, {"start": 1048.0, "end": 1051.0, "text":
" And your website a lot of times says that the data has been collected"}, {"start": 1051.0, "end": 1055.0, "text": " from way earlier than that."}, {"start": 1055.0, "end": 1062.0, "text": " So is it that you've only used 2019 data in your paper?"}, {"start": 1062.0, "end": 1064.0, "text": " Or what is it?"}, {"start": 1064.0, "end": 1069.0, "text": " The paper is just from November and December 2019."}, {"start": 1069.0, "end": 1073.0, "text": " And the reason we did that"}, {"start": 1073.0, "end": 1078.0, "text": " is that we only had 400 channels before that."}, {"start": 1078.0, "end": 1081.0, "text": " And the collection process has changed over time."}, {"start": 1081.0, "end": 1083.0, "text": " So this is a clean set of data we could look at."}, {"start": 1083.0, "end": 1085.0, "text": " And I thought the most recent was the most relevant."}, {"start": 1085.0, "end": 1087.0, "text": " So what is it doing now?"}, {"start": 1087.0, "end": 1088.0, "text": " But I've provided..."}, {"start": 1088.0, "end": 1090.0, "text": " I've got the same analysis over time."}, {"start": 1090.0, "end": 1092.0, "text": " So I've got a gift that I made."}, {"start": 1092.0, "end": 1096.0, "text": " I can look at which goes through all the months I've been collecting."}, {"start": 1096.0, "end": 1099.0, "text": " And you can see that chart for where it goes to."}, {"start": 1099.0, "end": 1101.0, "text": " And it has gone through a bunch of changes."}, {"start": 1101.0, "end": 1103.0, "text": " So in about April 2019,"}, {"start": 1103.0, "end": 1107.0, "text": " that's when I really clamped down on conspiracies and other French channels."}, {"start": 1107.0, "end": 1109.0, "text": " Before that was..."}, {"start": 1109.0, "end": 1112.0, "text": " It was much closer to neutral."}, {"start": 1112.0, "end": 1115.0, "text": " But it never looked like a rabbit hole."}, {"start": 1115.0, "end": 1117.0, "text": " It was never a favorite."}, {"start": 1117.0, "end": 1118.0, "text": " French channels."}, {"start": 1118.0, "end": 1122.0, "text": " Yeah, I mean, that has been my experience also person on YouTube."}, {"start": 1122.0, "end": 1124.0, "text": " I've joined YouTube very early."}, {"start": 1124.0, "end": 1131.0, "text": " Or I've watched YouTube very early when young earth creationism was still active."}, {"start": 1131.0, "end": 1136.0, "text": " And these things were kind of completely discredited by simply having you..."}, {"start": 1136.0, "end": 1140.0, "text": " having people exposed to other points of view."}, {"start": 1140.0, "end": 1142.0, "text": " And even I find this now."}, {"start": 1142.0, "end": 1147.0, "text": " Even though YouTube makes it kind of easy to find, let's say, niche content,"}, {"start": 1147.0, "end": 1152.0, "text": " it also exposes you to a bunch of different views."}, {"start": 1152.0, "end": 1159.0, "text": " And I've always found this to be very, very optimistic in the sense of..."}, {"start": 1159.0, "end": 1163.0, "text": " this is probably deradi-kalizing much more people than radicalizing."}, {"start": 1163.0, "end": 1168.0, "text": " But you've received, like as I said, a bunch of criticism."}, {"start": 1168.0, "end": 1170.0, "text": " So if you could..."}, {"start": 1170.0, "end": 1173.0, "text": " What was the largest criticism?"}, {"start": 1173.0, "end": 1176.0, "text": " Irrespective of whether it was valid or not."}, {"start": 1176.0, "end": 1182.0, "text": " What if you found was kind of what most people were criticizing?"}, {"start": 1182.0, 
"end": 1187.0, "text": " Most people were criticizing that we were collecting anonymous recommendations."}, {"start": 1187.0, "end": 1189.0, "text": " It wasn't the personalized ones."}, {"start": 1189.0, "end": 1191.0, "text": " And it's actually..."}, {"start": 1191.0, "end": 1193.0, "text": " It is a valid limitation."}, {"start": 1193.0, "end": 1197.0, "text": " There's a first limitation we talked about in this paper."}, {"start": 1197.0, "end": 1204.0, "text": " And it's still an open question how personalization would affect these aggregate results that we've got."}, {"start": 1204.0, "end": 1209.0, "text": " But I think it's reasonable to assume it will be quite similar once you average it out."}, {"start": 1209.0, "end": 1212.0, "text": " So if for anyone person it might be different."}, {"start": 1212.0, "end": 1217.0, "text": " But you would expect personalization based on someone's history to even out."}, {"start": 1217.0, "end": 1222.0, "text": " Because it's kind of the algorithm that the average of all that when it's anonymous."}, {"start": 1222.0, "end": 1225.0, "text": " Yeah, I feel this is something."}, {"start": 1225.0, "end": 1228.0, "text": " The notion that..."}, {"start": 1228.0, "end": 1231.0, "text": " Because if you're not logged in,"}, {"start": 1231.0, "end": 1236.0, "text": " the recommendation is like a person with only one video of history."}, {"start": 1236.0, "end": 1241.0, "text": " So it's the same thing, but there's only one point of history instead of multiple."}, {"start": 1241.0, "end": 1247.0, "text": " I find why should the behavior be qualitatively different?"}, {"start": 1247.0, "end": 1249.0, "text": " If you have multiple points of history,"}, {"start": 1249.0, "end": 1256.0, "text": " this is a strong claim that you have to really show that there is a qualitative difference."}, {"start": 1256.0, "end": 1260.0, "text": " Not just a more or less accuracy."}, {"start": 1260.0, "end": 1262.0, "text": " And I feel the people making this criticism are..."}, {"start": 1262.0, "end": 1267.0, "text": " It's really on them to show that there is a substantial difference,"}, {"start": 1267.0, "end": 1273.0, "text": " rather than saying that this is a giant limitation of the work."}, {"start": 1273.0, "end": 1277.0, "text": " Yeah, and that's also very hypocritical for a lot of the people saying it."}, {"start": 1277.0, "end": 1284.0, "text": " Because some of them like Zinep, who was mockingly saying that her article,"}, {"start": 1284.0, "end": 1288.0, "text": " her original article in New York Times used Algo Transparency,"}, {"start": 1288.0, "end": 1292.0, "text": " which is anonymous as well, but she never looked into that."}, {"start": 1292.0, "end": 1294.0, "text": " I think a lot of this is completely motivated,"}, {"start": 1294.0, "end": 1298.0, "text": " reasoning that don't care about the details."}, {"start": 1298.0, "end": 1301.0, "text": " I've seen this one Twitter user."}, {"start": 1301.0, "end": 1305.0, "text": " She said something to the effect of,"}, {"start": 1305.0, "end": 1313.0, "text": " if you've seen this article, please consult someone that works in this space."}, {"start": 1313.0, "end": 1321.0, "text": " Please don't retarticle yourself. 
You must get your information through someone."}, {"start": 1321.0, "end": 1324.0, "text": " I've read the article."}, {"start": 1324.0, "end": 1326.0, "text": " I find it's pretty straightforward."}, {"start": 1326.0, "end": 1330.0, "text": " The limitations are clear, but also the results are pretty clear."}, {"start": 1330.0, "end": 1333.0, "text": " It's actually mostly a boring article."}, {"start": 1333.0, "end": 1336.0, "text": " I'm sorry, it's not a criticism."}, {"start": 1336.0, "end": 1337.0, "text": " This is good."}, {"start": 1337.0, "end": 1341.0, "text": " It's mostly you find that things work as expected."}, {"start": 1341.0, "end": 1344.0, "text": " There is a bit of a push towards mainstream,"}, {"start": 1344.0, "end": 1349.0, "text": " which can be probably explained in that YouTube wants to be advertiser friendly."}, {"start": 1349.0, "end": 1354.0, "text": " These mainstream channels already are advertiser friendly,"}, {"start": 1354.0, "end": 1358.0, "text": " so they probably get bumped a bit."}, {"start": 1358.0, "end": 1365.0, "text": " What would you say is maybe the most valid criticism that you've heard,"}, {"start": 1365.0, "end": 1367.0, "text": " maybe not the biggest, but the most?"}, {"start": 1367.0, "end": 1372.0, "text": " Where do you say, yeah, this is really something that is..."}, {"start": 1374.0, "end": 1381.0, "text": " I think there was criticism that I'm over claiming not in the paper so much,"}, {"start": 1381.0, "end": 1384.0, "text": " but in my tweets and Medium."}, {"start": 1384.0, "end": 1388.0, "text": " I guess that's fair, but when I tweet and write on Medium,"}, {"start": 1388.0, "end": 1391.0, "text": " those are what I believe in kind of a Bayesian way."}, {"start": 1391.0, "end": 1396.0, "text": " I'm not hedging my claims like you would when you're writing a paper."}, {"start": 1396.0, "end": 1401.0, "text": " So I guess that's valid."}, {"start": 1401.0, "end": 1405.0, "text": " I think a lot of people read into what I was saying more than what I was."}, {"start": 1405.0, "end": 1409.0, "text": " So when I say the algorithm has a de-radicalizing influence,"}, {"start": 1409.0, "end": 1411.0, "text": " I'm just talking about the recommendations,"}, {"start": 1411.0, "end": 1416.0, "text": " whereas a lot of people consider that to be talking about all things considered."}, {"start": 1416.0, "end": 1420.0, "text": " So even if it doesn't have a bias towards fringe,"}, {"start": 1420.0, "end": 1424.0, "text": " maybe sociologically YouTube radicalizes people,"}, {"start": 1424.0, "end": 1427.0, "text": " it could be the case, I don't know."}, {"start": 1427.0, "end": 1429.0, "text": " But that's what I'm talking about."}, {"start": 1429.0, "end": 1432.0, "text": " I'm talking about just the influence through recommendations."}, {"start": 1432.0, "end": 1435.0, "text": " And that's all we can hold Google accountable for,"}, {"start": 1435.0, "end": 1439.0, "text": " or at least it's what probably all could agree that Google should be held accountable for"}, {"start": 1439.0, "end": 1442.0, "text": " with its recommendation system."}, {"start": 1443.0, "end": 1447.0, "text": " Yeah, do you expect something to come out of..."}, {"start": 1447.0, "end": 1450.0, "text": " Or have you heard something to come out of YouTube themselves,"}, {"start": 1450.0, "end": 1455.0, "text": " like the company, any form of official statement to this?"}, {"start": 1457.0, "end": 1459.0, "text": " Nothing, nothing at all."}, {"start": 1459.0, "end": 
1464.0, "text": " I got a vague report that was complaining that YouTube sent them this."}, {"start": 1464.0, "end": 1467.0, "text": " So I think they've read it,"}, {"start": 1467.0, "end": 1471.0, "text": " but I have no, absolutely no contact with them."}, {"start": 1471.0, "end": 1473.0, "text": " Okay."}, {"start": 1473.0, "end": 1478.0, "text": " Cool. Are you doing anything in follow-up or do you have plans for more research?"}, {"start": 1478.0, "end": 1482.0, "text": " Not at this. I've just gone back to work."}, {"start": 1482.0, "end": 1486.0, "text": " I've applied for a bunch of independent grant money,"}, {"start": 1486.0, "end": 1488.0, "text": " but I'm not optimistic."}, {"start": 1488.0, "end": 1491.0, "text": " So if I don't get that, I'll keep it paddling along."}, {"start": 1491.0, "end": 1494.0, "text": " I'll probably reduce the amount of recommendations,"}, {"start": 1494.0, "end": 1499.0, "text": " because I'm spending about $500 a month at the moment,"}, {"start": 1499.0, "end": 1501.0, "text": " just keeping it running."}, {"start": 1501.0, "end": 1503.0, "text": " So I've got to, I've got to reduce my costs."}, {"start": 1503.0, "end": 1507.0, "text": " Yeah. And you do have a Patreon for people to chip into that, right?"}, {"start": 1507.0, "end": 1510.0, "text": " Yeah. So if you can link to that, that'd be good."}, {"start": 1510.0, "end": 1514.0, "text": " So if I'm getting something like $22 a month,"}, {"start": 1514.0, "end": 1516.0, "text": " so it doesn't really cover it."}, {"start": 1516.0, "end": 1518.0, "text": " Yeah. All right."}, {"start": 1518.0, "end": 1521.0, "text": " So, okay, this has been very, very pleasant."}, {"start": 1521.0, "end": 1524.0, "text": " I think we've kind of looked at a lot of things."}, {"start": 1524.0, "end": 1528.0, "text": " Is there anything you would like to amend to this"}, {"start": 1528.0, "end": 1533.0, "text": " that people should know about the research or about this field?"}, {"start": 1533.0, "end": 1537.0, "text": " I would just have a, I'd encourage you to have a play"}, {"start": 1537.0, "end": 1541.0, "text": " digging into the data itself, this, if you're in this area,"}, {"start": 1541.0, "end": 1544.0, "text": " the data is free to use, the code is free to use."}, {"start": 1544.0, "end": 1548.0, "text": " Just consider this a contribution to knowledge."}, {"start": 1548.0, "end": 1552.0, "text": " Cool. Well, thanks a lot, Mark."}, {"start": 1552.0, "end": 1556.0, "text": " I wish you a very pleasant evening for you, I guess."}, {"start": 1556.0, "end": 1559.0, "text": " And cheers. Thanks."}, {"start": 1559.0, "end": 1561.0, "text": " Thanks having me. Bye."}, {"start": 1561.0, "end": 1562.0, "text": " Bye."}, {"start": 1562.0, "end": 1563.0, "text": " Bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=i4H0kjxrias | Reformer: The Efficient Transformer | The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: Its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced. Not only does that mean the model uses less memory, but it can process much longer input sequences, up to 16K tokens with just 16gb of memory!
https://arxiv.org/abs/2001.04451
https://ai.googleblog.com/2020/01/reformer-efficient-transformer.html
Abstract:
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L2) to O(LlogL), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
Authors: Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we'll look at Reformer, the efficient transformer by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. This is a paper that tries to reduce the extreme resource requirements of the transformer model. Now if you haven't seen the transformer model before, that's this thing. I suggest you go watch, for example, my video on it, attention is all you need, it's called, where the transformer is introduced. The most famous transformer is called BERT, B-E-R-T, and you can also look that up. I've made a video about this. So what's the issue here? If you remember Transformers, they need a lot of memory and why? That's because they compute in each layer, they compute these attention things. Let's recap shortly. In a transformer, you propagate information layer by layer. So you have a layer here with some signal and then the next layer that you try to propagate the signal to. Now what you do, you assign queries to each of the next layers. So each of the next layer has queries and queries are just vectors. This is a vector, this is a vector, this is a vector, and so on. So basically the next layer has the ability to ask the last layer what it wants. This is a kind of intrinsic property of attention and I, as I said, I explain this in detail in the video attention is all you need. Basically these are what's called queries, Q, and then this layer is exposing what are called keys and keys again are vectors. So vector, vector, vector, vector, and so on. So keys are vectors and the way that the information is propagated to the next layer is whenever, whatever, we consider for example this node here, right, this node. Let's make that yellow. When we consider this node here, it is going to look in the last layer which, which keys match my query the most. And in this case it would probably be this key and this key, right, they match the query the most. And here we look at the inner product, so the angle between the vectors. And then information is aggregated by simply having a weighted average of the values. So information is coming in here and here actually information is coming into all the nodes. But since only these keys match, the information will be propagated like this to this unit. We can do this for another unit, for example, this unit right here. What's the value of this unit? Well, we have to look at the key here, which key is it going to be matched to? It's probably going to be matched to this key right here. And probably no other key really. Maybe this key a little bit. So the information of that node in the next layer will be whatever information is coming in here, routed there, and a little bit of this information. So this is kind of a, it's not a hard, it's a, it's called soft attention. So there's a little bit of information going everywhere, but the majority of the information is coming from the nodes where the keys match. So these are queries, these are keys. And technically these things coming in here are called values. But imagine the values simply as the information to be propagated. And the queries and the keys are responsible for routing that information to the next layer. All of these things are learned. So the queries, the keys, and the values. Now what's the problem? The problem is between the queries and the keys. As you can see, what you have to do is you have to match every single query with every single key in order to find out where information goes.
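(A minimal NumPy sketch of the dense soft-attention routing described above. The function names, shapes, and the scaling by the square root of the dimension are illustrative assumptions of mine, not code from the paper.)

```python
# Minimal sketch of standard (dense) soft attention: every query is compared
# against every key, and values are routed as a softmax-weighted average.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dense_attention(Q, K, V):
    """Q, K, V: arrays of shape (seq_len, d)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])   # (seq_len, seq_len) score matrix
    weights = softmax(scores, axis=-1)        # soft routing weights per query
    return weights @ V                        # weighted average of the values

# toy usage
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 6, 4))          # three (seq_len=6, d=4) arrays
out = dense_attention(Q, K, V)                # shape (6, 4)
```

The (seq_len x seq_len) score matrix in this sketch is exactly the quadratic object discussed next; as a rough worked number (my arithmetic, not a figure from the paper), at a sequence length of 64,000 that matrix alone has 64,000² ≈ 4.1 billion entries, which at 4 bytes each is on the order of 16 GB for a single attention matrix.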
So this becomes order of, what if you have D keys and D queries order of D squared operations that you have to do. And of course, D squares, values that you have to compute. And since these are all vectors, of course, there is D will not only be the number of keys, but then again, this is multiplied. So there's an inner multiplication with the dimensionality. Let's call that capital, the of the, no, sorry, that's not an inner multiplication. Let's just remain at this. So these squared inner products between vectors of capital D dimensions. So it's not an, it's not an easy thing for resources to do. You need a lot of resources to hold this, all of this in memory at the same time and to compute all of these things. The reformer aims to solve this problem. So this, this giant space problem that the transformers have, space, memory, also computational problem to a lesser degree. Mostly it's a memory issue. All right. So what is happening here? And you see, you see here that this, this product between two matrices clearly gives you this kind of squared, um, squared thing. So what's happening in the reformer to do this? The trick is, the trick is, if we go back to this drawing, the trick is to create what's called a hashing scheme or buckets. And in creating buckets, what you want to do is you want to group similar things together. So let's say we create four buckets. Bucket one, bucket two, bucket three, bucket four. Right. And each bucket we label and bucket one we label with the up direction is with the right direction with the down direction, the left direction as vectors. And now we simply put each of the things into the bucket where it belongs most. So let's, for example, this vector here, it goes here. Sorry, that is like an absolutely not the right place. It goes probably here, right? And this vector here, probably this one goes here, right? And so on. So you'll end up each of these assigning a bucket. So this, these, these all go into into that bucket. Let's continue. Actually, let's also put the the keys in the same buckets. So also the keys, this key here probably goes to this bucket. This key here probably goes to this bucket. Let's say this key here probably goes to the bucket over here. You already see, so before, right before we cared about this particular query and this particular key, we just looked and we said those two will probably route information to each other because they're similar. And now you can see they both ended up in the same bucket. So the idea is to create a scheme where you throw these things into buckets such that if two vectors are similar, they will end up in the same bucket with high probability. So you'll only have to really compare things within the same bucket and not across all of these D squared elements. That's the idea. And the technique here is called locality sensitive hashing. So locality sensitive hashing. And short, this is called LSH. The idea is the following. If you have two vectors, V1 and V2, and they have and you have a distance measure, distance measure, D, is a distance. What you want is if the distance between V1 and V2 is small, I'm going to use this with small, then you want them in the same bucket. And if the distance is large, then you want them in a different bucket. Different buckets. You know, with high probability. So all of these things where you say you want them in the same bucket with probability P, with probability P, with high probability P. And here you want them in different buckets with high probability. 
Or you want them in the same bucket with low probability. That's an equivalent form of stating it. This is all formalized and I can direct you to the Wikipedia page of that. It's pretty good. It gives a concise definition. Here you can see that. And it gives a number of examples. So one example I'd like to give here for locality sensitive hashing is, of course the scheme of bucketing will all depend on what your distance measure is. If you consider the distance measure simply to be the Jaccard distance. So let's say we have two vectors 0, 1, 0, 1. And here we have 1, 0, 1, 1, 0, 1. And here it's 0, 0, 0, 1. All right. So maybe you can see the first two vectors here are much more close together than the last vector. Now in terms of bit differences. Right. One scheme to do locality sensitive hashing is to simply sub-sample bits. So in this case, this is a slightly constructed example. We will just sub-sample the first two bits and then construct the buckets according to these bit values. So since we sample two bits we have four buckets. Right. So here is 0, 0. Here is 0, 1. Here is 1, 0. And here is 1, 1. That's the concept of locality sensitive hashing. You have these buckets. Right. And then you can say, all right, this vector has 1, 0, goes into this, this goes into this. And then that goes into, sorry, the 0, 1 bucket. Right. And you end up with what you have, you have the two close vectors in the same bucket and the two far apart vectors in that bucket. Of course, that doesn't always work. You know, you can be unlucky in sub-sampling, but that's the kind of trade-off you'll have to go for. Right. If things that are close together happen, with a low probability, to end up in different buckets, then basically you lose the fact that they are close to each other. And that's the trade off. The kind of locality sensitive hashing they use in the reformer now is what are called random projections. So let's say you have a bunch of vectors. And that's really what we care about, right. You have a bunch of vectors and what you want, you want the keys and queries. So you have a bunch of vectors like this. And you want to create buckets such that vectors that are close together will end up in the same bucket and vectors that are far apart will end up in different buckets. The way, a cool way to do this is, and this is in the cosine distance. So we care about the angle between vectors, right. A cool way to do this is to use random plane projections. And the cool thing about it is it works for the cosine distance. And you can basically choose how many buckets you create, right. Let's say we want to create four buckets here again. What we need is two hyperplanes. And what we'll do is, so here is the origin, we'll simply create two hyperplanes through the origin at random. So I'm going to draw a random hyperplane here like this. And then a second random hyperplane like this. Right. So you would agree those are pretty random hyperplanes as much as I can be a random generator. And then we'll simply label. So this we'll label hyperplane one. This we'll label hyperplane two. Right. And now we simply assign each vector bits according to which side of the hyperplanes they lie on. So let's call this here the plus side and this here the minus side or even yeah, let's call this the plus and the minus and here also we call this the plus side and this the minus side.
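(Before the hyperplane picture continues below, here is a tiny sketch of the bit-sub-sampling scheme just described. The bit-vectors are an illustrative choice of mine, not the exact ones from the video.)

```python
# LSH by sub-sampling bits: the bucket is simply the first two bits of each vector.
def bucket(bits, n_sampled=2):
    return bits[:n_sampled]                # sub-sample the first n_sampled bits

a = (0, 1, 0, 1, 1, 0)                     # a and b differ in only two bits (close)
b = (0, 1, 1, 1, 0, 0)
c = (1, 0, 0, 0, 0, 1)                     # c differs from both in five bits (far)

buckets = {name: bucket(bits) for name, bits in {"a": a, "b": b, "c": c}.items()}
# {'a': (0, 1), 'b': (0, 1), 'c': (1, 0)}  -- the two close vectors share a bucket
```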
So this vector here, its signs are plus plus, right, because it's on the plus side of both hyperplanes. This vector plus plus, this one plus plus, this one here is on the negative side of plane two, but on the positive side of plane one. So it's plus minus, this one here minus minus, minus minus, minus minus, and these are your buckets. So you would group these vectors together because they have the same signs, you would group that vector. You would group these vectors together. The combination of this with the attention: since in attention you've seen attention uses a softmax and the softmax is dominated usually by the largest elements, and since we compute inner products it means that the softmax thing is dominated by the vectors that have large inner products. So basically you don't have to look at all of these D squared vectors if you can find the ones that have the closest distance. You can pretty much ignore the others and LSH allows you to do this. So build buckets of vectors with similar directions, then you only have to care about these vectors, comparing them to each other. So that's not a lot of vectors generally and that's how you save a lot of work. So you will only have to care about these three vectors if your key vector for example is right here. You'll only have to care about these things in the same bucket and you can ignore all of the rest of the space. Of course the more hyperplanes you have, the more buckets you'll have, the less vectors you'll have in the same bucket. That's the general idea. I find this explanation to be a bit easier. You can equivalently explain it by doing these kind of random rotations in this space. You can think about how that will end up actually being the exact same thing as what I just explained, but I just like my explanation better, I think. All right, so the way they use this, they have an illustration right here, is the following. So they have these keys, right, a sequence of queries and keys, so they do equivalent queries and keys, which is a thing you can do in transformers, don't worry too much about it whether they're different or not, but then they do this LSH bucketing, and here the color of the cell is just the bucket, the LSH bucket, at which it will end up. Then they sort that, right, as you can see, and now they do an additional thing, which is that they chunk.
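(A rough NumPy sketch of this angular LSH bucketing, together with the sort-and-chunk step that the next paragraph elaborates. The number of hyperplanes, the chunk size, the padding trick, and all names are illustrative assumptions, not the paper's implementation.)

```python
# Angular LSH via random hyperplanes, then sort positions by bucket and cut
# the sorted order into fixed-size chunks; attention would only be computed
# within a chunk (and, as described below, its neighboring chunk).
import numpy as np

def lsh_bucket_ids(x, n_hyperplanes=4, seed=0):
    """x: (seq_len, d). Returns one integer bucket id per vector (2**n_hyperplanes buckets)."""
    rng = np.random.default_rng(seed)
    normals = rng.normal(size=(x.shape[-1], n_hyperplanes))      # random hyperplane normals
    signs = (x @ normals) > 0                                     # which side of each hyperplane
    return signs.astype(int) @ (2 ** np.arange(n_hyperplanes))   # pack the sign bits into an id

def sort_and_chunk(bucket_ids, chunk_size=4):
    """Sort positions by bucket id, then split the sorted order into fixed-size chunks."""
    order = np.argsort(bucket_ids, kind="stable")
    n_pad = (-len(order)) % chunk_size
    order = np.concatenate([order, np.full(n_pad, -1)])          # -1 marks padding (toy choice)
    return order.reshape(-1, chunk_size)                         # each row: positions attended together

# toy usage: vectors pointing in similar directions tend to share a bucket, hence a chunk
x = np.random.default_rng(1).normal(size=(16, 8))
chunks = sort_and_chunk(lsh_bucket_ids(x))                       # shape (4, 4)
```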
The as you can see there are not the same amount of vectors in each bucket and that is sometimes a problem because even though you've you've reduced the memory the memory still the memory requirements are still dominated by the largest bucket right by whatever bucket has the most number of vectors that will pretty much be your memory requirement because now you don't you don't have to if this is d right you have to compute all the d squared things anymore but you'll only have to compute this quantity let's call that B so max the maximum bucket size but that could still be large right if you look at a distribution it's probably going to be something like this right where most buckets have a kind of a standard number of vectors but some buckets will have a lot of vectors and that's sorry some few buckets will have a lot of vectors and your memory requirement is still dominated by this so they do an additional thing which is called chunking which means they'd actually take fixed size chunks here fixed size here they always take four and they say all right these are our chunks and we will only compute attention within the chunks right so it could be that they're there's the same bucket is actually split between chunks and that's what they do an additional thing is that you can attend two things in a different chunk right here right you can attend two things in your neighboring chunks so you're restricted to either your own chunk or your neighboring chunk note that there aren't any any arrows going over here so you can you can attend have this diagram here which things you can attend to you can attend to yourself or attend to your neighboring thing but not to any other thing or the other way around right so that's basically the the concept of saving memory now your memory requirements are if we call this quantity now we call the other one b let's call this the chunk size c right your memory requirements are pretty much c squared plus whatever this unit directional so not this this isn't squared plus probably O of C something like this so you bring your memory requirements down quite a bit now that's the the general idea here the problem they face again is so they face another problem where they say hold on I can find it right here they say hold on we we do have actually another problem and that is that these transformers have to back propagate so you'll have to forward propagate these things and now we've come to solve this d-square computation issue but what you'll have to do is if you go from layer to layer right layer layer layer layer what you have to do is if you propagate information forward you still have to back propagate and in order to back propagate usually usually you'll have to remember all of these activations right so these activations these activations in order to do backprop it is often the case that you actually have to remember the activation because in each forward propagation in each layer here you might lose some information imagine you have a you have to a layer that maps these two dimensional vectors both to so here actually let's make this blue maps these three vectors to the following um configuration so a layer maps these vectors to this this and this so it maps two things to one thing at which which you know can be if you in a linear layer it can decide to map it to a lower dimensional subspace so it could actually so I decide to map it to in fact two points right this is also a possibility could do dimension reduction so because all of this in order to do backprop 
you actually have to remember these things in order to do proper backprop this is a problem again for the transformer because all these activations even though we've gotten rid of the d-square computation they will have to be remembered and that takes a lot of memory the way to solve this is actually to invertible layers what that means is that if I propagate information forward forward forward forward forward I can figure out what the information here was simply by looking at the backprop activations and this is this happens if the layer is invertible so if this function here is invertible so if f here technically is invertible so I can actually write down the inverse of f and that is is defined this of course is a pretty big restriction and the way they achieve it I like to go to the blog here the way they achieve it is they do what's called an idea from reversible networks where they always have two sets of activations that's what you see here x1 and x2 and in each layer only one of them is updated in a residual fashion so you can see here layer one updates x2 but x1 remains the same and goes to y1 right and then in the next layer layer two only updates y1 in order to construct z1 but y2 remains the same to be z2 and then you can revert the layers you can basically figure out what the activations were from the backprop signal now that's extremely good if you want to save memory but of course it restricts clearly you have to be restricted to this kind of architecture or similar this idea actually isn't new there has been used many times in things like normalizing flows and I want to highlight this paper actually want to highlight the specific I chose this paper because they have these nice diagrams where they show exactly this you see they have two sets x1 and x2 that in forward propagation they only update one of them and then in backward in what's called inverse propagation they can figure out what those were and they couple these in exactly the same way like here this drawing might be even more similar where they alternate between updating the two activations so you can think of this as a way to simply make the function that you're representing with the neural network invertible that is a giant constraint on your architecture but these methods here these normalizing flow methods use that so they can actually define an invertible layer because they need the Jacobian inverse in order to compute their normalizing flow so you see that's why they originally did it and I'm sure that's not a like a new idea or it's particularly new again strangely I haven't found the the any of the flows literature cited they do cite the reversible residual net paper that they probably got the idea from all right so with these two things now you can save the giant computation right and you can also not store the forward activations so they say they can take now giant giant giant input sizes you may remember transformers like Bert so Bert it can use something like 512 tokens in its input sequence that means it's the sequence that you can look at with Bert at a time is 512 long and not a bit longer right there have been some extensions to that for example I believe in XL net so XL net has pushed this to something like C times 512 where C is a smallish constant that where you can kind of carry over information between sequences but this thing here as you can see they calculate it could take up something like 64 000 tokens and it that would use in total 16 gigabytes of memory which is available on a high end GPU 
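(A small sketch of the reversible coupling described a little above, with the same x, y, z naming: each of the two sub-layers updates only one of the two activation streams residually, so the inputs can be recomputed exactly from the outputs instead of being stored for backprop. F and G are stand-ins for arbitrary sub-layers, say attention and feed-forward; this mirrors the idea, not the paper's actual code.)

```python
# Reversible residual coupling: the forward pass can be inverted exactly, so
# intermediate activations do not need to be kept in memory for the backward pass.
import numpy as np

def F(x):                      # stand-in for one sub-layer (e.g. attention)
    return np.tanh(x)

def G(x):                      # stand-in for another sub-layer (e.g. feed-forward)
    return 0.5 * x

def couple_forward(x1, x2):
    y1, y2 = x1, x2 + F(x1)    # layer 1: only the second stream is updated
    z1, z2 = y1 + G(y2), y2    # layer 2: only the first stream is updated
    return z1, z2

def couple_inverse(z1, z2):
    y2 = z2
    y1 = z1 - G(y2)            # undo layer 2
    x1 = y1
    x2 = y2 - F(x1)            # undo layer 1
    return x1, x2

# toy check that the inverse recovers the inputs
x1, x2 = np.random.default_rng(0).normal(size=(2, 5))
z1, z2 = couple_forward(x1, x2)
assert np.allclose((x1, x2), couple_inverse(z1, z2))
```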
right so this is a giant this is a giant step forward in in producing transformers that can actually take large models and here you see the memory and time complexity you can look at these things yourself but be you can see maybe here that these squares here in you from the original transformer they now vanish from this and all of these constants are a lot of these constants are actually smaller for example that chunk size is in there instead of kind of the entire sequence length so that's basically the the paper they show that you can actually input those long sequences they can apply this to images you see there's image net pixel by pixel which is a lot of pixels and would have been absolutely unthinkable with one of the original transformers and with that I invite you to check out the paper and the blog post and I'll see you next time bye bye | [{"start": 0.0, "end": 5.98, "text": " Hi there. Today we'll look at Reformer, the efficient transformer by Nikita"}, {"start": 5.98, "end": 13.280000000000001, "text": " Kitaiv, Lukash Kaiser and Ansem Leveskaya. This is a paper that tries to reduce"}, {"start": 13.280000000000001, "end": 18.7, "text": " the extreme resource requirements of the transformer model. Now if you haven't"}, {"start": 18.7, "end": 25.32, "text": " seen the transformer model before, that's this thing. I suggest you go watch, for"}, {"start": 25.32, "end": 29.48, "text": " example, my video on it, attention is all you need, it's called, where the"}, {"start": 29.48, "end": 37.2, "text": " transformer is introduced. The most famous transformer is called BERT, B-E-R-T, and"}, {"start": 37.2, "end": 43.96, "text": " you can also look that up. I've made a video about this. So what's the issue"}, {"start": 43.96, "end": 50.6, "text": " here? If you remember Transformers, they need a lot of memory and why? That's"}, {"start": 50.6, "end": 57.0, "text": " because they compute in each layer, they compute these attention things. Let's"}, {"start": 57.0, "end": 64.04, "text": " recap shortly. In a transformer, you propagate information layer by layer. So you"}, {"start": 64.04, "end": 71.64, "text": " have layer here with some signal and then the next layer that you try to"}, {"start": 71.64, "end": 80.66, "text": " propagate the signal. Now what you do, you assign queries to each of the"}, {"start": 80.66, "end": 85.56, "text": " next layers. So each of the next layer has queries and queries are just vectors."}, {"start": 85.56, "end": 91.08, "text": " This is a vector, this is a vector, this is a vector, and so on. So basically the"}, {"start": 91.08, "end": 99.32000000000001, "text": " next layer has the ability to ask the last layer what it wants. This is a kind of"}, {"start": 99.32000000000001, "end": 106.32000000000001, "text": " intrinsic property of attention and I, as I said, I explain this in detail in"}, {"start": 106.32000000000001, "end": 110.92, "text": " the video attention is all you need. Basically these are what's called queries,"}, {"start": 110.92, "end": 118.2, "text": " Q, and then this layer is exposing what are called keys and keys again are"}, {"start": 118.2, "end": 127.36, "text": " vectors. So vector, vector, vector, vector, and so on. So keys are vectors and the"}, {"start": 127.36, "end": 134.88, "text": " way that the information is propagated to the next layer is whenever, whatever,"}, {"start": 134.88, "end": 141.32, "text": " we consider for example this node here, right, this node. 
Let's make that yellow."}, {"start": 141.32, "end": 147.0, "text": " When we consider this node here, it is going to look in the last layer which,"}, {"start": 147.0, "end": 153.76, "text": " which keys match my key the most. And in this case it would probably be this"}, {"start": 153.76, "end": 160.16, "text": " key and this key, right, they match the key the most. And here we look at the"}, {"start": 160.16, "end": 165.35999999999999, "text": " inner product, so the angle between the vectors. And then information is"}, {"start": 165.35999999999999, "end": 172.68, "text": " aggregated by simply having a weighted average of the values. So information is"}, {"start": 172.68, "end": 177.48, "text": " coming in here and here actually information is coming into all the nodes. But"}, {"start": 177.48, "end": 183.68, "text": " since only these keys match, the information will be propagated like this to"}, {"start": 183.68, "end": 190.88, "text": " this unit. We can do this for another unit, for example, this unit right here."}, {"start": 190.88, "end": 197.0, "text": " What's the value of this unit? Well, we have to look at the key here, which key"}, {"start": 197.0, "end": 201.96, "text": " is it going to be matched to? It's probably going to be matched to this key"}, {"start": 201.96, "end": 209.16, "text": " right here. And probably no other key really. Maybe this key a little bit. So"}, {"start": 209.16, "end": 213.79999999999998, "text": " the information of that node in the next layer will be whatever's information is"}, {"start": 213.79999999999998, "end": 219.32, "text": " coming in here, routed there, and a little bit of this information. So this is"}, {"start": 219.32, "end": 224.88, "text": " kind of a, it's not a hard, it's a, it's called soft attention. So there's a"}, {"start": 224.88, "end": 229.07999999999998, "text": " little bit of information going everywhere, but the majority of the information"}, {"start": 229.07999999999998, "end": 233.64, "text": " is coming from the nodes where the keys match. So these are queries, these are"}, {"start": 233.64, "end": 239.39999999999998, "text": " keys. And technically these things coming in here are called values. But"}, {"start": 239.39999999999998, "end": 244.11999999999998, "text": " imagine the values simply as the information to be propagated. And the queries"}, {"start": 244.11999999999998, "end": 249.16, "text": " and the keys are responsible for routing that information to the next layer."}, {"start": 249.16, "end": 254.44, "text": " All of these things are learned. So the queries, the keys, and the values. Now"}, {"start": 254.44, "end": 259.28, "text": " what's the problem? The problem is between the queries and the keys. As you"}, {"start": 259.28, "end": 265.0, "text": " can see, what you have to do is you have to match every single query with every"}, {"start": 265.0, "end": 269.79999999999995, "text": " single key in order to find out where information goes. So this becomes"}, {"start": 269.96, "end": 278.4, "text": " order of, what if you have D keys and D queries order of D squared operations"}, {"start": 278.4, "end": 283.28, "text": " that you have to do. And of course, D squares, values that you have to compute."}, {"start": 283.28, "end": 290.67999999999995, "text": " And since these are all vectors, of course, there is D will not only be the"}, {"start": 290.67999999999995, "end": 295.08, "text": " number of keys, but then again, this is multiplied. 
So there's an inner"}, {"start": 295.08, "end": 300.91999999999996, "text": " multiplication with the dimensionality. Let's call that capital, the of the,"}, {"start": 300.91999999999996, "end": 309.67999999999995, "text": " no, sorry, that's not an inner multiplication. Let's just remain at this. So"}, {"start": 309.68, "end": 316.08, "text": " these squared inner products between vectors of capital D dimensions. So it's"}, {"start": 316.08, "end": 322.72, "text": " not an, it's not an easy thing for resources to do. You need a lot of resources"}, {"start": 322.72, "end": 327.88, "text": " to hold this, all of this in memory at the same time and to compute all of these"}, {"start": 327.88, "end": 335.52, "text": " things. The reformer aims to solve this problem. So this, this giant space"}, {"start": 335.52, "end": 342.56, "text": " problem that the transformers have, space, memory, also computational problem to"}, {"start": 342.56, "end": 349.35999999999996, "text": " a lesser degree. Mostly it's a memory issue. All right. So what is happening"}, {"start": 349.35999999999996, "end": 356.03999999999996, "text": " here? And you see, you see here that this, this product between two matrices"}, {"start": 356.03999999999996, "end": 362.88, "text": " clearly gives you this kind of squared, um, squared thing. So what's happening"}, {"start": 362.88, "end": 369.48, "text": " in the reformer to do this? The trick is, the trick is, if we go back to this"}, {"start": 369.48, "end": 376.96, "text": " drawing, the trick is to create what's called a hashing scheme or buckets. And in"}, {"start": 376.96, "end": 382.8, "text": " creating buckets, what you want to do is you want to group similar things"}, {"start": 382.8, "end": 391.28, "text": " together. So let's say we create four buckets. Bucket one, bucket two, bucket"}, {"start": 391.28, "end": 399.79999999999995, "text": " three, bucket four. Right. And each bucket we label and bucket one we label"}, {"start": 400.28, "end": 405.03999999999996, "text": " with the up direction is with the right direction with the down direction,"}, {"start": 405.4, "end": 411.76, "text": " the left direction as vectors. And now we simply put each of the things into"}, {"start": 411.96, "end": 417.44, "text": " the bucket where it belongs most. So let's, for example, this vector here,"}, {"start": 417.44, "end": 425.36, "text": " it goes here. Sorry, that is like an absolutely not the right place. It goes"}, {"start": 426.44, "end": 432.92, "text": " probably here, right? And this vector here, probably this one goes here,"}, {"start": 432.92, "end": 438.32, "text": " right? And so on. So you'll end up each of these assigning a bucket. So this,"}, {"start": 439.32, "end": 445.68, "text": " these, these all go into into that bucket. Let's continue. Actually, let's also"}, {"start": 445.68, "end": 452.92, "text": " put the the keys in the same buckets. So also the keys, this key here probably"}, {"start": 452.92, "end": 458.68, "text": " goes to this bucket. This key here probably goes to this bucket."}, {"start": 461.76, "end": 466.08, "text": " Let's say this key here probably goes to the bucket over here. You already see,"}, {"start": 466.12, "end": 475.32, "text": " so before, right before we cared about this particular query and this particular"}, {"start": 475.32, "end": 480.68, "text": " key, we just looked and we said those two will probably route information to each"}, {"start": 480.68, "end": 487.08, "text": " other because they're similar. 
And now you can see they both ended up in the same"}, {"start": 487.12, "end": 493.92, "text": " bucket. So the idea is to create a scheme where you throw these things into"}, {"start": 493.92, "end": 500.2, "text": " buckets such that if two vectors are similar, they will end up in the same bucket"}, {"start": 500.2, "end": 505.47999999999996, "text": " with high probability. So you'll only have to really compare things within the same"}, {"start": 505.47999999999996, "end": 513.4, "text": " bucket and not across all of these D squared elements. That's the idea. And the"}, {"start": 513.4399999999999, "end": 522.04, "text": " technique here is called locality sensitive hashing. So locality sensitive hashing."}, {"start": 522.04, "end": 533.7199999999999, "text": " And short, this is called LSH. The idea is the following. If you have two vectors,"}, {"start": 533.7199999999999, "end": 542.0799999999999, "text": " V1 and V2, and they have and you have a distance measure, distance measure, D,"}, {"start": 542.08, "end": 556.88, "text": " is a distance. What you want is if the distance between V1 and V2 is small, I'm"}, {"start": 556.88, "end": 570.76, "text": " going to use this with small, then you want them in the same bucket. And if the"}, {"start": 570.76, "end": 581.56, "text": " distance is large, then you want them in a different bucket. Different buckets."}, {"start": 581.56, "end": 592.04, "text": " You know, with high probability. So all of these things where you say you want them"}, {"start": 592.04, "end": 601.48, "text": " in the same bucket with probability P, with probability P, with high probability P."}, {"start": 601.48, "end": 605.64, "text": " And here you want them in different buckets with high probability. Or you want them in"}, {"start": 605.64, "end": 611.0, "text": " the same bucket with low probability. That's an equivalent form of stating. This is"}, {"start": 611.0, "end": 616.12, "text": " all formalized and I can direct you to the Wikipedia page of that. It's pretty good."}, {"start": 616.12, "end": 623.4, "text": " It gives a concise definition. Here you can see that. And it gives a number of examples."}, {"start": 623.4, "end": 631.0, "text": " So one example I'd like to give here for locality sensitive hashing is of course the scheme"}, {"start": 631.0, "end": 637.96, "text": " of bucketing will all depend on what your distance measure is. If you consider the distance measure"}, {"start": 637.96, "end": 645.64, "text": " simply to be the jacquart distance. So let's say we have two vectors 0, 1, 0, 1. And here we have"}, {"start": 647.24, "end": 660.76, "text": " 1, 0, 1, 1, 0, 1. And here it's 0, 0, 0, 1. All right. So maybe you can see the first two"}, {"start": 660.76, "end": 671.0, "text": " vectors here are much more close together than the last vector. Now in terms of bit differences."}, {"start": 671.0, "end": 680.12, "text": " Right. One scheme to do locality sensitive hashing is two simply sub-sample bits. So in this case,"}, {"start": 680.92, "end": 687.8, "text": " this is a slightly constructed example. We will just sub-sample the first two bits and then"}, {"start": 687.8, "end": 693.64, "text": " construct the buckets according to these bit values. So if we since we sample two bits we have"}, {"start": 693.64, "end": 701.88, "text": " four buckets. Right. So here is 0, 0. Here is 0, 1. Here is 1, 0. And here is 1, 1. That's the"}, {"start": 701.88, "end": 706.3599999999999, "text": " concept of locality sensitive hashing. You have these buckets. 
Right. And then you can say,"}, {"start": 706.3599999999999, "end": 713.9599999999999, "text": " all right, this vector has 1, 0 goes into this, this goes into this. And then that goes into, sorry,"}, {"start": 713.96, "end": 721.64, "text": " the 0, 1 bucket. Right. And you end up with what you have, you have the two close vectors in"}, {"start": 721.64, "end": 726.9200000000001, "text": " the same bucket and the two far apart vectors in that bucket. Of course, that doesn't always"}, {"start": 726.9200000000001, "end": 732.12, "text": " work. You know, you can be unlucky in sub-sampling, but that's kind of trade off. You'll have to"}, {"start": 732.12, "end": 738.76, "text": " to go for. Right. If things that are close together happen with, it's a low probability, but if"}, {"start": 738.76, "end": 745.88, "text": " they happen to end up in the different buckets, then basically you lose the fact that they are"}, {"start": 745.88, "end": 752.52, "text": " close to each other. And that's the trade off. The kind of locality sensitive hashing they use"}, {"start": 752.52, "end": 759.08, "text": " in the reformer now is what are called random projections. So let's say you have a bunch of vectors."}, {"start": 759.08, "end": 765.96, "text": " And that's really what we care about, right. You have a bunch of vectors and what you want,"}, {"start": 765.96, "end": 773.88, "text": " you want the keys and queries. So you have a bunch of vectors like this. And you want to create"}, {"start": 773.88, "end": 780.44, "text": " buckets such that vectors that are close together will end up in the same bucket and vectors that are"}, {"start": 780.44, "end": 788.2, "text": " far apart. You'll will end up in the in different buckets. The way a cool way to do is, and this is"}, {"start": 788.2, "end": 794.0400000000001, "text": " in the cosine distance. So we care about the angle between vectors, right. A cool way to do this is"}, {"start": 794.04, "end": 802.04, "text": " to use random plane projections. And this the the cool thing about it is it works for the cosine"}, {"start": 802.04, "end": 809.0799999999999, "text": " distance. And you can basically choose how many buckets you create, right. Let's say we want to"}, {"start": 809.0799999999999, "end": 816.52, "text": " create four buckets here again. What we need is two hyperplanes. And what we'll do is, so here is"}, {"start": 816.52, "end": 823.9599999999999, "text": " the origin, will simply create two hyperplanes through the origin at random. So I'm going to draw"}, {"start": 823.96, "end": 832.76, "text": " a random hyperplane here like this. And then a second random hyperplane like this."}, {"start": 833.64, "end": 841.48, "text": " Right. So you would agree those are pretty random hyperplanes as much as I can be a random generator."}, {"start": 841.48, "end": 847.8000000000001, "text": " And then we'll simply label. So this will label hyperplane one. This will label hyperplane two."}, {"start": 847.8, "end": 857.7199999999999, "text": " Right. And now we simply assign each vector bits according to the in which on which side of the"}, {"start": 857.7199999999999, "end": 864.76, "text": " hyperplane they lie. So let's call this here the plus side and this here the minus side or even"}, {"start": 864.76, "end": 869.4799999999999, "text": " yeah, let's call this the plus and the minus and here also we call this the plus side and this"}, {"start": 869.48, "end": 879.96, "text": " the minus side. 
So this vector here is its signs are plus plus, right, because it's on the plus"}, {"start": 879.96, "end": 889.0, "text": " side of both of hyperplanes. This vector plus plus this one plus plus this one here is called"}, {"start": 889.0, "end": 895.24, "text": " it's on the negative side of plane two, but on the positive side of plane one. So it's plus minus"}, {"start": 895.24, "end": 904.76, "text": " this one here minus minus minus minus minus minus and these are your buckets. So you would group"}, {"start": 904.76, "end": 910.52, "text": " these vectors together because they have they have the same signs you would group that vector."}, {"start": 910.52, "end": 917.88, "text": " You would group these vectors together. The combination of this with the attention since in"}, {"start": 917.88, "end": 926.68, "text": " attention you've seen attention uses a softmax and the softmax is dominated usually by the"}, {"start": 926.68, "end": 933.08, "text": " largest elements and since we compute inner products it means that the softmax thing is"}, {"start": 933.08, "end": 940.84, "text": " dominated by the by vectors that have small inner products. So basically you don't have to look"}, {"start": 940.84, "end": 948.2800000000001, "text": " at all of these D squared vectors if you can find the ones that are have the closest distance."}, {"start": 948.2800000000001, "end": 956.6, "text": " You can pretty much ignore the others and LSH allows you to do this. So build buckets of vectors"}, {"start": 956.6, "end": 965.0, "text": " with the with similar directions then you only have to care about these vectors comparing them to"}, {"start": 965.0, "end": 974.12, "text": " each other. So that's not a lot of vectors generally and that's how you save a lot of work. So you"}, {"start": 974.12, "end": 979.24, "text": " will only have to care about these three vectors if your key vector for example is right here."}, {"start": 980.2, "end": 986.12, "text": " You'll only have to care about these things in the same bucket and you can ignore all of the"}, {"start": 986.12, "end": 992.44, "text": " rest of the space. Of course the more hyper planes you have the more buckets you'll have the"}, {"start": 992.44, "end": 997.8800000000001, "text": " less vectors you'll have in the same bucket. That's the general idea. I find this explanation to be"}, {"start": 997.8800000000001, "end": 1004.5200000000001, "text": " a bit easy. You can equivalently explain it by doing these kind of random rotations in this space."}, {"start": 1004.5200000000001, "end": 1010.6800000000001, "text": " You can think about how that will end up actually being the exact same thing as what I just explained"}, {"start": 1010.6800000000001, "end": 1020.36, "text": " but I just like that my explanation better I think. All right so the way they use this they have"}, {"start": 1020.36, "end": 1028.44, "text": " an illustration right here is the following. 
So they have these keys right sequence of queries"}, {"start": 1028.44, "end": 1033.16, "text": " and keys so they they do equivalent queries and keys which is a thing you can do in transformers"}, {"start": 1033.16, "end": 1039.48, "text": " don't worry too much about it whether they're different or not but then they do this LSH"}, {"start": 1039.48, "end": 1047.72, "text": " bucketing and here the color of the cell is just the bucket the LSH bucket at which it will end up"}, {"start": 1047.72, "end": 1055.24, "text": " then they sort that right as you can see and now they do an additional thing which is called"}, {"start": 1055.24, "end": 1063.32, "text": " they chunk. The as you can see there are not the same amount of vectors in each bucket and that is"}, {"start": 1064.2, "end": 1070.6000000000001, "text": " sometimes a problem because even though you've you've reduced the memory the memory still"}, {"start": 1070.6, "end": 1077.8799999999999, "text": " the memory requirements are still dominated by the largest bucket right by whatever bucket has"}, {"start": 1077.8799999999999, "end": 1084.52, "text": " the most number of vectors that will pretty much be your memory requirement because now you don't"}, {"start": 1084.52, "end": 1091.3999999999999, "text": " you don't have to if this is d right you have to compute all the d squared things anymore but you'll"}, {"start": 1091.4, "end": 1102.8400000000001, "text": " only have to compute this quantity let's call that B so max the maximum bucket size but that could"}, {"start": 1102.8400000000001, "end": 1109.8000000000002, "text": " still be large right if you look at a distribution it's probably going to be something like this right"}, {"start": 1109.8000000000002, "end": 1117.24, "text": " where most buckets have a kind of a standard number of vectors but some buckets will have a lot of"}, {"start": 1117.24, "end": 1124.04, "text": " vectors and that's sorry some few buckets will have a lot of vectors and your memory requirement"}, {"start": 1124.04, "end": 1128.6, "text": " is still dominated by this so they do an additional thing which is called chunking which means they'd"}, {"start": 1128.6, "end": 1136.84, "text": " actually take fixed size chunks here fixed size here they always take four and they say all right these"}, {"start": 1136.84, "end": 1147.08, "text": " are our chunks and we will only compute attention within the chunks right so it could be that they're"}, {"start": 1147.08, "end": 1151.1599999999999, "text": " there's the same bucket is actually split between chunks and that's what they do an additional"}, {"start": 1151.1599999999999, "end": 1159.3999999999999, "text": " thing is that you can attend two things in a different chunk right here right you can attend"}, {"start": 1159.4, "end": 1166.6000000000001, "text": " two things in your neighboring chunks so you're restricted to either your own chunk or your neighboring"}, {"start": 1166.6000000000001, "end": 1176.52, "text": " chunk note that there aren't any any arrows going over here so you can you can attend"}, {"start": 1177.4, "end": 1183.5600000000002, "text": " have this diagram here which things you can attend to you can attend to yourself or attend to"}, {"start": 1183.56, "end": 1192.28, "text": " your neighboring thing but not to any other thing or the other way around right so that's basically"}, {"start": 1192.28, "end": 1202.6799999999998, "text": " the the concept of saving memory now your memory requirements are if we call this quantity now"}, 
{"start": 1202.6799999999998, "end": 1209.8799999999999, "text": " we call the other one b let's call this the chunk size c right your memory requirements are"}, {"start": 1209.88, "end": 1217.72, "text": " pretty much c squared plus whatever this unit directional so not this this isn't squared plus"}, {"start": 1217.72, "end": 1227.88, "text": " probably O of C something like this so you bring your memory requirements down quite a bit"}, {"start": 1227.88, "end": 1240.92, "text": " now that's the the general idea here the problem they face again is so they face another problem"}, {"start": 1241.48, "end": 1252.5200000000002, "text": " where they say hold on I can find it right here they say hold on we we do have actually another"}, {"start": 1252.52, "end": 1259.08, "text": " problem and that is that these transformers have to back propagate so you'll have to forward"}, {"start": 1259.08, "end": 1264.2, "text": " propagate these things and now we've come to solve this d-square computation issue but what you'll"}, {"start": 1264.2, "end": 1271.6399999999999, "text": " have to do is if you go from layer to layer right layer layer layer layer what you have to do is if"}, {"start": 1271.6399999999999, "end": 1278.04, "text": " you propagate information forward you still have to back propagate and in order to back propagate"}, {"start": 1278.04, "end": 1286.68, "text": " usually usually you'll have to remember all of these activations right so these activations these"}, {"start": 1286.68, "end": 1292.52, "text": " activations in order to do backprop it is often the case that you actually have to remember the"}, {"start": 1292.52, "end": 1298.44, "text": " activation because in each forward propagation in each layer here you might lose some information"}, {"start": 1298.44, "end": 1311.0, "text": " imagine you have a you have to a layer that maps these two dimensional vectors both to so here"}, {"start": 1311.0, "end": 1319.0, "text": " actually let's make this blue maps these three vectors to the following um configuration so a"}, {"start": 1319.0, "end": 1329.4, "text": " layer maps these vectors to this this and this so it maps two things to one thing at which which"}, {"start": 1329.4, "end": 1335.88, "text": " you know can be if you in a linear layer it can decide to map it to a lower dimensional"}, {"start": 1335.88, "end": 1343.88, "text": " subspace so it could actually so I decide to map it to in fact two points right this is also"}, {"start": 1343.88, "end": 1349.0, "text": " a possibility could do dimension reduction so because all of this in order to do backprop you actually"}, {"start": 1349.0, "end": 1357.3200000000002, "text": " have to remember these things in order to do proper backprop this is a problem again for the"}, {"start": 1357.3200000000002, "end": 1362.92, "text": " transformer because all these activations even though we've gotten rid of the d-square computation"}, {"start": 1363.48, "end": 1370.6000000000001, "text": " they will have to be remembered and that takes a lot of memory the way to solve this is actually to"}, {"start": 1370.6, "end": 1379.1599999999999, "text": " invertible layers what that means is that if I propagate information forward forward forward forward"}, {"start": 1379.1599999999999, "end": 1387.6399999999999, "text": " forward I can figure out what the information here was simply by looking at the backprop"}, {"start": 1387.6399999999999, "end": 1394.9199999999998, "text": " activations and this is this happens if the layer is invertible so if 
this function here is"}, {"start": 1394.92, "end": 1404.2, "text": " invertible so if f here technically is invertible so I can actually write down the inverse of f and"}, {"start": 1404.2, "end": 1414.68, "text": " that is is defined this of course is a pretty big restriction and the way they achieve it I like to"}, {"start": 1414.68, "end": 1427.72, "text": " go to the blog here the way they achieve it is they do what's called an idea from"}, {"start": 1429.72, "end": 1434.8400000000001, "text": " reversible networks where they always have two sets of activations that's what you see here"}, {"start": 1434.8400000000001, "end": 1442.2, "text": " x1 and x2 and in each layer only one of them is updated in a residual fashion so you can see"}, {"start": 1442.2, "end": 1452.76, "text": " here layer one updates x2 but x1 remains the same and goes to y1 right and then in the next layer"}, {"start": 1452.76, "end": 1465.24, "text": " layer two only updates y1 in order to construct z1 but y2 remains the same to be z2 and then you can"}, {"start": 1465.24, "end": 1472.84, "text": " revert the layers you can basically figure out what the activations were from the backprop signal"}, {"start": 1472.84, "end": 1480.76, "text": " now that's extremely good if you want to save memory but of course it restricts clearly you have"}, {"start": 1480.76, "end": 1488.36, "text": " to be restricted to this kind of architecture or similar this idea actually isn't new there has"}, {"start": 1488.36, "end": 1494.52, "text": " been used many times in things like normalizing flows and I want to highlight this paper actually"}, {"start": 1494.52, "end": 1502.28, "text": " want to highlight the specific I chose this paper because they have these nice diagrams where they"}, {"start": 1502.28, "end": 1511.24, "text": " show exactly this you see they have two sets x1 and x2 that in forward propagation they only"}, {"start": 1511.24, "end": 1518.28, "text": " update one of them and then in backward in what's called inverse propagation they can figure out"}, {"start": 1518.28, "end": 1526.84, "text": " what those were and they couple these in exactly the same way like here this drawing might be"}, {"start": 1526.84, "end": 1534.36, "text": " even more similar where they alternate between updating the two activations so you can think of"}, {"start": 1534.36, "end": 1541.24, "text": " this as a way to simply make the function that you're representing with the neural network"}, {"start": 1541.24, "end": 1547.6399999999999, "text": " invertible that is a giant constraint on your architecture but these methods here these normalizing"}, {"start": 1547.64, "end": 1554.8400000000001, "text": " flow methods use that so they can actually define an invertible layer because they need the"}, {"start": 1554.8400000000001, "end": 1565.3200000000002, "text": " Jacobian inverse in order to compute their normalizing flow so you see that's why they originally"}, {"start": 1565.3200000000002, "end": 1574.1200000000001, "text": " did it and I'm sure that's not a like a new idea or it's particularly new again strangely I haven't"}, {"start": 1574.12, "end": 1583.0, "text": " found the the any of the flows literature cited they do cite the reversible residual net paper"}, {"start": 1584.1999999999998, "end": 1592.4399999999998, "text": " that they probably got the idea from all right so with these two things now you can save the"}, {"start": 1592.4399999999998, "end": 1601.0, "text": " giant computation right and you can also not store the forward 
activations so they say they can"}, {"start": 1601.0, "end": 1616.76, "text": " take now giant giant giant input sizes you may remember transformers like Bert so Bert it can use"}, {"start": 1617.8, "end": 1627.08, "text": " something like 512 tokens in its input sequence that means it's the sequence that you can"}, {"start": 1627.08, "end": 1635.72, "text": " look at with Bert at a time is 512 long and not a bit longer right there have been some extensions to"}, {"start": 1635.72, "end": 1649.0, "text": " that for example I believe in XL net so XL net has pushed this to something like C times 512 where"}, {"start": 1649.0, "end": 1658.44, "text": " C is a smallish constant that where you can kind of carry over information between sequences"}, {"start": 1658.44, "end": 1667.8, "text": " but this thing here as you can see they calculate it could take up something like 64 000 tokens"}, {"start": 1668.44, "end": 1678.12, "text": " and it that would use in total 16 gigabytes of memory which is available on a high end GPU right so"}, {"start": 1678.12, "end": 1689.0, "text": " this is a giant this is a giant step forward in in producing transformers that can actually take"}, {"start": 1689.0, "end": 1695.8, "text": " large models and here you see the memory and time complexity you can look at these things"}, {"start": 1695.8, "end": 1704.04, "text": " yourself but be you can see maybe here that these squares here in you from the original transformer"}, {"start": 1704.04, "end": 1710.92, "text": " they now vanish from this and all of these constants are a lot of these constants are actually"}, {"start": 1710.92, "end": 1717.08, "text": " smaller for example that chunk size is in there instead of kind of the entire sequence length"}, {"start": 1718.52, "end": 1727.96, "text": " so that's basically the the paper they show that you can actually input those long sequences they"}, {"start": 1727.96, "end": 1735.72, "text": " can apply this to images you see there's image net pixel by pixel which is a lot of pixels and would"}, {"start": 1735.72, "end": 1743.56, "text": " have been absolutely unthinkable with one of the original transformers and with that I invite you"}, {"start": 1743.56, "end": 1760.36, "text": " to check out the paper and the blog post and I'll see you next time bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=EbFosdOi5SY | Go-Explore: a New Approach for Hard-Exploration Problems | This algorithm solves the hardest games in the Atari suite and makes it look so easy! This modern version of Dijkstra's shortest path algorithm is outperforming everything else by orders of magnitude, and all based on random exploration.
https://arxiv.org/abs/1901.10995
https://eng.uber.com/go-explore/
https://github.com/uber-research/go-explore
Abstract:
A grand challenge in reinforcement learning is intelligent exploration, especially when rewards are sparse or deceptive. Two Atari games serve as benchmarks for such hard-exploration domains: Montezuma's Revenge and Pitfall. On both games, current RL algorithms perform poorly, even those with intrinsic motivation, which is the dominant method to improve performance on hard-exploration domains. To address this shortfall, we introduce a new algorithm called Go-Explore. It exploits the following principles: (1) remember previously visited states, (2) first return to a promising state (without exploration), then explore from it, and (3) solve simulated environments through any available means (including by introducing determinism), then robustify via imitation learning. The combined effect of these principles is a dramatic performance improvement on hard-exploration problems. On Montezuma's Revenge, Go-Explore scores a mean of over 43k points, almost 4 times the previous state of the art. Go-Explore can also harness human-provided domain knowledge and, when augmented with it, scores a mean of over 650k points on Montezuma's Revenge. Its max performance of nearly 18 million surpasses the human world record, meeting even the strictest definition of "superhuman" performance. On Pitfall, Go-Explore with domain knowledge is the first algorithm to score above zero. Its mean score of almost 60k points exceeds expert human performance. Because Go-Explore produces high-performing demonstrations automatically and cheaply, it also outperforms imitation learning work where humans provide solution demonstrations. Go-Explore opens up many new research directions into improving it and weaving its insights into current RL algorithms. It may also enable progress on previously unsolvable hard-exploration problems in many domains, especially those that harness a simulator during training (e.g. robotics).
Authors: Adrien Ecoffet, Joost Huizinga, Joel Lehman, Kenneth O. Stanley, Jeff Clune | Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem for a long time for reinforcement learning algorithms. What you can see is this little person that has to kind of jump around, collect keys, collect these coins, kind of get over enemies and so on. And all of this is super hard because the reward is so sparse. So sometimes you have to do hundreds of actions until you get the next whatever improvement in score. You can see on the top how your score is increasing and it seems like this algorithm is pretty efficient on this but keep in mind this algorithm has to learn from just the pixel input. It has to learn every single move of the agent. So if you see here for example jumping over the enemies, stopping when these blue bars come and going down the ladders without hitting the spider, this is really really hard problem. So far reinforcement learning algorithms have had a very hard time doing this until this algorithm showed up. So explore which was the first one that actually surpassed I believe human experts or widely surpassed human experts at this game. In fact, the first reinforcement learning algorithm that without human demonstration could do anything at all at this game. So let's dive in and see how this algorithm does what it does. And the paper to this is called Go Explore, a new approach for hard exploration problems by Adria Ecofe, Joost Huizinga, Joa Lehmann, Kenneth Ostendly and Jeff Kloon from Uber AI Labs. So they break down the problem into what they call two problems. So these hard exploration problems, they say they suffer from two things, detachment and derailment. You can see here, detachment and derailment. So they explain those in detail. Detachment and derailment are related to each other. Detachment is when an exploration algorithm that has some sort of intrinsic motivation. This is how you usually do these hard exploration problems. You give intrinsic motivation to the agent to explore new things. Like in absence of a reward, if there's no reward around, it should just reach some kind of a new state. And you give the algorithm points for reaching states that it has never seen before. But this can come to this sort of detachment problem. They illustrate this here. So let's say your algorithm starts actually here in the middle. And everything that's green here is intrinsic reward. So you collect the green stuff that gives you points. So the goal might actually be in here or in here. But you have to teach the algorithm to go all this way around. And you do that by simply modinating it to go to new states by giving it a reward for every state it hasn't been. So it starts exploring goes here. And maybe the first episode it reaches here right before it is reset. Usually reset after while like it bounces kind around. It's like, ah, there's new stuff. And then it goes here and it will explore kind of it. And it will be motivated to explore because there's always this green stuff here. So after a while here, whatever is purple has been explored right recently. And with purple day mark, what has been recently explored all of this has been recently explored right. So it is gone until here. But usually you also have like a component that isn't purely seeking this green stuff. But is also doing some kind of random exploration. 
And so what can happen in this algorithm is that if at one of these times you start the episode here, by chance it actually goes into the other direction. All right. And then it's like, wow, there's all this green stuff over here, right. And then it's like, so much green stuff, right. And then what usually happens is it kind of forgets that there's green stuff over here. So it explores all of this stuff around here. It explores, explores, explores, there's no more stuff. And then it's stuck, right. It's stuck here. And it says, where am I going to go? Like, I know over here there's no more green stuff. And over here there doesn't appear to be any green stuff, because it's forgotten about this. So they claim these intrinsic motivation algorithms, what they can lead to is you can detach from your frontier of new knowledge, right. Like, they can forget that here, at one point, they were here, and what the algorithm did was it explored here until here, and then it explored over here. So it thinks that this thing over here is its most recent frontier of knowledge, right. This is my state here. This is where I go explore from, but there's nowhere to explore from, right. What it should remember is that here it actually kind of jumped over by random chance. I hope this makes sense. This is called detachment of intrinsic motivation algorithms. And it happens when you kind of give these points according to simply reaching new states. And then another thing is what they call derailment. And derailment is a bit of a more subtle problem. So in derailment, what happens is maybe you've actually, let's say, this same situation. 
And then it's like, I reach this state in three steps. OK. But I can also go here. I reach this state in one step. In two, so I've already been here. OK. But then it can, you can say, OK, from here, I reach this state into this is a bad example. Let's say we actually have to make a shortest path. That this is the graph, right? So it reaches this state in two steps. But then it explores this thing. It's like, ah, wait a minute. I've seen this state. But before I've reached it in two steps, now I'm reaching it in one step. This is better. So this path here is better than this path here. And then it goes on from here. It goes on. It says, OK. I'm reaching this goal in two steps. I've reached it in three steps. So clearly, this bottom path here is better than what I've done before. This top or this path. So this is what Go Explorer does in a nutshell. What it does is it has an archive of states, right? An archive of states that it has visited previously. And the crucial thing here is, and this is kind of necessary to their algorithm, that this is completely deterministic. So what they actually do is they will save the state of the game emulator, right? They are here, right? And they do some exploration by jumping some ball until their person is here, their game is in some state. And they will save the emulator to a buffer. This is kind of crucial. Such that at a later point, they can select this, this exactly this state that they were in. And from here, run a bunch of explorations again, right? So if they say select state from archive, and then go to that state, this is simply restoring the emulator state. But you could also, what you could also do if this is a purely deterministic environment, you could simply save the sequence of actions that you've done to come here. And simply buy it. So maybe you go right, right, and here you jump, and you go right, you can simply replay those to get to the exact same state. They discuss that this can be expanded to also handle kind of stochastic environments, but in their case, at the phase one, the environment is completely deterministic. So they can do this. They can go, sorry, they can go to a state deterministically. So they'll select a state from an archive. They have an algorithm for selecting kind of promising states. They go to that state, and then they explore from that state. And they simply do this random. So this is random. And then they update the archive. So what do they do? Right? We saw, so here, maybe as a new graph. So they go to a state, this is their state, and then they explore. Now there are multiple things that can happen. One, they can encounter a new state, right? New state, never seen before. All right, what they do is they save it to the buffer. They say, okay, this new state, let's call it n. This new state I've reached it in, and here we have done s steps. I've reached it in s plus one step. And whatever here is the emulator state that we had before. Right? So at any point, I can go back. If however, the state has already been seen, let's call this m, they retrieve m, m prime from the buffer because they've already seen it. It's in the buffer, right? They compare, hey, these steps. So is s prime, is this smaller or larger than s plus one? So basically, I've seen this state before, but using this path, can I reach it in fewer steps than I've reached it before? If yes, then I'm going to replace this, right? Replace this s by s plus one, and then save it again in the buffer. 
So I can, I can, I now have a better path to reach this state than before. So it's almost exactly like the extra algorithm, and that you simply explore, and every new state you find, you've either already seen. So you simply have a new way of getting to that state. If you haven't seen it, you simply remember it. And then you do it all again. So you can, you can imagine with time, these number of states and this buffer will explode. And it's not feasible for Montezuma's revenge. Like, imagine this game, right? You have to, you have to go everywhere and explore everything, right? This, I mean, every single action here could be a state. That's why, let me pause this. That's why what they do is they, they have to come up with a notion of state that is, doesn't simply include every single game state there is. And what they do is, this is sample here. They down sample the image. And then this, sorry, I've tried drawing over a blog post, they down sample the image. And then they simply say, all right. So this, this thing would become this thing. And they simply say, okay, if two of these images have the same representation. So gray scale, down sampled, quantized, then they are the same state. And that's kind of the crux of the algorithm I find. So if two things have the same state, then the algorithm is prone to kind of confusing them for each other. If things, one is the other, not exactly, but it does kind of assume that they are close actually here. But there is a crucial difference between the two. The algorithm will have a very hard time in some situations. I don't want to, like you can think of, it needs to be kind of convoluted situations, but it can be the kind of crux of the algorithm very much if this state representation isn't done well. And they actually have two methods. One simply relies on this down sampling and the other one they provide domain knowledge, which means kind of which level you're in, where the player is, and so on. But this is pretty cool. So if you are able, so if you're reinforcement learning problem, first of all, is deterministic and at least in a simulator. And second allows for good state representations kind of for low dimensional state. And if those two things are given, you can use Go Explorer. And as I said, this representation here is key. So now you know how they do it. They simply explore these states. And if they come on a new state, and every state is, is, is, so we don't mean this here. We actually mean this representation of it. They store it and they remember how to get to it. And simply by exploring like this and having a smart algorithm that picks which state to explore from, which of course is also a lot of domain knowledge, they are able to solve the game. Right? So you see, it goes way past human expert. And they are able to actually perform really well. Simply by exploring, this is the exploration phase. This is simply random exploration from promising states. And then in the second part, in the second phase, they now robustify it. So now they introduce noise into their environment. Because usually environments have noise or some sort of stochasticity. And they run imitation learning on the best trajectories they found. And what that does is what they do is they have a trajectory. Let's say, this is a trajectory. These are actions you need to reach this goal state. This imitation learning algorithm, what they do is they take a few steps back, say here. 
And they just use imitation learning, which is basically a form of reinforcement learning to reach the goal state from here. Simply reach the goal state. Once and under noise. So you can't just take the exact same actions. Once this has been learned, back up a few more steps, maybe here. And then try to reach the goal state. Now you've already learned how to do this part. So this bigger part should become, should be easier than simply starting from here. And you do that until you've kind of backed up your entire trajectory. This is a well-known method from imitation learning. But usually this red thing is a human demonstration. But now this red trajectory has been found by GoExplore. It turns out if you have a bunch of these trajectories from GoExplore, you can do a pretty good job at that. Alright, that's basically all that I wanted to say about GoExplore. It's basically Dijkstra's algorithm. It works under very specific circumstances, but I think it's super promising and it's kind of a new way of thinking about it. So the video I've shown is actually GoExplore solving Montezuma's revenge, getting like a new high score. And you can see how skilled this algorithm becomes. Alright, with that, I say goodbye and hope to see you next time. | [{"start": 0.0, "end": 7.88, "text": " Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem"}, {"start": 7.88, "end": 11.200000000000001, "text": " for a long time for reinforcement learning algorithms."}, {"start": 11.200000000000001, "end": 17.8, "text": " What you can see is this little person that has to kind of jump around, collect keys, collect"}, {"start": 17.8, "end": 23.400000000000002, "text": " these coins, kind of get over enemies and so on."}, {"start": 23.400000000000002, "end": 27.12, "text": " And all of this is super hard because the reward is so sparse."}, {"start": 27.12, "end": 33.36, "text": " So sometimes you have to do hundreds of actions until you get the next whatever improvement"}, {"start": 33.36, "end": 34.36, "text": " in score."}, {"start": 34.36, "end": 38.760000000000005, "text": " You can see on the top how your score is increasing and it seems like this algorithm is pretty"}, {"start": 38.760000000000005, "end": 45.2, "text": " efficient on this but keep in mind this algorithm has to learn from just the pixel input."}, {"start": 45.2, "end": 49.92, "text": " It has to learn every single move of the agent."}, {"start": 49.92, "end": 56.56, "text": " So if you see here for example jumping over the enemies, stopping when these blue bars"}, {"start": 56.56, "end": 62.160000000000004, "text": " come and going down the ladders without hitting the spider, this is really really hard"}, {"start": 62.160000000000004, "end": 63.160000000000004, "text": " problem."}, {"start": 63.160000000000004, "end": 69.56, "text": " So far reinforcement learning algorithms have had a very hard time doing this until this"}, {"start": 69.56, "end": 71.16, "text": " algorithm showed up."}, {"start": 71.16, "end": 79.8, "text": " So explore which was the first one that actually surpassed I believe human experts or widely"}, {"start": 79.8, "end": 82.64, "text": " surpassed human experts at this game."}, {"start": 82.64, "end": 89.0, "text": " In fact, the first reinforcement learning algorithm that without human demonstration could"}, {"start": 89.0, "end": 92.19999999999999, "text": " do anything at all at this game."}, {"start": 92.19999999999999, "end": 97.24, "text": " So let's dive in and see how this 
algorithm does what it does."}, {"start": 97.24, "end": 103.24, "text": " And the paper to this is called Go Explore, a new approach for hard exploration problems"}, {"start": 103.24, "end": 112.36, "text": " by Adria Ecofe, Joost Huizinga, Joa Lehmann, Kenneth Ostendly and Jeff Kloon from Uber"}, {"start": 112.36, "end": 114.39999999999999, "text": " AI Labs."}, {"start": 114.39999999999999, "end": 121.52, "text": " So they break down the problem into what they call two problems."}, {"start": 121.52, "end": 126.47999999999999, "text": " So these hard exploration problems, they say they suffer from two things, detachment"}, {"start": 126.48, "end": 128.0, "text": " and derailment."}, {"start": 128.0, "end": 133.08, "text": " You can see here, detachment and derailment."}, {"start": 133.08, "end": 137.8, "text": " So they explain those in detail."}, {"start": 137.8, "end": 143.24, "text": " Detachment and derailment are related to each other."}, {"start": 143.24, "end": 151.2, "text": " Detachment is when an exploration algorithm that has some sort of intrinsic motivation."}, {"start": 151.2, "end": 153.88, "text": " This is how you usually do these hard exploration problems."}, {"start": 153.88, "end": 159.64, "text": " You give intrinsic motivation to the agent to explore new things."}, {"start": 159.64, "end": 163.96, "text": " Like in absence of a reward, if there's no reward around, it should just reach some kind"}, {"start": 163.96, "end": 165.68, "text": " of a new state."}, {"start": 165.68, "end": 172.07999999999998, "text": " And you give the algorithm points for reaching states that it has never seen before."}, {"start": 172.07999999999998, "end": 177.6, "text": " But this can come to this sort of detachment problem."}, {"start": 177.6, "end": 179.28, "text": " They illustrate this here."}, {"start": 179.28, "end": 184.72, "text": " So let's say your algorithm starts actually here in the middle."}, {"start": 184.72, "end": 190.52, "text": " And everything that's green here is intrinsic reward."}, {"start": 190.52, "end": 193.72, "text": " So you collect the green stuff that gives you points."}, {"start": 193.72, "end": 197.88, "text": " So the goal might actually be in here or in here."}, {"start": 197.88, "end": 201.6, "text": " But you have to teach the algorithm to go all this way around."}, {"start": 201.6, "end": 207.8, "text": " And you do that by simply modinating it to go to new states by giving it a reward for"}, {"start": 207.8, "end": 209.44, "text": " every state it hasn't been."}, {"start": 209.44, "end": 211.24, "text": " So it starts exploring goes here."}, {"start": 211.24, "end": 215.72, "text": " And maybe the first episode it reaches here right before it is reset."}, {"start": 215.72, "end": 218.48000000000002, "text": " Usually reset after while like it bounces kind around."}, {"start": 218.48000000000002, "end": 220.32000000000002, "text": " It's like, ah, there's new stuff."}, {"start": 220.32000000000002, "end": 224.4, "text": " And then it goes here and it will explore kind of it."}, {"start": 224.4, "end": 229.64000000000001, "text": " And it will be motivated to explore because there's always this green stuff here."}, {"start": 229.64000000000001, "end": 235.76000000000002, "text": " So after a while here, whatever is purple has been explored right recently."}, {"start": 235.76, "end": 239.28, "text": " And with purple day mark, what has been recently explored all of this has been recently explored"}, {"start": 239.28, "end": 240.28, "text": " right."}, 
{"start": 240.28, "end": 242.07999999999998, "text": " So it is gone until here."}, {"start": 242.07999999999998, "end": 246.79999999999998, "text": " But usually you also have like a component that isn't purely seeking this green stuff."}, {"start": 246.79999999999998, "end": 249.92, "text": " But is also doing some kind of random exploration."}, {"start": 249.92, "end": 254.95999999999998, "text": " And so what happens what can happen in this algorithm is that if you at one of these times"}, {"start": 254.95999999999998, "end": 260.2, "text": " you start the episode here by chance, it actually goes into the other direction."}, {"start": 260.2, "end": 261.2, "text": " All right."}, {"start": 261.2, "end": 265.15999999999997, "text": " And then it's like, wow, there's all this green stuff over here right."}, {"start": 265.16, "end": 269.36, "text": " And then it's like, so much green stuff, right."}, {"start": 269.36, "end": 275.88000000000005, "text": " And then what usually happens is kind of forgets that there's green stuff over here."}, {"start": 275.88000000000005, "end": 279.04, "text": " So it explores all of this stuff around here."}, {"start": 279.04, "end": 283.20000000000005, "text": " It explores, explores, explores, there's no more stuff."}, {"start": 283.20000000000005, "end": 285.32000000000005, "text": " And then it's stuck, right."}, {"start": 285.32000000000005, "end": 287.68, "text": " It's stuck here."}, {"start": 287.68, "end": 290.28000000000003, "text": " And it says where, where am I going to go?"}, {"start": 290.28000000000003, "end": 294.8, "text": " Like I know over here, there's no more green stuff."}, {"start": 294.8, "end": 298.92, "text": " And over here, there doesn't appear to be any green stuff because it's forgotten about"}, {"start": 298.92, "end": 299.92, "text": " this."}, {"start": 299.92, "end": 302.92, "text": " So they claim these intrinsic motivation algorithms."}, {"start": 302.92, "end": 308.68, "text": " What they can lead to is you can detach from your frontier of new knowledge, right."}, {"start": 308.68, "end": 316.8, "text": " Like they can forget that there is that that here at one point they were here and the algorithm,"}, {"start": 316.8, "end": 322.8, "text": " what the algorithm did, it was it explored here until here and then it explored over here."}, {"start": 322.8, "end": 331.16, "text": " So it thinks that this thing over here is its most recent frontier of knowledge, right."}, {"start": 331.16, "end": 332.88, "text": " This is my state here."}, {"start": 332.88, "end": 336.6, "text": " This is where I go explore from, but there's nowhere to explore from, right."}, {"start": 336.6, "end": 343.0, "text": " What it should remember is that here it actually kind of jumped over by random chance."}, {"start": 343.0, "end": 344.0, "text": " I hope this makes sense."}, {"start": 344.0, "end": 348.92, "text": " This is called detachment of intrinsic motivation algorithms."}, {"start": 348.92, "end": 357.64000000000004, "text": " And it happens when you kind of give these points according to simply reaching new states."}, {"start": 357.64000000000004, "end": 361.84000000000003, "text": " And then another thing is what they call derailment."}, {"start": 361.84000000000003, "end": 365.08000000000004, "text": " And derailment is bit of a more subtle problem."}, {"start": 365.08000000000004, "end": 372.96000000000004, "text": " So in derailment, what happens is maybe you, maybe you've actually, let's say this same"}, {"start": 
372.96000000000004, "end": 374.20000000000005, "text": " situation."}, {"start": 374.2, "end": 379.96, "text": " You've discovered a promising state, right, by some miracle."}, {"start": 379.96, "end": 382.0, "text": " Here's the goal, right."}, {"start": 382.0, "end": 384.32, "text": " You've reached the goal."}, {"start": 384.32, "end": 386.28, "text": " You've done this by exploration."}, {"start": 386.28, "end": 389.36, "text": " You've explored a bunch and you've reached the goal."}, {"start": 389.36, "end": 393.52, "text": " Now the problem is, can you do it again, right."}, {"start": 393.52, "end": 396.2, "text": " Especially if the environment is a bit stochastic, right."}, {"start": 396.2, "end": 402.56, "text": " If there is noise, if the environment isn't always the same, can you actually learn how"}, {"start": 402.56, "end": 407.6, "text": " to do this robustly, like such that you can repeat your success."}, {"start": 407.6, "end": 414.16, "text": " And the derailment is the problem that often these algorithms, while they find promising"}, {"start": 414.16, "end": 420.6, "text": " things, they kind of struggle to robustly reach those promising states."}, {"start": 420.6, "end": 427.24, "text": " Go explore solves these problems in two separate phases, one for each, basically."}, {"start": 427.24, "end": 434.84000000000003, "text": " So what it does is in a phase one, it explores, right."}, {"start": 434.84000000000003, "end": 437.8, "text": " Explore and here's a crucial part, until solved."}, {"start": 437.8, "end": 444.44, "text": " So this is an explorer method that explores until the problem is solved with the focus"}, {"start": 444.44, "end": 448.28000000000003, "text": " on explore, right."}, {"start": 448.28000000000003, "end": 457.2, "text": " And then in stage two, robustify and biobastify means that if stage one has resulted in"}, {"start": 457.2, "end": 463.71999999999997, "text": " trajectories that have solved the game over that environment, then phase two is simply"}, {"start": 463.71999999999997, "end": 467.36, "text": " tasked with robustly finding those."}, {"start": 467.36, "end": 470.68, "text": " So let's look at phase one."}, {"start": 470.68, "end": 476.03999999999996, "text": " Phase one is kind of like think of dyke stress algorithm."}, {"start": 476.03999999999996, "end": 480.96, "text": " So in dyke stress algorithm, this is the shortest path algorithm in graphs."}, {"start": 480.96, "end": 485.88, "text": " So in dyke stress algorithm, you have a graph."}, {"start": 485.88, "end": 492.44, "text": " And you want to reach from the start, let's call this the start, and this is the end or"}, {"start": 492.44, "end": 493.84, "text": " the goal."}, {"start": 493.84, "end": 497.8, "text": " And the graph is connected with edges."}, {"start": 497.8, "end": 501.04, "text": " And these edges have usually, sometimes they have weights."}, {"start": 501.04, "end": 507.28, "text": " We can simply, the goal is how to go the shortest path from start to the end."}, {"start": 507.28, "end": 510.8, "text": " And what dyke stress algorithm does, it starts exploring."}, {"start": 510.8, "end": 512.16, "text": " So it's like it goes here."}, {"start": 512.16, "end": 513.16, "text": " All right."}, {"start": 513.16, "end": 514.64, "text": " And then I says, ah, this is a new state."}, {"start": 514.64, "end": 517.1999999999999, "text": " I reach this state in one step, all right."}, {"start": 517.1999999999999, "end": 518.36, "text": " Explore some more."}, {"start": 
518.36, "end": 520.36, "text": " I reach this state in two steps."}, {"start": 520.36, "end": 523.0, "text": " And then it's like, I reach this state in three steps."}, {"start": 523.0, "end": 524.0, "text": " OK."}, {"start": 524.0, "end": 525.0, "text": " But I can also go here."}, {"start": 525.0, "end": 527.6, "text": " I reach this state in one step."}, {"start": 527.6, "end": 529.4, "text": " In two, so I've already been here."}, {"start": 529.4, "end": 530.4, "text": " OK."}, {"start": 530.4, "end": 537.36, "text": " But then it can, you can say, OK, from here, I reach this state into this is a bad example."}, {"start": 537.36, "end": 540.28, "text": " Let's say we actually have to make a shortest path."}, {"start": 540.28, "end": 541.72, "text": " That this is the graph, right?"}, {"start": 541.72, "end": 543.36, "text": " So it reaches this state in two steps."}, {"start": 543.36, "end": 544.72, "text": " But then it explores this thing."}, {"start": 544.72, "end": 546.12, "text": " It's like, ah, wait a minute."}, {"start": 546.12, "end": 547.5600000000001, "text": " I've seen this state."}, {"start": 547.5600000000001, "end": 552.16, "text": " But before I've reached it in two steps, now I'm reaching it in one step."}, {"start": 552.16, "end": 553.16, "text": " This is better."}, {"start": 553.16, "end": 557.72, "text": " So this path here is better than this path here."}, {"start": 557.72, "end": 561.24, "text": " And then it goes on from here."}, {"start": 561.24, "end": 562.24, "text": " It goes on."}, {"start": 562.24, "end": 563.24, "text": " It says, OK."}, {"start": 563.24, "end": 566.36, "text": " I'm reaching this goal in two steps."}, {"start": 566.36, "end": 568.0, "text": " I've reached it in three steps."}, {"start": 568.0, "end": 573.12, "text": " So clearly, this bottom path here is better than what I've done before."}, {"start": 573.12, "end": 575.28, "text": " This top or this path."}, {"start": 575.28, "end": 579.48, "text": " So this is what Go Explorer does in a nutshell."}, {"start": 579.48, "end": 583.6, "text": " What it does is it has an archive of states, right?"}, {"start": 583.6, "end": 586.76, "text": " An archive of states that it has visited previously."}, {"start": 586.76, "end": 591.88, "text": " And the crucial thing here is, and this is kind of necessary to their algorithm, that"}, {"start": 591.88, "end": 593.72, "text": " this is completely deterministic."}, {"start": 593.72, "end": 600.12, "text": " So what they actually do is they will save the state of the game emulator, right?"}, {"start": 600.12, "end": 602.76, "text": " They are here, right?"}, {"start": 602.76, "end": 609.92, "text": " And they do some exploration by jumping some ball until their person is here, their game"}, {"start": 609.92, "end": 611.0, "text": " is in some state."}, {"start": 611.0, "end": 617.52, "text": " And they will save the emulator to a buffer."}, {"start": 617.52, "end": 619.04, "text": " This is kind of crucial."}, {"start": 619.04, "end": 625.28, "text": " Such that at a later point, they can select this, this exactly this state that they were"}, {"start": 625.28, "end": 626.28, "text": " in."}, {"start": 626.28, "end": 630.92, "text": " And from here, run a bunch of explorations again, right?"}, {"start": 630.92, "end": 636.88, "text": " So if they say select state from archive, and then go to that state, this is simply restoring"}, {"start": 636.88, "end": 638.28, "text": " the emulator state."}, {"start": 638.28, "end": 643.28, "text": " But you could 
also, what you could also do if this is a purely deterministic environment,"}, {"start": 643.28, "end": 648.8399999999999, "text": " you could simply save the sequence of actions that you've done to come here."}, {"start": 648.8399999999999, "end": 649.8399999999999, "text": " And simply buy it."}, {"start": 649.8399999999999, "end": 656.24, "text": " So maybe you go right, right, and here you jump, and you go right, you can simply replay"}, {"start": 656.24, "end": 659.4399999999999, "text": " those to get to the exact same state."}, {"start": 659.44, "end": 664.5200000000001, "text": " They discuss that this can be expanded to also handle kind of stochastic environments,"}, {"start": 664.5200000000001, "end": 670.2, "text": " but in their case, at the phase one, the environment is completely deterministic."}, {"start": 670.2, "end": 671.36, "text": " So they can do this."}, {"start": 671.36, "end": 676.9200000000001, "text": " They can go, sorry, they can go to a state deterministically."}, {"start": 676.9200000000001, "end": 679.08, "text": " So they'll select a state from an archive."}, {"start": 679.08, "end": 683.32, "text": " They have an algorithm for selecting kind of promising states."}, {"start": 683.32, "end": 686.2800000000001, "text": " They go to that state, and then they explore from that state."}, {"start": 686.2800000000001, "end": 688.36, "text": " And they simply do this random."}, {"start": 688.36, "end": 692.6800000000001, "text": " So this is random."}, {"start": 692.6800000000001, "end": 694.2, "text": " And then they update the archive."}, {"start": 694.2, "end": 695.6800000000001, "text": " So what do they do?"}, {"start": 695.6800000000001, "end": 696.6800000000001, "text": " Right?"}, {"start": 696.6800000000001, "end": 701.0, "text": " We saw, so here, maybe as a new graph."}, {"start": 701.0, "end": 706.6, "text": " So they go to a state, this is their state, and then they explore."}, {"start": 706.6, "end": 711.0, "text": " Now there are multiple things that can happen."}, {"start": 711.0, "end": 713.28, "text": " One, they can encounter a new state, right?"}, {"start": 713.28, "end": 715.24, "text": " New state, never seen before."}, {"start": 715.24, "end": 719.5600000000001, "text": " All right, what they do is they save it to the buffer."}, {"start": 719.5600000000001, "end": 723.4, "text": " They say, okay, this new state, let's call it n."}, {"start": 723.4, "end": 727.08, "text": " This new state I've reached it in, and here we have done s steps."}, {"start": 727.08, "end": 729.48, "text": " I've reached it in s plus one step."}, {"start": 729.48, "end": 733.28, "text": " And whatever here is the emulator state that we had before."}, {"start": 733.28, "end": 734.2, "text": " Right?"}, {"start": 734.2, "end": 736.84, "text": " So at any point, I can go back."}, {"start": 736.84, "end": 746.12, "text": " If however, the state has already been seen, let's call this m, they retrieve m, m prime"}, {"start": 746.12, "end": 748.0, "text": " from the buffer because they've already seen it."}, {"start": 748.0, "end": 749.4, "text": " It's in the buffer, right?"}, {"start": 749.4, "end": 752.76, "text": " They compare, hey, these steps."}, {"start": 752.76, "end": 762.36, "text": " So is s prime, is this smaller or larger than s plus one?"}, {"start": 762.36, "end": 770.52, "text": " So basically, I've seen this state before, but using this path, can I reach it in fewer"}, {"start": 770.52, "end": 772.8000000000001, "text": " steps than I've reached it 
before?"}, {"start": 772.8000000000001, "end": 776.76, "text": " If yes, then I'm going to replace this, right?"}, {"start": 776.76, "end": 781.6, "text": " Replace this s by s plus one, and then save it again in the buffer."}, {"start": 781.6, "end": 787.28, "text": " So I can, I can, I now have a better path to reach this state than before."}, {"start": 787.28, "end": 794.04, "text": " So it's almost exactly like the extra algorithm, and that you simply explore, and every new state"}, {"start": 794.04, "end": 796.52, "text": " you find, you've either already seen."}, {"start": 796.52, "end": 800.8399999999999, "text": " So you simply have a new way of getting to that state."}, {"start": 800.8399999999999, "end": 804.12, "text": " If you haven't seen it, you simply remember it."}, {"start": 804.12, "end": 806.36, "text": " And then you do it all again."}, {"start": 806.36, "end": 816.88, "text": " So you can, you can imagine with time, these number of states and this buffer will explode."}, {"start": 816.88, "end": 819.4, "text": " And it's not feasible for Montezuma's revenge."}, {"start": 819.4, "end": 820.92, "text": " Like, imagine this game, right?"}, {"start": 820.92, "end": 825.92, "text": " You have to, you have to go everywhere and explore everything, right?"}, {"start": 825.92, "end": 829.84, "text": " This, I mean, every single action here could be a state."}, {"start": 829.84, "end": 833.32, "text": " That's why, let me pause this."}, {"start": 833.32, "end": 840.12, "text": " That's why what they do is they, they have to come up with a notion of state that is,"}, {"start": 840.12, "end": 843.72, "text": " doesn't simply include every single game state there is."}, {"start": 843.72, "end": 845.48, "text": " And what they do is, this is sample here."}, {"start": 845.48, "end": 848.6, "text": " They down sample the image."}, {"start": 848.6, "end": 855.96, "text": " And then this, sorry, I've tried drawing over a blog post, they down sample the image."}, {"start": 855.96, "end": 860.9200000000001, "text": " And then they simply say, all right."}, {"start": 860.9200000000001, "end": 864.8000000000001, "text": " So this, this thing would become this thing."}, {"start": 864.8000000000001, "end": 870.28, "text": " And they simply say, okay, if two of these images have the same representation."}, {"start": 870.28, "end": 876.3199999999999, "text": " So gray scale, down sampled, quantized, then they are the same state."}, {"start": 876.3199999999999, "end": 878.92, "text": " And that's kind of the crux of the algorithm I find."}, {"start": 878.92, "end": 885.36, "text": " So if two things have the same state, then the algorithm is prone to kind of confusing"}, {"start": 885.36, "end": 886.36, "text": " them for each other."}, {"start": 886.36, "end": 893.9599999999999, "text": " If things, one is the other, not exactly, but it does kind of assume that they are close"}, {"start": 893.9599999999999, "end": 895.6, "text": " actually here."}, {"start": 895.6, "end": 897.8, "text": " But there is a crucial difference between the two."}, {"start": 897.8, "end": 902.1999999999999, "text": " The algorithm will have a very hard time in some situations."}, {"start": 902.1999999999999, "end": 907.16, "text": " I don't want to, like you can think of, it needs to be kind of convoluted situations,"}, {"start": 907.16, "end": 913.56, "text": " but it can be the kind of crux of the algorithm very much if this state representation isn't"}, {"start": 913.56, "end": 914.56, "text": " done well."}, 
{"start": 914.56, "end": 915.8, "text": " And they actually have two methods."}, {"start": 915.8, "end": 920.64, "text": " One simply relies on this down sampling and the other one they provide domain knowledge,"}, {"start": 920.64, "end": 927.3199999999999, "text": " which means kind of which level you're in, where the player is, and so on."}, {"start": 927.32, "end": 928.96, "text": " But this is pretty cool."}, {"start": 928.96, "end": 936.44, "text": " So if you are able, so if you're reinforcement learning problem, first of all, is deterministic"}, {"start": 936.44, "end": 944.9200000000001, "text": " and at least in a simulator."}, {"start": 944.9200000000001, "end": 957.24, "text": " And second allows for good state representations kind of for low dimensional state."}, {"start": 957.24, "end": 965.4, "text": " And if those two things are given, you can use Go Explorer."}, {"start": 965.4, "end": 968.84, "text": " And as I said, this representation here is key."}, {"start": 968.84, "end": 971.92, "text": " So now you know how they do it."}, {"start": 971.92, "end": 974.48, "text": " They simply explore these states."}, {"start": 974.48, "end": 981.4, "text": " And if they come on a new state, and every state is, is, is, so we don't mean this here."}, {"start": 981.4, "end": 984.36, "text": " We actually mean this representation of it."}, {"start": 984.36, "end": 988.24, "text": " They store it and they remember how to get to it."}, {"start": 988.24, "end": 994.36, "text": " And simply by exploring like this and having a smart algorithm that picks which state to"}, {"start": 994.36, "end": 1000.64, "text": " explore from, which of course is also a lot of domain knowledge, they are able to solve"}, {"start": 1000.64, "end": 1002.0, "text": " the game."}, {"start": 1002.0, "end": 1003.0, "text": " Right?"}, {"start": 1003.0, "end": 1007.48, "text": " So you see, it goes way past human expert."}, {"start": 1007.48, "end": 1011.04, "text": " And they are able to actually perform really well."}, {"start": 1011.04, "end": 1014.2, "text": " Simply by exploring, this is the exploration phase."}, {"start": 1014.2, "end": 1018.0400000000001, "text": " This is simply random exploration from promising states."}, {"start": 1018.0400000000001, "end": 1025.1200000000001, "text": " And then in the second part, in the second phase, they now robustify it."}, {"start": 1025.1200000000001, "end": 1029.68, "text": " So now they introduce noise into their environment."}, {"start": 1029.68, "end": 1034.32, "text": " Because usually environments have noise or some sort of stochasticity."}, {"start": 1034.32, "end": 1038.72, "text": " And they run imitation learning on the best trajectories they found."}, {"start": 1038.72, "end": 1044.4, "text": " And what that does is what they do is they have a trajectory."}, {"start": 1044.4, "end": 1046.8, "text": " Let's say, this is a trajectory."}, {"start": 1046.8, "end": 1050.24, "text": " These are actions you need to reach this goal state."}, {"start": 1050.24, "end": 1054.44, "text": " This imitation learning algorithm, what they do is they take a few steps back, say here."}, {"start": 1054.44, "end": 1058.92, "text": " And they just use imitation learning, which is basically a form of reinforcement learning"}, {"start": 1058.92, "end": 1061.1200000000001, "text": " to reach the goal state from here."}, {"start": 1061.1200000000001, "end": 1063.72, "text": " Simply reach the goal state."}, {"start": 1063.72, "end": 1066.1200000000001, "text": " Once and under 
noise."}, {"start": 1066.1200000000001, "end": 1068.68, "text": " So you can't just take the exact same actions."}, {"start": 1068.68, "end": 1073.3200000000002, "text": " Once this has been learned, back up a few more steps, maybe here."}, {"start": 1073.3200000000002, "end": 1076.0800000000002, "text": " And then try to reach the goal state."}, {"start": 1076.0800000000002, "end": 1078.92, "text": " Now you've already learned how to do this part."}, {"start": 1078.92, "end": 1085.0800000000002, "text": " So this bigger part should become, should be easier than simply starting from here."}, {"start": 1085.0800000000002, "end": 1090.76, "text": " And you do that until you've kind of backed up your entire trajectory."}, {"start": 1090.76, "end": 1094.2, "text": " This is a well-known method from imitation learning."}, {"start": 1094.2, "end": 1099.4, "text": " But usually this red thing is a human demonstration."}, {"start": 1099.4, "end": 1103.2, "text": " But now this red trajectory has been found by GoExplore."}, {"start": 1103.2, "end": 1108.24, "text": " It turns out if you have a bunch of these trajectories from GoExplore, you can do a pretty good"}, {"start": 1108.24, "end": 1109.24, "text": " job at that."}, {"start": 1109.24, "end": 1113.88, "text": " Alright, that's basically all that I wanted to say about GoExplore."}, {"start": 1113.88, "end": 1116.16, "text": " It's basically Dijkstra's algorithm."}, {"start": 1116.16, "end": 1120.2, "text": " It works under very specific circumstances, but I think it's super promising and it's"}, {"start": 1120.2, "end": 1123.24, "text": " kind of a new way of thinking about it."}, {"start": 1123.24, "end": 1128.04, "text": " So the video I've shown is actually GoExplore solving Montezuma's revenge, getting like a"}, {"start": 1128.04, "end": 1129.32, "text": " new high score."}, {"start": 1129.32, "end": 1136.92, "text": " And you can see how skilled this algorithm becomes."}, {"start": 1136.92, "end": 1165.44, "text": " Alright, with that, I say goodbye and hope to see you next time."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=waK7AD-AEyc | NeurIPS 19 Poster Session | I'm at the poster session and the amount of people here is just crazy | Hi there, we are here at the NERV 2019 poster session, one of the poster sessions specifically. There are two poster sessions a day, three days, so this is day two the first poster session. It's technically lunchtime so most people are out, but you can see there are still so many people here. There are about 250 posters in this room and every poster has like a ball of people around it and this isn't even like this is not peak time. Yesterday they didn't even let people into this room. You're like that's the kind of the only reason you come to the conference to actually talk to the people doing the work, but it's almost impossible because they're constantly trying to explain. I mean look at this, they're constantly trying to explain. They're work to about 20 people at a time asking any meaningful meaningful questions and getting into a conversation. It's almost impossible. And yeah it's about 10 degrees warmer in here than outside. It is sweaty, it smells, it's absolutely beautiful. Yeah it's, I don't know, like there is a kind of a feeling in the air that this is a bubble. There's a sheer amount of people attending this. It's crazy and I don't know what it looks like in a few years. Maybe this is peak or maybe it's just going to grow and grow and grow. I don't know, but yeah so you can see what it looks like and maybe I've described well what it feels like to be here. Alright so with that I am going to dive in and bye bye. | [{"start": 0.0, "end": 7.640000000000001, "text": " Hi there, we are here at the NERV 2019 poster session, one of the poster sessions"}, {"start": 7.640000000000001, "end": 12.96, "text": " specifically. There are two poster sessions a day, three days, so this is day two"}, {"start": 12.96, "end": 16.72, "text": " the first poster session. It's technically lunchtime so most people are out, but"}, {"start": 16.72, "end": 22.6, "text": " you can see there are still so many people here. There are about 250 posters in"}, {"start": 22.6, "end": 28.560000000000002, "text": " this room and every poster has like a ball of people around it and this isn't"}, {"start": 28.56, "end": 33.28, "text": " even like this is not peak time. Yesterday they didn't even let people into this"}, {"start": 33.28, "end": 39.56, "text": " room. You're like that's the kind of the only reason you come to the conference"}, {"start": 39.56, "end": 44.16, "text": " to actually talk to the people doing the work, but it's almost impossible because"}, {"start": 44.16, "end": 47.879999999999995, "text": " they're constantly trying to explain. I mean look at this, they're constantly trying to"}, {"start": 47.879999999999995, "end": 54.4, "text": " explain. They're work to about 20 people at a time asking any meaningful"}, {"start": 54.4, "end": 60.239999999999995, "text": " meaningful questions and getting into a conversation. It's almost impossible."}, {"start": 60.239999999999995, "end": 67.56, "text": " And yeah it's about 10 degrees warmer in here than outside. It is sweaty, it"}, {"start": 67.56, "end": 75.12, "text": " smells, it's absolutely beautiful. Yeah it's, I don't know, like there is a"}, {"start": 75.12, "end": 82.75999999999999, "text": " kind of a feeling in the air that this is a bubble. There's a sheer amount of"}, {"start": 82.76, "end": 89.32000000000001, "text": " people attending this. 
It's crazy and I don't know what it looks like in a few"}, {"start": 89.32000000000001, "end": 93.72, "text": " years. Maybe this is peak or maybe it's just going to grow and grow and grow. I"}, {"start": 93.72, "end": 100.68, "text": " don't know, but yeah so you can see what it looks like and maybe I've described"}, {"start": 100.68, "end": 107.72, "text": " well what it feels like to be here. Alright so with that I am going to dive in"}, {"start": 107.72, "end": 114.72, "text": " and bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=RrvC8YW0pT0 | Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions | Schmidhuber thinking outside the box! Upside-Down RL turns RL on its head and constructs a behavior function that uses the desired reward as an input. The new paradigm shows surprising performance compared to classic RL algorithms.
Abstract:
We transform reinforcement learning (RL) into a form of supervised learning (SL) by turning traditional RL on its head, calling this Upside Down RL (UDRL). Standard RL predicts rewards, while UDRL instead uses rewards as task-defining inputs, together with representations of time horizons and other computable functions of historic and desired future data. UDRL learns to interpret these input observations as commands, mapping them to actions (or action probabilities) through SL on past (possibly accidental) experience. UDRL generalizes to achieve high rewards or other goals, through input commands such as: get lots of reward within at most so much time! A separate paper [61] on first experiments with UDRL shows that even a pilot version of UDRL can outperform traditional baseline algorithms on certain challenging RL problems. We also introduce a related simple but general approach for teaching a robot to imitate humans. First videotape humans imitating the robot's current behaviors, then let the robot learn through SL to map the videos (as input commands) to these behaviors, then let it generalize and imitate videos of humans executing previously unknown behavior. This Imitate-Imitator concept may actually explain why biological evolution has resulted in parents who imitate the babbling of their babies.
Author: Juergen Schmidhuber
https://arxiv.org/abs/1912.02875
https://arxiv.org/abs/1912.02877 | He did it. Crazy son of a bitch. Did it again. What am I talking about? Jürgen Schmidhuber, reinforcement learning upside down. New paper just dropped on the verge of the NURRIPS conference being presented at a workshop here. Presenting upside down reinforcement learning. I am pumped for this one. Can you tell? So it says we transform reinforcement learning into a form of supervised learning by turning traditional RL on its head, calling this RL. R. What do we call it? Call this. Let's just call it LAR. Upside down reinforcement learning. And so this is upside down. This never mind. OK, let's just check out how it works. So I'm going to breathe over. Before we go into this paper. All right. Let's say you have a reinforcement learning problem. Let's say in Atari game, for example. And in an Atari game, you usually have a screen. And let's just say you're playing this Marine commander. So there's water here. And there might be a bunch of here's your boat. There's a boat. A little boat. There might be a bunch of opponents right here. Fishy fish opponents, fishy fish opponents, and so on. And there are a bunch of gold coins like here. That's a big gold coin. And you're kind of supposed to, I think you're supposed to go get air. You have some air meter over here. Whatever. So there's this Atari game. I'm supposed to get the reward, which is this coin here. And stay alive as long as possible and so on. So this is a classic reinforcement learning problem. And there are various techniques for this. We've looked a couple of them. And what upside down reinforcement learning does is basically what you do is you want to transform this input to a new representation, which basically, well, if I can, maybe I can, let me get this correctly. So then there's this over here. And then there's a little fishy, little fishy here. And there's a coin right here. So what you want to do is basically turn this input on its head like upside down. And so this way is kind of up or down, whatever in this new representation. And if you actually learn on this new representation with pretty the same techniques, it works much better than the classic RL setting. And this is not only for these Atari games. This appears to hold throughout the RL space. So in robotics, like if you have a robot, or whatever, this is a robot. It has a square head, as you can tell. It's supposed to open a door. You've seen this DARPA challenge. This doesn't work, right? But if you just transform this and actually turn the robot upside down, the robot will be able to open the door just fine. And even if you have a chess board, and there's like a bunch of pieces on it, the problem in this case is you have to, so you need to simulate this chess board. And if you turn this, if you turn this around now, you basically, all the pieces will fall off. So what you need to do is you need to have a simulator that encodes a magnetic chess board, such that the pieces don't fall off. So it's a bit of programming effort. But if you do that, all right, I'm kidding. It is a new paradigm for RL, but it's unfortunately, it's not as good. Those someone should try the magnetic chess board simulator. Upside down, RL is a new paradigm for RL, where basically the kind of notion of inputs and outputs of the RL algorithm are switched around a bit. So basic ideas here is that you have an RL algorithm that is also fed with a bunch of commands. So in classic RL, what you'll have, and let's actually go back to this Atari game here. 
In classic RL, an RL algorithm will get the Atari game as screen as an input, and is asked from this to predict a bunch of outputs. So in classic Atari, these are eight actions. I'm going to draw three here. Like go to the left, go to the right, or press the button for shoot. These are the actions you have, and the algorithm is tasked. And there are different versions of this. In policy methods, policy gradient methods, typically the algorithm is tasked with outputting a distribution over these actions. With in other methods, like value learning, Q learning, the algorithm is tasked with assigning each of these actions a value. So it's a in this situation, going to the left will be worth three in the future. This going to the right will be worth negative one, and shooting will be worth zero. So you might want to go with this action here. Right. Now in upside down reinforcement learning, so we've had observation going into the model, and the model coming up with the value estimation of the different actions. In upside down reinforcement learning, you'll have the observation and something else going into the model and the model coming up with an action. Right. And there's something else is the key. What you input here is your desire, your future desire, and in this paper they call it a command. So you'll have a command as an input, together with the observations. You basically say, here's my state, and I would like to achieve, let's say, five reward in the next five reward in the next two time steps. Right. Make this happen. Right. This is your command going into the model. And the model will then try to find actions such that in the next two time steps, you'll get five reward. You can easily see a model that learns this will actually be able to do various things. It's including doing the classic RL things like get as much reward as possible in a given or in the shortest amount of time, but can also do much more. And in the general sense, the difference is how this is trained now. This model, when you train it, as you can see, you don't, it's not trained with having in mind kind of only to get the maximum reward. It is trained to be much more a general kind of understanding of the world, learning what do I need to do to achieve a variety of goals. Specifically, what you want to do to train this is the following. Say you have a method of moving in the world and collecting traces. So you go from state, state one, state two, state three, you go with like action one, action two, let's draw action three, the state four. And in each of these, you get rewards, right? Reward one, reward two, reward three. Now, this in classic RL, this will kind of give you one training example, right? So this is if you consider this to be an episode, this will give you one training example to run the sequence of actions. Upside down RL, you can actually consider this as many, many training examples. And here's what I mean. So if you, for example, start at state one, you can say, aha, within one time step, one, one time step, I go to state two. And I have achieved R1 rewards by doing action A1, right? So this now can be an input to your model. Your model could learn if you get as an observation, and remember the previous thing, as an observation, you get S1. As a command, you get, I want to achieve in one time step R1 reward, right? And you train, this goes into the model. And the model is trained to say A1. Given if I am in S1, and I do A1, I will achieve that, right? So you train the model to give A1 as an output. 
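To make this concrete, here is a minimal Python sketch of how one recorded episode is turned into many supervised training pairs for upside-down RL. The function name and the (desired return, desired horizon) tuple layout are illustrative choices, not the paper's notation.

```python
def udrl_training_examples(states, actions, rewards):
    """Turn one recorded episode into supervised (input -> target) pairs.
    Input: the observation at time t plus a command "achieve this much return
    within this many steps"; target: the action that was actually taken at t."""
    examples = []
    T = len(actions)
    for t in range(T):
        for horizon in range(1, T - t + 1):
            desired_return = sum(rewards[t:t + horizon])   # return actually obtained
            command = (desired_return, horizon)
            examples.append(((states[t], command), actions[t]))
    return examples

# A three-step episode (s1 -a1-> s2 -a2-> s3 -a3-> s4) yields 3 + 2 + 1 = 6 examples,
# e.g. ((s1, (r1, 1)), a1) and ((s1, (r1 + r2, 2)), a1), matching the description above.
```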
And this is valid because in the past, you've observed going from S1 using A1 to a state where you get this kind of reward in this kind of time. But you can also, so you can do all of these single steps. They will all provide individual training examples to your model. But then also you can consider two step things. You can say, I'm in state S1, and I go in two time steps, I have achieved R1 plus R2 reward by doing actions A1, then A2, right? And A2, I'm going to do in parents here, because what you want to do is you want to always, always consider the action that comes right after where you are now. So again, your training sample, let me draw this up here maybe, your training sample would be the following. I am in state S1, this would be my observation. My command would be I would like to achieve in two time steps, reward R1 plus R2 reward, right? This reward, this both goes into the model, right? You tell the model, please, given the state S1, achieve this reward in this time, and the model is supposed to output A1, saying, aha, in the past I wasn't the state, and I did achieve this goal by using that. So the model is supposed to learn to achieve different goals, right? So now you can not only train from good episodes, right? You can train from any episode, any episode, usually in classic RL, you kind of want to focus on the good episodes because you want to maximize your reward, but here you can tell the model, hey, if you've done something particularly stupid, let's say here in S3, you've done something the A3 was particularly stupid, gave you, so R3 here was really bad reward, like a negative 5 billion trillion. And you can actually train the model to recognize this, and there's a, hey, look, if you are in S3, and within one time step you want to achieve negative 5 billion, billion, billion trillion reward, all you have to do is action A3, right? And then the cool thing now is, if you are at evaluation time, you actually want the big reward, what you'll do is you simply plug in a different command, you simply, in one time step, still I'm in state S3, in one time step, I want to achieve actually three reward, not negative a lot, right? And the model will have learned that A3 will lead to a situation where you get a lot of negative rewards, so the model will be like, I'm for sure not going to do A3, right? I'm going to do something else here because I have learned to map A3 to like this really low reward. So in essence, this has connections to things like hindsight experience replay, and kind of universal value function, where you can learn to go from any state to any other state, in this, but none of these have this kind of command, what the Schmiedubert calls command here, as an input to the model. And I think actually this is really positive to input this, because usually in universal value functions, what you would say is, let's consider a simple grid world, right? Whatever your agent is here, and you need to reach a goal that's down here, but you might not be able to learn it because it's super sparse reward and so on. But what you can do is you can learn to reach this position and this position and this position from various positions like go here, go from here to here, you can learn to go from here to here. And in essence, you would like it eventually to generalize to all the fields. 
So you basically learn to go from any position to any other position with your agent, with these universal value or universal policy functions, having sub goals, but they during that phase where they learn to go from anything to anything, they don't necessarily include this reward thing as an input. It's more like kind of either a sub goal or like the value function will simply approximate the reward whereas in this technique, we actually have a policy learning. We actually output an action value. Also, hindsight experience replay, what hindsight experience replay would do in the same situation, right? You're here. We might do a videos on this in the future. You're here and you try, right? And your agent actually, I'd ends up here, right? Ends up right here. What you can do is you can simply say, oh, well, actually this was my goal all along and then simply train your model as if this thing here was your goal all along and not this thing here. And treated as kind of a positive reward for this, at least that's how I understand it. Right, and both of these things are quite different than here where we have this command as inputs. And I do like it. So I think this is very much the basic things. Here it is extrapolated to kind of noisy inputs and noisy environments and so on. But this is the basic, the basic gist of it. So here you see what you will learn is to map all and all is your representation of your input. So the screen, for example, or the chessboard and I think also kind of the last action and the reward you get in this step plus your horizon and desire. So in how much time you would like to achieve how much reward and then you can also get input some extra goals that you have. And so you can see basically any episode that you've run in the past will give you a valid training example for this your model will simply learn to match the previous experience with the goals that were achieved in the previous experience. So there is lots of lots of generalizations here. Like how exactly these things are represented. This time horizon can be a high dimensional object. The desire can be as I understand it. Some what a high dimensional object, the extra commands can be like conditionals on these two things. It gets very complicated. But I want to jump ahead to a different paper where so this paper is basically just describing the algorithm and then the next paper is doing experiments with this. Let's scroll past here. All right, so this paper training agents using up that down reinforcement learning released on the same day, but different authors that have used so I'll show you who is also here, but have used this to implement a variant of this. And here you see again what I was trying to explain. So in traditional RL, this is especially your Q learning, you'll have this function will sketch an observation as input and then Q learning, especially you also get the action as an input you're supposed to say for the given observation. This particular action has this expected value as a return. Right, that's what I explained in the beginning. That's kind of value-based reinforcement learning. Whereas the behavior function here, which would be upside down reinforcement learning, gets the observation and a command and will map that to an action. And here again is what we've gone over. This is a bit of a different thing. So this agent has apparently run two different episodes. One point it did this sequence of actions and at the other point from the same starting state, this sequence of action. 
And you can see here on the right all the training samples we can derive from this. So we can say from state S0, right, if I want to return in one time step, I have experiences in the past, right, to return in one time step, all I have to do is take action a1. But if I want one return in one time step, I have to take action a2. And you teach your behavior function to learn these things, to learn to output these actions with these things here as inputs. And then what you hope of course is that this will generalize, that it will learn to generalize, that you can say, now give me more reward than I have ever seen before, right? And it will kind of learn which things correspond to lower reward, which things correspond to higher reward, and will be able to extrapolate, which things will correspond to even higher reward, sorry. So they have two algorithms, and this is kind of, this is reminiscent of the old RL kind of world where you do kind of one algorithm is continuously learning from the experience gathered by another algorithm. So you have one set of algorithms, and this even in modern RL, this is how it's done, right? You have two different boxes, right? Actually, you have probably one box learning the model, like this is done, represent this here, learner, right? And the learner distributes the model to many, many machines interacting with the simulators, and these machines, all they do is run episodes with the learned model, and they will send back their experience here, and then the learner can learn from it, and then at the end, send it again. So, all right, here we go. So in each step, what we do in order to generate a new episode, we don't always wanna kind of execute one given policy. What we do is we sample from the end of the replay buffer, and the replay buffer is sorted by returns, right? So the highest return episodes are on top. So we wanna sample the highest return episodes. Then we want to say maybe some of them are 10 steps long, maybe some of them are five steps long, and so on. So we set the horizon to be the mean of the lengths of these, and we set the desired return, how much return should be achieved in this time, to be the unit, to sample from the uniform distribution between M and M plus S, and M is the mean, and S is the standard deviation of the selected episode of returns. So what this means is, here's a bunch of episodes from, oh, they started at the same time. Here's a bunch of episodes that I ran, right, from here is time zero, and then time goes on, that I ran, that had really high returns, right? Now, I'm gonna take the mean time that these episodes ran, like this, this is maybe five time steps. So in five time, I want to achieve, now how much reward, now you look at all the rewards that were achieved, this is maybe a distribution that has some mean here, like so, and then you say, I want to achieve a reward between here and one standard deviation higher than here. So, right, and this would be the reward you want to achieve. So what you do is, you kind of push your learned model to just go a bit beyond what it has seen so far, is basically say, look, you can do this, but you can just do a bit more in the same amount of time. Please do this, and you hope the model has learned to kind of generalize to do this. And if so, you will execute these episodes, and then these episodes will go back to the learner, right? 
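The exploratory command sampling just described can be sketched in a few lines of Python. The buffer layout, the choice of k, and the field names are assumptions for illustration; the key point is that the desired return is pushed slightly beyond what the best stored episodes actually achieved.

```python
import numpy as np

def sample_exploratory_command(replay_buffer, k=25):
    """Pick the command for the next exploratory episode, as described above:
    take the k highest-return episodes from the buffer, set the horizon to their
    mean length, and draw the desired return uniformly between the mean M and
    M + S (one standard deviation) of their returns."""
    top = sorted(replay_buffer, key=lambda ep: sum(ep["rewards"]))[-k:]
    lengths = np.array([len(ep["rewards"]) for ep in top], dtype=float)
    returns = np.array([sum(ep["rewards"]) for ep in top], dtype=float)
    desired_horizon = lengths.mean()
    desired_return = np.random.uniform(returns.mean(), returns.mean() + returns.std())
    return desired_return, desired_horizon
```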
We'll go back to the learner here, and the learner will learn from them, and hopefully then you can generalize even more, and then you can say, I now know how to achieve this bit more reward. Now, if I run the episode, I will achieve even more reward, I can push the model even further, right? So at eval time, you can always ask the model to produce as much reward as possible in the given time. And of course, every episode sent back here is not only one training example as we saw, but many, many training examples can be derived from these episodes, even beyond what's in this paper. All right, so I think this was a good first shot at describing this algorithm. I hope you get the gist of it. I enjoy this. A bit of a criticism for me would be that it still doesn't really touch the exploration dilemma. So it, again, deals with kind of incrementally getting better. Whereas I feel this can easily get stuck in some minimum where it's not possible to do this incremental generalization of the model, where you really need a new approach, and that's why games like Montezuma's revenge are solved using algorithms like Go Explore and not any of the classic algorithms. That being said, they have experiments where they show that especially in sparse reward environments, they do better than classic RL algorithms. So if you, for example, take the lunar lander here, where A2C beats upside down RL, I guess they didn't get [unclear] to do the upside down RL. Well, in other environments upside down RL clearly beats the classic algorithms. And what I like here is they took lunar lander, which basically gives you a reward at every time step, and they hypothesize, okay, this is really good for these classic algorithms that do reward maximization instead of kind of learning this general behavior function. And what they did is they modified the game such that all the reward is given at the end of the episode. And then you see that upside down RL will actually outperform the classic things here. It's exactly the same game. You just get the reward at the end. So upside down RL kind of learns the structure of the world, learns that you get this reward at the end after such and such many time steps. So it will learn, oh, please get me zero reward in 50 time steps, like no problem, but please get me a thousand rewards in a hundred time steps. No problem, I just go to the end of the episode, right? Whereas these pure reward maximization techniques, they somehow have a harder time to do that. I like this investigation. I like the thinking outside the box, the Schmidhuberism of the paper. It's just all great. It's a great time to be alive and check this out. And I'll see you, bye bye. 
| [{"start": 0.0, "end": 3.8000000000000003, "text": " He did it."}, {"start": 3.8000000000000003, "end": 5.32, "text": " Crazy son of a bitch."}, {"start": 5.32, "end": 6.88, "text": " Did it again."}, {"start": 6.88, "end": 8.8, "text": " What am I talking about?"}, {"start": 8.8, "end": 13.52, "text": " J\u00fcrgen Schmidhuber, reinforcement learning upside down."}, {"start": 13.52, "end": 18.52, "text": " New paper just dropped on the verge of the NURRIPS conference"}, {"start": 18.52, "end": 20.88, "text": " being presented at a workshop here."}, {"start": 20.88, "end": 23.76, "text": " Presenting upside down reinforcement learning."}, {"start": 23.76, "end": 25.04, "text": " I am pumped for this one."}, {"start": 25.04, "end": 26.8, "text": " Can you tell?"}, {"start": 26.8, "end": 31.68, "text": " So it says we transform reinforcement learning"}, {"start": 31.68, "end": 33.56, "text": " into a form of supervised learning"}, {"start": 33.56, "end": 38.16, "text": " by turning traditional RL on its head, calling this RL."}, {"start": 38.16, "end": 39.760000000000005, "text": " R. What do we call it?"}, {"start": 39.760000000000005, "end": 40.56, "text": " Call this."}, {"start": 40.56, "end": 42.68, "text": " Let's just call it LAR."}, {"start": 42.68, "end": 45.56, "text": " Upside down reinforcement learning."}, {"start": 45.56, "end": 49.6, "text": " And so this is upside down."}, {"start": 49.6, "end": 52.72, "text": " This never mind."}, {"start": 52.72, "end": 57.56, "text": " OK, let's just check out how it works."}, {"start": 57.56, "end": 59.96, "text": " So I'm going to breathe over."}, {"start": 59.96, "end": 62.8, "text": " Before we go into this paper."}, {"start": 62.8, "end": 63.8, "text": " All right."}, {"start": 63.8, "end": 67.72, "text": " Let's say you have a reinforcement learning problem."}, {"start": 67.72, "end": 70.0, "text": " Let's say in Atari game, for example."}, {"start": 70.0, "end": 73.2, "text": " And in an Atari game, you usually have a screen."}, {"start": 73.2, "end": 77.08, "text": " And let's just say you're playing this Marine commander."}, {"start": 77.08, "end": 79.56, "text": " So there's water here."}, {"start": 79.56, "end": 84.76, "text": " And there might be a bunch of here's your boat."}, {"start": 84.76, "end": 85.56, "text": " There's a boat."}, {"start": 85.56, "end": 86.36, "text": " A little boat."}, {"start": 86.36, "end": 88.56, "text": " There might be a bunch of opponents right here."}, {"start": 88.56, "end": 91.8, "text": " Fishy fish opponents, fishy fish opponents,"}, {"start": 91.8, "end": 92.72, "text": " and so on."}, {"start": 92.72, "end": 95.28, "text": " And there are a bunch of gold coins like here."}, {"start": 95.28, "end": 97.04, "text": " That's a big gold coin."}, {"start": 97.04, "end": 99.68, "text": " And you're kind of supposed to, I think you're"}, {"start": 99.68, "end": 101.88, "text": " supposed to go get air."}, {"start": 101.88, "end": 103.96000000000001, "text": " You have some air meter over here."}, {"start": 103.96000000000001, "end": 104.48, "text": " Whatever."}, {"start": 104.48, "end": 106.92, "text": " So there's this Atari game."}, {"start": 106.92, "end": 108.44, "text": " I'm supposed to get the reward, which"}, {"start": 108.44, "end": 111.28, "text": " is this coin here."}, {"start": 111.28, "end": 114.2, "text": " And stay alive as long as possible and so on."}, {"start": 114.2, "end": 116.6, "text": " So this is a classic reinforcement learning problem."}, {"start": 116.6, "end": 
118.92, "text": " And there are various techniques for this."}, {"start": 118.92, "end": 120.84, "text": " We've looked a couple of them."}, {"start": 120.84, "end": 123.08, "text": " And what upside down reinforcement learning does"}, {"start": 123.08, "end": 124.96, "text": " is basically what you do is you want"}, {"start": 124.96, "end": 130.88, "text": " to transform this input to a new representation, which basically,"}, {"start": 130.88, "end": 138.64, "text": " well, if I can, maybe I can, let me get this correctly."}, {"start": 138.64, "end": 142.2, "text": " So then there's this over here."}, {"start": 142.2, "end": 145.79999999999998, "text": " And then there's a little fishy, little fishy here."}, {"start": 145.79999999999998, "end": 148.04, "text": " And there's a coin right here."}, {"start": 148.04, "end": 151.24, "text": " So what you want to do is basically turn this input"}, {"start": 151.24, "end": 153.56, "text": " on its head like upside down."}, {"start": 153.56, "end": 157.28, "text": " And so this way is kind of up or down,"}, {"start": 157.28, "end": 159.24, "text": " whatever in this new representation."}, {"start": 159.24, "end": 163.96, "text": " And if you actually learn on this new representation"}, {"start": 163.96, "end": 168.20000000000002, "text": " with pretty the same techniques, it works much better"}, {"start": 168.20000000000002, "end": 169.84, "text": " than the classic RL setting."}, {"start": 169.84, "end": 172.64000000000001, "text": " And this is not only for these Atari games."}, {"start": 172.64000000000001, "end": 177.64000000000001, "text": " This appears to hold throughout the RL space."}, {"start": 177.64000000000001, "end": 180.48000000000002, "text": " So in robotics, like if you have a robot,"}, {"start": 180.48000000000002, "end": 181.84, "text": " or whatever, this is a robot."}, {"start": 181.84, "end": 185.48000000000002, "text": " It has a square head, as you can tell."}, {"start": 185.48000000000002, "end": 187.08, "text": " It's supposed to open a door."}, {"start": 187.08, "end": 188.60000000000002, "text": " You've seen this DARPA challenge."}, {"start": 188.6, "end": 190.76, "text": " This doesn't work, right?"}, {"start": 190.76, "end": 198.92, "text": " But if you just transform this and actually turn the robot upside down,"}, {"start": 198.92, "end": 202.24, "text": " the robot will be able to open the door just fine."}, {"start": 202.24, "end": 205.92, "text": " And even if you have a chess board, and there's"}, {"start": 205.92, "end": 209.28, "text": " like a bunch of pieces on it, the problem in this case"}, {"start": 209.28, "end": 212.88, "text": " is you have to, so you need to simulate this chess board."}, {"start": 212.88, "end": 215.48, "text": " And if you turn this, if you turn this around now,"}, {"start": 215.48, "end": 218.28, "text": " you basically, all the pieces will fall off."}, {"start": 218.28, "end": 221.36, "text": " So what you need to do is you need to have a simulator"}, {"start": 221.36, "end": 224.16, "text": " that encodes a magnetic chess board,"}, {"start": 224.16, "end": 226.84, "text": " such that the pieces don't fall off."}, {"start": 226.84, "end": 228.32, "text": " So it's a bit of programming effort."}, {"start": 228.32, "end": 231.52, "text": " But if you do that, all right, I'm kidding."}, {"start": 234.84, "end": 239.72, "text": " It is a new paradigm for RL, but it's unfortunately,"}, {"start": 239.72, "end": 240.4, "text": " it's not as good."}, {"start": 240.4, "end": 244.6, 
"text": " Those someone should try the magnetic chess board simulator."}, {"start": 244.6, "end": 249.07999999999998, "text": " Upside down, RL is a new paradigm for RL,"}, {"start": 249.07999999999998, "end": 255.16, "text": " where basically the kind of notion of inputs and outputs"}, {"start": 255.16, "end": 259.64, "text": " of the RL algorithm are switched around a bit."}, {"start": 259.64, "end": 270.08, "text": " So basic ideas here is that you have an RL algorithm that"}, {"start": 270.08, "end": 272.48, "text": " is also fed with a bunch of commands."}, {"start": 272.48, "end": 275.28000000000003, "text": " So in classic RL, what you'll have,"}, {"start": 275.28000000000003, "end": 279.44, "text": " and let's actually go back to this Atari game here."}, {"start": 279.44, "end": 283.64000000000004, "text": " In classic RL, an RL algorithm will get the Atari game"}, {"start": 283.64000000000004, "end": 287.44, "text": " as screen as an input, and is asked from this"}, {"start": 287.44, "end": 290.36, "text": " to predict a bunch of outputs."}, {"start": 290.36, "end": 293.64000000000004, "text": " So in classic Atari, these are eight actions."}, {"start": 293.64000000000004, "end": 294.6, "text": " I'm going to draw three here."}, {"start": 294.6, "end": 297.12, "text": " Like go to the left, go to the right,"}, {"start": 297.12, "end": 300.48, "text": " or press the button for shoot."}, {"start": 300.48, "end": 305.68, "text": " These are the actions you have, and the algorithm is tasked."}, {"start": 305.68, "end": 307.68, "text": " And there are different versions of this."}, {"start": 307.68, "end": 310.6, "text": " In policy methods, policy gradient methods,"}, {"start": 310.6, "end": 313.20000000000005, "text": " typically the algorithm is tasked with outputting"}, {"start": 313.20000000000005, "end": 314.84000000000003, "text": " a distribution over these actions."}, {"start": 314.84000000000003, "end": 319.52000000000004, "text": " With in other methods, like value learning, Q learning,"}, {"start": 319.52000000000004, "end": 321.28000000000003, "text": " the algorithm is tasked with assigning"}, {"start": 321.28000000000003, "end": 323.64000000000004, "text": " each of these actions a value."}, {"start": 323.64000000000004, "end": 327.0, "text": " So it's a in this situation, going to the left"}, {"start": 327.0, "end": 330.12, "text": " will be worth three in the future."}, {"start": 330.12, "end": 334.08, "text": " This going to the right will be worth negative one,"}, {"start": 334.08, "end": 336.28000000000003, "text": " and shooting will be worth zero."}, {"start": 336.28000000000003, "end": 341.28000000000003, "text": " So you might want to go with this action here."}, {"start": 341.28000000000003, "end": 342.28000000000003, "text": " Right."}, {"start": 342.28000000000003, "end": 344.44, "text": " Now in upside down reinforcement learning,"}, {"start": 344.44, "end": 349.32, "text": " so we've had observation going into the model,"}, {"start": 349.32, "end": 353.12, "text": " and the model coming up with the value estimation"}, {"start": 353.12, "end": 355.48, "text": " of the different actions."}, {"start": 355.48, "end": 357.6, "text": " In upside down reinforcement learning,"}, {"start": 357.6, "end": 362.44, "text": " you'll have the observation and something else going"}, {"start": 362.44, "end": 366.28000000000003, "text": " into the model and the model coming up with an action."}, {"start": 366.28000000000003, "end": 367.28000000000003, "text": " Right."}, 
{"start": 367.28000000000003, "end": 368.88, "text": " And there's something else is the key."}, {"start": 368.88, "end": 374.12, "text": " What you input here is your desire, your future desire,"}, {"start": 374.12, "end": 377.16, "text": " and in this paper they call it a command."}, {"start": 377.16, "end": 379.28000000000003, "text": " So you'll have a command as an input,"}, {"start": 379.28000000000003, "end": 380.56, "text": " together with the observations."}, {"start": 380.56, "end": 383.48, "text": " You basically say, here's my state,"}, {"start": 383.48, "end": 389.0, "text": " and I would like to achieve, let's say, five reward"}, {"start": 389.0, "end": 392.92, "text": " in the next five reward in the next two time steps."}, {"start": 392.92, "end": 393.44, "text": " Right."}, {"start": 393.44, "end": 394.64000000000004, "text": " Make this happen."}, {"start": 394.64000000000004, "end": 395.64000000000004, "text": " Right."}, {"start": 395.64000000000004, "end": 397.28000000000003, "text": " This is your command going into the model."}, {"start": 397.28000000000003, "end": 400.6, "text": " And the model will then try to find actions"}, {"start": 400.6, "end": 406.16, "text": " such that in the next two time steps, you'll get five reward."}, {"start": 406.16, "end": 409.40000000000003, "text": " You can easily see a model that learns this will actually"}, {"start": 409.40000000000003, "end": 413.08000000000004, "text": " be able to do various things."}, {"start": 413.08, "end": 415.32, "text": " It's including doing the classic RL things"}, {"start": 415.32, "end": 418.96, "text": " like get as much reward as possible in a given"}, {"start": 418.96, "end": 424.52, "text": " or in the shortest amount of time, but can also do much more."}, {"start": 424.52, "end": 427.44, "text": " And in the general sense, the difference"}, {"start": 427.44, "end": 429.59999999999997, "text": " is how this is trained now."}, {"start": 429.59999999999997, "end": 432.24, "text": " This model, when you train it, as you can see,"}, {"start": 432.24, "end": 436.88, "text": " you don't, it's not trained with having in mind"}, {"start": 436.88, "end": 440.24, "text": " kind of only to get the maximum reward."}, {"start": 440.24, "end": 444.28000000000003, "text": " It is trained to be much more a general kind of understanding"}, {"start": 444.28000000000003, "end": 448.08, "text": " of the world, learning what do I need to do"}, {"start": 448.08, "end": 452.56, "text": " to achieve a variety of goals."}, {"start": 452.56, "end": 455.96000000000004, "text": " Specifically, what you want to do to train this"}, {"start": 455.96000000000004, "end": 458.0, "text": " is the following."}, {"start": 458.0, "end": 462.36, "text": " Say you have a method of moving in the world"}, {"start": 462.36, "end": 464.96000000000004, "text": " and collecting traces."}, {"start": 464.96, "end": 472.88, "text": " So you go from state, state one, state two, state three,"}, {"start": 472.88, "end": 478.35999999999996, "text": " you go with like action one, action two,"}, {"start": 478.35999999999996, "end": 484.56, "text": " let's draw action three, the state four."}, {"start": 484.56, "end": 487.59999999999997, "text": " And in each of these, you get rewards, right?"}, {"start": 487.59999999999997, "end": 492.08, "text": " Reward one, reward two, reward three."}, {"start": 492.08, "end": 496.59999999999997, "text": " Now, this in classic RL, this will kind of give you"}, {"start": 496.59999999999997, "end": 498.96, "text": 
" one training example, right?"}, {"start": 498.96, "end": 502.0, "text": " So this is if you consider this to be an episode,"}, {"start": 502.0, "end": 504.12, "text": " this will give you one training example"}, {"start": 504.12, "end": 508.24, "text": " to run the sequence of actions."}, {"start": 508.24, "end": 511.64, "text": " Upside down RL, you can actually consider this"}, {"start": 511.64, "end": 513.76, "text": " as many, many training examples."}, {"start": 513.76, "end": 515.12, "text": " And here's what I mean."}, {"start": 515.12, "end": 524.52, "text": " So if you, for example, start at state one, you can say, aha,"}, {"start": 524.52, "end": 529.64, "text": " within one time step, one, one time step,"}, {"start": 529.64, "end": 531.2, "text": " I go to state two."}, {"start": 531.2, "end": 538.96, "text": " And I have achieved R1 rewards by doing action A1, right?"}, {"start": 538.96, "end": 542.0, "text": " So this now can be an input to your model."}, {"start": 542.0, "end": 546.64, "text": " Your model could learn if you get as an observation,"}, {"start": 546.64, "end": 549.0, "text": " and remember the previous thing, as an observation,"}, {"start": 549.0, "end": 550.72, "text": " you get S1."}, {"start": 550.72, "end": 557.64, "text": " As a command, you get, I want to achieve in one time step"}, {"start": 557.64, "end": 561.28, "text": " R1 reward, right?"}, {"start": 561.28, "end": 564.48, "text": " And you train, this goes into the model."}, {"start": 564.48, "end": 568.12, "text": " And the model is trained to say A1."}, {"start": 568.12, "end": 574.68, "text": " Given if I am in S1, and I do A1, I will achieve that, right?"}, {"start": 574.68, "end": 578.16, "text": " So you train the model to give A1 as an output."}, {"start": 578.16, "end": 580.12, "text": " And this is valid because in the past,"}, {"start": 580.12, "end": 586.08, "text": " you've observed going from S1 using A1 to a state"}, {"start": 586.08, "end": 591.08, "text": " where you get this kind of reward in this kind of time."}, {"start": 591.08, "end": 594.16, "text": " But you can also, so you can do all of these single steps."}, {"start": 594.16, "end": 598.7199999999999, "text": " They will all provide individual training examples to your model."}, {"start": 598.7199999999999, "end": 601.68, "text": " But then also you can consider two step things."}, {"start": 601.68, "end": 609.16, "text": " You can say, I'm in state S1, and I go in two time steps,"}, {"start": 609.16, "end": 617.4, "text": " I have achieved R1 plus R2 reward by doing actions A1,"}, {"start": 617.4, "end": 618.8, "text": " then A2, right?"}, {"start": 618.8, "end": 620.9599999999999, "text": " And A2, I'm going to do in parents here,"}, {"start": 620.9599999999999, "end": 623.0, "text": " because what you want to do is you want to always,"}, {"start": 623.0, "end": 627.72, "text": " always consider the action that comes right after where you are now."}, {"start": 627.72, "end": 632.84, "text": " So again, your training sample, let me draw this up here maybe,"}, {"start": 632.84, "end": 635.36, "text": " your training sample would be the following."}, {"start": 635.36, "end": 638.88, "text": " I am in state S1, this would be my observation."}, {"start": 638.88, "end": 642.8, "text": " My command would be I would like to achieve in two time steps,"}, {"start": 642.8, "end": 647.08, "text": " reward R1 plus R2 reward, right?"}, {"start": 647.08, "end": 650.92, "text": " This reward, this both goes into the model, right?"}, 
{"start": 650.92, "end": 653.56, "text": " You tell the model, please, given the state S1,"}, {"start": 653.56, "end": 656.3199999999999, "text": " achieve this reward in this time,"}, {"start": 656.3199999999999, "end": 660.4799999999999, "text": " and the model is supposed to output A1, saying,"}, {"start": 660.4799999999999, "end": 663.76, "text": " aha, in the past I wasn't the state,"}, {"start": 663.76, "end": 666.4399999999999, "text": " and I did achieve this goal by using that."}, {"start": 666.4399999999999, "end": 670.68, "text": " So the model is supposed to learn to achieve different goals, right?"}, {"start": 670.68, "end": 674.12, "text": " So now you can not only train from good episodes, right?"}, {"start": 674.12, "end": 677.1999999999999, "text": " You can train from any episode, any episode,"}, {"start": 677.2, "end": 681.72, "text": " usually in classic RL, you kind of want to focus on the good episodes"}, {"start": 681.72, "end": 683.9200000000001, "text": " because you want to maximize your reward,"}, {"start": 683.9200000000001, "end": 685.5200000000001, "text": " but here you can tell the model,"}, {"start": 685.5200000000001, "end": 688.4000000000001, "text": " hey, if you've done something particularly stupid,"}, {"start": 688.4000000000001, "end": 690.44, "text": " let's say here in S3,"}, {"start": 690.44, "end": 693.32, "text": " you've done something the A3 was particularly stupid,"}, {"start": 693.32, "end": 696.48, "text": " gave you, so R3 here was really bad reward,"}, {"start": 696.48, "end": 700.88, "text": " like a negative 5 billion trillion."}, {"start": 700.88, "end": 705.9200000000001, "text": " And you can actually train the model to recognize this,"}, {"start": 705.92, "end": 708.64, "text": " and there's a, hey, look, if you are in S3,"}, {"start": 708.64, "end": 710.8399999999999, "text": " and within one time step you want to achieve"}, {"start": 710.8399999999999, "end": 714.9599999999999, "text": " negative 5 billion, billion, billion trillion reward,"}, {"start": 714.9599999999999, "end": 717.8, "text": " all you have to do is action A3, right?"}, {"start": 717.8, "end": 720.12, "text": " And then the cool thing now is,"}, {"start": 720.12, "end": 722.24, "text": " if you are at evaluation time,"}, {"start": 722.24, "end": 723.76, "text": " you actually want the big reward,"}, {"start": 723.76, "end": 726.16, "text": " what you'll do is you simply plug in a different command,"}, {"start": 726.16, "end": 728.16, "text": " you simply, in one time step,"}, {"start": 728.16, "end": 730.76, "text": " still I'm in state S3, in one time step,"}, {"start": 730.76, "end": 734.12, "text": " I want to achieve actually three reward,"}, {"start": 734.12, "end": 736.48, "text": " not negative a lot, right?"}, {"start": 736.48, "end": 740.8, "text": " And the model will have learned that A3 will lead"}, {"start": 740.8, "end": 744.64, "text": " to a situation where you get a lot of negative rewards,"}, {"start": 744.64, "end": 745.92, "text": " so the model will be like,"}, {"start": 745.92, "end": 750.48, "text": " I'm for sure not going to do A3, right?"}, {"start": 750.48, "end": 752.04, "text": " I'm going to do something else here"}, {"start": 752.04, "end": 756.28, "text": " because I have learned to map A3 to like this"}, {"start": 756.28, "end": 757.52, "text": " really low reward."}, {"start": 757.52, "end": 761.6, "text": " So in essence, this has connections to things"}, {"start": 761.6, "end": 764.16, "text": " like hindsight experience 
replay,"}, {"start": 764.16, "end": 767.08, "text": " and kind of universal value function,"}, {"start": 767.08, "end": 770.28, "text": " where you can learn to go from any state to any other state,"}, {"start": 772.0400000000001, "end": 777.0400000000001, "text": " in this, but none of these have this kind of command,"}, {"start": 777.8000000000001, "end": 780.0400000000001, "text": " what the Schmiedubert calls command here,"}, {"start": 780.0400000000001, "end": 781.48, "text": " as an input to the model."}, {"start": 781.48, "end": 786.4, "text": " And I think actually this is really positive to input this,"}, {"start": 786.4, "end": 788.9200000000001, "text": " because usually in universal value functions,"}, {"start": 788.9200000000001, "end": 790.2, "text": " what you would say is,"}, {"start": 790.2, "end": 792.6800000000001, "text": " let's consider a simple grid world, right?"}, {"start": 794.32, "end": 796.32, "text": " Whatever your agent is here,"}, {"start": 796.32, "end": 800.32, "text": " and you need to reach a goal that's down here,"}, {"start": 801.48, "end": 803.24, "text": " but you might not be able to learn it"}, {"start": 803.24, "end": 805.48, "text": " because it's super sparse reward and so on."}, {"start": 805.48, "end": 808.96, "text": " But what you can do is you can learn to reach this position"}, {"start": 808.96, "end": 811.08, "text": " and this position and this position"}, {"start": 811.08, "end": 813.72, "text": " from various positions like go here,"}, {"start": 813.72, "end": 814.9200000000001, "text": " go from here to here,"}, {"start": 814.9200000000001, "end": 816.48, "text": " you can learn to go from here to here."}, {"start": 816.48, "end": 819.6, "text": " And in essence, you would like it eventually"}, {"start": 819.6, "end": 822.08, "text": " to generalize to all the fields."}, {"start": 822.08, "end": 824.2, "text": " So you basically learn to go from any position"}, {"start": 824.2, "end": 826.2, "text": " to any other position with your agent,"}, {"start": 828.12, "end": 831.16, "text": " with these universal value or universal policy functions,"}, {"start": 831.16, "end": 832.48, "text": " having sub goals,"}, {"start": 832.48, "end": 835.76, "text": " but they during that phase where they learn to go"}, {"start": 835.76, "end": 839.08, "text": " from anything to anything, they don't necessarily include"}, {"start": 839.08, "end": 842.32, "text": " this reward thing as an input."}, {"start": 842.32, "end": 847.32, "text": " It's more like kind of either a sub goal"}, {"start": 847.32, "end": 852.32, "text": " or like the value function will simply approximate the reward"}, {"start": 853.96, "end": 856.4000000000001, "text": " whereas in this technique,"}, {"start": 856.4000000000001, "end": 859.0400000000001, "text": " we actually have a policy learning."}, {"start": 859.0400000000001, "end": 862.36, "text": " We actually output an action value."}, {"start": 862.36, "end": 864.32, "text": " Also, hindsight experience replay,"}, {"start": 864.32, "end": 866.12, "text": " what hindsight experience replay would do"}, {"start": 866.12, "end": 867.8000000000001, "text": " in the same situation, right?"}, {"start": 867.8000000000001, "end": 868.6400000000001, "text": " You're here."}, {"start": 870.0, "end": 872.7600000000001, "text": " We might do a videos on this in the future."}, {"start": 872.7600000000001, "end": 875.48, "text": " You're here and you try, right?"}, {"start": 875.48, "end": 878.88, "text": " And your agent actually, I'd 
ends up here, right?"}, {"start": 878.88, "end": 879.84, "text": " Ends up right here."}, {"start": 879.84, "end": 881.9200000000001, "text": " What you can do is you can simply say,"}, {"start": 881.9200000000001, "end": 885.6, "text": " oh, well, actually this was my goal all along"}, {"start": 885.6, "end": 890.6, "text": " and then simply train your model as if this thing here"}, {"start": 891.12, "end": 893.8000000000001, "text": " was your goal all along and not this thing here."}, {"start": 893.8000000000001, "end": 896.72, "text": " And treated as kind of a positive reward"}, {"start": 898.64, "end": 900.96, "text": " for this, at least that's how I understand it."}, {"start": 900.96, "end": 905.96, "text": " Right, and both of these things are quite different"}, {"start": 906.48, "end": 908.6800000000001, "text": " than here where we have this command as inputs."}, {"start": 908.6800000000001, "end": 911.0400000000001, "text": " And I do like it."}, {"start": 911.0400000000001, "end": 916.0400000000001, "text": " So I think this is very much the basic things."}, {"start": 919.0, "end": 924.0, "text": " Here it is extrapolated to kind of noisy inputs"}, {"start": 924.52, "end": 926.96, "text": " and noisy environments and so on."}, {"start": 926.96, "end": 931.96, "text": " But this is the basic, the basic gist of it."}, {"start": 933.5600000000001, "end": 938.5600000000001, "text": " So here you see what you will learn is to map all"}, {"start": 940.4000000000001, "end": 944.6800000000001, "text": " and all is your representation of your input."}, {"start": 944.6800000000001, "end": 947.24, "text": " So the screen, for example, or the chessboard"}, {"start": 947.24, "end": 949.12, "text": " and I think also kind of the last action"}, {"start": 949.12, "end": 950.84, "text": " and the reward you get in this step"}, {"start": 950.84, "end": 953.32, "text": " plus your horizon and desire."}, {"start": 953.32, "end": 955.84, "text": " So in how much time you would like to achieve"}, {"start": 955.84, "end": 958.88, "text": " how much reward and then you can also get input"}, {"start": 958.88, "end": 961.1600000000001, "text": " some extra goals that you have."}, {"start": 962.1600000000001, "end": 967.1600000000001, "text": " And so you can see basically any episode"}, {"start": 968.2, "end": 971.44, "text": " that you've run in the past will give you a valid training"}, {"start": 971.44, "end": 974.0400000000001, "text": " example for this your model will simply learn"}, {"start": 974.0400000000001, "end": 977.44, "text": " to match the previous experience"}, {"start": 978.32, "end": 981.4000000000001, "text": " with the goals that were achieved in the previous experience."}, {"start": 981.4, "end": 986.4, "text": " So there is lots of lots of generalizations here."}, {"start": 986.4, "end": 989.12, "text": " Like how exactly these things are represented."}, {"start": 989.12, "end": 992.12, "text": " This time horizon can be a high dimensional object."}, {"start": 992.12, "end": 994.52, "text": " The desire can be as I understand it."}, {"start": 994.52, "end": 996.4, "text": " Some what a high dimensional object,"}, {"start": 996.4, "end": 999.0799999999999, "text": " the extra commands can be like conditionals"}, {"start": 999.0799999999999, "end": 1000.92, "text": " on these two things."}, {"start": 1000.92, "end": 1002.36, "text": " It gets very complicated."}, {"start": 1002.36, "end": 1007.36, "text": " But I want to jump ahead to a different paper"}, {"start": 1007.36, "end": 
1012.16, "text": " where so this paper is basically just describing the algorithm"}, {"start": 1012.16, "end": 1016.16, "text": " and then the next paper is doing experiments with this."}, {"start": 1017.5600000000001, "end": 1019.2, "text": " Let's scroll past here."}, {"start": 1019.2, "end": 1021.36, "text": " All right, so this paper training agents"}, {"start": 1021.36, "end": 1023.48, "text": " using up that down reinforcement learning released"}, {"start": 1023.48, "end": 1028.48, "text": " on the same day, but different authors that have used"}, {"start": 1028.92, "end": 1030.84, "text": " so I'll show you who is also here,"}, {"start": 1030.84, "end": 1035.84, "text": " but have used this to implement"}, {"start": 1035.84, "end": 1038.1599999999999, "text": " a variant of this."}, {"start": 1038.1599999999999, "end": 1041.56, "text": " And here you see again what I was trying to explain."}, {"start": 1041.56, "end": 1046.12, "text": " So in traditional RL, this is especially your Q learning,"}, {"start": 1046.12, "end": 1048.8799999999999, "text": " you'll have this function will sketch an observation"}, {"start": 1048.8799999999999, "end": 1050.72, "text": " as input and then Q learning,"}, {"start": 1050.72, "end": 1052.52, "text": " especially you also get the action as an input"}, {"start": 1052.52, "end": 1055.72, "text": " you're supposed to say for the given observation."}, {"start": 1056.84, "end": 1061.84, "text": " This particular action has this expected value as a return."}, {"start": 1062.6399999999999, "end": 1064.4399999999998, "text": " Right, that's what I explained in the beginning."}, {"start": 1064.44, "end": 1067.44, "text": " That's kind of value-based reinforcement learning."}, {"start": 1068.44, "end": 1071.3200000000002, "text": " Whereas the behavior function here,"}, {"start": 1071.3200000000002, "end": 1073.2, "text": " which would be upside down reinforcement learning,"}, {"start": 1073.2, "end": 1075.88, "text": " gets the observation and a command"}, {"start": 1075.88, "end": 1078.16, "text": " and will map that to an action."}, {"start": 1079.8400000000001, "end": 1082.1200000000001, "text": " And here again is what we've gone over."}, {"start": 1082.1200000000001, "end": 1083.6000000000001, "text": " This is a bit of a different thing."}, {"start": 1083.6000000000001, "end": 1087.52, "text": " So this agent has apparently run two different episodes."}, {"start": 1087.52, "end": 1091.28, "text": " One point it did this sequence of actions"}, {"start": 1091.28, "end": 1093.48, "text": " and at the other point from the same starting state,"}, {"start": 1093.48, "end": 1094.84, "text": " this sequence of action."}, {"start": 1094.84, "end": 1097.56, "text": " And you can see here on the right"}, {"start": 1097.56, "end": 1101.52, "text": " all the training samples we can derive from this."}, {"start": 1101.52, "end": 1106.04, "text": " So we can say from state S0,"}, {"start": 1106.04, "end": 1110.1200000000001, "text": " right, if I want to return in one time step,"}, {"start": 1110.1200000000001, "end": 1111.84, "text": " I have experiences in the past,"}, {"start": 1111.84, "end": 1114.32, "text": " right, to return in one time step,"}, {"start": 1114.32, "end": 1116.48, "text": " all I have to do is take action a1."}, {"start": 1117.48, "end": 1120.6, "text": " But if I want one return in one time step,"}, {"start": 1120.6, "end": 1122.32, "text": " I have to take action a2."}, {"start": 1122.32, "end": 1125.8, "text": " And you teach your behavior 
function"}, {"start": 1125.8, "end": 1127.28, "text": " to learn these things,"}, {"start": 1127.28, "end": 1130.6, "text": " to learn to output these actions"}, {"start": 1130.6, "end": 1133.32, "text": " with these things here as inputs."}, {"start": 1133.32, "end": 1136.6, "text": " And then what you hope of course is that this will generalize,"}, {"start": 1136.6, "end": 1138.6, "text": " that it will learn to generalize,"}, {"start": 1138.6, "end": 1139.6, "text": " that you can say,"}, {"start": 1139.6, "end": 1144.6, "text": " now give me more reward than I have ever seen before, right?"}, {"start": 1144.6799999999998, "end": 1147.96, "text": " And it will kind of learn which things correspond"}, {"start": 1147.96, "end": 1149.2, "text": " to lower reward,"}, {"start": 1149.2, "end": 1150.8799999999999, "text": " which things correspond to higher reward,"}, {"start": 1150.88, "end": 1152.64, "text": " and will be able to extrapolate,"}, {"start": 1152.64, "end": 1157.64, "text": " which things will correspond to even higher reward, sorry."}, {"start": 1159.64, "end": 1162.2, "text": " So they have two algorithms,"}, {"start": 1162.2, "end": 1163.68, "text": " and this is kind of,"}, {"start": 1165.3600000000001, "end": 1170.3600000000001, "text": " this is reminiscent of the old RL kind of world"}, {"start": 1171.6000000000001, "end": 1176.6000000000001, "text": " where you do kind of one algorithm is continuously learning"}, {"start": 1177.24, "end": 1180.5200000000002, "text": " from the experience gathered by another algorithm."}, {"start": 1180.52, "end": 1182.0, "text": " So you have one set of algorithms,"}, {"start": 1182.0, "end": 1183.68, "text": " and this even in modern RL,"}, {"start": 1183.68, "end": 1185.76, "text": " this is how it's done, right?"}, {"start": 1185.76, "end": 1188.4, "text": " You have two different boxes, right?"}, {"start": 1188.4, "end": 1191.8799999999999, "text": " Actually, you have probably one box learning the model,"}, {"start": 1191.8799999999999, "end": 1193.24, "text": " like this is done,"}, {"start": 1193.24, "end": 1195.84, "text": " represent this here, learner, right?"}, {"start": 1195.84, "end": 1197.84, "text": " And the learner distributes the model"}, {"start": 1197.84, "end": 1202.16, "text": " to many, many machines interacting with the simulators,"}, {"start": 1202.16, "end": 1203.12, "text": " and these machines,"}, {"start": 1203.12, "end": 1207.08, "text": " all they do is run episodes with the learned model,"}, {"start": 1207.08, "end": 1211.56, "text": " and they will send back their experience here,"}, {"start": 1211.56, "end": 1214.12, "text": " and then the learner can learn from it,"}, {"start": 1214.12, "end": 1216.76, "text": " and then at the end, send it again."}, {"start": 1216.76, "end": 1217.6, "text": " So,"}, {"start": 1226.9199999999998, "end": 1228.04, "text": " all right, here we go."}, {"start": 1230.36, "end": 1232.4399999999998, "text": " So in each step,"}, {"start": 1232.44, "end": 1237.44, "text": " what we do in order to generate a new episode,"}, {"start": 1238.92, "end": 1242.8400000000001, "text": " we don't always wanna kind of execute one given policy."}, {"start": 1242.8400000000001, "end": 1247.0800000000002, "text": " What we do is we sample from the end of the replay buffer,"}, {"start": 1247.0800000000002, "end": 1249.64, "text": " and the replay buffer is sorted by returns, right?"}, {"start": 1249.64, "end": 1252.3200000000002, "text": " So the highest return episodes are on top."}, 
{"start": 1252.3200000000002, "end": 1255.0800000000002, "text": " So we wanna sample the highest return episodes."}, {"start": 1256.28, "end": 1259.6000000000001, "text": " Then we want to say maybe some of them are 10 steps long,"}, {"start": 1259.6000000000001, "end": 1262.04, "text": " maybe some of them are five steps long, and so on."}, {"start": 1262.04, "end": 1267.04, "text": " So we set the horizon to be the mean of the lengths of these,"}, {"start": 1268.68, "end": 1271.28, "text": " and we set the desired return,"}, {"start": 1272.44, "end": 1275.6399999999999, "text": " how much return should be achieved in this time,"}, {"start": 1275.6399999999999, "end": 1278.68, "text": " to be the unit, to sample from the uniform distribution"}, {"start": 1278.68, "end": 1282.92, "text": " between M and M plus S, and M is the mean,"}, {"start": 1282.92, "end": 1284.96, "text": " and S is the standard deviation"}, {"start": 1284.96, "end": 1286.6399999999999, "text": " of the selected episode of returns."}, {"start": 1286.6399999999999, "end": 1288.44, "text": " So what this means is,"}, {"start": 1288.44, "end": 1293.04, "text": " here's a bunch of episodes from, oh, they started at the same time."}, {"start": 1293.04, "end": 1295.8400000000001, "text": " Here's a bunch of episodes that I ran, right,"}, {"start": 1295.8400000000001, "end": 1299.3200000000002, "text": " from here is time zero, and then time goes on,"}, {"start": 1300.44, "end": 1304.2, "text": " that I ran, that had really high returns, right?"}, {"start": 1305.3600000000001, "end": 1310.3600000000001, "text": " Now, I'm gonna take the mean time that these episodes ran,"}, {"start": 1310.48, "end": 1313.96, "text": " like this, this is maybe five time steps."}, {"start": 1313.96, "end": 1318.96, "text": " So in five time, I want to achieve, now how much reward,"}, {"start": 1319.6000000000001, "end": 1322.0, "text": " now you look at all the rewards that were achieved,"}, {"start": 1322.0, "end": 1325.92, "text": " this is maybe a distribution that has some mean here,"}, {"start": 1325.92, "end": 1328.16, "text": " like so, and then you say,"}, {"start": 1328.16, "end": 1331.3600000000001, "text": " I want to achieve a reward between here"}, {"start": 1331.3600000000001, "end": 1334.68, "text": " and one standard deviation higher than here."}, {"start": 1334.68, "end": 1338.8400000000001, "text": " So, right, and this would be the reward you want to achieve."}, {"start": 1338.8400000000001, "end": 1342.68, "text": " So what you do is, you kind of push your learned model"}, {"start": 1342.68, "end": 1346.16, "text": " to just go a bit beyond what it has seen so far,"}, {"start": 1346.16, "end": 1350.0800000000002, "text": " is basically say, look, you can do this,"}, {"start": 1350.0800000000002, "end": 1353.52, "text": " but you can just do a bit more in the same amount of time."}, {"start": 1353.52, "end": 1355.72, "text": " Please do this, and you hope the model has learned"}, {"start": 1355.72, "end": 1357.3600000000001, "text": " to kind of generalize to do this."}, {"start": 1357.3600000000001, "end": 1360.92, "text": " And if so, you will execute these episodes,"}, {"start": 1360.92, "end": 1365.24, "text": " and then these episodes will go back to the learner, right?"}, {"start": 1365.24, "end": 1368.24, "text": " We'll go back to the learner here,"}, {"start": 1368.24, "end": 1370.24, "text": " and the learner will learn from them,"}, {"start": 1370.24, "end": 1373.4, "text": " and hopefully then you can generalize 
even more,"}, {"start": 1373.4, "end": 1376.8, "text": " and then you can say, I now know how to achieve"}, {"start": 1376.8, "end": 1378.0, "text": " this bit more reward."}, {"start": 1378.0, "end": 1381.76, "text": " Now, if I run the episode, I will achieve even more reward,"}, {"start": 1381.76, "end": 1383.96, "text": " I can push the model even further, right?"}, {"start": 1383.96, "end": 1386.08, "text": " So at Eval time, you can always ask the model"}, {"start": 1386.08, "end": 1390.28, "text": " to produce as much reward as possible in the given time."}, {"start": 1391.28, "end": 1394.04, "text": " And of course, every episode sent back here"}, {"start": 1394.04, "end": 1396.56, "text": " is not only one training example as we saw,"}, {"start": 1396.56, "end": 1398.56, "text": " but many, many training examples"}, {"start": 1398.56, "end": 1401.36, "text": " can be derived from these models,"}, {"start": 1401.36, "end": 1403.9199999999998, "text": " even beyond what's in this paper."}, {"start": 1405.08, "end": 1408.3999999999999, "text": " All right, so I think this was a good first shot"}, {"start": 1408.3999999999999, "end": 1410.28, "text": " at describing this algorithm."}, {"start": 1410.28, "end": 1413.84, "text": " I hope you get the gist of it."}, {"start": 1413.84, "end": 1415.1599999999999, "text": " I enjoy this."}, {"start": 1415.1599999999999, "end": 1417.3999999999999, "text": " A bit of a criticism for me would be,"}, {"start": 1417.3999999999999, "end": 1419.72, "text": " it still kind of doesn't,"}, {"start": 1419.72, "end": 1422.84, "text": " so it doesn't touch the exploration dilemma."}, {"start": 1422.84, "end": 1427.84, "text": " So it, again, deals with kind of incremental,"}, {"start": 1428.72, "end": 1430.84, "text": " incrementally getting better."}, {"start": 1430.84, "end": 1434.04, "text": " Whereas I feel this can easily get stuck"}, {"start": 1434.04, "end": 1436.1999999999998, "text": " in some minimum where it's not possible"}, {"start": 1436.1999999999998, "end": 1438.9199999999998, "text": " to do this incremental generalization of the model"}, {"start": 1438.9199999999998, "end": 1440.6, "text": " where you really need a new approach,"}, {"start": 1440.6, "end": 1442.9599999999998, "text": " and that's why games like Montezuma's revenge"}, {"start": 1442.9599999999998, "end": 1446.36, "text": " are solved using algorithms like Go Explore"}, {"start": 1446.36, "end": 1449.6799999999998, "text": " and not any of the classic algorithms."}, {"start": 1449.6799999999998, "end": 1451.72, "text": " That being said, they have experiments"}, {"start": 1451.72, "end": 1454.68, "text": " where they show that especially in sparse"}, {"start": 1454.68, "end": 1456.6000000000001, "text": " reward environments,"}, {"start": 1456.6000000000001, "end": 1459.44, "text": " they do better than classic RL algorithms."}, {"start": 1459.44, "end": 1463.1200000000001, "text": " So if you, for example, here take the lunar lander"}, {"start": 1463.1200000000001, "end": 1466.56, "text": " where A2C beats upside down RL,"}, {"start": 1469.32, "end": 1472.92, "text": " I guess you didn't get mapplode limit to do"}, {"start": 1472.92, "end": 1474.32, "text": " the upside down RL."}, {"start": 1476.3600000000001, "end": 1480.92, "text": " Well, in other environments upside down"}, {"start": 1480.92, "end": 1483.44, "text": " RL clearly beats the classic algorithms."}, {"start": 1483.44, "end": 1486.72, "text": " And what I like here is they took a lunar lander,"}, 
{"start": 1486.72, "end": 1490.2, "text": " and which basically, every time that you get"}, {"start": 1490.2, "end": 1492.3600000000001, "text": " a reward in lunar lander and they hypothesize,"}, {"start": 1492.3600000000001, "end": 1494.8400000000001, "text": " okay, this is really good for these classic algorithms"}, {"start": 1494.8400000000001, "end": 1497.2, "text": " that do reward maximization instead of kind of"}, {"start": 1497.2, "end": 1499.44, "text": " learning this general behavior function."}, {"start": 1499.44, "end": 1501.88, "text": " And what they did is they modified the game such"}, {"start": 1501.88, "end": 1504.8400000000001, "text": " that all the reward is given at the end of the episode."}, {"start": 1504.8400000000001, "end": 1507.5600000000002, "text": " And then you see that upside down RL"}, {"start": 1507.56, "end": 1512.56, "text": " will actually outperform here the classic things."}, {"start": 1512.8, "end": 1514.04, "text": " It's exactly the same game."}, {"start": 1514.04, "end": 1516.0, "text": " You just get the reward at the end."}, {"start": 1516.0, "end": 1519.12, "text": " So upside down RL kind of learns the structure"}, {"start": 1519.12, "end": 1521.56, "text": " of the world learns that you get this reward at the end"}, {"start": 1521.56, "end": 1523.56, "text": " after such and such many time steps."}, {"start": 1523.56, "end": 1525.8, "text": " So you can, it will learn,"}, {"start": 1525.8, "end": 1528.72, "text": " oh, please get me zero reward in 50 time steps,"}, {"start": 1528.72, "end": 1529.72, "text": " like no problem,"}, {"start": 1529.72, "end": 1532.6, "text": " but please get me a thousand rewards in a hundred time steps."}, {"start": 1532.6, "end": 1535.32, "text": " No problem, I just go to the end of the episode, right?"}, {"start": 1535.32, "end": 1539.6399999999999, "text": " Whereas these pure reward maximization techniques,"}, {"start": 1539.6399999999999, "end": 1543.48, "text": " they somehow have a harder time to do that."}, {"start": 1543.48, "end": 1544.9199999999998, "text": " I like this investigation."}, {"start": 1544.9199999999998, "end": 1548.2, "text": " I like the thinking outside the box,"}, {"start": 1548.2, "end": 1550.48, "text": " the schmid hubarism of the paper."}, {"start": 1550.48, "end": 1552.48, "text": " It's just all great."}, {"start": 1552.48, "end": 1557.48, "text": " It's a great time to be alive and check this out."}, {"start": 1557.48, "end": 1564.48, "text": " And I see you, bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=Z6ea_AbnnCc | NeurIPS 2019 | I'm at the 2019 conference on Neural Information Processing Systems in Vancouver, trying to register, but the line was just so long that I decided to bail :D | Good morning learners, we are here in beautiful Vancouver in Canada and attending the NURRIPS conference 2019. Of course one of the largest conferences in machine learning of the year. It's actually there's been a lottery system for the tickets because so many people wanted to register. There are over 8,000 people attending I think and it's Sunday morning so even before the conference starts I thought I was smart. Going really early to register but today is company Expo Day and I didn't register for that because usually companies will make fair bit of fuss about their research online so there's kind of little need to attend that in person you can just catch up later but everyone wants to get in on that and it's crazy here like the line starts so you go in here but actually have to go downstairs and the line starts somewhere like way back here underground then you go all the way all the way up there go there over there up the escalator circle a bunch of times go up some more I guess then you maybe see people all the way over there up to the registration desks that are finally I guess over there I didn't look but it's absolutely crazy these conferences exploding with people from all over the planet I don't even know what kind of the composition is I would be interested how many of them are students of course machine learning departments probably exploding right now with people every company wants to get in on that and I don't know where the trend is going I growth can't continue forever I feel and it's it's kind of questionable how long we can uphold this how good this is I don't know any of these things I'll just try to get back later going to work a bit now get back later get my ticket and then I hope I can report a bit from the conference over the next few days I can get some good nuggets out of there that said I hope you're doing well and I'll see you later bye bye | [{"start": 0.0, "end": 6.8, "text": " Good morning learners, we are here in beautiful Vancouver in Canada and"}, {"start": 6.8, "end": 13.76, "text": " attending the NURRIPS conference 2019. Of course one of the largest conferences in"}, {"start": 13.76, "end": 20.28, "text": " machine learning of the year. It's actually there's been a lottery system for"}, {"start": 20.28, "end": 24.52, "text": " the tickets because so many people wanted to register. There are over 8,000"}, {"start": 24.52, "end": 29.92, "text": " people attending I think and it's Sunday morning so even before the conference"}, {"start": 29.92, "end": 34.800000000000004, "text": " starts I thought I was smart. 
Going really early to register but today is"}, {"start": 34.800000000000004, "end": 39.68, "text": " company Expo Day and I didn't register for that because usually"}, {"start": 39.68, "end": 45.800000000000004, "text": " companies will make fair bit of fuss about their research online so there's"}, {"start": 45.800000000000004, "end": 53.7, "text": " kind of little need to attend that in person you can just catch up later but"}, {"start": 53.7, "end": 60.260000000000005, "text": " everyone wants to get in on that and it's crazy here like the line starts so"}, {"start": 60.260000000000005, "end": 63.660000000000004, "text": " you go in here but actually have to go downstairs and the line starts somewhere"}, {"start": 63.660000000000004, "end": 68.94, "text": " like way back here underground then you go all the way all the way up there"}, {"start": 68.94, "end": 73.78, "text": " go there over there up the escalator circle a bunch of times go up some more"}, {"start": 73.78, "end": 80.38, "text": " I guess then you maybe see people all the way over there up to the registration"}, {"start": 80.38, "end": 85.69999999999999, "text": " desks that are finally I guess over there I didn't look but it's absolutely"}, {"start": 85.69999999999999, "end": 90.46, "text": " crazy these conferences exploding with people from all over the planet I don't"}, {"start": 90.46, "end": 93.97999999999999, "text": " even know what kind of the composition is I would be interested how many of"}, {"start": 93.97999999999999, "end": 98.86, "text": " them are students of course machine learning departments probably exploding"}, {"start": 98.86, "end": 105.89999999999999, "text": " right now with people every company wants to get in on that and I don't know"}, {"start": 105.9, "end": 113.38000000000001, "text": " where the trend is going I growth can't continue forever I feel and it's it's"}, {"start": 113.38000000000001, "end": 118.54, "text": " kind of questionable how long we can uphold this how good this is I don't know"}, {"start": 118.54, "end": 124.14000000000001, "text": " any of these things I'll just try to get back later going to work a bit now get"}, {"start": 124.14000000000001, "end": 129.54000000000002, "text": " back later get my ticket and then I hope I can report a bit from the conference"}, {"start": 129.54, "end": 137.57999999999998, "text": " over the next few days I can get some good nuggets out of there that said I hope"}, {"start": 137.58, "end": 167.54000000000002, "text": " you're doing well and I'll see you later bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=We20YSAJZSE | MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model | MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based reinforcement learning to entirely new domains, where such environment models aren't available. The difference to previous work is that, instead of learning a model predicting future observations, MuZero predicts the future observations' latent representations, and thus learns to only represent things that matter to the task!
Abstract:
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
Authors: Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver
https://arxiv.org/abs/1911.08265
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there. Today we're looking at Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model by Julian Schrittwieser and people generally from DeepMind. So this paper is an extension to AlphaZero, the kind of famous algorithm that learned to play Go and chess simply by playing itself, and the kind of cool thing about this model is that it has a learned environment model. So what does this mean? Usually if you have a game such as chess, I believe there's a picture of chess down here. If you have a game such as chess and you want to learn to play it, you need to know the rules of chess. In chess you have the rules like the pawn can move two or one, the bishop can move diagonally and so on. Similarly in Shogi or Go here, where you can place the stones and when you win, everything is clearly defined. So what you can do is actually you can plan. You can now think of, if I do this opening, my opponent could do either this or this or this. And for each of the three moves, I'll have a response. So if they move this pawn, I'll go for a gambit here, and if they move this pawn, then I can move on. Something like this. So in a sense, what you have is a tree search. So you start out with the state you're currently in. And then your opponent, sorry, this should be, you're the state you're currently in. Your opponent has the option of performing any one of these moves. Let's say there are three moves. And then from each of these three moves, you again have the option of performing any of these moves. And the good thing is, in chess you know exactly what each of them does. Like if I move my pawn, then the new board configuration will be that the pawn will no longer be here, but here. So you know exactly what's going to happen. You can calculate that; you have a perfect simulator. In other domains, you don't have that. For example, in Atari, all you have in Atari is this screen, right? Maybe you have a little submarine here, right? You have some opponents, right? The opponents, I don't know, what do the opponents look like? Fish? I don't even know this game, right? And you can, I think you can shoot, there are coins to collect, I don't know. Okay. In any case, sometimes you need to go up and there is like a health bar. So, but in essence, you only have this screen here, right? You don't have more. And if you press a button, you don't exactly know what's going to happen. You don't exactly know what the pixel space will look like as this shot moves forward. Right? I guess you could know, but you can't use that to plan, because the kind of space is too big and your actions may be not clearly predictable, and when you win may not be clearly predictable, there may be randomness. So with all of this stuff, usually what people do here is they use model-free reinforcement learning. We've had this discussion, you know, before. So this would be model-free. Whereas for chess here, you'd go about it model-based. Now, what MuZero does is it uses model-based planning, but it learns the model. So it tries to construct a model for this here. It tries to say, okay, if I have this screen A here, right? My thing is here. And I press the button, right, then probably my submarine is going to be a bit more to the right. But it doesn't do this exactly. So this has been done before. 
And this is what's kind of known as learning an environment model, where you map the current environment plus an action to the next step in the environment, right? And this usually doesn't work too well, because you're really trying to generate this entire pixel space here. The cool thing about MuZero is it doesn't do that. It doesn't predict the next state. What it does predict is a hidden state. And let's draw the hidden state as a little cloud here. It predicts a hidden state of the next state. And from the hidden state, it will predict things like the reward, the policy, the value. And then, from that hidden state, it'll predict the next hidden state. And from that, it will again predict the reward. So the base idea is you only predict what you absolutely need to obtain the values that are important for doing reinforcement learning. You're not trying to predict the full environment. You're simply trying to predict whatever is necessary. And this here is a learned quantity: whatever is necessary to predict what your RL model is going to need. So that's the basic gist of it. And we'll look at how they do it, or how they describe what they're doing. So basically, the picture A here is how MuZero plans. So imagine you have a configuration, a current state. This is an observation. Now this could be a chess board, this could also be a position in Shogi, but it could also be a screen in an Atari game, or a camera input of a self-driving car, and so on. And the first thing it does, it encodes that observation using this H here; I believe they call this a representation function. You encode that to this hidden state. Now the hidden state, this is appropriately sized, the hidden state here is supposed to capture everything you need about the state to predict the kind of RL quantities in the future. And you learn this function H, which in this case, of course, is going to be a neural network, in order to produce such a state. Now from this state, you do two things. First of all, you have this function F here, and I don't remember what they call this, but you have a function to predict the following two quantities. You predict the value function at that state. And the value function simply means, if you are in this state here, right? This is now not a true state, but a hidden state, but still, if you're in this state, in this hidden state that belongs to this observation, then in the future, you're going to make this much reward on average, with your current policy. That's the value function. So the value function basically tells you how good it is to be in a given state, right? And then the policy, this is a bit special, the policy is predicting how you would act in this state. Now, this is a bit confusing, or it was to me when I was first learning it, because we're going to see over here how MuZero decides on how to act. Namely, it does this entire tree search thing up to a certain depth, right? And then it creates this histogram and from that it produces the action. But in order to do this tree search, and this is exactly this picture A, this is the tree search that is done. And in order to do that, you need these p values, because, we'll go there in a second, you need these p values and they cannot themselves again do a tree search, right? That would be like infinite recursion. So what you need is kind of an estimate, right? 
Like, if I were, and especially further down it makes more sense, if I were in that state, how would I act, right? If I were to do a tree search like this. So you simply build a neural network that tells you, with one evaluation, without having to do the entire tree search down from here, how you would act. This doesn't need to be a perfect approximation of how you would actually act, but it needs to be good enough, right? So this simply tells you how you would act in that state. And that's important, because what we do next is we use this policy to generate this action. And this is going to be a simulated action. This isn't a real action, because the real action would go here, to the next actual observation. This is a simulated action saying, if I'm in this hidden state, right, my policy approximately would be this thing. And so I can sample from that and say my action in that state would be this action. And so now I have a hidden state and an action. And from that, I can produce the next hidden state. Now, of course, if I were to apply the action up here to the observation, right, action a1, I would get the next observation. And that is exactly how AlphaZero works, right? You use your simulator, your perfect simulator, to take the current observation, the current state, with a given action that this policy gives you, and you produce the next state. But we don't have a perfect simulator, right? And we don't want to learn a model that predicts the entire state. But what we want to do is we want to predict the following. If we were to take a1 here, right, we would get an observation. Can we predict the result when we would apply the function H to that, right? Giving me S prime, right? This is observation prime. So this function H here, which is the function that maps from observation space to hidden space. If we were to apply this to the next observation, we would obtain some hidden state for that observation. Can we predict that thing? So we need a function that maps from the hidden state, given an action, right, to the next hidden state. And that's exactly what happens down here, right? This function G here maps exactly this hidden state plus the action to the next hidden state. And also, at the same time, it will predict a reward, right? Because in each step, you might get a reward. So each transition here gives you a reward. We're trying to predict that as well. Not that important, especially for games like chess or Shogi where there's only win or lose at the very end. But they incorporate this here to also be able to play these Atari games and a broader range of reinforcement learning games. But in essence, that's what it is, right? We're trying to predict the next hidden state. And now we can basically recursively apply this. So from here, I have an idea of what my policy might be in that state, right? My approximate policy, my kind of mini policy that only needs one evaluation. I can sample an action from that policy. And maybe it's action a2 here, and I can then predict the next hidden state that I would be in. Also the reward, right? And therefore, using this, I can do like a tree search. So I can simulate future trajectories, right? For all of these policies, I can sample from them, right? I can sample from them, giving me different actions, so that'll lead me down different routes in the tree. So I can simulate future trajectories in this tree. And at the end, I have a pretty good idea. I can do this up to a certain depth, right? 
I don't have to do it until the very end, I can. And then I'll have a pretty good idea of how the immediate future looks, right? Which actions lead me to approximately which states? And for each state, of course, especially for each bottom state here, I have an estimation of the value of that state. So basically, the easiest thing would simply be to, whatever, search, how many steps is this? One, no, this is zero, one, two, three steps into the future. And for each of these states, obtain the value: V here, V here, V, V, V, V, V. And then I simply pick the action up here, I am running out of colors, I simply pick the action up here that will lead me eventually to the highest value state. So, of course, we've not incorporated opponent plays here and so on, but that's the basic idea. You can do this tree search in a more sophisticated way. And this is a topic that we might cover in a video about AlphaGo or AlphaZero. But in essence, you can do the same thing as AlphaGo or AlphaZero, except you're not working with the simulator, but you're working with a learned model on the hidden states of the true observations. So B is how you would actually act, right? So for each observation here, you'd run such a tree search and you kind of get a histogram over visited actions. And again, we'll skip over that here, but this is a part of the AlphaZero paper, and you decide on an action. And that will give you a reward and a next observation. And that's how you act. And then you train these things end to end. So you train the networks such that, of course, the reward, you know what the rewards are, right? The reward prediction of G, you know what that should be, right? Given a trajectory and action sequence, you know what the individual rewards should be. So you can train G for that, first of all. You can also train to predict the correct value functions, like in classic reinforcement learning; you can do like an n-step into-the-future prediction, or you can play until the end, sample trajectories, and so on. And the policy, you predict the policy, your approximate policy, to match your true actions, right? Because your true actions you've generated by doing this entire tree search thing, which is, you know, what you're actually going to do. So you're training your approximate policy predictor, the one you use to run the tree search, to match as closely as possible your actual actions, right? In this fashion. So this policy resulting from hidden state zero should be as close as possible to the action you actually took in the observation that led to hidden state zero. Yeah, so this is how you search, act, and train using MuZero. And this is pretty much it, right? The rest is experiments. The rest is simply showing that they can handle these games. They can keep the performance basically of the simulator-based AlphaZero in games. Sorry, where are the results here? Yeah, so in these games, in these left-hand games, they can keep the performance of AlphaZero, even exceed it here in Go. And remember, they don't have a simulator like AlphaZero; they have to learn this model. And in Atari, they actually outcompete the current state of the art, which is, I think, R2D2 or IMPALA, but it's some, I guess some model-free RL baseline here on Atari. So that's pretty cool. And I think that brings RL to kind of a new level with this hidden learning. 
And yeah, they so they compare against against multiple ones. Oh, the R2D2 is a different thing. All right. Yeah, so that's that's that. For me, it's a cool paper. It's short, read it if you if you want. Invite you to also look at the additional experiments where they basically a blade, what they need is the learned model really as good or better as the real simulator. Does it take as much time? Actually takes less time, which for for higher elo, which is pretty cool. How many simulations are needed? Things like this. All right, that was it. I like this paper. Check it out. Bye bye. | [{"start": 0.0, "end": 6.88, "text": " Hi there. Today we're looking at mastering Atari, Go, chess, and Shogi by planning with"}, {"start": 6.88, "end": 16.72, "text": " a Learned Model by Julian Schitweiser and people generally from DeepMind. So this paper"}, {"start": 16.72, "end": 25.44, "text": " is an extension to Alpha0, the kind of famous algorithm that learn to play Go and chess"}, {"start": 25.44, "end": 35.04, "text": " simply by playing itself and the kind of cool thing about this model is that it has a Learned"}, {"start": 35.04, "end": 41.04, "text": " Environment model. So what does this mean? Usually if you have a game such as chess, I believe"}, {"start": 41.04, "end": 47.68000000000001, "text": " there's a picture of chess down here. If you have a game such as chess and you want to learn to play it,"}, {"start": 47.68, "end": 57.519999999999996, "text": " you need to know the rules of chess. In chess you have the rules like the pawn can move to or one,"}, {"start": 58.4, "end": 66.64, "text": " the bishop can move diagonally and so on. Similarly in Shogi or Go here, where you can place the"}, {"start": 67.2, "end": 73.68, "text": " stones and when you win, everything is clearly defined. So what you can do is actually you can plan."}, {"start": 73.68, "end": 87.12, "text": " You can now think of if I do this opening, my opponent could do either this or this or this."}, {"start": 87.92, "end": 95.76, "text": " And for each of the three moves, I'll have a response. So if they move this pawn, I'll go for a"}, {"start": 95.76, "end": 106.72, "text": " gambit here and if they move this pawn, then I can move on. Something like this. So in a sense,"}, {"start": 106.72, "end": 111.04, "text": " what you have is a tree search. So you start out with the state you're currently in."}, {"start": 112.16, "end": 117.52000000000001, "text": " And then your opponent, sorry, this should be, you're the state you're currently in. Your opponent"}, {"start": 117.52000000000001, "end": 123.92, "text": " has the option of performing any one of these moves. Let's say there are three moves."}, {"start": 123.92, "end": 130.88, "text": " And then from each of these three moves, you again have the option of performing any of these moves."}, {"start": 130.88, "end": 137.52, "text": " And the good thing is in chess, you know each exactly what they do. Like if I move my pawn,"}, {"start": 138.96, "end": 146.48, "text": " then the new board configuration will be the pawn will no longer be here, but here. So you know"}, {"start": 146.48, "end": 152.16, "text": " exactly what's going to happen. You can calculate that you're a perfect simulator. And other"}, {"start": 152.16, "end": 160.96, "text": " domains, you don't have that. For example, in Atari, all you have in Atari is this screen,"}, {"start": 160.96, "end": 169.35999999999999, "text": " right? Maybe you have a little submarine here, right? 
You have some opponents, right? The opponent,"}, {"start": 169.35999999999999, "end": 174.72, "text": " I don't know, what are the, what do you, what do you, the opponents look like? Fish? I don't even know"}, {"start": 174.72, "end": 182.0, "text": " this game, right? And you can, I think you can shoot just coins to select. I don't know. Okay."}, {"start": 182.56, "end": 186.07999999999998, "text": " In any case, and sometimes you need to go up and there is like a health bar."}, {"start": 188.0, "end": 194.4, "text": " So, but in essence, you only have this screen here, right? You don't have, you don't have more."}, {"start": 196.0, "end": 201.12, "text": " And if you press a button, you can, you can, you don't exactly know what's going to happen."}, {"start": 201.12, "end": 207.68, "text": " You don't exactly know what the pixel space will look like as this shot moves forward. Right?"}, {"start": 207.68, "end": 215.04, "text": " I guess you could know, but you can't, you can't use that to plan because the, the kind of space"}, {"start": 215.04, "end": 221.36, "text": " is too big and your actions may be not, not clearly predictable. And when you win or aren't"}, {"start": 221.36, "end": 227.28, "text": " clearly predictable, there may be randomness. So all of this stuff, usually what people do is here,"}, {"start": 227.28, "end": 231.76, "text": " they do use a model free reinforcement learning. We've had this discussion, you know, before."}, {"start": 231.76, "end": 240.4, "text": " So this would be model free. And while chess here, you'd go about model based."}, {"start": 244.48, "end": 252.8, "text": " Now, what MewZero does is it uses a model based planning, but it learns the model. So it,"}, {"start": 252.8, "end": 259.68, "text": " it tries to construct a model for this here. It tries to say, okay, if I have this screen A here,"}, {"start": 259.68, "end": 269.12, "text": " right? My thing is here. And I press the button, right. Then probably my submarine is going to be"}, {"start": 269.12, "end": 276.56, "text": " a bit more to the right. But it doesn't do this exactly. So this has been done before. And this is"}, {"start": 276.56, "end": 282.08, "text": " what's, what's kind of known as learning an environment model, where you map current environment,"}, {"start": 282.08, "end": 291.04, "text": " plus action to the next step in the environment, right? And this usually doesn't work too well,"}, {"start": 291.36, "end": 298.56, "text": " because you're really trying to generate this entire pixel space here. What the cool thing about"}, {"start": 298.56, "end": 305.44, "text": " MewZero is it doesn't do that. It doesn't predict the next state. What it does predict is a hidden"}, {"start": 305.44, "end": 311.2, "text": " state. And let's draw the hidden state as a little cloud here. It predicts a hidden state of"}, {"start": 311.2, "end": 317.68, "text": " the next state. And from the hidden state, it will predict things like the reward, the policy,"}, {"start": 318.16, "end": 325.12, "text": " the value. And then it can use from that hidden state, it'll predict the next hidden state."}, {"start": 326.64, "end": 332.4, "text": " You end from that, it will again predict the reward. So the base idea is you only predict,"}, {"start": 332.4, "end": 340.64, "text": " you only predict what you absolutely need to obtain the values that are important for doing"}, {"start": 340.64, "end": 346.64, "text": " reinforcement learning. You're not trying to predict the full environment. 
You're simply trying"}, {"start": 346.64, "end": 352.4, "text": " to predict whatever is necessary. And this here is a learned quantity that whatever is necessary"}, {"start": 352.4, "end": 364.47999999999996, "text": " to predict what your RL model is going to need. So that's the basic gist of it. And we'll look at"}, {"start": 364.47999999999996, "end": 373.91999999999996, "text": " how they do it or how they describe what they're doing. So basically, the picture A here is how"}, {"start": 373.91999999999996, "end": 381.2, "text": " MewZero plans. So imagine you have a configuration, a current state. This is an observation. Now this could"}, {"start": 381.2, "end": 387.28, "text": " be a chess board. This could also be a position and show, but it could also be a screen in a notary"}, {"start": 387.28, "end": 394.8, "text": " game or a camera input of a self-driving car and so on. And the first thing it does, it encodes"}, {"start": 394.8, "end": 401.44, "text": " that observation using this H here, this, I believe they call this a representation function."}, {"start": 402.71999999999997, "end": 410.08, "text": " You encode that to this hidden state. Now the hidden state, this is appropriately sized,"}, {"start": 410.08, "end": 420.15999999999997, "text": " the hidden state here, is supposed to capture everything you need about the state to predict the"}, {"start": 420.15999999999997, "end": 427.59999999999997, "text": " kind of RL quantities in the future. And you learn this function H, which in this case, of course,"}, {"start": 427.59999999999997, "end": 435.2, "text": " is going to be neural network in order to produce such a state. Now from this state, you do two things."}, {"start": 435.2, "end": 443.68, "text": " First of all, you have this function F here and they call this I don't remember, but you have a"}, {"start": 443.68, "end": 450.08, "text": " function to predict the following two quantities. You predict the value function at that state."}, {"start": 450.08, "end": 457.59999999999997, "text": " And the value function simply means if you are in this state here, right? This is now not a true"}, {"start": 457.59999999999997, "end": 463.76, "text": " state, but a hidden state, but still if you're in this state, in this hidden state that belongs to"}, {"start": 463.76, "end": 473.68, "text": " this observation. Then in the future, you're going to make this much reward on average, with your"}, {"start": 473.68, "end": 478.88, "text": " current policy. That's the value function. So the value function basically tells you how good it is"}, {"start": 478.88, "end": 488.0, "text": " to be in a given state, right? And then the policy, this is a bit special, the policy is predicting"}, {"start": 488.0, "end": 495.92, "text": " how you would act in this state. Now, this is a bit a bit confusing or it was to me when I first"}, {"start": 495.92, "end": 504.16, "text": " learning because we're going to see over here how a mu zero decides on how to act. Namely, it does"}, {"start": 504.16, "end": 510.24, "text": " this entire three search thing up to a certain depth, right? And then it creates this histogram and"}, {"start": 510.24, "end": 517.12, "text": " from that it produces the action. But in order to produce to do this three search, this is exactly"}, {"start": 517.12, "end": 523.6, "text": " this picture. A, this is the that three search that is done. And in order to do that, you need these"}, {"start": 523.6, "end": 531.36, "text": " p values because we'll go there in a second. 
You need these p values and they cannot themselves"}, {"start": 531.36, "end": 536.8, "text": " again do a three search, right? That would be like infinite recursion. So what you need is you need"}, {"start": 536.8, "end": 546.48, "text": " kind of an estimate, right? Like if I were and especially down down, for it makes more sense,"}, {"start": 546.48, "end": 553.76, "text": " if I were in that state, how would I act, right? If I were to do a three search like this. So you"}, {"start": 553.76, "end": 559.84, "text": " simply build a neural network that tells you with one evaluation without having to do the entire"}, {"start": 559.84, "end": 566.88, "text": " three search down from here, how you would act. This doesn't need to be a perfect approximation"}, {"start": 567.52, "end": 573.36, "text": " of how you would actually act, but it needs to be good enough, right? So this simply tells you"}, {"start": 573.36, "end": 580.16, "text": " how you would act in that state. And that's important because what we do next is we use this policy"}, {"start": 580.16, "end": 586.0, "text": " to generate this action. And this is going to simulate it action. This isn't a real action because"}, {"start": 586.0, "end": 591.36, "text": " the real action would go here to the next actual observation. This is a simulated action saying,"}, {"start": 591.36, "end": 600.5600000000001, "text": " if I'm in this hidden state, right? My policy approximately would be this thing. And so I can"}, {"start": 600.56, "end": 607.52, "text": " sample from that and say my action in that state would be this action. And so now I have a hidden"}, {"start": 607.52, "end": 615.1199999999999, "text": " state and an action. And from that, I can produce the next hidden state. Now, of course, if I were to"}, {"start": 615.1199999999999, "end": 621.52, "text": " apply the action up here to the observation, right? Action one, I would get the next observation."}, {"start": 622.0799999999999, "end": 629.76, "text": " And that is exactly how alpha zero works, right? You use your simulator, your perfect simulator,"}, {"start": 629.76, "end": 635.36, "text": " to take the current observation, the current state with a given action that the"}, {"start": 635.36, "end": 640.24, "text": " or this policy gives you. And you produce the next state, but we don't have a perfect simulator,"}, {"start": 640.24, "end": 645.84, "text": " right? And we don't want to learn a model that predicts the entire state. But what we want to do"}, {"start": 645.84, "end": 651.76, "text": " is we want to predict the following. If we were to take a one here, if, right?"}, {"start": 651.76, "end": 661.6, "text": " We would get an observation. Can we predict the result when we would apply the function H"}, {"start": 663.12, "end": 670.72, "text": " to that, right? Giving me S prime, right? This is observation prime. So this function H here,"}, {"start": 670.72, "end": 677.92, "text": " which is the function that maps from observation space to hidden space. If we were to apply this"}, {"start": 677.92, "end": 685.12, "text": " to the next to the next observation, we would obtain some hidden state for that observation."}, {"start": 685.12, "end": 693.04, "text": " Can we predict that thing? So we need a function that maps from the hidden state,"}, {"start": 694.0, "end": 702.24, "text": " given an action, right? To the next hidden state. And that's exactly what what happens down here,"}, {"start": 702.24, "end": 711.2, "text": " right? 
This function G here maps exactly this hidden state plus the action to the next hidden state."}, {"start": 712.48, "end": 720.8, "text": " And also, also at the same time, it will predict a reward, right? Because in each step,"}, {"start": 720.8, "end": 726.48, "text": " you might get a reward. So each transition here gives you a reward. We're trying to predict that"}, {"start": 726.48, "end": 730.88, "text": " as well. Not that important, especially for games like chess or showy where there's only"}, {"start": 730.88, "end": 737.04, "text": " win or lose at the very end. But they incorporate this here to also be able to play these Atari games"}, {"start": 737.04, "end": 742.88, "text": " and like a broader range of reinforcement learning games. But in essence, that's what it is, right?"}, {"start": 742.88, "end": 747.4399999999999, "text": " We're trying to predict the next hidden state. And now we can basically recursively apply this. So"}, {"start": 747.4399999999999, "end": 754.64, "text": " from here, I have an idea of what my policy might be in that state, right? My approximate policy,"}, {"start": 754.64, "end": 762.16, "text": " my kind of mini policy that only needs one evaluation. I can sample an action from that policy."}, {"start": 762.72, "end": 769.52, "text": " And if maybe it's action to here, and I can then predict the next hidden state that I would be in."}, {"start": 770.48, "end": 778.8, "text": " Also, the reward, right? And therefore, using this, I can do like a three search. So I can"}, {"start": 778.8, "end": 785.1999999999999, "text": " simulate future trajectories, right? First, all of these policies, I can sample from them."}, {"start": 786.4, "end": 792.64, "text": " Right? I can sample from them, give me different actions so that that'll lead me down the tree"}, {"start": 792.64, "end": 800.16, "text": " different routes. So I can simulate future trajectories in this tree. And at the end, I have a"}, {"start": 800.16, "end": 805.1999999999999, "text": " pretty good idea. I can do this up to a certain depth, right? I don't have to do it until the very"}, {"start": 805.2, "end": 813.9200000000001, "text": " end, I can. And then I'll have a pretty good idea of how my immediate, the immediate future looks,"}, {"start": 813.9200000000001, "end": 820.32, "text": " right? Which actions lead me to approximately which states? And for each state, of course,"}, {"start": 820.32, "end": 825.36, "text": " especially for each bottom state here, I have an estimation of the value of that state. So"}, {"start": 825.36, "end": 832.48, "text": " basically, I can, the easiest thing would simply be to whatever search, how many steps is this? One,"}, {"start": 832.48, "end": 840.88, "text": " no, this is zero. One, two, three steps into the future. And for each of these states, obtain the"}, {"start": 840.88, "end": 849.04, "text": " value V here, V here, V, V, V, V, V. And then I simply pick the action up, the action up here."}, {"start": 849.76, "end": 855.52, "text": " I am running out of colors. I simply pick the action up here that will lead me eventually to"}, {"start": 855.52, "end": 865.28, "text": " the highest value state. So that's, of course, we've not incorporated opponent plays here and so on,"}, {"start": 865.28, "end": 871.6, "text": " but that's the basic idea. You can do this more sophisticated, this tree search. And this is a"}, {"start": 871.6, "end": 879.52, "text": " topic that we might cover in a video about AlphaGo or Alpha0. 
But in essence, you can do the same"}, {"start": 879.52, "end": 886.24, "text": " thing as AlphaGo or Alpha0 except if you're not working with the simulator, but you're working"}, {"start": 886.24, "end": 894.24, "text": " with a learned model on the hidden states of the true observations. So B is how you would actually"}, {"start": 894.24, "end": 901.36, "text": " act, right? So for each observation here, we'd say you'd run such a tree search and you kind of"}, {"start": 901.36, "end": 907.6, "text": " get a histogram over visited actions. And again, we'll skip over that here, but this, this is a"}, {"start": 907.6, "end": 914.64, "text": " part of the Alpha0 paper and you decide on an action. And that will give you a reward and a"}, {"start": 914.64, "end": 922.24, "text": " next observation. And that's how you act. And then you train these things end to end. So you,"}, {"start": 923.52, "end": 933.28, "text": " you train, you train the networks such that, of course, the reward, you know what the rewards"}, {"start": 933.28, "end": 939.6, "text": " are, right? The reward prediction of G, you know what that should be, right? From given a trajectory"}, {"start": 939.6, "end": 946.48, "text": " and action sequence, you know what the individual reward should be. So that's, you can train G for that,"}, {"start": 947.12, "end": 954.88, "text": " first of all, you can also train to predict the correct value functions. Like in classic reinforcement"}, {"start": 954.88, "end": 961.4399999999999, "text": " learning, you can do like an end step into the future prediction or you can play until the end"}, {"start": 961.44, "end": 969.5200000000001, "text": " sample trajectories and so on. And the policy, you predict, you, you predict the policy, your"}, {"start": 969.5200000000001, "end": 976.5600000000001, "text": " approximate policy to match your true actions, right? Because your true actions you've generated by"}, {"start": 976.5600000000001, "end": 984.48, "text": " doing this entire tree search thing, which is, you know, that you're, what you're actually going to do."}, {"start": 984.48, "end": 991.76, "text": " So you're training your approximate policy predictor that you use to run the tree search"}, {"start": 991.76, "end": 1004.4, "text": " up to match as close as possible to your actual actions, right? This in this fashion. So this"}, {"start": 1004.4, "end": 1013.12, "text": " policy resulting from hidden state zero should be as close as possible to the action you actually took"}, {"start": 1013.12, "end": 1022.88, "text": " in the observation that led to hidden state zero. Yeah, so this is how you search, search, act,"}, {"start": 1022.88, "end": 1032.48, "text": " and train using mu zero. And this is pretty, this is it, right? This is the rest is experiments."}, {"start": 1032.48, "end": 1040.4, "text": " The rest is simply showing that they can handle these games. They can keep the performance basically"}, {"start": 1040.4, "end": 1048.0, "text": " of the simulator based alpha zero in games. Sorry, where are the results here? Yeah, so in these"}, {"start": 1048.0, "end": 1055.2, "text": " games, in these left hand games, they can keep the performance of alpha zero, even exceeded here in"}, {"start": 1055.2, "end": 1062.8000000000002, "text": " go. And remember, they don't have a simulator like alpha zero. 
They have to learn this model."}, {"start": 1062.8, "end": 1072.32, "text": " And in Atari, they actually outcompete the current state of the art, which is I think R2D2"}, {"start": 1073.68, "end": 1081.84, "text": " or impala. But it's it's some model, I guess some model three RL baseline here on the on Atari."}, {"start": 1083.2, "end": 1090.24, "text": " So that's pretty cool. And I think that brings RL to kind of a new level with this hidden learning."}, {"start": 1090.24, "end": 1100.08, "text": " And yeah, they so they compare against against multiple ones. Oh, the R2D2 is a different thing. All right."}, {"start": 1104.32, "end": 1112.4, "text": " Yeah, so that's that's that. For me, it's a cool paper. It's short, read it if you if you want."}, {"start": 1113.84, "end": 1117.2, "text": " Invite you to also look at the additional experiments where they basically"}, {"start": 1117.2, "end": 1123.3600000000001, "text": " a blade, what they need is the learned model really as good or better as the real simulator."}, {"start": 1123.3600000000001, "end": 1129.2, "text": " Does it take as much time? Actually takes less time, which for for higher elo, which is pretty cool."}, {"start": 1129.2, "end": 1151.2, "text": " How many simulations are needed? Things like this. All right, that was it. I like this paper. Check it out. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=KXEEqcwXn8w | A neurally plausible model learns successor representations in partially observable environments | Successor representations are a mid-point between model-based and model-free reinforcement learning. This paper learns successor representations in environments where only incomplete information is available.
Abstract:
Animals need to devise strategies to maximize returns while interacting with their environment based on incoming noisy sensory observations. Task-relevant states, such as the agent's location within an environment or the presence of a predator, are often not directly observable but must be inferred using available sensory information. Successor representations (SR) have been proposed as a middle-ground between model-based and model-free reinforcement learning strategies, allowing for fast value computation and rapid adaptation to changes in the reward function or goal locations. Indeed, recent studies suggest that features of neural responses are consistent with the SR framework. However, it is not clear how such representations might be learned and computed in partially observed, noisy environments. Here, we introduce a neurally plausible model using distributional successor features, which builds on the distributed distributional code for the representation and computation of uncertainty, and which allows for efficient value function computation in partially observed environments via the successor representation. We show that distributional successor features can support reinforcement learning in noisy environments in which direct learning of successful policies is infeasible.
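To make the abstract's claim of "fast value computation and rapid adaptation to changes in the reward function" concrete: if a matrix M holds the expected discounted future occupancy of each state under a fixed policy, and the reward depends only on the state, then state values are just M times the reward vector, so a new goal only costs one matrix-vector product. Below is a toy sketch; the numbers are made up for illustration and are not taken from the paper.

```python
import numpy as np

# M[i, j]: expected discounted number of future visits to state j when
# starting in state i and following a fixed policy (4 toy states).
M = np.array([
    [1.0, 0.7, 0.4, 0.2],
    [0.1, 1.0, 0.6, 0.3],
    [0.0, 0.2, 1.0, 0.7],
    [0.0, 0.1, 0.3, 1.0],
])

r = np.array([0.0, 0.0, 0.0, 1.0])      # reward only at the last state
V = M @ r                               # value of every state under this policy

r_new = np.array([0.0, 1.0, 0.0, 0.0])  # goal moves: re-evaluate instantly,
V_new = M @ r_new                       # without re-learning M
```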
Authors: Eszter Vertes, Maneesh Sahani
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, hi there. Today we're looking at "A neurally plausible model learns successor representations in partially observable environments" by Eszter Vertes and Maneesh Sahani. This paper is on a topic that has been interesting for a while, and that's successor representations. So we'll dive into all of this. The title is fairly long and complicated, but ultimately we're dealing with a reinforcement learning setting. So if you know something about reinforcement learning: in reinforcement learning you usually have an agent, which, let's just say, is you, and there is an environment, which is a big black box that you don't know anything about. What the environment gives you is what's called an observation. An observation could be anything, but in this case let's just assume you get a little picture of what's in front of you. So in front of you might be a tree, and in front of you might be a house, and then you can perform an action a, and this action in this case might be to enter the house. Then the environment in the next step gives you back a new picture and says: ah, you're now inside the house, here is a door that leads to this room, a door that leads to that room, and there's a little table in front of you. So this is just this cycle of action and observation, and with that you're trying to collect some reward over time. Now there are different ways of achieving this reward over time. For example, you could get a reward for finding the kitchen, or for going into as many rooms as possible, or anything like this. So the objective is to learn what's called a policy, that is, which actions to take (action one, action two, action three) given the observations, such that your reward is maximized. There are mainly two ways to go about this: the model-free and the model-based reinforcement learning approach. Let's split them. In the model-free approach, what you're trying to do is simply learn a policy, and we call this here pi of s, where s is your state, and the state you can think of as the observation. This policy will simply output an action, and this is the simplest setup of model-free reinforcement learning. The important thing here is that you're trying to learn this; usually there are parameters theta of this policy pi. This could be a neural network, and the theta are then the weights of the neural network. So you're trying to learn the neural network such that if you give it a state, it just outputs the action. You have this neural network, you input the state, it goes through layer after layer, and then it outputs one of maybe four actions: go North, go South, go West, go East. You're just trying to train the neural network using backprop and the reward signal, through what's called the REINFORCE trick or variants thereof. This is model-free reinforcement learning. It's very easy to implement, let's say, and it's very applicable, and it will simply give you a mapping: you have to know nothing about how the world works, it will simply tell you, at the end, if you're in this state, do that action, and the reward will be high. In contrast, there is the other world: this is model-based reinforcement learning.
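Before the model-based side, here is a minimal sketch of the model-free recipe just described: a small policy network pi_theta(a | s) trained with the REINFORCE trick. This is not code from the video or the paper; the observation size, network width, and action set are made up for illustration.

```python
import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 8, 4            # hypothetical sizes: small observation vector,
                                     # four actions (North, South, West, East)

policy = nn.Sequential(              # pi_theta(a | s): state in, action logits out
    nn.Linear(OBS_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, N_ACTIONS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    """One REINFORCE step: increase the log-probability of actions that
    were followed by high return, decrease it for low-return actions.

    states:  (T, OBS_DIM) observations from one episode
    actions: (T,) the actions that were taken
    returns: (T,) discounted return following each action
    """
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs[torch.arange(len(actions)), actions]
    loss = -(chosen * returns).mean()      # gradient of the negative expected return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```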
So in model based reinforcement learning what you have is a model of the world and the model of the world. Let's is best described for example if you play chess right. So if you play chess and this is a let's do a simplified chess board here four by four and you have a pawn right here right here. You have a pawn and you know if I do the action of moving the pawn forward I know the pawn will then be in this square right here right in the next time step. I know that because I have a model of the world and know how the world works and I can predict basically the results of my actions. So if you have a model based reinforcement learning setup if you know how the world works you can do something like a search. So given you're here in the state right you know if I do action one I go to this state if I do action two I go to that state and if I do action three I go to this other state and from each of the states you can then say ah but again I have three actions and I can you know go into these three states go into these maybe here two and maybe here I can go into these actually let's do three as well right and then the question more becomes can you find a path through this thing such that at the end you are in the state that you want to end up right. So for example here is outside and then here you can go to the tree to the house or to the field and in the house you can go to the bedroom the bathroom you know the kitchen and you know all of this you have a model so you can actually kind of compute what would happen if I do something and then search for the best path whereas in the model free reinforcement learning approach what you simply do is you'd say here is a state and the state is for example I am in the house and now give me the action that would maximize my future reward and you're trying to learn this directly so it's a very different style of reinforcement learning basically one is a one is a pure machine learning approach and the other one is a search problem now you can of course mix and match the two like for example people in AlphaGo have done they have a model based reinforcement learning that also has kind of a learning machine learning elements but in between now we have the successor features so the successor representations they are if you will they are somewhere in between the two so they kind of trade off the advantages of model free where you only have to learn a function right from state to something with the advantages of model base the fact that you actually have a bit of an idea of how the world works and can adjust quickly to let's say different reward structures or things like this so what do successor representations do successor representations basically learn how states are connected and this is a classic successor representation so the successor representation M here of policy pi the policy remember is what tells you which action you should take in a given state you you define it as a connection between state i and state j and m of s i s j means given that i am in s i so this could be the kitchen and your goal is to find in the bedroom and if this is the kitchen given that i am in state s i what's the probability that in the future at some point i will transition to s i right given that i'm in the kitchen what's the probability that i'll end up in the bedroom at some point in the future and this is formally expressed this is the expectation over your policy and it's the it's the indicator function that the future state sorry this is the future state t plus k you 
see k goes from zero to infinity so for all of the future and s t is the one you're in now so for any future state this is equal to s j now of course this makes no sense unless you kind of discount I have a discount factor here so if you're in state if you're in the bedroom further in the future then this value would be lower so that this value is high if you will transition from s i to s j with high probability in the near future and this is a successor representation right it basically tells you if you want to go from state s i to state s j how likely is that in the near future right so if the if this number is high you know that that these two states are closely connected that you can expect to end up in state s j somewhere down the line if you're in s i now and one more representation if you consider the vector m pi of s i given all of the s j some dot here so this is a vector you can actually compare two states s i so if one is if you plug in here you plug in the kitchen and then also you plug in the I don't know the garage if they and you will get out two vectors right you get two vectors if those vectors are very similar then you know that if if you're in the kitchen or in the garage it doesn't matter you're gonna end up you have a similar future trajectory basically however if those two vectors are far apart you know that these two states are far apart with respect to your policy so this is pretty cool things you can do with successor representations and I hope this gives you kind of some insight so another neat trick is that if you have a value function so and the value function in this case there's a simplified assumption but you don't actually need it the simplified assumption is that the reward only depends on the state you're in basically doesn't matter how you get to the state like the actions you perform if you're in a given state so if you're in a given room in the house you'll get some reward like for example if you find the bedroom then you win that's a reward that would only be characterized by the state if that's the case you can compute the value function of of the reinforcement learning problem simply by integrating over the over the success representations so for each state you simply go over all of the possible other states and you ask how likely am I to go to that state and what reward will I have in that state and that's your value function so pretty pretty simple you can actually learn the successor representations by TD learning by temporal difference learning which is a method that's applied throughout like throughout reinforcement learning especially in thing places like Q learning and yeah and also for learning value functions so pretty neat successor representations this paper then goes from successor representations of individual state to successor representations over continuous space so right now we have these states state kitchen you go to the bedroom you go to somewhere right and these states were kind of discrete places so there was a house and you have different you know different rooms in the house and you can go between them now we're dealing more with continuous states so you can generalize these successor representations to continue state by considering not the states themselves but features of the of the state and a feature and this here you have to kind of imagine as binary features and the features let me give like some really dumb examples but maybe it helps you like one feature could be the smell does it smell in the room like just binary 
does it smell or doesn't it smell right and then one feature could there be is there sunlight and then one feature could be is it warm right with and these are all binary features and so you kind of you have to build the features such that if the features are the same then the states should be fairly you know close in whatever sense so for example if it smells but there is no sunlight you're probably somewhere in the bathroom like where exactly in xy coordinates you are in the bathroom it doesn't really matter to this as long as like the features are high and so if it smells and there is no sunlight you're probably somewhere in the bathroom and that makes all the states in the bathroom all the coordinates close together so this is how you have to imagine these features you can define your success or representations exactly the same over these features except that the representation is now not from state i to state j but from a state to a given feature so that means if I am in state st at the current time what is the probability that in the near future this feature will be high right so if I am right now in the or close to the bathroom let's say the the probability that smell oh sorry this should be a highlight the probability that smell is high in the future is very high right so this this number would be high so exactly the same except for these continuous features now and you can do the same thing including defining the value function as a simple linear multiplication with these features that is an assumption under the assumption that the reward is a linear function of the features of the states which is the analogous assumption to saying that the reward only depends on the state in the linear case or somewhat of an analogous function not entirely all right so you can also learn this by temporal difference learning exactly the same so this is pretty cool these are the success or representations and you can actually you know if you learn them you have kind of a model of how the world works not as much a model as the model based reinforcement learning where you know exactly how it works right here you know exactly how the world works you have this model in model three you don't know how the world works at all you simply know if I'm in this state and do this action that that'll turn out really well but in the successor representation framework you have you have an idea of what states there are we'll do the discrete case right now so this could be kitchen this could be outdoor this could be bedroom and so you you you have an idea what states there are and so on and how they connect to each other like you say ah from the kitchen I can easily go to the bedroom but I cannot as well go to maybe the bathroom from outdoor I can easily go to the kitchen but I can't go to the bedroom and so on so you have kind of an idea of how all of these states connect to each other and that that is the successor representation you can already see how that helps learning agent a lot if you introduce the successor if you have the successor representation now what this this paper deals with in essence is it says okay the successor representations are cool but it has only so far been done in a case where you have full observability and the full observability is the case where you kind of know what state you're in right you kind of know that um sorry you are in the kitchen you are outdoors you are in the bedroom that is not known but what if you don't and I mean most problems you don't what if you just have a picture 
like here right you just see a tree in the house right you don't you kind of have to infer that you are outdoors right and if you're here you just get this picture of a couple of doors and the table and you have to infer that you are now in the living room right so in essence there is an additional layer of complexity not only do you go from state from state to state to state but you don't actually observe the states what you observe is from each state you observe what are called observations right so you only observe these and you have to infer what the kind of have to guess what the underlying states are in order to know what you should do to get to the next state right you only ever observe the observations so this here is the actual thing this is kitchen and this here could be a picture of the kitchen right there's a counter there's a stove yeah and so you you get kind of what I what I mean in their example they have they simplify this to kind of a toy data setup where you have this environment and this is one beautiful picture I don't why oh well just you have one this setup and it is this box basically this box and it has this wall right and then you have an agent that is able to walk around in here like with whatever policy the policy determines how it walks around but then what you observe is not the actual position but what you observe is for example for this position you observe a random point here so they basically add noise to each observer to each state and if you're in this state you will observe one of these points in this circle right so your your trajectory might look to you as you observe it much more much like go for example from here to here to here to here and you kind of have to guess what the underlying state is and you see this here this this blue thing is what the agent actually does but the gray thing is what it observes and the observations are sometimes even outside of this of this boundary and this this orange thing is now the infertile and that's what we actually want is to go from the observed to this inferred and we want to that inferred is as close as possible to this true latent state right so the way they do it is they introduce this distributional distributed coding for the for the features basically what they say is they say we will build a framework where we can where we represent the features as as as expectations over some distribution and the expectation will call mu and mu is simply the kind of mean of this of this feature under this distribution this is very general so let's look at what at how to plug this in so what they now have to do is they have to learn these two things right they have to first of all if I draw this picture again these are the underlying states and they kind of transition into each other so this is state one state two state three and with action one action two we transition from state to state but also there are these observations observation one observation two observation three so the agent needs to learn two different things first of all it needs to learn given an observation what state am I probably in right this is the first thing it needs to learn and then the second thing it needs to learn is given this state and this action what's the next state that I will go to right and this is and this this of course these things down here they're not observed so these things down here you can only do in distribution so I'm going to represent this with it P here you can only kind of do this in distribution and the way they handle 
it is they always maintain the expected value of these things and that's they do this in this wake sleep algorithm all right so this is me re-recording this part because I have done a terrible job at the first time so I want to understand this wake sleep algorithm to compute the things that we don't know let me draw this actually again so the way this algorithm does it is actually pretty cool it has two phases sleep phase and awake phase and it alternates between the two constantly it's kind of like expectation maximization and ultimately what you want to learn are two different sets of parameters w and t now you whenever you learn t you use w the one that you've already learned and whenever you learn w you use the t that you've already learned so it's kind of a bootstrapping each other up the two functions you learn here are this f w and the t here so t is just a matrix and f of w is a function the function has weights the weights w so you see in the sleep phase you update w and in the wake phase you update team now why is this called wake and sleep it's because in the wake phase you you're actually so called awake and you use real observations so in the wake phase and I find it easier to start actually at the wake phase in the wake phase you collect observations so you let your agent go around your it's environment and collect a bunch of observations you you don't know what the states are what you do is simply you collect these observations now it's not that important what the policy is here so you you basically follow some policy and you collect these observations right and then what you what you say is okay I have the function f of w and remember since we're in the wake phase we're learning t so we assume we already have the w in in essence in practice we start out with a random one and right and then kind of alternate between the two phases until both get really good but so we already have a w and we use it to update team how do we do this this is we need to understand what this function f of w does f of w takes this mu and the current observation and produces a new mu so what is a mu this this mu here this mu here as we saw above here the mu is the expectation over the features and in essence the mu is a guess the mu is your best guess of what the features of the state are or in the in the discrete case you could also say a guess of what the state is right so you don't know the state right but you what you want to maintain is a distribution over states so you want to kind of maintain this distribution but you can't you know calculate you can't properly efficiently calculate with an entire distribution unless you assume it's some sort of Gaussian or so but what you can do is you can simply take its mean mu right and and that's your best guess for what the state is the state could be anywhere anywhere here right according to this distribution but you simply come up with mu which is your best guess so the function the function f of w it takes in the best guess of where you were up until the last step and it also takes as an argument your current observation and it gives you the output of f is mu t right is the best guess of where you are now yeah it's pretty so pretty straightforward if you think about it so for every for every observation you want to have kind of a guess of where the yours what your state is and that's mu right so what f does is it takes whatever observations you had these observations gave rise to a mu that guess where you are you take this mu and you take this 
observation and from that you derive the next guess of where you are yeah you just say I guessed I was in the kitchen before now I moved I observed that I move through some sort of door and there's some sort of table so given that I get I thought I was in the kitchen and that I observed this thing now I'm probably in the living room all right that's what f w is so you input the observations that you had and you input your current observation right to get the guess of where you're next and these are real observations right and then you simply update t what does t do t relates your current and your next guess and that's important we already said that if takes your your kind of last guess and gives you the next guess t does kind of the same thing but t does it without having without relying on an additional observation t simply says well if I am here or if my guess is that I am in the kitchen then what's the probability that in the next step I'll be in the living room without observing anything right it's t is simply relating states to each other or relating guesses of states to each other right so it's simply it's simply saying well under the current policy that I am what is the kind of distribution of going from one room to the next room right so you learn in the wake face you learn the t the t simply represents how you move from state to state so it's exactly basically this function here except that it's not from state to state but it relates your guess about your guess your mu of the state one to the mu of the state two right and then in the sleep face so in the sleep face you you now assume that you have a good estimate of how the states relate to each other and what you can then do is you can actually sample trajectories and this is why it's called sleeping it's kind of like dreaming so given that you have a model t of how states transition to each other or your your guesses about states more precisely you can have sample state trajectories so you can dream up how you would move in an environment right and the assumption here is that you know the process that that if you have a state that gives you an observation for example in their experiments is always the state x x y coordinates and that's corrupted by Gaussian noise there is also ways to learn this transition this what's called the this is what's called the observation process right but you assume you know it so you can sample trajectories of states and corresponding observations right now this is not the real world but this is using this t down here you kind of know how or you kind of have some sort of model can you learn a model of how you move about the world so you sample these trajectories and from these trajectories you can now learn the f of w function so you see since you know what the state is right you can compute these features exactly and then you can learn this f of w function that gives you a takes in the guess of the last state and the current observation and gives you the next the guess of the next state and that you can then use temporal difference learning this is always here also with the t here we have temporal difference kind of a temporal difference learning to learn the parameters w so it's it's very kind of convoluted but ultimately it's a simple process in the wake phase you go into the world and actually collect real observations right and you have a method of deriving from these observations deriving the guesses about the states and so what you can do is you can learn a transition between the states right 
if you have a good guess of what the states are given each observation you can learn how to transition from one state to the next state except you don't do it in actual states you do it in guesses about states then once you have a model of how you move from one state to the next state you can go and dream up such state trajectories right you can dream a state trajectories and therefore also you can dream how you would observe them and given that you can learn then a better function that relates your guess your guess about a state given the observation to the actual features of the state since for this particular thing you know what the state is right so this this is this two step process notice the cool thing we've never actually had to learn this mu explicitly we never had to learn how to go from observations to your guesses about states because we can compute this recursively right so you simply start out with mu zero which is a guess about the initial state and then you go to mu one and mu two and you never actually have to learn that function all right all right so that's how they that's how they kind of learn these these success representations and experiments of this are fairly cool here is another diagram of how that looks like you have a state this gives you an observation and from that you derive a guess of what this state is right so you can now look at what the agent learned the agent actually learns dynamics of this room it means if you're here you probably go somewhere right there's no clear direction but if you're close to the wall your next states are probably going to be inwards of this wall right and yeah I've already shown you this picture so they have a last cool experiment here where what they do is they specify a reward and the reward is down here and from each state you want to know which way do I have to go to right get the reward now if they give the agent the value of the latent state and the latent state here are just your xy coordinates if they give this to the agent and they let it run they let it learn the structure of the world it will correctly conclude a holoc these are the kind of here is high value states lower lower lower lower lower lower value states right up until over here are the most low value states because you travel the longest to go to the reward if you just give it the observation the noisy observation it will actually assign high value to states here because of course it it doesn't infer the latent state it simply takes the observation as face value says well I was here and I reached here pretty quickly so it must be a good state but in fact it wasn't here it was here and the added noise would just corrupt the observation so you see it learns kind of a wrong model of the world whereas if you use this ddc you see who sorry about that if you use this ddc you see you are much closer to the true state of the world like to the to the one on the left here where you so on the left here you actually kind of cheat right you give it the actual state but on here you give it the observation but tell it it's actually a noisy observation and you use what this paper proposed and again it will learn to assign a low value to these states because it needs to go all the way around even though it has seen supposedly seen the agent go from here to here directly but it kind of understands that it's just a noisy observation all right so this was this from this paper it's a very very cool approach I think to reinforcement learning and there's some more experiments 
where you can see that this ddc actually helps and I'm excited about successor representations and how to incorporate them in reinforcement learning because it seems a perfect kind of middle ground between model based and model free rl right with that thanks for listening and bye bye | [{"start": 0.0, "end": 4.92, "text": " Alright, hi there. Today we're looking at a nearly plausible model,"}, {"start": 4.92, "end": 10.0, "text": " learned successor representations in partially observable environments by Esther"}, {"start": 10.0, "end": 18.36, "text": " Verdes and Manish Saani. This paper is a paper on a topic that has been"}, {"start": 18.36, "end": 25.04, "text": " interesting for a while and that's successor representations. So we'll dive"}, {"start": 25.04, "end": 30.799999999999997, "text": " into all of this. The title is Ferro Langthian complicated, but ultimately we're"}, {"start": 30.799999999999997, "end": 36.08, "text": " dealing with a setting of reinforcement learning. So if you know something about"}, {"start": 36.08, "end": 41.879999999999995, "text": " reinforcement learning in reinforcement learning, usually you have a agent"}, {"start": 41.879999999999995, "end": 50.42, "text": " which yeah let's just say this is you and there is an environment which is a"}, {"start": 50.42, "end": 55.46, "text": " big black box that you don't know anything about. This is environment and what"}, {"start": 55.46, "end": 59.14, "text": " the environment gives you is what's called an observation. So an observation"}, {"start": 59.14, "end": 63.94, "text": " could be anything but in this case let's just assume it's a you get a little"}, {"start": 63.94, "end": 69.3, "text": " picture right of what's in front of you. So in front of you might be a tree and"}, {"start": 69.3, "end": 76.22, "text": " in front of you might be a house and then you can perform an action a and this"}, {"start": 76.22, "end": 81.94, "text": " action in this case might be to enter the house right and then the environment in"}, {"start": 81.94, "end": 86.26, "text": " the next step it gives you back a new picture and says ah you're now inside the"}, {"start": 86.26, "end": 91.34, "text": " house. So here is a door that leads you to this room and the door that leads you"}, {"start": 91.34, "end": 95.42, "text": " that room and there's a little table in front of you. Right so this is this"}, {"start": 95.42, "end": 101.22, "text": " just this cycle of action observation and with that you're trying to collect"}, {"start": 101.22, "end": 110.14, "text": " some reward over time. Now there are different ways of achieving this this"}, {"start": 110.14, "end": 115.22, "text": " reward over time so basically the reward is going to be for example or you"}, {"start": 115.22, "end": 122.18, "text": " could get a reward for finding the kitchen or for going as into as many rooms as"}, {"start": 122.18, "end": 126.9, "text": " possible or you know anything like this. So the other objective is to learn a"}, {"start": 126.9, "end": 131.54, "text": " what's called a policy. So which actions to take. So action one action two action"}, {"start": 131.54, "end": 137.42000000000002, "text": " three given the observations that maximizes your rewards. So there's mainly two"}, {"start": 137.42000000000002, "end": 141.34, "text": " ways to go about this. There's the model free and the model based reinforcement"}, {"start": 141.34, "end": 149.14000000000001, "text": " learning approach. Let's split them. 
So in the model free model free approach what"}, {"start": 149.14000000000001, "end": 154.18, "text": " you're trying to do is you're trying to simply learn a policy and we call this"}, {"start": 154.18, "end": 160.06, "text": " here pi of s and s is your state and the state you can think of it as the"}, {"start": 160.06, "end": 168.82, "text": " observation. So in this policy we'll simply output an action and this this is"}, {"start": 168.82, "end": 171.94, "text": " the kind of the simplest setup of model free reinforcement learning. The"}, {"start": 171.94, "end": 176.38, "text": " important thing here is you're trying to learn this usually there's parameters"}, {"start": 176.38, "end": 181.5, "text": " theta of this policy pi. This could be a neural network and the theta are then"}, {"start": 181.5, "end": 185.86, "text": " the weights of the neural network. So you're trying to learn the neural network such"}, {"start": 185.86, "end": 190.78, "text": " that if you give it a state it just outputs the action. So you have this neural"}, {"start": 190.78, "end": 195.42, "text": " network with your state you input the state into layer layer layer layer layer"}, {"start": 195.42, "end": 202.02, "text": " layer and then it outputs one of maybe three actions. Go North go South go West"}, {"start": 202.02, "end": 207.18, "text": " maybe go East. Right this could be four four actions. You're just trying to"}, {"start": 207.18, "end": 212.02, "text": " train the neural network using back prop and the reward signal through what's"}, {"start": 212.02, "end": 216.9, "text": " called the reinforced trick or variance thereof. This is model free"}, {"start": 216.9, "end": 222.26000000000002, "text": " reinforcement learning. It's very it's very easy to implement let's say and"}, {"start": 222.26000000000002, "end": 227.58, "text": " it's very applicable and it will simply give you a mapping. You know you"}, {"start": 227.58, "end": 231.66, "text": " have to know nothing about how the world works. It'll simply tell you at the"}, {"start": 231.66, "end": 237.5, "text": " end if you're in this state do that action and the reward will be high. In contrast"}, {"start": 237.5, "end": 244.01999999999998, "text": " there is the other world. This is the model based reinforcement learning. So in"}, {"start": 244.01999999999998, "end": 248.34, "text": " model based reinforcement learning what you have is a model of the world and"}, {"start": 248.34, "end": 253.82, "text": " the model of the world. Let's is best described for example if you play chess"}, {"start": 253.82, "end": 259.3, "text": " right. So if you play chess and this is a let's do a simplified chess board here"}, {"start": 259.3, "end": 267.46000000000004, "text": " four by four and you have a pawn right here right here. You have a pawn and you"}, {"start": 267.46000000000004, "end": 273.34000000000003, "text": " know if I do the action of moving the pawn forward I know the pawn will then be"}, {"start": 273.34000000000003, "end": 278.5, "text": " in this square right here right in the next time step. I know that because I"}, {"start": 278.5, "end": 282.98, "text": " have a model of the world and know how the world works and I can predict"}, {"start": 282.98, "end": 287.78000000000003, "text": " basically the results of my actions. 
So if you have a model based reinforcement"}, {"start": 287.78, "end": 291.97999999999996, "text": " learning setup if you know how the world works you can do something like a"}, {"start": 291.97999999999996, "end": 297.94, "text": " search. So given you're here in the state right you know if I do action one I"}, {"start": 297.94, "end": 302.65999999999997, "text": " go to this state if I do action two I go to that state and if I do action three I"}, {"start": 302.65999999999997, "end": 306.7, "text": " go to this other state and from each of the states you can then say ah but again"}, {"start": 306.7, "end": 312.65999999999997, "text": " I have three actions and I can you know go into these three states go into these"}, {"start": 312.65999999999997, "end": 317.61999999999995, "text": " maybe here two and maybe here I can go into these actually let's do three as"}, {"start": 317.62, "end": 323.5, "text": " well right and then the question more becomes can you find a path through this"}, {"start": 323.5, "end": 330.02, "text": " thing such that at the end you are in the state that you want to end up right."}, {"start": 330.02, "end": 336.46, "text": " So for example here is outside and then here you can go to the tree to the house"}, {"start": 336.46, "end": 344.18, "text": " or to the field and in the house you can go to the bedroom the bathroom you know"}, {"start": 344.18, "end": 349.22, "text": " the kitchen and you know all of this you have a model so you can actually kind"}, {"start": 349.22, "end": 353.22, "text": " of compute what would happen if I do something and then search for the best path"}, {"start": 353.22, "end": 358.34000000000003, "text": " whereas in the model free reinforcement learning approach what you simply do is"}, {"start": 358.34000000000003, "end": 365.14, "text": " you'd say here is a state and the state is for example I am in the house and now"}, {"start": 365.14, "end": 369.66, "text": " give me the action that would maximize my future reward and you're trying to"}, {"start": 369.66, "end": 375.38000000000005, "text": " learn this directly so it's a very different style of reinforcement learning"}, {"start": 375.38000000000005, "end": 379.70000000000005, "text": " basically one is a one is a pure machine learning approach and the other one"}, {"start": 379.70000000000005, "end": 383.94000000000005, "text": " is a search problem now you can of course mix and match the two like for"}, {"start": 383.94000000000005, "end": 387.70000000000005, "text": " example people in AlphaGo have done they have a model based reinforcement"}, {"start": 387.70000000000005, "end": 394.02000000000004, "text": " learning that also has kind of a learning machine learning elements but in"}, {"start": 394.02, "end": 400.26, "text": " between now we have the successor features so the successor representations they"}, {"start": 400.26, "end": 406.09999999999997, "text": " are if you will they are somewhere in between the two so they kind of trade"}, {"start": 406.09999999999997, "end": 413.02, "text": " off the advantages of model free where you only have to learn a function right"}, {"start": 413.02, "end": 418.7, "text": " from state to something with the advantages of model base the fact that you"}, {"start": 418.7, "end": 423.02, "text": " actually have a bit of an idea of how the world works and can adjust quickly to"}, {"start": 423.02, "end": 429.46, "text": " let's say different reward structures or things like this so what do"}, {"start": 429.46, "end": 434.09999999999997, 
"text": " successor representations do successor representations basically learn how"}, {"start": 434.09999999999997, "end": 440.41999999999996, "text": " states are connected and this is a classic successor representation so the"}, {"start": 440.41999999999996, "end": 445.94, "text": " successor representation M here of policy pi the policy remember is what"}, {"start": 445.94, "end": 452.65999999999997, "text": " tells you which action you should take in a given state you you define it as a"}, {"start": 452.66, "end": 462.06, "text": " connection between state i and state j and m of s i s j means given that i am in"}, {"start": 462.06, "end": 468.70000000000005, "text": " s i so this could be the kitchen and your goal is to find in the bedroom and if"}, {"start": 468.70000000000005, "end": 476.54, "text": " this is the kitchen given that i am in state s i what's the probability that in"}, {"start": 476.54, "end": 484.06, "text": " the future at some point i will transition to s i right given that i'm in the"}, {"start": 484.06, "end": 490.1, "text": " kitchen what's the probability that i'll end up in the bedroom at some point in"}, {"start": 490.1, "end": 495.46000000000004, "text": " the future and this is formally expressed this is the expectation over your"}, {"start": 495.46000000000004, "end": 504.1, "text": " policy and it's the it's the indicator function that the future state sorry"}, {"start": 504.1, "end": 509.5, "text": " this is the future state t plus k you see k goes from zero to infinity so for"}, {"start": 509.5, "end": 515.5, "text": " all of the future and s t is the one you're in now so for any future state this"}, {"start": 515.5, "end": 520.86, "text": " is equal to s j now of course this makes no sense unless you kind of discount"}, {"start": 520.86, "end": 525.66, "text": " I have a discount factor here so if you're in state if you're in the"}, {"start": 525.66, "end": 529.6600000000001, "text": " bedroom further in the future then this value would be lower so that this"}, {"start": 529.66, "end": 535.3, "text": " value is high if you will transition from s i to s j with high probability in"}, {"start": 535.3, "end": 540.06, "text": " the near future and this is a successor representation right it basically"}, {"start": 540.06, "end": 545.5799999999999, "text": " tells you if you want to go from state s i to state s j how likely is that in"}, {"start": 545.5799999999999, "end": 554.06, "text": " the near future right so if the if this number is high you know that that these"}, {"start": 554.06, "end": 559.6199999999999, "text": " two states are closely connected that you can expect to end up in state s j"}, {"start": 559.6199999999999, "end": 565.9799999999999, "text": " somewhere down the line if you're in s i now and one more representation if you"}, {"start": 565.9799999999999, "end": 574.3399999999999, "text": " consider the vector m pi of s i given all of the s j some dot here so this is a"}, {"start": 574.3399999999999, "end": 581.78, "text": " vector you can actually compare two states s i so if one is if you plug in here"}, {"start": 581.78, "end": 591.22, "text": " you plug in the kitchen and then also you plug in the I don't know the garage if"}, {"start": 591.22, "end": 595.62, "text": " they and you will get out two vectors right you get two vectors if those vectors"}, {"start": 595.62, "end": 600.02, "text": " are very similar then you know that if if you're in the kitchen or in the"}, {"start": 600.02, "end": 604.5799999999999, "text": " garage it 
doesn't matter you're gonna end up you have a similar future"}, {"start": 604.5799999999999, "end": 609.86, "text": " trajectory basically however if those two vectors are far apart you know that"}, {"start": 609.86, "end": 614.54, "text": " these two states are far apart with respect to your policy so this is pretty"}, {"start": 614.54, "end": 619.22, "text": " cool things you can do with successor representations and I hope this gives"}, {"start": 619.22, "end": 627.7, "text": " you kind of some insight so another neat trick is that if you have a value"}, {"start": 627.7, "end": 632.54, "text": " function so and the value function in this case there's a simplified assumption"}, {"start": 632.54, "end": 636.3000000000001, "text": " but you don't actually need it the simplified assumption is that the reward"}, {"start": 636.3, "end": 640.6999999999999, "text": " only depends on the state you're in basically doesn't matter how you get to"}, {"start": 640.6999999999999, "end": 644.42, "text": " the state like the actions you perform if you're in a given state so if you're"}, {"start": 644.42, "end": 648.06, "text": " in a given room in the house you'll get some reward like for example if you"}, {"start": 648.06, "end": 653.66, "text": " find the bedroom then you win that's a reward that would only be characterized"}, {"start": 653.66, "end": 660.8199999999999, "text": " by the state if that's the case you can compute the value function of of the"}, {"start": 660.82, "end": 667.1, "text": " reinforcement learning problem simply by integrating over the over the success"}, {"start": 667.1, "end": 672.1400000000001, "text": " representations so for each state you simply go over all of the possible"}, {"start": 672.1400000000001, "end": 677.0200000000001, "text": " other states and you ask how likely am I to go to that state and what reward will"}, {"start": 677.0200000000001, "end": 682.9000000000001, "text": " I have in that state and that's your value function so pretty pretty simple you"}, {"start": 682.9000000000001, "end": 688.98, "text": " can actually learn the successor representations by TD learning by temporal"}, {"start": 688.98, "end": 693.1800000000001, "text": " difference learning which is a method that's applied throughout like throughout"}, {"start": 693.1800000000001, "end": 700.02, "text": " reinforcement learning especially in thing places like Q learning and yeah and"}, {"start": 700.02, "end": 708.1, "text": " also for learning value functions so pretty neat successor representations"}, {"start": 708.1, "end": 714.94, "text": " this paper then goes from successor representations of individual state to"}, {"start": 714.94, "end": 721.0200000000001, "text": " successor representations over continuous space so right now we have these"}, {"start": 721.0200000000001, "end": 726.98, "text": " states state kitchen you go to the bedroom you go to somewhere right and these"}, {"start": 726.98, "end": 732.2600000000001, "text": " states were kind of discrete places so there was a house and you have different"}, {"start": 732.2600000000001, "end": 737.58, "text": " you know different rooms in the house and you can go between them now we're"}, {"start": 737.58, "end": 743.5400000000001, "text": " dealing more with continuous states so you can generalize these"}, {"start": 743.54, "end": 747.66, "text": " successor representations to continue state by considering not the states"}, {"start": 747.66, "end": 754.6999999999999, "text": " themselves but features of the of the state and a 
feature and this here you have"}, {"start": 754.6999999999999, "end": 761.2199999999999, "text": " to kind of imagine as binary features and the features let me give like some"}, {"start": 761.2199999999999, "end": 765.8199999999999, "text": " really dumb examples but maybe it helps you like one feature could be the"}, {"start": 765.8199999999999, "end": 770.38, "text": " smell does it smell in the room like just binary does it smell or doesn't it"}, {"start": 770.38, "end": 777.54, "text": " smell right and then one feature could there be is there sunlight and then one"}, {"start": 777.54, "end": 791.58, "text": " feature could be is it warm right with and these are all binary features and so"}, {"start": 791.58, "end": 797.78, "text": " you kind of you have to build the features such that if the features are the"}, {"start": 797.78, "end": 804.6999999999999, "text": " same then the states should be fairly you know close in whatever sense so for"}, {"start": 804.6999999999999, "end": 809.98, "text": " example if it smells but there is no sunlight you're probably somewhere in the"}, {"start": 809.98, "end": 814.9399999999999, "text": " bathroom like where exactly in xy coordinates you are in the bathroom it doesn't"}, {"start": 814.9399999999999, "end": 820.78, "text": " really matter to this as long as like the features are high and so if it smells"}, {"start": 820.78, "end": 826.02, "text": " and there is no sunlight you're probably somewhere in the bathroom and that makes"}, {"start": 826.02, "end": 831.54, "text": " all the states in the bathroom all the coordinates close together so this is"}, {"start": 831.54, "end": 835.14, "text": " how you have to imagine these features you can define your success or"}, {"start": 835.14, "end": 839.5, "text": " representations exactly the same over these features except that the"}, {"start": 839.5, "end": 845.74, "text": " representation is now not from state i to state j but from a state to a given"}, {"start": 845.74, "end": 852.5, "text": " feature so that means if I am in state st at the current time what is the"}, {"start": 852.5, "end": 859.42, "text": " probability that in the near future this feature will be high right so if I am"}, {"start": 859.42, "end": 866.9, "text": " right now in the or close to the bathroom let's say the the probability that"}, {"start": 866.9, "end": 873.42, "text": " smell oh sorry this should be a highlight the probability that smell is high in"}, {"start": 873.42, "end": 879.1, "text": " the future is very high right so this this number would be high so exactly the"}, {"start": 879.1, "end": 884.58, "text": " same except for these continuous features now and you can do the same thing"}, {"start": 884.58, "end": 892.82, "text": " including defining the value function as a simple linear multiplication with"}, {"start": 892.82, "end": 897.82, "text": " these features that is an assumption under the assumption that the reward is a"}, {"start": 897.82, "end": 902.86, "text": " linear function of the features of the states which is the analogous assumption"}, {"start": 902.86, "end": 907.58, "text": " to saying that the reward only depends on the state in the linear case or"}, {"start": 907.58, "end": 915.14, "text": " somewhat of an analogous function not entirely all right so you can also learn"}, {"start": 915.14, "end": 918.98, "text": " this by temporal difference learning exactly the same so this is pretty cool"}, {"start": 918.98, "end": 925.7800000000001, "text": " these are the success or representations and 
you can actually you know if you"}, {"start": 925.7800000000001, "end": 931.0200000000001, "text": " learn them you have kind of a model of how the world works not as much a"}, {"start": 931.0200000000001, "end": 936.9000000000001, "text": " model as the model based reinforcement learning where you know exactly how it"}, {"start": 936.9, "end": 942.3, "text": " works right here you know exactly how the world works you have this model in"}, {"start": 942.3, "end": 946.3, "text": " model three you don't know how the world works at all you simply know if I'm in"}, {"start": 946.3, "end": 951.1, "text": " this state and do this action that that'll turn out really well but in the"}, {"start": 951.1, "end": 958.4599999999999, "text": " successor representation framework you have you have an idea of what states"}, {"start": 958.4599999999999, "end": 963.38, "text": " there are we'll do the discrete case right now so this could be kitchen this"}, {"start": 963.38, "end": 970.82, "text": " could be outdoor this could be bedroom and so you you you have an idea what"}, {"start": 970.82, "end": 975.98, "text": " states there are and so on and how they connect to each other like you say"}, {"start": 975.98, "end": 981.3, "text": " ah from the kitchen I can easily go to the bedroom but I cannot as well go to"}, {"start": 981.3, "end": 988.42, "text": " maybe the bathroom from outdoor I can easily go to the kitchen but I can't go"}, {"start": 988.42, "end": 993.62, "text": " to the bedroom and so on so you have kind of an idea of how all of these states"}, {"start": 993.62, "end": 998.18, "text": " connect to each other and that that is the successor representation you can"}, {"start": 998.18, "end": 1003.62, "text": " already see how that helps learning agent a lot if you introduce the"}, {"start": 1003.62, "end": 1009.18, "text": " successor if you have the successor representation now what this this paper"}, {"start": 1009.18, "end": 1014.3, "text": " deals with in essence is it says okay the successor representations are cool"}, {"start": 1014.3, "end": 1019.9799999999999, "text": " but it has only so far been done in a case where you have full observability"}, {"start": 1019.9799999999999, "end": 1025.74, "text": " and the full observability is the case where you kind of know what state you're"}, {"start": 1025.74, "end": 1032.8999999999999, "text": " in right you kind of know that um sorry you are in the kitchen you are outdoors"}, {"start": 1032.8999999999999, "end": 1039.1399999999999, "text": " you are in the bedroom that is not known but what if you don't and I mean most"}, {"start": 1039.1399999999999, "end": 1043.82, "text": " problems you don't what if you just have a picture like here right you just see"}, {"start": 1043.82, "end": 1048.78, "text": " a tree in the house right you don't you kind of have to infer that you are"}, {"start": 1048.78, "end": 1052.7, "text": " outdoors right and if you're here you just get this picture of a couple of"}, {"start": 1052.7, "end": 1058.54, "text": " doors and the table and you have to infer that you are now in the living room"}, {"start": 1058.54, "end": 1065.6599999999999, "text": " right so in essence there is an additional layer of complexity not only do you"}, {"start": 1065.66, "end": 1076.02, "text": " go from state from state to state to state but you don't actually observe the"}, {"start": 1076.02, "end": 1081.6200000000001, "text": " states what you observe is from each state you observe what are called"}, {"start": 1081.6200000000001, 
"end": 1090.3400000000001, "text": " observations right so you only observe these and you have to infer what the"}, {"start": 1090.34, "end": 1095.86, "text": " kind of have to guess what the underlying states are in order to know what you"}, {"start": 1095.86, "end": 1101.3, "text": " should do to get to the next state right you only ever observe the observations"}, {"start": 1101.3, "end": 1108.1, "text": " so this here is the actual thing this is kitchen and this here could be a"}, {"start": 1108.1, "end": 1115.62, "text": " picture of the kitchen right there's a counter there's a stove yeah and so you"}, {"start": 1115.62, "end": 1121.3, "text": " you get kind of what I what I mean in their example they have they simplify"}, {"start": 1121.3, "end": 1128.9399999999998, "text": " this to kind of a toy data setup where you have this environment and this is"}, {"start": 1128.9399999999998, "end": 1136.3799999999999, "text": " one beautiful picture I don't why oh well just you have one this setup and it"}, {"start": 1136.3799999999999, "end": 1143.86, "text": " is this box basically this box and it has this wall right and then you have an"}, {"start": 1143.86, "end": 1149.8999999999999, "text": " agent that is able to walk around in here like with whatever policy the"}, {"start": 1149.8999999999999, "end": 1154.62, "text": " policy determines how it walks around but then what you observe is not the"}, {"start": 1154.62, "end": 1158.62, "text": " actual position but what you observe is for example for this position you"}, {"start": 1158.62, "end": 1163.2199999999998, "text": " observe a random point here so they basically add noise to each"}, {"start": 1163.2199999999998, "end": 1168.1399999999999, "text": " observer to each state and if you're in this state you will observe one of these"}, {"start": 1168.14, "end": 1174.5400000000002, "text": " points in this circle right so your your trajectory might look to you as you"}, {"start": 1174.5400000000002, "end": 1180.18, "text": " observe it much more much like go for example from here to here to here to"}, {"start": 1180.18, "end": 1186.5, "text": " here and you kind of have to guess what the underlying state is and you see"}, {"start": 1186.5, "end": 1193.14, "text": " this here this this blue thing is what the agent actually does but the gray"}, {"start": 1193.14, "end": 1198.38, "text": " thing is what it observes and the observations are sometimes even outside of"}, {"start": 1198.38, "end": 1205.5, "text": " this of this boundary and this this orange thing is now the infertile and"}, {"start": 1205.5, "end": 1212.94, "text": " that's what we actually want is to go from the observed to this inferred and"}, {"start": 1212.94, "end": 1218.66, "text": " we want to that inferred is as close as possible to this true latent state"}, {"start": 1218.66, "end": 1225.46, "text": " right so the way they do it is they introduce this distributional distributed"}, {"start": 1225.46, "end": 1240.94, "text": " coding for the for the features basically what they say is they say we will"}, {"start": 1240.94, "end": 1249.22, "text": " build a framework where we can where we represent the features as as as"}, {"start": 1249.22, "end": 1258.94, "text": " expectations over some distribution and the expectation will call mu and mu is"}, {"start": 1258.94, "end": 1265.06, "text": " simply the kind of mean of this of this feature under this distribution this"}, {"start": 1265.06, "end": 1276.94, "text": " is very general so let's look at what at how to plug this 
in so what they now"}, {"start": 1276.94, "end": 1281.98, "text": " have to do is they have to learn these two things right they have to first of"}, {"start": 1281.98, "end": 1289.82, "text": " all if I draw this picture again these are the underlying states and they kind"}, {"start": 1289.82, "end": 1294.9399999999998, "text": " of transition into each other so this is state one state two state three and with"}, {"start": 1294.9399999999998, "end": 1299.8999999999999, "text": " action one action two we transition from state to state but also there are"}, {"start": 1299.8999999999999, "end": 1308.4199999999998, "text": " these observations observation one observation two observation three so the agent"}, {"start": 1308.4199999999998, "end": 1314.98, "text": " needs to learn two different things first of all it needs to learn given an"}, {"start": 1314.98, "end": 1321.22, "text": " observation what state am I probably in right this is the first thing it needs"}, {"start": 1321.22, "end": 1325.94, "text": " to learn and then the second thing it needs to learn is given this state and"}, {"start": 1325.94, "end": 1334.66, "text": " this action what's the next state that I will go to right and this is and this"}, {"start": 1334.66, "end": 1339.82, "text": " this of course these things down here they're not observed so these things down"}, {"start": 1339.82, "end": 1345.1799999999998, "text": " here you can only do in distribution so I'm going to represent this with it P"}, {"start": 1345.1799999999998, "end": 1350.02, "text": " here you can only kind of do this in distribution and the way they handle it"}, {"start": 1350.02, "end": 1360.1799999999998, "text": " is they always maintain the expected value of these things and that's they do"}, {"start": 1360.1799999999998, "end": 1365.4199999999998, "text": " this in this wake sleep algorithm all right so this is me re-recording this part"}, {"start": 1365.42, "end": 1371.5800000000002, "text": " because I have done a terrible job at the first time so I want to understand this"}, {"start": 1371.5800000000002, "end": 1378.26, "text": " wake sleep algorithm to compute the things that we don't know let me draw this"}, {"start": 1378.26, "end": 1391.3000000000002, "text": " actually again so the way this algorithm does it is actually pretty cool it has"}, {"start": 1391.3, "end": 1397.4199999999998, "text": " two phases sleep phase and awake phase and it alternates between the two"}, {"start": 1397.4199999999998, "end": 1402.06, "text": " constantly it's kind of like expectation maximization and ultimately what you"}, {"start": 1402.06, "end": 1409.46, "text": " want to learn are two different sets of parameters w and t now you whenever you"}, {"start": 1409.46, "end": 1415.46, "text": " learn t you use w the one that you've already learned and whenever you learn"}, {"start": 1415.46, "end": 1420.3799999999999, "text": " w you use the t that you've already learned so it's kind of a bootstrapping"}, {"start": 1420.38, "end": 1430.3000000000002, "text": " each other up the two functions you learn here are this f w and the t here so"}, {"start": 1430.3000000000002, "end": 1438.94, "text": " t is just a matrix and f of w is a function the function has weights the weights"}, {"start": 1438.94, "end": 1444.7, "text": " w so you see in the sleep phase you update w and in the wake phase you update"}, {"start": 1444.7, "end": 1450.42, "text": " team now why is this called wake and sleep it's because in the wake phase you"}, {"start": 1450.42, "end": 1456.02, 
"text": " you're actually so called awake and you use real observations so in the wake"}, {"start": 1456.02, "end": 1460.5, "text": " phase and I find it easier to start actually at the wake phase in the wake"}, {"start": 1460.5, "end": 1465.94, "text": " phase you collect observations so you let your agent go around your it's"}, {"start": 1465.94, "end": 1469.94, "text": " environment and collect a bunch of observations you you don't know what the"}, {"start": 1469.94, "end": 1475.46, "text": " states are what you do is simply you collect these observations now it's not"}, {"start": 1475.46, "end": 1480.14, "text": " that important what the policy is here so you you basically follow some"}, {"start": 1480.14, "end": 1488.66, "text": " policy and you collect these observations right and then what you what you say"}, {"start": 1488.66, "end": 1494.74, "text": " is okay I have the function f of w and remember since we're in the wake"}, {"start": 1494.74, "end": 1501.14, "text": " phase we're learning t so we assume we already have the w in in essence in"}, {"start": 1501.14, "end": 1505.58, "text": " practice we start out with a random one and right and then kind of alternate"}, {"start": 1505.58, "end": 1511.6200000000001, "text": " between the two phases until both get really good but so we already have a"}, {"start": 1511.6200000000001, "end": 1517.3, "text": " w and we use it to update team how do we do this this is we need to understand"}, {"start": 1517.3, "end": 1524.6200000000001, "text": " what this function f of w does f of w takes this mu and the current observation"}, {"start": 1524.62, "end": 1536.78, "text": " and produces a new mu so what is a mu this this mu here this mu here as we saw"}, {"start": 1536.78, "end": 1544.4199999999998, "text": " above here the mu is the expectation over the features and in essence the"}, {"start": 1544.4199999999998, "end": 1551.86, "text": " mu is a guess the mu is your best guess of what the features of the state are"}, {"start": 1551.86, "end": 1558.02, "text": " or in the in the discrete case you could also say a guess of what the state is"}, {"start": 1558.02, "end": 1565.3799999999999, "text": " right so you don't know the state right but you what you want to maintain is a"}, {"start": 1565.3799999999999, "end": 1570.1, "text": " distribution over states so you want to kind of maintain this distribution but"}, {"start": 1570.1, "end": 1575.26, "text": " you can't you know calculate you can't properly efficiently calculate with an"}, {"start": 1575.26, "end": 1580.26, "text": " entire distribution unless you assume it's some sort of Gaussian or so but what"}, {"start": 1580.26, "end": 1588.3, "text": " you can do is you can simply take its mean mu right and and that's your best"}, {"start": 1588.3, "end": 1594.5, "text": " guess for what the state is the state could be anywhere anywhere here right"}, {"start": 1594.5, "end": 1599.86, "text": " according to this distribution but you simply come up with mu which is your best"}, {"start": 1599.86, "end": 1611.4599999999998, "text": " guess so the function the function f of w it takes in the best guess of where you"}, {"start": 1611.4599999999998, "end": 1617.78, "text": " were up until the last step and it also takes as an argument your current"}, {"start": 1617.78, "end": 1625.74, "text": " observation and it gives you the output of f is mu t right is the best guess of"}, {"start": 1625.74, "end": 1630.22, "text": " where you are now yeah it's pretty so pretty straightforward if you think"}, 
{"start": 1630.22, "end": 1638.74, "text": " about it so for every for every observation you want to have kind of a guess of"}, {"start": 1638.74, "end": 1645.1, "text": " where the yours what your state is and that's mu right so what f does is it"}, {"start": 1645.1, "end": 1651.34, "text": " takes whatever observations you had these observations gave rise to a mu that"}, {"start": 1651.34, "end": 1656.86, "text": " guess where you are you take this mu and you take this observation and from that"}, {"start": 1656.86, "end": 1662.86, "text": " you derive the next guess of where you are yeah you just say I guessed I was in"}, {"start": 1662.86, "end": 1669.58, "text": " the kitchen before now I moved I observed that I move through some sort of"}, {"start": 1669.58, "end": 1674.9399999999998, "text": " door and there's some sort of table so given that I get I thought I was in the"}, {"start": 1674.9399999999998, "end": 1679.4199999999998, "text": " kitchen and that I observed this thing now I'm probably in the living room"}, {"start": 1679.42, "end": 1689.02, "text": " all right that's what f w is so you input the observations that you had and you"}, {"start": 1689.02, "end": 1693.8200000000002, "text": " input your current observation right to get the guess of where you're next and"}, {"start": 1693.8200000000002, "end": 1699.3400000000001, "text": " these are real observations right and then you simply update t what does t do"}, {"start": 1699.3400000000001, "end": 1706.7, "text": " t relates your current and your next guess and that's important we already"}, {"start": 1706.7, "end": 1714.46, "text": " said that if takes your your kind of last guess and gives you the next guess"}, {"start": 1714.46, "end": 1720.7, "text": " t does kind of the same thing but t does it without having without relying on"}, {"start": 1720.7, "end": 1727.18, "text": " an additional observation t simply says well if I am here or if my guess is that I"}, {"start": 1727.18, "end": 1732.94, "text": " am in the kitchen then what's the probability that in the next step I'll be in"}, {"start": 1732.94, "end": 1738.78, "text": " the living room without observing anything right it's t is simply relating"}, {"start": 1738.78, "end": 1745.74, "text": " states to each other or relating guesses of states to each other right so it's"}, {"start": 1745.74, "end": 1752.22, "text": " simply it's simply saying well under the current policy that I am what is the"}, {"start": 1752.22, "end": 1757.66, "text": " kind of distribution of going from one room to the next room"}, {"start": 1757.66, "end": 1763.02, "text": " right so you learn in the wake face you learn the t the t simply represents how"}, {"start": 1763.02, "end": 1768.3000000000002, "text": " you move from state to state so it's exactly basically this function here except"}, {"start": 1768.3000000000002, "end": 1773.9, "text": " that it's not from state to state but it relates your guess about your guess"}, {"start": 1773.9, "end": 1780.94, "text": " your mu of the state one to the mu of the state two"}, {"start": 1780.94, "end": 1789.5800000000002, "text": " right and then in the sleep face so in the sleep face you you now assume that"}, {"start": 1789.5800000000002, "end": 1793.66, "text": " you have a good estimate of how the states relate to each other and what you"}, {"start": 1793.66, "end": 1798.8600000000001, "text": " can then do is you can actually sample trajectories and this is why it's called"}, {"start": 1798.8600000000001, "end": 1804.46, "text": " 
sleeping it's kind of like dreaming so given that you have a model t of how"}, {"start": 1804.46, "end": 1809.18, "text": " states transition to each other or your your guesses about states more"}, {"start": 1809.18, "end": 1815.8200000000002, "text": " precisely you can have sample state trajectories so you can dream up how you"}, {"start": 1815.8200000000002, "end": 1821.74, "text": " would move in an environment right and the assumption here is that you know"}, {"start": 1821.74, "end": 1827.18, "text": " the process that that if you have a state that gives you an observation for"}, {"start": 1827.18, "end": 1832.14, "text": " example in their experiments is always the state x x y coordinates and that's"}, {"start": 1832.14, "end": 1837.26, "text": " corrupted by Gaussian noise there is also ways to learn this"}, {"start": 1837.26, "end": 1841.26, "text": " transition this what's called the this is what's called the observation"}, {"start": 1841.26, "end": 1848.46, "text": " process right but you assume you know it so you can sample trajectories of"}, {"start": 1848.46, "end": 1854.46, "text": " states and corresponding observations right now this is not the real world but"}, {"start": 1854.46, "end": 1860.3799999999999, "text": " this is using this t down here you kind of know"}, {"start": 1860.3799999999999, "end": 1865.02, "text": " how or you kind of have some sort of model can you learn a model of how you move"}, {"start": 1865.02, "end": 1869.9, "text": " about the world so you sample these trajectories and from these trajectories"}, {"start": 1869.9, "end": 1876.06, "text": " you can now learn the f of w function so you see since you know what the state"}, {"start": 1876.06, "end": 1880.54, "text": " is right you can compute these features exactly"}, {"start": 1880.54, "end": 1886.7, "text": " and then you can learn this f of w function that gives you"}, {"start": 1886.7, "end": 1892.62, "text": " a takes in the guess of the last state and the current observation"}, {"start": 1892.62, "end": 1897.1, "text": " and gives you the next the guess of the next state"}, {"start": 1897.1, "end": 1903.9799999999998, "text": " and that you can then use temporal difference learning this is always here"}, {"start": 1903.9799999999998, "end": 1908.4599999999998, "text": " also with the t here we have temporal difference kind of a temporal"}, {"start": 1908.4599999999998, "end": 1915.5, "text": " difference learning to learn the parameters w"}, {"start": 1915.5, "end": 1923.02, "text": " so it's it's very kind of convoluted but ultimately it's a simple process"}, {"start": 1923.02, "end": 1928.62, "text": " in the wake phase you go into the world and actually collect real observations"}, {"start": 1928.62, "end": 1936.06, "text": " right and you have a method of deriving from these observations"}, {"start": 1936.06, "end": 1943.02, "text": " deriving the guesses about the states and so what you can do is you can learn"}, {"start": 1943.02, "end": 1947.66, "text": " a transition between the states right if you have a good guess of what the"}, {"start": 1947.66, "end": 1951.42, "text": " states are given each observation you can learn"}, {"start": 1951.42, "end": 1955.66, "text": " how to transition from one state to the next state except you don't do it in"}, {"start": 1955.66, "end": 1959.9, "text": " actual states you do it in guesses about states"}, {"start": 1959.9, "end": 1964.78, "text": " then once you have a model of how you move from one state to the next state"}, {"start": 1964.78, 
"end": 1971.18, "text": " you can go and dream up such state trajectories right you can dream"}, {"start": 1971.18, "end": 1975.1000000000001, "text": " a state trajectories and therefore also you can dream how you would observe"}, {"start": 1975.1000000000001, "end": 1980.78, "text": " them and given that you can learn then a better function that relates your"}, {"start": 1980.78, "end": 1985.9, "text": " guess your guess about a state given the observation"}, {"start": 1985.9, "end": 1991.26, "text": " to the actual features of the state since for this particular thing you know"}, {"start": 1991.26, "end": 1999.1000000000001, "text": " what the state is right so this this is this two step process"}, {"start": 1999.1, "end": 2004.78, "text": " notice the cool thing we've never actually had to learn this mu explicitly we"}, {"start": 2004.78, "end": 2011.34, "text": " never had to learn how to go from observations to your guesses about states"}, {"start": 2011.34, "end": 2018.1399999999999, "text": " because we can compute this recursively right so you simply start out with mu"}, {"start": 2018.1399999999999, "end": 2021.1, "text": " zero which is a guess about the initial state"}, {"start": 2021.1, "end": 2026.9399999999998, "text": " and then you go to mu one and mu two and you never actually have to learn"}, {"start": 2026.94, "end": 2031.5800000000002, "text": " that function all right all right so that's how they"}, {"start": 2031.5800000000002, "end": 2035.9, "text": " that's how they kind of learn these these success representations and"}, {"start": 2035.9, "end": 2041.8200000000002, "text": " experiments of this are fairly cool here is another diagram of how that"}, {"start": 2041.8200000000002, "end": 2045.26, "text": " looks like you have a state this gives you an observation and from that you"}, {"start": 2045.26, "end": 2051.5, "text": " derive a guess of what this state is right so you can now look at what the"}, {"start": 2051.5, "end": 2056.14, "text": " agent learned the agent actually learns dynamics of this room it means if"}, {"start": 2056.14, "end": 2062.46, "text": " you're here you probably go somewhere right there's no clear direction but if"}, {"start": 2062.46, "end": 2066.94, "text": " you're close to the wall your next states are probably going to be"}, {"start": 2066.94, "end": 2073.5, "text": " inwards of this wall right and yeah I've already shown you this picture"}, {"start": 2073.5, "end": 2078.54, "text": " so they have a last cool experiment here"}, {"start": 2078.54, "end": 2082.8599999999997, "text": " where what they do is they specify a reward"}, {"start": 2082.86, "end": 2087.9, "text": " and the reward is down here and from each state you want to know"}, {"start": 2087.9, "end": 2092.1400000000003, "text": " which way do I have to go to right get the reward"}, {"start": 2092.1400000000003, "end": 2097.34, "text": " now if they give the agent the value of the latent"}, {"start": 2097.34, "end": 2101.42, "text": " state and the latent state here are just your xy coordinates if they give this"}, {"start": 2101.42, "end": 2105.5, "text": " to the agent and they let it run they let it learn the structure of the world"}, {"start": 2105.5, "end": 2109.58, "text": " it will correctly conclude a holoc these are the kind of here is high value"}, {"start": 2109.58, "end": 2113.8199999999997, "text": " states lower lower lower lower lower lower value states right up until over"}, {"start": 2113.8199999999997, "end": 2116.94, "text": " here are the most low 
value states because you"}, {"start": 2116.94, "end": 2120.62, "text": " travel the longest to go to the reward"}, {"start": 2122.46, "end": 2126.2999999999997, "text": " if you just give it the observation the noisy observation"}, {"start": 2126.2999999999997, "end": 2130.06, "text": " it will actually assign high value to states here"}, {"start": 2130.06, "end": 2135.74, "text": " because of course it it doesn't infer the latent state it simply takes the"}, {"start": 2135.74, "end": 2141.2599999999998, "text": " observation as face value says well I was here and I reached here pretty quickly"}, {"start": 2141.2599999999998, "end": 2146.62, "text": " so it must be a good state but in fact it wasn't here it was here and the added"}, {"start": 2146.62, "end": 2151.8199999999997, "text": " noise would just corrupt the observation so you see it learns kind of a wrong"}, {"start": 2151.8199999999997, "end": 2156.8599999999997, "text": " model of the world whereas if you use this ddc"}, {"start": 2156.8599999999997, "end": 2162.54, "text": " you see who sorry about that if you use this ddc you see you are much closer"}, {"start": 2162.54, "end": 2168.14, "text": " to the true state of the world like to the to the one on the left here"}, {"start": 2168.14, "end": 2173.02, "text": " where you so on the left here you actually kind of cheat right you give it the"}, {"start": 2173.02, "end": 2177.58, "text": " actual state but on here you give it the observation but tell it it's actually"}, {"start": 2177.58, "end": 2180.3, "text": " a noisy observation and you use what this paper"}, {"start": 2180.3, "end": 2184.3, "text": " proposed and again it will learn to assign a low value to these states"}, {"start": 2184.3, "end": 2188.7, "text": " because it needs to go all the way around even though it has seen"}, {"start": 2188.7, "end": 2192.9399999999996, "text": " supposedly seen the agent go from here to here directly"}, {"start": 2192.9399999999996, "end": 2197.8999999999996, "text": " but it kind of understands that it's just a noisy observation"}, {"start": 2197.8999999999996, "end": 2203.4199999999996, "text": " all right so this was this from this paper it's a very very cool approach I think"}, {"start": 2203.4199999999996, "end": 2206.7799999999997, "text": " to reinforcement learning and there's some more experiments where you can see"}, {"start": 2206.7799999999997, "end": 2212.3799999999997, "text": " that this ddc actually helps and I'm excited about successor representations"}, {"start": 2212.3799999999997, "end": 2216.7799999999997, "text": " and how to incorporate them in reinforcement learning because it seems a"}, {"start": 2216.78, "end": 2220.86, "text": " perfect kind of middle ground between model based and model free"}, {"start": 2220.86, "end": 2250.78, "text": " rl right with that thanks for listening and bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=Xc9Rkbg6IZA | SinGAN: Learning a Generative Model from a Single Natural Image | With just a single image as input, this algorithm learns a generative model that matches the input image's patch distribution at multiple scales and resolutions. This enables sampling of extremely realistic-looking variations on the original image and much more.
Abstract:
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GANs, each responsible for learning the patch distribution at a different scale of the image. This allows generating new samples of arbitrary size and aspect ratio, that have significant variability, yet maintain both the global structure and the fine textures of the training image. In contrast to previous single image GAN schemes, our approach is not limited to texture images, and is not conditional (i.e. it generates samples from noise). User studies confirm that the generated samples are commonly confused to be real images. We illustrate the utility of SinGAN in a wide range of image manipulation tasks.
Authors: Tamar Rott Shaham, Tali Dekel, Tomer Michaeli
https://arxiv.org/abs/1905.01164
https://github.com/tamarott/SinGAN
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Hi there, today we'll look at SinGAN: Learning a Generative Model from a Single Natural Image, by Tamar Rott Shaham, Tali Dekel and Tomer Michaeli. So this paper, as it says, deals with learning a generative model from just one image. And this needs to be stressed, because most generative models, even if they produce single-image samples, are trained on a large image database beforehand to learn what an image is. But this algorithm really starts with a clean slate: it starts out with nothing, and then you give it this one single training image. And from that, it can then generate all of these things, without ever having seen any other images during training. The second row is simply a second example where you again start with a clean slate, input this image, and then produce these. And you can see there's quite a bit of variety in the samples you produce from this image. So basically, the task is: given just one image, learn something about its distribution, and this paper specifically deals with patch distributions at different scales. So it can learn about the distribution of this grass-to-sky border here, learn about the individual birds, and so on. And then at lower scales, learn how the border of this grass looks, right? So the generative model learns: ah, there's always kind of grass at the bottom, since at the largest scale there's just this one image. But then at lower scales, sometimes the border looks like a sharp corner, and sometimes the border is relatively flat like here. So it can vary up those things and it can make the border different. The same goes for the birds: it learns how the individual birds look and how they're distributed, and therefore it can change that. You see there's quite a bit of variety here. You can also change the aspect ratio, and you can actually do much weirder things with it. For example, here are some examples of applications. First there is paint-to-image. So these are different tasks here. The top row is always the training image; this is the single image you give the algorithm. Then you have a row of inputs, and then this is what the algorithm outputs. So in paint-to-image, you input the training image and you input a painted sketch, which you can do in MS Paint or something, of roughly the way you want the image to look. What you want the algorithm to do is take the style of this image and put it into the form of that image, and it produces this. Looks pretty good. In editing, you can tell the algorithm: all right, I want this tower to go lower down, I want this house to be wider. So you'll get an image like this, and you can see there are clear contours here and here that are not nice, and also the house is, you know, pixel-stretched and so on. So this generative algorithm can produce this image from it, which looks much better around the borders and kind of fills in missing windows, to match, of course, the patch statistics that it sees in this top image, right? You always have to remember that all this algorithm sees is the topmost image to learn from. Harmonization is a task where you have an input image and then you copy-paste some object into it, and what it does is adjust the patch statistics of that object to the surrounding image. And super resolution: finally, finally, we get what, in every single action movie, the NSA can do.
It's like: ah, here is the security camera footage. Zoom in. Enhance. Yeah, so I doubt that, you know, hidden number plates, pixelated number plates, all of a sudden become readable and identifiable, but still, this is very cool. And lastly, you can do animation from this, though as you can guess I can't really show that here, since this isn't a movie. All right, let's look at how they do all of this. All of this is the same model, which can be tasked to do these different things through various kinds of probing. At its essence, it's this multi-scale GAN. And the GAN is trained so that you have a series of generators and a series of discriminators, and you always train them one by one. So first you train the lowest resolution, then you keep it fixed and train the next resolution, and so on until you're at the highest resolution. So at the bottom layer, we simply feed in noise to the generator of a GAN, and the generator generates an image. All right. Now you take this image, and you take a downsampled version of your training image. Remember, you have just one training image. You take a downsampled version of that, you let the discriminator decide which one's real and which one's fake, and you train the generator to fool the discriminator as much as possible. Now if you were to do this with the entire image, of course the generator would simply learn to reproduce the original image. So that's no good. So what this paper does instead is that the discriminator doesn't work on the entire image but just on patches of the image. And that's so that it basically can't memorize the entire image. So the discriminator will pick these overlapping patches, you can imagine something like this, and it will try to decide for each one: is this patch real or is this patch fake? So the generator produces the entire image, but the discriminator can only see the image in patches, in overlapping patches. And that's what makes this paper work; otherwise it would just memorize the single training image, right, because you only have one training image, and you kind of need some variety. So this is at the lowest scale, right? Remember, you input the noise, and the lowest scale in this example is, say, 25 by 25 pixels. You scale down your original image, also to 25 by 25, and then you let the discriminator decide. So once you've trained this generator to make very good 25 by 25 pixel images that fool the discriminator in this patch-wise way, you keep it fixed. For the next stage, what you do is always go through this layer first. So forget this discriminator now, we've trained this stage, right? Keep this generator fixed. Input noise, take whatever the generator produces, then upscale it, for example multiply each side by two, to 50 by 50 pixels. Input this, together with some new noise, into the next-stage generator, and then, the same as before, this generator produces an image. You scale down your original image, now to 50 by 50 pixels, and you let the discriminator decide, again in patches. Now, since the discriminator's patches are always the same size, but we scale down the image less and less, the effective patch size of the discriminator relative to the image becomes much smaller. So now this discriminator only sees the image in patches like so, right?
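To make the patch idea concrete, here is a minimal PyTorch-style sketch of such a patch-level, fully convolutional discriminator; the depth and channel widths are illustrative assumptions and not the exact configuration from the paper. Because it is fully convolutional, its output is a grid of scores, one per overlapping receptive-field patch, instead of a single real/fake score for the whole image.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: outputs one real/fake score per
    overlapping patch (receptive field) instead of one score per image.
    Depth and widths here are assumptions for illustration."""
    def __init__(self, in_ch=3, width=32, num_layers=5):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(num_layers - 1):
            layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1),
                       nn.BatchNorm2d(width),
                       nn.LeakyReLU(0.2)]
            ch = width
        layers.append(nn.Conv2d(ch, 1, kernel_size=3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # Shape (B, 1, H, W): a map of per-patch scores; average it to get
        # a single critic value for training.
        return self.net(x)

# Example: score a 25x25 image at the coarsest scale.
# d = PatchDiscriminator()
# patch_scores = d(torch.randn(1, 3, 25, 25))
# critic_value = patch_scores.mean()
```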
And also the generated image that comes in here, it also sees in these patches; it tries to decide: are these patches from real or from fake images? So you can see that the lowest layer here is trained to capture the coarse-grained structure of the image, right? The discriminator will see very large patches, and so the generator must match the large-scale structure. These patches won't be very high resolution, because we downscaled the image, but they will stretch across a large part of the image. So the generator must match the coarse, low-resolution stuff in the image. But as you go up the layers, your discriminator sees less and less of the picture at once. And so this discriminator here in the topmost layer can only concentrate on very small patches, and therefore this generator only has to produce things that look real at a very, very small scale, right? So in essence, you have this series of generators, each of which is tasked with modeling details at a finer and finer scale, until you come to the final scale. And the input of each one is the output of the last one. So basically you take whatever the last one has produced, and the last one is really good at the coarse-grained things, and you add to it the details of this level, and this will in the end give you a very realistic image that, at every level of resolution, matches the patch statistics of this real image. So that's kind of the whole point of this thing: to have this series of generators one after the other, where each one adds its own details at its own scale. And this works super well, apparently. So each generator is just built like this: it takes some noise and the image from the lower scale, adds them, puts them through five convolutional layers, and then simply combines the result with the input, right? And this produces the image at this scale. So that's each layer, just five conv layers. And since they're fully convolutional, you can actually change the aspect ratio at inference time, you can change the resolution, and so on. It seems pretty neat. Of course, from experience I can tell you that this probably didn't work on the first try and there is a lot of work behind it, even though it seems pretty easy; keep that in mind. So for training this, there are actually two different losses. First of all, you have what's called the adversarial loss, and the adversarial loss is your classic GAN loss, right? The generator tries to fool the discriminator, and the discriminator tries to catch the generator. But then you also have a reconstruction loss, and the reconstruction loss says that at each layer you train the generator to reconstruct the original image when you put in zero noise (except at the lowest layer, which is handled differently). But essentially what you say is: well, when I don't input any noise, then please reconstruct the original image. And that seems to be important for this setup, so that the generative model is able to reconstruct the original image as a whole. These two losses are combined to form the training objective. And again, this is not trained on a dataset; it is trained on a single image each time. And the results are pretty cool. So again, here are more samples from just the single training images on the left side.
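Here is a minimal sketch of what one such stage could look like, assuming PyTorch; the channel width, the WGAN-style adversarial term, and the loss weighting alpha are assumptions for illustration rather than the paper's exact choices. The stage adds its noise map to the upsampled image from the coarser stage, runs it through five conv layers, and adds the result back onto that input, so it only has to contribute details at its own scale; the training objective combines the adversarial term with the zero-noise reconstruction term.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch),
                         nn.LeakyReLU(0.2))

class ScaleGenerator(nn.Module):
    """One stage of the pyramid: five conv layers over (noise + upsampled
    previous output), added residually back onto the upsampled previous
    output. Channel width is an assumption."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch),
                                  *[conv_block(ch, ch) for _ in range(3)],
                                  nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, noise, prev_img):
        return prev_img + self.body(noise + prev_img)

def generator_loss(G, D, noise, prev_img, real_img, rec_prev, alpha=10.0):
    """Adversarial term (fool the patch discriminator) plus reconstruction:
    with zero noise the stage should reproduce the downscaled real image.
    alpha is an assumed weighting, not the paper's value."""
    adv = -D(G(noise, prev_img)).mean()
    rec = ((G(torch.zeros_like(noise), rec_prev) - real_img) ** 2).mean()
    return adv + alpha * rec
```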
And then you have random samples from the single image. You can do things like super resolution, where this picture has been super-resolved into that picture. And I like that they investigate the effects of their setup. So they ask: okay, what happens if we just have two different scales in this scaling setup? Then you see the patch statistics will be very fine-grained and won't match any sort of coarse-grained structure. And the more scales you have, basically, the more different scales you capture. Even more interesting: in this chain where we have generator, scale up, generator, scale up, and so on, what you could do is not start here, but say, okay, scrap this layer; what we actually do is take the original image, scale it down, and input that here instead of inputting the output from the lower layer. So basically you start at, let's say, the ground truth. And that effect is shown here. If you start at the lowest layer in this particular example, you see that sometimes there are weird things. But what you can do is start at, let's say, an intermediate layer with the original image, and then, because you keep the coarse-grained structure the same, the variety you get will only be in the finer scales. Right, we said there are different layers, but you now eliminate these lower layers and replace them with the original image at that scale. So the variety you get will only come from these finer-grained patch scales. For example, as you can see here, the zebra samples now differ in how exactly their stripes are manifested. And this seems pretty cool: you have a handle on how fine-grained you want your details or your changes to be. And they do a bunch more experiments where you can do a lot of playful things with this. There is code available. For example, here you can see editing again as an example, where they also compare with content-aware move, which I think is implemented in Photoshop, and paint harmonization, as we saw before. So all of these things are very playful, very cool, and I encourage you to check out this paper; the code seems pretty easy. I have a remark, though. This, again, is only learned from a single image, and that's the cool part. But it should be possible to combine this with some sort of approach over a dataset. Like, if I have a model that is really good at a single image, at producing something that looks like that single image, I should be able to combine it with a model that has been learned from a database. It's kind of like a Bayesian approach where you say, okay, I want to produce the best image, so I want to maximize the probability of this image i given the other image j. But then you can also say, aha, that's proportional to p of j given i times p of i, right? You know, Bayes' rule. And it seems that this paper is dealing mostly with maximizing the likelihood of the output, while you could probably combine it with some sort of prior over natural images and come up with an even better model. Of course, then you'd need an actual database of images and a training procedure, and you need a way to combine these two models. So maybe that's a bit of a challenge. Anyway, cool paper. Check it out. Yeah, bye bye.
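As a rough sketch of how this "start at an intermediate scale" inference could look, assuming per-stage generators like the ones above have already been trained and frozen; the function name, arguments, and noise-amplitude scheme are made up for illustration. Passing a manipulated image as start_img is essentially how applications like editing and harmonization can be realized: the coarse structure is taken as given and only the finer scales re-synthesize the patch statistics.

```python
import torch
import torch.nn.functional as F

def sample(generators, sizes, noise_amps, start_img=None, start_scale=0):
    """Run the frozen pyramid coarse to fine. If start_img is given (e.g. a
    downscaled or edited version of the training image), generation begins
    at start_scale from that image instead of from pure noise, so only the
    finer scales introduce variation."""
    fake = None
    for n, (G, (h, w), amp) in enumerate(zip(generators, sizes, noise_amps)):
        if n < start_scale:
            continue
        if n == start_scale:
            fake = start_img if start_img is not None else torch.zeros(1, 3, h, w)
        # Upsample the previous output to this scale, add fresh noise,
        # and let this stage add its details.
        fake = F.interpolate(fake, size=(h, w), mode="bilinear",
                             align_corners=False)
        z = amp * torch.randn(1, 3, h, w)
        fake = G(z, fake)
    return fake
```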
| [{"start": 0.0, "end": 7.2, "text": " Hi there, today we'll look at SINGAN, learning a generative model from a single natural image"}, {"start": 7.2, "end": 12.88, "text": " by Tamar Rott Shaham, Tali Dekel and Tomar Mikhaili."}, {"start": 12.88, "end": 18.56, "text": " So this paper, as it says, it's dealing with learning a generative model from just one"}, {"start": 18.56, "end": 19.56, "text": " image."}, {"start": 19.56, "end": 23.72, "text": " And this kind of needs to be stressed because most generative model, even if they produce"}, {"start": 23.72, "end": 29.32, "text": " single image samples, they're kind of trained on a large image database beforehand to kind"}, {"start": 29.32, "end": 32.52, "text": " of learn what an image is."}, {"start": 32.52, "end": 36.2, "text": " But this algorithm really starts out clean slate, right?"}, {"start": 36.2, "end": 42.24, "text": " Algorithm starts out with nothing and then you give it this one single training image."}, {"start": 42.24, "end": 47.24, "text": " And from that, it can then generate all of these things, right?"}, {"start": 47.24, "end": 51.6, "text": " Without ever having seen any other images during training."}, {"start": 51.6, "end": 58.0, "text": " And the second row is simply a second example where you start clean slate, input this image"}, {"start": 58.0, "end": 61.08, "text": " and then produce these."}, {"start": 61.08, "end": 64.76, "text": " And you can see there's quite a bit of variety in the samples you produce from this image."}, {"start": 64.76, "end": 72.44, "text": " So basically, the task is if you're just given one image, learn something about the distribution"}, {"start": 72.44, "end": 77.36, "text": " and this paper specifically deals with patch distributions at different scales."}, {"start": 77.36, "end": 84.2, "text": " So this could be learned about the distribution of these grass to sky here."}, {"start": 84.2, "end": 90.36, "text": " So learn about the individual birds and so on."}, {"start": 90.36, "end": 95.60000000000001, "text": " And then at lower scales, learn about how the border of this grass looks, right?"}, {"start": 95.60000000000001, "end": 102.80000000000001, "text": " So you can, the generative model learns, ah, there's always kind of grass at the bottom"}, {"start": 102.80000000000001, "end": 106.52000000000001, "text": " in where there's just one image at the largest scale."}, {"start": 106.52000000000001, "end": 113.04, "text": " But then at lower scales, sometimes the border looks like a sharp corner."}, {"start": 113.04, "end": 117.80000000000001, "text": " And sometimes the border is relatively flat like here."}, {"start": 117.80000000000001, "end": 123.44000000000001, "text": " So it can vary up those things and it can make, it can make the border different."}, {"start": 123.44000000000001, "end": 128.6, "text": " As also the birds, it kind of learns how the individual birds look and how they're distributed"}, {"start": 128.6, "end": 132.04000000000002, "text": " and therefore it can change that."}, {"start": 132.04000000000002, "end": 134.12, "text": " You see there's quite a bit of variety here."}, {"start": 134.12, "end": 138.92000000000002, "text": " You can also change the aspect ratio and you can actually do much more, ah, much weirder"}, {"start": 138.92000000000002, "end": 140.92000000000002, "text": " things with it."}, {"start": 140.92, "end": 145.23999999999998, "text": " For example, here are some examples of applications."}, {"start": 145.23999999999998, "end": 146.92, 
"text": " First there is paint to image."}, {"start": 146.92, "end": 148.72, "text": " So these are different tasks here."}, {"start": 148.72, "end": 151.79999999999998, "text": " So the top row is always the training image."}, {"start": 151.79999999999998, "end": 154.92, "text": " This is the single image you give the algorithm."}, {"start": 154.92, "end": 159.76, "text": " And then you have a row of input and then this is what the algorithm outputs."}, {"start": 159.76, "end": 166.76, "text": " So in paint to image, you input the training image and you input a, you can do this in MS"}, {"start": 166.76, "end": 172.39999999999998, "text": " paint or something kind of the way you want the image to look."}, {"start": 172.39999999999998, "end": 178.39999999999998, "text": " So what you want the algorithm to do is take the style, you want to take the style of this"}, {"start": 178.39999999999998, "end": 184.48, "text": " image and put it into the form of that image and it produces this."}, {"start": 184.48, "end": 186.04, "text": " Looks pretty good."}, {"start": 186.04, "end": 193.88, "text": " In editing, you can tell the algorithm, all right, I want this, I want this tower to go"}, {"start": 193.88, "end": 195.72, "text": " lower down."}, {"start": 195.72, "end": 198.88, "text": " I want this house to be more wide."}, {"start": 198.88, "end": 203.96, "text": " So you'll get an image like this and you can see there are clear kind of contours here"}, {"start": 203.96, "end": 210.32, "text": " and here that are not nice and also the house is, you know, pixel stretched and so on."}, {"start": 210.32, "end": 217.07999999999998, "text": " So this algorithm, this generative algorithm can produce this image from it, which looks"}, {"start": 217.07999999999998, "end": 223.96, "text": " much better here around the borders and kind of fills in missing windows."}, {"start": 223.96, "end": 228.56, "text": " Too much, of course, the patch statistics that it sees in this top image, right?"}, {"start": 228.56, "end": 234.24, "text": " You always have to think that just all this algorithm sees is the topmost image to learn"}, {"start": 234.24, "end": 235.64000000000001, "text": " from."}, {"start": 235.64000000000001, "end": 241.68, "text": " Harmonization is a task where you have an input image and then you like copy paste some"}, {"start": 241.68, "end": 247.52, "text": " object in it and what it does is it will kind of adjust the patch statistics of that"}, {"start": 247.52, "end": 251.0, "text": " object to the surrounding image."}, {"start": 251.0, "end": 258.0, "text": " And super resolution finally, finally, we get what every single action movie, just the"}, {"start": 258.0, "end": 259.6, "text": " NSA can do."}, {"start": 259.6, "end": 263.24, "text": " It's like, ah, here is the security camera footage."}, {"start": 263.24, "end": 274.04, "text": " Zoom in, in hands, like, yeah, so I doubt that, you know, hidden number plates here,"}, {"start": 274.04, "end": 280.04, "text": " pixelation number plates, all of a sudden can become readable and identifiable, but still"}, {"start": 280.04, "end": 282.44, "text": " this, this is very cool."}, {"start": 282.44, "end": 289.64000000000004, "text": " And lastly, you can do animation from this as you can guess, I guess it's not a, it's"}, {"start": 289.64000000000004, "end": 293.16, "text": " not a, a movie."}, {"start": 293.16, "end": 296.92, "text": " All right, let's look at how they do all of this kind of stuff."}, {"start": 296.92, "end": 
301.40000000000003, "text": " All of this is the same model that can be tasked to do these different things through"}, {"start": 301.40000000000003, "end": 304.8, "text": " various probing at its essence."}, {"start": 304.8, "end": 307.64000000000004, "text": " It's this multi scale gun."}, {"start": 307.64, "end": 314.12, "text": " And the gun is trained to have a series of generators and a series of discriminators"}, {"start": 314.12, "end": 316.76, "text": " and you always train them one by one."}, {"start": 316.76, "end": 321.8, "text": " So first you train the lowest resolution and then you keep it fixed and then train the"}, {"start": 321.8, "end": 325.68, "text": " next resolution and so on until you're at the highest resolution."}, {"start": 325.68, "end": 334.0, "text": " So in each layer, so at the bottom layer, we simply feed in, we simply feed in noise to"}, {"start": 334.0, "end": 340.24, "text": " a generator of again and the generator generates an image."}, {"start": 340.24, "end": 341.24, "text": " All right."}, {"start": 341.24, "end": 346.64, "text": " Now you take this image and you take a down sampled version of your training image."}, {"start": 346.64, "end": 349.0, "text": " Remember, you just have one training image."}, {"start": 349.0, "end": 355.56, "text": " You take a down sampled version of that and you let the discriminator decide which one's"}, {"start": 355.56, "end": 359.88, "text": " real, which one's fake and you train the generator to fool the discriminator as much"}, {"start": 359.88, "end": 361.16, "text": " as possible."}, {"start": 361.16, "end": 364.64000000000004, "text": " Now if you were to do this with the entire image, of course the generator would simply"}, {"start": 364.64000000000004, "end": 368.52000000000004, "text": " learn to reproduce the original image."}, {"start": 368.52000000000004, "end": 370.8, "text": " So that's no good."}, {"start": 370.8, "end": 378.52000000000004, "text": " So what this paper does more is that the discriminator actually doesn't work on the entire image"}, {"start": 378.52000000000004, "end": 382.44000000000005, "text": " but just on patches of the image."}, {"start": 382.44000000000005, "end": 390.16, "text": " And that's too so that the basically can't memorize the entire image."}, {"start": 390.16, "end": 397.76000000000005, "text": " So the discriminator will pick these patches, these overlapping patches basically, you can"}, {"start": 397.76000000000005, "end": 403.48, "text": " imagine it's something like this overlapping patches and it will try to decide for each"}, {"start": 403.48, "end": 408.20000000000005, "text": " one is this patch real or is this patch fake."}, {"start": 408.20000000000005, "end": 412.36, "text": " So the generator produces the entire image."}, {"start": 412.36, "end": 417.08000000000004, "text": " This is what the generator produces the entire image."}, {"start": 417.08, "end": 423.88, "text": " But the discriminator can only see the image in patches, in overlapping patches."}, {"start": 423.88, "end": 431.28, "text": " And that's what makes this paper kind of work otherwise they would just remember the"}, {"start": 431.28, "end": 434.88, "text": " single training image, right, because you only have one training image."}, {"start": 434.88, "end": 437.91999999999996, "text": " You kind of need some variety."}, {"start": 437.91999999999996, "end": 440.47999999999996, "text": " So this is at the lowest scale, right?"}, {"start": 440.47999999999996, "end": 447.0, "text": " You 
remember you input the noise and the lowest scale in this example is for example,"}, {"start": 447.0, "end": 449.04, "text": " 25 by 25 pixel."}, {"start": 449.04, "end": 456.56, "text": " You scale down your original image here, also to 25 by 25 and then you let the discriminator"}, {"start": 456.56, "end": 457.72, "text": " decide."}, {"start": 457.72, "end": 462.96, "text": " So once you've trained this, once you've trained this generator to make very good 25 by"}, {"start": 462.96, "end": 471.68, "text": " 25 pixel images that in this patch way fool the discriminator, you keep it fixed."}, {"start": 471.68, "end": 475.88, "text": " For the next stage, what you want to do is you always want to go through this layer"}, {"start": 475.88, "end": 477.08, "text": " first."}, {"start": 477.08, "end": 481.92, "text": " So forget this discriminator now, we've trained this stage, right?"}, {"start": 481.92, "end": 484.08, "text": " Keep this generator fixed."}, {"start": 484.08, "end": 492.96, "text": " Input noise, output, whatever the generator produces, then take this up scale it's for"}, {"start": 492.96, "end": 498.08, "text": " example, multiply each side by two to 50 by 50 pixels."}, {"start": 498.08, "end": 504.36, "text": " Input this together with some new noise into the next stage generator and then the same"}, {"start": 504.36, "end": 507.24, "text": " as before, this generator produces an image."}, {"start": 507.24, "end": 515.2, "text": " You scale down your original image, you scale it down to now 50 by 50 pixels and you let"}, {"start": 515.2, "end": 518.16, "text": " the discriminator decide again in patches."}, {"start": 518.16, "end": 523.48, "text": " Now since the discriminator patches are always the same size, but we scale down the image"}, {"start": 523.48, "end": 528.2, "text": " less and less, the effective patch size of the discriminator becomes much lower."}, {"start": 528.2, "end": 536.72, "text": " So now this discriminator only sees the image in patches like so, right?"}, {"start": 536.72, "end": 543.4000000000001, "text": " And also the generated image that comes in here, it also sees in these patches, it tries"}, {"start": 543.4000000000001, "end": 549.96, "text": " to decide are these patches from real or from fake images?"}, {"start": 549.96, "end": 559.32, "text": " So you can see that the lowest layer here, this layer is trained to kind of get the"}, {"start": 559.32, "end": 564.44, "text": " coarse grain structure of the image, right?"}, {"start": 564.44, "end": 573.1600000000001, "text": " So the discriminator will kind of see very large patches and so the generator must match"}, {"start": 573.1600000000001, "end": 575.6800000000001, "text": " the kind of large scale structure."}, {"start": 575.68, "end": 581.0, "text": " These patches won't be very, very high resolution because we downscaled the image, but they will"}, {"start": 581.0, "end": 582.76, "text": " be large across the image."}, {"start": 582.76, "end": 589.4, "text": " So the generator must match the coarse, low resolution stuff in the image."}, {"start": 589.4, "end": 597.76, "text": " But as you go up the layers, up and up the layers, your discriminator sees less and less"}, {"start": 597.76, "end": 605.24, "text": " of the picture at once, excuse me, it sees less and less of the picture at once."}, {"start": 605.24, "end": 611.12, "text": " And so this discriminator here in the top most layer can only concentrate on very small"}, {"start": 611.12, "end": 618.8, "text": " patches and therefore 
this generator will only have to produce things that look real at"}, {"start": 618.8, "end": 622.32, "text": " a very, very small scale, right?"}, {"start": 622.32, "end": 631.64, "text": " So in essence, you have this series of generators trained that each one is tasked with basically"}, {"start": 631.64, "end": 637.84, "text": " modeling details at a finer and finer scale until you come to the last final scale."}, {"start": 637.84, "end": 642.24, "text": " But then each input of the each one is the output of the last one."}, {"start": 642.24, "end": 646.88, "text": " So basically you take whatever the last one has produced and the last one is really good"}, {"start": 646.88, "end": 654.76, "text": " at doing coarse or grain things and you add to it your details of this level and this"}, {"start": 654.76, "end": 662.76, "text": " will in the end give you a very realistic image that matches at every level of resolution,"}, {"start": 662.76, "end": 669.3199999999999, "text": " matches the kind of statistics, the patch statistics of this real image."}, {"start": 669.3199999999999, "end": 676.2, "text": " So that's the whole point kind of of this thing to have this series of generators one"}, {"start": 676.2, "end": 682.56, "text": " after the other, each one adds their own details at its own scale."}, {"start": 682.56, "end": 687.04, "text": " And this works super well apparently. So each generator is just built like this."}, {"start": 687.04, "end": 694.1999999999999, "text": " It takes the not some noise and the image of the lower scale, it adds them, sorry for"}, {"start": 694.1999999999999, "end": 701.4799999999999, "text": " these artifacts, it puts it through five convolutional layers and then simply combines it"}, {"start": 701.4799999999999, "end": 704.16, "text": " with the input, right?"}, {"start": 704.16, "end": 709.0, "text": " And this will produce this image at this scale."}, {"start": 709.0, "end": 712.92, "text": " So that's each layer, it's just five conv layers."}, {"start": 712.92, "end": 718.24, "text": " And since they're fully convolutional, you can actually change the aspect ratio at"}, {"start": 718.24, "end": 723.48, "text": " inference time, you can change the resolution and so on."}, {"start": 723.48, "end": 728.84, "text": " It seems pretty neat."}, {"start": 728.84, "end": 734.56, "text": " Of course from experience I can tell you that this probably didn't work at the first try"}, {"start": 734.56, "end": 740.8399999999999, "text": " and there is a lot of work, even though it seems pretty easy, keep that in mind."}, {"start": 740.8399999999999, "end": 744.8399999999999, "text": " So for training this, there are actually two different losses."}, {"start": 744.8399999999999, "end": 748.8399999999999, "text": " First of all, you have what's called the adversarial loss."}, {"start": 748.8399999999999, "end": 753.2399999999999, "text": " And the adversarial loss is your classic gain loss, right?"}, {"start": 753.2399999999999, "end": 758.0, "text": " Where the generator tries to, tries to fool the discriminator and the discriminator tries"}, {"start": 758.0, "end": 759.8399999999999, "text": " to catch the generator."}, {"start": 759.84, "end": 766.0400000000001, "text": " But then also you have a reconstruction loss and the reconstruction loss specifically deals"}, {"start": 766.0400000000001, "end": 777.0, "text": " at each layer, at each layer you train the generator to reconstruct the original image"}, {"start": 777.0, "end": 781.24, "text": " when you put in a 
zero noise, except that the lowest layer."}, {"start": 781.24, "end": 787.0, "text": " But essentially what you want to do is you want to say, well, when I don't input any"}, {"start": 787.0, "end": 792.04, "text": " noise, then please reconstruct the original image."}, {"start": 792.04, "end": 798.52, "text": " And that seems to be important for this setup to include this noise so that the generative"}, {"start": 798.52, "end": 805.64, "text": " model is basically able to reconstruct the original image as a whole."}, {"start": 805.64, "end": 809.56, "text": " So these two losses are combined to form the training objective."}, {"start": 809.56, "end": 812.2, "text": " And again, this is not trained on a data set."}, {"start": 812.2, "end": 816.9200000000001, "text": " It is trained on a single, each on a single image."}, {"start": 816.9200000000001, "end": 821.44, "text": " And the productions are pretty cool."}, {"start": 821.44, "end": 827.2, "text": " So again, here are more samples from just the single training images at the left side."}, {"start": 827.2, "end": 830.44, "text": " And then you have random samples from the single image."}, {"start": 830.44, "end": 835.36, "text": " You can do things like super resolution where this picture has been super-resoluted to"}, {"start": 835.36, "end": 837.0400000000001, "text": " that picture."}, {"start": 837.04, "end": 843.92, "text": " And I like that they investigate the effects of their setup."}, {"start": 843.92, "end": 851.5999999999999, "text": " So they ask, okay, what happens if we just have two different scales in this scaling setup?"}, {"start": 851.5999999999999, "end": 860.0799999999999, "text": " Then you see the patch statistics will be very fine-grained and it won't match any"}, {"start": 860.0799999999999, "end": 862.36, "text": " sort of coarse-grained structure."}, {"start": 862.36, "end": 870.52, "text": " If you have very many scales, the more scales you have, basically, the more different scales"}, {"start": 870.52, "end": 873.6, "text": " you capture."}, {"start": 873.6, "end": 883.36, "text": " Even more interesting is what if, so at this layer where we have GG, G-R-U scale-up,"}, {"start": 883.36, "end": 889.9200000000001, "text": " scale-up, and so on, what you could do is you could not start here, but you say, okay,"}, {"start": 889.92, "end": 894.28, "text": " crap this layer, what we actually do is we take the original image and we scale it down"}, {"start": 894.28, "end": 899.8399999999999, "text": " and we input that into here instead of inputting the output from the lower layer."}, {"start": 899.8399999999999, "end": 903.16, "text": " So basically you start at, let's say, the ground truth."}, {"start": 903.16, "end": 907.4799999999999, "text": " And that effect is shown here."}, {"start": 907.4799999999999, "end": 918.4399999999999, "text": " So if you start at the lowest layer in this particular example, you see that the, sometimes"}, {"start": 918.44, "end": 920.6800000000001, "text": " there are weird things."}, {"start": 920.6800000000001, "end": 926.7600000000001, "text": " But what you can do is start at, let's say, an intermediate layer with the original image"}, {"start": 926.7600000000001, "end": 932.08, "text": " and then the variety you get because you kind of keep the coarse-grained structure the"}, {"start": 932.08, "end": 933.08, "text": " same."}, {"start": 933.08, "end": 937.9200000000001, "text": " The variety you get will only be in the right we set, there are different layers, but"}, {"start": 
937.9200000000001, "end": 943.4000000000001, "text": " you now eliminate these two layers and replace them with the original image at the scale."}, {"start": 943.4, "end": 948.92, "text": " So the variety you get will only be from these finer-grained, lower-resolution patches"}, {"start": 948.92, "end": 949.92, "text": " things."}, {"start": 949.92, "end": 956.72, "text": " So for example, as you can see here, the zebra samples now differ in how exactly their"}, {"start": 956.72, "end": 959.0799999999999, "text": " stripes are manifested."}, {"start": 959.0799999999999, "end": 963.68, "text": " And this seems pretty cool."}, {"start": 963.68, "end": 970.64, "text": " So you have kind of a handle on how fine-grained you want your details or your changes to be."}, {"start": 970.64, "end": 978.08, "text": " And they do a bunch of more experiments where you can do a lot of kind of playful things"}, {"start": 978.08, "end": 979.64, "text": " with this thing."}, {"start": 979.64, "end": 982.24, "text": " There is code available."}, {"start": 982.24, "end": 988.64, "text": " For example, here you can see editing, again, as an example, where they compare also with"}, {"start": 988.64, "end": 996.84, "text": " content-aware move, which I think is implemented in Photoshop and paint harmonization, as we saw"}, {"start": 996.84, "end": 998.72, "text": " before."}, {"start": 998.72, "end": 1004.48, "text": " So all of these kind of things are very playful, are very cool, and I encourage you to check"}, {"start": 1004.48, "end": 1007.5600000000001, "text": " out this paper and the code seems pretty easy."}, {"start": 1007.5600000000001, "end": 1009.72, "text": " I have a remark though."}, {"start": 1009.72, "end": 1014.5600000000001, "text": " This again is only learned from a single image, and that's the kind of cool part."}, {"start": 1014.5600000000001, "end": 1023.1600000000001, "text": " But it should be possible to combine this with some sort of approach over a dataset."}, {"start": 1023.16, "end": 1031.24, "text": " Like if I have a model that is really good at a single image, producing something that"}, {"start": 1031.24, "end": 1036.1599999999999, "text": " looks like a single image, I should be able to combine it with a model that has been learned"}, {"start": 1036.1599999999999, "end": 1039.1599999999999, "text": " from a database."}, {"start": 1039.1599999999999, "end": 1046.84, "text": " It's kind of like an abasian approach where you say, okay, I want to produce the best image."}, {"start": 1046.84, "end": 1055.1999999999998, "text": " So I want to maximize the probability of this image given the other image."}, {"start": 1055.1999999999998, "end": 1067.1999999999998, "text": " But then you can also say aha, but that's kind of proportional to j given i times p of"}, {"start": 1067.1999999999998, "end": 1068.1999999999998, "text": " i, right?"}, {"start": 1068.1999999999998, "end": 1069.28, "text": " You know, base rule."}, {"start": 1069.28, "end": 1074.8799999999999, "text": " And it seems that this paper is dealing mostly with kind of maximizing the likelihood"}, {"start": 1074.88, "end": 1080.44, "text": " of the output while you could probably combine it with some sort of prior over natural"}, {"start": 1080.44, "end": 1085.1200000000001, "text": " images and come up with an even better model."}, {"start": 1085.1200000000001, "end": 1091.2800000000002, "text": " Of course, then you'd need an actual database of images and training procedure, and you"}, {"start": 1091.2800000000002, 
"end": 1093.72, "text": " need a way to combine these two models."}, {"start": 1093.72, "end": 1095.24, "text": " So maybe that's a bit of a challenge."}, {"start": 1095.24, "end": 1096.64, "text": " Anyway, cool paper."}, {"start": 1096.64, "end": 1097.64, "text": " Check it out."}, {"start": 1097.64, "end": 1127.6000000000001, "text": " Yeah, bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=BTLCdge7uSQ | AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning | DeepMind's new agent to tackle yet another esport: StarCraft II. This agent uses deep reinforcement learning with a new technique, called League Training, to catapult itself to Grandmaster-level skill at playing this game.
Abstract:
Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions, the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.
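To make the "diverse league of continually adapting strategies and counter-strategies" a bit more concrete, here is a minimal sketch of league-style opponent matchmaking with prioritized fictitious self-play. The agent roles, weighting exponent, and sampling scheme below are illustrative assumptions, not the exact implementation described in the paper.

```python
import random

# Minimal sketch of league-style matchmaking with prioritized fictitious self-play (PFSP).
# The roles, weighting exponent and sampling scheme are illustrative assumptions,
# not the paper's exact algorithm.

def pfsp_weight(win_rate: float, p: float = 2.0) -> float:
    """Weight opponents the learner does not yet beat reliably more heavily."""
    return (1.0 - win_rate) ** p

def choose_opponent(agent_role: str, league: list, win_rate: dict, rng=random):
    """Pick one opponent snapshot from the league for a single training game.

    agent_role: "main", "main_exploiter" or "league_exploiter"
    league:     snapshot names, e.g. ["main_v2", "league_exploiter_v1", ...]
    win_rate:   estimated win rate of the learning agent against each snapshot, in [0, 1]
    """
    if agent_role == "main_exploiter":
        # Main exploiters only target the main agents, to surface their weaknesses.
        pool = [s for s in league if s.startswith("main_v")]
    else:
        # Main agents and league exploiters sample from the whole historical league.
        pool = list(league)
    weights = [pfsp_weight(win_rate.get(s, 0.5)) for s in pool]
    if sum(weights) == 0:  # the learner already beats everyone: fall back to uniform
        weights = [1.0] * len(pool)
    return rng.choices(pool, weights=weights, k=1)[0]

# Toy usage: a main agent that still loses often to one exploiter snapshot
league = ["main_v1", "main_v2", "main_exploiter_v1", "league_exploiter_v1"]
win_rate = {"main_v1": 0.9, "main_v2": 0.7, "main_exploiter_v1": 0.3, "league_exploiter_v1": 0.5}
print(choose_opponent("main", league, win_rate))  # most often prints "main_exploiter_v1"
```

The weighting function biases training games toward opponents that currently win, which is the mechanism that keeps the main agents improving against the strategies that still beat them.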
Authors: Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, Rémi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario Wünsch, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, David Silver
https://www.deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher | Alright, let's talk about Alpha Star, Grandmaster level and Starcraft 2 using multi-agent reinforcement learning. The corresponding paper looks like this and is by Oriol Vinyals at Al from Deep Mind and has been published in the Journal of Nature recently. Now let me say this first, stop publishing in Nature. This is a journal, it's not open access, it makes its readers pay for getting the article. So actually you can access this article or a public version of it for free but you can't print it, you can't download it unless you pay for it. And this to me it seems ridiculous because none of this money goes to the authors of the article, none of this money goes to the reviewers. The review quality isn't notably better at least in the field of computer science. All of this is a publicity stunt by Deep Mind because Nature has been kind of impactful in the last decades, it's like, oh look at me, I got a big dick, I publish in Nature. Nothing more than that, it's like open AI saying their model is too dangerous to release to the world. I guess Deep Mind might make the same claim. But Alpha Star, it's like too dangerous of a Starcraft player. Yeah, stop this, publish your research in open access, Nature or journals like these four computer science, it's a remnant of the last century. So go on and join everyone else in distributing your knowledge. Alright, rant over, let's jump in into this article. So the article describes how to train a reinforcement learning agent to play the game of Starcraft 2. So Starcraft 2 is this game for everyone who doesn't know. Just very quickly explain the game. Starcraft 2 is a real time strategy game and you're kind of in this top third person view and you control your units and the goal is kind of to move your units around. And first of all build up buildings and using those buildings you can then produce more and more diverse units. And ultimately you want to kind of produce some sort of army that can go to the opponent and destroy the opponent's base. So you control all of this on a computer using a mouse and a keyboard and Starcraft is notable for being very balanced. So there are three different races you can play. So first are the Terran which are kind of human-ish. They have marines and tanks and helicopters I believe and things like this. Then the Protoss are some sort of alien race that are super advanced so they can teleport and have energy shields and things like that. And then the Zerg and the Zerg are kind of a key ground dwelling creatures that infect things and spread like a disease. So the interesting thing here is compared to other real time strategy games is that the three races they play very different. So the game is almost a different game if you play as a different race. But they are so well balanced that almost any match up is kind of a fair game between equally skilled players. So that's makes Starcraft pretty unique. Also pretty unique is the very, very high action per minute rates that pro players get like they play this insanely fast. So the game lasts about 10 to 15 minutes and as I said the goal is to destroy the enemy base. So to train an RL agent to play this is very hard because like the action space is very high. You have to target with your mouse part of the screen. You have to look what is on the screen. What can I do? There is this mini map down here. Our things you could do, their opponents you can target and so on. 
So all of this is very, very difficult for an RL agent and at the end right after whatever 10 minutes right you play, play, play, play, play. After 10 minutes you either win or you lose and the RL agent has to figure out which of the actions that I did during those 10 minutes right is was this one was this one which lead to me winning or losing. These are very hard problems for reinforcement learning and the deep mind has combined almost every trick in the book known so far to RL to achieve this. Now the main contribution I'd say here that is novel is what is called league training and we'll get to that. So first of all if you don't know what reinforcement learning is reinforcement learning is basically what I just described you have an input right which is could be this thing here and you have a set of actions you have a set of actions that you can do which the set of actions here is anywhere you can click right you can click anywhere on the screen and you have to do this over and over and over and over again until you either win or you lose and from that you will see you will at the end receive yeah you win or you lose and then you have to kind of learn to play the game. So it's machine learning hardcore because you get minimal information and have to achieve a lot of things from it. So the first thing that deep mind actually does is it does supervised learning and we'll get into how exactly the model works later but first thing deep mind does is it trains an agent to simply imitate humans right so you have human data and from the human data you so these are games played by humans good humans right not not people like me so these are games played with humans from a significantly high elo and the first thing you extract is this Z here. Now Z is a is a called a statistics vector and as I understand it it's mainly the build order which means in which order do you build your buildings and units and this is very important in Starcraft this is a strategic decision where you say okay first I'm going to build three worker units this is like three workers worker worker and then I'm going to build a house and then I'm going to and so on. So these are major strategic decisions that that you kind of have to make with minutes ahead of time to plan in advance. And this is kind of a stays constant for the game so this is extracted and provided to the model as an input. So what is the current strategy basically the current overall strategy. The second thing that is extracted is this is at every time step the observation that the humans had so the screen that humans see and also the actions that the human did right. So the human takes its mouse and clicks somewhere right this is supposed to be a mouse pointer and clicks here right and then the model this part here this is the model and this is the policy function of the model so the policy decides what to do right is trained to match the action that the human did. So in essence first you train an agent to simply imitate humans and this you can do by supervised learning right this is classic machine learning each each step you have this input which is an image and you have the strategy you're trying to follow and from these two you're simply trying to match the action that human did assuming the human made a good decision. So this is how you initialize right you don't start from scratch. 
Now I have to say that even though this name is AlphaStar, it has surprisingly little to do with AlphaGo or AlphaZero that DeepMind has done before. Mainly this is entirely model-free reinforcement learning and goes more into the direction of classic deep RL. And you can see with the human data you can already get pretty far. So these down here are the leagues of StarCraft and this here are percentiles of players, and you see with the supervised training you can get better than 80 to 85% of human players already, right? So pretty impressive already, simply by imitating humans. Now, the way to further improve this, and let's actually go first into what the model looks like, so down here they describe this model. So the model is supposed to map from input to output, so from the screen that the agent sees, right, and some other things, to what the agent is going to do, to an action a. If you simply do this at every time step, then you have a game-playing agent. So first the question is of course, how does this happen? Now the input isn't only the thing that the agent sees, which is this, the minimap, and I believe that's the minimap or the entire map. Well, in essence it's a picture. It is also a list of entities, so the game engine extracts a list of entities, and these can be inside the screen here and outside the screen for friendly units. So the assumption is the agent knows about all of its units and where they are and what their statistics are. So in this entities thing, for each entity you have a list of what is its health, what is its type, what is its position, does it carry any items, and so on, all the things you need to know about this entity. This is in this list of entities, and along with that also opponent entities, but only the ones that are on screen, right? So all of this goes into this list of entities. Then the next features are scalar features, and as I understand it, scalar features are things like what race are you playing currently, what time is it in the game, and so on. So these are additional features, and also baseline features, and these are mainly used to train the value network. And this is not going to make sense if you know nothing about reinforcement learning, but one main contribution of this paper, or not contribution, but kind of thing that they claim, is that for computing the value network they also use the observations of the opponent player, because you know these during training, because you're doing self-play, and you don't need this value network during inference, so you can actually do this, and this improves performance significantly. All right, so that's just for people who know RL very well; everyone else, don't worry too much about these things. All right, so these are the inputs: the scalar features, the entities and the minimap, and each one goes through a separate encoder. So the minimap goes through a ResNet, which is a convolutional network, and the entities go through a transformer, which is kind of a thing that is appropriate to encode a set of entities, right? Scalar features go through a classic feed-forward network, an MLP. All of these get combined here into a deep LSTM that goes over time. Now the deep LSTM is what really makes the strategy, because each time step a screen like this is input into the thing, but the agent also needs to remember what did it do last step, two steps ago, right? This is important because you don't have full observability; you need to know what did I do
in the past, and that's where the LSTM comes in. So if in the last step you saw this screen, and the step before you saw this screen, right, then all of this would go through this encoding step into the LSTM. So the LSTM will encode, over time, all of these different steps, and so you can kind of say, all right, if I have just started building a building, I should probably not build the same building again, even though I can't see it on the screen, right, because I know that three steps ago I did start building a building. Yeah, so the LSTM is basically where you integrate your strategy over time. So from the LSTM you have to make two predictions: you have to make a prediction of what to do, this is the action, and of how valuable your current state is, and how valuable your current state is, this is called the value network. Usually these are the core components of deep reinforcement learning, these two components, one is called the policy, which would be everything over here, and one is called the value network, which is everything over here. These are the things you need to do actor-critic learning, and actor-critic learning is the current state of the art in deep RL. So DeepMind does nothing else here, except, as I said, they use these baseline features for the value network. But if you don't know what a value network is, don't worry about it; the important part for playing the game is actually the part over here, called the policy. So first you need to decide what action you do, and there are many action types in StarCraft. As I already said, you can build a building, you can move a unit, you can actually move the camera, that's an action type, right, because you want to maybe see what's over here or over here or over here. So that's an action you can do. And if you have decided on what action you want to do, you have to decide when do I do it. So you see, the action type, once you've figured it out, goes into the next neural network, and that decides, okay, when do I do it, when do I do this action? So it specifies a delay. Then once you've decided what to do and when to do it, it goes into the next neural network, and that decides, should I put this into the queue of actions? Because the agent here is limited to a certain number of actions per second, and I think it's 22 actions per 5 seconds or something like this, in order to mimic human limitations. So there's a queue of actions to be executed, and the agent needs to decide, do I really want, is this action so important to put it into the queue? All right, if you have decided what to do, when to do it, whether you would like to do it at all, right, then it goes into the next neural network and you have to say, all right, which units do I want to do it with? Right, if you want to build a building, you have to choose one or many workers to do it. I don't actually know how StarCraft works in this, I'm a bit of a noob, but you have to select units with which to do the action for most of the things. And there I like the use of a pointer network here. So what a pointer network is, is a network that can point to its own input. It's sort of like an attention network, but not really. In a pointer network, if you have a set of inputs, like we have here, so entity, entity, entity, entity, right, all these entities, and you can see the entity embedding, the entity encoder, actually has skip connections that go here, right, so this network directly gets these entities as input, and then you have a neural network on top of that.
That neural network takes all of these things as an input, and what the neural network will output is a pointer to one of these things, right? I can say, look, I point to this thing right here. This is called a pointer network, and yeah, as I said, it's different from an attention network. So an attention network is where you get a distribution; actually you get a distribution in both cases, there is a difference, but we don't really have time to go into it here. But in essence, with a pointer network you can select which of these entities you want to do something with. All right, now you've decided on which action, when, whether to queue it, with which unit to do it. Now you have to decide, for some actions, for example if the action is attack or heal or something, this target unit, which unit you want to target, or which location on the map you want to target, this is the target point here. And you can see again here are skip connections from the entity encoder and from the spatial encoder to these things. And while the target unit is an attention network, much like a pointer network, where you kind of point to places in lists, the target point is a deconvolutional ResNet. What that means is, so you have the spatial encoder here, it will embed the minimap, so there will be a neural network right here, actually let's draw the neural network in this color right here, it will give you an embedding of that, right, and that's what you feed, for example, into the LSTM. But then what you do is you have a deconvolutional network which again produces a minimap, but it's not the original minimap, it's kind of a distribution of locations, so it's a logit of, here, here, do I want to point? All right, so this neural network is responsible for producing this dot on the minimap, saying, okay, I know what to do, when to do it, with which units to do it and so on, and I want to do it right here on the minimap. Okay, and now you have it, right? You go from the input, which are these things, the minimap, the entities and so on, to what do I want to do, where, when, with which units, and so on, right? This is called a policy, and it's extremely complicated. Every one of these boxes here is a neural network, and you can see it's a lot to train, and of course they have a lot of resources since they are DeepMind, but that's the main thing. All right, they have a few tricks to train this, and we won't go too much into this, but one of the tricks is V-trace from the IMPALA paper, another trick is UPGO, the upgoing policy update, and a third trick is TD-lambda learning here. And all of these are kind of improvements onto classic actor-critic reinforcement learning style, like A2C or A3C. If you are interested, then you can, you know, look into these things. So that's how they train it, and the question now is, what's the protocol for training it? We saw, okay, there is supervised learning, cool, then there is reinforcement learning, all right, but you can't just apply this. And in the reinforcement learning, this is what we said, you get kind of a reward, and the reward goes into the TD-lambda and V-trace and upgoing policy update to train the value function and the policy. But the special thing that this paper introduces is what's called league training. Now, in papers like AlphaGo or AlphaZero, what had been done is called self-play, and self-play basically means you have an agent, right? You have this, how do I draw an agent?
this is supposed to be an artificial intelligence right how to make it artificial okay it has a little hat right a funky hat it's a robot and the robot will play a copy of itself right and the copy might be slightly different but the it basically these two these two play each other and thereby become better and better and better and you can see this like over time as as the purple one gets better the blue one gets better as well because they they kind of play against each other and when one falls behind right when one falls behind then they simply copy over from the other one they basically copy the other one and then they catch up again right they catch out right and they continue competing so by competing against each other they get better and this is called self-play now people have noticed this kind of leads to instabilities because you can get kind of trapped trapped in cycles like rock paper scissors cycles so what they do is they will actually as they get better so this is the first version right and the second version they are a bit better now so they have bigger hats right and here bigger bigger larger hats right and down here they are even better so they they have like ginormous hats but they might have some weaknesses because they only play against each other right so this is the same players but over time what they will do is they will actually play occasionally play old versions of the other player or of themselves right occasionally the new versions will fall back and play old versions or not only the current versions of the agent or old versions of themselves right so this is called fictitious self-play and that you always play the you know not only play the your current kind of opponent or your current self I mean it's the same anyway because you keep copying the weights you also play the old ones and this paper goes a step further and says actually we we do this but we want to prioritize the good ones so for example we know that we know that the current ones are good right but we know that this particular one was also pretty good so far so we are we keep making we keep making these these new ones play against this one more often and this has led to kind of an improvement in these kind of self-play algorithms and the the real new part of this alpha star paper is the fact that they do this league training and in the league training this this is what it looks like but I find this graphic rather confusing I'd rather explain it like something like this all right so there is your current your current strategy and you have a hat right and you do all of the you do all of the all of the I play against myself with the smaller hat thing right I play against past versions of myself fine but then you also do you have what's called exploiters and exploiters and exploiters is a let's call it a triangle hat because it's very evil what it does is it specifically targets only the current good agent right so this this agent right here is tasked with playing old versions of itself and playing the exploiter both at the same time but the the exploiter is only tasked with playing this thing so what it can do is it can specialize in exploiting whatever weaknesses this player has of course the hope is that the this player will become better in response because there's a player trying to exploit it right so every and as this as this player becomes better than this player here it's reinitialized and tries to find new weaknesses right so as this as this one continues to learn so the exploiters they 
are initialized you can see this here so these are called the main agents and you can see they play against each other right one of them they play against each other they play against past versions of themselves so these are past versions of themselves but then there are these main exploiters and the main exploiters they're constantly reinitialized from human data right you can see this here they're reinitialized and they only play against these main players right they don't have to deal with any of the past players or playing against themselves stuff they only try to exploit the main players and thereby the main players get better once they get better than an exploiter they are reinitialized so the exploiters are reinitialized to find new exploits of the main agents the third component is what's called a league exploiter and the league exploiter is the following so the league let's the league exploiter here and its hat is a wavy hat and what the league exploiter does is it plays against past versions of itself and others so it does play against the league exploiter sorry with smaller wavy hat it also plays against this thing by the way this this here also plays against past versions of this and of everything else you can see here the past version arrows it goes against all past players so this this represents all the past players that ever existed and so the the so does the so here but also against past versions of this of this main exploiter here but the important thing is the current main exploiter doesn't play past versions of itself right so this also plays this and this place this and this place this and this also plays this so the league exploiter they they do take part in this called league business like playing against past versions of all the players but it only plays against the main exploiters and this is a thing that I find missing here honestly I don't know if I don't understand this but I'm pretty sure I do like these also play these and that's an arrow missing in the in the drawing the league exploiters play the main agents but the main difference between the league exploiters and the main agents is the league exploiters they don't play themselves right there's no there's no playing themselves on the league exploiters so the league exploiters what they can do is they can find weaknesses of the entire league and kind of train train the by playing against the main opponents using those found weaknesses you bet that the main the main agents will get better against those major weaknesses of the entire league right so the main agents for first of all they get better by playing the main exploiters because the main exploiters are mainly trying to exploit the main agents the main agents also get better by playing the league exploiters because the league exploiters find weaknesses of the entire league right so and the main agents they also get better by playing each other. So that makes these these main agents kind of you can say they're trained against everything under the sun, right, against any possible exploit that can be found either in selves or generally and thereby they get really good at start craft because they can counter pretty much everything. So this is how league training works and this is what I feel is the main contribution of this paper to the reinforcement learning world. Now they do an ablation study here. You can see where where where this ends up. So these final agents here they end up in Grandmaster level star craft and beat 99.0 some percent of human players. 
So really, really good. They do an ablation study of all of the tricks they use. So this is pretty much all the tricks they use, and you can see here this includes the league composition, right: what happens if we only have main agents, then main exploiters, league exploiters, and you can see the Elo going up. Then you can see, okay, multi-agent learning: how much does this fictitious self-play help, the fact that we prioritize strong players, and so on, and you again see the Elo going up. How much does it help that we use human data? How much does it help that we use these different networks, and so on. They have very good kind of ablation studies of how much each of these things helps. Here they investigate what if we didn't have a camera interface, so what if we could see the entire game at once and not only kind of the opponents that are within the camera, and what if we didn't need to move the camera. They investigate the off-policy learning corrections that we mentioned, and so on. So I find this very cool that they do these huge ablation studies to show really how much each of these tricks that they use helps in generating their superior performance. And here you can see how these agents develop over training. They have a massive infrastructure and they train for days, you can see this here, but you can see that the main agents just get better and better and better. While the main exploiters, of course, they stay the same, but they kind of keep getting re-initialized. So these main exploiters are trying to exploit these main agents: this one is trying to exploit this one. They're not by themselves really good agents, but they're simply trying to find and exploit weaknesses of the main agents. Likewise, these league exploiters, they do get better with the league, but they are only concerned with exploiting current and past versions of the league, right? Also to make the main agents better. So everything is geared towards making these main agents better. And you can see it actually works. They have some analysis of which builds, which units these agents build. I'm not versed enough in StarCraft to comment on this, but all in all, I find this to be a very cool paper. And I find it to be described fairly clearly what they do, though they do not release the source code; they release some kind of pseudocode. But the analysis and the ablations are very good. The results are, let's say, questionable, because of course you can't compare machines to humans, especially in a game where you have to make quick actions. Even if you limit the actions, and they do this here, so they have this monitoring layer which limits the actions and introduces delay and so on, but still, it's not the same as a human, who might not always be able to do these 22 actions per five seconds. If something quick happens, they may need to have some kind of relaxation phase and so on. But they try, with these kinds of delays and action limits, to model these kinds of limitations. So that's, yeah, I find this as fair as possible. This is what I find kind of problematic: so the own units, as I said, the agent can also see the ones that are outside the camera. And that seems kind of shady, because of course you can claim humans can use control groups to also control units outside the camera, but it's not really the case. So that's sort of a distinct advantage that the machine has.
But yeah, in any case, I find it to be very well done. And I hope this made it a bit clear what the exact contributions are. And with that, have a fun time playing against Alpha Star. Bye bye. | [{"start": 0.0, "end": 7.4, "text": " Alright, let's talk about Alpha Star, Grandmaster level and Starcraft 2 using multi-agent reinforcement"}, {"start": 7.4, "end": 8.4, "text": " learning."}, {"start": 8.4, "end": 15.32, "text": " The corresponding paper looks like this and is by Oriol Vinyals at Al from Deep Mind and"}, {"start": 15.32, "end": 19.56, "text": " has been published in the Journal of Nature recently."}, {"start": 19.56, "end": 24.16, "text": " Now let me say this first, stop publishing in Nature."}, {"start": 24.16, "end": 29.96, "text": " This is a journal, it's not open access, it makes its readers pay for getting the article."}, {"start": 29.96, "end": 36.519999999999996, "text": " So actually you can access this article or a public version of it for free but you can't"}, {"start": 36.519999999999996, "end": 40.84, "text": " print it, you can't download it unless you pay for it."}, {"start": 40.84, "end": 47.92, "text": " And this to me it seems ridiculous because none of this money goes to the authors of"}, {"start": 47.92, "end": 50.96, "text": " the article, none of this money goes to the reviewers."}, {"start": 50.96, "end": 56.28, "text": " The review quality isn't notably better at least in the field of computer science."}, {"start": 56.28, "end": 61.8, "text": " All of this is a publicity stunt by Deep Mind because Nature has been kind of impactful"}, {"start": 61.8, "end": 68.2, "text": " in the last decades, it's like, oh look at me, I got a big dick, I publish in Nature."}, {"start": 68.2, "end": 73.24000000000001, "text": " Nothing more than that, it's like open AI saying their model is too dangerous to release"}, {"start": 73.24000000000001, "end": 74.24000000000001, "text": " to the world."}, {"start": 74.24000000000001, "end": 77.28, "text": " I guess Deep Mind might make the same claim."}, {"start": 77.28, "end": 81.6, "text": " But Alpha Star, it's like too dangerous of a Starcraft player."}, {"start": 81.6, "end": 90.44, "text": " Yeah, stop this, publish your research in open access, Nature or journals like these"}, {"start": 90.44, "end": 94.68, "text": " four computer science, it's a remnant of the last century."}, {"start": 94.68, "end": 99.88, "text": " So go on and join everyone else in distributing your knowledge."}, {"start": 99.88, "end": 104.48, "text": " Alright, rant over, let's jump in into this article."}, {"start": 104.48, "end": 110.84, "text": " So the article describes how to train a reinforcement learning agent to play the game of Starcraft"}, {"start": 110.84, "end": 112.16, "text": " 2."}, {"start": 112.16, "end": 117.2, "text": " So Starcraft 2 is this game for everyone who doesn't know."}, {"start": 117.2, "end": 118.96000000000001, "text": " Just very quickly explain the game."}, {"start": 118.96000000000001, "end": 125.12, "text": " Starcraft 2 is a real time strategy game and you're kind of in this top third person view"}, {"start": 125.12, "end": 130.2, "text": " and you control your units and the goal is kind of to move your units around."}, {"start": 130.2, "end": 135.64, "text": " And first of all build up buildings and using those buildings you can then produce more"}, {"start": 135.64, "end": 137.79999999999998, "text": " and more diverse units."}, {"start": 137.79999999999998, "end": 143.0, "text": " And ultimately you want to 
kind of produce some sort of army that can go to the opponent"}, {"start": 143.0, "end": 145.95999999999998, "text": " and destroy the opponent's base."}, {"start": 145.95999999999998, "end": 152.23999999999998, "text": " So you control all of this on a computer using a mouse and a keyboard and Starcraft is notable"}, {"start": 152.23999999999998, "end": 154.32, "text": " for being very balanced."}, {"start": 154.32, "end": 157.76, "text": " So there are three different races you can play."}, {"start": 157.76, "end": 163.56, "text": " So first are the Terran which are kind of human-ish."}, {"start": 163.56, "end": 170.76, "text": " They have marines and tanks and helicopters I believe and things like this."}, {"start": 170.76, "end": 177.88, "text": " Then the Protoss are some sort of alien race that are super advanced so they can teleport"}, {"start": 177.88, "end": 182.2, "text": " and have energy shields and things like that."}, {"start": 182.2, "end": 190.92, "text": " And then the Zerg and the Zerg are kind of a key ground dwelling creatures that infect"}, {"start": 190.92, "end": 195.76, "text": " things and spread like a disease."}, {"start": 195.76, "end": 200.83999999999997, "text": " So the interesting thing here is compared to other real time strategy games is that the"}, {"start": 200.83999999999997, "end": 204.07999999999998, "text": " three races they play very different."}, {"start": 204.07999999999998, "end": 209.51999999999998, "text": " So the game is almost a different game if you play as a different race."}, {"start": 209.52, "end": 216.0, "text": " But they are so well balanced that almost any match up is kind of a fair game between"}, {"start": 216.0, "end": 218.36, "text": " equally skilled players."}, {"start": 218.36, "end": 220.64000000000001, "text": " So that's makes Starcraft pretty unique."}, {"start": 220.64000000000001, "end": 226.64000000000001, "text": " Also pretty unique is the very, very high action per minute rates that pro players get"}, {"start": 226.64000000000001, "end": 229.56, "text": " like they play this insanely fast."}, {"start": 229.56, "end": 235.16000000000003, "text": " So the game lasts about 10 to 15 minutes and as I said the goal is to destroy the enemy"}, {"start": 235.16000000000003, "end": 236.16000000000003, "text": " base."}, {"start": 236.16, "end": 243.8, "text": " So to train an RL agent to play this is very hard because like the action space is very"}, {"start": 243.8, "end": 244.8, "text": " high."}, {"start": 244.8, "end": 248.76, "text": " You have to target with your mouse part of the screen."}, {"start": 248.76, "end": 251.2, "text": " You have to look what is on the screen."}, {"start": 251.2, "end": 253.64, "text": " What can I do?"}, {"start": 253.64, "end": 257.36, "text": " There is this mini map down here."}, {"start": 257.36, "end": 261.4, "text": " Our things you could do, their opponents you can target and so on."}, {"start": 261.4, "end": 270.32, "text": " So all of this is very, very difficult for an RL agent and at the end right after whatever"}, {"start": 270.32, "end": 273.79999999999995, "text": " 10 minutes right you play, play, play, play, play."}, {"start": 273.79999999999995, "end": 279.79999999999995, "text": " After 10 minutes you either win or you lose and the RL agent has to figure out which of"}, {"start": 279.79999999999995, "end": 285.79999999999995, "text": " the actions that I did during those 10 minutes right is was this one was this one which lead"}, {"start": 285.79999999999995, "end": 
287.88, "text": " to me winning or losing."}, {"start": 287.88, "end": 295.76, "text": " These are very hard problems for reinforcement learning and the deep mind has combined"}, {"start": 295.76, "end": 300.84, "text": " almost every trick in the book known so far to RL to achieve this."}, {"start": 300.84, "end": 308.12, "text": " Now the main contribution I'd say here that is novel is what is called league training"}, {"start": 308.12, "end": 310.92, "text": " and we'll get to that."}, {"start": 310.92, "end": 319.48, "text": " So first of all if you don't know what reinforcement learning is reinforcement learning is basically"}, {"start": 319.48, "end": 326.16, "text": " what I just described you have an input right which is could be this thing here and you"}, {"start": 326.16, "end": 331.12, "text": " have a set of actions you have a set of actions that you can do which the set of actions"}, {"start": 331.12, "end": 336.20000000000005, "text": " here is anywhere you can click right you can click anywhere on the screen and you have"}, {"start": 336.2, "end": 343.2, "text": " to do this over and over and over and over again until you either win or you lose and from"}, {"start": 343.2, "end": 348.56, "text": " that you will see you will at the end receive yeah you win or you lose and then you have"}, {"start": 348.56, "end": 350.84, "text": " to kind of learn to play the game."}, {"start": 350.84, "end": 355.84, "text": " So it's machine learning hardcore because you get minimal information and have to achieve"}, {"start": 355.84, "end": 358.03999999999996, "text": " a lot of things from it."}, {"start": 358.04, "end": 366.88, "text": " So the first thing that deep mind actually does is it does supervised learning and we'll"}, {"start": 366.88, "end": 374.56, "text": " get into how exactly the model works later but first thing deep mind does is it trains"}, {"start": 374.56, "end": 382.88, "text": " an agent to simply imitate humans right so you have human data and from the human data"}, {"start": 382.88, "end": 391.0, "text": " you so these are games played by humans good humans right not not people like me so"}, {"start": 391.0, "end": 397.84, "text": " these are games played with humans from a significantly high elo and the first thing"}, {"start": 397.84, "end": 400.15999999999997, "text": " you extract is this Z here."}, {"start": 400.15999999999997, "end": 405.76, "text": " Now Z is a is a called a statistics vector and as I understand it it's mainly the build"}, {"start": 405.76, "end": 411.04, "text": " order which means in which order do you build your buildings and units and this is very"}, {"start": 411.04, "end": 416.6, "text": " important in Starcraft this is a strategic decision where you say okay first I'm going"}, {"start": 416.6, "end": 423.28000000000003, "text": " to build three worker units this is like three workers worker worker and then I'm going"}, {"start": 423.28000000000003, "end": 426.32000000000005, "text": " to build a house and then I'm going to and so on."}, {"start": 426.32000000000005, "end": 434.6, "text": " So these are major strategic decisions that that you kind of have to make with minutes"}, {"start": 434.6, "end": 438.20000000000005, "text": " ahead of time to plan in advance."}, {"start": 438.2, "end": 444.96, "text": " And this is kind of a stays constant for the game so this is extracted and provided to"}, {"start": 444.96, "end": 446.56, "text": " the model as an input."}, {"start": 446.56, "end": 451.4, "text": " So what is the current 
strategy basically the current overall strategy."}, {"start": 451.4, "end": 457.76, "text": " The second thing that is extracted is this is at every time step the observation that"}, {"start": 457.76, "end": 465.44, "text": " the humans had so the screen that humans see and also the actions that the human did"}, {"start": 465.44, "end": 467.0, "text": " right."}, {"start": 467.0, "end": 475.8, "text": " So the human takes its mouse and clicks somewhere right this is supposed to be a mouse pointer"}, {"start": 475.8, "end": 483.12, "text": " and clicks here right and then the model this part here this is the model and this is the"}, {"start": 483.12, "end": 490.64, "text": " policy function of the model so the policy decides what to do right is trained to match"}, {"start": 490.64, "end": 492.76, "text": " the action that the human did."}, {"start": 492.76, "end": 499.76, "text": " So in essence first you train an agent to simply imitate humans and this you can do by supervised"}, {"start": 499.76, "end": 506.8, "text": " learning right this is classic machine learning each each step you have this input which is"}, {"start": 506.8, "end": 511.92, "text": " an image and you have the strategy you're trying to follow and from these two you're simply"}, {"start": 511.92, "end": 518.16, "text": " trying to match the action that human did assuming the human made a good decision."}, {"start": 518.16, "end": 524.16, "text": " So this is how you initialize right you don't start from scratch."}, {"start": 524.16, "end": 531.0799999999999, "text": " Now I have to say that even though this name is alpha star it has surprisingly little to"}, {"start": 531.0799999999999, "end": 539.52, "text": " do with alpha go or alpha zero that deep mind has done before mainly this is entirely model"}, {"start": 539.52, "end": 549.68, "text": " free reinforcement learning and goes more into the direction of classic classic deep RL"}, {"start": 549.68, "end": 554.0799999999999, "text": " and you can see with the human data you can already get pretty far so these down here"}, {"start": 554.0799999999999, "end": 562.3199999999999, "text": " are the leagues of starcraft and this here are percentiles of players and you see with"}, {"start": 562.3199999999999, "end": 568.68, "text": " the supervised training you can get almost you can get better than 80 85% of human players"}, {"start": 568.68, "end": 583.2399999999999, "text": " already right so pretty pretty impressive already simply by imitating humans now so the"}, {"start": 583.2399999999999, "end": 590.4799999999999, "text": " the way to to further improve this and let's actually go first into how the model looks"}, {"start": 590.4799999999999, "end": 597.5999999999999, "text": " like so down here they describe this model."}, {"start": 597.6, "end": 607.4, "text": " So the model is supposed to map from input to output so from the screen that the agency's"}, {"start": 607.4, "end": 615.24, "text": " right and some other things to what the agent is going to do to an action a."}, {"start": 615.24, "end": 621.12, "text": " If you simply do this at every time step then you have a game playing agent so first the"}, {"start": 621.12, "end": 624.0400000000001, "text": " question is of course how does this happen."}, {"start": 624.04, "end": 631.76, "text": " Now the input isn't only the thing that the agencies which is this the minimap and the"}, {"start": 631.76, "end": 637.8399999999999, "text": " the minimap I believe that's the minimap or the entire map."}, {"start": 
637.8399999999999, "end": 648.52, "text": " Well it's in essence it's a picture it is also a list of entities so the game engine"}, {"start": 648.52, "end": 654.84, "text": " extracts a list of entities and this can be inside the screen here and outside the"}, {"start": 654.84, "end": 663.1999999999999, "text": " screen for friendly so the assumption is the agent knows about all of its units and"}, {"start": 663.1999999999999, "end": 668.8, "text": " where they are and what their statistics are so in this entities thing for each entity"}, {"start": 668.8, "end": 675.64, "text": " you have a list of what is its health what is its type what is its position does it carry"}, {"start": 675.64, "end": 680.3199999999999, "text": " any items and so on all the things you need to know about this entity this is in this"}, {"start": 680.3199999999999, "end": 687.4, "text": " list of entities and along with that also opponent entities but only the ones that are on"}, {"start": 687.4, "end": 696.3199999999999, "text": " screen right so all of this goes into this list of entities and then a next features are"}, {"start": 696.3199999999999, "end": 701.4399999999999, "text": " scalar features and as I understand it's scalar features are things like what race are"}, {"start": 701.44, "end": 708.44, "text": " you playing currently what time is it in the game and so on so these are additional features"}, {"start": 708.44, "end": 718.2, "text": " and also baseline features and this this is mainly used to train the the value network"}, {"start": 718.2, "end": 722.96, "text": " and if you this is not going to make sense if you know nothing about reinforcement learning"}, {"start": 722.96, "end": 729.8800000000001, "text": " but one main contribution of this paper is or not contribution or but kind of thing"}, {"start": 729.88, "end": 735.6, "text": " that they claim is that for computing the value network they also use the observations"}, {"start": 735.6, "end": 741.16, "text": " so all of this for of the opponent player because you know this during training because"}, {"start": 741.16, "end": 747.0, "text": " you're doing self play and you don't need this value network during inference you can"}, {"start": 747.0, "end": 752.56, "text": " actually do this and this improves performance significantly all right so that's just"}, {"start": 752.56, "end": 762.9599999999999, "text": " for people who know RL very well everyone else don't don't worry too much about these things"}, {"start": 762.9599999999999, "end": 768.04, "text": " all right so these are the inputs the scalar features the entity and the minimap they"}, {"start": 768.04, "end": 772.7199999999999, "text": " go each one goes through separate encoders so the minimap goes through a resonant which"}, {"start": 772.7199999999999, "end": 779.4399999999999, "text": " is a convolutional network and the entities go through a transformer which is kind of"}, {"start": 779.44, "end": 784.8800000000001, "text": " a thing to it's it's appropriate to encode a set of entities right scalar features go"}, {"start": 784.8800000000001, "end": 792.12, "text": " through a classic feed forward network MLP all of these get combined here into a deep LSTM"}, {"start": 792.12, "end": 799.2800000000001, "text": " that goes over time now the deep LSTM is what really makes the strategy because each time"}, {"start": 799.2800000000001, "end": 807.24, "text": " step each time step screen like this is input into the into the thing that but the agent"}, {"start": 807.24, "end": 812.4, 
"text": " also needs to remember what did it do last steps two steps ago right this is important"}, {"start": 812.4, "end": 817.36, "text": " because you don't have full observability you need to know what did I do in the in the"}, {"start": 817.36, "end": 824.28, "text": " past and that's where the so if the last step you saw this screen and the step before"}, {"start": 824.28, "end": 830.48, "text": " you saw this screen right then all of this would go through this encoding step into the"}, {"start": 830.48, "end": 838.84, "text": " LSTM right so the LSTM will encode now over time all of these different steps and so"}, {"start": 838.84, "end": 844.6800000000001, "text": " you can kind of say all right if I have just started building a building I should probably"}, {"start": 844.6800000000001, "end": 849.8000000000001, "text": " not build the same building again even though I can't see it on the screen right because"}, {"start": 849.8000000000001, "end": 859.16, "text": " I know that three steps ago I did start building a build a building yeah so this is kind of"}, {"start": 859.16, "end": 867.4, "text": " the LSTM is basically where you integrate your strategy over time so from the LSTM you have"}, {"start": 867.4, "end": 876.24, "text": " to make two predictions you have to make a prediction of what to do this is the action"}, {"start": 876.24, "end": 881.76, "text": " and how valuable is your current state and how valuable is your current state this is"}, {"start": 881.76, "end": 887.8, "text": " called the value network usually this is a core component of deep reinforcement learning"}, {"start": 887.8, "end": 891.92, "text": " these two components one is called the policy which would be everything over here and what"}, {"start": 891.92, "end": 896.7199999999999, "text": " is called the value network which is called everything over here these are the things"}, {"start": 896.7199999999999, "end": 901.24, "text": " you need to do after critical learning and after critical learning is the current state"}, {"start": 901.24, "end": 906.0, "text": " of the art in deep or else so deep mind does nothing else here except as I said they use"}, {"start": 906.0, "end": 910.7199999999999, "text": " these baseline features for the value network but if you don't know what a value network"}, {"start": 910.7199999999999, "end": 916.56, "text": " is don't worry about it the important part for playing the game is actually the part"}, {"start": 916.56, "end": 923.56, "text": " over here the called the policy so first you need to do to decide what action you do and"}, {"start": 923.56, "end": 927.8399999999999, "text": " there are many action types in StarCraft as I already said you can build a building you"}, {"start": 927.8399999999999, "end": 931.64, "text": " can move a unit you can actually move the camera that's an action type right because"}, {"start": 931.64, "end": 936.4399999999999, "text": " you want to maybe see what's over here or over here or over here so that's an action"}, {"start": 936.4399999999999, "end": 945.28, "text": " you can do and if you have decided on what action you want to do you have to decide when"}, {"start": 945.28, "end": 950.9599999999999, "text": " do I do it so you see the action type once you figured it out it goes into the next neural"}, {"start": 950.9599999999999, "end": 958.4399999999999, "text": " network and that decides okay when do I do it when do I do this action so it specifies"}, {"start": 958.4399999999999, "end": 965.8399999999999, "text": " a delay then 
once you've decided what to do and when to do it it goes into the next neural"}, {"start": 965.8399999999999, "end": 973.8399999999999, "text": " network and that decides should I put this into the queue of actions because the agent here"}, {"start": 973.84, "end": 980.44, "text": " is limited to a certain number of actions per second and I think it's 22 actions per"}, {"start": 980.44, "end": 989.1600000000001, "text": " 5 second or something like this so in order to mimic human limitations so the there's"}, {"start": 989.1600000000001, "end": 994.5600000000001, "text": " a queue of actions to be executed and the agent needs to decide do I really want is this"}, {"start": 994.5600000000001, "end": 1000.96, "text": " action so important to put it into the queue all right if you have decided what to do"}, {"start": 1000.96, "end": 1008.2, "text": " when to do it whether you would like to do it at all right then you have to you have to"}, {"start": 1008.2, "end": 1014.4000000000001, "text": " say it goes into the next neural network and you have to say all right which units do I"}, {"start": 1014.4000000000001, "end": 1019.32, "text": " want to do it with right if you want to build a building you can have to choose one or many"}, {"start": 1019.32, "end": 1025.24, "text": " workers to do it I don't actually know how Starcraft works in this I'm a bit of a new but"}, {"start": 1025.24, "end": 1030.8400000000001, "text": " you have to you have to select units with which to do the action for most of the thing and"}, {"start": 1030.84, "end": 1037.48, "text": " there I like the use of a pointer network here so what a pointer network is is a network"}, {"start": 1037.48, "end": 1042.48, "text": " that can point to its own input it's sort of like an attention network but not really"}, {"start": 1042.48, "end": 1047.72, "text": " in a pointer network if you have a set of inputs like we have here so entity entity entity"}, {"start": 1047.72, "end": 1054.1599999999999, "text": " entity right all these entities and you can see the entity embedding the entity encoder"}, {"start": 1054.16, "end": 1061.6200000000001, "text": " actually has skip connections that go here right so this network directly gets these"}, {"start": 1061.6200000000001, "end": 1068.0, "text": " these entities as input it can then right you then you have a neural network on top of"}, {"start": 1068.0, "end": 1080.76, "text": " that neural network that neural network takes all of these things as an input and what the"}, {"start": 1080.76, "end": 1088.04, "text": " neural network will output is a pointer to one of these things right I can say look I"}, {"start": 1088.04, "end": 1094.32, "text": " point to this thing right here this is a called a pointer network and yeah as I said it's"}, {"start": 1094.32, "end": 1102.36, "text": " different from an attention network which might so an attention network is where you get"}, {"start": 1102.36, "end": 1106.8799999999999, "text": " a distribution actually you get a distribution in both cases there is a difference but we"}, {"start": 1106.88, "end": 1114.48, "text": " don't have to really time to go into it here but in essence with a pointer network you can"}, {"start": 1114.48, "end": 1119.64, "text": " select which of these entities you want to do something with all right now you've decided"}, {"start": 1119.64, "end": 1127.64, "text": " on which action when whether to cue it with which unit to do it now you have to decide"}, {"start": 1127.64, "end": 1133.16, "text": " for some actions 
for example if the action is attack or heal or something this target"}, {"start": 1133.16, "end": 1141.72, "text": " unit which unit you want to target or which which location on the map you want to target"}, {"start": 1141.72, "end": 1148.0400000000002, "text": " this is the target point here and you can see again here are skip connections from the"}, {"start": 1148.0400000000002, "end": 1155.0400000000002, "text": " entity encoder and from the spatial encoder to these things and while the target unit"}, {"start": 1155.0400000000002, "end": 1160.3200000000002, "text": " is an attention network that's this like a much like a pointer network you will kind of"}, {"start": 1160.32, "end": 1169.72, "text": " point to places in lists the target point is a deconvoluitional resonant what that means"}, {"start": 1169.72, "end": 1176.6399999999999, "text": " is so you have the spatial encoder here will embed the mini map so there will be a neural"}, {"start": 1176.6399999999999, "end": 1182.48, "text": " network right here actually let's throw the neural network in this color right here it will"}, {"start": 1182.48, "end": 1189.9199999999998, "text": " give you a an embedding of that right and that's what you what you feed into that's what"}, {"start": 1189.92, "end": 1198.0, "text": " you feed for example into the LSTM but then what you do is you have a deconvoluitional network"}, {"start": 1198.0, "end": 1206.3600000000001, "text": " which again produces a mini map but on this mini map there there's not it's not the original"}, {"start": 1206.3600000000001, "end": 1214.48, "text": " mini map but it's kind of a distribution of locations so it's a log here here do I want"}, {"start": 1214.48, "end": 1222.96, "text": " to point all right so the this neural network is responsible for producing this dot on the"}, {"start": 1222.96, "end": 1228.84, "text": " mini map is saying okay I know what to do when to do it with which units to do it and so"}, {"start": 1228.84, "end": 1238.56, "text": " on I want to do it right here on the mini map okay and now you have it right you go from"}, {"start": 1238.56, "end": 1244.04, "text": " the input which are these things the mini map then to tease and so on to what do I want"}, {"start": 1244.04, "end": 1250.56, "text": " to do where when with which units and so on right this is called a policy and it's extremely"}, {"start": 1250.56, "end": 1257.6399999999999, "text": " complicated every one of these boxes here is a neural network and you can see it's very"}, {"start": 1257.6399999999999, "end": 1263.2, "text": " it's it's a lot to train and they of course they have a lot of resources since they are"}, {"start": 1263.2, "end": 1275.4, "text": " deep-mind but that's the the main thing all right they have a few tricks to train this and"}, {"start": 1275.4, "end": 1284.8400000000001, "text": " we won't go too much into this but one of the tricks is V-trace from the Impala paper"}, {"start": 1284.84, "end": 1293.3999999999999, "text": " one of another trick is up go up going policy update and a third trick is TD lambda learning"}, {"start": 1294.1999999999998, "end": 1301.56, "text": " here and all of these are kind of improvements onto classic acrocritic reinforcement learning"}, {"start": 1301.56, "end": 1308.28, "text": " style like A2C or A3C if you are interested then you can you know look into these things"}, {"start": 1308.28, "end": 1321.6399999999999, "text": " so that's how they train it and the question now is what's the protocol for training it we saw"}, 
{"start": 1321.6399999999999, "end": 1327.48, "text": " okay there is supervised learning cool then there is reinforcement learning all right but you"}, {"start": 1327.48, "end": 1332.44, "text": " can't just apply and this is in the reinforcement learning this is what we said you get kind of"}, {"start": 1332.44, "end": 1339.48, "text": " a reward and the reward goes into the TD lambda and V-trace and and up going policy update to train"}, {"start": 1339.96, "end": 1348.04, "text": " the value function and the policy but the special thing that this paper introduces is what's called"}, {"start": 1348.04, "end": 1356.8400000000001, "text": " leak training now in in papers like alpha go or alpha zero what had been done is called self-play"}, {"start": 1356.84, "end": 1363.72, "text": " and self-play basically means you have an agent you have an agent right you have this how an"}, {"start": 1363.72, "end": 1370.52, "text": " an agent that's this is supposed to be an artificial intelligence right how to make it artificial"}, {"start": 1371.08, "end": 1381.8799999999999, "text": " okay it has a little hat right a funky hat it's a robot and the robot will play a copy of itself"}, {"start": 1381.88, "end": 1390.8400000000001, "text": " right and the copy might be slightly different but the it basically these two these two play"}, {"start": 1390.8400000000001, "end": 1396.5200000000002, "text": " each other and thereby become better and better and better and you can see this like over time"}, {"start": 1396.5200000000002, "end": 1402.68, "text": " as as the purple one gets better the blue one gets better as well because they they kind of play"}, {"start": 1402.68, "end": 1408.8400000000001, "text": " against each other and when one falls behind right when one falls behind then they simply copy over"}, {"start": 1408.84, "end": 1415.1599999999999, "text": " from the other one they basically copy the other one and then they catch up again right they catch"}, {"start": 1415.1599999999999, "end": 1422.84, "text": " out right and they continue competing so by competing against each other they get better and"}, {"start": 1423.6399999999999, "end": 1429.8, "text": " this is called self-play now people have noticed this kind of leads to instabilities because you can"}, {"start": 1429.8, "end": 1437.8799999999999, "text": " get kind of trapped trapped in cycles like rock paper scissors cycles so what they do is they will"}, {"start": 1437.88, "end": 1443.64, "text": " actually as they get better so this is the first version right and the second version"}, {"start": 1444.44, "end": 1451.8000000000002, "text": " they are a bit better now so they have bigger hats right and here bigger"}, {"start": 1453.64, "end": 1461.4, "text": " bigger larger hats right and down here they are even better so they they have like ginormous hats"}, {"start": 1461.4, "end": 1466.1200000000001, "text": " but they might have some weaknesses because they only play against each other right so this is"}, {"start": 1466.12, "end": 1474.12, "text": " the same players but over time what they will do is they will actually play occasionally play"}, {"start": 1474.12, "end": 1482.6799999999998, "text": " old versions of the other player or of themselves right occasionally the new versions will fall"}, {"start": 1482.6799999999998, "end": 1489.56, "text": " back and play old versions or not only the current versions of the agent or old versions of themselves"}, {"start": 1489.56, "end": 1498.28, "text": " right so this is called fictitious 
self-play and that you always play the you know not only play"}, {"start": 1498.28, "end": 1503.6399999999999, "text": " the your current kind of opponent or your current self I mean it's the same anyway because you keep"}, {"start": 1503.6399999999999, "end": 1510.6, "text": " copying the weights you also play the old ones and this paper goes a step further and says actually"}, {"start": 1510.6, "end": 1519.6399999999999, "text": " we we do this but we want to prioritize the good ones so for example we know that we know that"}, {"start": 1519.6399999999999, "end": 1526.1999999999998, "text": " the current ones are good right but we know that this particular one was also pretty good"}, {"start": 1526.84, "end": 1535.8799999999999, "text": " so far so we are we keep making we keep making these these new ones play against this one more often"}, {"start": 1535.88, "end": 1543.8000000000002, "text": " and this has led to kind of an improvement in these kind of self-play algorithms and the the"}, {"start": 1543.8000000000002, "end": 1551.48, "text": " real new part of this alpha star paper is the fact that they do this league training and in the"}, {"start": 1551.48, "end": 1559.0, "text": " league training this this is what it looks like but I find this graphic rather confusing I'd rather"}, {"start": 1559.0, "end": 1566.36, "text": " explain it like something like this all right so there is your current your current strategy"}, {"start": 1567.16, "end": 1577.48, "text": " and you have a hat right and you do all of the you do all of the all of the I play against myself"}, {"start": 1577.48, "end": 1585.24, "text": " with the smaller hat thing right I play against past versions of myself fine but then you also do"}, {"start": 1585.24, "end": 1598.1200000000001, "text": " you have what's called exploiters and exploiters and exploiters is a let's call it a triangle hat"}, {"start": 1598.1200000000001, "end": 1606.36, "text": " because it's very evil what it does is it specifically targets only the current good agent right"}, {"start": 1606.36, "end": 1613.64, "text": " so this this agent right here is tasked with playing old versions of itself and playing the"}, {"start": 1613.64, "end": 1621.8000000000002, "text": " exploiter both at the same time but the the exploiter is only tasked with playing this thing so"}, {"start": 1623.24, "end": 1629.72, "text": " what it can do is it can specialize in exploiting whatever weaknesses this player has of course the"}, {"start": 1629.72, "end": 1636.3600000000001, "text": " hope is that the this player will become better in response because there's a player trying to"}, {"start": 1636.36, "end": 1643.0, "text": " exploit it right so every and as this as this player becomes better than this player here it's"}, {"start": 1643.0, "end": 1649.9599999999998, "text": " reinitialized and tries to find new weaknesses right so as this as this one continues to learn"}, {"start": 1649.9599999999998, "end": 1657.6399999999999, "text": " so the exploiters they are initialized you can see this here so these are called the main agents"}, {"start": 1657.6399999999999, "end": 1662.52, "text": " and you can see they play against each other right one of them they play against each other"}, {"start": 1662.52, "end": 1668.92, "text": " they play against past versions of themselves so these are past versions of themselves"}, {"start": 1669.8799999999999, "end": 1674.28, "text": " but then there are these main exploiters and the main exploiters they're constantly"}, {"start": 
1674.28, "end": 1680.2, "text": " reinitialized from human data right you can see this here they're reinitialized and they"}, {"start": 1680.92, "end": 1686.52, "text": " only play against these main players right they don't have to deal with any of the past"}, {"start": 1686.52, "end": 1692.44, "text": " players or playing against themselves stuff they only try to exploit the main players and thereby"}, {"start": 1692.44, "end": 1698.04, "text": " the main players get better once they get better than an exploiter they are reinitialized"}, {"start": 1698.92, "end": 1704.76, "text": " so the exploiters are reinitialized to find new exploits of the main agents the third component"}, {"start": 1705.64, "end": 1711.56, "text": " is what's called a league exploiter and the league exploiter is the following so the league let's"}, {"start": 1711.56, "end": 1723.72, "text": " the league exploiter here and its hat is a wavy hat and what the league exploiter does is it plays"}, {"start": 1723.72, "end": 1731.96, "text": " against past versions of itself and others so it does play against the league exploiter sorry with"}, {"start": 1731.96, "end": 1742.92, "text": " smaller wavy hat it also plays against this thing by the way this this here also plays against past"}, {"start": 1742.92, "end": 1749.16, "text": " versions of this and of everything else you can see here the past version arrows it goes against"}, {"start": 1749.16, "end": 1754.92, "text": " all past players so this this represents all the past players that ever existed and so the"}, {"start": 1754.92, "end": 1765.3200000000002, "text": " the so does the so here but also against past versions of this of this main exploiter here"}, {"start": 1766.52, "end": 1772.68, "text": " but the important thing is the current main exploiter doesn't play past versions of itself"}, {"start": 1773.24, "end": 1780.76, "text": " right so this also plays this and this place this and this place this and this also plays this"}, {"start": 1780.76, "end": 1788.36, "text": " so the league exploiter they they do take part in this called league business like playing against"}, {"start": 1788.36, "end": 1802.04, "text": " past versions of all the players but it only plays against the main exploiters and this is a"}, {"start": 1802.04, "end": 1807.48, "text": " thing that I find missing here honestly I don't know if I don't understand this but I'm pretty sure"}, {"start": 1807.48, "end": 1814.68, "text": " I do like these also play these and that's an arrow missing in the in the drawing the league"}, {"start": 1814.68, "end": 1818.84, "text": " exploiters play the main agents but the main difference between the league exploiters and the"}, {"start": 1818.84, "end": 1824.6, "text": " main agents is the league exploiters they don't play themselves right there's no there's no"}, {"start": 1824.6, "end": 1831.0, "text": " playing themselves on the league exploiters so the league exploiters what they can do is they can"}, {"start": 1831.0, "end": 1839.48, "text": " find weaknesses of the entire league and kind of train train the by playing against the main"}, {"start": 1839.48, "end": 1848.04, "text": " opponents using those found weaknesses you bet that the main the main agents will get better"}, {"start": 1848.04, "end": 1855.96, "text": " against those major weaknesses of the entire league right so the main agents for first of all they"}, {"start": 1855.96, "end": 1861.88, "text": " get better by playing the main exploiters because the main exploiters are mainly 
trying to exploit"}, {"start": 1861.88, "end": 1869.0, "text": " the main agents the main agents also get better by playing the league exploiters because the league"}, {"start": 1869.0, "end": 1876.1200000000001, "text": " exploiters find weaknesses of the entire league right so and the main agents they also get better"}, {"start": 1876.12, "end": 1885.4799999999998, "text": " by playing each other. So that makes these these main agents kind of you can say they're"}, {"start": 1885.4799999999998, "end": 1891.2399999999998, "text": " trained against everything under the sun, right, against any possible exploit that can"}, {"start": 1891.2399999999998, "end": 1898.84, "text": " be found either in selves or generally and thereby they get really good at start craft because"}, {"start": 1898.84, "end": 1904.56, "text": " they can counter pretty much everything. So this is how league training works and this is what"}, {"start": 1904.56, "end": 1912.44, "text": " I feel is the main contribution of this paper to the reinforcement learning world. Now they do an"}, {"start": 1912.44, "end": 1921.76, "text": " ablation study here. You can see where where where this ends up. So these final agents here they end up"}, {"start": 1921.76, "end": 1932.3999999999999, "text": " in Grandmaster level star craft and beat 99.0 some percent of human players. So really really good."}, {"start": 1932.4, "end": 1942.0, "text": " They do an ablation study of all of the tricks they use. So this is pretty much all tricks they"}, {"start": 1942.0, "end": 1950.96, "text": " use and you can see here this includes these league composition, right, what happens if we only"}, {"start": 1950.96, "end": 1956.72, "text": " have main agents, then main exploiters, league exploiters and you can see the elo going up."}, {"start": 1956.72, "end": 1967.68, "text": " Then you can see okay multi agent learning how much does this fictitious self play the fact that"}, {"start": 1967.68, "end": 1974.56, "text": " we prioritize to strong players and so on. How much does this help and you again see the elo going up."}, {"start": 1976.0, "end": 1981.1200000000001, "text": " How much does it help that we use human data? How much does it use that we help that we use these"}, {"start": 1981.12, "end": 1990.56, "text": " different networks and so on. They have very good kind of ablation studies of how much each of"}, {"start": 1990.56, "end": 1997.6, "text": " the things help. Here they investigate what if we didn't have a camera interface. So what if we"}, {"start": 1997.6, "end": 2005.52, "text": " could see the entire game at once and not only kind of the opponents that are within the camera"}, {"start": 2005.52, "end": 2013.04, "text": " and what if we didn't need to move the camera. They investigate the off policy learning corrections"}, {"start": 2013.04, "end": 2020.0, "text": " that we mentioned and so on. So I find this very cool that they do these huge ablation studies"}, {"start": 2020.0, "end": 2026.8799999999999, "text": " to show really how much each of these tricks that they use helps in generating their superior"}, {"start": 2026.88, "end": 2037.1200000000001, "text": " performance. And here you can see how these agents develop. So overtraining it, they have a massive"}, {"start": 2037.1200000000001, "end": 2043.2, "text": " infrastructure and they train for days. You can see this here, but you can see that the main agents"}, {"start": 2043.2, "end": 2049.84, "text": " just get better and better and better and better. 
While the main exploiters of course they stay"}, {"start": 2049.84, "end": 2056.8, "text": " the same, but they are they kind of keep getting re-initialized. So this main agent is trying to"}, {"start": 2056.8, "end": 2065.6800000000003, "text": " exploit these, sorry, these main exploiters trying to exploit these main agents. This one is trying"}, {"start": 2065.6800000000003, "end": 2070.96, "text": " to exploit these one. They're not by themselves really good agents, but they're simply trying to"}, {"start": 2070.96, "end": 2077.44, "text": " to find and exploit weaknesses of the main agents. Likewise, these league exploiters, they do get"}, {"start": 2077.44, "end": 2085.28, "text": " better with the league, but they are only concerned with exploiting current and past versions of"}, {"start": 2085.28, "end": 2090.88, "text": " the league, right? Also to make the main agents better. So everything is geared towards making"}, {"start": 2090.88, "end": 2101.36, "text": " these main agents better. And you can see it actually works. They have some analysis of which"}, {"start": 2102.0800000000004, "end": 2108.6400000000003, "text": " builds, which units these agents build. I'm not too versed in StarCraft to comment on this,"}, {"start": 2108.64, "end": 2117.7599999999998, "text": " but all in all, I find this to be a very cool paper. And I find it to be described fairly clear"}, {"start": 2117.7599999999998, "end": 2125.68, "text": " what they do. Though they do not release the source code. They release some kind of pseudo code."}, {"start": 2125.68, "end": 2133.6, "text": " But the analysis and the ablations are very good. The results are let's say questionable because"}, {"start": 2133.6, "end": 2143.7599999999998, "text": " of course, you can't compare machines to humans, especially in a game where you have to make"}, {"start": 2143.7599999999998, "end": 2150.96, "text": " quick actions. Even if you limit the actions, they do this here. So they have this monitoring"}, {"start": 2150.96, "end": 2160.88, "text": " layer which limits the actions and introduces delay and so on. But still, it's not the same as a"}, {"start": 2160.88, "end": 2169.84, "text": " human who might not always be able to do these 22 actions per five seconds. If something quick"}, {"start": 2169.84, "end": 2175.36, "text": " happens, they may need to have some kind of relaxation phase and so on. But they try with these"}, {"start": 2175.36, "end": 2183.12, "text": " kind of delays and action limits. They try to model this, these kind of limitations. So that's"}, {"start": 2184.1600000000003, "end": 2190.0, "text": " yeah, that's, I find this as fair as possible. This is what I find kind of problematic. So"}, {"start": 2190.0, "end": 2196.4, "text": " the own units, as I said, the agent can also see the ones that are outside the camera. And"}, {"start": 2197.84, "end": 2204.4, "text": " that seems kind of shady because of course, you can claim humans can do whatever command groups"}, {"start": 2204.4, "end": 2214.24, "text": " to also control units outside the camera, but it's not really the case. So that's sort of a"}, {"start": 2214.24, "end": 2223.04, "text": " distinct advantage that the machine has. But yeah, in any case, I find it to be very well done."}, {"start": 2223.04, "end": 2231.2, "text": " And I hope this made it a bit clear what the exact contributions are. And with that,"}, {"start": 2231.2, "end": 2243.68, "text": " have a fun time playing against Alpha Star. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=kOy49NqZeqI | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures | Policy Gradient RL on a massively distributed scale with theoretical guarantees!
Abstract:
In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have developed a new distributed agent IMPALA (Importance Weighted Actor-Learner Architecture) that not only uses resources more efficiently in single-machine training but also scales to thousands of machines without sacrificing data efficiency or resource utilisation. We achieve stable learning at high throughput by combining decoupled acting and learning with a novel off-policy correction method called V-trace. We demonstrate the effectiveness of IMPALA for multi-task reinforcement learning on DMLab-30 (a set of 30 tasks from the DeepMind Lab environment (Beattie et al., 2016)) and Atari-57 (all available Atari games in Arcade Learning Environment (Bellemare et al., 2013a)). Our results show that IMPALA is able to achieve better performance than previous agents with less data, and crucially exhibits positive transfer between tasks as a result of its multi-task approach.
Authors: Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymir Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, Koray Kavukcuoglu
https://arxiv.org/abs/1802.01561
https://github.com/deepmind/scalable_agent | Hi there! Today we're looking at IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures, by Lasse Espeholt, Hubert Soyer, Remi Munos et al. So this paper deals with a new architecture for deep reinforcement learning, specifically distributed deep reinforcement learning. That means settings where you go beyond one single machine, or beyond one single accelerator like a GPU. I want to introduce this by showing you this task here. This is called the DeepMind Lab, and the DeepMind Lab is a kind of 3D environment, as you can see, or these are screenshots. There are very different goals: some of these, as you can see, are labyrinth-style things where you have to collect apples, some are platformers where you, I guess, have to jump around and so on, or find objects. So DeepMind introduced this as a reinforcement learning environment, and the agent, as you can see here, has a camera, it perceives pixels, and it can get rewards for performing actions. The actions it can perform: it can walk back and forth, it can jump, pick and crouch, it can rotate. So this is a limited set of actions, but it can move around in this 3D world, and it needs to achieve some goals. Usually this is a good setting for reinforcement learning, and this paper doesn't do a whole lot of new things in terms of reinforcement learning, but it does a lot of things to make it work in a distributed setting. So usually what you would like to do is something like A2C. A2C is advantage actor-critic learning, and it's a very successful algorithm in reinforcement learning. We won't go into it much here, but the basic elements are that you have two things. You have a policy, usually called pi: you input your current state, so your current observation at time t, and an action you want to score, action a. Now, as we saw before, you can walk left, walk right and so on, so you might have 10 actions or so. So in here you would put action 1 or action 2 or action 3, and for these you would get a probability distribution over the actions. So in this particular state, each time with the same state, you would get a distribution, something like this, and here you should probably go with action 3. That's your policy function. The policy function pi tells you, in this particular state, which action you should take how often; it gives you a distribution. The second thing you want is what's called a value function. The value function V, capital V usually, takes your state as input and outputs what the value of that state is, and that's usually written as a lowercase v. To see what the value of a state means, say you're in a maze; I'm going to draw a maze from the top here. So here is the goal, and let's say you are right here, the green dot, and you have the choice of going forward, to the right, or to the left. This would be your policy: you would ask your policy, and a1 would maybe be go forward, a2 go to the left, a3 to the right. So your policy would decide what to do. Your value function, however, would score each of the states, so where you are plus where you could go. So basically, for each state in the system, it will give you a value. In this maze, it would probably give a very high value to this point, because you're very close to the goal; this point is probably not so good, and this one is very bad, because going into the corner actually moves you farther away from the goal.
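To make these two objects a bit more concrete, here is a minimal sketch of such a policy-plus-value network in PyTorch. This is only an illustration, not the paper's model: the tiny MLP, `obs_dim=16` and `num_actions=10` are placeholder choices, and the actual IMPALA agents use convolutional (and LSTM) networks on pixel observations.
```python
import torch
import torch.nn as nn

class PolicyAndValue(nn.Module):
    """Tiny actor-critic pair: a distribution pi(a|x) and a scalar value V(x)."""
    def __init__(self, obs_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, num_actions)  # logits of pi(a|x)
        self.value_head = nn.Linear(hidden, 1)              # V(x)

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        dist = torch.distributions.Categorical(logits=self.policy_head(h))
        value = self.value_head(h).squeeze(-1)
        return dist, value

# Usage: ask the policy which action to take and read off the state value.
net = PolicyAndValue(obs_dim=16, num_actions=10)
obs = torch.randn(1, 16)                 # a fake observation
dist, value = net(obs)
action = dist.sample()                   # "which action, how often": a distribution
print(action.item(), value.item())
```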
So if your value function is trained well, you can also use it to assess your situation. The value function, for each state, gives you a numerical value of how good that state is in terms of reaching your goal. And the A2C algorithm deals with the interplay of these two. A2C actually uses both of them in an interplay, so it will use one to teach the other one, and this interplay between them makes for a very successful reinforcement learning algorithm. Now, the way A2C does it is, as you can see here, there are two variants, one batched over steps and one over trajectories, but in essence it has to run these episodes, and these here are steps in the episodes. Let's say an episode has four steps before it can do the learning part, and the learning part is here, the orange thing. Once it has done a step of learning, it has to run episodes again, and then it can do learning again. That's because of a limitation of this approach, which is called on-policy learning. In on-policy learning, you always want your update step, which is the orange part, to be fed with data, so all of these steps here go into the update step, and it's necessary that the steps you make the updates from are computed with the most current version of the agent. So the agent will go into the world and make some steps using its neural network. Maybe I should explain. The agent is this box, and the agent has this policy, and with this policy, as we saw, it will go and interact with the world outside of itself, and the world will give back observations, and it will then interact again. So first thing: move a step forward. And the world gives back: aha, you are no longer here, you've moved here. Then: now I want to move to the left. And the world says: okay, you're no longer here, you've moved one to the left. What's on the right here are the observations, and on the left here are the actions. And for A2C, it's necessary that we always have a current version of the policy generating these steps in order to be able to learn from them, and then the next steps also need to be current to be learned from. Now, there have been attempts to decentralize this, and this is exactly what IMPALA does. IMPALA splits this into multiple workers; you can think of them as different machines. So there is a split here, and these are called actors, and this is called a learner. The actors will go ahead and run episodes on their own, and once in a while they will communicate those episodes to the learner, and the learner will continuously learn. So these orange learning steps can be made in much quicker succession, and don't have to be synchronized as in A2C. Here is another way of seeing this; we'll just concentrate on the left thing here. So there is a learner and there are actors, and every now and then the actor syncs its model from the learner; these are different machines, so this synchronization happens over the network.
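As a toy illustration of this actor-learner decoupling, here is a small sketch with Python threads and a queue. It is not DeepMind's distributed implementation: the queue, the version counter and the fake rewards are all invented for the example, and a real actor would step an environment and also record the behaviour policy's probabilities mu(a|x) for every action.
```python
import queue
import random
import threading

trajectory_queue = queue.Queue(maxsize=64)   # actors push unrolls, the learner pops them
learner_weights = {"version": 0}             # stand-in for the learner's policy parameters

def actor(actor_id: int, unroll_length: int = 20) -> None:
    while True:
        behaviour_version = learner_weights["version"]   # sync a (possibly stale) policy copy
        # A real actor would now step its environment with that fixed behaviour
        # policy mu and record observations, actions, rewards and mu(a|x) for
        # every step; here we just fake the rewards.
        unroll = {
            "actor": actor_id,
            "behaviour_version": behaviour_version,
            "rewards": [random.random() for _ in range(unroll_length)],
        }
        trajectory_queue.put(unroll)          # blocks when the queue is full

def learner(batch_size: int = 4, num_updates: int = 10) -> None:
    for _ in range(num_updates):
        batch = [trajectory_queue.get() for _ in range(batch_size)]
        lag = [learner_weights["version"] - u["behaviour_version"] for u in batch]
        # A real learner would compute V-trace-corrected gradients here to make
        # up for that lag; we only bump a version counter.
        learner_weights["version"] += 1
        print(f"update {learner_weights['version']:2d}, behaviour-policy lag in batch: {lag}")

for i in range(3):
    threading.Thread(target=actor, args=(i,), daemon=True).start()
learner()
```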
Every now and then the actor gets an update of the newest policy network, and then it will just go ahead and run episodes using that policy: episode, episode, episode, step by step, without interfering with anything else. Once it has run an episode or several, it communicates them back to the learner. If all the actors do this, the learner gets a whole bunch of these episodes and can learn from all of them simultaneously, and it can do so in very fast succession, as you see here. So the work is split. Now of course you run into a problem. Namely, as we saw with the A2C algorithm, this type of reinforcement learning basically requires that you always run the episode with the current model, and that's not the case here. The actor syncs the parameters once in a while, but then it runs these episodes, and while it runs them, the learner has in the meantime continually been updating the model, so the actor has an old model. These episodes here are run with an old model, and the learner, if it tries to learn from them, must correct for that fact. The big theoretical contribution of this paper is how to correct for the fact that the data you learn from comes from an outdated policy model, and this is what's called the V-trace correction. So without going too much into the details here, V-trace correction happens as follows. You define what are called V-trace targets, and these V-trace targets are basically the targets that you train your value function towards. The value function, as we discussed before, is the thing that tells you how good each state is, and these are the targets you train it towards. You also, by the way, use these V-trace corrections in the policy updates. They are defined as follows. The V-trace target for step s is the value function at step s plus this correction term, and I'm going to break the correction down some more. So the v at the current s is your value function plus a sum over all future steps, where this is a discount factor and this is a kind of delta from one step to the next. So you're in an episode, you've made some steps, and let's say we are here; this is s. Your little v_s will be whatever your value function says at s, plus a correction for each step that you make going into the future like this. The main part of these is this term here, which is basically the reward at the step plus the difference of the value functions of the steps after it. What V-trace introduces now is this bit here, and these c_i are computed as such. All of this is very nested; there is a big product here, it's a very nested thing. But at the very core of it, you can see the following: these V-trace corrections are a ratio between pi and mu, where pi is the policy of the learner, that is the current policy, and mu is the policy that was used to generate the episode, and this ratio is truncated by a minimum, and usually the c-bar is one.
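Written out (this is my reading of the definition in the paper, so take the exact notation with a grain of salt), the n-step V-trace target for state x_s is roughly:
```latex
v_s = V(x_s) + \sum_{t=s}^{s+n-1} \gamma^{\,t-s}
      \Big( \prod_{i=s}^{t-1} c_i \Big)\, \delta_t V ,
\qquad
\delta_t V = \rho_t \big( r_t + \gamma V(x_{t+1}) - V(x_t) \big) ,
\qquad
\rho_t = \min\!\Big( \bar{\rho},\ \tfrac{\pi(a_t \mid x_t)}{\mu(a_t \mid x_t)} \Big),
\quad
c_i = \min\!\Big( \bar{c},\ \tfrac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \Big).
```
Here the bars denote the truncation levels, and c-bar equal to one is the case mentioned above.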
So let's consider what happens with these ratios. Let's say that mu is higher than pi for a given action a_i in state x_i. What does that mean? It means that in the past, when you ran this episode, you were in this maze: you're here, the goal, let's say, is down here, and the action you're considering is going over here. Now your mu, which is the old policy that the actor synced at some point, might say: this is very good, because it moves you towards the goal. But your pi (the learner has kept learning since the actor synchronized its weights) might know: wait, since you decided this, I have actually learned that this might not be such a good move, because there's a wall here and I'd rather go down here and then over here. So, since pi is low and mu is higher, it will downweight this action. This is how you correct for the fact that the weights are old: by downweighting wherever the old policy thought an action was worth more than the new policy does. That is how you make up for the fact that the new policy, you assume, knows better because it has learned more, and thereby you give lower weight to the data points where the policies have diverged a lot. That's the core of it. You can also think of it like this: maybe at this step you're at a point where the old policy that the actor has says we should do action one, but the new policy that the learner has, having learned more in the meantime, says we should do action two. If this is the case, then the whole rest of the episode is downweighted, because it is no longer current knowledge. And this is not just a heuristic; they actually prove that this comes with some guarantees, and in particular it reduces to the classic on-policy reinforcement learning algorithms if you assume that mu is always pi, so the current policy equals the old policy. So this was a bit of a lengthy explanation of the math behind it. At the end, what you do is the following. You train your value function using this update, and you can see here it's simply the gradient of the value function scaled by the term that contains this V-trace target. Then you update your policy in this direction, and this is the classic REINFORCE-style policy update, where here you have the gradient of the log-policy and here you have the weighting by the reward; specifically, here it is the reward plus this V-trace target, and this term here is a baseline, which reduces the variance. The final term is an entropy bonus, where you want to push the entropy of your policy up, such that the agent is biased towards exploring rather than only exploiting, if you know the classic exploration-exploitation dilemma. So that's what you do: you compute these V-trace targets and update your value function and policy according to these equations, and there you go. So what does IMPALA do specifically in this DeepMind Lab? They have two architectures: first this small architecture, and second this large architecture, and they just try them out on these tasks and measure how many frames per second they can get through. And you see here, compared to A3C on a single machine, they are bringing in a lot more frames per second. This is just on a single machine.
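Before looking at the distributed scale-up numbers, here is a small numpy sketch of the V-trace targets and update weights just described. This is my own illustration, not the released TensorFlow code: the function name, the toy unroll and the constants are all made up, and the truncation levels are simply set to one.
```python
import numpy as np

def vtrace_targets(rewards, values, bootstrap_value, log_pi, log_mu,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace targets v_s for one unroll of length T (all arrays of shape [T]).

    log_pi / log_mu are log-probabilities of the actions that were taken, under
    the learner's current policy pi and the actor's behaviour policy mu.
    """
    ratios = np.exp(log_pi - log_mu)
    rhos = np.minimum(rho_bar, ratios)           # truncated importance weights
    cs = np.minimum(c_bar, ratios)
    values_tp1 = np.append(values[1:], bootstrap_value)
    deltas = rhos * (rewards + gamma * values_tp1 - values)

    v_targets = np.zeros_like(values)
    acc = 0.0
    for t in reversed(range(len(rewards))):       # backward recursion over the unroll
        acc = deltas[t] + gamma * cs[t] * acc
        v_targets[t] = values[t] + acc
    return v_targets, rhos

# Toy unroll: the learner's pi now likes the taken actions less than mu did,
# so the corrections downweight these steps.
T, gamma = 5, 0.99
rewards = np.random.rand(T)
values = np.random.rand(T)
log_mu = np.log(np.full(T, 0.25))
log_pi = np.log(np.full(T, 0.10))
bootstrap = 0.5                                   # value estimate after the last step
v_targets, rhos = vtrace_targets(rewards, values, bootstrap, log_pi, log_mu, gamma)

# The value loss pushes V(x_s) towards v_s; the policy gradient for each step
# is weighted by rho_s * (r_s + gamma * v_{s+1} - V(x_s)); an entropy bonus on
# pi would be added on top to encourage exploration.
v_targets_tp1 = np.append(v_targets[1:], bootstrap)
value_error = v_targets - values
pg_advantage = rhos * (rewards + gamma * v_targets_tp1 - values)
print(value_error, pg_advantage)
```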
But then on distributed setting, the scale up also is very significant that they reach. That's because they don't have to wait for other things. They can just go ahead. Everything runs at full speed basically. And everything runs in parallel. And the fact that that some of the information is old is corrected by V trace. And the last thing I want to show is the wall clock time. I think this is the important plot in this deep mind lab on overall the tasks. The wall clock time compared to the score. You can see a three C while it does increase over time. The impala variance up here increase in much, much faster wall clock time. So that's the that's the paper. They have a lot of proofs and appendix which I'm not going to go over. If you want to give it a try, then it is it is not called impala on GitHub. It is called I think scalable agent. So on GitHub, it is called scalable agent. I think. But you'll find it if you search for impala gate top or something like this. Other than that, thanks for listening. And see you next time. | [{"start": 0.0, "end": 6.0, "text": " Hi there! Today we're looking at Impala, scalable distributed deep RL with"}, {"start": 6.0, "end": 12.040000000000001, "text": " important-sweighted actor-learner architectures by Loss\u00e9 Esp\u00e9hal, Hubert Sawyer,"}, {"start": 12.040000000000001, "end": 18.6, "text": " Remy Munos et al. So this paper deals with a new architecture for deep"}, {"start": 18.6, "end": 23.56, "text": " reinforcement learning, specifically distributed deep reinforcement learning."}, {"start": 23.56, "end": 30.279999999999998, "text": " So that means settings where you go beyond one single machine or beyond one single"}, {"start": 30.279999999999998, "end": 36.12, "text": " accelerator like a GPU. So I want to introduce this by showing you this task here."}, {"start": 36.12, "end": 42.16, "text": " This is called the DeepMind Lab and the DeepMind Lab is a kind of a 3D"}, {"start": 42.16, "end": 47.519999999999996, "text": " environment as you can see or these are screenshots where they're very"}, {"start": 47.519999999999996, "end": 51.32, "text": " different goals but some of this as you can see are kind of lab-erent style"}, {"start": 51.32, "end": 57.04, "text": " things where you have to collect apples, some are platformers where you I guess"}, {"start": 57.04, "end": 62.6, "text": " have to jump around and so on, find objects. So DeepMind introduced this as"}, {"start": 62.6, "end": 70.32, "text": " kind of a as an reinforcement learning environment and what you can do the"}, {"start": 70.32, "end": 76.08, "text": " agent as you can see here has a camera it perceives pixels and it can get"}, {"start": 76.08, "end": 81.8, "text": " rewards for performing actions. The actions it can perform is it can you know"}, {"start": 81.8, "end": 87.2, "text": " walk back and forth, it can jump, pick and crouch, it can rotate. So this is kind"}, {"start": 87.2, "end": 91.84, "text": " of a limited set of actions that it can do but it can move around in this 3D world"}, {"start": 91.84, "end": 98.08, "text": " and it needs to achieve some goals. So that usually this is kind of a good"}, {"start": 98.08, "end": 105.28, "text": " setting for reinforcement learning and this paper doesn't do a whole lot of"}, {"start": 105.28, "end": 109.84, "text": " new things in terms of reinforcement learning but it does a lot of things to"}, {"start": 109.84, "end": 116.36, "text": " kind of make it work in a distributed setting. 
So usually what you would like"}, {"start": 116.36, "end": 124.2, "text": " to do is something like A2C. A2C is advantage-actor-critical learning and it's"}, {"start": 124.2, "end": 128.56, "text": " a very successful algorithm in reinforcement learning. We won't go into this"}, {"start": 128.56, "end": 136.32, "text": " much here but basic elements of it is you have R2 things you have a policy and"}, {"start": 136.32, "end": 142.2, "text": " usually this is called pi sorry about that. Usually this is called pi a policy"}, {"start": 142.2, "end": 148.16, "text": " that you input your current state. So you're current observation at time T and"}, {"start": 148.16, "end": 157.72, "text": " you want to score an action, right? Action A. Now you might have maybe as we saw"}, {"start": 157.72, "end": 161.96, "text": " before you can walk left, walk right and so on. So you might have 10 actions or so."}, {"start": 161.96, "end": 170.88, "text": " So in here you would put action 1 or action 2 or action 3 and for this you"}, {"start": 170.88, "end": 176.68, "text": " would get a probability distribution over each action. So maybe in this particular"}, {"start": 176.68, "end": 184.0, "text": " state so each time with the same state. So you would get a distribution something"}, {"start": 184.0, "end": 190.92, "text": " like this, right? So here you should probably go with action 3. That's your"}, {"start": 190.92, "end": 197.2, "text": " policy function. Policy function pi tells you in this particular state which"}, {"start": 197.2, "end": 202.48, "text": " action should you take how often kind of gives you a distribution. The second"}, {"start": 202.48, "end": 208.64, "text": " thing you want is what's called a value function. So the value function V, capital"}, {"start": 208.64, "end": 217.67999999999998, "text": " V usually, you input your state and it will output, it will output what the value is"}, {"start": 217.67999999999998, "end": 223.83999999999997, "text": " of that state. And that's usually termed kind of as as a lowercase V. The value of"}, {"start": 223.83999999999997, "end": 229.07999999999998, "text": " the state is given if you're in a maze, right? I'm going to draw a maze from the top"}, {"start": 229.07999999999998, "end": 238.39999999999998, "text": " here, right? So here is the goal. And let's say"}, {"start": 238.4, "end": 247.16, "text": " you are, oops, you're right here, the green, right? And you have the choice of"}, {"start": 247.16, "end": 254.28, "text": " going forward to the right or to the left. Now this would be your policy here."}, {"start": 254.28, "end": 260.76, "text": " Your policy, you would ask your policy and a one would maybe be go forward, a two"}, {"start": 260.76, "end": 265.72, "text": " go to the left, a three to the right. So your policy would decide what to do."}, {"start": 265.72, "end": 272.0, "text": " Your value function, however, would decide in each of the states. So where you are"}, {"start": 272.0, "end": 276.6, "text": " plus where you could go here, here, here. So basically for each state in the system,"}, {"start": 276.6, "end": 282.32000000000005, "text": " it will give you a value in particular. In this case, it would probably give you a"}, {"start": 282.32000000000005, "end": 287.6, "text": " very, very high value here. Like, yeah, this is a good point because you're very close"}, {"start": 287.6, "end": 293.40000000000003, "text": " to the goal right here. 
This is probably not so good a point and this is a very"}, {"start": 293.4, "end": 298.64, "text": " bad point because you're going to corner, you're actually moving farther away from the"}, {"start": 298.64, "end": 306.12, "text": " goal. So if your value function is trained well, then you can, you can use that also to"}, {"start": 306.12, "end": 313.03999999999996, "text": " assess your situation. So the value function for each state, yes, it will give you a numerical"}, {"start": 313.03999999999996, "end": 322.47999999999996, "text": " value of how good that state is in terms of reaching your goal. And the A to C algorithm,"}, {"start": 322.48, "end": 328.8, "text": " now deals with the interplay of these. The A to C uses actually both of these in an interplay."}, {"start": 328.8, "end": 336.84000000000003, "text": " So it will use one to teach the other one, right. And this interplay between those gives"}, {"start": 336.84000000000003, "end": 343.24, "text": " mix for a very successful reinforcement learning algorithm. Now, the way A to C does"}, {"start": 343.24, "end": 349.84000000000003, "text": " it is, as you can see here, what it does is it has to, there are two variants here,"}, {"start": 349.84, "end": 356.52, "text": " think step and think trajectories. But in essence, it has to run these episodes. And these"}, {"start": 356.52, "end": 362.12, "text": " here are steps in the episodes. And let's say an episode as four steps before it can"}, {"start": 362.12, "end": 367.0, "text": " do the learning part and the learning part is here, the orange thing. Once it has done"}, {"start": 367.0, "end": 373.67999999999995, "text": " a step of learning, it has to run episodes again. And then it has can do learning again."}, {"start": 373.67999999999995, "end": 379.28, "text": " And that's because of a limitation of this, which is called on policy learning. So on,"}, {"start": 379.28, "end": 386.32, "text": " in on policy learning, you always want to have your update step, which is the orange part,"}, {"start": 386.32, "end": 392.67999999999995, "text": " to be fed with data. So the, all of these, all of these steps here go into this update"}, {"start": 392.67999999999995, "end": 399.4, "text": " steps. And it's necessary that the steps that you make the updates from are computed"}, {"start": 399.4, "end": 405.67999999999995, "text": " with kind of the most current version of the of the agent. Right. So that the agent will"}, {"start": 405.68, "end": 411.28000000000003, "text": " go into the world, make some steps using its neural network. Maybe I should explain."}, {"start": 411.28000000000003, "end": 419.24, "text": " So the agent, right, is this box and the agent has this policy, right. And with this policy,"}, {"start": 419.24, "end": 425.08, "text": " as we saw, it will go and it will interact with the world, right, outside of itself. And"}, {"start": 425.08, "end": 430.68, "text": " it will kind of the world will give back observations and it will then interact again. So you"}, {"start": 430.68, "end": 435.92, "text": " can move a step forward, right. First, first thing is move the step, step forward. And"}, {"start": 435.92, "end": 442.44, "text": " then the world gives it back a, aha, you are now no longer here. You've moved here, right."}, {"start": 442.44, "end": 446.48, "text": " And then it's, ah, now I want to move to the left. And the world says, okay, so you're"}, {"start": 446.48, "end": 452.4, "text": " no longer here. You've moved one to the left. 
And this on the right here are the observations."}, {"start": 452.4, "end": 457.6, "text": " And this is the left here are the actions. And for the A to C, it's kind of necessary"}, {"start": 457.6, "end": 464.32000000000005, "text": " that we always have a current version of the policy generating these steps in order to"}, {"start": 464.32000000000005, "end": 470.20000000000005, "text": " be able to learn from them. And then the next steps also need to be kind of current and"}, {"start": 470.20000000000005, "end": 477.0, "text": " to be learned. Now, there have been attempts to decentralize this. And this is exactly what"}, {"start": 477.0, "end": 485.84000000000003, "text": " Impala does. So Impala splits this into multiple, um, workers. You can think of this as"}, {"start": 485.84, "end": 492.64, "text": " different machines. So there is a split here. And these are called actors. And this is"}, {"start": 492.64, "end": 499.4, "text": " called a learner. Now the actors, they will go ahead and they will run episodes on their"}, {"start": 499.4, "end": 506.2, "text": " own, right. Occasionally, or they will run episodes and they will communicate those"}, {"start": 506.2, "end": 512.8399999999999, "text": " episodes to the learner. And the learner will continuously here learn. So these orange"}, {"start": 512.84, "end": 520.12, "text": " steps can be made in much more quick succession. And don't have to be like synchronized as"}, {"start": 520.12, "end": 527.6, "text": " in the A to C. Here is another way of seeing this over here. We'll just concentrate on this"}, {"start": 527.6, "end": 534.24, "text": " on this, um, left thing here. So there is a learner and there are actors. And every now"}, {"start": 534.24, "end": 539.72, "text": " and then the actor sinks its model from the learner. These are different machines. So"}, {"start": 539.72, "end": 544.1600000000001, "text": " this can happen over the network. Every now and then the actor gets like an update of"}, {"start": 544.1600000000001, "end": 551.6, "text": " the newest policy network. And then the actor will just go ahead and run episodes using that"}, {"start": 551.6, "end": 557.8000000000001, "text": " policy, right? Episode, episode, episode, episode, episode steps, steps, without interfering"}, {"start": 557.8000000000001, "end": 562.9200000000001, "text": " with anything else. And then once it has run an episode or multiple, it will communicate"}, {"start": 562.9200000000001, "end": 569.2, "text": " this back to the learner. And if all the actors do this, right, the learner gets a whole"}, {"start": 569.2, "end": 576.0, "text": " bunch of these episodes and then can learn from all of them simultaneously. And it can"}, {"start": 576.0, "end": 583.6800000000001, "text": " do so in kind of with a in kind of very fast succession as you see here. So the work is"}, {"start": 583.6800000000001, "end": 591.32, "text": " split. Now of course, you run into a problem. Namely, as we saw in the A to C algorithm,"}, {"start": 591.32, "end": 599.08, "text": " this type of reinforcement learning requires basically that you always run the episode"}, {"start": 599.08, "end": 606.08, "text": " with the current model. And that's not the case here, right? The actor may sink the parameters,"}, {"start": 606.08, "end": 611.84, "text": " may sink the parameters once in a while. But then it will run these episodes. Right? 
When"}, {"start": 611.84, "end": 620.88, "text": " it runs these episodes here, it has no idea or it, the learner in the meantime has continually"}, {"start": 620.88, "end": 626.88, "text": " been updating the model. While the actor kind of has an old model. So these episodes here"}, {"start": 626.88, "end": 633.24, "text": " are run with an old model. So the learner, if it tries to learn from this, must kind of"}, {"start": 633.24, "end": 639.4, "text": " correct for this fact. And the big kind of theoretical contribution of this paper is"}, {"start": 639.4, "end": 646.56, "text": " how to correct for the fact that the data you learn from comes from an outdated policy"}, {"start": 646.56, "end": 655.52, "text": " model. And this is what's called the trace correction. So without going too much into"}, {"start": 655.52, "end": 669.28, "text": " the details here, V trace correction happens as follows. So what you define are what's called"}, {"start": 669.28, "end": 676.36, "text": " V trace targets. And these V trace targets are basically the targets that you train your"}, {"start": 676.36, "end": 686.5600000000001, "text": " value function towards. Right? So the value function, as we discussed before, that is a,"}, {"start": 686.5600000000001, "end": 691.92, "text": " that is the thing that tells you how good each state is. And the targets you train this"}, {"start": 691.92, "end": 698.84, "text": " towards. And you're also, by the way, using this V trace corrections in policy updates."}, {"start": 698.84, "end": 707.6, "text": " So these are defined as follows. So the V trace target for step S is the value function"}, {"start": 707.6, "end": 717.9200000000001, "text": " at step S plus this correction thing. And the correction thing basically, well, I'm"}, {"start": 717.92, "end": 729.9599999999999, "text": " going to break this down some more. So the V at current S is your value function plus."}, {"start": 729.9599999999999, "end": 737.8, "text": " And this is a sum over all future steps over. And this is a discount factor. And this"}, {"start": 737.8, "end": 743.1999999999999, "text": " is kind of a delta from one step to the next. So you're in an episode and you've made"}, {"start": 743.2, "end": 755.96, "text": " some steps. Right? And let's say we are here. Right? This is S. And so your little V"}, {"start": 755.96, "end": 769.24, "text": " S will be whatever your value function says of S plus kind of a correction for each step"}, {"start": 769.24, "end": 777.4, "text": " that you make going to the future like this. And the main part of these is this here,"}, {"start": 777.4, "end": 783.5600000000001, "text": " which is basically the reward at the step plus the difference of the value functions of"}, {"start": 783.5600000000001, "end": 797.08, "text": " the steps after it. And what V trace introduces now is this bit here. And these CI again are"}, {"start": 797.08, "end": 802.8000000000001, "text": " computed as such. So all this kind of is very nested. So there's a there's a big multiplication"}, {"start": 802.8000000000001, "end": 808.44, "text": " here. It's a very nested thing. But in the very, very, very core of it, you can see the"}, {"start": 808.44, "end": 818.8000000000001, "text": " following. These V trace corrections are a ratio between pi and mu. And pi is the policy"}, {"start": 818.8000000000001, "end": 825.1600000000001, "text": " of the learner. That is the current policy. 
And mu is the policy that has been used to"}, {"start": 825.16, "end": 834.6, "text": " generate the to generate the episode. And this is truncated by a minimum. And usually the"}, {"start": 834.6, "end": 846.1999999999999, "text": " C bar is one. So let's consider what happens here. What happens is let's say that mu is"}, {"start": 846.1999999999999, "end": 854.1999999999999, "text": " higher than pi for a given pair of AI and XA. What does it mean? It means that in the past,"}, {"start": 854.2, "end": 865.48, "text": " you run an episode, you come, you are in this maze. Right? Such, such, and you're here. Right?"}, {"start": 865.48, "end": 878.2, "text": " Now the and the goal, let's say the goal is down here. And the action is going over here."}, {"start": 878.2, "end": 887.08, "text": " That's the action that you're considering here. Now your mu, which is your old policy that"}, {"start": 887.08, "end": 893.96, "text": " the actor has synced at some point, mu might say, this is very good. Right? Because it"}, {"start": 893.96, "end": 904.2, "text": " moves you towards the goal. But then your your pi, the learner has been learning since the"}, {"start": 904.2, "end": 909.48, "text": " eight, since the agent, the actor has synchronized the way it's the learner has been learning. And"}, {"start": 909.48, "end": 916.44, "text": " the learner might know, wait, wait, since you have decided this, I have actually learned that"}, {"start": 916.44, "end": 922.44, "text": " this might not be such a good move. Because, you know, there's a wall here and I'd rather go"}, {"start": 922.44, "end": 931.1600000000001, "text": " down here and then over here. So what it will do, it will since pi is low and mu is higher,"}, {"start": 931.16, "end": 937.24, "text": " it will downway this action. And this is how you correct for the fact that there are old weights."}, {"start": 938.28, "end": 946.04, "text": " By basically downwaying, wherever the old policy thought of an action as being worth more than"}, {"start": 946.04, "end": 951.3199999999999, "text": " the new policy does. And this is how you make up for the fact that the new policy, you assume it"}, {"start": 951.3199999999999, "end": 957.64, "text": " knows better because it has learned more. And thereby you you give lower weight to the data points"}, {"start": 957.64, "end": 968.52, "text": " where the the policies have diverged a lot. So that's at the core of it. And you can think of in"}, {"start": 968.52, "end": 976.6, "text": " terms of here, you can think of it as maybe here at this step, you're at a point where the old"}, {"start": 976.6, "end": 987.24, "text": " policy that the actor has has updated itself to says we should do action one. Right? But the new"}, {"start": 987.24, "end": 994.12, "text": " policy that the learner has in the meantime has learned more says, well, we should do action too."}, {"start": 995.08, "end": 1005.0, "text": " And if this is the case, then this whole rest of the episode is downweight because it is no longer"}, {"start": 1005.0, "end": 1012.04, "text": " current knowledge. Right? And this is not just kind of a heuristic, but they actually do prove"}, {"start": 1012.04, "end": 1018.28, "text": " that this this comes with some guarantees, especially reduces to kind of the classic reinforcement"}, {"start": 1018.28, "end": 1024.6, "text": " algorithms. 
If you assume that mu is always pie, so that current policy is the old policy,"}, {"start": 1024.6, "end": 1028.92, "text": " and therefore you're in the old setting. Right? So this was a bit of a lengthy explanation"}, {"start": 1029.48, "end": 1038.28, "text": " of the math behind it. And at the end, what you do is following. You train your value function"}, {"start": 1038.28, "end": 1046.68, "text": " using this update. And you can see here, it's simply the gradient of the value function scaled by"}, {"start": 1047.6399999999999, "end": 1056.2, "text": " the thing that contains this V trace target. Right? Then you update your policy in this direction."}, {"start": 1056.2, "end": 1062.04, "text": " And this is the classic reinforcement learning, reinforced style policy update,"}, {"start": 1062.04, "end": 1071.6399999999999, "text": " where here you have the gradient of the of the policy. And here you have the weighing by the"}, {"start": 1071.6399999999999, "end": 1080.52, "text": " reward. And specifically here, it is the reward plus this V trace target. And this thing here is"}, {"start": 1080.52, "end": 1089.08, "text": " a bias correction or a bias reducing sorry, variance reducing bias. That was terrible."}, {"start": 1089.08, "end": 1098.1999999999998, "text": " The final form is what's called an entropy penalty, where you want to push the entropy of your"}, {"start": 1098.1999999999998, "end": 1106.04, "text": " policy up such that the agent kind of is biased towards exploring more than exploiting, if you know"}, {"start": 1106.04, "end": 1111.1599999999999, "text": " of the classic exploration exploitation dilemma. So that's that's what you do. You compute these"}, {"start": 1111.16, "end": 1119.3200000000002, "text": " V trace targets, update your value and policy according to these equations. And there you go."}, {"start": 1120.44, "end": 1125.24, "text": " So what do what does impala do specifically in this deep mind lab?"}, {"start": 1125.24, "end": 1131.24, "text": " They have two architectures. First of all, they have this, they have this small architecture,"}, {"start": 1131.24, "end": 1137.72, "text": " second, they have this large architecture. And they just kind of try it out on these. And they"}, {"start": 1137.72, "end": 1143.32, "text": " measure how many frames per second they can get in. And you see here, compared to on single"}, {"start": 1143.32, "end": 1151.16, "text": " machine, compared to a three C, they are bringing a lot more frames per second. This is just on a"}, {"start": 1151.16, "end": 1159.56, "text": " single machine. But then on distributed setting, the scale up also is very significant that they reach."}, {"start": 1161.64, "end": 1165.4, "text": " That's because they don't have to wait for other things. They can just go ahead."}, {"start": 1165.4, "end": 1172.6000000000001, "text": " Everything runs at full speed basically. And everything runs in parallel. And the fact that"}, {"start": 1172.6000000000001, "end": 1179.8000000000002, "text": " that some of the information is old is corrected by V trace. And the last thing I want to show is"}, {"start": 1181.16, "end": 1188.2800000000002, "text": " the wall clock time. I think this is the important plot in this deep mind lab on overall the tasks."}, {"start": 1188.28, "end": 1197.6399999999999, "text": " The wall clock time compared to the score. 
You can see a three C while it does increase over time."}, {"start": 1197.6399999999999, "end": 1204.04, "text": " The impala variance up here increase in much, much faster wall clock time."}, {"start": 1207.24, "end": 1212.76, "text": " So that's the that's the paper. They have a lot of proofs and appendix which I'm not going to go"}, {"start": 1212.76, "end": 1221.8799999999999, "text": " over. If you want to give it a try, then it is it is not called impala on GitHub. It is called"}, {"start": 1221.8799999999999, "end": 1236.84, "text": " I think scalable agent. So on GitHub, it is called scalable agent. I think. But you'll find it"}, {"start": 1236.84, "end": 1244.28, "text": " if you search for impala gate top or something like this. Other than that, thanks for listening."}, {"start": 1244.28, "end": 1274.12, "text": " And see you next time."}] |
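To make the V-trace correction described in the transcript above concrete, here is a minimal NumPy sketch of the target computation, reconstructed from that description (truncated importance ratios between the learner policy pi and the behaviour policy mu, with the trace level c_bar usually set to 1). Function and variable names and the default constants are my own choices for illustration, not DeepMind's implementation.

```python
import numpy as np

def vtrace_targets(values, rewards, ratios, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Compute V-trace targets v_s for one stored trajectory.

    values  : V(x_s) for s = 0..T (length T+1; the last entry is the bootstrap value)
    rewards : r_s for s = 0..T-1
    ratios  : pi(a_s | x_s) / mu(a_s | x_s), learner policy over behaviour policy
    """
    values = np.asarray(values, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    ratios = np.asarray(ratios, dtype=float)
    T = len(rewards)

    rho = np.minimum(rho_bar, ratios)   # truncated importance weights rho_s
    c = np.minimum(c_bar, ratios)       # truncated trace coefficients c_s

    # one-step TD errors, down-weighted wherever the old policy mu liked the
    # taken action more than the current learner policy pi does
    deltas = rho * (rewards + gamma * values[1:] - values[:-1])

    v = values.copy()
    acc = 0.0
    # recursion: v_s = V(x_s) + delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    for s in reversed(range(T)):
        acc = deltas[s] + gamma * c[s] * acc
        v[s] = values[s] + acc
    return v[:-1]
```

The learner then regresses its value function towards these targets, updates the policy REINFORCE-style with an advantage built from r_s plus gamma times v_{s+1} minus the value baseline (again weighted by the truncated ratio rho_s), and adds an entropy bonus so the agent keeps exploring rather than exploiting too early.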
Yannic Kilcher | https://www.youtube.com/watch?v=ctCv_NRpqvM | The Visual Task Adaptation Benchmark | This paper presents a new benchmark for Visual Task Adaptation (i.e. BERT for images) and investigates several baseline methods for doing so.
Abstract:
Representation learning promises to unlock deep learning for the long tail of vision tasks without expansive labelled datasets. Yet, the absence of a unified yardstick to evaluate general visual representations hinders progress. Many sub-fields promise representations, but each has different evaluation protocols that are either too constrained (linear classification), limited in scope (ImageNet, CIFAR, Pascal-VOC), or only loosely related to representation quality (generation). We present the Visual Task Adaptation Benchmark (VTAB): a diverse, realistic, and challenging benchmark to evaluate representations. VTAB embodies one principle: good representations adapt to unseen tasks with few examples. We run a large VTAB study of popular algorithms, answering questions like: How effective are ImageNet representation on non-standard datasets? Are generative models competitive? Is self-supervision useful if one already has labels?
Authors: Xiaohua Zhai, Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, Lucas Beyer, Olivier Bachem, Michael Tschannen, Marcin Michalski, Olivier Bousquet, Sylvain Gelly, Neil Houlsby
https://arxiv.org/abs/1910.04867
https://github.com/google-research/task_adaptation | Hi there. Today we're looking at the Visual Task Adaptation Benchmark by a list of authors that's way too long to read out all from Google Brain. So what is this paper? This paper cares about a new benchmark that is abbreviated VTab. And VTab is a benchmark for a task called Visual Task Adaptation. So a benchmark, the meaning of a benchmark is it's kind of a number that you achieve with a model and whoever has the highest number is the best in this task. Right? So the benchmark kind of standardizes how you evaluate models. And the model is here they do Visual Task Adaptation. So what is Visual Task Adaptation? So this is Visual Task Adaptation. It's kind of illustrated in this figure. Imagine you have a bunch of what are called Visual Tasks. And the Visual Task, and this is the right side here. A Visual Task is anything that can be solved from just Visual input. So basically given a picture or many pictures and you ask kind of a question about it, if that question can be answered by just looking at the picture, then that's called a Visual Task. For example, in this dataset you might be asked whether a picture contains a dog or a cat. In this dataset you might be asked to outline where the objects are. So here the plane you might be able to segment or you might be able to point out where buildings are in the images. Right here, here, there's no building here. So there's there's varieties of tasks that are possible. Or in the bottom domain you might be asked which one of the two red dots here is closer to the observer in 3D space. Or you might be asked in this picture please count the number of gray boxes. So there's a bunch of all of these count as Visual Tasks. Now the setting that the author's imagine here is there are many of these Visual Tasks in the world for which there isn't much training data. Like imagine something like this. These are aerial images so you kind of need a satellite or a plane to obtain them and then you need to label them. So all of this isn't that cheap. Even more so in a for example medical domain where you have very expensive CT images of patients and then you need to obtain them and you need to convince the patients to release their data and someone needs to label it. So it's very costly to obtain lots of training data. Now what we want to do is we want to for all of these tasks we ideally want to build neural networks, deep neural networks because we know they're super accurate but they are only super accurate if you have lots of training data. So that conflicts with the fact that we might not have so much training data for these tasks. So the proposed solution here is what's called Visual Tasks Adaptation and it's the following. Imagine you have lots and lots of what's called here upstream data and upstream data what they mean is data that is similar to the data here but not exactly the same but you have lots of it and the example given is ImageNet. So imagine this here to be ImageNet ImageNet ImageNet is a dataset with over a million images all of them are labeled into one of a thousand classes and so you can build a very good model for ImageNet to predict the ImageNet class. And you can get very accurate if lots of data cool. So you build this model but now what you want to do is you want to use this what's here called an adaptation algorithm and you want to use that model that you trained on ImageNet data and kind of change it just a bit. 
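To make this "change it just a bit" step concrete, here is a minimal PyTorch sketch of the adaptation baseline as it is described in the next paragraph: keep the ImageNet-pretrained backbone, swap the classification head for a fresh one, and fine-tune everything on the small downstream set. The ResNet-50 backbone and the two-class head are stand-ins for illustration, not the exact models used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1 stand-in: a backbone pretrained on ImageNet.
net = models.resnet50(pretrained=True)

# Stage 2: replace the 1000-way ImageNet head with a fresh head for the new
# task (e.g. 2 classes for cat vs. dog), then fine-tune *all* weights.
num_new_classes = 2
net.fc = nn.Linear(net.fc.in_features, num_new_classes)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

def finetune(net, loader, epochs=10):
    """loader yields (images, labels) from the ~1000-sample downstream set."""
    net.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(net(images), labels)
            loss.backward()
            optimizer.step()
```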
So you start from the model you have that works on ImageNet and with the few training data you have here on the right side and the author has actually standardized this in the benchmark to 1K samples. So you only have a thousand training samples you know compared to the millions that you potentially need. You have a thousand samples and you adapt your model to these tasks right. So you train the model on ImageNet and you adapt it to predict whether or not there's a cat or a dog and you adapt it to segment these images and you adapt it to predict the depth of points. So you can consider this kind of as a pre-training thing. So you pre-trained your model on ImageNet and then you adapt it to these others. And that's what's called task adaptation. It's not exactly pre-training in the classic sense because pre-training in the classic sense means basically that you retain the same model but here it's a bit different. So in stage 1 right you train a deep neural network on lots of training data and a deep neural network here this might be you know you have a bunch of layers layer layer layer layer layer and then here you have a thousand you classify into a thousand classes right. This is your model. And then in stage 2 over here you adapt this model and what it ultimately means is you take for example this part here up until the second to last layer transfer it over put it here right. Bam bam bam bam bam you retain the weights you keep the weights but then you add just one or two new layers and classify your new task. This could be is it a cat or is it a dog right. And then you train you can either elect to only train the green part here or you can train the whole thing. The second thing is called fine tuning right. And the author is mostly elect to do fine tuning in this work. So you carry over the weights and you add a new head and then you train the entire thing with the 1000 samples that you have for this task. And then you the kind of the goal is to get as good as possible on that one task where you only have a thousand samples. If your pre training was good so if your stage 1 was good then you would expect that stage 2 would profit a lot from this pre training which basically means that even though you only have a thousand samples you can reach accuracy that would usually only be possible with much more samples. So that's the idea behind it right. And this is what's called visual task adaptation. So these the authors propose a benchmark for this. A benchmark for this part right for the adaptation algorithm. So at the adaptation algorithm they propose as a baseline is train on ImageNet and then fine tune. So that's an adaptation algorithm. They propose a score for this. So if you come up with a better adaptation algorithm for example you could say no I'm going to whatever train on YouTube data and then do like fine tune that and then maybe you'd reach a higher maybe you'd reach better accuracies in these tasks over here and then your score would be higher right. So it's kind of a benchmark to compare adaptation algorithms. So here your benchmark score and this is conditioned on n the number of samples that you have in the in the layer 2 tasks and this here is standardized to 1000 in their case. So the score of an adaptation algorithm a is the following. 
It's the it's the expectation over this is kind of an error measure and you can you can think of it basically as a classic test set classification error on the layer 2 tasks of that adaptation algorithm if given the data set of a layer 2 tasks of n samples and the layer 2 task here comes from a distribution of layer 2 tasks. So what does it mean this distribution of layer 2 tasks they imagine and they show this in this picture they imagine the visual tasks like on this on this big landscape of visual tasks right here and what they ideally want to do is they want to sample a task here and this task correspond to classifying these dog images and very close to it could be classifying bird images but then very far away could be a task of counting and depth estimation and so on. They imagine all the visual tasks are have some kind of some sort of distribution right. So what happens is you sample one of those visual tasks or each for each each element in this expectation you sample one of them you build the data set with 1000 samples right you put it through your adaptation algorithm so your adaptation algorithm for example you're pre trained to imagine that you adapted to to that task with a thousand samples and then you compute your error metric on that. Now if you do this over the whole distribution you get an expectation of this error metric in all the visual tasks and that will be your score. So what does it what does it mean in practice I mean in practice you don't have this distribution right in practice you have a list so like list here is a list of tasks right there's this task this task this task is there's whatever the pets task and then there is the aerial there and there is the counting right you have a list of tasks and what is it like this stuff and this expectation ultimately right stage one train a model M stage two for each of these tasks adapt the model M or fine tune your model M on these tasks then for each task get an error rate error rate one task two gives you error rate two tasks three gives you error rate three then simply one over N sum them up so take the take the average error rate of the of the of all of the tasks and that's your score. 
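In code, the practical version of this score really is just the average error over the list of downstream tasks. A small sketch, where `adapt`, `sample_train` and `test_error` are hypothetical interfaces standing in for "fine-tune the pretrained model on the 1K samples" and "evaluate it on that task's test set":

```python
def vtab_style_score(adapt, tasks, n=1000):
    """Average test error of adaptation algorithm `adapt` over a list of tasks."""
    errors = []
    for task in tasks:
        model = adapt(task.sample_train(n))    # stage-2 adaptation on n samples
        errors.append(task.test_error(model))  # error metric on the task's test set
    return sum(errors) / len(errors)
```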
This is kind of my first criticism of this thing like this this all just seems like super mathematicized with like oh we imagine all of these tasks being in some distribution somewhere like there there is a distribution of tasks and we have an expectation over the distribution now like why just say here's a bunch of tasks right adapt your model to each one of them get the average error rate done that's your score that would have been first of all much easier and second of all they never actually care to characterize this distribution like if if they were to actually rigorously characterize this distribution of visual tasks I would agree that this formulation makes sense but all they say basically all they say is tasks that a human can solve from visual input alone and they give a bunch of examples of a good task would be the following right and you probably figured it out the task is is it a square or is it a triangle right that's a that's a visual task in the classic sense human can solve it from visual input alone then the following task wouldn't be as easy levels 1 0 0 1 so the task I had in mind was is the spelling is the spelling of the shape over here does it contain an a so square contains an a circle doesn't line doesn't but triangle contains an a right so for this you kind of need world knowledge and you can't just solve it from visual input alone right especially not you can't generalize to new new shapes if you if you just from visual input so they and they say appendix b they validate this right they validate that humans can solve it but I actually disagree with this because just because humans can solve a task just from visual input doesn't mean that they don't use world knowledge in it like in this whatever pets example here right humans know how cats and dogs look anatomically right how they look from the side and from the back and so on even if they haven't seen it in a picture they they know how they behave and so on what is kind of realistic setting for a cat and a dog to be in so all of this it seems kind of a bit shady and the reason I'm saying this is if you make this distribution formulation you also you have to give a rigorous definition and because if a new task arrives now like a one that's not in your list like never been before here in the world like new taskcribes how do we know whether or not we should include it in the list or not right how do we know whether it's part of this distribution or not um just seems very very shaky so that being said they do give this list and this list has 19 tasks that's down here so there are 19 tasks they're categorized as natural which means natural images uh these these yeah the examples here are pets flowers images the house numbers and so on specialized images are for example images with that you special equipment for example medical images and then structured means where that's down here structured means that the model needs come to comprehend the structure of a scene so they give an example of object counting or 3D depth prediction I mean that's that's fair enough they they have these 19 tasks but and they show kind of the tasks down here here's a list of tasks and kind of their baseline method on it uh but but why for me like the question is why exactly these tasks if they don't specify this distribution why these tasks and they don't really like they do some they do a lot of experimentation actually in investigation but what's kind of missing for me is to show that these tasks first of all are kind of internally 
consistent in that they're really visual tasks and second of all that they kind of cover this distribution or they represent this entire distribution that they're trying to model uh and it seems to me unclear why exactly these tasks why they left others out and included these ones um in all fairness probably they simply took the ones that that they could get their hands on but still I feel that this is very shaky and um that might that might lead to the benchmark not being adapted very widely but all right enough with the criticism let's go further in this so they do present this kind of baseline experiments and they they pre-trained always on image net and then they they uh fine tune on these layer two tasks and the way they pre-trained here is um listed here for example so if they pre-trained a generative model it actually performs worse than if they just train from scratch for the layer two tasks on the thousand samples right self supervised is kind of a pre-training method where if you have an image you do something like you rotate it to the right or to the left and then you ask a model some sort of a discriminator did it did I turn it to the right or to the left like a zero is to the right I left and one is to the right so you this is called self supervised you don't need labels for this right um and it it kind of works well semi supervised has some of the labels and supervised has is like image net with full labels and you kind of see unsurprisingly that the more information you have the the the better you are going to be um in all of these these kind of tasks interestingly the generative pre-training works the worst worse than even from scratch training so that's kind of that's sort of special um what what I do really appreciate about this this investigation here is that they investigate a lot of variants of this of this um benchmark and they come to the conclusion I think that's encapsulated here well for example we find two models using 16 Google Cloud TPU Hardware Accelerators that's expensive right but they say we conduct additional experiments to assess whether our result can be reproduced with a more basic hardware setup we evaluate on all the tap tasks using a single and video p100 GPU with a thousand steps 64 images per mini batch right so they verify that you can do this benchmark you can take part in this benchmark even if you don't have much time or money or hardware right um that's why for example they limit they limit the number of examples in the layer two tasks to a thousand they do investigate that this correlates with your performance if you were to include the full data sets of the layer two tasks so if you just include a thousand examples that correlates well they do investigate they do investigate whether you can put it on a single GPU they do investigate if you only run it for a thousand steps here you see this experiment you have to run it for a thousand steps basically and you're almost at the level if as if you were to run it for 50,000 steps so there's a lots of work to that goes into making sure that everybody can kind of participate in this benchmark and that I appreciate this a lot and there is actually code available so if you go to uh GitHub and you just search for task adaptation actually I had it open before but I don't know so you go to GitHub and you go to google research and search for task adaptation you'll you'll find it there is code that downloads all of the data sets for you prepares them and there's a script that runs your layer one model so 
you need to provide it a layer one model but then there is a script that um that runs it on all of the different layer two tasks and at the end calculates your benchmark for you so that's pretty neat and I would encourage you if you have a good idea for a pre-training or for a adaptation algorithm take part in the benchmark I suspect there'll be a leaderboard kind of online leaderboard coming out at some point otherwise you simply can report the number in your papers and I hope you are going to be successful at that all right so that was it from me um have lots of fun and bye bye | [{"start": 0.0, "end": 8.0, "text": " Hi there. Today we're looking at the Visual Task Adaptation Benchmark by a list of authors"}, {"start": 8.0, "end": 15.8, "text": " that's way too long to read out all from Google Brain. So what is this paper? This paper"}, {"start": 15.8, "end": 24.96, "text": " cares about a new benchmark that is abbreviated VTab. And VTab is a benchmark for a task"}, {"start": 24.96, "end": 31.96, "text": " called Visual Task Adaptation. So a benchmark, the meaning of a benchmark is it's kind of a"}, {"start": 32.72, "end": 40.2, "text": " number that you achieve with a model and whoever has the highest number is the best in this"}, {"start": 40.2, "end": 48.0, "text": " task. Right? So the benchmark kind of standardizes how you evaluate models. And the model is"}, {"start": 48.0, "end": 55.0, "text": " here they do Visual Task Adaptation. So what is Visual Task Adaptation? So this is Visual"}, {"start": 57.36, "end": 63.519999999999996, "text": " Task Adaptation. It's kind of illustrated in this figure. Imagine you have a bunch of"}, {"start": 63.519999999999996, "end": 69.12, "text": " what are called Visual Tasks. And the Visual Task, and this is the right side here."}, {"start": 69.12, "end": 75.48, "text": " A Visual Task is anything that can be solved from just Visual input. So basically given"}, {"start": 75.48, "end": 82.48, "text": " a picture or many pictures and you ask kind of a question about it, if that question can"}, {"start": 83.04, "end": 89.04, "text": " be answered by just looking at the picture, then that's called a Visual Task. For example,"}, {"start": 89.04, "end": 95.54, "text": " in this dataset you might be asked whether a picture contains a dog or a cat. In this"}, {"start": 95.54, "end": 103.54, "text": " dataset you might be asked to outline where the objects are. So here the plane you might"}, {"start": 103.54, "end": 110.54, "text": " be able to segment or you might be able to point out where buildings are in the images."}, {"start": 110.54, "end": 115.74000000000001, "text": " Right here, here, there's no building here. So there's there's varieties of tasks that"}, {"start": 115.74000000000001, "end": 122.24000000000001, "text": " are possible. Or in the bottom domain you might be asked which one of the two red dots"}, {"start": 122.24000000000001, "end": 129.24, "text": " here is closer to the observer in 3D space. Or you might be asked in this picture please"}, {"start": 129.24, "end": 136.24, "text": " count the number of gray boxes. So there's a bunch of all of these count as Visual Tasks."}, {"start": 137.14000000000001, "end": 143.14000000000001, "text": " Now the setting that the author's imagine here is there are many of these Visual Tasks"}, {"start": 143.14000000000001, "end": 150.14000000000001, "text": " in the world for which there isn't much training data. 
Like imagine something like this."}, {"start": 150.9, "end": 154.70000000000002, "text": " These are aerial images so you kind of need a satellite or a plane to obtain them and"}, {"start": 154.7, "end": 161.7, "text": " then you need to label them. So all of this isn't that cheap. Even more so in a for example"}, {"start": 161.7, "end": 168.45999999999998, "text": " medical domain where you have very expensive CT images of patients and then you need to"}, {"start": 168.45999999999998, "end": 173.82, "text": " obtain them and you need to convince the patients to release their data and someone needs"}, {"start": 173.82, "end": 179.82, "text": " to label it. So it's very costly to obtain lots of training data. Now what we want to"}, {"start": 179.82, "end": 185.62, "text": " do is we want to for all of these tasks we ideally want to build neural networks, deep"}, {"start": 185.62, "end": 191.1, "text": " neural networks because we know they're super accurate but they are only super accurate"}, {"start": 191.1, "end": 197.1, "text": " if you have lots of training data. So that conflicts with the fact that we might not have so much"}, {"start": 197.1, "end": 203.78, "text": " training data for these tasks. So the proposed solution here is what's called Visual Tasks"}, {"start": 203.78, "end": 210.14000000000001, "text": " Adaptation and it's the following. Imagine you have lots and lots of what's called here"}, {"start": 210.14000000000001, "end": 217.26, "text": " upstream data and upstream data what they mean is data that is similar to the data here"}, {"start": 217.26, "end": 223.54, "text": " but not exactly the same but you have lots of it and the example given is ImageNet."}, {"start": 223.54, "end": 231.6, "text": " So imagine this here to be ImageNet ImageNet ImageNet is a dataset with over a million"}, {"start": 231.6, "end": 239.54, "text": " images all of them are labeled into one of a thousand classes and so you can build a very"}, {"start": 239.54, "end": 248.14, "text": " good model for ImageNet to predict the ImageNet class. And you can get very accurate if lots"}, {"start": 248.14, "end": 253.54, "text": " of data cool. So you build this model but now what you want to do is you want to use this"}, {"start": 253.54, "end": 259.78, "text": " what's here called an adaptation algorithm and you want to use that model that you trained"}, {"start": 259.78, "end": 266.26, "text": " on ImageNet data and kind of change it just a bit. So you start from the model you have"}, {"start": 266.26, "end": 271.79999999999995, "text": " that works on ImageNet and with the few training data you have here on the right side and"}, {"start": 271.79999999999995, "end": 276.78, "text": " the author has actually standardized this in the benchmark to 1K samples. So you only"}, {"start": 276.78, "end": 281.58, "text": " have a thousand training samples you know compared to the millions that you potentially"}, {"start": 281.58, "end": 289.29999999999995, "text": " need. You have a thousand samples and you adapt your model to these tasks right."}, {"start": 289.3, "end": 293.74, "text": " So you train the model on ImageNet and you adapt it to predict whether or not there's"}, {"start": 293.74, "end": 299.86, "text": " a cat or a dog and you adapt it to segment these images and you adapt it to predict the"}, {"start": 299.86, "end": 306.78000000000003, "text": " depth of points. So you can consider this kind of as a pre-training thing. 
So you pre-trained"}, {"start": 306.78000000000003, "end": 313.54, "text": " your model on ImageNet and then you adapt it to these others. And that's what's called"}, {"start": 313.54, "end": 319.1, "text": " task adaptation. It's not exactly pre-training in the classic sense because pre-training"}, {"start": 319.1, "end": 326.02000000000004, "text": " in the classic sense means basically that you retain the same model but here it's a bit"}, {"start": 326.02000000000004, "end": 332.94, "text": " different. So in stage 1 right you train a deep neural network on lots of training data"}, {"start": 332.94, "end": 337.24, "text": " and a deep neural network here this might be you know you have a bunch of layers layer"}, {"start": 337.24, "end": 343.46000000000004, "text": " layer layer layer layer and then here you have a thousand you classify into a thousand"}, {"start": 343.46, "end": 351.02, "text": " classes right. This is your model. And then in stage 2 over here you adapt this model"}, {"start": 351.02, "end": 357.62, "text": " and what it ultimately means is you take for example this part here up until the second"}, {"start": 357.62, "end": 366.02, "text": " to last layer transfer it over put it here right. Bam bam bam bam bam you retain the"}, {"start": 366.02, "end": 374.46, "text": " weights you keep the weights but then you add just one or two new layers and classify"}, {"start": 374.46, "end": 380.26, "text": " your new task. This could be is it a cat or is it a dog right. And then you train you"}, {"start": 380.26, "end": 387.46, "text": " can either elect to only train the green part here or you can train the whole thing."}, {"start": 387.46, "end": 392.85999999999996, "text": " The second thing is called fine tuning right. And the author is mostly elect to do fine"}, {"start": 392.86, "end": 400.02000000000004, "text": " tuning in this work. So you carry over the weights and you add a new head and then you train"}, {"start": 400.02000000000004, "end": 407.66, "text": " the entire thing with the 1000 samples that you have for this task. And then you the kind"}, {"start": 407.66, "end": 412.46000000000004, "text": " of the goal is to get as good as possible on that one task where you only have a thousand"}, {"start": 412.46000000000004, "end": 421.66, "text": " samples. If your pre training was good so if your stage 1 was good then you would expect"}, {"start": 421.66, "end": 427.98, "text": " that stage 2 would profit a lot from this pre training which basically means that even"}, {"start": 427.98, "end": 433.46000000000004, "text": " though you only have a thousand samples you can reach accuracy that would usually only"}, {"start": 433.46000000000004, "end": 444.1, "text": " be possible with much more samples. So that's the idea behind it right. And this is what's"}, {"start": 444.1, "end": 451.06, "text": " called visual task adaptation. So these the authors propose a benchmark for this. A"}, {"start": 451.06, "end": 457.82, "text": " benchmark for this part right for the adaptation algorithm. So at the adaptation algorithm"}, {"start": 457.82, "end": 464.14, "text": " they propose as a baseline is train on ImageNet and then fine tune. So that's an adaptation"}, {"start": 464.14, "end": 469.94, "text": " algorithm. They propose a score for this. 
So if you come up with a better adaptation"}, {"start": 469.94, "end": 476.7, "text": " algorithm for example you could say no I'm going to whatever train on YouTube data and"}, {"start": 476.7, "end": 483.34, "text": " then do like fine tune that and then maybe you'd reach a higher maybe you'd reach better"}, {"start": 483.34, "end": 488.9, "text": " accuracies in these tasks over here and then your score would be higher right. So it's"}, {"start": 488.9, "end": 496.46, "text": " kind of a benchmark to compare adaptation algorithms. So here your benchmark score and"}, {"start": 496.46, "end": 503.09999999999997, "text": " this is conditioned on n the number of samples that you have in the in the layer 2 tasks"}, {"start": 503.1, "end": 511.1, "text": " and this here is standardized to 1000 in their case. So the score of an adaptation algorithm"}, {"start": 511.1, "end": 524.26, "text": " a is the following. It's the it's the expectation over this is kind of an error measure and you"}, {"start": 524.26, "end": 529.22, "text": " can you can think of it basically as a classic test set classification error on the layer"}, {"start": 529.22, "end": 539.1800000000001, "text": " 2 tasks of that adaptation algorithm if given the data set of a layer 2 tasks of n samples"}, {"start": 539.1800000000001, "end": 548.94, "text": " and the layer 2 task here comes from a distribution of layer 2 tasks. So what does it mean this"}, {"start": 548.94, "end": 554.62, "text": " distribution of layer 2 tasks they imagine and they show this in this picture they imagine"}, {"start": 554.62, "end": 562.9, "text": " the visual tasks like on this on this big landscape of visual tasks right here and what"}, {"start": 562.9, "end": 568.14, "text": " they ideally want to do is they want to sample a task here and this task correspond to"}, {"start": 568.14, "end": 574.34, "text": " classifying these dog images and very close to it could be classifying bird images but"}, {"start": 574.34, "end": 580.82, "text": " then very far away could be a task of counting and depth estimation and so on. They imagine"}, {"start": 580.82, "end": 587.2600000000001, "text": " all the visual tasks are have some kind of some sort of distribution right. So what happens"}, {"start": 587.2600000000001, "end": 595.1400000000001, "text": " is you sample one of those visual tasks or each for each each element in this expectation"}, {"start": 595.1400000000001, "end": 602.7, "text": " you sample one of them you build the data set with 1000 samples right you put it through"}, {"start": 602.7, "end": 606.6600000000001, "text": " your adaptation algorithm so your adaptation algorithm for example you're pre trained"}, {"start": 606.66, "end": 612.54, "text": " to imagine that you adapted to to that task with a thousand samples and then you compute"}, {"start": 612.54, "end": 621.8199999999999, "text": " your error metric on that. Now if you do this over the whole distribution you get an expectation"}, {"start": 621.8199999999999, "end": 629.66, "text": " of this error metric in all the visual tasks and that will be your score. 
So what does it"}, {"start": 629.66, "end": 635.42, "text": " what does it mean in practice I mean in practice you don't have this distribution right in"}, {"start": 635.42, "end": 642.2199999999999, "text": " practice you have a list so like list here is a list of tasks right there's this task this"}, {"start": 642.2199999999999, "end": 649.14, "text": " task this task is there's whatever the pets task and then there is the aerial there and there"}, {"start": 649.14, "end": 658.78, "text": " is the counting right you have a list of tasks and what is it like this stuff and this expectation"}, {"start": 658.78, "end": 666.62, "text": " ultimately right stage one train a model M stage two for each of these tasks adapt the"}, {"start": 666.62, "end": 674.3399999999999, "text": " model M or fine tune your model M on these tasks then for each task get an error rate error"}, {"start": 674.3399999999999, "end": 682.3, "text": " rate one task two gives you error rate two tasks three gives you error rate three then"}, {"start": 682.3, "end": 692.8599999999999, "text": " simply one over N sum them up so take the take the average error rate of the of the of all of"}, {"start": 692.8599999999999, "end": 699.02, "text": " the tasks and that's your score. This is kind of my first criticism of this thing like this this"}, {"start": 699.02, "end": 705.02, "text": " all just seems like super mathematicized with like oh we imagine all of these tasks being in some"}, {"start": 705.02, "end": 713.42, "text": " distribution somewhere like there there is a distribution of tasks and we have an expectation"}, {"start": 713.42, "end": 720.14, "text": " over the distribution now like why just say here's a bunch of tasks right adapt your model"}, {"start": 720.14, "end": 727.9, "text": " to each one of them get the average error rate done that's your score that would have been"}, {"start": 727.9, "end": 733.1, "text": " first of all much easier and second of all they never actually care to characterize this"}, {"start": 733.1, "end": 738.0600000000001, "text": " distribution like if if they were to actually rigorously characterize this distribution of"}, {"start": 738.0600000000001, "end": 744.22, "text": " visual tasks I would agree that this formulation makes sense but all they say basically"}, {"start": 745.66, "end": 753.82, "text": " all they say is tasks that a human can solve from visual input alone and they give a bunch of"}, {"start": 753.82, "end": 768.7, "text": " examples of a good task would be the following right and you probably figured it out the task"}, {"start": 768.7, "end": 774.22, "text": " is is it a square or is it a triangle right that's a that's a visual task in the classic sense"}, {"start": 774.22, "end": 788.5400000000001, "text": " human can solve it from visual input alone then the following task wouldn't be as easy levels 1 0 0 1"}, {"start": 788.5400000000001, "end": 796.94, "text": " so the task I had in mind was is the spelling is the spelling of the shape over here does it"}, {"start": 796.94, "end": 805.82, "text": " contain an a so square contains an a circle doesn't line doesn't but triangle contains an a"}, {"start": 805.82, "end": 811.2600000000001, "text": " right so for this you kind of need world knowledge and you can't just solve it from visual input"}, {"start": 811.2600000000001, "end": 819.0200000000001, "text": " alone right especially not you can't generalize to new new shapes if you if you just from visual input"}, {"start": 819.02, "end": 830.22, "text": " so they 
and they say appendix b they validate this right they validate that humans can solve it"}, {"start": 830.78, "end": 837.26, "text": " but I actually disagree with this because just because humans can solve a task just from visual input"}, {"start": 837.26, "end": 843.66, "text": " doesn't mean that they don't use world knowledge in it like in this whatever pets example here"}, {"start": 843.66, "end": 851.98, "text": " right humans know how cats and dogs look anatomically right how they look from the side and from the back"}, {"start": 851.98, "end": 858.62, "text": " and so on even if they haven't seen it in a picture they they know how they behave and so on what"}, {"start": 858.62, "end": 867.26, "text": " is kind of realistic setting for a cat and a dog to be in so all of this it seems kind of a bit"}, {"start": 867.26, "end": 873.9, "text": " shady and the reason I'm saying this is if you make this distribution formulation you also you have"}, {"start": 873.9, "end": 880.86, "text": " to give a rigorous definition and because if a new task arrives now like a one that's not in your"}, {"start": 880.86, "end": 887.58, "text": " list like never been before here in the world like new taskcribes how do we know whether or not we"}, {"start": 887.58, "end": 894.46, "text": " should include it in the list or not right how do we know whether it's part of this distribution or not"}, {"start": 894.46, "end": 903.58, "text": " um just seems very very shaky so that being said they do give this list and this list has 19 tasks"}, {"start": 905.26, "end": 911.4200000000001, "text": " that's down here so there are 19 tasks they're categorized as natural which means natural images"}, {"start": 911.98, "end": 917.9000000000001, "text": " uh these these yeah the examples here are pets flowers images the house numbers and so on"}, {"start": 917.9, "end": 926.06, "text": " specialized images are for example images with that you special equipment for example medical images"}, {"start": 927.02, "end": 935.5, "text": " and then structured means where that's down here structured means that the model needs come to"}, {"start": 935.5, "end": 942.38, "text": " comprehend the structure of a scene so they give an example of object counting or 3D depth prediction"}, {"start": 942.38, "end": 950.14, "text": " I mean that's that's fair enough they they have these 19 tasks but and they show kind of the tasks"}, {"start": 951.82, "end": 959.1, "text": " down here here's a list of tasks and kind of their baseline method on it uh but but why"}, {"start": 960.14, "end": 967.26, "text": " for me like the question is why exactly these tasks if they don't specify this distribution"}, {"start": 967.26, "end": 973.1, "text": " why these tasks and they don't really like they do some they do a lot of experimentation actually"}, {"start": 973.1, "end": 978.7, "text": " in investigation but what's kind of missing for me is to show that these tasks first of all"}, {"start": 979.34, "end": 983.8199999999999, "text": " are kind of internally consistent in that they're really visual tasks and second of all"}, {"start": 984.54, "end": 989.8199999999999, "text": " that they kind of cover this distribution or they represent this entire distribution that"}, {"start": 989.8199999999999, "end": 996.22, "text": " they're trying to model uh and it seems to me unclear why exactly these tasks why they left"}, {"start": 996.22, "end": 1004.22, "text": " others out and included these ones um in all fairness probably they simply took the ones 
that"}, {"start": 1004.22, "end": 1014.22, "text": " that they could get their hands on but still I feel that this is very shaky and um that might"}, {"start": 1014.22, "end": 1020.62, "text": " that might lead to the benchmark not being adapted very widely but all right enough with the"}, {"start": 1020.62, "end": 1029.5, "text": " criticism let's go further in this so they do present this kind of baseline experiments and they"}, {"start": 1030.38, "end": 1038.38, "text": " they pre-trained always on image net and then they they uh fine tune on these layer two tasks and"}, {"start": 1038.38, "end": 1045.5, "text": " the way they pre-trained here is um listed here for example so if they pre-trained a generative"}, {"start": 1045.5, "end": 1051.02, "text": " model it actually performs worse than if they just train from scratch for the layer two tasks on the"}, {"start": 1051.02, "end": 1058.38, "text": " thousand samples right self supervised is kind of a pre-training method where if you have an image"}, {"start": 1058.38, "end": 1063.66, "text": " you do something like you rotate it to the right or to the left and then you ask a model some sort"}, {"start": 1063.66, "end": 1069.42, "text": " of a discriminator did it did I turn it to the right or to the left like a zero is to the right"}, {"start": 1069.42, "end": 1075.8200000000002, "text": " I left and one is to the right so you this is called self supervised you don't need labels for this"}, {"start": 1075.8200000000002, "end": 1084.54, "text": " right um and it it kind of works well semi supervised has some of the labels and supervised has"}, {"start": 1084.54, "end": 1090.94, "text": " is like image net with full labels and you kind of see unsurprisingly that the more information you have"}, {"start": 1090.94, "end": 1098.6200000000001, "text": " the the the better you are going to be um in all of these these kind of tasks interestingly the"}, {"start": 1098.62, "end": 1106.4599999999998, "text": " generative pre-training works the worst worse than even from scratch training so that's kind of"}, {"start": 1107.4199999999998, "end": 1116.86, "text": " that's sort of special um what what I do really appreciate about this this investigation here is"}, {"start": 1116.86, "end": 1127.1799999999998, "text": " that they investigate a lot of variants of this of this um benchmark and they come to the conclusion"}, {"start": 1127.18, "end": 1133.26, "text": " I think that's encapsulated here well for example we find two models using 16 Google Cloud"}, {"start": 1133.9, "end": 1140.14, "text": " TPU Hardware Accelerators that's expensive right but they say we conduct additional experiments"}, {"start": 1140.7, "end": 1145.66, "text": " to assess whether our result can be reproduced with a more basic hardware setup we evaluate on all"}, {"start": 1145.66, "end": 1153.18, "text": " the tap tasks using a single and video p100 GPU with a thousand steps 64 images per mini batch"}, {"start": 1153.18, "end": 1160.22, "text": " right so they verify that you can do this benchmark you can take part in this benchmark even if"}, {"start": 1160.22, "end": 1167.66, "text": " you don't have much time or money or hardware right um that's why for example they limit they limit"}, {"start": 1168.22, "end": 1176.0600000000002, "text": " the number of examples in the layer two tasks to a thousand they do investigate that this correlates"}, {"start": 1176.0600000000002, "end": 1181.3400000000001, "text": " with your performance if you were to include the full data 
sets of the layer two tasks so if you"}, {"start": 1181.34, "end": 1187.4199999999998, "text": " just include a thousand examples that correlates well they do investigate they do investigate whether"}, {"start": 1187.4199999999998, "end": 1194.6999999999998, "text": " you can put it on a single GPU they do investigate if you only run it for a thousand steps here you"}, {"start": 1194.6999999999998, "end": 1200.78, "text": " see this experiment you have to run it for a thousand steps basically and you're almost at the level"}, {"start": 1201.34, "end": 1207.1, "text": " if as if you were to run it for 50,000 steps so there's a lots of work to that goes into"}, {"start": 1207.1, "end": 1214.3, "text": " making sure that everybody can kind of participate in this benchmark and that I appreciate this a lot"}, {"start": 1214.3, "end": 1223.34, "text": " and there is actually code available so if you go to uh GitHub and you just search for task adaptation"}, {"start": 1223.34, "end": 1230.62, "text": " actually I had it open before but I don't know so you go to GitHub and you go to google research"}, {"start": 1230.62, "end": 1244.62, "text": " and search for task adaptation you'll you'll find it there is code that downloads all of the"}, {"start": 1244.62, "end": 1251.02, "text": " data sets for you prepares them and there's a script that runs your layer one model so you need"}, {"start": 1251.02, "end": 1258.78, "text": " to provide it a layer one model but then there is a script that um that runs it on all of the"}, {"start": 1258.78, "end": 1266.22, "text": " different layer two tasks and at the end calculates your benchmark for you so that's pretty neat"}, {"start": 1266.22, "end": 1272.54, "text": " and I would encourage you if you have a good idea for a pre-training or for a adaptation algorithm"}, {"start": 1273.26, "end": 1278.7, "text": " take part in the benchmark I suspect there'll be a leaderboard kind of online leaderboard"}, {"start": 1278.7, "end": 1284.7, "text": " coming out at some point otherwise you simply can report the number in your papers and I hope"}, {"start": 1284.7, "end": 1295.5, "text": " you are going to be successful at that all right so that was it from me um have lots of fun and bye bye"}] |
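One of the pretraining variants compared earlier in this transcript is the rotation-style self-supervision (rotate an image, have the network guess the rotation). Since the video only sketches the idea, the following is a generic PyTorch illustration of that kind of pretext task rather than the authors' implementation; `backbone` and `rot_head` are placeholder modules.

```python
import torch
import torch.nn as nn

def rotation_batch(images):
    """Build a self-supervised batch: each image rotated by 0/90/180/270 degrees,
    labelled with the rotation index. `images` is a (B, C, H, W) tensor."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

def pretrain_step(backbone, rot_head, optimizer, images):
    """One step of rotation-prediction pretraining; rot_head maps features to 4 logits."""
    x, y = rotation_batch(images)
    loss = nn.functional.cross_entropy(rot_head(backbone(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```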
Yannic Kilcher | https://www.youtube.com/watch?v=69IjNZaoeao | LeDeepChef 👨🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games | The AI cook is here! This agent learns to play a text-based game where the goal is to prepare a meal according to a recipe. Challenges? Many! The number of possible actions is huge, ingredients change and can include ones never seen before, you need to navigate rooms, use tools, manage an inventory and sequence everything correctly and all of this from a noisy textual description that the game engine throws at you. This paper mixes supervised explicit training with reinforcement learning in order to solve this task.
Abstract:
While Reinforcement Learning (RL) approaches lead to significant achievements in a variety of areas in recent history, natural language tasks remained mostly unaffected, due to the compositional and combinatorial nature that makes them notoriously hard to optimize. With the emerging field of Text-Based Games (TBGs), researchers try to bridge this gap. Inspired by the success of RL algorithms on Atari games, the idea is to develop new methods in a restricted game world and then gradually move to more complex environments. Previous work in the area of TBGs has mainly focused on solving individual games. We, however, consider the task of designing an agent that not just succeeds in a single game, but performs well across a whole family of games, sharing the same theme. In this work, we present our deep RL agent--LeDeepChef--that shows generalization capabilities to never-before-seen games of the same family with different environments and task descriptions. The agent participated in Microsoft Research's "First TextWorld Problems: A Language and Reinforcement Learning Challenge" and outperformed all but one competitor on the final test set. The games from the challenge all share the same theme, namely cooking in a modern house environment, but differ significantly in the arrangement of the rooms, the presented objects, and the specific goal (recipe to cook). To build an agent that achieves high scores across a whole family of games, we use an actor-critic framework and prune the action-space by using ideas from hierarchical reinforcement learning and a specialized module trained on a recipe database.
Authors: Leonard Adolphs, Thomas Hofmann
https://arxiv.org/abs/1909.01646 | Hi there! Today we're looking at LeDeepChef, a deep reinforcement learning agent for families of text-based games, by Leonard Adolphs and Thomas Hofmann. So this is a paper about engineering an agent for a particular family of tasks. This is different from reinforcement learning agents that, for example, are just good at one game, let's say Pong or whatnot, and I guess even things like StarCraft, though this kind of depends on what you mean by game. So what are we talking about here? The following is a text-based game where the goal is to cook recipes. Right, so let's just jump in and see what goes on. The game starts by telling you: you are hungry, let's cook a delicious meal, and so on. So the objective is basically always the same. It's: find the cookbook, read the recipe that's in it, then collect all the things that are in the recipe, prepare them in certain ways that are also specified by the recipe, and then at the end you have a meal and you can eat the meal. And that will give you points. But since it's a text-based game, the input doesn't come structured, it comes in natural text. So the game tells you, for example, kitchen. So basically you're in the kitchen. You are now in the kitchen. I guess you better just go and list everything you see here. You hear a noise, you spin around. So you see that the kind of input you get from the game is very playful, it has a lot of descriptive elements. Sometimes you see a closed oven. You make out a table. Then on the counter you can make out a sliced fried red hot pepper, and so on. So it's very much not trivial to parse this in a traditional way. If you were to go about this by simply writing an algorithm that extracts things, it's very hard, because for example you might see that there's an oven, but it's a closed oven. You make out a table. So this is kind of a synonym for "you see a table", but it doesn't plainly say there is a table. You can make out a sliced fried red hot pepper. And here it's important: not only do you need to realize that there is a red hot pepper, but also that its state is sliced and fried. This is important because you need all ingredients in a certain state. You examine the stove. So there is a stove. So all these things you need to kind of understand. So if you now look, there is a recipe book in that room, and if there is a recipe book, then you can examine the recipe. And that's the command. So the arrows here always indicate that that's a user command, and these you have to type. That's like the next thing that your agent needs to do. You can't just select from a predefined set of actions, you actually need to type in the things you want to do. And these are a lot. Like there are a lot of possibilities of what you could type in. Even if you restrict it to what you know the game accepts, there are still so many actions. It's way different than, for example, Atari games, where there are always, say, eight actions, like eight buttons you could possibly press, and that's it. Here there are combinatorially many things you can do, like you can prepare and take all the ingredients, and you don't know which ingredients come. So here you examine the recipe. Let's look at the recipe. It says: you open the recipe, start reading. Recipe number one. Here are the ingredients: red hot pepper. That's it for now, that's just one ingredient.
Then there are directions. So what do you need to do? Slice the red hot pepper, fry the red hot pepper, and prepare the meal. Those are the directions of the recipe. You also have this inventory command, with that you see what you're carrying. Next difficulty: the inventory is finite, so you can't carry everything; at some point you have to drop things that are unnecessary, you can't just take everything. Here you see the command take red hot pepper. That only works if there's a red hot pepper in the room. And here it says: you take the red hot pepper from the counter, your score has just gone up by one point. And then if you type inventory, it says you're carrying a sliced fried red hot pepper. Again, it also says the state of the ingredient: the ingredient is the red hot pepper, the state is sliced and fried. And then you can prepare meal and then you can eat meal, and it says your score has just gone up by one point. And these are the scores you collect. So there are a lot of difficulties that are actually not shown in this example. For example, there are different rooms. You may have noticed here you're in the kitchen, but there could be other rooms, and you start in a random room. You also need to navigate through the rooms. The doors to the rooms could be closed, and then you need to open them, and so on. For example, if this pepper here weren't already sliced and fried: you can only slice it if there is a knife in the room, and you can only fry it if there is a frying pan or a stove in the room. And then you'd have to notice that there is a knife. If there is no knife, you'd need to take the red hot pepper, bring it to a room with a knife and then slice it. So this is a vastly more difficult game. The last difficulty is that in the test set there will be ingredients that you haven't seen during training, so there as well your agent needs to generalize. That's why it says a family of text-based games: the objective is always the same, to kind of cook the recipe, but the things you have to do and the things that appear and so on, those change basically from episode to episode. And the test set will be different from the training set, so there will be unseen data. Alright, so how does this paper go about solving this problem? This paper basically does the following, and we are going here from high level to low level. On the highest level, it's a reinforcement learning agent, and that is sort of how you would imagine an RL agent to work. So here at the end, you have a policy, and the policy predicts an action. If you don't know what a policy and an action are in RL, these are basic RL concepts and we kind of skip them here; I'll assume everyone knows what they are. But essentially a policy specifies which action you take next given the current game state. So the policy here scores different actions. So at each step, there are K actions available. And for these K actions, as I said before, there are almost infinitely many actions that you could take. The first difficulty, and that's the thing that actually comes in here, is to reduce all of the possible actions, which you can't even list, to just K commands. We'll go into how this is done later, but basically one of the main contributions of this paper is how you even specify what would be reasonable to do in the current situation.
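To give a feel for what "pruning to K reasonable commands" means, here is a toy illustration only. It is not the paper's actual module, which relies on hand-crafted heuristics plus a learned recipe model; the sketch just builds candidate commands from what the agent currently knows: items in the room, the inventory, the recipe directions, and the tools actually present.

```python
def candidate_commands(room_items, inventory, directions, utilities, exits):
    """Toy generator of plausible commands for the current situation."""
    commands = ["look", "inventory", "examine cookbook", "prepare meal", "eat meal"]
    commands += [f"go {d}" for d in exits]                 # navigation between rooms
    commands += [f"take {item}" for item in room_items]    # pick up what is in the room
    commands += [f"drop {item}" for item in inventory]     # free up the finite inventory
    needs = {"slice": "knife", "chop": "knife", "dice": "knife",
             "fry": "stove", "roast": "oven"}              # which tool a direction needs
    for step in directions:                                # e.g. "slice the red hot pepper"
        verb = step.split(" ", 1)[0]
        if needs.get(verb) is None or needs[verb] in utilities:
            commands.append(step)                          # only propose it if the tool is here
    return commands
```

For example, with a red hot pepper in the room, a stove present but no knife, this sketch would propose "take red hot pepper" and "fry the red hot pepper" but not "slice the red hot pepper", which is the flavor of reduction the paper aims for.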
And then the policy over here only has to decide among those reasonable actions, not among all actions. So, given that you have K reasonable commands, you see here, command one, these are embedded and then fed into GRUs, which are recurrent neural networks. So for each of these commands, you'll get a 32-dimensional vector. These 32-dimensional vectors are here c1 through cK. Each is combined with an encoding of the current state. So these 32-dimensional vectors are combined with an encoding of the current state, which is 256-dimensional, and then fed into a neural network that will output a probability distribution over these actions. This is pretty classic in deep reinforcement learning: you have an action encoding and a state encoding, and the policy decides based on that. The state encoding, you'll see here, is the same everywhere, of course, because the current game state is the current game state. This comes from this model up here. What this does is: over here, you have what you would call the state, the current observation. And the current observation is composed of many things, specifically the following eight things. So the first one is actually called observation, which is, I would call all of this the current observation from an RL perspective, but the first is actually the observation. It's whatever you saw, the big text you saw before, like you were in the kitchen, it looks like this, it smells like this, you turn around, and so on. This would be the observation, it's what the game engine says at the current time step. This is just a piece of text, right? Second, missing items; third, unnecessary items. Now these things, you might wonder, okay, how do I know which items are missing and which are unnecessary? These things come from another model that this paper trains, and we'll get into that later. But basically they have a method of specifying which items are still missing and which are unnecessary, and they list those here. Then the description, which is the output of the last look command. So in each room you can look, you can type look, and then it will give you a description of the room and what's in there. The previous commands: this is often used in RL, either explicitly or implicitly through a recurrent network, in order to give the agent an idea of what happened in the previous steps or what it did, so that it doesn't repeat actions unnecessarily, or so it learns to not repeat actions unnecessarily. Required utilities: again, this is a model that's trying to predict what utilities are required to perform some actions. So as I said before, if you want to slice the red hot pepper, you need a knife; if you want to fry it, you need a stove. Discovered locations: as I said, there are different rooms, and you actually don't know what rooms there are before you actually go in there. So only once you go through a door do you reach another room. So the list of previously discovered and visited locations is there. And then the name of the current location is also there. So these are the eight things that make up the current observation. These eight things are just strings of text, and each one of these eight things, from observation to location, is embedded and also fed into an RNN. So for each of these eight things, you'll obtain a 32-dimensional vector, and these are all concatenated to make up one big 256-dimensional vector.
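To make this encoding step a bit more concrete, here is a rough PyTorch-style sketch of such a command scorer. The 32-dimensional text encodings and the 8 x 32 = 256-dimensional state follow the description above; everything else, the names, the shared GRU encoders, the small scoring MLP, and the assumption that text arrives as word indices, is my own guess at one reasonable way to set it up, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommandScorer(nn.Module):
    """Sketch: encode K candidate commands and 8 observation strings with GRUs,
    then score every command against the concatenated state encoding."""

    def __init__(self, vocab_size, emb_dim=64, enc_dim=32, n_obs_parts=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cmd_gru = nn.GRU(emb_dim, enc_dim, batch_first=True)   # encoder for commands
        self.obs_gru = nn.GRU(emb_dim, enc_dim, batch_first=True)   # encoder for observation parts
        state_dim = n_obs_parts * enc_dim                           # 8 * 32 = 256
        self.scorer = nn.Sequential(                                 # scores one (state, command) pair
            nn.Linear(state_dim + enc_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def _encode(self, token_ids, gru):
        # token_ids: (batch, seq_len) word indices for one piece of text
        _, h = gru(self.embed(token_ids))        # h: (1, batch, enc_dim)
        return h.squeeze(0)                      # (batch, enc_dim)

    def forward(self, obs_parts, commands):
        # obs_parts: list of 8 tensors, one per observation component
        # commands:  list of K tensors, one per candidate command
        state = torch.cat([self._encode(p, self.obs_gru) for p in obs_parts], dim=-1)  # (batch, 256)
        scores = torch.cat(
            [self.scorer(torch.cat([state, self._encode(c, self.cmd_gru)], dim=-1))
             for c in commands], dim=-1)                                               # (batch, K)
        return F.softmax(scores, dim=-1), state   # distribution over commands + state encoding
```

In this sketch the returned state encoding is also what the history RNN described next would consume, one step per game turn.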
So this 256-dimensional vector will contain all the necessary information about the current room: what's in there, what items are you still missing, what items do you have in your inventory, which ones are unnecessary, and so on. So if you train this correctly, this 256-dimensional vector will describe the current game state as it is relevant to your agent; every relevant piece of information will be encoded in this vector. Now this vector isn't the final state encoding yet. What you'll have is: you feed this into an RNN that takes as input the last time steps. You have to imagine that at the last time step there was already an observation, blah blah blah, this entire thing, I'm just copying this box over here. So this entire thing was already done at the last step and was already fed into an RNN. So this is an RNN that actually goes over time, and whatever the output here is, it will be fed to the next step. This is a trick often done in reinforcement learning as well, that you actually have a recurrent neural network over the time steps. So at each time step you have a certain observation, you encode it and so on, you get a description of that, and then you feed this into an RNN. What the RNN can learn to do is to react not only to the current observation, but to the current observation conditioned on the history of previous observations. So it can learn: before, I was in this room, now I'm in this new room, so I actually haven't taken all the items from this room yet because I just came into this room, and so on. So the component where you are able to look at the past and at what happened in the past is captured by this RNN here. So it's a fairly complicated architecture, but this state encoding that is conditioned on the history then goes into here. That's the vector that goes in here and is combined with each action. So all of these actions here, these K actions, and this is all fed through a neural network, and that will give you the policy. This is a really complicated thing, but if you look at it, it's not too difficult actually. So what you'll do is you will take your observations here, this is all the observation, it will be encoded and combined with the history in order to give you an encoding of the current state. On the other hand, you'll take all of the possible commands that you could perform right now and encode each one separately into an embedding, and then you combine each one of those with the encoding you computed previously. From that, you make your decision which action to take next. And the action that is output is the action we take next, sampled from this policy. The last thing you need is a value network, and this is just important for reinforcement learning: it tells you, from this state here, and I'm getting weird with colors here, from this state here, which is the same as this one, so you simply transfer this over from this state, how valuable is that? What's the value of this state? And the value is: if I'm in this state and I act as I normally act, what are all my future rewards going to be, combined? So it basically gives you a value of this state. You can think of this, for example, in terms of chess: if you had this in chess, then this here would be a description of the chess board, this hidden state, and the value would be how valuable this position is for you.
So if you're very much ahead in material and position and so on, this value would be very high; if you're behind, this value would be very low. And this is a network that is simply trying to predict that value. So with all of this, you now have a good basis to do reinforcement learning. You have a policy, you have a value network, and from that you can train an RL agent. And this is done classically in an actor-critic way, where you do advantage learning: here is the advantage, the policy you train weighted by the advantage, then the value network you train to be close to the reward, and then you have an entropy penalty. If you don't know what these things are, the video would get a bit too long if I were to go over these reinforcement learning concepts, but they are very standard in reinforcement learning. So what this does is it lets you train these neural networks in the absence of labeled training data, because you don't know what the best action is in each step, right? There's no one telling you, you just have a reward: sometimes you get a point, and you don't know which actions led to that. So these things will actually allow you to train these neural networks by using just the reward, without knowing which exact actions were right and wrong. And that's the core of reinforcement learning, obviously. All right. So one of the core ingredients actually is this recipe manager. And the recipe manager is a submodel that does the following: it takes as input the cookbook here, and it also takes as input the inventory, and it outputs something like this. And this is a table representation of what it outputs. It will output all the ingredients that you need for the recipe, whether or not each ingredient is currently missing from your inventory, and the actions to perform, so which actions still need to be performed. So let's look at this example. The recipe tells you the ingredients you need are a carrot, a red hot pepper and a white onion. And the inventory says you're carrying a white onion and a carrot, right? So down here you see: aha, we do actually have a carrot, so it's not missing. The carrot isn't missing, you have it in your inventory. The red hot pepper is missing, we don't have it in the inventory, but we need it for the recipe. The white onion we need for the recipe, but it's not missing. Then, for each of the ingredients, this recipe model is also supposed to tell you what you still need to perform on it. So here it says slice the carrot, roast the carrot, and you simply have a carrot; it doesn't say sliced or roasted, which means it's not yet sliced and roasted. So the recipe model is supposed to output that you still need to slice and roast the carrot. Here, for example, for the white onion it says fry the white onion, and as you can see in the inventory, it says you're carrying a fried white onion. So for the white onion, you see, we don't need to do anything anymore. So the recipe model is basically trying to make this table here. And this table you can see as an intermediary step in order to do all the other things. And the difference here to a pure RL method, and this is important, is that this representation, this intermediate table representation, is made explicitly. So the recipe model really produces a table like this.
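Just to illustrate the shape of this explicit intermediate representation, the recipe manager's output for the example above could be pictured as a small table like the following. The field names and the pepper's directions are made up for illustration; the paper may encode this differently.

```python
# Illustrative only: an explicit table for the carrot / red hot pepper / white onion example.
recipe_state = [
    {"ingredient": "carrot",         "missing": False, "actions_left": ["slice", "roast"]},
    {"ingredient": "red hot pepper", "missing": True,  "actions_left": ["slice", "fry"]},   # assumed directions
    {"ingredient": "white onion",    "missing": False, "actions_left": []},                 # already fried
]
```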
In other RL methods, people would go about this by making the recipe model output some sort of, let's say, 200-dimensional vector that's supposed to encompass all of this information. And that doesn't appear to work as well. Often, if you simply train this end to end, it will not pick up on the important information, because the training signal tends to be way too weak. You have to imagine you already have this really, really big model construction here, and you're trying to learn it from a tiny reward signal that you get at the end, right? This is a very noisy signal. Now if you say, well, the inputs to these things, this command here, and we also saw the inputs to these, depend on this recipe model, and that is now also some giant neural network construction, and we'll train it all end to end, then these will actually not be text, they will be some sort of latent vectors. That will often fail, because you're now just trying to extract information from too noisy of a reward signal. So the authors here actually do a pretty neat separation of that, and they train this recipe model on an augmented data set. So they go to Freebase and get more food items, and then they construct a data set that resembles this and train the model in a supervised way to output tables like this. So this is pretty smart, and I think it's a good lesson if you ever attempt something like this: if really, really important information such as this can be learned in a supervised way, as a kind of pre-processing step to your RL procedure, that's extremely helpful. Here you can see how this is then used. So by combining this table that was output from the recipe model with your inventory and the output of this look command, you can then generate these commands. So before, we said it's important to reduce everything you could do, which is infinitely many things, to everything that is reasonable to do currently. And this model here does that. So given this, given that, and given the description of what's currently in the room, you can generate these commands. For example "take knife", if you have to slice something, because you see a knife is in the room and you could conceivably take the knife, right? You can construct these commands, but also, since you know what's in your inventory and since you know which things are still missing, you can generate commands like take the white onion, or drop the water because you don't need the water. The authors also group these things into what they call high-level commands: "take all required items from here" simply means take everything that's in the room that is not in the inventory but that you need. For an RL agent it makes sense to group these together, because it doesn't make sense to have them as separate actions: if you need both, take both; and if you have things in your inventory that you don't need, drop all of them. So that makes sense. This is a small optimization that apparently brought some gains, but the overarching message here is that once you have this information from the recipe model, you can then use it in many useful ways in order to make life for your RL agent easier. All right. So that is kind of the entire model.
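Before moving on, here is a hypothetical reconstruction of the command-generation step just described, not the authors' actual rules: it turns such a table, together with the room contents and the inventory, into a short candidate list. The action-to-tool mapping is an assumption for illustration.

```python
# Hypothetical sketch of candidate-command generation from the recipe table.
ACTION_TOOLS = {"slice": "knife", "chop": "knife", "dice": "knife", "fry": "stove", "roast": "oven"}

def generate_candidate_commands(recipe_state, room_items, inventory):
    commands = ["look", "inventory"]

    if any(row["missing"] for row in recipe_state):
        commands.append("take all required items")     # high-level command grouping the takes
    else:
        commands.append("prepare meal")                # in the real system also conditioned on being in the kitchen

    for row in recipe_state:
        for action in row["actions_left"]:
            tool = ACTION_TOOLS.get(action)
            if tool:
                if tool in room_items and tool not in inventory:
                    commands.append(f"take {tool}")    # e.g. "take knife"
                commands.append(f"{action} {row['ingredient']} with {tool}")

    needed = {row["ingredient"] for row in recipe_state}
    for item in inventory:
        if item not in needed:
            commands.append(f"drop {item}")            # free up the limited inventory
    return commands
```

In the actual system, the navigational commands (open door, move in a direction, and so on) produced by the direction model described next would be appended to this list as well.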
It's quite convoluted, but basically you start with this recipe manager, which outputs this table down here: which ingredients are in the recipe, are they still missing, and which actions do you still need to perform. You then combine it with this information here, the information about the current room and your inventory, in order to come up with a set of commands that are conceivable to do here. You combine these commands with some commands that are always available, so commands that are always available are things like look and inventory, and also prepare meal, which you add if the recipe manager does not output any missing items and the agent's location is the kitchen. So you can add these other items, and you also add navigational commands, which we're not even going to get into in detail, because there are doors in these rooms and you need to navigate around. So they actually train another model here to detect directions that you could move in, and to open doors, for every closed door in the room. So that's another challenge that the agent needs to overcome: they have to build an entire model to predict which doors are there, whether they are closed and whether you need to open them. So if there are doors and you can move through them, these commands are also added to this set of commands that are reasonable. So now we have a set of commands that are reasonable over here. Then you describe the room here, you put both into this embedding, and then finally your policy outputs an action. That's the entire process. Very convoluted, very big, quite astonishing that this works with RL, but in order to get it to work you actually need to do this supervised training. And the experimental evidence here is quite solid, in that they compare to baseline systems that use classic techniques, they do some ablations over their individual parts, and they get second place, I think, in a competition about these text-based games. So that's pretty good. And that was it for me. Check it out. Bye bye. | [{"start": 0.0, "end": 6.32, "text": " Hi there! Today we're looking at LaDepchef, deep reinforcement learning agent for families"}, {"start": 6.32, "end": 14.84, "text": " of tech space games by Leonardo Adolfs and Thomas Huffman. So this is a paper about engineering"}, {"start": 14.84, "end": 20.92, "text": " an agent for a particular family of tasks. This is different from reinforcement learning"}, {"start": 20.92, "end": 29.2, "text": " agents that, for example, are just good at one game, let's say, pong or whatnot. And even"}, {"start": 29.2, "end": 37.72, "text": " I guess even things like Starcraft. Though this kind of depends on what you mean by game."}, {"start": 37.72, "end": 45.64, "text": " So what are we talking about here? The following is a text-based games where the goal is to"}, {"start": 45.64, "end": 58.120000000000005, "text": " cook recipes. Right, so let's just jump in and see what goes on. The game starts by telling"}, {"start": 58.12, "end": 65.12, "text": " you you are hungry. Right, let's cook a delicious meal. And so on. So the objective is basically"}, {"start": 65.12, "end": 71.12, "text": " always the same. It's find the cookbook, read recipe that's in it, then collect all the"}, {"start": 71.12, "end": 78.8, "text": " things that are in the recipe, prepare them in certain ways that are also specified"}, {"start": 78.8, "end": 83.92, "text": " by the recipe, and then at the end you have a meal and then you can eat the meal. 
And"}, {"start": 83.92, "end": 90.48, "text": " that will give you points. But since it's a text-based games and the input doesn't come"}, {"start": 90.48, "end": 98.28, "text": " structured, but it comes in natural text. So the game tells you, for example, kitchen."}, {"start": 98.28, "end": 102.48, "text": " So basically you're in the kitchen. You are now in the kitchen. I guess you better just"}, {"start": 102.48, "end": 108.52000000000001, "text": " go and list everything you see here. You hear a noise. You spin around. So you see that"}, {"start": 108.52, "end": 115.52, "text": " the kind of input you get from the game is very playful. It has a lot of descriptive elements."}, {"start": 115.52, "end": 126.52, "text": " Sometimes you see a closed oven. You make out a table. Then you can see on the counter"}, {"start": 126.52, "end": 134.32, "text": " you can make out a sliced fried red hot pepper and so on. So it's very much not trivial"}, {"start": 134.32, "end": 140.48, "text": " to kind of parse this in a traditional way. If you were to go about this by simply writing"}, {"start": 140.48, "end": 146.28, "text": " an algorithm extracting things, it's very hard because for example you might see that"}, {"start": 146.28, "end": 151.92, "text": " there's an oven, but it's a closed oven. You make out a table. So this is kind of a"}, {"start": 151.92, "end": 159.12, "text": " synonym for you. You see a table. But you see like there is a table. You can make out"}, {"start": 159.12, "end": 164.12, "text": " a sliced fried red hot pepper. And here it's important. Not only do you need to realize"}, {"start": 164.12, "end": 171.04, "text": " that there is a red hot pepper, but also that its state is sliced and fried. This is important"}, {"start": 171.04, "end": 181.0, "text": " because you need all ingredients in a certain state. You examine the stove. So there is"}, {"start": 181.0, "end": 191.04000000000002, "text": " a stove. So all these things you need to kind of understand. So if you now look, there"}, {"start": 191.04, "end": 199.51999999999998, "text": " is a recipe book in here or no, there isn't a recipe. You can examine recipe. I guess"}, {"start": 199.51999999999998, "end": 205.76, "text": " there is a recipe book in that room. If there is a recipe book, then you can examine the"}, {"start": 205.76, "end": 211.51999999999998, "text": " recipe. And that's the command. So the arrows here always indicate that that's a user"}, {"start": 211.51999999999998, "end": 218.39999999999998, "text": " command. And these you have to type. That's like the next thing that your agent needs"}, {"start": 218.4, "end": 225.88, "text": " to do. You can select from a predefined set of actions. You actually need to type in"}, {"start": 225.88, "end": 232.4, "text": " the things you want to do. And these are a lot. Like there are a lot of possibilities"}, {"start": 232.4, "end": 237.4, "text": " of what you could type in. Even if you restrict it to kind of what you know the game accepts,"}, {"start": 237.4, "end": 243.68, "text": " there are still so many actions. It's way different than for example Atari games. They"}, {"start": 243.68, "end": 248.28, "text": " always have eight action. Like there's eight buttons. You could possibly present. That's"}, {"start": 248.28, "end": 256.08, "text": " it. And here there are like combinatorically many things you can do. Like you can prepare"}, {"start": 256.08, "end": 263.56, "text": " and take and all the ingredients. You don't know which ingredients come. 
So then yeah."}, {"start": 263.56, "end": 268.12, "text": " So here you examine the recipe. Let's look at the recipe. It says you open the recipe,"}, {"start": 268.12, "end": 274.84000000000003, "text": " start reading. Recipe number one. Here are the ingredients. Red hot pepper. Here for"}, {"start": 274.84, "end": 278.67999999999995, "text": " right now. That's just one ingredient. Then there are directions. So what do you need to"}, {"start": 278.67999999999995, "end": 284.71999999999997, "text": " do? Slice the red hot pepper. Fry the red hot pepper and prepare the meal. That's"}, {"start": 284.71999999999997, "end": 291.32, "text": " that are those are the directions of the recipe. You also have this this inventory command."}, {"start": 291.32, "end": 299.64, "text": " With Tazi which you're carrying next difficulty. The inventory is finite. So you can't carry"}, {"start": 299.64, "end": 304.12, "text": " everything at some point. You have to drop things that are unnecessary. You can't just"}, {"start": 304.12, "end": 310.76, "text": " take everything. Here you see the command take red hot pepper. That only works if there's"}, {"start": 310.76, "end": 317.24, "text": " a red hot pepper in the room. And here says you take the red hot pepper from the counter."}, {"start": 317.24, "end": 322.0, "text": " Your score has just gone up by one point. And then if you type inventory. It says you're"}, {"start": 322.0, "end": 331.2, "text": " carrying a sliced fried red hot pepper. Again here it all it says the state of the ingredient."}, {"start": 331.2, "end": 337.08, "text": " So the ingredient is the red hot pepper. The state is sliced and fried. And then you can"}, {"start": 337.08, "end": 341.71999999999997, "text": " prepare meal and then you can eat meal. And then it says your score has just gone up by"}, {"start": 341.71999999999997, "end": 347.36, "text": " one point. And these are the scores you collect. So there are a lot of difficulties that"}, {"start": 347.36, "end": 352.48, "text": " are actually not shown in this example. For example, there are different rooms. You may"}, {"start": 352.48, "end": 358.24, "text": " have noticed here you're in the kitchen. But there could be other rooms and you start"}, {"start": 358.24, "end": 364.24, "text": " in a random room. You also need to navigate through the rooms. The doors to the rooms could"}, {"start": 364.24, "end": 375.68, "text": " be closed. And then you need to open them and so on. You can only for example, if this"}, {"start": 375.68, "end": 385.44, "text": " pepper here weren't already sliced and fried, you need to find, you can only slice it if"}, {"start": 385.44, "end": 391.94, "text": " there is a knife in the room. You can only fry it if there is a frying pan or an oven"}, {"start": 391.94, "end": 401.6, "text": " or a stove. Sorry, a stove in the room. And then you'd have to notice that there is a"}, {"start": 401.6, "end": 406.52, "text": " knife. If there is no knife, you'd need to take the red hot pepper, bring it to a room"}, {"start": 406.52, "end": 415.15999999999997, "text": " with a knife and then slice it. So this is vastly difficult game. The last difficulty"}, {"start": 415.16, "end": 423.0, "text": " is actually that in the test set, there will be ingredients that you haven't seen during"}, {"start": 423.0, "end": 429.88000000000005, "text": " training. So also that there, your agent needs to generalize. 
That's why it says a family"}, {"start": 429.88000000000005, "end": 434.92, "text": " of text-based games because the objective always the same to kind of cook the recipe. But"}, {"start": 434.92, "end": 440.48, "text": " the things you have to do and the things that appear and so on, those are those change"}, {"start": 440.48, "end": 447.24, "text": " basically from episode to episode. And the test set will be different than the training"}, {"start": 447.24, "end": 454.0, "text": " set or kind of there will be unseen data. Alright, so how does this paper go about solving"}, {"start": 454.0, "end": 464.24, "text": " this problem? This paper basically does the following and we are going here from high"}, {"start": 464.24, "end": 471.92, "text": " level to low level. On the highest level, it's a reinforcement learning agent. And that"}, {"start": 471.92, "end": 484.28000000000003, "text": " is sort of how you would imagine an RL agent to work. So here at the end, you have a policy."}, {"start": 484.28000000000003, "end": 489.32, "text": " And the policy predicts an action. If you don't know what a kind of a policy in an action,"}, {"start": 489.32, "end": 496.12, "text": " things are in RL. These are basic RL concept and we kind of skip them here and I'll assume"}, {"start": 496.12, "end": 502.68, "text": " everyone knows what they are. But essentially a policy specifies which action you take next"}, {"start": 502.68, "end": 511.03999999999996, "text": " given the current game state. So the policy is made up scores different actions. So at"}, {"start": 511.04, "end": 519.48, "text": " each step, there are K actions available. And these K actions, I foreset, there are almost"}, {"start": 519.48, "end": 528.16, "text": " infinitely many actions that you could take. The first difficulty and that's the thing"}, {"start": 528.16, "end": 535.04, "text": " that actually comes in here is to reduce all of the possible actions that you can't even"}, {"start": 535.04, "end": 543.3199999999999, "text": " list to just K commands. So that we'll go into that later how this is done. But basically"}, {"start": 543.3199999999999, "end": 552.0799999999999, "text": " one of the main contributions of this paper is how do you even specify what is reasonable,"}, {"start": 552.0799999999999, "end": 557.64, "text": " what would be reasonable to do in the current situation. And then the policy over here only"}, {"start": 557.64, "end": 564.0799999999999, "text": " has to decide among those reasonable actions, not among all actions. So but given that you"}, {"start": 564.08, "end": 571.6800000000001, "text": " have K reasonable commands, you see here, command one, these are embedded and then fed into"}, {"start": 571.6800000000001, "end": 577.2, "text": " GRUs which are a recurrent neural networks. So for each of these commands, you'll get"}, {"start": 577.2, "end": 589.48, "text": " a 32 dimensional vector. This 32 dimensional vector is here C1 through CK. Each are combined"}, {"start": 589.48, "end": 597.64, "text": " with an encoding of the current state. So these 32 dimensional vector are combined with"}, {"start": 597.64, "end": 606.08, "text": " encoding of the current state, which is 256 dimensional and then fed into a neural network"}, {"start": 606.08, "end": 611.64, "text": " that will output a probability distribution over these actions. This is pretty classic"}, {"start": 611.64, "end": 617.48, "text": " in a deeper enforcement learning. 
So you have an action encoding and the state encoding"}, {"start": 617.48, "end": 622.96, "text": " and the policy decides on that. The state encoding, you'll see here, it's the same everywhere"}, {"start": 622.96, "end": 628.28, "text": " of course because the current game state is the current game state. This comes from this"}, {"start": 628.28, "end": 637.36, "text": " model up here. What this does is over here, you have the what you would call the state,"}, {"start": 637.36, "end": 646.76, "text": " the current observation. And the current observation is composed of many things. And specifically"}, {"start": 646.76, "end": 652.28, "text": " the following eight things. So the first one is actually called observation, which is,"}, {"start": 652.28, "end": 658.28, "text": " I would call all of this the current observation from an RL perspective. But the first is"}, {"start": 658.28, "end": 663.6, "text": " actually observation. It's whatever you saw, the big text you saw before, like you were"}, {"start": 663.6, "end": 667.88, "text": " in the kitchen, it looks like this, it smells like this, you turn around and so on. This"}, {"start": 667.88, "end": 672.76, "text": " would be the observation. It's what the game engine says at the current time step. This"}, {"start": 672.76, "end": 681.64, "text": " is just a piece of text, right? Second, missing items, third, unnecessary items. Now these"}, {"start": 681.64, "end": 689.08, "text": " things, you might wonder, okay, how do I know what what items are missing and unnecessary?"}, {"start": 689.08, "end": 696.88, "text": " These things come from another model that this paper trains and we'll get into that later."}, {"start": 696.88, "end": 702.0, "text": " But basically they have a method of specifying which items are still missing, which are"}, {"start": 702.0, "end": 710.92, "text": " necessary. And they list those here. Then description, which is the output of the last look"}, {"start": 710.92, "end": 715.32, "text": " commands. So in each room, you can look, you can type look, and then it will give you a"}, {"start": 715.32, "end": 723.64, "text": " description of the room and what's in there. The previous commands, this is often used"}, {"start": 723.64, "end": 731.88, "text": " in RL either explicitly or implicitly through a recurrent network in order to give the"}, {"start": 731.88, "end": 738.52, "text": " agent an idea of what happened in the previous steps or what it did so that it doesn't repeat"}, {"start": 738.52, "end": 747.12, "text": " actions unnecessarily. Or so it learns to not repeat actions unnecessarily. Required"}, {"start": 747.12, "end": 755.2, "text": " utilities, again, this is a model that's kind of trying to predict what utilities are required"}, {"start": 755.2, "end": 760.96, "text": " to perform some actions. So as I said before, if you want to slice the red hot pepper, you"}, {"start": 760.96, "end": 769.36, "text": " need a knife. If you want to fry it, you need a stove. Discovered locations. As I said,"}, {"start": 769.36, "end": 775.0400000000001, "text": " there are different rooms. You actually don't know what rooms there are before you actually"}, {"start": 775.0400000000001, "end": 782.6800000000001, "text": " go in there. So before you go through a door, you reach another room. So the list of previously"}, {"start": 782.6800000000001, "end": 790.52, "text": " discovered and visited locations is there. 
And then the name of the current location,"}, {"start": 790.52, "end": 797.68, "text": " it is also there. So these are eight things that make up the current observation. These"}, {"start": 797.68, "end": 804.72, "text": " eight things are just strings of text. And these eight things are each one, as you can"}, {"start": 804.72, "end": 810.56, "text": " see here, these are the eight things from observation to location. Each one are embedded"}, {"start": 810.56, "end": 817.24, "text": " and fed also into an RNM. So for each of these eight things, you'll obtain a 32-dimensional"}, {"start": 817.24, "end": 823.76, "text": " vector. And these are all concatenated to make up one big to 56-dimensional vector. So"}, {"start": 823.76, "end": 830.84, "text": " this 256-dimensional vector will contain all the necessary information about the current"}, {"start": 830.84, "end": 836.0, "text": " room, what's in there, what items are you still missing, what items do you have in your"}, {"start": 836.0, "end": 841.16, "text": " inventory, which ones are unnecessary and so on. So if you train this correctly, this"}, {"start": 841.16, "end": 849.04, "text": " 256-dimensional vector will describe the current game state as it is relevant to your"}, {"start": 849.04, "end": 854.68, "text": " agent, like everything about it, every relevant information that's in here will be encoded"}, {"start": 854.68, "end": 861.7199999999999, "text": " in this vector. Now this vector isn't the final state encoding yet. What you'll have"}, {"start": 861.7199999999999, "end": 869.36, "text": " is you feed this into an RNM that takes, as input, the last time steps you have to imagine,"}, {"start": 869.36, "end": 876.5600000000001, "text": " the last time step already there was observation blah blah blah. This entire thing was, I'm just"}, {"start": 876.5600000000001, "end": 887.72, "text": " copying this box over here. So this entire thing was already done last step and already fed"}, {"start": 887.72, "end": 895.2, "text": " into an RNM. So this is an RNM that actually goes over time. And the last, whatever the output"}, {"start": 895.2, "end": 901.2, "text": " here is, it will be fed to the next step. This is a trick often done in reinforcement learning"}, {"start": 901.2, "end": 908.5200000000001, "text": " as well, that you actually have a recurrent neural network over the time steps. So each"}, {"start": 908.5200000000001, "end": 913.5600000000001, "text": " time step you have a certain observation, you encode it and so on, you get a description"}, {"start": 913.5600000000001, "end": 919.1600000000001, "text": " of that and then you feed this into an RNM. What the RNM can learn to do is it can learn"}, {"start": 919.16, "end": 928.36, "text": " to react to different, not only to the current observation, but to the current observation"}, {"start": 928.36, "end": 935.16, "text": " conditioned on the history of previous observations. So it can learn, before I was in this room,"}, {"start": 935.16, "end": 942.9599999999999, "text": " now I'm in this new room. So I actually haven't taken all the items from this room yet because"}, {"start": 942.96, "end": 950.76, "text": " I just came into this room and so on. So the kind of component where you are able to look"}, {"start": 950.76, "end": 961.24, "text": " at the past and what happened in the past is encaptured by this RNM here. 
So it's fairly"}, {"start": 961.24, "end": 970.44, "text": " complicated architecture, but this here, this state encoding that is conditioned on the"}, {"start": 970.44, "end": 981.24, "text": " history then goes into this into here. That's the vector that goes in here is combined with"}, {"start": 981.24, "end": 990.12, "text": " each action. So all of these actions here, these K actions and this is all fed through"}, {"start": 990.12, "end": 996.1600000000001, "text": " a neural network and that will give you the policy. This is a really complicated thing,"}, {"start": 996.16, "end": 1006.52, "text": " but if you look at it, it's not too difficult actually. So what you'll do is you will take"}, {"start": 1006.52, "end": 1015.24, "text": " your observations here. This is all observation. It will be encoded and combined with the"}, {"start": 1015.24, "end": 1022.48, "text": " history in order to give you this, in order to give you an encoding of the current state."}, {"start": 1022.48, "end": 1028.04, "text": " On the other hand, you'll take all of the possible commands that you could perform right"}, {"start": 1028.04, "end": 1035.56, "text": " now and code each one separately, right, into an embedding and then you combine each one"}, {"start": 1035.56, "end": 1043.68, "text": " of those with this encoding you specified previously. From that, you make your decision"}, {"start": 1043.68, "end": 1051.3600000000001, "text": " which action to take next. And the action here is the one that's output is the action"}, {"start": 1051.36, "end": 1059.36, "text": " we take next sampled from this policy. The last thing you need is a value network and this"}, {"start": 1059.36, "end": 1068.1599999999999, "text": " is just important for reinforcement learning which tells you from this state here. So I'm"}, {"start": 1068.1599999999999, "end": 1075.4399999999998, "text": " getting weird with colors here. From this state here, which is the same as this one. So"}, {"start": 1075.4399999999998, "end": 1080.8799999999999, "text": " you simply transfer this over from this state. How valuable is that? What's my value of"}, {"start": 1080.88, "end": 1087.0800000000002, "text": " this state? And the value is if I'm in this state and I act as I normally act, what are"}, {"start": 1087.0800000000002, "end": 1094.1200000000001, "text": " all my future rewards going to be combined. So it basically gives you a value of this"}, {"start": 1094.1200000000001, "end": 1101.24, "text": " state. You can think of this in, for example, terms of chess, if you had this in chess and"}, {"start": 1101.24, "end": 1107.0400000000002, "text": " then this here is it would be a description of the chess board, this h team. And the value"}, {"start": 1107.04, "end": 1111.8, "text": " would be how valuable is this position for you. So if you're very much ahead in material"}, {"start": 1111.8, "end": 1117.12, "text": " and position and so on, this value would be very high if you're behind this value would"}, {"start": 1117.12, "end": 1125.12, "text": " be very low. And this is an error that works simply trying to predict that value. So with"}, {"start": 1125.12, "end": 1131.28, "text": " all of this, you now have a, you have a good basis to do reinforcement learning. You have"}, {"start": 1131.28, "end": 1139.28, "text": " a policy, you have a value network. And from that, you can train an RL agent. 
And this is"}, {"start": 1139.28, "end": 1148.6399999999999, "text": " done classically in an actocratic way where you do advantage learning here, the advantage,"}, {"start": 1148.6399999999999, "end": 1154.76, "text": " the policy you train weighted by the advantage, then the value network you train to be close"}, {"start": 1154.76, "end": 1160.08, "text": " to their reward. And then you have an entropy penalty. If you don't know what these things"}, {"start": 1160.08, "end": 1167.28, "text": " are, the video will get bit too long if I were to go over these reinforcement learning concepts,"}, {"start": 1167.28, "end": 1172.4399999999998, "text": " but these are very standard in reinforcement learning. So you can train these, you can"}, {"start": 1172.4399999999998, "end": 1179.36, "text": " basically train what it does is you can train these neural networks in absence of label"}, {"start": 1179.36, "end": 1184.6399999999999, "text": " training data. Because you don't know what the best action is in each step, right? There's"}, {"start": 1184.6399999999999, "end": 1189.48, "text": " no one telling you. You just have a reward. You just sometimes you get a point and you"}, {"start": 1189.48, "end": 1195.84, "text": " don't know which actions led to that. So these things will actually allow you to train these"}, {"start": 1195.84, "end": 1202.76, "text": " neural networks by using just the reward without knowing which exact actions were right and"}, {"start": 1202.76, "end": 1213.2, "text": " wrong. And that's the core of reinforcement learning, obviously. All right. So the core,"}, {"start": 1213.2, "end": 1219.92, "text": " one of the core ingredients actually is this recipe manager. And the recipe manager is a"}, {"start": 1219.92, "end": 1233.28, "text": " submodel that does the following. So here it takes as an input the cookbook here and it"}, {"start": 1233.28, "end": 1240.44, "text": " also takes as an input the inventory and it outputs something like this. And this, this"}, {"start": 1240.44, "end": 1247.2, "text": " is a, this is a table of representation of what it outputs. It will output all the ingredients"}, {"start": 1247.2, "end": 1256.16, "text": " that you need for the recipe. Whether or not this input that this ingredient is currently"}, {"start": 1256.16, "end": 1269.4, "text": " missing from your inventory and action to perform. So which actions still need to be performed."}, {"start": 1269.4, "end": 1274.64, "text": " So let's look at the following. Let's look at this example. The recipe tells you you need"}, {"start": 1274.64, "end": 1282.0, "text": " the ingredients are a carrot, a red hot pepper and a white onion. And the inventory says you"}, {"start": 1282.0, "end": 1292.3600000000001, "text": " care you're carrying a white onion and a carrot, right? So down here you see a ha we do actually"}, {"start": 1292.36, "end": 1301.84, "text": " have, we do actually have a carrot. So it's not missing. The carrot isn't missing. You have"}, {"start": 1301.84, "end": 1306.1999999999998, "text": " it in your inventory. The red hot pepper is missing. We don't have it in the inventory."}, {"start": 1306.1999999999998, "end": 1314.04, "text": " But we need it for the recipe. The white onion, we need for the recipe, but it's not missing."}, {"start": 1314.04, "end": 1321.12, "text": " Then it also is for each of the ingredients is supposed to tell you this recipe model,"}, {"start": 1321.12, "end": 1325.6, "text": " each of the what you still need to perform on it. 
So here it says slice the carrot, roast"}, {"start": 1325.6, "end": 1330.8799999999999, "text": " the carrot. And you simply have a carrot. It doesn't say sliced a roast. That means it's"}, {"start": 1330.8799999999999, "end": 1336.6799999999998, "text": " not sliced and roasted. So the recipe is supposed to output you still need to slice and roast"}, {"start": 1336.6799999999998, "end": 1344.56, "text": " the carrot. Here for example for the white onion says fry the white onion. And as you can"}, {"start": 1344.56, "end": 1354.52, "text": " see in the inventory it says you're carrying a fried white onion. So for the white onion,"}, {"start": 1354.52, "end": 1362.96, "text": " you see we don't need to do anything anymore. So the recipe model is basically trying to"}, {"start": 1362.96, "end": 1371.2, "text": " to make this table here. And this table you can see as an intermediary step in order to"}, {"start": 1371.2, "end": 1377.48, "text": " do all the other things. And the difference here to a pure RL method and this important,"}, {"start": 1377.48, "end": 1383.48, "text": " the difference is that this representation, this intermediate table representation is done"}, {"start": 1383.48, "end": 1391.2, "text": " explicitly. So the recipe model really produces a table like this. And not just in other"}, {"start": 1391.2, "end": 1397.8, "text": " RL methods, people will go about and make this recipe model output some sort of, you know,"}, {"start": 1397.8, "end": 1405.08, "text": " let's say a 200 dimensional vector that's supposed to encompass all of this information."}, {"start": 1405.08, "end": 1411.8799999999999, "text": " And that doesn't appear to work as well. Like often that if you simply train this end"}, {"start": 1411.8799999999999, "end": 1417.32, "text": " to end, that will not pick up on the important information because the training signal tends"}, {"start": 1417.32, "end": 1424.28, "text": " to be way too weak. You have to imagine you already have this really, really big model"}, {"start": 1424.28, "end": 1430.44, "text": " construction here. And you're trying to learn it. You're trying to learn it from a tiny"}, {"start": 1430.44, "end": 1436.96, "text": " reward signal that you get at the end, right? This is very noisy signal. Now if you're now"}, {"start": 1436.96, "end": 1443.44, "text": " trying to say, well, the inputs to these things, right, this command here and we also saw"}, {"start": 1443.44, "end": 1449.92, "text": " the inputs to these, these depend on this recipe model. Also now are whatever giant neural"}, {"start": 1449.92, "end": 1454.96, "text": " network construction here. And we'll all train this end to end. And these will actually"}, {"start": 1454.96, "end": 1461.8000000000002, "text": " not be text. These will actually be some sort of latent vectors. That will often fail"}, {"start": 1461.8000000000002, "end": 1468.1200000000001, "text": " because you're now just trying to extract information from too noisy of a reward signal."}, {"start": 1468.1200000000001, "end": 1474.3600000000001, "text": " So the authors here do actually pretty neat separation of that. And they train this"}, {"start": 1474.36, "end": 1481.32, "text": " recipe model with actually an augmented data set. So they go to free base and get more"}, {"start": 1481.32, "end": 1489.6799999999998, "text": " food items. 
And then they construct a data set that resembles this and train it in a supervised"}, {"start": 1489.6799999999998, "end": 1498.9599999999998, "text": " way to output tables, tables like this. So this is pretty smart. And I think it's a good"}, {"start": 1498.96, "end": 1504.88, "text": " lesson if you ever attempt something like this that really, really important information"}, {"start": 1504.88, "end": 1511.4, "text": " such as this one. If you can train it in a supervised way as a kind of a pre-processing"}, {"start": 1511.4, "end": 1520.72, "text": " step to your RL procedure, that's extremely helpful. Here you can see how this is then"}, {"start": 1520.72, "end": 1529.92, "text": " used. So by combining this table that was output from the recipe model and your inventory"}, {"start": 1529.92, "end": 1538.8, "text": " and the output of this look command, you can then generate these commands. So before we"}, {"start": 1538.8, "end": 1545.4, "text": " said it's important to reduce the everything you could do, which is infinite things, to"}, {"start": 1545.4, "end": 1553.0, "text": " everything that is reasonable to do currently. And this model here does that. So given this,"}, {"start": 1553.0, "end": 1559.16, "text": " given that, and given the description of what's currently in the room, you can have generate"}, {"start": 1559.16, "end": 1566.6000000000001, "text": " these commands. And for example, take knife. If you have to slice something because you"}, {"start": 1566.6000000000001, "end": 1573.72, "text": " see a knife is in the room and you could conceivably take the knife, right? You can construct"}, {"start": 1573.72, "end": 1583.92, "text": " these commands, but also since you know what's in your inventory and since you know which"}, {"start": 1583.92, "end": 1590.74, "text": " things are still missing, you can generate commands like take the widening or drop the"}, {"start": 1590.74, "end": 1598.16, "text": " water because you don't need the water. So the authors also group these things here in"}, {"start": 1598.16, "end": 1604.28, "text": " this what they call high level commands, which take all required items from here, simply"}, {"start": 1604.28, "end": 1609.96, "text": " means take everything that's in the room that is not in the inventory, but you need it."}, {"start": 1609.96, "end": 1615.8400000000001, "text": " So these things, which for an oral agent, it makes sense to group these things together"}, {"start": 1615.8400000000001, "end": 1620.92, "text": " because it doesn't make sense to have them as two separate things. If you need both,"}, {"start": 1620.92, "end": 1628.0400000000002, "text": " take both. If you don't need any, what if you have an inventory drop, all of these things."}, {"start": 1628.04, "end": 1635.28, "text": " So that makes sense. This is small optimization that apparently brought some gains, but"}, {"start": 1635.28, "end": 1642.44, "text": " the kind of the overarching message here is that once you have a once you have this information"}, {"start": 1642.44, "end": 1649.08, "text": " from the recipe model, you can then use it in many useful ways in order to make life"}, {"start": 1649.08, "end": 1657.76, "text": " for your oral agent easier. All right. So that kind of is the entire model. 
It's very"}, {"start": 1657.76, "end": 1663.92, "text": " it's quite convoluted, but basically you start with this here, this recipe manager,"}, {"start": 1663.92, "end": 1671.96, "text": " you decide you output this table down here, which ingredients are in the recipe, are they"}, {"start": 1671.96, "end": 1677.96, "text": " still missing and which actions to any to perform. You then combine it with this information"}, {"start": 1677.96, "end": 1683.6, "text": " here, the information about the current room and your inventory in order to come up with"}, {"start": 1683.6, "end": 1690.3999999999999, "text": " a set of commands that are conceivable to do here. You combine these commands with some"}, {"start": 1690.3999999999999, "end": 1700.28, "text": " commands that are always available. So commands that are always available are things like look,"}, {"start": 1700.28, "end": 1707.24, "text": " inventory, prepare meal, you add that, right. You add that if the recipe manager does not"}, {"start": 1707.24, "end": 1714.44, "text": " output any missing and the agent's location is the kitchen. So you can add these other"}, {"start": 1714.44, "end": 1721.28, "text": " items and also we're not even going to get into that. You're at navigational items because"}, {"start": 1721.28, "end": 1725.08, "text": " there are doors in these rooms and you need to navigate around. So they actually train"}, {"start": 1725.08, "end": 1735.72, "text": " another model to here is the to detect detect directions that you could move into and open"}, {"start": 1735.72, "end": 1742.08, "text": " doors for every closed door in the room. So that's another challenge that the agent needs"}, {"start": 1742.08, "end": 1747.4, "text": " to overcome. They have to build an entire model to predict which doors are there and are"}, {"start": 1747.4, "end": 1753.28, "text": " they closed. You need to open them. So these commands if there are doors and if you can"}, {"start": 1753.28, "end": 1759.64, "text": " move through them, these commands are also added to this set of commands that are reasonable."}, {"start": 1759.64, "end": 1766.0, "text": " So now we have a set of commands that are reasonable over here. Then you describe the room"}, {"start": 1766.0, "end": 1774.0, "text": " here. You put both into this embedding and then finally your policy outputs and action."}, {"start": 1774.0, "end": 1781.68, "text": " That's the entire process. Very convoluted, very big, very astonishing that this works with"}, {"start": 1781.68, "end": 1790.3200000000002, "text": " our L-butt in order to get it to work. You actually need to do this supervised training."}, {"start": 1790.3200000000002, "end": 1796.3600000000001, "text": " And the experimental evidence here is quite solid in that they compare to baseline systems"}, {"start": 1796.3600000000001, "end": 1808.16, "text": " that use classic techniques and they do some ablation over their individual parts and"}, {"start": 1808.16, "end": 1815.72, "text": " they get second place, I think, in a competition about these tech space games. So that's pretty"}, {"start": 1815.72, "end": 1845.68, "text": " good. And that was it for me. Check it out. Bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=BK3rv0MQMwY | [News] The Siraj Raval Controversy | Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of students of his $200 online course have accused him of breaking major promises he made when advertising the course and of denying them refunds. Second, his paper on "The Neural Qubit" appears to be plagiarized almost verbatim.
https://www.reddit.com/r/MachineLearning/comments/d7ad2y/d_siraj_raval_potentially_exploiting_students/
https://www.reddit.com/r/MachineLearning/comments/dh2xfs/d_siraj_has_a_new_paper_the_neural_qubit_its/ | There is a massive controversy going on right now, and in the middle of it is Siraj Raval, a prominent YouTuber. So today I'll actually just be reporting on this briefly, not giving too much opinion, just stating what's up in a very high-level overview. Because if you haven't heard of this, I think it's important that you do, and this is both sad and funny, to a degree more sad actually, but make up your own opinion. So Siraj Raval is a very prominent YouTuber who makes videos that are mostly, let's say, coding tutorials, or that explain short concepts in the field of machine learning, and recently he also branched out into other fields, like here, watch me build a marketing startup, and so on. So what happened? There were two recent developments. First of all, he offered a course, and the course was $200. This is one of his students on Twitter, and many more have come out: he offered this course for $200 and basically said make money with machine learning, that was the course. And he said he was going to take 500 students into this course, and there would be personalized learning, personalized support, basically from him, and he said he was all in on this course. Then the students discovered that there were actually over a thousand people in the course, and there was almost no personalized support. So there were only something like 50 minutes of his weekly time for Q&A and 30 minutes of video content, and they apparently also replied to all the code submissions with the exact same email, things like this. He actually split up the students into two different Slack groups so they wouldn't notice that there are over a thousand people, so about two groups of 500 people each. Then people wanted a refund, and apparently, when he hit the Slack limit, he transferred them to Discord, added everyone who wanted a refund to a Discord channel, and then simply banned them. I mean, there are many more stories from students about this course. Apparently this course really was a bit of a scam, especially regarding the refunds. There was no refund policy, and then, about two weeks into the course I think, a refund policy appeared, so two weeks after the course started, and the refund policy said you can get a refund within two weeks of the course starting. So this is all just really, really weird, and I encourage you to read up more on this, because there are many more stories about this course. So he apologized publicly and said he shouldn't have done that, he should have hired TAs and so on; he apologized for it, and that seemed to be kind of the end of that. I don't exactly know what happened to the students; some claim they never got a refund and so on. But then it went on, and it went on badly for Siraj, if I may say, because he published a paper called The Neural Qubit, and people have gone through it, and it turns out that it is almost all plagiarized from one or two other papers. Actually, it turns out it's, I think, two papers, and it's almost all plagiarized from there, and you can see, for example, that the green sections on the left and the red sections on the right are exactly identical. This table up here, I think it's on the next page of the other paper, is taken exactly from the other paper. If you look at these equations, they are all the same. The sentences are exactly the same, and so on.
He only changed a few things. Also the diagrams, you see here on the upper left, are taken exactly from this other paper, and I think he mentions this other paper, he cites it once, and he says his work is kind of a derivative of that, or leaned on that, and so on, but these aren't explicit quotes here. The only change that he made is that whenever the other paper says "we can write the combined transformation", he says "I can write": thanks to the CV encoding, I get a nonlinear functional. There's a rule in computer science that the only person who is allowed to do this is Donald Knuth, no one else. I mean, that's a holy rule broken. So, more seriously, he changed that, and then he also used a couple of synonyms which make no sense. So for example he replaced the word gate by the word door, and of course a logic gate then becomes a logic door. So here it's a non-Gaussian gate phi, and I don't know if in this exact instance, here it actually says gate, but sometimes it's replaced by door. And he also replaced the term complex Hilbert space with complicated Hilbert space, which makes no sense at all. So yeah, it's funny and sad at the same time. So this happened, and again he's apologizing: "I've seen claims that The Neural Qubit is partly plagiarized, is this true?" And he basically, he sort of blames it on the fact that he's doing too many videos a week, which, I agree, I mean, I can tell you that making videos is hard, even crappy videos like mine, and his are actually edited and so on. But the problem is that many more people came out and said that he did the same thing to their projects. Here you see someone: he did the exact same thing to our project, it took four people a couple of months to do, and he acted like it was his own. And many more came out and said he plagiarized other things as well, where he basically just takes code, gives minimal or no attribution to the original authors, and then passes it off as his own. And this right after the course thing, yeah, it could not get any worse. So this all happened, and I encourage you to go read up on it to make up your own mind. I just want to point out one thing quickly at the end, and I'm not going to focus on the identity of the person posting this, you can find out if you really want to, but it's not about that person, it's about the kind of sentiment. So there is a sentiment going around that you should unfollow him, because following lends credibility to him, and there is a point to be made there: if prominent researchers refer to him and so on, that gives him some credibility. But I'm also very much against a sort of cancel culture. It is also the case that, no matter how much he plagiarized, he has popularized the field more than anyone else, and maybe, you know, there is a conversation to be had and a lesson to be learned without immediately cancelling someone. I mean, it's a complicated issue, but I just kind of wanted to get this out there, so go read up on it, it's a wild world. That being said, bye bye, have fun.
| [{"start": 0.0, "end": 6.84, "text": " There is a massive controversy going on right now and in the middle is Siraj Raval,"}, {"start": 6.84, "end": 9.36, "text": " a prominent YouTuber."}, {"start": 9.36, "end": 14.96, "text": " So today I'll just be actually shortly reporting on this, not giving too much opinion,"}, {"start": 14.96, "end": 20.88, "text": " just kind of stating what's up in a very high level overview."}, {"start": 20.88, "end": 28.44, "text": " Because if you haven't heard of this, I think it's important that you do and this is both"}, {"start": 28.44, "end": 35.760000000000005, "text": " sad and funny to a degree more sad actually, but make your own opinions."}, {"start": 35.760000000000005, "end": 43.6, "text": " So Siraj Raval is a very prominent YouTuber that makes videos mostly, let's say, coding tutorials"}, {"start": 43.6, "end": 49.400000000000006, "text": " or explaining short concepts in the field of machine learning."}, {"start": 49.400000000000006, "end": 57.16, "text": " And recently also branched out into other fields like here, watch me build a marketing"}, {"start": 57.16, "end": 58.879999999999995, "text": " startup and so on."}, {"start": 58.879999999999995, "end": 59.879999999999995, "text": " So what happened?"}, {"start": 59.879999999999995, "end": 63.36, "text": " It was two recent developments."}, {"start": 63.36, "end": 68.8, "text": " First of all, he offered a course and the course was $200."}, {"start": 68.8, "end": 77.92, "text": " And this is one of his students on Twitter and many more have come out and he offered"}, {"start": 77.92, "end": 85.44, "text": " this course for $200 and basically said make money with machine learning."}, {"start": 85.44, "end": 87.56, "text": " That was the course."}, {"start": 87.56, "end": 96.2, "text": " And he said he was going to take 500 students in this course and it would be personalized,"}, {"start": 96.2, "end": 104.47999999999999, "text": " learning personalized support from basically from him or he said he is all in into this"}, {"start": 104.47999999999999, "end": 106.36, "text": " course."}, {"start": 106.36, "end": 113.72, "text": " Then the students discovered that there were actually over a thousand people in the course"}, {"start": 113.72, "end": 117.4, "text": " and there was almost no personalized support."}, {"start": 117.4, "end": 125.12, "text": " So there was only what giving 50 minutes of his weekly time to do Q&A, 30 minutes of video"}, {"start": 125.12, "end": 132.6, "text": " content and they apparently also replied to all the code submissions with the exact same"}, {"start": 132.6, "end": 135.04, "text": " email."}, {"start": 135.04, "end": 138.52, "text": " So things like this."}, {"start": 138.52, "end": 144.56, "text": " He actually split up the students into two different Slack groups so they wouldn't notice"}, {"start": 144.56, "end": 146.44, "text": " that there are over a thousand people."}, {"start": 146.44, "end": 152.32000000000002, "text": " So about two, 500 people groups."}, {"start": 152.32000000000002, "end": 160.32000000000002, "text": " Then people wanted to wanted a refund and then apparently when he hit the Slack limit,"}, {"start": 160.32000000000002, "end": 168.4, "text": " he transferred them to Discord and he added everyone that wanted to refund to Discord"}, {"start": 168.4, "end": 172.16, "text": " channel and then simply banned them."}, {"start": 172.16, "end": 179.20000000000002, "text": " I mean there are many more stories of students about this course."}, 
{"start": 179.20000000000002, "end": 188.24, "text": " Apparently this was kind of really a bit of a scam this course without especially the"}, {"start": 188.24, "end": 189.24, "text": " refunds."}, {"start": 189.24, "end": 195.6, "text": " There was no refund policy and then about two weeks I think into the course there was"}, {"start": 195.6, "end": 200.07999999999998, "text": " a refund policy so after two weeks after the course started and the refund policy said"}, {"start": 200.07999999999998, "end": 204.64, "text": " you can get a refund within two weeks of the course starting."}, {"start": 204.64, "end": 214.76, "text": " So this is all just kind of really, really weird and I encourage you to read up more on"}, {"start": 214.76, "end": 220.24, "text": " this because there are many more stories about this course."}, {"start": 220.24, "end": 230.64000000000001, "text": " So he apologized publicly and said he shouldn't have done that, he should have hired TAs and"}, {"start": 230.64000000000001, "end": 234.48000000000002, "text": " so on."}, {"start": 234.48000000000002, "end": 240.28, "text": " That he apologized for it and that seemed to be kind of the end of that."}, {"start": 240.28, "end": 242.52, "text": " I don't exactly know what happened to the students."}, {"start": 242.52, "end": 245.60000000000002, "text": " Some claim they never got a refund and so on."}, {"start": 245.6, "end": 253.6, "text": " But then it went on and it went on badly for Sriraj if I may say because he published"}, {"start": 253.6, "end": 260.88, "text": " a paper called the Neural Cupid and people have gone and it turns out that it is almost"}, {"start": 260.88, "end": 266.04, "text": " all plagiarized from one or two other papers."}, {"start": 266.04, "end": 271.4, "text": " Actually, it turns out it's I think it's two papers and it's almost all plagiarized from"}, {"start": 271.4, "end": 276.28, "text": " there and you can see on the left the green sections and on the right the red sections"}, {"start": 276.28, "end": 278.91999999999996, "text": " are exactly identical for example."}, {"start": 278.91999999999996, "end": 284.88, "text": " This table up here I think is on the next page of the other paper is exactly this from"}, {"start": 284.88, "end": 286.52, "text": " the other paper."}, {"start": 286.52, "end": 290.67999999999995, "text": " If you look at whatever these equations they are all the same."}, {"start": 290.67999999999995, "end": 293.79999999999995, "text": " The sentences are exactly the same and so on."}, {"start": 293.79999999999995, "end": 301.2, "text": " He only changed like the also the diagrams you see here on the upper left exactly taken"}, {"start": 301.2, "end": 308.2, "text": " from this other paper and I think he mentions this other paper he cites it once and he says"}, {"start": 308.2, "end": 314.84, "text": " his work is kind of a derivative of that or leaned on that and so on but these aren't"}, {"start": 314.84, "end": 319.08, "text": " explicitly quotes here."}, {"start": 319.08, "end": 325.28, "text": " The only change that he made or changed is like so whenever the other paper says we can"}, {"start": 325.28, "end": 330.44, "text": " write the combined transformation here he says I can write."}, {"start": 330.44, "end": 333.71999999999997, "text": " Thanks to the CV encoding I get an online ear functional."}, {"start": 333.71999999999997, "end": 338.4, "text": " There's a rule in computer science the only person who is allowed to do this is Donk"}, {"start": 338.4, 
"end": 339.4, "text": " Neuth."}, {"start": 339.4, "end": 340.4, "text": " No one else."}, {"start": 340.4, "end": 344.16, "text": " That's I mean that's holy rule broken."}, {"start": 344.16, "end": 351.28, "text": " So more seriously he changed that and then he also kind of used a couple of synonyms"}, {"start": 351.28, "end": 354.24, "text": " which make no sense."}, {"start": 354.24, "end": 360.40000000000003, "text": " So for example he replaced the word gate by the word door and of course a logic gate"}, {"start": 360.40000000000003, "end": 363.56, "text": " then becomes a logic door."}, {"start": 363.56, "end": 372.24, "text": " So here it's a non Gaussian gate phi and I don't know if in this instance but in this instance"}, {"start": 372.24, "end": 377.84000000000003, "text": " he replaced it was it I don't know here it actually says gate but sometimes it's replaced"}, {"start": 377.84, "end": 386.84, "text": " by door and also he replaced the word complex Hilbert space to complicated Hilbert space"}, {"start": 386.84, "end": 390.35999999999996, "text": " which makes no sense at all."}, {"start": 390.35999999999996, "end": 397.64, "text": " So this I yeah it's funny and sad at the same time."}, {"start": 397.64, "end": 406.67999999999995, "text": " So this happened and again he's apologizing I've seen claims that mineral kbf is partly"}, {"start": 406.68, "end": 415.2, "text": " plagiarized is this true and he basically claims it he sort of blames it he says he's"}, {"start": 415.2, "end": 421.36, "text": " doing too many videos a week which I agree I mean I can I can tell you that making videos"}, {"start": 421.36, "end": 429.24, "text": " is hard even crappy videos like mine and his are actually edited and so on."}, {"start": 429.24, "end": 438.8, "text": " But the problem is many people more came out and said that he did the same thing to their"}, {"start": 438.8, "end": 442.92, "text": " project here you see someone he did exact same thing to our project it took four people"}, {"start": 442.92, "end": 450.0, "text": " a couple of months to do he acted like it was his own and many more came out and said"}, {"start": 450.0, "end": 458.96, "text": " he plagiarized other he plagiarized other things as well where he basically just takes code"}, {"start": 458.96, "end": 466.36, "text": " and gives minimal or no attribution to the original authors and then pass it off as his"}, {"start": 466.36, "end": 470.2, "text": " own."}, {"start": 470.2, "end": 473.96, "text": " This after this course yeah everyone this could not get any words."}, {"start": 473.96, "end": 486.0, "text": " So this all happened I mean I encourage you to go read up on it to make up your own"}, {"start": 486.0, "end": 487.0, "text": " mind."}, {"start": 487.0, "end": 491.91999999999996, "text": " I just want to point out quickly the end and I want to actually show the identity of the"}, {"start": 491.91999999999996, "end": 497.67999999999995, "text": " person posting this if you really want to find out but it's not about that person it's"}, {"start": 497.67999999999995, "end": 501.59999999999997, "text": " about the kind of sentiment so there is a sentiment around that you should kind of"}, {"start": 501.6, "end": 511.16, "text": " unfollow him and because that lens credibility to him and there is a point to be made of that"}, {"start": 511.16, "end": 517.6800000000001, "text": " kind of if if the kind of prominent researchers refer to him and so on that gives him some"}, {"start": 517.6800000000001, "end": 524.36, 
"text": " credibility but I'm also very much against sort of a cancel culture it is also the case"}, {"start": 524.36, "end": 531.44, "text": " that he no like no matter how much plagiarized has popularized the field more than anyone"}, {"start": 531.44, "end": 540.36, "text": " else and maybe you know there is a conversation to be had and a lesson to be learned without"}, {"start": 540.36, "end": 547.4000000000001, "text": " immediately cancelling someone that's just so that I mean there's it's a complicated"}, {"start": 547.4000000000001, "end": 557.32, "text": " issue but just kind of want to get this out there so go read up on this it's all it's"}, {"start": 557.32, "end": 562.2, "text": " a wild world so that being said bye bye have fun."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=rvr143crpuU | Accelerating Deep Learning by Focusing on the Biggest Losers | What if you could reduce the time your network trains by only training on the hard examples? This paper proposes to select samples with high loss and only train on those in order to speed up training.
Abstract:
This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks (DNNs) by prioritizing examples with high loss at each iteration. Selective-Backprop uses the output of a training example's forward pass to decide whether to use that example to compute gradients and update parameters, or to skip immediately to the next example. By reducing the number of computationally-expensive backpropagation steps performed, Selective-Backprop accelerates training. Evaluation on CIFAR10, CIFAR100, and SVHN, across a variety of modern image models, shows that Selective-Backprop converges to target error rates up to 3.5x faster than with standard SGD and between 1.02--1.8x faster than a state-of-the-art importance sampling approach. Further acceleration of 26% can be achieved by using stale forward pass results for selection, thus also skipping forward passes of low priority examples.
Authors: Angela H. Jiang, Daniel L.-K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminsky, Michael Kozuch, Zachary C. Lipton, Padmanabhan Pillai
https://arxiv.org/abs/1910.00762 | Hi there, today we're looking at Accelerating Deep Learning by Focusing on the Biggest Losers, by Angela Jiang et al. This paper is pretty simple, pretty short in its idea, and is pretty much an engineering paper. So we'll go over this idea, give it a good look, and discuss advantages, disadvantages, and so on. So what's the basic idea? The basic idea is the following. If you train a neural network, what do you do? Usually you have a training data set, which I represent here, each line is a sample, and usually the network has a bunch of layers, so each line here is a layer of weights. What you do is you group your training data set into mini batches. Let's say that's a mini batch of four samples, and you pass it through the network; this is called forward propagation. You then calculate the loss, so the loss of your forward-propagated signal, and then you back-propagate this loss. Now when back-propagating, you want to back-propagate the loss such that it reaches each of the layers and tells each layer how to update itself. So for each layer, you actually need to backprop the loss once towards the layer below it and once towards itself, in order for the layer below to continue the backprop and for the layer itself to update its weights. So each time you backprop, it's basically once towards the lower layer and once towards yourself, and that's a lot of work. So whatever work you have passing your samples through the network here, you basically double that work going back. And that's the core idea of this paper: if you look at a traditional training step of a neural network, you'll have some overhead in each training step, some overhead of maybe moving the data to the GPU or something like this, then you have the time that you require for a forward pass, and then you have a big chunk that you require for the backward pass. And you see it's about double the size of the forward pass. And this paper asks: how can we reduce this backward-pass time? So what they propose is the following. They propose: well, if the backward pass is expensive and we do it for each data point in these mini batches, why don't we stop doing that and only try to select examples that are important? And once we have selected the important examples, only those examples get to do the backward pass. Thereby, let's say if we only select one third of the examples to do the backward pass, we can reduce the amount of work required in the backward pass by one third, or sorry, by two thirds. And the way they select the important examples is by looking at the loss. So they basically say: whichever examples have a high loss, these must be the important examples, these are the hard examples. And if we only train on the hard examples, or if we train on the hard examples more, then the network will learn on these hard examples faster. Of course, there is an implication there that if your network is good on the hard examples, it's also going to be good on the easy examples; that's like the definition of hard and easy examples. Of course, that's kind of a simplifying assumption. But so the idea is to select the hard examples only by how much loss they have, and only then backprop these hard examples. And that's how they can reduce this by a lot. 
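To put a rough number on "a lot" (this is a back-of-the-envelope estimate, not a figure from the paper): if a forward pass on a mini batch costs about F and the backward pass costs about 2F, a traditional step costs roughly F + 2F = 3F. With selective backprop keeping a fraction s of the examples, a step costs roughly F for the selection forward pass plus s * (F + 2F) for the actual training, i.e. (1 + 3s) * F; for s = 1/3 that is 2F, so in the best case you save about a third of the compute per step, before any overhead.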
Now, there are several intricacies here. The setup time, of course, is the same. What they do next is they forward-propagate the entire mini batch, because they need the loss of each example, and therefore they need to forward-propagate the entire mini batch. At the end of this, they select the examples with the highest loss, and they only use those in training. Now, training actually consists of another forward pass, but this one is much smaller, because you only forward-pass the examples that you're actually training on. And then the backward pass will accordingly also be much, much smaller, because again, you now have fewer samples to actually train on. Now, the reason that you even need this second forward pass is the following. When you do backprop, you can't simply start with a signal at the back and then backprop it through the network; that doesn't work with most network architectures. Namely, what you need to do is, while you forward-pass, remember information at each layer. A good example of this is the max-pool operation. In max pool, you maybe have four pixels that are next to each other and you select one of them. Now you need to remember during the forward pass which one you selected, otherwise the backward pass won't work; you need to know which pixel to backprop through. And that's why at each point you need to remember information to inform the backward pass. So that's why you basically need a second forward pass with only the examples that you want to train on. So you forward-pass once, calculate the loss, select the ones with high loss, then forward-pass those again and then backprop only those examples. That's the main gist of it. And this is exactly what you see here: forward-pass everything, then forward-pass again those that have high loss, and then backprop them. There is actually an interesting thing in this graph: you see that this forward pass here is also shorter than that forward pass, and I assume that's because this forward pass actually needs to do the additional saving of information, while that forward pass is simply a forward pass without any intention of backward passing, and you can instruct the deep learning frameworks not to remember this information. All right, then they have another improvement over their algorithm, called stale selective backprop. So this is called selective backprop, and they also have stale selective backprop. What stale selective backprop does is it says: well, we might not always need this selection forward pass. What we might be able to do instead is the following. First we take the entire data set (let's actually use a different color here, let's use this), we forward-propagate the entire data set through the network and then save the losses into some database. And then we use these losses to select the individual points for a while. So maybe this is training here: you start here, you do this loss calculation, then you run your training for a couple of epochs, and then you say, okay, now this information is really outdated, I should update it, and you do this entire thing again. And then you run training some more, until you again stop and say, okay, now this information is stale again. Thereby, you can amortize the cost of these forward passes: you pass your entire training set through once in a while and then use those losses to select the hard examples. And since that's amortized, you can again reduce the forward-pass cost that's used for selection by a lot. And of course, the paper shows that this doesn't hurt your performance too much if you work with this stale information. 
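To make this two-forward-pass structure concrete, here is a minimal PyTorch-style sketch of one selective-backprop training step. This is not the authors' released code: `model`, `criterion` (constructed with reduction='none') and `optimizer` are placeholders, and for simplicity it keeps the top third of the batch by loss deterministically, whereas the paper selects examples probabilistically (a sketch of that selection rule follows further below).

```python
import torch

def selective_backprop_step(model, criterion, optimizer, x, y, keep_frac=1/3):
    # 1) Cheap selection pass: forward the whole mini-batch WITHOUT building
    #    the autograd graph, just to obtain per-example losses.
    with torch.no_grad():
        losses = criterion(model(x), y)      # criterion must use reduction='none'

    # 2) Keep the hardest examples (here: deterministic top-k by loss;
    #    the paper instead keeps each example with a loss-dependent probability).
    k = max(1, int(keep_frac * x.size(0)))
    idx = torch.topk(losses, k).indices

    # 3) Second, smaller forward pass on the selected examples only.
    #    This pass stores activations, so the backward pass is possible.
    optimizer.zero_grad()
    loss = criterion(model(x[idx]), y[idx]).mean()
    loss.backward()                          # backward pass on ~keep_frac of the batch
    optimizer.step()
    return loss.item()
```

The torch.no_grad() context in the first pass is exactly the "don't remember this information" instruction mentioned above: no activations are stored, which is why the selection forward pass is cheaper than the training forward pass.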
Right, so this is the entire idea of the algorithm, and the algorithm is detailed here. Very briefly: you have this buffer and you go through the data. You forward-pass every example, and for each example's loss you calculate the probability that you should retain it. So it's a probabilistic framework, not an absolute cutoff. If an example is chosen probabilistically, you append it to this buffer. If the buffer reaches a certain size, then you do the backprop only on this buffer. This buffer now only has the high-loss examples, or rather it has the high-loss examples with higher probability. And don't forget: within this backward step here, there is also an implicit forward pass. And then you clear the buffer and go on. So you see, there are lots of forward passes here to compute the losses, and then every now and then there is a backward pass, whenever the buffer reaches a certain size. Now, the probabilistic calculation of how and when to retain a sample is very simple. You have a deque of recent losses, a history of recent losses. You simply calculate the percentile that a given loss has in this history, and that percentile then decides the probability, once you raise it to a power, and that looks something like this. What's often used in this paper is this 33 percent selection; that would be the blue curve. And you see the median example here: if you are at the median, then you have about a 33 percent chance of being retained. If you have a higher loss than that, then your probability rises. 
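Here is a small sketch of that retention rule, under the assumption that the probability is simply the loss's percentile within a rolling history of recent losses, raised to a power beta; the exponent, the history size, and the helper names are illustrative choices, not values or names taken from the paper's code.

```python
import random
from collections import deque

class LossHistorySelector:
    """Keep-probability = (percentile of this loss among recent losses) ** beta."""

    def __init__(self, beta=2, history_size=1024):
        self.beta = beta
        self.history = deque(maxlen=history_size)   # rolling window of recent losses

    def keep_probability(self, loss):
        self.history.append(loss)
        rank = sum(l <= loss for l in self.history)  # how many recent losses this one matches or beats
        percentile = rank / len(self.history)        # in (0, 1]; high-loss examples land near 1
        return percentile ** self.beta

    def select(self, loss):
        return random.random() < self.keep_probability(loss)

# Usage sketch: fill a buffer with probabilistically retained examples and only
# run the (implicit forward + backward) training step once the buffer is full.
# selector = LossHistorySelector(beta=2)
# buffer = []
# for example, loss in per_example_losses(batch):   # hypothetical helper
#     if selector.select(loss):
#         buffer.append(example)
#     if len(buffer) >= batch_size:
#         train_on(buffer)                           # hypothetical training step
#         buffer.clear()
```

If the percentiles are roughly uniform over training, the expected keep rate of percentile ** beta is 1 / (beta + 1), so beta = 2 retains about a third of the examples on average, in the spirit of the 33 percent selection curve discussed above.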
All right. So the first interesting thing is actually this graphic here, where they show what the algorithm deems the hardest and the easiest examples, i.e. the examples chosen most and least frequently. This is the CIFAR-10 data set, which consists of 32-by-32 color images that you need to classify into 10 categories. And you see the easiest images, the ones chosen least frequently, are almost all automobiles, and of the automobiles they're almost all ones where you see the full car with the wheels, well, not like this one, this one here. So these are what the algorithm deems easy samples. And if you look at the hard samples, the examples chosen most frequently by selective backprop, it's these here. For example, bird and bird: in this case it's just kind of a smear, they're just kind of smears on a blue background, and understandably, at this resolution it's pretty hard to make out that this is a bird. Airplane and automobile here: you see that it's only a partial view of the object, not the full car like we saw in the easy pictures, only a partial one, and that seems to be pretty hard, which is understandable. This cat here, to me it's even unclear whether it's a cat or a dog, and I think dog is also a category in CIFAR-10, so the algorithm is, understandably, confused by this example and deems it hard. And here, even more striking, you see "truck", and this isn't a truck as far as I can make out; these are two humans in the picture, with no truck anywhere visible. So this seems to be a mislabeled example, and of course mislabeled examples are going to have a high loss for the algorithm. This is the first criticism, and the authors recognize it: if you upweight your examples with high loss, you are going to upweight all the mislabeled examples as well, and thereby you're going to train more on the mislabeled examples, and thereby you're going to possibly degrade your test performance much more than if you had given every sample the same weight. The authors actually address this nicely by doing an experiment, and the experiment is: what if we artificially mislabel examples, how much can these algorithms tolerate? So they have these graphics here where they show test error over time; the x axis is the number of backpropped images, which is kind of a time dimension in training these algorithms. The blue is the traditional model, and the pink is selective backprop with a 33% retention rate. You see that selective backprop is much faster in reaching a low error, and this first plot is simply with 1% of the labels shuffled, so 1% of the images are mislabeled; selective backprop is still much faster than the traditional trajectory. If you go to 10% shuffled, you can still see that selective backprop is faster in reaching a low error; of course the error now is generally higher. But if you go to 20% shuffled labels, it starts to become clear: selective backprop retains 33% of the hardest examples, and 20% of the examples have a wrong label. That means a large part of what it upweights are wrongly labeled examples; there are still a lot of correctly labeled examples, but you see it gets to a lower error and then goes up again, as it kind of massively overfits on these wrongly labeled examples, because it upweights them so much. At the beginning every example is hard, so the wrongly labeled examples get about the same weight as the correctly labeled examples, because the network isn't trained yet; but as the error goes lower, it starts to massively overfit. Compared to that, the traditional model just reaches this low error, which, okay, is now corrupted by the wrong labels, but it doesn't hurt as much. So that's kind of my first criticism: if you have a lot of noisy labels, a lot of mislabeled examples, then this method might actually hurt more than it helps. But the level is interesting: it can kind of tolerate 10%, but it gets into trouble at 20 or more percent. So this is the first criticism, and that's how the authors address it. I really like the ablation study that they do. This here is kind of the meat of the experiments. What they show in these curves on the bottom, and let's look at this curve, is, on the x axis, actual wall-clock time: how much time do you need in order to reach a low error? On the y axis is test set error. You see the traditional model in blue has a certain trajectory. Now, Kath18 is a baseline, don't worry about it; what we're interested in is selective backprop, which is the pink, and which you can see outperforms the traditional training. And what we're also interested in is stale SB, stale meaning it has this buffer of stored loss information that is supposed to reduce the time even further, and you see that it outperforms the traditional approach even more. You can also see that the staleness apparently doesn't hurt the performance too much: the error is fairly close, and it reaches this error in a much shorter time. This is on CIFAR-10. There is a nice table up here where they show the speed-up to reach a given error. What they do is they take the test set error of the traditional model and ask how fast these methods are in reaching that error times a constant: times 1.1, times 1.2, times 1.4. 
Now, of course, reaching 1.4 times the final error is easier, but it's also easier for the traditional model, so that's the catch; these are just the benchmarks they chose: how much faster these models are in reaching 1.1, 1.2, and 1.4 times the error of a traditionally trained model. And you can see here, on CIFAR-10 for example... actually, let's go to SVHN. SVHN is the easiest of the data sets and it shows this most clearly. The traditional error is 1.7%, and you see that selective backprop is 3.4 times faster in reaching 1.1 times the error of the traditional model; it's also 3.4 times faster in reaching 1.2 times, and 3.5 times faster in reaching 1.4 times. Stale selective backprop is even faster: 4.3, 4.9, and 5 times faster, the last one being for reaching 1.4 times the error. So what you can see here is that these methods really make training faster, but there are two important things to note in this table. First of all, you can see that as you go to the right in the table, the speed-ups get higher. What that means is that as you make the problem easier, so as the error you need to reach gets higher, which is to say the loss value you need to reach gets higher, these methods are faster by a larger factor. In other words, they're really fast at reaching a somewhat decent point, which is what's represented here, but if you need them to reach a more and more accurate performance, they themselves get slower and slower. And this is of course clear, because what you're doing is you're no longer treating every data point the same; you are introducing a bias into your training by only training on the hard examples. So you're introducing a bias, and this bias will give you a speed-up but also hurt your performance, and thereby, if you have to get more and more accurate, you will lose much of that speed-up, because at the end you need to reduce the bias that you introduced. So that's the first caveat: as you want to get to higher and higher performance, these methods will help less and less, because they basically introduce a bias to gain speed at the beginning of training, or to reach less accurate points. The second thing is, look at these problems here: SVHN, 1.7% error; CIFAR-10 is a slightly harder problem, 2.9% error; and CIFAR-100 is a really hard problem, where a traditional model has 18% error. If you look at the speed-ups now, then you can see, even at the rightmost end here: there you have the 3.5x and 5x speed-ups, here we have a 1.5x and 2x speed-up, and here we have a 1.2x and 1.6x speed-up. So as the problems get harder, as the models get fancier and the number of classes grows, the speed-up is much lower. And I believe that's because the bias you introduce by reweighting the samples will hurt you much more on a difficult, large problem with a large network than it will hurt you on an easy problem. On an easy problem you can get away with introducing some bias, but if you have a hard, noisy problem, then this bias will hurt you much more, and thereby the speed-up that these methods give you is much, much smaller. So this means that the benefit of these methods is directly anti-correlated with the hardness of the problem, and that tells me it kind of becomes almost unusable, or it goes in that direction, when I look at the numbers over here and extrapolate them to something like ImageNet. 
It tells me that these methods are going to be almost useless on a dataset of the size and complexity as ImageNet. And the interesting problems nowadays are very much in the domain of more hard, more complex problems. So the kind of usefulness of this method in practice is something that I wouldn't bet on just from reading this paper. I'm open to be convinced otherwise, but just from reading this paper, it seems like the harder you make the problem, the less these methods help, and that's exactly not what you want. You want exactly the opposite. You want to say, oh, if I scale this up, it'll give me even more of a speed up. And that's going to be even better, but this is the opposite. And given that they have no, basically, no theoretical analysis of how much this bias hurts you or how you can still make it kind of good in expectation, how you would need to correct at the end and so on, I would first, of course, test it. I'm very interested to see tests on larger and more complex problems. From this, I'm a bit skeptical. I'm sorry. Yeah. So they show that on these tapes that it clearly helps, it clearly speeds up the training. And that's, of course, that's already a good, good thing. And they do the required experiments. They do the ablation studies on these data sets and so on. So you can see here, for example, on these first graphic on all the data sets, it clearly goes down as you introduce the more sophisticated algorithms. But again, you can see on the hard data set, it doesn't go down as much. All right, but they do discuss this. They're really fair to themselves. They do discuss this in their paper of how, you know, how practical this is and so on. And what else they tried and didn't work. And that's, I think that it's a really good paper in itself. And it's a really good investigation. All right. So that was it for me. Have a fun day. Bye bye. | [{"start": 0.0, "end": 5.5600000000000005, "text": " Hi there, today we're looking at accelerating deep learning by focusing on the biggest"}, {"start": 5.5600000000000005, "end": 15.4, "text": " losers by Angela Giang at L. This paper is pretty simple, pretty short in idea and is"}, {"start": 15.4, "end": 20.96, "text": " a pretty much an engineering paper. So we'll go over this idea and give it a good look"}, {"start": 20.96, "end": 27.28, "text": " and discuss advantages, disadvantages, and so on. So what's the basic idea? The basic"}, {"start": 27.28, "end": 34.800000000000004, "text": " idea is the following. If you train in real network, what do you do? Usually you have"}, {"start": 34.800000000000004, "end": 41.160000000000004, "text": " a training date set, which are represent here, each line is a sample, and usually you have"}, {"start": 41.160000000000004, "end": 48.32, "text": " the network has a bunch of layers. So each line here is a layer of weights. What you do"}, {"start": 48.32, "end": 53.2, "text": " is you group your training date set into mini batches. Let's say that's a mini batch for"}, {"start": 53.2, "end": 59.68000000000001, "text": " samples and you pass it through the network. This is called the forward propagation. You"}, {"start": 59.68000000000001, "end": 69.92, "text": " then calculate the loss. So loss of your forward propagated signal and then you back propagate"}, {"start": 69.92, "end": 76.32000000000001, "text": " this loss. 
Now when back propagating, you want to back propagate the loss such that it reaches"}, {"start": 76.32000000000001, "end": 81.44, "text": " each of the layers and it tells each layer how to update itself. So what you want to do is"}, {"start": 81.44, "end": 86.32, "text": " for each layer, you actually need to back prop the loss once towards the layer below it"}, {"start": 86.32, "end": 92.84, "text": " and once towards itself in order for the layer below it to continue the back prop and for"}, {"start": 92.84, "end": 99.88, "text": " the layer itself to update its weights. So each time you back prop basically wants towards"}, {"start": 99.88, "end": 106.44, "text": " the lower layer and wants towards yourself. And that's a lot of work. So you see whatever"}, {"start": 106.44, "end": 114.52, "text": " work you have passing your samples through the network here, whatever work you have, you"}, {"start": 114.52, "end": 121.96, "text": " basically double the work going back. And that's the core idea of this paper is if you"}, {"start": 121.96, "end": 129.8, "text": " look at the following in a traditional training neural network, you'll have some overhead in"}, {"start": 129.8, "end": 136.12, "text": " each training step, some overhead of maybe putting the data to the GPU or something like"}, {"start": 136.12, "end": 143.6, "text": " this, then you have a time that you require for a forward pass and then you have a big chunk"}, {"start": 143.6, "end": 148.96, "text": " that you require for the backward pass. And you see it's about double the size of this forward"}, {"start": 148.96, "end": 157.28, "text": " pass. And this paper asks how can we reduce this backward pass time. So what they propose"}, {"start": 157.28, "end": 164.08, "text": " is the following. They propose, well, if the backward pass is expensive and we do it here"}, {"start": 164.08, "end": 170.16000000000003, "text": " for each data point in these mini batches, why don't we stop doing this and only kind"}, {"start": 170.16000000000003, "end": 177.68, "text": " of try to select examples that are important. And once we only have selected the important"}, {"start": 177.68, "end": 185.4, "text": " examples, only those examples get to do the backward pass. Thereby, let's say if we can"}, {"start": 185.4, "end": 191.56, "text": " only select one third of the examples to do the backward pass, we can reduce the amount"}, {"start": 191.56, "end": 199.4, "text": " that's required in the backward pass, the amount of work by one third or sorry, by two thirds."}, {"start": 199.4, "end": 205.52, "text": " And the way they select the important examples is by looking at the loss. So they basically"}, {"start": 205.52, "end": 212.32, "text": " say whichever examples have a high loss, this must be the important examples, these are"}, {"start": 212.32, "end": 218.08, "text": " the hard examples. And if we only train on the hard examples or if we train on the hard"}, {"start": 218.08, "end": 227.48000000000002, "text": " examples more, then the network will learn on these hard examples faster. Of course, there"}, {"start": 227.48000000000002, "end": 232.04000000000002, "text": " is an implication there that if your network is good on the hard examples, it's also going"}, {"start": 232.04000000000002, "end": 239.60000000000002, "text": " to be good on the easy examples. That's like the definition of hard and easy examples."}, {"start": 239.60000000000002, "end": 245.08, "text": " Of course, that's kind of a simplifying assumption. 
But so the idea is only select the hard"}, {"start": 245.08, "end": 251.84, "text": " examples and own by how much loss they have and only then backprop these hard examples."}, {"start": 251.84, "end": 259.92, "text": " And that's how they can reduce this by a lot. Now there's several intricacies here."}, {"start": 259.92, "end": 264.68, "text": " So the setup time of course is the same. What they do next is they forward propagate the"}, {"start": 264.68, "end": 270.88, "text": " entire mini batch here because they need the loss of each example and then therefore they"}, {"start": 270.88, "end": 277.32, "text": " need to forward propagate the entire mini batch. At the end of this, they select the examples"}, {"start": 277.32, "end": 284.56, "text": " with the highest loss and they only use those in training. Now training consists actually"}, {"start": 284.56, "end": 289.28, "text": " of another forward pass, but this one is much smaller because you only forward pass the"}, {"start": 289.28, "end": 294.8, "text": " examples that you're actually training on. And then the backward pass accordingly will"}, {"start": 294.8, "end": 303.56, "text": " also be much, much smaller because now again, you have less samples to actually train on."}, {"start": 303.56, "end": 309.84000000000003, "text": " Now the reason that you even need this second forward pass is the following. When you do"}, {"start": 309.84000000000003, "end": 315.2, "text": " backprop, you can't simply start with a signal back here and then backprop that through"}, {"start": 315.2, "end": 321.16, "text": " the network. That doesn't work usually with most network architectures. Namely, what you"}, {"start": 321.16, "end": 326.84000000000003, "text": " need to do is actually while you forward pass, you need to remember information at each"}, {"start": 326.84000000000003, "end": 333.36, "text": " layer. A good example of this is the max pool operation. So in max pool, what you do is"}, {"start": 333.36, "end": 338.68, "text": " you may be have four pixels that are next to each other and you select one of them. Now"}, {"start": 338.68, "end": 344.28000000000003, "text": " you need to remember during the forward pass which one you selected. Otherwise, the backward"}, {"start": 344.28000000000003, "end": 350.76000000000005, "text": " pass won't work. You need to know which pixel to backprop through. And that's why at each"}, {"start": 350.76, "end": 357.24, "text": " point you need to remember information to inform the backward pass. So that's why basically"}, {"start": 357.24, "end": 368.64, "text": " you need a second forward pass with only the examples that you want to train on. So you"}, {"start": 368.64, "end": 375.14, "text": " forward pass once, calculate this loss, select the ones with the high loss, then forward"}, {"start": 375.14, "end": 380.71999999999997, "text": " pass these again and then backprop only these examples. That's the main, the main gist"}, {"start": 380.71999999999997, "end": 388.08, "text": " of it. This, and this is exactly what you see here. So forward pass everything, then forward"}, {"start": 388.08, "end": 393.84, "text": " pass those again that have high loss and then backprop them. 
There is an actually an interesting"}, {"start": 393.84, "end": 399.15999999999997, "text": " thing in this graph again that you see that this forward pass here also is shorter than"}, {"start": 399.15999999999997, "end": 403.76, "text": " this forward pass and I assume that's because this forward pass here actually needs to do"}, {"start": 403.76, "end": 408.96, "text": " those additional saving of information while this forward pass here is simply a forward"}, {"start": 408.96, "end": 414.0, "text": " pass without intention of backward passing and you can instruct the deep learning frameworks"}, {"start": 414.0, "end": 423.48, "text": " to then not remember this information. All right, then they have another improvement over"}, {"start": 423.48, "end": 429.03999999999996, "text": " their algorithm called stale selective backprop. So this is called selective backprop."}, {"start": 429.04, "end": 435.0, "text": " They have stale selective backprop. What stale selective backprop does is it says, well,"}, {"start": 435.0, "end": 442.56, "text": " we might not always need this forward pass. Right. What we might be able to do is actually."}, {"start": 442.56, "end": 448.52000000000004, "text": " So first we take the entire data set. That's actually, let's use a different color here."}, {"start": 448.52000000000004, "end": 455.64000000000004, "text": " Let's use this. We take the entire data set forward property through the network and then"}, {"start": 455.64, "end": 464.36, "text": " save this save into some database. The losses. Right. And then we use these losses to select"}, {"start": 464.36, "end": 472.56, "text": " the individual points here for a while. Right. So we perform maybe, so maybe this is training"}, {"start": 472.56, "end": 480.59999999999997, "text": " here. You start here, you do this loss calculation and then you run your training until a couple"}, {"start": 480.59999999999997, "end": 485.47999999999996, "text": " of epochs and then you say, okay, now this information is really outdated. I should really"}, {"start": 485.48, "end": 491.84000000000003, "text": " update it. And you do this entire thing again. And then you run training some more until"}, {"start": 491.84000000000003, "end": 498.88, "text": " you again stop and say, okay, now this information is stale again. Thereby, you can amortize the"}, {"start": 498.88, "end": 505.76, "text": " cost of these for passes. So you pass your entire training set once in a while and then"}, {"start": 505.76, "end": 513.64, "text": " use those losses to select the hard examples. And that's amortized. You can reduce this"}, {"start": 513.64, "end": 520.24, "text": " forward pass that's used for selecting again by a lot. And of course the paper shows that"}, {"start": 520.24, "end": 526.04, "text": " this doesn't hurt your performance too much if you have this stale information. Right. So"}, {"start": 526.04, "end": 535.24, "text": " this is the entire idea of the algorithm. And the algorithm is detailed here. So very briefly,"}, {"start": 535.24, "end": 541.4399999999999, "text": " you have this buffer and you go through the data. You forward pass every example. You"}, {"start": 541.44, "end": 546.5200000000001, "text": " for each loss, you calculate the probability that you should retain it. So it's probabilistic"}, {"start": 546.5200000000001, "end": 553.1600000000001, "text": " framework. It's not absolute cut off. 
If you decide to choose it probabilistically, you"}, {"start": 553.1600000000001, "end": 558.98, "text": " append it to this buffer. If this buffer is of a certain size, then you do the back"}, {"start": 558.98, "end": 564.0400000000001, "text": " prop on only on this buffer. This buffer now only has the high loss examples or the high"}, {"start": 564.0400000000001, "end": 569.7600000000001, "text": " loss examples with higher probability. And don't forget within this backward here, there"}, {"start": 569.76, "end": 577.3199999999999, "text": " is also an implicit forward pass. And then you clear the buffer and go on. So there's"}, {"start": 577.3199999999999, "end": 583.4399999999999, "text": " lots of, you see, there's lots of forward passes here to compute the losses. And then"}, {"start": 583.4399999999999, "end": 588.56, "text": " every now and then there is a backward pass whenever the buffer is of certain size. Now"}, {"start": 588.56, "end": 593.88, "text": " the probabilistic calculation of how and when to retain a sample is very simple. You have"}, {"start": 593.88, "end": 602.6, "text": " a deck of recent losses, a history of recent losses. Simply calculate the percentile"}, {"start": 602.6, "end": 609.0, "text": " that a given loss has in this history. And that percentile will then decide on the probability."}, {"start": 609.0, "end": 615.28, "text": " If you raise it to a power and that looks something like this. So what's often used in"}, {"start": 615.28, "end": 620.04, "text": " this paper is this 33 percent selection. That would be the blue curve. And you see the"}, {"start": 620.04, "end": 626.8, "text": " median example here. If you are in the median, then you have about a 33 percent chance"}, {"start": 626.8, "end": 632.8399999999999, "text": " of being retained. If you have a higher loss than that, then your probability rises."}, {"start": 632.8399999999999, "end": 639.7199999999999, "text": " All right. So the first interesting thing actually is this graphic here where they show"}, {"start": 639.7199999999999, "end": 647.36, "text": " what the algorithm deems the hardest and easiest examples. So examples chosen least frequently."}, {"start": 647.36, "end": 654.36, "text": " This is the C for 10 data set, which is images 32 by 32 in color. And you need to classify"}, {"start": 654.36, "end": 662.6, "text": " them into 10 categories. And you see the easiest images that ones least chosen least frequently"}, {"start": 662.6, "end": 669.4, "text": " are almost all automobiles. And of the automobiles, they're almost all where you see the full"}, {"start": 669.4, "end": 676.4, "text": " car with the wheels and well, not like this one, this one here. So these are what the"}, {"start": 676.4, "end": 683.8, "text": " algorithm deems easy samples. And if you look at the horror samples, examples chosen"}, {"start": 683.8, "end": 691.92, "text": " most frequently by selective back prop. It's these here. So it's for example, bird and"}, {"start": 691.92, "end": 698.36, "text": " bird. In this case, it's just kind of a smear. They're just kind of smears on a blue background."}, {"start": 698.36, "end": 703.1999999999999, "text": " It's understandably that this resolution is pretty hard to make out that this is a bird."}, {"start": 703.2, "end": 711.12, "text": " Airplane and automobile here, you see that it's a it's only a partial view of the thing."}, {"start": 711.12, "end": 715.72, "text": " It's not like the full car like we saw in the easy pictures. It's only partial. 
And"}, {"start": 715.72, "end": 723.36, "text": " this seems to be pretty hard. It needs understandable. This cat here, to me, it's even unclear if"}, {"start": 723.36, "end": 731.1600000000001, "text": " this is a cat or a dog. And I think dog is also a category in C for 10. So the algorithm"}, {"start": 731.16, "end": 738.04, "text": " is certainly understandably confused by this example and deems it a hard example. And"}, {"start": 738.04, "end": 745.12, "text": " here, even more you see truck. And this isn't a truck as far as I can make out. These are"}, {"start": 745.12, "end": 752.4, "text": " two humans on the picture with no truck anywhere visible. So this seems to be a mislabeled"}, {"start": 752.4, "end": 758.48, "text": " example. And of course, mislabeled examples are going to be of high loss to the algorithm."}, {"start": 758.48, "end": 767.4, "text": " This is the first criticism or thing. And the authors recognize this that if you upway"}, {"start": 767.4, "end": 774.08, "text": " your examples with high loss, you are going to upway all the mislabeled examples as well."}, {"start": 774.08, "end": 778.8000000000001, "text": " And thereby you're going to train more on the mislabeled examples. And thereby you're"}, {"start": 778.8000000000001, "end": 786.2, "text": " going to possibly degrade your test performance much more than had you given every sample the"}, {"start": 786.2, "end": 792.9200000000001, "text": " same weight. And the authors address this actually nicely by doing an experiment. And the"}, {"start": 792.9200000000001, "end": 800.1600000000001, "text": " experiment is what if we artificially mislabeled examples, how much can these algorithms tolerate?"}, {"start": 800.1600000000001, "end": 808.5200000000001, "text": " And so they often have these graphics here where they show test error over time. So test"}, {"start": 808.5200000000001, "end": 815.36, "text": " error. And the x axis here is number of backproped images, which is kind of a time dimension"}, {"start": 815.36, "end": 823.24, "text": " in training these algorithms. You see the blue is a traditional model. And the pink is"}, {"start": 823.24, "end": 831.52, "text": " a selective backprop with a 33% retained rate. So you see the selective backprop is much"}, {"start": 831.52, "end": 838.76, "text": " faster in reaching a low error. And this first thing is simply with 1% of the labels shuffled."}, {"start": 838.76, "end": 846.08, "text": " So 1% of the images are mislabeled. Selective backprop is still much faster than the traditional"}, {"start": 846.08, "end": 856.16, "text": " trajectory. If you go to 10% shuffled, still you can see selective backprop is faster reaching"}, {"start": 856.16, "end": 864.8, "text": " a low error. Of course the error now generally is higher. But if you go to 20% here, what"}, {"start": 864.8, "end": 871.24, "text": " you can see 20% shuffled labels, what you can see it starts to become clear that these"}, {"start": 871.24, "end": 881.1999999999999, "text": " selective backprop it retains 33% of the hardest examples. So and 20% of the examples have"}, {"start": 881.1999999999999, "end": 888.8399999999999, "text": " a wrong label. That means most of what it upways are wrongly labeled examples almost."}, {"start": 888.84, "end": 897.08, "text": " They're still a lot of correctly labeled examples. 
But you see it gets to a lower but then"}, {"start": 897.08, "end": 903.52, "text": " it gets up again as it kind of massively overfits on these wrongly labeled examples because"}, {"start": 903.52, "end": 911.52, "text": " it upways them so much. It upways these because here in still every example is hard. So"}, {"start": 911.52, "end": 916.4000000000001, "text": " these wrongly labeled examples they'll get about the same weight as correctly labeled"}, {"start": 916.4, "end": 921.48, "text": " examples because the network isn't trained yet. But as you go lower it starts to massively"}, {"start": 921.48, "end": 932.9599999999999, "text": " overfit. So compared to the traditional model kind of just reaches this low error that"}, {"start": 932.9599999999999, "end": 939.48, "text": " okay is now corrupted by these wrong labels but it doesn't it doesn't hurt as much. So"}, {"start": 939.48, "end": 944.6, "text": " that's kind of my first criticism. If you have a lot of noisy labels or if you have a"}, {"start": 944.6, "end": 952.9200000000001, "text": " lot of mislabeled examples then this method might actually hurt more than it helps. But"}, {"start": 952.9200000000001, "end": 960.5600000000001, "text": " the level is interesting that it can kind of tolerate 10% but it gets kind of into trouble"}, {"start": 960.5600000000001, "end": 968.4, "text": " at 20 or so more percent. So this is the first criticism and that's how the authors address"}, {"start": 968.4, "end": 975.68, "text": " it. I really like the sublation study that they do. Here this is kind of the meat of the"}, {"start": 975.68, "end": 980.12, "text": " experiments. So what they show here these curves on the bottom and let's look at this"}, {"start": 980.12, "end": 987.76, "text": " curve is on the x axis actually at wall clock time. Now so how much time do you need in"}, {"start": 987.76, "end": 994.64, "text": " order to reach a kind of low error. Here is test set error. You see the traditional model"}, {"start": 994.64, "end": 1002.88, "text": " in blue has a certain trajectory. Now cath 18 is a baseline. Don't worry about it. What"}, {"start": 1002.88, "end": 1008.36, "text": " we're interested in is this selective back prop which is the pink which you can see"}, {"start": 1008.36, "end": 1014.68, "text": " outperforms this traditional training. And what we're also interested in is the stale"}, {"start": 1014.68, "end": 1020.84, "text": " sb. So stale meaning it has this buffer of information that reduces it's supposed to"}, {"start": 1020.84, "end": 1027.76, "text": " reduce the time again. And you see that is even that even more outperforms the traditional"}, {"start": 1027.76, "end": 1034.88, "text": " approach. You can also see that the stale s here apparently doesn't hurt the performance"}, {"start": 1034.88, "end": 1040.92, "text": " too much. You see the error is fairly close and it reaches this error in a much faster"}, {"start": 1040.92, "end": 1051.64, "text": " time. This on c for 10. That is nice table up here where they show the speed up to reach"}, {"start": 1051.64, "end": 1057.3600000000001, "text": " a given error. So what they do is they take this error of the traditional model this test"}, {"start": 1057.3600000000001, "end": 1064.88, "text": " set error and they ask how fast are these methods in reaching this error times a constant."}, {"start": 1064.88, "end": 1072.3600000000001, "text": " So times 1.1 times 1.2 times 1.4. 
Now of course the reaching 1.4 times the final error is"}, {"start": 1072.3600000000001, "end": 1079.3600000000001, "text": " easier and thereby but it's also easier for the traditional model of course. So that's"}, {"start": 1079.3600000000001, "end": 1085.88, "text": " the catch but these are kind of benchmarks they chose to have faster these models in"}, {"start": 1085.88, "end": 1093.2, "text": " reaching 1.1.1.1.4 times the error of a traditionally trained model. And you can see here on c"}, {"start": 1093.2, "end": 1099.4, "text": " for 10 for example actually it's go to s v h n. S v h n is the easiest of the of the"}, {"start": 1099.4, "end": 1107.48, "text": " data sets and it chose the most clear thing. So the traditional error is 1.7% and you"}, {"start": 1107.48, "end": 1117.8400000000001, "text": " see that the speed up is so this selective back prop is 3.4 times faster in reaching this"}, {"start": 1117.84, "end": 1126.72, "text": " 1.1 times the the error of this traditional model. And it's also 3.4 times faster reaching"}, {"start": 1126.72, "end": 1137.48, "text": " 1.2 times and it's 3.5 times faster in reaching 1.4 times. The stale selective back prop is"}, {"start": 1137.48, "end": 1147.72, "text": " even faster. So 4.3, 4.9, 5 times faster in reaching 1.4 times. This reaching 1.4 times"}, {"start": 1147.72, "end": 1154.88, "text": " the error. And so what you can what you can see here is that these methods really make"}, {"start": 1154.88, "end": 1161.48, "text": " it faster but also there's kind of two things two important things to note in this table."}, {"start": 1161.48, "end": 1169.3600000000001, "text": " First of all you can see as you go to the right in the table the speed ups get higher."}, {"start": 1169.3600000000001, "end": 1176.3600000000001, "text": " And what it means is that as you need to reach as you make the problem easier so as you"}, {"start": 1176.36, "end": 1185.08, "text": " need to reach a higher error which is as you need to reach a higher loss value. These"}, {"start": 1185.08, "end": 1191.6799999999998, "text": " methods are there faster. What that means is they're really fast at reaching a somewhat"}, {"start": 1191.6799999999998, "end": 1199.1999999999998, "text": " decent point which is represented here. They're really fast but if they need them to reach"}, {"start": 1199.2, "end": 1207.8400000000001, "text": " a more and more accurate performance they themselves get slower and slower. So this is of"}, {"start": 1207.8400000000001, "end": 1213.4, "text": " course clear because what you're doing is you're no longer treating every day to point"}, {"start": 1213.4, "end": 1219.64, "text": " the same. You are introducing a bias into your training by only training on the hard"}, {"start": 1219.64, "end": 1226.04, "text": " examples. So you're introducing a bias and this bias will give you a speed up but also"}, {"start": 1226.04, "end": 1233.6, "text": " hurt your performance. And thereby if you have to get more and more accurate you will lose"}, {"start": 1233.6, "end": 1239.48, "text": " much of that speed up because you need to reduce that bias at the end that you introduced."}, {"start": 1239.48, "end": 1246.12, "text": " So that's the first caveat. 
As you want to get to a higher and higher performance these"}, {"start": 1246.12, "end": 1253.1599999999999, "text": " methods will help less and less because they basically introduce the bias to gain speed"}, {"start": 1253.16, "end": 1262.88, "text": " at the beginning of training or to reach less accurate points. The second thing is as you"}, {"start": 1262.88, "end": 1273.28, "text": " look at these problems here. So SVHN 1.7% error. C410 is a slightly harder problem, 2.9%"}, {"start": 1273.28, "end": 1280.0, "text": " error. And C4100 is really a harder problem where a traditional model has 18% error. If"}, {"start": 1280.0, "end": 1289.12, "text": " you look at the speed ups now then you can see even at this right most end here. You have"}, {"start": 1289.12, "end": 1298.12, "text": " the 3.5 and 5x speed up. Here we have a 1.5, 2x speed up, here we have a 1.2, 1.6x speed"}, {"start": 1298.12, "end": 1306.76, "text": " up. So as the problems get harder and as the kind of models get fancier, as the classes"}, {"start": 1306.76, "end": 1316.2, "text": " get more, then the speed up is much lower. And I believe that's because the bias you"}, {"start": 1316.2, "end": 1324.2, "text": " introduce by re-waying the samples, the bias you introduce will hurt you much more on a"}, {"start": 1324.2, "end": 1332.44, "text": " difficult and large problem with a large network than it will hurt you on an easy problem."}, {"start": 1332.44, "end": 1338.48, "text": " Easy problem you will find introducing some bias, but if you have a hard noisy problem,"}, {"start": 1338.48, "end": 1345.72, "text": " then this bias you introduce will hurt you much more. And thereby the speed up that these"}, {"start": 1345.72, "end": 1351.8400000000001, "text": " methods give you is much, much less. And so this means that the performance of these"}, {"start": 1351.8400000000001, "end": 1359.44, "text": " models is directly anti-correlated with the hardness of the problem. And that tells me"}, {"start": 1359.44, "end": 1365.44, "text": " it kind of makes it almost unusable or it goes towards if I look at the numbers, if"}, {"start": 1365.44, "end": 1371.96, "text": " I look at the numbers over here and extrapolate that to something like ImageNet. It tells me"}, {"start": 1371.96, "end": 1379.0800000000002, "text": " that these methods are going to be almost useless on a dataset of the size and complexity"}, {"start": 1379.0800000000002, "end": 1386.6000000000001, "text": " as ImageNet. And the interesting problems nowadays are very much in the domain of more"}, {"start": 1386.6, "end": 1396.08, "text": " hard, more complex problems. So the kind of usefulness of this method in practice is something"}, {"start": 1396.08, "end": 1401.84, "text": " that I wouldn't bet on just from reading this paper. I'm open to be convinced otherwise,"}, {"start": 1401.84, "end": 1405.4399999999998, "text": " but just from reading this paper, it seems like the harder you make the problem, the less"}, {"start": 1405.4399999999998, "end": 1409.1999999999998, "text": " these methods help, and that's exactly not what you want. You want exactly the opposite."}, {"start": 1409.1999999999998, "end": 1416.1599999999999, "text": " You want to say, oh, if I scale this up, it'll give me even more of a speed up. And that's"}, {"start": 1416.16, "end": 1424.0400000000002, "text": " going to be even better, but this is the opposite. 
And given that they have no, basically,"}, {"start": 1424.0400000000002, "end": 1430.24, "text": " no theoretical analysis of how much this bias hurts you or how you can still make it kind"}, {"start": 1430.24, "end": 1437.88, "text": " of good in expectation, how you would need to correct at the end and so on, I would first,"}, {"start": 1437.88, "end": 1444.0, "text": " of course, test it. I'm very interested to see tests on larger and more complex problems."}, {"start": 1444.0, "end": 1454.0, "text": " From this, I'm a bit skeptical. I'm sorry. Yeah. So they show that on these tapes that"}, {"start": 1454.0, "end": 1458.32, "text": " it clearly helps, it clearly speeds up the training. And that's, of course, that's already"}, {"start": 1458.32, "end": 1463.16, "text": " a good, good thing. And they do the required experiments. They do the ablation studies"}, {"start": 1463.16, "end": 1469.88, "text": " on these data sets and so on. So you can see here, for example, on these first graphic"}, {"start": 1469.88, "end": 1478.24, "text": " on all the data sets, it clearly goes down as you introduce the more sophisticated algorithms."}, {"start": 1478.24, "end": 1483.92, "text": " But again, you can see on the hard data set, it doesn't go down as much."}, {"start": 1483.92, "end": 1490.8000000000002, "text": " All right, but they do discuss this. They're really fair to themselves. They do discuss this"}, {"start": 1490.8000000000002, "end": 1497.2, "text": " in their paper of how, you know, how practical this is and so on. And what else they tried"}, {"start": 1497.2, "end": 1503.4, "text": " and didn't work. And that's, I think that it's a really good paper in itself. And it's"}, {"start": 1503.4, "end": 1532.96, "text": " a really good investigation. All right. So that was it for me. Have a fun day. Bye bye."}] |
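The selective-backprop idea discussed above (forward-pass every example, but spend the backward pass only on the examples the network currently finds hard) can be sketched in a few lines. The snippet below is a simplified illustration rather than the paper's exact algorithm: it assumes a PyTorch-style `model` and `optimizer`, and it replaces the paper's probabilistic, loss-history-based selection with a plain keep-the-highest-loss-fraction rule.

```python
import torch
import torch.nn.functional as F

def selective_backprop_step(model, optimizer, inputs, labels, keep_fraction=0.5):
    """Forward everything, backpropagate only the hardest examples (sketch)."""
    optimizer.zero_grad()
    logits = model(inputs)
    # Per-example losses act as the "hardness" signal used for selection.
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_fraction * losses.numel()))
    # Keep the k highest-loss examples; the rest contribute no gradient.
    hard_losses, _ = torch.topk(losses, k)
    hard_losses.mean().backward()
    optimizer.step()
    return losses.mean().item()  # unselected average loss, for monitoring
```

As the discussion above points out, any such rule biases training toward hard (and possibly mislabeled) examples, which is where both the speedup and the accuracy cost come from.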
Yannic Kilcher | https://www.youtube.com/watch?v=MIEA8azwu1k | DEEP LEARNING MEME REVIEW - Episode 1 | The wait is finally over! Antonio and I discuss the best, funniest and dankest memes of the machine learning world. Join us for a laugh! | What are you done means before? No. Don't you have this show on YouTube when you review memes? No. You have this? I think that's an entirely new concept. We're just gonna steal this concept from PewDiePie. Okay. But first actual meme review, deep learning themes. Welcome. I'm joined by Antonio, who is a bit of a meme stir himself. And today we're just gonna kind of look at deep learning memes. Nice. Let's jump in. Oh, no, that's a paper. That is cool. That is cool. Okay. Being a DL researcher is not stressed at all. 26. No. That is incredible how he says like, but now, oh, I already, I hope his new body worked. Of course. Yeah, yeah, I was in a rubber way. There was no AI winter or anything. This was, this was always, that hint is so cool. Yeah. All right. Next meme. Next meme. I guess my brain is just really big. Oh, what else is really big? I thought you never asked. I've been reading top dates on the edge of a really steep. Big gradients are always good. I mean, look at that. Why wouldn't you want to land over there? Yeah, yeah, it's perfect. It seems much more interesting than down there. So perfect. I guess it's an, oh, minus seven or the four. That's a, that's a small epsilon. The very small epsilon. Yes. Almost often. Crazy. They designed this. Oh, he sees a new problem. Classifier. This is, this is the old days. The old days, yes, of sci-kits learned. It's still a silly belief. Works pretty well. No, we must use deep learning for everything. Oh, sorry. No, no, no, sorry. Let's just look at the next meme, please. I don't know the same. I'm not sure. I'm not sure. I'm not sure. I'm not sure. I'm not sure. I'm not sure. I'm not sure. Let's just look at the next meme, please. I don't know the sample. This is a cool template. Yes, it's a good template. Anopee researchers, birds and an excel net. Woo! What is excel net? So excel net is birds just train differently. Okay. And it costs like 10 times more to train it. Okay. And it's a bit better. How much does it cost electricity? So people have calculated this to train one excel net. And it costs about 250k. It's insane. But does it work 1% better? It's like that is like 5 PhD students. That's almost a Scootal language model as excel net. And how much is better than bird? A bit. A bit. Oh, a bit. That's all that kind of state of the art. Search archive for preprint. Search GitHub for code. Ask random idiots on Facebook. Me. Go. That's a go for this. Go. In some ways actually it is simpler to publish something on archive and not being completely like people just saying, Oh, you're an idiot and stuff like that. Because we're probably good at noticing. Probably gets a notice. Right. Yeah. No Facebook it doesn't get a notice. Yeah, that's a real key review. Exactly. If you are in a very good meme page on Facebook about deep learning, you're gonna get wrecked. Yes, exactly. That's not gonna happen. This software engineer designed a chatboard to chat with his girlfriend why he is busy at work. However, the girl eventually got suspicious over the speed. She was receiving messages from her boyfriend. Modern problems, choir, modern solutions. They're also like pretty good chatbots. Got suspicious with the timing. Now for the actual content. Well, fashion companies try to sell us. What we really want. 
Fashion M-nist. Fashion M-nist is the new cool thing. So cool. Does anyone use it? I use it. Cool. By the way, I found a huge saddle find. Nice. Neminist. Huge saddle. It is not very Neminist. Where is it? Um, places. How much accuracy do you get on fashion? Like M-s-m-nist. So easy. Like I just basically as I missed. I don't know. I'm not the fashion person. So I don't know what to call this. What? Who is this? This is a fat pants web. Yeah. The voice after using dropouts. The voice. Also, I don't know where they come from. Where do they come from? This is no. It is some comic. They are so, so beautiful. Are you still watching machine learning tutorials on YouTube? Did you check my internet history? Why can't you watch porn like a normal child? Andrew N.G. Why am I taking this? Who is this Andrew N.G.? I must use more carous code. Yes. What? What? What is wrong with you? This is Andrew N.G. But I understand that it makes you comfortable. You are respected and loved. He does. He says it's okay if I don't understand everything. Whereas, the porn is completely different. It's not okay. I'm really with my notes trying to follow. Wait. What was the plot? Why? When your binary classifier predicts 50% accuracy. It ain't much, but it's all in the work. That's what you want to get. Better than random. Exactly. Change your random seed until you get 51%. Is that your method works? Yes, exactly. You know that in finance, but it's actually the art. In what? In finance. Prediction of the last, if you have a profitime series, if you predict the next time point as the last time point, that's probably the best thing you can do. I'm going to switch my PhD topic too. Yes, and also some people with their fancy methods do worse. Because they say, yeah, because of this and that, and then it's just to be hot and then. Because it's just like you just predict whatever was very new. Good. Okay. Next meme. Next meme. Do you have any research right? Yeah. Your GPU, GPU. Oh, damn. Too bad I don't use GPU. I will start though. You know this math, math like the learning toolbox. Recently they introduced new, new, new, new, new, new, new, new stuff. Like with the networks. Yeah. And the graphs, which is basically as the brain. Yeah. And so basically you can learn stuff with math lab. With math lab. Exactly. Wow. Exactly. Can you learn to uninstall it? I look like all you need. You don't look like an envy that I don't know. That's what we really want. Exactly. Me. I sure hope my model's error rate isn't super high. Error rate. Sorry. Sorry. Sorry. Optimization is hard. Yeah, it's just hard. You do with advanced methods and then there is SGD. Yeah. That beats you every time. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. Yeah. What is the state of art? This is still thinking about it. Okay. I didn't get what happened to me on it. Well, they sold the larger things. Yeah. They're different. They're not the same. Ah, I see. That means that they work kind of another way. Yes. But only do other things. Sort of. But then they do it on the same task. Ah, I see. No, they're like trying to abstract concepts into these capsules and then the capsules can route information to other capsules dynamically. Yeah. Does it work? No, I think so. Kind of. It kind of works. Yeah. Why are people like you can make it do something? Okay. Capsules. Capsules. Okay, like, Smeam, my desires are unconventional. So show me. RTX 2060, 2070, and 2080. Ah, yeah. No, don't let me. Look at them. I want to subatle. I just can't. Use a transformer. 
Is it a analyst? Yeah. I have failed you. You're a man. You again. No. Arnand's must come back. Yes, exactly. They're too touring complete. Not at all. Assistant, remember this location. Okay, I remember that. What did I ask you to remember? I remember what you told me. This location. What does this location mean? Visitor top results. Assistant. Machines are about to take over the world. Definitely. This intelligence. Yeah, exactly. Yeah, we must be very, very careful. Also, we job with them. What? What? You finished the means? Not yet. There's one more. So I have to preface this. So basically, this is a... So the robot's supposed to get the ball to the target. And in one setting, it has a reference motion of the robot. So it learns to learn from that. And then, for comparison, there's no reference motion. And it just learns from scratch. So first is with... Okay....and then three times and then without. With reference motion. Nice. Yeah. Wow. And now without. Get the ball there. Get it there. Get it there. It's so cute. Yes. Yes. We are AI, Doom. Yes. Down or ready? The damage is not... Yeah, I mean, I can see an army of robots. Is there arms left? The guns? They just take the ball and they're like... Pull the trigger like... Alright, this was it for episode one of Deep Learning Memories Review. Thanks so much for being here with us and having a good time. | [{"start": 0.0, "end": 2.0, "text": " What are you done means before?"}, {"start": 2.0, "end": 3.0, "text": " No."}, {"start": 3.0, "end": 5.0, "text": " Don't you have this show on YouTube when you review memes?"}, {"start": 5.0, "end": 6.0, "text": " No."}, {"start": 6.0, "end": 7.0, "text": " You have this?"}, {"start": 7.0, "end": 9.0, "text": " I think that's an entirely new concept."}, {"start": 13.0, "end": 16.0, "text": " We're just gonna steal this concept from PewDiePie."}, {"start": 16.0, "end": 17.0, "text": " Okay."}, {"start": 17.0, "end": 21.0, "text": " But first actual meme review, deep learning themes."}, {"start": 21.0, "end": 22.0, "text": " Welcome."}, {"start": 22.0, "end": 34.0, "text": " I'm joined by Antonio, who is a bit of a meme stir himself."}, {"start": 34.0, "end": 39.0, "text": " And today we're just gonna kind of look at deep learning memes."}, {"start": 39.0, "end": 40.0, "text": " Nice."}, {"start": 40.0, "end": 42.0, "text": " Let's jump in."}, {"start": 42.0, "end": 45.0, "text": " Oh, no, that's a paper."}, {"start": 45.0, "end": 46.0, "text": " That is cool."}, {"start": 46.0, "end": 47.0, "text": " That is cool."}, {"start": 47.0, "end": 48.0, "text": " Okay."}, {"start": 48.0, "end": 55.0, "text": " Being a DL researcher is not stressed at all."}, {"start": 55.0, "end": 56.0, "text": " 26."}, {"start": 56.0, "end": 57.0, "text": " No."}, {"start": 57.0, "end": 65.0, "text": " That is incredible how he says like, but now, oh, I already, I hope his new body worked."}, {"start": 65.0, "end": 66.0, "text": " Of course."}, {"start": 66.0, "end": 68.0, "text": " Yeah, yeah, I was in a rubber way."}, {"start": 68.0, "end": 71.0, "text": " There was no AI winter or anything."}, {"start": 71.0, "end": 74.0, "text": " This was, this was always, that hint is so cool."}, {"start": 74.0, "end": 75.0, "text": " Yeah."}, {"start": 75.0, "end": 78.0, "text": " All right."}, {"start": 78.0, "end": 79.0, "text": " Next meme."}, {"start": 79.0, "end": 80.0, "text": " Next meme."}, {"start": 80.0, "end": 82.0, "text": " I guess my brain is just really big."}, {"start": 82.0, "end": 84.0, "text": " Oh, what else is 
really big?"}, {"start": 84.0, "end": 86.0, "text": " I thought you never asked."}, {"start": 86.0, "end": 93.0, "text": " I've been reading top dates on the edge of a really steep."}, {"start": 93.0, "end": 95.0, "text": " Big gradients are always good."}, {"start": 95.0, "end": 96.0, "text": " I mean, look at that."}, {"start": 96.0, "end": 98.0, "text": " Why wouldn't you want to land over there?"}, {"start": 98.0, "end": 99.0, "text": " Yeah, yeah, it's perfect."}, {"start": 99.0, "end": 101.0, "text": " It seems much more interesting than down there."}, {"start": 101.0, "end": 102.0, "text": " So perfect."}, {"start": 102.0, "end": 104.0, "text": " I guess it's an, oh, minus seven or the four."}, {"start": 104.0, "end": 105.0, "text": " That's a, that's a small epsilon."}, {"start": 105.0, "end": 106.0, "text": " The very small epsilon."}, {"start": 106.0, "end": 107.0, "text": " Yes."}, {"start": 107.0, "end": 108.0, "text": " Almost often."}, {"start": 108.0, "end": 109.0, "text": " Crazy."}, {"start": 109.0, "end": 110.0, "text": " They designed this."}, {"start": 110.0, "end": 111.0, "text": " Oh, he sees a new problem."}, {"start": 111.0, "end": 112.0, "text": " Classifier."}, {"start": 112.0, "end": 114.0, "text": " This is, this is the old days."}, {"start": 114.0, "end": 116.0, "text": " The old days, yes, of sci-kits learned."}, {"start": 116.0, "end": 117.0, "text": " It's still a silly belief."}, {"start": 117.0, "end": 118.0, "text": " Works pretty well."}, {"start": 118.0, "end": 121.0, "text": " No, we must use deep learning for everything."}, {"start": 121.0, "end": 122.0, "text": " Oh, sorry."}, {"start": 122.0, "end": 123.0, "text": " No, no, no, sorry."}, {"start": 123.0, "end": 124.0, "text": " Let's just look at the next meme, please."}, {"start": 124.0, "end": 125.0, "text": " I don't know the same."}, {"start": 125.0, "end": 126.0, "text": " I'm not sure."}, {"start": 126.0, "end": 127.0, "text": " I'm not sure."}, {"start": 127.0, "end": 128.0, "text": " I'm not sure."}, {"start": 128.0, "end": 129.0, "text": " I'm not sure."}, {"start": 129.0, "end": 130.0, "text": " I'm not sure."}, {"start": 130.0, "end": 131.0, "text": " I'm not sure."}, {"start": 131.0, "end": 132.0, "text": " I'm not sure."}, {"start": 132.0, "end": 134.0, "text": " Let's just look at the next meme, please."}, {"start": 134.0, "end": 135.0, "text": " I don't know the sample."}, {"start": 135.0, "end": 136.0, "text": " This is a cool template."}, {"start": 136.0, "end": 137.0, "text": " Yes, it's a good template."}, {"start": 137.0, "end": 140.0, "text": " Anopee researchers, birds and an excel net."}, {"start": 140.0, "end": 141.0, "text": " Woo!"}, {"start": 141.0, "end": 142.0, "text": " What is excel net?"}, {"start": 142.0, "end": 145.0, "text": " So excel net is birds just train differently."}, {"start": 145.0, "end": 146.0, "text": " Okay."}, {"start": 146.0, "end": 149.0, "text": " And it costs like 10 times more to train it."}, {"start": 149.0, "end": 150.0, "text": " Okay."}, {"start": 150.0, "end": 152.0, "text": " And it's a bit better."}, {"start": 152.0, "end": 154.0, "text": " How much does it cost electricity?"}, {"start": 154.0, "end": 158.0, "text": " So people have calculated this to train one excel net."}, {"start": 158.0, "end": 162.0, "text": " And it costs about 250k."}, {"start": 162.0, "end": 164.0, "text": " It's insane."}, {"start": 164.0, "end": 166.0, "text": " But does it work 1% better?"}, {"start": 166.0, "end": 170.0, "text": " It's like that is like 5 PhD 
students."}, {"start": 170.0, "end": 173.0, "text": " That's almost a Scootal language model as excel net."}, {"start": 173.0, "end": 175.0, "text": " And how much is better than bird?"}, {"start": 175.0, "end": 176.0, "text": " A bit."}, {"start": 176.0, "end": 177.0, "text": " A bit."}, {"start": 177.0, "end": 178.0, "text": " Oh, a bit."}, {"start": 178.0, "end": 182.0, "text": " That's all that kind of state of the art."}, {"start": 182.0, "end": 185.0, "text": " Search archive for preprint."}, {"start": 185.0, "end": 187.0, "text": " Search GitHub for code."}, {"start": 187.0, "end": 190.0, "text": " Ask random idiots on Facebook."}, {"start": 190.0, "end": 191.0, "text": " Me."}, {"start": 191.0, "end": 192.0, "text": " Go."}, {"start": 192.0, "end": 194.0, "text": " That's a go for this."}, {"start": 194.0, "end": 195.0, "text": " Go."}, {"start": 195.0, "end": 198.0, "text": " In some ways actually it is simpler to publish something on archive"}, {"start": 198.0, "end": 201.0, "text": " and not being completely like people just saying,"}, {"start": 201.0, "end": 203.0, "text": " Oh, you're an idiot and stuff like that."}, {"start": 203.0, "end": 206.0, "text": " Because we're probably good at noticing."}, {"start": 206.0, "end": 207.0, "text": " Probably gets a notice."}, {"start": 207.0, "end": 208.0, "text": " Right."}, {"start": 208.0, "end": 209.0, "text": " Yeah."}, {"start": 209.0, "end": 210.0, "text": " No Facebook it doesn't get a notice."}, {"start": 210.0, "end": 212.0, "text": " Yeah, that's a real key review."}, {"start": 212.0, "end": 213.0, "text": " Exactly."}, {"start": 213.0, "end": 216.0, "text": " If you are in a very good meme page on Facebook about deep learning,"}, {"start": 216.0, "end": 217.0, "text": " you're gonna get wrecked."}, {"start": 217.0, "end": 218.0, "text": " Yes, exactly."}, {"start": 218.0, "end": 220.0, "text": " That's not gonna happen."}, {"start": 220.0, "end": 224.0, "text": " This software engineer designed a chatboard to chat with his girlfriend"}, {"start": 224.0, "end": 226.0, "text": " why he is busy at work."}, {"start": 226.0, "end": 230.0, "text": " However, the girl eventually got suspicious over the speed."}, {"start": 230.0, "end": 232.0, "text": " She was receiving messages from her boyfriend."}, {"start": 232.0, "end": 236.0, "text": " Modern problems, choir, modern solutions."}, {"start": 236.0, "end": 239.0, "text": " They're also like pretty good chatbots."}, {"start": 239.0, "end": 241.0, "text": " Got suspicious with the timing."}, {"start": 241.0, "end": 244.0, "text": " Now for the actual content."}, {"start": 244.0, "end": 249.0, "text": " Well, fashion companies try to sell us."}, {"start": 249.0, "end": 250.0, "text": " What we really want."}, {"start": 250.0, "end": 252.0, "text": " Fashion M-nist."}, {"start": 252.0, "end": 254.0, "text": " Fashion M-nist is the new cool thing."}, {"start": 254.0, "end": 255.0, "text": " So cool."}, {"start": 255.0, "end": 257.0, "text": " Does anyone use it?"}, {"start": 257.0, "end": 258.0, "text": " I use it."}, {"start": 258.0, "end": 259.0, "text": " Cool."}, {"start": 259.0, "end": 262.0, "text": " By the way, I found a huge saddle find."}, {"start": 262.0, "end": 263.0, "text": " Nice."}, {"start": 263.0, "end": 264.0, "text": " Neminist."}, {"start": 264.0, "end": 265.0, "text": " Huge saddle."}, {"start": 265.0, "end": 267.0, "text": " It is not very Neminist."}, {"start": 267.0, "end": 268.0, "text": " Where is it?"}, {"start": 268.0, "end": 269.0, "text": " Um, 
places."}, {"start": 269.0, "end": 272.0, "text": " How much accuracy do you get on fashion?"}, {"start": 272.0, "end": 273.0, "text": " Like M-s-m-nist."}, {"start": 273.0, "end": 274.0, "text": " So easy."}, {"start": 274.0, "end": 276.0, "text": " Like I just basically as I missed."}, {"start": 276.0, "end": 277.0, "text": " I don't know."}, {"start": 277.0, "end": 278.0, "text": " I'm not the fashion person."}, {"start": 278.0, "end": 280.0, "text": " So I don't know what to call this."}, {"start": 280.0, "end": 281.0, "text": " What?"}, {"start": 281.0, "end": 282.0, "text": " Who is this?"}, {"start": 282.0, "end": 284.0, "text": " This is a fat pants web."}, {"start": 284.0, "end": 285.0, "text": " Yeah."}, {"start": 285.0, "end": 288.0, "text": " The voice after using dropouts."}, {"start": 288.0, "end": 289.0, "text": " The voice."}, {"start": 289.0, "end": 293.0, "text": " Also, I don't know where they come from."}, {"start": 293.0, "end": 294.0, "text": " Where do they come from?"}, {"start": 294.0, "end": 295.0, "text": " This is no."}, {"start": 295.0, "end": 297.0, "text": " It is some comic."}, {"start": 297.0, "end": 300.0, "text": " They are so, so beautiful."}, {"start": 300.0, "end": 307.0, "text": " Are you still watching machine learning tutorials on YouTube?"}, {"start": 307.0, "end": 310.0, "text": " Did you check my internet history?"}, {"start": 310.0, "end": 313.0, "text": " Why can't you watch porn like a normal child?"}, {"start": 313.0, "end": 315.0, "text": " Andrew N.G."}, {"start": 315.0, "end": 317.0, "text": " Why am I taking this?"}, {"start": 317.0, "end": 319.0, "text": " Who is this Andrew N.G.?"}, {"start": 319.0, "end": 321.0, "text": " I must use more carous code."}, {"start": 321.0, "end": 322.0, "text": " Yes."}, {"start": 322.0, "end": 323.0, "text": " What?"}, {"start": 323.0, "end": 324.0, "text": " What?"}, {"start": 324.0, "end": 325.0, "text": " What is wrong with you?"}, {"start": 325.0, "end": 326.0, "text": " This is Andrew N.G."}, {"start": 326.0, "end": 329.0, "text": " But I understand that it makes you comfortable."}, {"start": 329.0, "end": 331.0, "text": " You are respected and loved."}, {"start": 331.0, "end": 332.0, "text": " He does."}, {"start": 332.0, "end": 335.0, "text": " He says it's okay if I don't understand everything."}, {"start": 335.0, "end": 337.0, "text": " Whereas, the porn is completely different."}, {"start": 337.0, "end": 338.0, "text": " It's not okay."}, {"start": 338.0, "end": 342.0, "text": " I'm really with my notes trying to follow."}, {"start": 342.0, "end": 343.0, "text": " Wait."}, {"start": 343.0, "end": 344.0, "text": " What was the plot?"}, {"start": 344.0, "end": 345.0, "text": " Why?"}, {"start": 345.0, "end": 349.0, "text": " When your binary classifier predicts 50% accuracy."}, {"start": 349.0, "end": 353.0, "text": " It ain't much, but it's all in the work."}, {"start": 353.0, "end": 354.0, "text": " That's what you want to get."}, {"start": 354.0, "end": 355.0, "text": " Better than random."}, {"start": 355.0, "end": 356.0, "text": " Exactly."}, {"start": 356.0, "end": 359.0, "text": " Change your random seed until you get 51%."}, {"start": 359.0, "end": 362.0, "text": " Is that your method works?"}, {"start": 362.0, "end": 363.0, "text": " Yes, exactly."}, {"start": 363.0, "end": 366.0, "text": " You know that in finance, but it's actually the art."}, {"start": 366.0, "end": 367.0, "text": " In what?"}, {"start": 367.0, "end": 368.0, "text": " In finance."}, {"start": 368.0, "end": 371.0, 
"text": " Prediction of the last, if you have a profitime series,"}, {"start": 371.0, "end": 376.0, "text": " if you predict the next time point as the last time point,"}, {"start": 376.0, "end": 379.0, "text": " that's probably the best thing you can do."}, {"start": 379.0, "end": 382.0, "text": " I'm going to switch my PhD topic too."}, {"start": 382.0, "end": 385.0, "text": " Yes, and also some people with their fancy methods do worse."}, {"start": 385.0, "end": 388.0, "text": " Because they say, yeah, because of this and that,"}, {"start": 388.0, "end": 390.0, "text": " and then it's just to be hot and then."}, {"start": 390.0, "end": 393.0, "text": " Because it's just like you just predict whatever was very new."}, {"start": 393.0, "end": 394.0, "text": " Good."}, {"start": 394.0, "end": 395.0, "text": " Okay."}, {"start": 395.0, "end": 396.0, "text": " Next meme."}, {"start": 396.0, "end": 397.0, "text": " Next meme."}, {"start": 397.0, "end": 399.0, "text": " Do you have any research right?"}, {"start": 399.0, "end": 400.0, "text": " Yeah."}, {"start": 400.0, "end": 402.0, "text": " Your GPU, GPU."}, {"start": 402.0, "end": 403.0, "text": " Oh, damn."}, {"start": 403.0, "end": 406.0, "text": " Too bad I don't use GPU."}, {"start": 406.0, "end": 408.0, "text": " I will start though."}, {"start": 408.0, "end": 411.0, "text": " You know this math, math like the learning toolbox."}, {"start": 411.0, "end": 416.0, "text": " Recently they introduced new, new, new, new, new, new, new, new stuff."}, {"start": 416.0, "end": 417.0, "text": " Like with the networks."}, {"start": 417.0, "end": 418.0, "text": " Yeah."}, {"start": 418.0, "end": 421.0, "text": " And the graphs, which is basically as the brain."}, {"start": 421.0, "end": 422.0, "text": " Yeah."}, {"start": 422.0, "end": 426.0, "text": " And so basically you can learn stuff with math lab."}, {"start": 426.0, "end": 427.0, "text": " With math lab."}, {"start": 427.0, "end": 428.0, "text": " Exactly."}, {"start": 428.0, "end": 429.0, "text": " Wow."}, {"start": 429.0, "end": 430.0, "text": " Exactly."}, {"start": 430.0, "end": 431.0, "text": " Can you learn to uninstall it?"}, {"start": 431.0, "end": 433.0, "text": " I look like all you need."}, {"start": 433.0, "end": 436.0, "text": " You don't look like an envy that I don't know."}, {"start": 436.0, "end": 439.0, "text": " That's what we really want."}, {"start": 439.0, "end": 440.0, "text": " Exactly."}, {"start": 440.0, "end": 441.0, "text": " Me."}, {"start": 441.0, "end": 446.0, "text": " I sure hope my model's error rate isn't super high."}, {"start": 446.0, "end": 447.0, "text": " Error rate."}, {"start": 447.0, "end": 448.0, "text": " Sorry."}, {"start": 448.0, "end": 450.0, "text": " Sorry."}, {"start": 450.0, "end": 451.0, "text": " Sorry."}, {"start": 451.0, "end": 454.0, "text": " Optimization is hard."}, {"start": 454.0, "end": 456.0, "text": " Yeah, it's just hard."}, {"start": 456.0, "end": 460.0, "text": " You do with advanced methods and then there is SGD."}, {"start": 460.0, "end": 461.0, "text": " Yeah."}, {"start": 461.0, "end": 463.0, "text": " That beats you every time."}, {"start": 463.0, "end": 464.0, "text": " Yeah."}, {"start": 464.0, "end": 467.0, "text": " Yeah."}, {"start": 467.0, "end": 468.0, "text": " Yeah."}, {"start": 468.0, "end": 469.0, "text": " Yeah."}, {"start": 469.0, "end": 470.0, "text": " Yeah."}, {"start": 470.0, "end": 471.0, "text": " Yeah."}, {"start": 471.0, "end": 472.0, "text": " Yeah."}, {"start": 472.0, "end": 473.0, "text": " 
Yeah."}, {"start": 473.0, "end": 474.0, "text": " Yeah."}, {"start": 474.0, "end": 475.0, "text": " Yeah."}, {"start": 475.0, "end": 478.0, "text": " What is the state of art?"}, {"start": 478.0, "end": 480.0, "text": " This is still thinking about it."}, {"start": 480.0, "end": 481.0, "text": " Okay."}, {"start": 481.0, "end": 485.0, "text": " I didn't get what happened to me on it."}, {"start": 485.0, "end": 488.0, "text": " Well, they sold the larger things."}, {"start": 488.0, "end": 489.0, "text": " Yeah."}, {"start": 489.0, "end": 494.0, "text": " They're different."}, {"start": 494.0, "end": 498.0, "text": " They're not the same."}, {"start": 498.0, "end": 500.0, "text": " Ah, I see."}, {"start": 500.0, "end": 506.0, "text": " That means that they work kind of another way."}, {"start": 506.0, "end": 507.0, "text": " Yes."}, {"start": 507.0, "end": 509.0, "text": " But only do other things."}, {"start": 509.0, "end": 510.0, "text": " Sort of."}, {"start": 510.0, "end": 514.0, "text": " But then they do it on the same task."}, {"start": 514.0, "end": 515.0, "text": " Ah, I see."}, {"start": 515.0, "end": 525.0, "text": " No, they're like trying to abstract concepts into these capsules and then the capsules can route information to other capsules dynamically."}, {"start": 525.0, "end": 526.0, "text": " Yeah."}, {"start": 526.0, "end": 527.0, "text": " Does it work?"}, {"start": 527.0, "end": 529.0, "text": " No, I think so."}, {"start": 529.0, "end": 530.0, "text": " Kind of."}, {"start": 530.0, "end": 531.0, "text": " It kind of works."}, {"start": 531.0, "end": 532.0, "text": " Yeah."}, {"start": 532.0, "end": 536.0, "text": " Why are people like you can make it do something?"}, {"start": 536.0, "end": 537.0, "text": " Okay."}, {"start": 537.0, "end": 538.0, "text": " Capsules."}, {"start": 538.0, "end": 539.0, "text": " Capsules."}, {"start": 539.0, "end": 546.0, "text": " Okay, like, Smeam, my desires are unconventional."}, {"start": 546.0, "end": 548.0, "text": " So show me."}, {"start": 548.0, "end": 553.0, "text": " RTX 2060, 2070, and 2080."}, {"start": 553.0, "end": 554.0, "text": " Ah, yeah."}, {"start": 554.0, "end": 555.0, "text": " No, don't let me."}, {"start": 555.0, "end": 556.0, "text": " Look at them."}, {"start": 556.0, "end": 557.0, "text": " I want to subatle."}, {"start": 557.0, "end": 562.0, "text": " I just can't."}, {"start": 562.0, "end": 563.0, "text": " Use a transformer."}, {"start": 563.0, "end": 564.0, "text": " Is it a analyst?"}, {"start": 564.0, "end": 565.0, "text": " Yeah."}, {"start": 565.0, "end": 566.0, "text": " I have failed you."}, {"start": 566.0, "end": 567.0, "text": " You're a man."}, {"start": 567.0, "end": 568.0, "text": " You again."}, {"start": 568.0, "end": 569.0, "text": " No."}, {"start": 569.0, "end": 571.0, "text": " Arnand's must come back."}, {"start": 571.0, "end": 572.0, "text": " Yes, exactly."}, {"start": 572.0, "end": 574.0, "text": " They're too touring complete."}, {"start": 574.0, "end": 577.0, "text": " Not at all."}, {"start": 577.0, "end": 579.0, "text": " Assistant, remember this location."}, {"start": 579.0, "end": 582.0, "text": " Okay, I remember that."}, {"start": 582.0, "end": 584.0, "text": " What did I ask you to remember?"}, {"start": 584.0, "end": 586.0, "text": " I remember what you told me."}, {"start": 586.0, "end": 588.0, "text": " This location."}, {"start": 588.0, "end": 590.0, "text": " What does this location mean?"}, {"start": 590.0, "end": 592.0, "text": " Visitor top results."}, {"start": 592.0, 
"end": 593.0, "text": " Assistant."}, {"start": 593.0, "end": 596.0, "text": " Machines are about to take over the world."}, {"start": 596.0, "end": 598.0, "text": " Definitely."}, {"start": 598.0, "end": 599.0, "text": " This intelligence."}, {"start": 599.0, "end": 600.0, "text": " Yeah, exactly."}, {"start": 600.0, "end": 602.0, "text": " Yeah, we must be very, very careful."}, {"start": 602.0, "end": 603.0, "text": " Also, we job with them."}, {"start": 603.0, "end": 604.0, "text": " What?"}, {"start": 604.0, "end": 605.0, "text": " What?"}, {"start": 605.0, "end": 606.0, "text": " You finished the means?"}, {"start": 606.0, "end": 607.0, "text": " Not yet."}, {"start": 607.0, "end": 608.0, "text": " There's one more."}, {"start": 608.0, "end": 610.0, "text": " So I have to preface this."}, {"start": 610.0, "end": 612.0, "text": " So basically, this is a..."}, {"start": 612.0, "end": 616.0, "text": " So the robot's supposed to get the ball to the target."}, {"start": 616.0, "end": 620.0, "text": " And in one setting, it has a reference motion of the robot."}, {"start": 620.0, "end": 627.0, "text": " So it learns to learn from that."}, {"start": 627.0, "end": 631.0, "text": " And then, for comparison, there's no reference motion."}, {"start": 631.0, "end": 634.0, "text": " And it just learns from scratch."}, {"start": 634.0, "end": 635.0, "text": " So first is with..."}, {"start": 635.0, "end": 636.0, "text": " Okay."}, {"start": 636.0, "end": 638.0, "text": "...and then three times and then without."}, {"start": 638.0, "end": 640.0, "text": " With reference motion."}, {"start": 640.0, "end": 641.0, "text": " Nice."}, {"start": 641.0, "end": 642.0, "text": " Yeah."}, {"start": 642.0, "end": 643.0, "text": " Wow."}, {"start": 643.0, "end": 645.0, "text": " And now without."}, {"start": 645.0, "end": 652.0, "text": " Get the ball there."}, {"start": 652.0, "end": 653.0, "text": " Get it there."}, {"start": 653.0, "end": 654.0, "text": " Get it there."}, {"start": 654.0, "end": 657.0, "text": " It's so cute."}, {"start": 657.0, "end": 658.0, "text": " Yes."}, {"start": 658.0, "end": 659.0, "text": " Yes."}, {"start": 659.0, "end": 662.0, "text": " We are AI, Doom."}, {"start": 662.0, "end": 663.0, "text": " Yes."}, {"start": 663.0, "end": 664.0, "text": " Down or ready?"}, {"start": 664.0, "end": 665.0, "text": " The damage is not..."}, {"start": 665.0, "end": 668.0, "text": " Yeah, I mean, I can see an army of robots."}, {"start": 668.0, "end": 670.0, "text": " Is there arms left?"}, {"start": 670.0, "end": 671.0, "text": " The guns?"}, {"start": 671.0, "end": 673.0, "text": " They just take the ball and they're like..."}, {"start": 673.0, "end": 676.0, "text": " Pull the trigger like..."}, {"start": 676.0, "end": 681.0, "text": " Alright, this was it for episode one of Deep Learning Memories Review."}, {"start": 681.0, "end": 710.0, "text": " Thanks so much for being here with us and having a good time."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=nXGHJTtFYRU | Dynamic Routing Between Capsules | Geoff Hinton's next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into capsules. Each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. This information is then allocated dynamically to higher-level capsules in a novel and unconventional routing scheme. While Capsule Networks are still in their infancy, they are an exciting and promising new direction.
Abstract:
A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.
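The two mechanisms named in this abstract, the squashing of each activity vector so that its length behaves like a probability, and the iterative routing-by-agreement driven by the scalar product between a lower capsule's prediction and a parent's output, can be sketched numerically as follows. The shapes, the three routing iterations, and the absence of learned biases are illustrative simplifications, not the paper's exact implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|): keeps the direction,
    # shrinks the length into [0, 1) so it can be read as a probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, n_iters=3):
    # u_hat[i, j]: prediction of lower capsule i for parent capsule j,
    # shape (n_lower, n_parent, dim_parent).
    n_lower, n_parent, _ = u_hat.shape
    b = np.zeros((n_lower, n_parent))                         # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over parents
        s = (c[..., None] * u_hat).sum(axis=0)                 # weighted sum per parent
        v = squash(s)                                          # parent outputs
        b = b + np.einsum("ijk,jk->ij", u_hat, v)              # agreement = scalar product
    return v

# 6 lower-level capsules sending 16-dimensional predictions to 10 parent capsules.
v = routing_by_agreement(np.random.randn(6, 10, 16))
print(np.linalg.norm(v, axis=-1))  # each length lies in (0, 1)
```

Capsules whose predictions agree with a parent's output get their routing coefficients increased on the next iteration, which is exactly the "routing by agreement" the abstract describes.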
Authors: Sara Sabour, Nicholas Frosst, Geoffrey E Hinton
https://arxiv.org/abs/1710.09829
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hi there. Today we're looking at dynamic routing between capsules by Sara Sabur, Niklos Frost and Jeffrey Hinton of Google Brain. This paper is a bit older but it's made quite the impact at the time and so we'll go through it. I find it's a pretty hard paper to read and kind of understand because a lot of things are very implicit and hand wavy so we'll kind of go through it and try to get the best out of it, try to explain what capsules are and what they do and how they stack against current networks. So capsule network in essence is a new type of neural network made of capsules and here it says a capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. Kind of cryptic but so what they're saying is that in a capsule network let me try to draw one here actually. In a capsule network you have what's called capsules. The capsules you can imagine has just little blobs of things right and they're also ordered in layers in this case. Let's actually leave away the second layer and each of these of these capsules will correspond to an entity in the input. Let's say the input is an image so somewhere here there is an image right then maybe this capsule here will be responsible for detecting is there a wall in the image and this one will be responsible for detecting is there a roof. This one will be is there a door and this one will be responsible for detecting is there a lake in the image right. So now each of these capsules can for on one hand can either be high or low. So if you if you imagine now a situation where wall high roof high door high lake low it means probably the image has a house on it right. But second of all not only can it predict whether or not a given entity is present in an image but the individual capsules are also responsible for encoding the exact way or shape or form that this entity takes. So the wall could have different aspects such as color, color, green. It could have size, tall. It could have orientation. Tation is like I don't know vertical. Then roof could have angle right angle wide. So it's a wide roof or a flat roof right. These are these are kind of attributes of these things that also the capsules would encode. So ultimately what these capsules that they are proposing will output is the roof capsule here for example would output a vector. So the output of the roof capsule is a let me draw a coordinate system is a vector. Now the length of the vector will represent so that the length draw this norm here will represent the probability that the roof is in the image. That there is a roof in an image right. The roof is element of this input image. This is simply the length and the individual coordinates will encode these attributes. So this here for example this axis could be the angle of the roof and this axis could be the color. Let's say the angle is like some degree number that can be positive or negative. Maybe a roof can be like this. But in essence this is a flat roof and this is a very narrow angle roof. So you can imagine something like this and then the color could also be maybe parameterized on a one-dimensional. It can have more dimensions than two. I just can't draw more. So depending on where this where this arrow now points the for example this vector here has the same probability that there's a roof in the image like if the output is this but the color will be different. 
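To put numbers on the picture just described, here is a tiny illustration with made-up values: two output vectors of a hypothetical "roof" capsule that point in the same direction, so they encode the same attributes, but differ in length, that is, in how likely the capsule considers a roof to be present.

```python
import numpy as np

likely_roof   = np.array([0.60, 0.70])   # long vector: a roof is probably present
unlikely_roof = np.array([0.06, 0.07])   # same direction, ten times shorter

for name, vec in [("likely", likely_roof), ("unlikely", unlikely_roof)]:
    presence = np.linalg.norm(vec)        # read the length as the probability
    attributes = vec / presence           # the direction encodes angle, colour, ...
    print(name, round(float(presence), 2), attributes)
```

Both vectors describe the same roof; only the confidence that it is there differs.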
The angle will be the same because they're roughly on the same this axis here but the color of this will encode a different different color roof and then if the vector is something like this a very short vector it will encode the same the same angle and color directions. So maybe I shouldn't say the position on the axis is more like this angle and this this angle that encode the attributes. So the kind of the angular components if you will encode the attributes and the length encodes the probability. So this small vector has the same direction in terms of color and angle of the roof but it's much less probable much less likely. So this if the capsule outputs the little blue vector here it says well if there is a roof it's going to be this color and this angle but I'm really that really don't think there's a roof in this image whereas if it outputs the large green one then it says oh I'm pretty sure that there's a roof and it's going to be this angle and this this this angle and this color. Alright so that's that's is what each capsule is supposed to do. Each capsule takes the input and outputs a vector that encodes if the entity that the capsule is responsible for is present in the image a and b what properties this entity has and then we get to the point where there's the next layer of capsules. So the next layer of capsules takes information to each capsule here takes information from each capsule in the lower layer like like you're used to from your neural network and integrates this information and we'll talk about how this works it integrates all of this information right all of these are vectors now that come from the lower integrates all of this information and again each capsule in this next layer is responsible for a entity. Now these entities in the higher layers are usually composite entities of the lower layers so this one here could be responsible for house this one could be responsible for national park national park and this one could be responsible for beach or something like this right and then each of these will integrate all of this information from the lower layers and then come up with their own output vector encoding whether or not a given entity is present in the in the image of course the house class will pick up if there is a door or roof in the wall in the image the house class is will pick up on that or that's how it's meant to work house classes meant to pick up on that and then itself output the large vector saying there's probably a house in this in this image so each of these capsules in by itself is responsible for encoding the presence and attributes of a object or object part or entity or part of entity in the given input data and of course the last layer here it will simply be your classification layer so in the last layer you have as many capsules as you have classes in your classification task if so this is mainly for a classification task and then you can classify and you can kind of train the whole system like this so how exactly this happens will see next all right so they make kind of analogies to the visual system and so on we'll jump these you can everyone that does deep learning in some way is trying to to make that we're rather going to the specifics of how these capsules work and how their specific suggestions for them know that they say this is in no way the only implementation of capsules is just kind of an example to show how one could do it all right so first of all they present there what do you want call nonlinearity so there nonlinearity 
what it needs to do is the following: if you look at these capsule networks, the outputs here, the lengths of these output vectors, are supposed to represent probabilities, and as such they need to be bounded. So here the roof may be a vector like this, the door may be a vector like this, the wall may be a vector like that. Initially we simply specify that the output is a vector, and in essence these capsules are implemented in much the same way as your classic neural network layer would be implemented. Each of these capsules will in essence be a neural network layer by itself that outputs a vector, and there's nothing constraining the length of the vector initially. So their nonlinearity constrains the vector to be of maximum length one and of minimum length zero; that's this nonlinearity here. S here is the unscaled output of the capsule, and you can see that if the length of S gets really large, then this term here becomes irrelevant, this whole term will be 1, and then the length of the final output V here will be close to 1. However, if the length of the original output is really small, if this goes towards 0, then this term becomes irrelevant, this will go towards 0, and the entire length will go towards 0. So this is a nice way to scale these outputs to always have a length between 0 and 1. The next thing is how this routing happens; I find that the most complicated part, so we'll jump ahead actually to how a capsule network is implemented, and this is the capsule network they implement. First, it's an MNIST classifier: you have an MNIST image here, and it first goes through a simple convolutional layer. That's nothing new, this is a classic convolutional layer: there are 256 channels, it has 9 by 9 filters and stride 1, so it will output a 20 by 20 by 256 tensor. Then each of these outputs is sent to each of these capsules, and now they're convolutional capsules, so that makes it a bit more complicated, but don't worry primarily about them being convolutional capsules; the analogy is exactly as in a classic neural network, you can implement these capsules as feed-forward capsules or as convolutional capsules, and maybe also as transformer capsules; I don't think anyone's done that, all right, there's a paper for you. So you'll send the output of this convolutional layer to each capsule, and then you have basically just two layers of capsules here. The first layer consists of 32 of what they call primary capsules; these 32 capsules each output an 8-dimensional vector (I'm simplifying here, it's convolutional, but for simplicity say they each output an 8-dimensional vector), and these are exactly as we said before, so each of these will ultimately be responsible for a given entity or part of an entity being there. In MNIST this could be: is there a little curve on the bottom left side, which might indicate the presence of a six or an eight, something like this. And then these capsules here are represented as rows, so each of these rows here is a capsule, and we have ten of these, and these are simply your final classification capsules. Each capsule is responsible for indicating the presence or absence of one particular class of digits, so this would be for a one, a two, a three, a four and so on, and a zero I guess somewhere as well. So these are ten capsules and the
question is how does information go from a capsule here from the output of a capsule or to any of capsule here and the easy way to do this is simply to say as in a classical neural network the output here simply goes to the input here just you just put it there basically unchanged now there's a bit of an issue here with the dimensions but you can simply say well we simply put away matrix in to route into the capsules but the idea of these capsules and this paper is to say wait wait these capsules actually we want to make them decide to which capsule in the next layer will they send their input right so the capsules can kind of decide where they want to send their output to like where is this where is the capsule that detects the maybe this one detects is there a line in the right side of the image right indicating maybe a seven or a one this is probably most relevant for the one class and for the seven class so it might decide to route its output there and the idea of how this routing happens is basically the topic of this paper so the the capsules route their output to the appropriate next layers capsules how's this done all right this is done via the what's called the routing mechanism I find it quite poorly described here so I will simply draw it I will I will simply try to make it up all right so we have capsules and as I've drawn them before right we have one two three capsules and we maybe have two parent capsules each of these capsules here will output a vector as we said and we'll only do it for this this one sorry vector here so this will output this vector and it needs to decide where to hear or to hear do I send this output now what it does is there is an iterative procedure that has multiple steps and this is I think this is the least the way I understand I think the important part to understand is that if we forward pass data through this network it actually doesn't go forward in a straight line what it actually does is it goes through a layer and then it does multiple steps in between layers until it has decided where it wants to go in the next layer and then it goes on to the next layer and if there's another another capsule layers it does again multiple steps before it goes on so that's that's my take on it and the multiple steps are as follows first I'll send my output vector to to all of the all of the layers like equally all of the parent capsules and so will will everyone else right everyone will send their equally to the parent now this isn't just done and this may be here this isn't just done just by sending it but this is actually done by modulation of weight matrices so each thing here if this is capsule I and this is capsule J there's a weight matrix in between w i j that is learned right this is a static weight matrix and each one of these red are red arrows you see here has such a weight matrix attached to it so each each line you see here is actually modulated by such a weight matrix there is an quadratic number of these weight matrices flying around and this will also then allow you that maybe this vector is eight-dimensional but the input vector here is sixteen-dimensional what we saw before all right so the output input of capsule J here it will receive let's see what it receives it will receive the output of capsule will the output of capsule one V1 modulated by the let's let's call this yeah let's call this J modulated by one J W1 J and it will also receive this is a set the output of capsule two modulated by the weight matrix for sorry weight matrix for 
capsule two and so on now what it does is it adds this these all up into a softmax so sorry let's write this so soft it will add those all up in a softmax weighted fashion so it will actually compute a weighted average of those now the weights at the beginning are are just one because it gets each from each lower capsule it gets equal amount of this vector but then this will give you an output so this will give you some output let's put this in green this will give you an output that's I don't know how they call it in a paper let's just call it oh J right and then what you do is all right you compare how much do each of the individual contributions agree with oh J so you actually compute for each of these you would compute the inner product so you put compute the inner product of W1 J V1 with oh J and you would compute the inner product of W2 J V2 with oh J right the inner product and then these inner products here will become the weighting coefficients for the softmax in the next iteration all right so this I mean this this is a bit convoluted but ultimately what you're saying is if you're a capsule here you'll send your output forward you have an output you send it forward right to the other capsule and the other capsule will so this is this is your output and we'll forget about this weight matrix for now this is your the other capsule will output its own its own output computed from the lower layers now we do an iteration again if your output now aligns with this you will send more of it and then these two that I've drawn here actually align pretty well right so you'll send more of it is more more more right and now the maybe the output that next computed output of the same capsule will be even more in that direction because you've contributed more right you'll send more and then you'll like in the next iteration wow these two are really equal sorry this should be read here your yours just keeps being the same and then you say well I'm not going to send even more to that one right whereas another capsule that it's whose initial output was basically whose initial output was basically like this it will by itself compute the inner product with the original this original it will send it here right it will compute the inner product with the original output and it will realize well these two now align very much and then it will send less right it will send less to the next step and because it sends less in the next step of course the output will then probably align even less with that vector and then it will send less and less and less so this is called dynamic routing the idea behind it is kind of that you route by agreement so you will route to the parent capsules that agree with your output and by agreement we mean kind of the inner product is high after modulating by this weight matrix and that sort of so that basically means this weight matrix is responsible for deciding which information is relevant together whenever you have two vectors that align in the same layer then the in the sense of the capsule networks those represent the same kind of information and those will be routed together to the same capsule in terms of the examples we made maybe if a door and a roof is present then these these these weight matrices that connect door and roof to the house class they will transform a high vector and door and roof into aligning vectors for the house class and thereby saying look these two if I look at them through if I look at a door and a roof through the perspective of trying to be a 
house right then they are in much agreement on the presence of a house so if I am a house right I am a house and I look at a door and I look at a roof through the kind of from the perspective of being a house right this is this is what these weight matrices do they always have a perspective of the parent capsule then these two things they make a lot of sense together and thus I will route them to the same place so they can both contribute to their being a house now from the perspective of a house if I look at a little beach with a tree on it right then that does not that is not the same that does not really is not the same information as a door or a roof so I will not route this to the house in the in the same strength that is sort of the best way I have of explaining it how these capsules work basically the lower entities will always be routed for the relevance of the higher entities that are trying to they're trying to combine the lower entities if that wasn't the it's not entirely clear to me either yet but it's the best job I can give the routing is here formalized I find it hard to follow the important thing is that there is an inner loop in all of this so there is an like kind of an an inner iteration and this inner iteration is computed in every forward pass and so the this routing where the information goes in the next layer that is only the prior probability for that is learned but the actual routing coefficients those are dynamically computed in every forward pass so every forward pass goes it goes information goes through a layer then it goes multiple steps between two layers until it decides exactly what the distribution for the next layer is and then the next layer computes its outputs and that goes again multiple steps between these layers and the next layer so that's the the basic thing to remember there's also some normalization involved the squash is the nonlinearity we discussed so whether they actually train now at the end here they have a they have these 10 capsules and each capsule will be responsible for recognizing one the presence of one digit in the MS dataset of course and so what they do is they take the length of these vectors that are output by these capsules these capsules are feet forward capsules as opposed to the convolutional capsules here so the feet forward capsules output again a vector the length of this vector is taken and then it's basically trained like you would train a regression problem and the loss here is specified up here so if the if the image actually does contain this if the training label actually has this digit present this T here encodes that so if if K let's say K is two right so if K two if there is a two in the image when we know that because it's a training image then the length of the output of capsule number two should be high and this simply encodes that it should be very close to this M plus and M plus here is the I think they said it to 0.9 so they say you should be the length should be as close as possible to 0.9 whereas if the two is not present then T K will be zero then this part will be active so it's only one of these two parts will be active then the length of the vector so of capsule number two should be close to this M negative which is 0.1 it's basically a regression problem saying if if the if the given entity is in the image then please make the length as close as possible to 0.9 and if it's not make as close as possible to 0.1 so this this is a classic say regression loss on the length of the output vectors the the 
The lambda is just a factor to dampen the contribution of all the negative classes with respect to the one positive class, of course per capsule. It turns out this is actually not enough. So this will be the classification output, but it seems not enough; they don't say it's not enough, but they simply say we additionally do the following. So what they also do is they introduce a reconstruction loss. Now if this model is trained correctly, then these capsules here, these last capsules, especially this one maybe, that's the capsule corresponding to the class of the digit eight, will not only encode whether an eight is there or not, as in the length of the output vector, but it will also encode the properties of the eight. This is a 16-dimensional vector, so it will hopefully encode things like the stroke width, then it might encode maybe the rotation of the digit, then it might encode the tightness of the loop, so you can have an eight with very large loops, or you can have, sorry, a smaller eight, an eight with very tight loops. So it might, you know, encode things like this. So technically it will be possible to reconstruct from this description: say the width is high, the rotation is zero and the tightness is low, then maybe I have a widely stroked, not tight eight that is not rotated, right, so it should be possible to reconstruct this. And they do exactly that. So they take this last capsule of the class that is the actual training label, that's called the reconstruction target, and they feed this to a simple feed-forward neural network that at the end, you see this is exactly the MNIST size, will try to reconstruct the image. So if this image here goes in, then it goes all through here, it will take the class four here, feed it through this network, reshape it to an image again, and hopefully what will come out is again this four here. And it will then have an auxiliary loss, in addition to this classification loss here, an auxiliary loss that tries to reconstruct the original image, right, and that's simply, I believe, just an L2 reconstruction loss that is scaled down so that it doesn't dominate. So they also train the network basically to reconstruct this, and I believe they do this because the length isn't quite enough to make it do what they wanted it to do; thus, by having this reconstruction here, they really kind of enforce that the individual capsules, the individual dimensions, must encode some kind of information about the original image, and since the original images, in the MNIST dataset at least, vary by those things, by stroke width, by rotation, by tightness, that will be reflected in the reconstruction through this loss. All right, so how are they doing? Here you see different examples of inputs and then reconstructed outputs, and this, you know, seems pretty good actually. So you see here, for all of these, the input image is reconstructed fairly well, so the numbers up here. And then the failure cases here: the input image is a five, labeled as such in the training data, but the network actually classifies it as a three. But then you have two choices, right, this is the same sample, you have two choices for reconstruction: either you reconstruct from the capsule that is actually the true capsule that should be activated, or you reconstruct from the capsule that the network says it classifies it as. So here it mixed up a five for a three.
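Here is a rough sketch, again in PyTorch, of how such a reconstruction decoder and the masking of all but the target-class capsule could look. The class and variable names are made up for illustration, and the layer sizes and the small scaling factor on the L2 loss are what I believe the paper uses, so treat them as assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ReconstructionDecoder(nn.Module):
    """Feed-forward decoder from one class capsule back to a 28x28 image."""
    def __init__(self, num_classes=10, capsule_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes * capsule_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 784), nn.Sigmoid(),   # 784 = 28 * 28 MNIST pixels
        )

    def forward(self, capsules, targets):
        # capsules: (batch, num_classes, capsule_dim); zero out every capsule
        # except the one belonging to the target class, then decode.
        mask = torch.nn.functional.one_hot(targets, capsules.shape[1]).float()
        masked = capsules * mask.unsqueeze(-1)
        return self.net(masked.flatten(start_dim=1))

def reconstruction_loss(reconstruction, images, scale=0.0005):
    # scaled-down L2 loss so it does not dominate the margin loss
    return scale * ((reconstruction - images.flatten(start_dim=1)) ** 2).sum(dim=1).mean()

# toy usage
decoder = ReconstructionDecoder()
caps = torch.randn(4, 10, 16)
imgs = torch.rand(4, 1, 28, 28)
recon = decoder(caps, torch.tensor([3, 1, 4, 1]))
print(reconstruction_loss(recon, imgs))
```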
If you still take the five capsule and reconstruct from it, you see it actually looks like the original image, but it looks much more like a five, and if you take the three capsule to reconstruct, which is what the network classified this as, it still looks like the original, but it looks much more like an actual three, right, it's missing the part up here, whereas over here it's missing this part here. So the network really seems to kind of learn the different variations of these digits, and in an ambiguous case such as this one it can, you know, actually go either way, and it can actually reconstruct the original input in either interpretation, once as a three and once as a five. It would be interesting to see what the actual lengths of the vectors of both of these mixed-up classes were. And here they compare their accuracy: they have a baseline model, which I believe is just a CNN, where they get a decent kind of error, and then the capsule networks get a lower error, and here you see, as you add the reconstruction loss and as you add more routing, so one step of routing simply means the first step, where you send your output equally to each parent, which is as in the classical neural network case, but if you introduce three steps of routing, then your error drops even lower. So they are kind of on par with baseline CNNs on MNIST here. They also explore what their capsules learn, so as I said, the individual capsules, the dimensions, should encode kind of the properties of the variations of the class samples, and here they explore this in the different capsules, so they change some dimensions and they run it through their reconstruction networks, and indeed they discover that there is like a scale and thickness dimension, a stroke thickness dimension, there's a skew dimension, and so on, width and translation. So this is pretty remarkable: these networks, if you train them in this way, really seem to learn about the entities and about the properties of the entities, and that seems to be quite interesting. You see that everything here stays well within the class that the capsule is assigned to. They also, yeah, there's robustness to affine transformations, where they improve over the baseline, it's kind of an auxiliary experiment. The next interesting experiment is what they call the MultiMNIST experiment. The MultiMNIST experiment is done by taking two different MNIST digits and basically just overlapping them, so they, you know, shift them slightly, but as you see here or here, they are overlapped heavily, and the task of the network is to figure out which two overlapping digits are in the image, and the network is very, very good at doing this, the capsule network that is, and better than the baselines, because the capsule network simply encodes the presence and properties of a particular instance in the image. If you simply take the two capsules with the largest lengths and then reconstruct those independently, then you can basically segment the image, and you see this here: the different colorations come from two different reconstructions of the image from two different capsules, so green is from one capsule and red from the other capsule. So the network correctly identifies that it's a six and a zero, right, and it also correctly identifies not only which pixels belong to the six and which belong to the zero, but also pixels that belong to both, so that's not a problem if you use capsule networks.
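A small sketch of that segmentation-by-reconstruction idea: take the two class capsules with the largest lengths and decode each one separately, reusing a decoder like the one sketched earlier. Everything here, including the assumption that such a decoder accepts an explicit class index, is illustrative rather than the authors' implementation.

```python
import torch

def segment_two_digits(capsules, decoder):
    """Reconstruct the two most active class capsules separately.

    capsules: (batch, num_classes, capsule_dim) output of the digit capsules.
    decoder:  a reconstruction decoder like the sketch above.
    Returns two (batch, 784) reconstructions, one per detected digit.
    """
    lengths = capsules.norm(dim=-1)                 # (batch, num_classes)
    top2 = lengths.topk(k=2, dim=1).indices         # the two longest capsules
    recon_a = decoder(capsules, top2[:, 0])         # reconstruct first digit only
    recon_b = decoder(capsules, top2[:, 1])         # reconstruct second digit only
    return recon_a, recon_b
```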
The way they train here is, they train the reconstruction by only reconstructing one digit at a time, so kind of the premise of the dataset is that you actually have access to the underlying individual digits while training, so like the images of the individual digits, you don't only have this label here, but that's a detail. Here are some kind of failure cases, where it misclassified, or you misspecify the capsules, and it's kind of unable, you see here, to assign the pixels of the misclassified thing. It's quite interesting to look at the failure cases, but I find it more interesting to actually look at the success cases and the kind of ease with which the capsule networks can do this, simply by how they're structured. All right, so then lastly they also experiment on CIFAR-10, and interestingly the CIFAR-10 experiments show that the capsule networks don't perform as well there. And as you know, CIFAR-10 is a dataset that is about the same size as MNIST, but it's first of all color, and second of all it's natural images, and so they have quite a bit of clutter. It's not black and white, black background, white digits; there's actually a sky, and like in an image there's lots of things going on, right, there's maybe a tree, and there's stuff here and there's stuff here, and the capsule networks, they like to account for things in the image, so they like to have a capsule corresponding to everything that's going on here and here and here and here. And if the whole background is black, that is not a problem, you can simply account for the background, but if there's lots of things going on, then these capsule networks get a bit over-explanatory, they want to explain everything, and that degrades the performance. Now this paper basically says, yeah, you can have something like a none-of-the-above category, and they found that it helped to introduce that. In my opinion, I think the solution will be more towards the introduction of a better loss function, such that you don't need to kind of explain the entire thing, rather than, as here, where what you do is you simply explain it by saying it's none of the above, but it's incredibly hard to balance that, in my opinion. Yeah, all right, so that is basically the end of this. They have a discussion here where they compare capsules against other related work, but I hope that you kind of got an overview of how this works now, as much as possible, and with that, that was it from me, and thanks for watching. Bye bye. | [{"start": 0.0, "end": 5.5200000000000005, "text": " Hi there. Today we're looking at dynamic routing between capsules by Sara"}, {"start": 5.5200000000000005, "end": 11.64, "text": " Sabur, Niklos Frost and Jeffrey Hinton of Google Brain. This paper is a bit"}, {"start": 11.64, "end": 18.12, "text": " older but it's made quite the impact at the time and so we'll go through it. I"}, {"start": 18.12, "end": 22.68, "text": " find it's a pretty hard paper to read and kind of understand because a lot of"}, {"start": 22.68, "end": 31.119999999999997, "text": " things are very implicit and hand wavy so we'll kind of go through it and try to"}, {"start": 31.119999999999997, "end": 35.04, "text": " get the best out of it, try to explain what capsules are and what they do and"}, {"start": 35.04, "end": 41.879999999999995, "text": " how they stack against current networks. 
So capsule network in essence is a new"}, {"start": 41.879999999999995, "end": 47.480000000000004, "text": " type of neural network made of capsules and here it says a capsule is a group of"}, {"start": 47.480000000000004, "end": 51.68, "text": " neurons whose activity vector represents the instantiation parameters of a"}, {"start": 51.68, "end": 57.44, "text": " specific type of entity such as an object or an object part. Kind of cryptic but"}, {"start": 57.44, "end": 65.08, "text": " so what they're saying is that in a capsule network let me try to draw one here"}, {"start": 65.08, "end": 69.8, "text": " actually. In a capsule network you have what's called capsules. The capsules you"}, {"start": 69.8, "end": 76.36, "text": " can imagine has just little blobs of things right and they're also ordered in"}, {"start": 76.36, "end": 82.36, "text": " layers in this case. Let's actually leave away the second layer and each of"}, {"start": 82.36, "end": 90.64, "text": " these of these capsules will correspond to an entity in the input. Let's say the"}, {"start": 90.64, "end": 95.64, "text": " input is an image so somewhere here there is an image right then maybe this"}, {"start": 95.64, "end": 104.28, "text": " capsule here will be responsible for detecting is there a wall in the image and"}, {"start": 104.28, "end": 111.8, "text": " this one will be responsible for detecting is there a roof. This one will be is"}, {"start": 111.8, "end": 120.4, "text": " there a door and this one will be responsible for detecting is there a lake in"}, {"start": 120.4, "end": 129.72, "text": " the image right. So now each of these capsules can for on one hand can either be"}, {"start": 129.72, "end": 138.6, "text": " high or low. So if you if you imagine now a situation where wall high roof high"}, {"start": 138.6, "end": 148.2, "text": " door high lake low it means probably the image has a house on it right. But"}, {"start": 148.2, "end": 153.84, "text": " second of all not only can it predict whether or not a given entity is"}, {"start": 153.84, "end": 158.6, "text": " present in an image but the individual capsules are also responsible for"}, {"start": 158.6, "end": 166.35999999999999, "text": " encoding the exact way or shape or form that this entity takes. So the wall could"}, {"start": 166.35999999999999, "end": 176.6, "text": " have different aspects such as color, color, green. It could have size, tall. It"}, {"start": 176.6, "end": 187.92, "text": " could have orientation. Tation is like I don't know vertical. Then roof could"}, {"start": 187.92, "end": 195.39999999999998, "text": " have angle right angle wide. So it's a wide roof or a flat roof right. These"}, {"start": 195.39999999999998, "end": 200.39999999999998, "text": " are these are kind of attributes of these things that also the capsules would"}, {"start": 200.39999999999998, "end": 205.64, "text": " encode. So ultimately what these capsules that they are proposing will"}, {"start": 205.64, "end": 212.76, "text": " output is the roof capsule here for example would output a vector. So the output"}, {"start": 212.76, "end": 220.92, "text": " of the roof capsule is a let me draw a coordinate system is a vector. Now the"}, {"start": 220.92, "end": 228.92, "text": " length of the vector will represent so that the length draw this norm here will"}, {"start": 228.92, "end": 237.2, "text": " represent the probability that the roof is in the image. That there is a roof in"}, {"start": 237.2, "end": 243.2, "text": " an image right. 
The roof is element of this input image. This is simply the"}, {"start": 243.2, "end": 248.04, "text": " length and the individual coordinates will encode these attributes. So this"}, {"start": 248.04, "end": 254.51999999999998, "text": " here for example this axis could be the angle of the roof and this axis could"}, {"start": 254.51999999999998, "end": 260.12, "text": " be the color. Let's say the angle is like some degree number that can be"}, {"start": 260.12, "end": 268.36, "text": " positive or negative. Maybe a roof can be like this. But in essence this is a"}, {"start": 268.36, "end": 273.44, "text": " flat roof and this is a very narrow angle roof. So you can imagine something like"}, {"start": 273.44, "end": 277.6, "text": " this and then the color could also be maybe parameterized on a one-dimensional."}, {"start": 277.6, "end": 282.44, "text": " It can have more dimensions than two. I just can't draw more. So depending on"}, {"start": 282.44, "end": 289.88, "text": " where this where this arrow now points the for example this vector here has the"}, {"start": 289.88, "end": 294.2, "text": " same probability that there's a roof in the image like if the output is this"}, {"start": 294.2, "end": 298.44, "text": " but the color will be different. The angle will be the same because they're"}, {"start": 298.44, "end": 303.04, "text": " roughly on the same this axis here but the color of this will encode a different"}, {"start": 303.04, "end": 310.68, "text": " different color roof and then if the vector is something like this a very short"}, {"start": 310.68, "end": 320.8, "text": " vector it will encode the same the same angle and color directions. So maybe I"}, {"start": 320.8, "end": 325.92, "text": " shouldn't say the position on the axis is more like this angle and this this"}, {"start": 325.92, "end": 330.48, "text": " angle that encode the attributes. So the kind of the angular components if you"}, {"start": 330.48, "end": 334.92, "text": " will encode the attributes and the length encodes the probability. So this small"}, {"start": 334.92, "end": 339.88, "text": " vector has the same direction in terms of color and angle of the roof but it's"}, {"start": 339.88, "end": 345.4, "text": " much less probable much less likely. So this if the capsule outputs the little"}, {"start": 345.4, "end": 351.2, "text": " blue vector here it says well if there is a roof it's going to be this color and"}, {"start": 351.2, "end": 355.08, "text": " this angle but I'm really that really don't think there's a roof in this image"}, {"start": 355.08, "end": 361.44, "text": " whereas if it outputs the large green one then it says oh I'm pretty sure that"}, {"start": 361.44, "end": 365.88, "text": " there's a roof and it's going to be this angle and this this this angle and"}, {"start": 365.88, "end": 371.2, "text": " this color. Alright so that's that's is what each capsule is supposed to do. Each"}, {"start": 371.2, "end": 379.08, "text": " capsule takes the input and outputs a vector that encodes if the entity that"}, {"start": 379.08, "end": 384.71999999999997, "text": " the capsule is responsible for is present in the image a and b what properties"}, {"start": 384.71999999999997, "end": 390.88, "text": " this entity has and then we get to the point where there's the next layer of"}, {"start": 390.88, "end": 395.92, "text": " capsules. 
So the next layer of capsules takes information to each capsule here"}, {"start": 395.92, "end": 402.96, "text": " takes information from each capsule in the lower layer like like you're used"}, {"start": 402.96, "end": 407.8, "text": " to from your neural network and integrates this information and we'll talk"}, {"start": 407.8, "end": 411.84, "text": " about how this works it integrates all of this information right all of"}, {"start": 411.84, "end": 416.04, "text": " these are vectors now that come from the lower integrates all of this"}, {"start": 416.04, "end": 422.68, "text": " information and again each capsule in this next layer is responsible for a"}, {"start": 422.68, "end": 427.76000000000005, "text": " entity. Now these entities in the higher layers are usually composite entities"}, {"start": 427.76000000000005, "end": 436.24, "text": " of the lower layers so this one here could be responsible for house this one"}, {"start": 436.24, "end": 444.48, "text": " could be responsible for national park national park and this one could be"}, {"start": 444.48, "end": 451.16, "text": " responsible for beach or something like this right and then each of these will"}, {"start": 451.16, "end": 456.32, "text": " integrate all of this information from the lower layers and then come up with"}, {"start": 456.32, "end": 461.48, "text": " their own output vector encoding whether or not a given entity is present in the"}, {"start": 461.48, "end": 468.8, "text": " in the image of course the house class will pick up if there is a door or"}, {"start": 468.8, "end": 473.44, "text": " roof in the wall in the image the house class is will pick up on that or that's"}, {"start": 473.44, "end": 477.0, "text": " how it's meant to work house classes meant to pick up on that and then"}, {"start": 477.0, "end": 482.44, "text": " itself output the large vector saying there's probably a house in this in this"}, {"start": 482.44, "end": 488.08, "text": " image so each of these capsules in by itself is responsible for encoding the"}, {"start": 488.08, "end": 494.2, "text": " presence and attributes of a object or object part or entity or part of entity"}, {"start": 494.2, "end": 500.12, "text": " in the given input data and of course the last layer here it will simply be your"}, {"start": 500.12, "end": 505.96, "text": " classification layer so in the last layer you have as many capsules as you have"}, {"start": 505.96, "end": 512.0, "text": " classes in your classification task if so this is mainly for a classification"}, {"start": 512.0, "end": 518.6, "text": " task and then you can classify and you can kind of train the whole system"}, {"start": 518.6, "end": 528.76, "text": " like this so how exactly this happens will see next all right so they make kind"}, {"start": 528.76, "end": 537.6, "text": " of analogies to the visual system and so on we'll jump these you can everyone"}, {"start": 537.6, "end": 544.24, "text": " that does deep learning in some way is trying to to make that we're rather"}, {"start": 544.24, "end": 550.3199999999999, "text": " going to the specifics of how these capsules work and how their specific"}, {"start": 550.3199999999999, "end": 555.64, "text": " suggestions for them know that they say this is in no way the only implementation"}, {"start": 555.64, "end": 562.04, "text": " of capsules is just kind of an example to show how one could do it all right so"}, {"start": 562.04, "end": 567.6, "text": " first of all they present there what do you want call nonlinearity so there"}, 
{"start": 567.6, "end": 571.92, "text": " nonlinearity what what it needs to do is if you look at these caps on that works"}, {"start": 571.92, "end": 576.3199999999999, "text": " the outputs here the length of the outputs of these vectors right they're"}, {"start": 576.3199999999999, "end": 582.4399999999999, "text": " supposed to represent probabilities and as such they they need to be so here"}, {"start": 582.44, "end": 588.44, "text": " it roof this door may be a vector like this wall may be a vector like that so"}, {"start": 588.44, "end": 594.0, "text": " initially it we simply specify the output is a vector and in essence these"}, {"start": 594.0, "end": 599.32, "text": " capsules are implemented in much the same way like your classic neural network"}, {"start": 599.32, "end": 604.48, "text": " layer would be implemented so each of the each of these"}, {"start": 604.48, "end": 613.48, "text": " capsules will be in essence a neural network layer by itself that outputs a"}, {"start": 613.48, "end": 619.48, "text": " vector and there's nothing constrain in the length of the vector initially so"}, {"start": 619.48, "end": 626.9200000000001, "text": " their nonlinearity does constrain the vector to be of maximum length one and of"}, {"start": 626.9200000000001, "end": 632.28, "text": " minimum length here that's this nonlinearity here so S here is the unscaled"}, {"start": 632.28, "end": 639.9599999999999, "text": " output of the capsule and you can see here if the length of S gets close to 1"}, {"start": 639.9599999999999, "end": 647.04, "text": " or sorry gets really large then this here is becomes irrelevant this whole"}, {"start": 647.04, "end": 654.92, "text": " term will be 1 and then the length of the final output of V here will be 1"}, {"start": 654.92, "end": 661.9599999999999, "text": " right so if this is very large then the the length of the scale output will be"}, {"start": 661.96, "end": 668.0, "text": " 1 however if the if the length is really small of the original outputs if this"}, {"start": 668.0, "end": 674.6800000000001, "text": " goes towards 0 then this becomes irrelevant this becomes irrelevant this will"}, {"start": 674.6800000000001, "end": 681.88, "text": " go towards 0 and the entire length will go towards 0 so this is kind of a"}, {"start": 681.88, "end": 692.92, "text": " nice way to scale these outputs always to be between length 0 and 1 then next"}, {"start": 692.92, "end": 704.92, "text": " thing is so how this I find I find the the most complicated part right so we'll"}, {"start": 704.92, "end": 711.56, "text": " jump ahead actually to how a capsule's network is implemented and this is the"}, {"start": 711.56, "end": 718.04, "text": " the capsule network they implement so first it's an eminence classifier you have"}, {"start": 718.04, "end": 723.1999999999999, "text": " an eminence image here and it first goes through a simple convolutional layer"}, {"start": 723.1999999999999, "end": 729.16, "text": " that's that's nothing new this is a classic convolutional layer is there's"}, {"start": 729.16, "end": 738.3199999999999, "text": " 256 channels it has a 9 by 9 filters and stride 1 so it will output a 20 by"}, {"start": 738.32, "end": 750.9200000000001, "text": " 20 time by 256 tensor then each of these so each of the outputs here is sent to"}, {"start": 750.9200000000001, "end": 754.6, "text": " each of these capsules and now they're convolutional capsules so that makes it"}, {"start": 754.6, "end": 760.4000000000001, "text": " a bit more complicated but 
don't you know don't worry primarily about them"}, {"start": 760.4000000000001, "end": 764.2800000000001, "text": " being convolutional capsules the analogy is exactly as in a classic neural"}, {"start": 764.28, "end": 769.48, "text": " network you can implement these capsules as for it feet forward capsules or as"}, {"start": 769.48, "end": 774.92, "text": " convolutional capsules and maybe also as transformer capsules I don't think"}, {"start": 774.92, "end": 783.4, "text": " anyone's done that all right there's a paper for you the so you will send"}, {"start": 783.4, "end": 788.12, "text": " you'll send the output of this convolutional layer to each capsule and then you"}, {"start": 788.12, "end": 792.72, "text": " have basically just two layer of capsules here the first layer consists of 32"}, {"start": 792.72, "end": 802.64, "text": " what they call primary caps sorry these 32 capsules each will output an 8"}, {"start": 802.64, "end": 807.0400000000001, "text": " dimensional vector and I'm simplifying here it's it's convolutional but they"}, {"start": 807.0400000000001, "end": 814.8000000000001, "text": " will just for simplest they will each output an 8 dimensional vector right and"}, {"start": 814.8000000000001, "end": 818.84, "text": " these are exactly as we said before so each of these will be responsible"}, {"start": 818.84, "end": 825.52, "text": " ultimately for a given entity or part of entity being there like in eminence"}, {"start": 825.52, "end": 829.76, "text": " this could be is there a little curve on the bottom left side right this might"}, {"start": 829.76, "end": 835.64, "text": " indicate the presence of a six or an eight something like this and then the"}, {"start": 835.64, "end": 841.76, "text": " these capsules here is they're represented as rows so each of these rows here is"}, {"start": 841.76, "end": 846.52, "text": " a capsule and we have ten of these and these are your simply your final"}, {"start": 846.52, "end": 851.3199999999999, "text": " classification capsules so each capsule is responsible for indicating the"}, {"start": 851.3199999999999, "end": 856.76, "text": " presence or absence of one particular class of digits so this would be of a"}, {"start": 856.76, "end": 862.4, "text": " one of a two of a three of a four and so on of a zero I guess somewhere as"}, {"start": 862.4, "end": 869.04, "text": " well so these are ten capsules and the question is how does information go"}, {"start": 869.04, "end": 874.0, "text": " from a capsule here from the output of a capsule or to any of capsule here and"}, {"start": 874.0, "end": 880.96, "text": " the easy way to do this is simply to say as in a classical neural network the"}, {"start": 880.96, "end": 887.32, "text": " output here simply goes to the input here just you just put it there basically"}, {"start": 887.32, "end": 895.96, "text": " unchanged now there's a bit of an issue here with the dimensions but you can"}, {"start": 895.96, "end": 902.04, "text": " simply say well we simply put away matrix in to route into the capsules but the"}, {"start": 902.04, "end": 909.68, "text": " idea of these capsules and this paper is to say wait wait these capsules"}, {"start": 909.68, "end": 918.12, "text": " actually we want to make them decide to which capsule in the next layer will"}, {"start": 918.12, "end": 924.56, "text": " they send their input right so the capsules can kind of decide where they want"}, {"start": 924.56, "end": 930.3199999999999, "text": " to send their output to like where is this where is 
the capsule that detects the"}, {"start": 930.32, "end": 935.72, "text": " maybe this one detects is there a line in the right side of the image right"}, {"start": 935.72, "end": 942.7600000000001, "text": " indicating maybe a seven or a one this is probably most relevant for the one"}, {"start": 942.7600000000001, "end": 949.0400000000001, "text": " class and for the seven class so it might decide to route its output there and"}, {"start": 949.0400000000001, "end": 955.6800000000001, "text": " the idea of how this routing happens is basically the topic of this paper"}, {"start": 955.68, "end": 963.1999999999999, "text": " so the the capsules route their output to the appropriate next layers"}, {"start": 963.1999999999999, "end": 969.04, "text": " capsules how's this done all right this is done via the what's called the"}, {"start": 969.04, "end": 976.28, "text": " routing mechanism I find it quite poorly described here so I will simply"}, {"start": 976.28, "end": 984.76, "text": " draw it I will I will simply try to make it up all right so we have capsules"}, {"start": 984.76, "end": 993.64, "text": " and as I've drawn them before right we have one two three capsules and we"}, {"start": 993.64, "end": 1001.96, "text": " maybe have two parent capsules each of these capsules here will output a"}, {"start": 1001.96, "end": 1009.3199999999999, "text": " vector as we said and we'll only do it for this this one sorry vector here so"}, {"start": 1009.3199999999999, "end": 1013.68, "text": " this will output this vector and it needs to decide where to hear or to hear"}, {"start": 1013.68, "end": 1022.9599999999999, "text": " do I send this output now what it does is there is an iterative procedure that"}, {"start": 1022.9599999999999, "end": 1028.3999999999999, "text": " has multiple steps and this is I think this is the least the way I understand"}, {"start": 1028.3999999999999, "end": 1033.04, "text": " I think the important part to understand is that if we forward pass data"}, {"start": 1033.04, "end": 1037.8799999999999, "text": " through this network it actually doesn't go forward in a straight line what it"}, {"start": 1037.8799999999999, "end": 1043.08, "text": " actually does is it goes through a layer and then it does multiple steps in"}, {"start": 1043.08, "end": 1047.8799999999999, "text": " between layers until it has decided where it wants to go in the next layer"}, {"start": 1047.8799999999999, "end": 1051.56, "text": " and then it goes on to the next layer and if there's another another capsule"}, {"start": 1051.56, "end": 1057.84, "text": " layers it does again multiple steps before it goes on so that's that's my take"}, {"start": 1057.84, "end": 1063.72, "text": " on it and the multiple steps are as follows first I'll send my output vector to"}, {"start": 1063.72, "end": 1069.6, "text": " to all of the all of the layers like equally all of the parent capsules and so"}, {"start": 1069.6, "end": 1076.9199999999998, "text": " will will everyone else right everyone will send their equally to the parent"}, {"start": 1076.9199999999998, "end": 1082.3999999999999, "text": " now this isn't just done and this may be here this isn't just done just by"}, {"start": 1082.3999999999999, "end": 1086.3999999999999, "text": " sending it but this is actually done by modulation of weight matrices so each"}, {"start": 1086.3999999999999, "end": 1091.48, "text": " thing here if this is capsule I and this is capsule J there's a weight matrix"}, {"start": 1091.48, "end": 1097.56, "text": " in between w 
i j that is learned right this is a static weight matrix and each"}, {"start": 1097.56, "end": 1102.36, "text": " one of these red are red arrows you see here has such a weight matrix attached"}, {"start": 1102.36, "end": 1108.24, "text": " to it so each each line you see here is actually modulated by such a weight"}, {"start": 1108.24, "end": 1113.48, "text": " matrix there is an quadratic number of these weight matrices flying around and"}, {"start": 1113.48, "end": 1118.32, "text": " this will also then allow you that maybe this vector is eight-dimensional but the"}, {"start": 1118.32, "end": 1123.6799999999998, "text": " input vector here is sixteen-dimensional what we saw before all right so the"}, {"start": 1123.68, "end": 1129.4, "text": " output input of capsule J here it will receive let's see what it receives it"}, {"start": 1129.4, "end": 1139.6000000000001, "text": " will receive the output of capsule will the output of capsule one V1 modulated by"}, {"start": 1139.6000000000001, "end": 1148.88, "text": " the let's let's call this yeah let's call this J modulated by one J W1 J and it"}, {"start": 1148.88, "end": 1155.72, "text": " will also receive this is a set the output of capsule two modulated by the"}, {"start": 1155.72, "end": 1162.8000000000002, "text": " weight matrix for sorry weight matrix for capsule two and so on now what it"}, {"start": 1162.8000000000002, "end": 1171.8400000000001, "text": " does is it adds this these all up into a softmax so sorry let's write this so"}, {"start": 1171.84, "end": 1180.48, "text": " soft it will add those all up in a softmax weighted fashion so it will actually"}, {"start": 1180.48, "end": 1188.76, "text": " compute a weighted average of those now the weights at the beginning are are"}, {"start": 1188.76, "end": 1195.6399999999999, "text": " just one because it gets each from each lower capsule it gets equal amount of"}, {"start": 1195.6399999999999, "end": 1200.6799999999998, "text": " this vector but then this will give you an output so this will give you some"}, {"start": 1200.68, "end": 1207.92, "text": " output let's put this in green this will give you an output that's I don't know"}, {"start": 1207.92, "end": 1215.8, "text": " how they call it in a paper let's just call it oh J right and then what you do"}, {"start": 1215.8, "end": 1224.16, "text": " is all right you compare how much do each of the individual contributions"}, {"start": 1224.16, "end": 1230.76, "text": " agree with oh J so you actually compute for each of these you would compute the"}, {"start": 1230.76, "end": 1240.0800000000002, "text": " inner product so you put compute the inner product of W1 J V1 with oh J and you"}, {"start": 1240.0800000000002, "end": 1249.72, "text": " would compute the inner product of W2 J V2 with oh J right the inner product and"}, {"start": 1249.72, "end": 1256.08, "text": " then these inner products here will become the weighting coefficients for the"}, {"start": 1256.08, "end": 1262.28, "text": " softmax in the next iteration all right so this I mean this this is a bit"}, {"start": 1262.28, "end": 1268.0, "text": " convoluted but ultimately what you're saying is if you're a capsule here you'll"}, {"start": 1268.0, "end": 1275.2, "text": " send your output forward you have an output you send it forward right to the"}, {"start": 1275.2, "end": 1282.04, "text": " other capsule and the other capsule will so this is this is your output and we'll"}, {"start": 1282.04, "end": 1285.92, "text": " forget about this weight matrix for now 
this is your the other capsule will"}, {"start": 1285.92, "end": 1294.48, "text": " output its own its own output computed from the lower layers now we do an"}, {"start": 1294.48, "end": 1302.92, "text": " iteration again if your output now aligns with this you will send more of it"}, {"start": 1302.92, "end": 1307.52, "text": " and then these two that I've drawn here actually align pretty well right so you'll"}, {"start": 1307.52, "end": 1313.24, "text": " send more of it is more more more right and now the maybe the output that"}, {"start": 1313.24, "end": 1317.92, "text": " next computed output of the same capsule will be even more in that direction"}, {"start": 1317.92, "end": 1321.6000000000001, "text": " because you've contributed more right you'll send more and then you'll like"}, {"start": 1321.6000000000001, "end": 1325.3200000000002, "text": " in the next iteration wow these two are really equal sorry this should be"}, {"start": 1325.3200000000002, "end": 1330.28, "text": " read here your yours just keeps being the same and then you say well I'm not"}, {"start": 1330.28, "end": 1336.32, "text": " going to send even more to that one right whereas another capsule that it's"}, {"start": 1336.32, "end": 1343.48, "text": " whose initial output was basically whose initial output was basically like this"}, {"start": 1343.48, "end": 1350.04, "text": " it will by itself compute the inner product with the original this"}, {"start": 1350.04, "end": 1354.2, "text": " original it will send it here right it will compute the inner product with the"}, {"start": 1354.2, "end": 1359.76, "text": " original output and it will realize well these two now align very much and"}, {"start": 1359.76, "end": 1364.16, "text": " then it will send less right it will send less to the next step and because it"}, {"start": 1364.16, "end": 1370.52, "text": " sends less in the next step of course the output will then probably align"}, {"start": 1370.52, "end": 1375.36, "text": " even less with that vector and then it will send less and less and less so this"}, {"start": 1375.36, "end": 1382.64, "text": " is called dynamic routing the idea behind it is kind of that you route by"}, {"start": 1382.64, "end": 1390.16, "text": " agreement so you will route to the parent capsules that agree with your output"}, {"start": 1390.16, "end": 1396.0800000000002, "text": " and by agreement we mean kind of the inner product is high after modulating by"}, {"start": 1396.0800000000002, "end": 1402.4, "text": " this weight matrix and that sort of so that basically means this weight"}, {"start": 1402.4, "end": 1407.68, "text": " matrix is responsible for deciding which information is relevant together"}, {"start": 1407.68, "end": 1414.76, "text": " whenever you have two vectors that align in the same layer then the in the"}, {"start": 1414.76, "end": 1420.52, "text": " sense of the capsule networks those represent the same kind of information and"}, {"start": 1420.52, "end": 1426.8, "text": " those will be routed together to the same capsule in terms of the examples we"}, {"start": 1426.8, "end": 1434.64, "text": " made maybe if a door and a roof is present then these these these weight matrices"}, {"start": 1434.64, "end": 1440.68, "text": " that connect door and roof to the house class they will transform a high"}, {"start": 1440.68, "end": 1446.6000000000001, "text": " vector and door and roof into aligning vectors for the house class and thereby"}, {"start": 1446.6000000000001, "end": 1453.8400000000001, "text": " saying look 
these two if I look at them through if I look at a door and a roof"}, {"start": 1453.8400000000001, "end": 1462.4, "text": " through the perspective of trying to be a house right then they are in much"}, {"start": 1462.4, "end": 1470.0800000000002, "text": " agreement on the presence of a house so if I am a house right I am a house"}, {"start": 1470.0800000000002, "end": 1480.16, "text": " and I look at a door and I look at a roof through the kind of from the"}, {"start": 1480.16, "end": 1484.52, "text": " perspective of being a house right this is this is what these weight matrices"}, {"start": 1484.52, "end": 1489.96, "text": " do they always have a perspective of the parent capsule then these two things"}, {"start": 1489.96, "end": 1496.92, "text": " they make a lot of sense together and thus I will route them to the same place"}, {"start": 1496.92, "end": 1503.0, "text": " so they can both contribute to their being a house now from the perspective of"}, {"start": 1503.0, "end": 1510.2, "text": " a house if I look at a little beach with a tree on it right then that does not"}, {"start": 1510.2, "end": 1518.2, "text": " that is not the same that does not really is not the same information as a door"}, {"start": 1518.2, "end": 1527.16, "text": " or a roof so I will not route this to the house in the in the same strength that"}, {"start": 1527.16, "end": 1533.96, "text": " is sort of the best way I have of explaining it how these capsules work basically"}, {"start": 1533.96, "end": 1540.16, "text": " the lower entities will always be routed for the relevance of the higher"}, {"start": 1540.16, "end": 1547.0, "text": " entities that are trying to they're trying to combine the lower entities if that"}, {"start": 1547.0, "end": 1554.88, "text": " wasn't the it's not entirely clear to me either yet but it's the best job I can"}, {"start": 1554.88, "end": 1562.32, "text": " give the routing is here formalized I find it hard to follow the important"}, {"start": 1562.32, "end": 1567.68, "text": " thing is that there is an inner loop in all of this so there is an"}, {"start": 1567.68, "end": 1574.48, "text": " like kind of an an inner iteration and this inner iteration is computed in"}, {"start": 1574.48, "end": 1580.56, "text": " every forward pass and so the this routing where the information goes in the"}, {"start": 1580.56, "end": 1586.24, "text": " next layer that is only the prior probability for that is learned but the"}, {"start": 1586.24, "end": 1593.32, "text": " actual routing coefficients those are dynamically computed in every"}, {"start": 1593.32, "end": 1599.52, "text": " forward pass so every forward pass goes it goes information goes through a layer"}, {"start": 1599.52, "end": 1604.44, "text": " then it goes multiple steps between two layers until it decides exactly what the"}, {"start": 1604.44, "end": 1608.24, "text": " distribution for the next layer is and then the next layer computes its"}, {"start": 1608.24, "end": 1613.3200000000002, "text": " outputs and that goes again multiple steps between these layers and the next"}, {"start": 1613.3200000000002, "end": 1618.96, "text": " layer so that's the the basic thing to remember there's also some normalization"}, {"start": 1618.96, "end": 1624.68, "text": " involved the squash is the nonlinearity we discussed so whether they actually"}, {"start": 1624.68, "end": 1632.8400000000001, "text": " train now at the end here they have a they have these 10 capsules and each capsule"}, {"start": 1632.84, "end": 1639.28, "text": " will be 
responsible for recognizing one the presence of one digit in the"}, {"start": 1639.28, "end": 1644.32, "text": " MS dataset of course and so what they do is they take the length of these"}, {"start": 1644.32, "end": 1649.28, "text": " vectors that are output by these capsules these capsules are feet forward capsules"}, {"start": 1649.28, "end": 1653.84, "text": " as opposed to the convolutional capsules here so the feet forward capsules"}, {"start": 1653.84, "end": 1660.9599999999998, "text": " output again a vector the length of this vector is taken and then it's basically"}, {"start": 1660.96, "end": 1666.04, "text": " trained like you would train a regression problem and the loss here is"}, {"start": 1666.04, "end": 1672.0, "text": " specified up here so if the if the image actually does contain this if the"}, {"start": 1672.0, "end": 1681.6000000000001, "text": " training label actually has this digit present this T here encodes that so if"}, {"start": 1681.6, "end": 1691.1599999999999, "text": " if K let's say K is two right so if K two if there is a two in the image when"}, {"start": 1691.1599999999999, "end": 1696.36, "text": " we know that because it's a training image then the length of the output of"}, {"start": 1696.36, "end": 1703.48, "text": " capsule number two should be high and this simply encodes that it should be"}, {"start": 1703.48, "end": 1709.1599999999999, "text": " very close to this M plus and M plus here is the I think they said it to 0.9 so"}, {"start": 1709.16, "end": 1715.3600000000001, "text": " they say you should be the length should be as close as possible to 0.9 whereas"}, {"start": 1715.3600000000001, "end": 1722.0, "text": " if the two is not present then T K will be zero then this part will be active"}, {"start": 1722.0, "end": 1726.92, "text": " so it's only one of these two parts will be active then the length of the"}, {"start": 1726.92, "end": 1732.8400000000001, "text": " vector so of capsule number two should be close to this M negative which is"}, {"start": 1732.8400000000001, "end": 1738.76, "text": " 0.1 it's basically a regression problem saying if if the if the given"}, {"start": 1738.76, "end": 1744.6, "text": " entity is in the image then please make the length as close as possible to 0.9"}, {"start": 1744.6, "end": 1751.48, "text": " and if it's not make as close as possible to 0.1 so this this is a classic"}, {"start": 1751.48, "end": 1758.72, "text": " say regression loss on the length of the output vectors the the lambda is just a"}, {"start": 1758.72, "end": 1765.04, "text": " factor to to dampen the contribution for all the negative classes with"}, {"start": 1765.04, "end": 1773.72, "text": " respect to the one positive class of course per capsule it turns out this is"}, {"start": 1773.72, "end": 1779.48, "text": " actually not enough so this will be the classification output but it's it seems"}, {"start": 1779.48, "end": 1784.0, "text": " not enough they don't say it's not enough but they simply say we additionally do"}, {"start": 1784.0, "end": 1789.96, "text": " the following so they also do is they introduce a reconstruction loss now if"}, {"start": 1789.96, "end": 1796.1200000000001, "text": " this model is trained correctly then these capsules here these last capsules"}, {"start": 1796.1200000000001, "end": 1799.76, "text": " especially this one maybe that's the capsule corresponding to the class of the"}, {"start": 1799.76, "end": 1805.52, "text": " digit eight will not only encode if an eight is there or not as in the 
length"}, {"start": 1805.52, "end": 1811.8, "text": " of the vector output but it will also encode the properties of the eight"}, {"start": 1811.8, "end": 1817.48, "text": " this is 16-dimensional vector so it will encode hopefully things like the"}, {"start": 1817.48, "end": 1826.8, "text": " stroke width so width then it might encode the maybe the rotation of the"}, {"start": 1826.8, "end": 1835.2, "text": " digit then it might be toward the tightness of the of the loop so you can have"}, {"start": 1835.2, "end": 1839.88, "text": " an eight with very large loops or it can have an eight oh sorry this is a"}, {"start": 1839.88, "end": 1845.24, "text": " smaller eight I can have an eight with very tight loops so it might you know"}, {"start": 1845.24, "end": 1851.52, "text": " encode things like this so technically it is it will be possible to reconstruct"}, {"start": 1851.52, "end": 1857.36, "text": " from this description reconstruct say either width is high the rotation is"}, {"start": 1857.36, "end": 1866.92, "text": " zero and the tightness is low then maybe I have a wide widely"}, {"start": 1866.92, "end": 1873.24, "text": " stroked not tight eight that is not rotated right so it should be possible to"}, {"start": 1873.24, "end": 1879.04, "text": " reconstruct this and they do exactly that so they take this last capsule of the"}, {"start": 1879.04, "end": 1885.4, "text": " class that is the actual training label that's called the reconstruction target"}, {"start": 1885.4, "end": 1891.72, "text": " and they feed this to a simple feed forward neural network that at the end you"}, {"start": 1891.72, "end": 1898.04, "text": " see this is exactly the M-nest size will try to reconstruct the image so if the"}, {"start": 1898.04, "end": 1904.08, "text": " image here this image goes in then it goes all through here it will take the"}, {"start": 1904.08, "end": 1910.6399999999999, "text": " class four here feed it through this network reshape it to an image again and"}, {"start": 1910.6399999999999, "end": 1918.08, "text": " hopefully what will come out is again this four here and it will then have an"}, {"start": 1918.08, "end": 1925.12, "text": " auxiliary auxiliary loss in addition to the loss of this of this classification"}, {"start": 1925.12, "end": 1931.9199999999998, "text": " loss here will auxiliary loss that tries to reconstruct the original image right"}, {"start": 1931.9199999999998, "end": 1938.32, "text": " and that's simply a I believe it's just an L2 reconstruction loss that is that"}, {"start": 1938.32, "end": 1945.9599999999998, "text": " is scaled down that it doesn't dominate so they also train the network basically"}, {"start": 1945.9599999999998, "end": 1951.0, "text": " to reconstruct this and I believe they do this because the length isn't quite"}, {"start": 1951.0, "end": 1957.04, "text": " enough to make it do what they wanted to do thus they by having this"}, {"start": 1957.04, "end": 1964.0, "text": " reconstruction here they really kind of enforce that the individual capsules the"}, {"start": 1964.0, "end": 1970.8, "text": " individual dimensions must encode some kind of information about the original"}, {"start": 1970.8, "end": 1976.32, "text": " image and since the original images in the M-nest data set at least vary by"}, {"start": 1976.32, "end": 1983.28, "text": " those things by stroke with barotation by tightness that by this loss will be"}, {"start": 1983.28, "end": 1996.4399999999998, "text": " reflected in the reconstruction all right so how are they 
doing here you see"}, {"start": 1996.4399999999998, "end": 2003.3999999999999, "text": " different examples of inputs and then reconstructed outputs and this you know"}, {"start": 2003.4, "end": 2009.3600000000001, "text": " seems pretty good actually so you see here all of these the input image is"}, {"start": 2009.3600000000001, "end": 2016.6000000000001, "text": " reconstructed fairly well so the numbers up here in the phone so the right"}, {"start": 2016.6000000000001, "end": 2021.72, "text": " or the failure cases here it the input image is a five labeled in the"}, {"start": 2021.72, "end": 2027.76, "text": " training data but the network actually classifies it as a three but then if"}, {"start": 2027.76, "end": 2032.16, "text": " you now you have two choices right this this is the same sample I have two"}, {"start": 2032.16, "end": 2037.3200000000002, "text": " choices for reconstruction either you reconstruct the capsule that is actually"}, {"start": 2037.3200000000002, "end": 2042.0, "text": " the is that you know is the true capsule that should be activated you"}, {"start": 2042.0, "end": 2046.52, "text": " reconstruct from that or you reconstruct from the capsule that the network"}, {"start": 2046.52, "end": 2052.04, "text": " says that it classifies it as so here it mixed up a five four three if you"}, {"start": 2052.04, "end": 2057.88, "text": " still take the five capsule and reconstructed you see it actually looks like"}, {"start": 2057.88, "end": 2063.1600000000003, "text": " the original image but it looks much more like a five and if you take the three"}, {"start": 2063.1600000000003, "end": 2067.7200000000003, "text": " capsule to reconstruct which is what the network classified this as it's still"}, {"start": 2067.7200000000003, "end": 2073.32, "text": " it looks like the original but it looks much more like an actual three right it's"}, {"start": 2073.32, "end": 2079.1600000000003, "text": " it's missing the part up here whereas over here it's it's missing this part"}, {"start": 2079.1600000000003, "end": 2084.6400000000003, "text": " here so the network really seems to kind of learn the different variations of"}, {"start": 2084.64, "end": 2090.68, "text": " these digits and and in a big US case such as this one it you know it can it can"}, {"start": 2090.68, "end": 2096.7599999999998, "text": " actually go either way and it can actually reconstruct the original output in"}, {"start": 2096.7599999999998, "end": 2103.2, "text": " either interpretations once as a three and once as a five it will be"}, {"start": 2103.2, "end": 2106.56, "text": " interesting to see what the actual length of the vector of both of these classes"}, {"start": 2106.56, "end": 2114.4, "text": " were that were mixed up and here they compare their accuracy so they have a"}, {"start": 2114.4, "end": 2121.48, "text": " baseline model which I believe is just like CNN where they get a decent kind of"}, {"start": 2121.48, "end": 2128.1600000000003, "text": " error and then the capsule networks they get a lower error and here you see as"}, {"start": 2128.1600000000003, "end": 2133.28, "text": " you add the reconstruction loss and as you add routing more so one step of"}, {"start": 2133.28, "end": 2137.6, "text": " routing simply means the first step is where you send your output equally to"}, {"start": 2137.6, "end": 2144.2000000000003, "text": " each parent that is as in the classical neural network case but if you"}, {"start": 2144.2, "end": 2151.52, "text": " introduce three steps of routing then your 
error drops even lower so they"}, {"start": 2151.52, "end": 2159.7999999999997, "text": " they kind of are on par with baseline CNNs on MNIST here"}, {"start": 2162.3599999999997, "end": 2167.08, "text": " they also explore what their capsules learn so as I said the individual capsules"}, {"start": 2167.08, "end": 2174.6, "text": " the dimensions should encode kind of properties of the variations of the of the"}, {"start": 2174.6, "end": 2180.64, "text": " class class samples and here they explore this in the different capsules so they"}, {"start": 2180.64, "end": 2184.4, "text": " change some dimensions and they run it through their reconstruction networks"}, {"start": 2184.4, "end": 2189.96, "text": " and indeed they discover that there is like a scale and thickness dimension"}, {"start": 2189.96, "end": 2196.12, "text": " stroke thickness dimension there's a skewed dimension and so on with and"}, {"start": 2196.12, "end": 2204.52, "text": " translation so that this is pretty remarkable these networks really if you"}, {"start": 2204.52, "end": 2209.2799999999997, "text": " train them in this way they really seem to learn about the entities and about"}, {"start": 2209.2799999999997, "end": 2214.7999999999997, "text": " the properties of the entities and that seems to be quite interesting you see"}, {"start": 2214.7999999999997, "end": 2219.96, "text": " that there's everything here stays well within the class that the capsule is"}, {"start": 2219.96, "end": 2228.2400000000002, "text": " assigned to they also yeah there's robustness to affine transformations where"}, {"start": 2228.2400000000002, "end": 2233.52, "text": " they improve over the baseline it's kind of an auxiliary experiment the next"}, {"start": 2233.52, "end": 2238.84, "text": " interesting experiment is what they call the multi-emnist experiment the multi-"}, {"start": 2238.84, "end": 2245.48, "text": " emnist experiment is done by taking two different emnist digits and basically"}, {"start": 2245.48, "end": 2251.4, "text": " just overlapping them so they they have you know shift them slightly but as you"}, {"start": 2251.4, "end": 2258.0, "text": " see here or here they are overlapped heavily and the task of the network is to"}, {"start": 2258.0, "end": 2266.12, "text": " figure out which two overlapping digits are in the image and the network is very"}, {"start": 2266.12, "end": 2273.84, "text": " very good at doing this the capsule network that is and better than the base"}, {"start": 2273.84, "end": 2278.28, "text": " lines because the capsule network simply encodes the presence and properties"}, {"start": 2278.28, "end": 2284.04, "text": " of a particular instance in the image if you simply take the top two length"}, {"start": 2284.04, "end": 2291.88, "text": " capsules and then reconstruct those independently then you're you can you can"}, {"start": 2291.88, "end": 2298.2400000000002, "text": " you can basically segment the image and you see this here so the different"}, {"start": 2298.2400000000002, "end": 2303.44, "text": " colorations come from two different reconstructions of the image from two"}, {"start": 2303.44, "end": 2307.84, "text": " different capsules so green is from one capsule and red from the other capsule so"}, {"start": 2307.84, "end": 2313.56, "text": " the network correctly identifies that it's a six and zero right and it also"}, {"start": 2313.56, "end": 2317.6, "text": " correctly identifies not only which pixels belong to the six and which belong to"}, {"start": 2317.6, "end": 2322.6, "text": 
" zero but also pixels that belong to both so that's not a not a problem if you"}, {"start": 2322.6, "end": 2328.44, "text": " use capsule networks as they are not able to say here they the way they train"}, {"start": 2328.44, "end": 2333.7200000000003, "text": " is is they train the actually reconstruction by only reconstructing one at a"}, {"start": 2333.7200000000003, "end": 2338.2000000000003, "text": " time so the kind of the premise of the dataset is that you actually have access"}, {"start": 2338.2000000000003, "end": 2344.44, "text": " to the underlying individual digits while training so like the images of the"}, {"start": 2344.44, "end": 2349.32, "text": " individual digits you don't not only have this this label here but that's a"}, {"start": 2349.32, "end": 2356.8, "text": " detail here some kind of failure cases where it it misclassified or you"}, {"start": 2356.8, "end": 2364.88, "text": " misspecify the capsules and it's kind of unable you hear you see to to assign"}, {"start": 2364.88, "end": 2371.36, "text": " the digits of the misclassified or the pixels of the misclassified thing it's"}, {"start": 2371.36, "end": 2375.5600000000004, "text": " quite interesting to look at the failure cases but I find it more interesting to"}, {"start": 2375.5600000000004, "end": 2382.92, "text": " look actually the success cases and the kind of ease at which the at which the"}, {"start": 2382.92, "end": 2389.52, "text": " capsule networks can do this simply by how they're structured all right so then"}, {"start": 2389.52, "end": 2394.92, "text": " lastly they also experiment on c4 10 and interestingly the c4 10"}, {"start": 2394.92, "end": 2400.2400000000002, "text": " experiments show that the capsule networks don't perform as well there and as"}, {"start": 2400.2400000000002, "end": 2406.04, "text": " you know c4 10 is a dataset that is about the same size as them this but it's"}, {"start": 2406.04, "end": 2410.8, "text": " first of all color and second of all is natural images and so they have quite a"}, {"start": 2410.8, "end": 2416.32, "text": " bit of clutter it's not black and white black background white digits it's"}, {"start": 2416.32, "end": 2420.8, "text": " it's actually there's a sky and like on an in-image there's lots of things going"}, {"start": 2420.8, "end": 2426.84, "text": " on and right there's my tree and there's stuff here and there's stuff here and"}, {"start": 2426.84, "end": 2432.36, "text": " the the the capsule networks they like to account for things in the image so"}, {"start": 2432.36, "end": 2436.36, "text": " they like to have a capsule corresponding to everything that's going on here"}, {"start": 2436.36, "end": 2439.52, "text": " and here and here and here and here and here and if the whole background is"}, {"start": 2439.52, "end": 2444.32, "text": " black that is not a problem you can account for simply the the background but if"}, {"start": 2444.32, "end": 2451.4, "text": " there's lots of things going on then these caps on that works get they get"}, {"start": 2451.4, "end": 2456.4, "text": " a bit over explanatory they want to explain everything and that degrades the"}, {"start": 2456.4, "end": 2462.12, "text": " performance now this paper basically says yeah you can have a something like a"}, {"start": 2462.12, "end": 2468.32, "text": " none of the above category and they found that it helped to introduce that in my"}, {"start": 2468.32, "end": 2477.56, "text": " opinion that I think the the solution will be more towards introduction of a"}, 
{"start": 2477.56, "end": 2484.52, "text": " better loss function for this because like such that you don't need kind of"}, {"start": 2484.52, "end": 2490.0, "text": " to explain the entire thing rather than here we hear what you do is you simply"}, {"start": 2490.0, "end": 2493.6400000000003, "text": " explain it by saying it's none of the above but it's incredibly hard to balance"}, {"start": 2493.64, "end": 2504.16, "text": " that my opinion yeah all right so that is basically the end of this they say"}, {"start": 2504.16, "end": 2510.4, "text": " they have a discussion here where they compare capsules against other related"}, {"start": 2510.4, "end": 2520.24, "text": " work but I hope that you kind of got an overview of how this works now and as"}, {"start": 2520.24, "end": 2525.6, "text": " much as possible and with that that was it from me and thanks for watching bye"}, {"start": 2525.6, "end": 2552.3199999999997, "text": " bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=-MCYbmU9kfg | RoBERTa: A Robustly Optimized BERT Pretraining Approach | This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been proposed lately, raising questions about the necessity and reasoning behind these.
Abstract:
Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.
Authors: Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov
https://arxiv.org/abs/1907.11692
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hello everyone, today we're looking at RoBERTa, a robustly optimized BERT pretraining approach, by Yinhan Liu et al., mainly out of Facebook research. So this paper is a pretty short, pretty simple paper, and the main premise is: we've seen a number of improvements over the initial BERT paper, where different pre-training of the transformer architecture or extensions of the architecture have been shown to perform better than the original BERT model. And this paper basically says that if you get the design choices right, then BERT is able to be on par with or exceed all of these other methods so far. So they're basically exploring design choices in the pre-training and training of BERT. Alright, if you don't know what BERT is, by the way, I have made a video about BERT, and I've also made a video about transformers. In very quick terms, BERT is a language neural network architecture that takes text as input, such as the kind of thing you see here, and it encodes it, and it can then do various things, for example classify it into certain categories, or segment it, or extract answers from questions, and so on. But the whole thing is pre-trained with what's called a masked language model objective, where you don't need labels to train it. So in a masked language model objective, you basically mask out certain words during training and then you ask BERT to reconstruct these words from the surrounding information. That has given some improvements in the original BERT paper, but subsequent papers have claimed that you can improve even more by using different pre-training objectives and so on, such as XLNet. But here, these researchers basically explore different things. So they use a regular BERT architecture, that's what they describe here: they use both BERT base, the 12-layer model, as well as the 24-layer BERT that was originally described. They use masked language modeling as a pre-training objective, and they explore the necessity of the next sentence prediction loss that has been part of BERT. So along with the masked language modeling, BERT also had an objective where you input two pieces of text, two sentences, such as these here, and BERT has to decide if the second sentence follows the first sentence in the corpus, or, in 50% of the cases, the second sentence is sampled from a different document. The original paper argued this is necessary to incorporate long-distance relationships between text; the NSP objective was designed to improve performance on downstream tasks such as natural language inference. And this paper explores the necessity of that loss. In terms of optimization, there is of course a pre-training scheme and then a training scheme, using Adam with certain parameters, and this paper also explores the use of these parameters. Lastly, you have the data: these models are sometimes trained on different data, and that makes comparing them harder, because the pre-training is done on differently sized and differently structured data. This paper also tries to investigate the influence of the training data, and especially what happens if we keep the training data constant. So, all right, they re-implement BERT and then they fix some hyperparameters while they tune others. And first of all, the data set.
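To make the masked language modeling objective a bit more concrete, here is a minimal sketch of the token corruption step. The 15% masking rate and the 80/10/10 split are the ones described in the original BERT recipe; the function name, token ids, and vocabulary size are purely illustrative and not the authors' code.

```python
import random

MASK_ID = 103          # [MASK] token id (illustrative, BERT-style vocab)
VOCAB_SIZE = 30522     # WordPiece vocabulary size (illustrative)

def mask_for_mlm(token_ids, mask_prob=0.15, seed=None):
    """Return (corrupted_input, labels) for masked language modeling.

    About mask_prob of the positions are selected; of those, 80% become
    [MASK], 10% become a random token, 10% stay unchanged. Labels are -100
    (a common "ignore" value for the loss) everywhere except selected positions.
    """
    rng = random.Random(seed)
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok                            # model must reconstruct this token
            r = rng.random()
            if r < 0.8:
                corrupted[i] = MASK_ID                 # replace with [MASK]
            elif r < 0.9:
                corrupted[i] = rng.randrange(VOCAB_SIZE)  # replace with a random token
            # else: keep the original token
    return corrupted, labels
```

The -100 sentinel is just a convention for "ignore this position in the cross-entropy loss"; any sentinel value would work as long as the loss skips it.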
So they use different data sets. The original BERT has been trained on the BookCorpus and English Wikipedia data set, which is 16 gigabytes large. Now this paper collects a CC-News data set, which is a subset of the Common Crawl news data set, namely the English portion, and that's 76 gigabytes, which is on par with, for example, what GPT-2 used, I believe. So this is a very large training set, and comparing the original data to this large corpus should make very clear what the influence of more pre-training data is. They also have a number of other corpora, OpenWebText as well as, I believe, one more called Stories. These are also pretty sizable, but they have very specific schemas to them. Then the evaluation happens on several different kinds of downstream tasks. So the idea is that you first pre-train this BERT model with the masked language modeling and so on, and then you have the GLUE benchmark, which is actually a collection of nine tasks, and you have some other tasks such as SQuAD, which is a question answering task, and RACE, which I don't know in particular, but it suffices to say these are downstream NLP tasks. The paper isn't about these downstream tasks; they are just a way to measure how well your pre-training worked: you fine-tune on such a task and see whether you get a good performance. What the tasks are in particular isn't too important. Alright, so here we get into the meat of the paper. First they decide on what they call static versus dynamic masking. In the original BERT paper, whenever they do masked language modeling, they take a piece of text and basically replicate it a bunch of times, because they want to iterate through the training data a bunch of times, and in each replica they mask out different tokens. This is static masking. They compare this to what's called dynamic masking, where you generate your mask on the fly each time. You don't pre-compute and save it, you generate it on the fly. This allows you to go through as much or as little of the data as you want, and it avoids the problem that, when you encounter the same sample again (which, even with the replication in the original BERT setup, can still happen if you train for longer than the number of replications), you see the exact same mask again. Dynamic masking is much more ad hoc: each time you see a sample, you generate the mask on the fly. So they compare this here, and they see that there is a marginal improvement, so here higher is better: an improvement in two tasks and a marginal decrease in performance in one task. So they decide that this dynamic masking is of use. The second thing they investigate is the input format and this next sentence prediction. So as I already said, the original BERT training objective always gets two sentences next to each other and has to decide if the second one follows from the first one. Actually, it doesn't quite: it observes two concatenated document segments, which are either sampled contiguously from the same document or from distinct documents, and this is half and half.
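As a rough illustration of the static versus dynamic masking difference just described, here is a sketch that reuses the mask_for_mlm helper from the earlier snippet. The duplication factor of 10 matches what the paper reports for the original static setup, if I recall correctly; everything else here is made up for illustration and is not the paper's implementation.

```python
# Static masking (original BERT): duplicate the data a fixed number of times,
# compute one mask per copy, and reuse those cached masks for every epoch.
def build_static_masks(dataset, num_duplicates=10):
    cached = []
    for dup in range(num_duplicates):
        for idx, tokens in enumerate(dataset):
            cached.append(mask_for_mlm(tokens, seed=hash((dup, idx))))
    return cached  # if training runs longer than num_duplicates passes, masks repeat

# Dynamic masking (RoBERTa): sample a fresh mask every time a sequence is fed
# to the model, so no two passes are guaranteed to see the same mask pattern.
def dynamic_masking_epoch(dataset):
    for tokens in dataset:
        yield mask_for_mlm(tokens)  # new random mask generated on the fly
```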
So in addition to the masked language modeling, the model is trained to predict whether the observed document segments come from the same or from distinct documents via an auxiliary next sentence prediction loss, and they investigate different ways of including or excluding this loss. First, what they define: if a setting is marked plus NSP, that means it includes the next sentence (or next segment) prediction loss. So they have SEGMENT-PAIR plus NSP, which means that each input has a pair of segments. And here the distinction between a segment and a sentence is important: a sentence is really a natural sentence, while a segment can actually be multiple natural sentences, which is what the original BERT does. So as long as the combined length is less than 512 tokens, there can be multiple sentences, but there are clearly two segments, and you have to decide whether they follow after each other or not. The second thing they try is the same next segment prediction, but now with just two natural sentences: one natural sentence, then the next natural sentence, and you have to decide whether these two follow each other or not. Then they investigate FULL-SENTENCES, where they leave away this next segment prediction loss and simply fill up the 512 tokens with text from the corpus. So each input is packed with full sentences sampled contiguously from one or more documents. And "one or more documents" means that if you sample text and you reach the end of a document, you simply continue with the next one and go on until you have the 512 tokens. So you basically keep filling until you have 512 tokens, and that's this variant here. And then in the last variant, called DOC-SENTENCES, you do the same thing but you stop at the end of the document. So you put all of this into your input, and when you reach the document boundary you stop, and then you have to be content with simply padding the rest of the 512 tokens or something like this. So you don't have as much data, but all the text in one sample is actually continuous text from the same document. So they pit these four things against each other, this is this table here, and as you can see, the best setting is DOC-SENTENCES, followed by the FULL-SENTENCES encoding. So there are some ambiguities here, but in general you can kind of rank them as best, second best, third best and fourth best, and they conclude that this next segment or next sentence prediction loss is more hurtful than helpful in the ways we see here. And they say that even though DOC-SENTENCES is most effective, in their case they rather go with FULL-SENTENCES, because it's, well, I guess easier to implement, you get more data through the model in the same time, and the performance decrease isn't that much. But it's pretty interesting to see that this next segment / next sentence prediction isn't super helpful in actuality. So removing the NSP loss matches or slightly improves the downstream task performance. This is in contrast to what the original BERT authors found, but you have to keep in mind that this setup also has a bunch of other changes in it.
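Here is a small sketch of the FULL-SENTENCES versus DOC-SENTENCES packing idea described above. This is my own illustration under the assumption that documents are given as lists of already-tokenized sentences; it ignores special tokens and the exact boundary handling of the paper, so treat it as a sketch rather than the actual preprocessing.

```python
def pack_inputs(documents, max_len=512, cross_documents=True):
    """Greedily pack sentences into training inputs of at most max_len tokens.

    cross_documents=True  roughly corresponds to FULL-SENTENCES: when a document
    ends, keep filling the current input with sentences from the next document.
    cross_documents=False roughly corresponds to DOC-SENTENCES: flush the input
    at every document boundary, even if it is shorter than max_len.
    """
    inputs, current = [], []
    for doc in documents:                      # doc: list of sentences (lists of token ids)
        for sentence in doc:
            sentence = sentence[:max_len]      # guard against overlong sentences
            if len(current) + len(sentence) > max_len:
                inputs.append(current)
                current = []
            current.extend(sentence)
        if not cross_documents and current:
            inputs.append(current)             # stop at the document boundary
            current = []
    if current:
        inputs.append(current)
    return inputs
```

With cross_documents=False the packed inputs are shorter on average, which is exactly the "less data per sample" trade-off mentioned above.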
Then the next thing they investigate is batch size. Batch size seems to be pretty interesting for these large models, in that they love large batch sizes, and they explore batch sizes from 512, the smallest one here, up to 8000. They do this in a data-parallel way, where they have many machines with many GPUs, they parallelize the data, and they accumulate the gradients of all of these different samples, and so they can go up to a batch size of about 8k. They find generally that the 2000 batch size (perplexity, lower is better; the other numbers, higher is better) helps to improve the performances if you control for data set size, so the number of times you go through the data set stays the same, but a larger batch size seems to help up to a point; the 2000 seems to be the best they found. So again, a marginal improvement you can make by training with larger batch sizes. Then the last thing they've looked at is text encoding, so how do you encode text, and the comparison here is basically between byte pair encoding and WordPiece encoding, which determine how your vocabulary is built. As I understand it, they didn't find much of a difference between the different implementations of the text encoding, so they decide to go with one of them; I believe they go with byte-level byte pair encoding instead of WordPieces. All right, so they combine all of this into RoBERTa, the robustly optimized BERT approach, and they say RoBERTa is trained with dynamic masking, which they showed first, full sentences without the next segment prediction loss, large mini-batches, and a larger byte-level byte pair encoding, as well as, of course, their collection of training data. And then here they also investigate how long to pre-train. So if you look at the original BERT models or the XLNet models and compare them to RoBERTa: with the original data, RoBERTa already beats BERT, yet it does not yet beat XLNet. If they add data, they get even better, actually mostly on par with XLNet. If they pre-train longer, they get even better. And if they pre-train even longer, so that the number of steps matches the number of steps that XLNet takes with its additional data, then they outperform XLNet as well. So this is kind of just an overview, and they evaluate on other downstream tasks and basically show that in most of them they can reach state-of-the-art performance or exceed it with their approach. In conclusion, they basically say that this shows that the gains these other models make, and the reasons given for those gains, may be questionable: if you simply pre-train BERT in a better way, you can reach the same performances. So I think the end is not reached yet. Most of all, they publish their code and, I believe, their data. I have not looked into this, but definitely check out their repository where this is implemented; it seems pretty easy, pretty straightforward. And that was it for me, bye bye.
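One more aside on the large-batch experiments discussed above: in practice, batch sizes like 8k are usually reached by accumulating gradients over several smaller forward/backward passes, possibly spread over many GPUs. The following is a minimal single-device sketch, assuming a PyTorch-style model whose forward pass returns an object with a .loss attribute (as Hugging Face models do); the numbers are illustrative and this is not the authors' training code.

```python
def large_batch_step(model, optimizer, micro_batches, accumulation_steps=16):
    """One optimizer update over an effective batch of
    accumulation_steps * (sequences per micro-batch)."""
    model.train()
    optimizer.zero_grad()
    for i, batch in enumerate(micro_batches):
        if i >= accumulation_steps:
            break
        out = model(**batch)                     # forward pass on one micro-batch
        loss = out.loss / accumulation_steps     # scale so gradients average, not sum
        loss.backward()                          # gradients accumulate in the .grad buffers
    optimizer.step()                             # a single update for the whole effective batch
    optimizer.zero_grad()
```

For example, with 512 sequences per micro-batch and accumulation_steps=16, each optimizer step corresponds to an effective batch of 8192 sequences, similar in spirit to the largest setting explored here.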
| [{"start": 0.0, "end": 6.32, "text": " Hello everyone, today we're looking at Roberta, a robustly optimized bird pre-training"}, {"start": 6.32, "end": 12.08, "text": " approach by Yinhang Liu at the out of mainly of Facebook research."}, {"start": 12.08, "end": 18.88, "text": " So this paper is a pretty short, pretty simple paper and the main premise is we've seen"}, {"start": 18.88, "end": 28.28, "text": " a number of improvements over the initial bird paper where different pre-training of"}, {"start": 28.28, "end": 35.84, "text": " the transformer architecture or extension of the architecture have been shown to have"}, {"start": 35.84, "end": 38.92, "text": " better performance than the original bird model."}, {"start": 38.92, "end": 47.96, "text": " And this paper basically says if you get the design choices right, then bird is able"}, {"start": 47.96, "end": 53.400000000000006, "text": " to basically be on par or exceed all of these other methods so far."}, {"start": 53.4, "end": 61.239999999999995, "text": " So they're basically exploring design choices in the pre-training and training of bird."}, {"start": 61.239999999999995, "end": 67.96, "text": " Alright, so if you don't know what bird is, by the way, I have made a video about bird,"}, {"start": 67.96, "end": 72.2, "text": " I've also made a video about transformers."}, {"start": 72.2, "end": 80.96000000000001, "text": " In very quick terms, bird is a language neural network architecture that takes as input"}, {"start": 80.96, "end": 89.63999999999999, "text": " text such as this kind of thing you see here, text such as that, and it will kind of encode"}, {"start": 89.63999999999999, "end": 99.24, "text": " it out and it can do various things, for example, classify it into certain categories or"}, {"start": 99.24, "end": 106.08, "text": " kind of segmented, extract answers from questions and so on."}, {"start": 106.08, "end": 111.75999999999999, "text": " But the whole thing is pre-trained with what's called a mask language model objective,"}, {"start": 111.75999999999999, "end": 113.64, "text": " which where you don't need labels to train it."}, {"start": 113.64, "end": 119.08, "text": " So in a mask language model objective, you basically mask out certain words during training"}, {"start": 119.08, "end": 126.44, "text": " and then you ask bird to reconstruct these words from the surrounding information."}, {"start": 126.44, "end": 133.68, "text": " And that kind of has given some improvements in the original bird paper, but subsequent"}, {"start": 133.68, "end": 139.0, "text": " papers have claimed that you can improve even more by using different pre-training objectives"}, {"start": 139.0, "end": 142.72, "text": " and so on, such as Excel, Net."}, {"start": 142.72, "end": 150.64000000000001, "text": " But here, these researchers basically explore different things."}, {"start": 150.64000000000001, "end": 155.76000000000002, "text": " So they use a regular bird architecture, that's what they describe here."}, {"start": 155.76, "end": 164.95999999999998, "text": " So they use both the bird base, the 12 layer, as well as the 24 layer bird that has originally"}, {"start": 164.95999999999998, "end": 167.2, "text": " been described."}, {"start": 167.2, "end": 176.72, "text": " They use masked language modeling as a pre-training objective and they explore the necessity"}, {"start": 176.72, "end": 180.92, "text": " of this next sentence prediction loss that has been part of bird."}, {"start": 180.92, "end": 188.07999999999998, 
"text": " So along with the mask sentence modeling, bird has also had an objective where if you input"}, {"start": 188.07999999999998, "end": 194.2, "text": " a piece of, actually you input two pieces of text, two sentences, such as this, these"}, {"start": 194.2, "end": 195.51999999999998, "text": " are two sentences."}, {"start": 195.51999999999998, "end": 201.44, "text": " And bird has to decide if the second sentence follows the first sentence in the corpus or"}, {"start": 201.44, "end": 206.2, "text": " in 50% of the cases, the second sentence is sampled from a different document."}, {"start": 206.2, "end": 213.64, "text": " So this is the original paper argued, this is necessary to incorporate long distance relationships"}, {"start": 213.64, "end": 217.51999999999998, "text": " between text."}, {"start": 217.51999999999998, "end": 222.76, "text": " Yeah, here the NSP objective was designed to improve performance on downstream tasks,"}, {"start": 222.76, "end": 227.48, "text": " such as natural language inference."}, {"start": 227.48, "end": 231.35999999999999, "text": " And this paper explores the necessity of that loss."}, {"start": 231.36, "end": 237.16000000000003, "text": " In terms of optimization, there is of course kind of a pre-training scheme and then a"}, {"start": 237.16000000000003, "end": 241.44000000000003, "text": " training scheme using Adam here with certain parameters."}, {"start": 241.44000000000003, "end": 247.0, "text": " And also this paper explores the use of these parameters."}, {"start": 247.0, "end": 254.28000000000003, "text": " Lastly, you have data and of course these models sometimes they're trained on different"}, {"start": 254.28000000000003, "end": 259.2, "text": " data and that's why that comparing them makes it a bit harder to compare them because"}, {"start": 259.2, "end": 265.15999999999997, "text": " the pre-training is done on differently size and differently structured data."}, {"start": 265.15999999999997, "end": 271.15999999999997, "text": " This paper also tries to investigate the influence of the training data and especially what"}, {"start": 271.15999999999997, "end": 275.36, "text": " happens if we keep the training data constant."}, {"start": 275.36, "end": 283.64, "text": " So all right, so they implement bird, they re-implement bird and then they fix some hyper"}, {"start": 283.64, "end": 290.24, "text": " parameters while they tune others."}, {"start": 290.24, "end": 292.0, "text": " And first of all the data set."}, {"start": 292.0, "end": 295.4, "text": " So they use different data sets."}, {"start": 295.4, "end": 301.53999999999996, "text": " The original bird has been trained on this book corpus and Wikipedia, English Wikipedia"}, {"start": 301.53999999999996, "end": 304.64, "text": " data set which is 16 gigabytes large."}, {"start": 304.64, "end": 312.03999999999996, "text": " Now this paper here collects a what's this CC news data set which is the subset of the"}, {"start": 312.04, "end": 319.12, "text": " common crawl news data set which is all in, so the subset is the English portion and"}, {"start": 319.12, "end": 330.24, "text": " that's 76 gigabytes which is on par with for example what GPT2 used I believe."}, {"start": 330.24, "end": 339.16, "text": " So this is a very large training set and kind of comparing this original data to the large"}, {"start": 339.16, "end": 344.40000000000003, "text": " corpus kind of what influence that is should make very clear what the influence of more"}, {"start": 344.40000000000003, 
"end": 347.68, "text": " training of more pre-training data is."}, {"start": 347.68, "end": 356.16, "text": " They also have a number of other corpora open web text as well as here I believe there's"}, {"start": 356.16, "end": 357.20000000000005, "text": " one more stories."}, {"start": 357.20000000000005, "end": 365.24, "text": " Yes, so these are also pretty sizable but these are like, yeah, these are like have very"}, {"start": 365.24, "end": 369.88, "text": " specific schemas to them."}, {"start": 369.88, "end": 377.40000000000003, "text": " Then the evaluation here happens on several different kind of downstream tasks."}, {"start": 377.40000000000003, "end": 383.72, "text": " So the idea is you first you pre-traine this bird model on with the mask language modeling"}, {"start": 383.72, "end": 392.8, "text": " and so on and then you have this glue task which is actually a collection of nine tasks"}, {"start": 392.8, "end": 402.32, "text": " and you have some other tasks such as squad which is a question answering task and here"}, {"start": 402.32, "end": 407.44, "text": " race I don't even I don't know what that is in particular but it suffices to say these"}, {"start": 407.44, "end": 410.2, "text": " are kind of downstream NLP tasks."}, {"start": 410.2, "end": 417.36, "text": " The paper isn't about these downstream tasks but it is just a way to measure how well"}, {"start": 417.36, "end": 422.36, "text": " your pre-training worked if then you can find to on such a task and you get a good"}, {"start": 422.36, "end": 429.28000000000003, "text": " performance but what the tasks are in particular isn't too important."}, {"start": 429.28000000000003, "end": 433.96000000000004, "text": " Alright, so here we get into the meat of the paper."}, {"start": 433.96000000000004, "end": 440.28000000000003, "text": " First they decide on what they call static versus dynamic masking."}, {"start": 440.28000000000003, "end": 446.32, "text": " So in the original bird paper whenever they do masked language modeling they take a piece"}, {"start": 446.32, "end": 451.52000000000004, "text": " of text and they basically replicate it a bunch of times because they want to iterate"}, {"start": 451.52, "end": 458.32, "text": " through training data a bunch of times and then in each iteration they mask out different"}, {"start": 458.32, "end": 468.52, "text": " different tokens and they compare this to what's called dynamic masking."}, {"start": 468.52, "end": 471.32, "text": " So this is static masking."}, {"start": 471.32, "end": 479.59999999999997, "text": " Dynamic masking sorry dynamic masking would be where you in each basically on the fly generate"}, {"start": 479.59999999999997, "end": 481.03999999999996, "text": " your mask."}, {"start": 481.04, "end": 488.04, "text": " You don't pre-computed and save it you on the fly generated this allows you to go through"}, {"start": 488.04, "end": 495.04, "text": " kind of more or less of the data as you want and when you encounter the same sample twice"}, {"start": 495.04, "end": 500.08000000000004, "text": " even though you replicate it in the original bird model you could still encounter twice"}, {"start": 500.08000000000004, "end": 503.64000000000004, "text": " if you trained for longer than the number of replications."}, {"start": 503.64, "end": 511.32, "text": " Then you basically see the exact same mask again and the dynamic masking is actually much"}, {"start": 511.32, "end": 512.3199999999999, "text": " more useful."}, {"start": 512.3199999999999, "end": 
517.72, "text": " It's much more ad hoc each time you see a sample you generate the mask on the fly."}, {"start": 517.72, "end": 522.36, "text": " So they compare this here and they see that there is a marginal improvement so here higher"}, {"start": 522.36, "end": 523.36, "text": " is better."}, {"start": 523.36, "end": 534.12, "text": " So the improvement in two tasks and a less marginal decrease in performance in one task."}, {"start": 534.12, "end": 543.04, "text": " So they decide that this dynamic masking is of use."}, {"start": 543.04, "end": 549.88, "text": " Second thing they investigate is the kind of input format and this next sentence prediction."}, {"start": 549.88, "end": 556.08, "text": " So as I already said the original bird training objective always gets two sentences next to"}, {"start": 556.08, "end": 562.0, "text": " each other and has to decide if the second one follows from the first one."}, {"start": 562.0, "end": 569.28, "text": " Actually it doesn't it observes two concatenated document segments which are either sampled"}, {"start": 569.28, "end": 577.72, "text": " contiguously from the same document or from distinct documents and this is half and half."}, {"start": 577.72, "end": 581.76, "text": " So in addition to the mask language modeling the model is trained to predict whether the"}, {"start": 581.76, "end": 589.0, "text": " observed document segments come from the same or distinct document via an auxiliary next"}, {"start": 589.0, "end": 596.6800000000001, "text": " sentence prediction loss and they investigate different ways of including or excluding this"}, {"start": 596.6800000000001, "end": 598.4, "text": " loss."}, {"start": 598.4, "end": 605.84, "text": " So first is what they define if here if it's plus an SP that means that this particular"}, {"start": 605.84, "end": 610.96, "text": " thing includes the next sentence or next segment prediction loss."}, {"start": 610.96, "end": 620.5600000000001, "text": " So they have segment pair plus NSP which means that each input has a pair of segments."}, {"start": 620.5600000000001, "end": 626.44, "text": " And these segments now the difference the distinction between a segment and a sentence"}, {"start": 626.44, "end": 632.88, "text": " is important where the sentence is really a natural sentence a segment can actually be"}, {"start": 632.88, "end": 641.56, "text": " multiple natural sentences which is what the original bird does."}, {"start": 641.56, "end": 650.0, "text": " So as long as the combined length is less than 512 tokens there can also be multiple sentences"}, {"start": 650.0, "end": 655.12, "text": " but there's clearly two segments and you have to decide if they follow after each other"}, {"start": 655.12, "end": 656.8, "text": " or not."}, {"start": 656.8, "end": 662.0, "text": " The second thing they try is the same thing so the next segment prediction but now it's"}, {"start": 662.0, "end": 666.32, "text": " just two sentences it's just natural sentences."}, {"start": 666.32, "end": 674.88, "text": " So it must be one sentence a cold period sorry and then the next sentence a period and"}, {"start": 674.88, "end": 678.88, "text": " you have to distinguish these two if they follow or not."}, {"start": 678.88, "end": 687.12, "text": " Then they investigate full sentences which is they leave away this next segment prediction"}, {"start": 687.12, "end": 695.12, "text": " loss and they simply fill up the 512 tokens with text from the corpus."}, {"start": 695.12, "end": 701.04, "text": " So each input 
is packed with full sentences sampled continuously from one or more documents."}, {"start": 701.04, "end": 706.6, "text": " And the one or more document means if you so if you sample text right to you sample here"}, {"start": 706.6, "end": 711.92, "text": " text you put all of this in the thing and you are at the end of a document you simply"}, {"start": 711.92, "end": 717.52, "text": " continue with the next one and go on until you have to 512 tokens."}, {"start": 717.52, "end": 725.4399999999999, "text": " So you basically fill fill fill until you have 512 tokens and that's this variant here."}, {"start": 725.4399999999999, "end": 730.0, "text": " And then the last variant you do the same thing this is called doc sentences but you basically"}, {"start": 730.0, "end": 731.64, "text": " you stop at the end."}, {"start": 731.64, "end": 738.64, "text": " So even so you put all of this in your state and if you hear you stop and then you have"}, {"start": 738.64, "end": 745.64, "text": " to be content by simply padding the rest of the 512 tokens or something like this so you"}, {"start": 745.64, "end": 752.16, "text": " don't have as much data but the all the text that you have in one sample is actually"}, {"start": 752.16, "end": 755.28, "text": " continuous text from the same document."}, {"start": 755.28, "end": 765.56, "text": " So they pit these four things against each other this is this table here and as you can"}, {"start": 765.56, "end": 781.8, "text": " see here the best thing is this doc sentences thing so on these thing followed by the full"}, {"start": 781.8, "end": 785.4799999999999, "text": " sentences encoding right."}, {"start": 785.4799999999999, "end": 794.8, "text": " So there are some some ambiguities here but in general you can kind of rank them as best"}, {"start": 794.8, "end": 804.04, "text": " second best and then here third best and fourth best and they conclude that this next segment"}, {"start": 804.04, "end": 812.0799999999999, "text": " or next sentence prediction loss here is more hurtful than helpful in the ways we see"}, {"start": 812.0799999999999, "end": 819.9599999999999, "text": " here and they say even though this is most most effective they in their case they rather"}, {"start": 819.96, "end": 824.44, "text": " go with this one because it's well I guess easier to implement you get more data through"}, {"start": 824.44, "end": 832.12, "text": " the model in the same time and the performance decrease isn't that much."}, {"start": 832.12, "end": 837.32, "text": " So but it's pretty interesting to see that this next next segment next sentence prediction"}, {"start": 837.32, "end": 847.2800000000001, "text": " isn't super super helpful in in actuality."}, {"start": 847.28, "end": 855.68, "text": " So removing the NSP loss matches or slightly improves the downstream task performance."}, {"start": 855.68, "end": 859.8, "text": " This is yeah in contrast to what the original bird authors found but you have to keep in"}, {"start": 859.8, "end": 868.0799999999999, "text": " mind this is also on hasn't a bunch of other changes in."}, {"start": 868.0799999999999, "end": 875.8399999999999, "text": " Then next thing they investigate batch size so batch size sorry batch size pretty seems"}, {"start": 875.84, "end": 882.24, "text": " to be pretty interesting for these large models in that they love large batch sizes and"}, {"start": 882.24, "end": 891.88, "text": " they actually explore batch sizes 512 here as the smallest one and they go up to 8000 so"}, {"start": 
891.88, "end": 895.9200000000001, "text": " this they do this actually in a in a data parallel way where they have many many machines with"}, {"start": 895.9200000000001, "end": 904.32, "text": " many GPUs and they parallelize the data and then they accumulate the gradient of all of"}, {"start": 904.32, "end": 909.1600000000001, "text": " these different samples and so they can go up to a batch size of about 8k and they find"}, {"start": 909.1600000000001, "end": 916.9200000000001, "text": " generally that the 2000 batch size here as you can see helps to improve the so perplexity"}, {"start": 916.9200000000001, "end": 925.36, "text": " lower is better and the other numbers higher is better helps to to improve the performances"}, {"start": 925.36, "end": 929.6, "text": " if you control the control for data set size so the number of times you go through the"}, {"start": 929.6, "end": 937.72, "text": " data set is the same but if you go with a larger batch size that seems to help up to a point"}, {"start": 937.72, "end": 944.32, "text": " here the 2000 seems to be the best they found so again at marginal improvement you can make"}, {"start": 944.32, "end": 951.5600000000001, "text": " by training with larger batch sizes and then this the last thing they've looked at is"}, {"start": 951.5600000000001, "end": 958.0, "text": " actually is text encoding so how do you encode text and the the pit here is basically between"}, {"start": 958.0, "end": 969.0, "text": " byte parent coding or word piece encoding to that to to decide how large your vocabulary"}, {"start": 969.0, "end": 976.08, "text": " is basically and as I understand it they didn't find a much of a difference between the different"}, {"start": 976.08, "end": 984.84, "text": " implementations of the text encoding so they decide they go with the decide to go with"}, {"start": 984.84, "end": 991.2, "text": " one I don't even remember which one I think they go decide to go with byte pair encoding"}, {"start": 991.2, "end": 997.96, "text": " instead of word pieces all right so they combine all of this into Roberta which is the"}, {"start": 997.96, "end": 1007.96, "text": " robust the optimized bird approach and stay say Roberta is trained with dynamic masking"}, {"start": 1007.96, "end": 1017.0400000000001, "text": " so what they showed first full sentence without the next segment prediction loss large mini batches"}, {"start": 1017.0400000000001, "end": 1024.2, "text": " a larger byte level byte parent coding as well as of course their collection of training"}, {"start": 1024.2, "end": 1037.92, "text": " data and then here they also investigate how long to pre-train so if you look at the"}, {"start": 1037.92, "end": 1044.92, "text": " original birth models or the XL net models and then compare it to Roberta so Roberta"}, {"start": 1044.92, "end": 1051.68, "text": " this is the original data and they already beat Bert yet they do not they do not yet beat"}, {"start": 1051.68, "end": 1062.72, "text": " XL net with that so if they add data they get even better actually on par mostly with the"}, {"start": 1062.72, "end": 1069.8, "text": " with XL net if they pre-train longer they get even better and if they say pre-train even"}, {"start": 1069.8, "end": 1076.04, "text": " longer right so that here's the the number of steps if if your number of steps then match"}, {"start": 1076.04, "end": 1086.08, "text": " the number of steps that the XL net does with the same additional data then or with their"}, {"start": 1086.08, "end": 1099.36, "text": " 
additional data then you outperform XL net as well so this this kind of just an an overview"}, {"start": 1099.36, "end": 1106.6799999999998, "text": " of this and they evaluate on other downstream tasks and they basically show that in most"}, {"start": 1106.6799999999998, "end": 1116.0, "text": " of them they can reach state of the art performance or exceed it with their approach"}, {"start": 1116.0, "end": 1123.96, "text": " and in conclusion they basically say well this only shows that kind of the the gains that"}, {"start": 1123.96, "end": 1128.76, "text": " these other models make and the reasons why they make gains may be questionable if you"}, {"start": 1128.76, "end": 1135.28, "text": " simply pre-train Bert in a better way you can reach the same performances so I think"}, {"start": 1135.28, "end": 1143.12, "text": " the end is not reached yet most of all they publish their code their data I believe I"}, {"start": 1143.12, "end": 1149.0, "text": " have not looked into this but definitely check out their repository where this is implemented"}, {"start": 1149.0, "end": 1178.56, "text": " seems pretty easy seems pretty straightforward and that was it for me bye bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=AR3W-nfcDe4 | Auditing Radicalization Pathways on YouTube | This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, and backs this claim up with an empirical analysis of channel recommendations and commenting behavior. I suggest that there is a much simpler explanation for this data: a basic diffusion process.
Abstract:
Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube's recommender system. In this work, we conduct a large scale audit of user radicalization on YouTube. We analyze 331,849 videos of 360 channels which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right; channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube's recommendation algorithm, looking at more than 2M recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system.
Authors: Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida, Wagner Meira
https://arxiv.org/abs/1908.08313
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Minds: https://www.minds.com/ykilcher
BitChute: https://www.bitchute.com/channel/10a5ui845DOJ/ | Hi there! Today we're going to look at auditing radicalization pathways on YouTube by Manuel Horta Riberio at Al. So this paper is a bit different than the one we're usually looking at, but since I'm a YouTuber and this is in the kind of a data science realm, I thought it fits neatly. So, yeah, we'll have a look and this is mostly going to be an analysis and my opinion on it, so take that for what it is. This is in my opinion a paper where you can see very well what it looks like when you deceive yourself. So when you have a hypothesis of something and then only collect data that matches that and you don't think like you don't think of simpler solutions for that explain the data and therefore you don't think of experiments that could differentiate the simple solutions from what you propose. So it's a good example of how you can kind of trick yourself into believing you found something and this isn't this isn't now about YouTube or anything. This happened to me so many times. It always pays off to take a step back and say, is there a simpler explanation for what's happening and this is what I think is exactly happening here. So I'll present to you their hypothesis and then I'll present to you my kind of what I think is going on and a model that explains the data much much much easier and simpler and actually better. So let's dive in. This paper basically claims a following. So on YouTube there are channels and channels are you know independent channels they make videos and you can actually arrange these channels. So each dot here is a channel. You can arrange these channels in kind of a network and two channels you can claim they're connected and there can be a connection strength or whatever for simplicity they can be connected if for example their topics are similar if they reference each other if they are recommended by YouTube from each other if they have the same users watching those same channels or the videos of these channels. There are a number of metrics where it could you could you could make channels connected but all of them will turn out similar like we'll give you the similar structure of channels being connected. Oh that's connected twice. So you can kind of build a graph of how these channels are connected and what you can do then is you can cluster them. You don't have to build a graph to cluster them but you can cluster the channels and what will emerge are parts of the graph that are very well connected. Here this might be connected with this and with this parts that are very well connected and are kind of within well connected within and more sparsely connected to to others like also have a larger distance in between them. So if you start out from one channel and you're kind of watching recommend videos and recommend channels and so on you'll you'll stroll along here. You will get much faster to these things than to the other things. So these these are called communities usually in these kind of net social network analysis. So on YouTube you know there is a community for makeup there's a community for sports within sports there is a community for soccer there's one for basketball and so on. So these are all these kind of communities that you can discover by clustering. This paper mainly deals with three communities namely the first of all is the the IDW which is the Intellectual Dark Web. They discuss this here. 
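As a side note on the channel graph and clustering idea sketched above, here is a minimal illustration of how one could build such a graph from shared commenting audiences and extract communities with networkx. This is only meant to illustrate the idea discussed in the video; it is not the paper's methodology, and the data structure and threshold are made up.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def channel_communities(channel_commenters, min_jaccard=0.05):
    """channel_commenters: dict mapping channel name -> set of commenter ids.

    Two channels are connected if the Jaccard similarity of their commenter
    sets (|A & B| / |A | B|) exceeds a threshold; communities are then found
    by modularity maximization on the resulting graph.
    """
    g = nx.Graph()
    channels = list(channel_commenters)
    g.add_nodes_from(channels)
    for i, a in enumerate(channels):
        for b in channels[i + 1:]:
            users_a, users_b = channel_commenters[a], channel_commenters[b]
            union = users_a | users_b
            if union and len(users_a & users_b) / len(union) >= min_jaccard:
                g.add_edge(a, b)
    return [set(c) for c in greedy_modularity_communities(g)]
```

The same Jaccard measure, along with the overlap coefficient |A & B| / min(|A|, |B|), is what the paper later uses to compare the commenting audiences of whole communities.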
So the Intellectual Dark Web is they describe as a group of individuals that are in a rolling conversation with each other about topics that are let's say usually kind of difficult to to talk about such as gender differences or intelligence research in certain areas or even you know regular politics but kind of the the Intellectual Dark Web are a wide variety of people that basically are conversing with each other about topics. The description is a bit vague but the main aspect is conversation and maybe topics that are kind of on the on the edge of what's acceptable to talk about. But the opinions range widely on these topics. The second group is the alt-right and the alt-right here is kind of the the they're defined as ethno-nationalists. For example here is an example the fringe ideas such as white ethno-state, white supremacist ideology and so on. So specifically ethno-nationalists that I think nations should be organized too along the lines of ethnicity. And the goal of the paper is actually to to show that there is a kind of a dangerous pipeline on YouTube that will drive people to the alt-right and drive people into these radical ideas of the alt-right. Kind of in between is the alt-light which is here defined as civic nationalists which is simply as I understand it means that people should be organized into nations not along ethnicity but just should organize themselves into sovereign communities and they would be more of your libertarian, classically liberal people. Whereas the alt-right would be more of your let's say authoritarian right wing person. So these three communities they have a fourth community which is a they call a control group and the control group consists of what they say are kind of mainstream channels on YouTube simply to differentiate them from these three and two see what's going on with them and if there is a difference. So this is kind of the setup and as I said the hypothesis is the following people go on YouTube so YouTube is here YouTube people come on YouTube they go around right they they explore a bit and all of a sudden they find i.d.w. videos these are recommended by YouTube on a fairly regular basis right that mean they're interesting people find it they find it interesting and so and then there from the i.d.w there are recommendations and links to the alt-light and the alt-light are still so as I read this paper there is kind of an undertone kind of the i.d.w. and the alt-light are still okay like they they are they discuss ideas that are you know sometimes political and so on but the real the worry is the alt-right the kind of radical right wing ethnic nationalists and I mean yes the formulation I can I can agree with and then they claim so you find the i.d.w. that they have links to the alt-light or links I mean recommendations and so on and from the alt-light and to a certain degree also from the i.d.w. you can then find the alt-right so even though a user that goes on YouTube at first isn't likely to find the alt-right videos because it's fringe it's extreme and so on by through the YouTube recommendation algorithm basically by going to the i.d.w. finding this then from there they'll find the alt-light and from there and from the i.d.w. 
they will then find the alt-right so they claim that there's this pathway of radicalization here that kind of pushes people towards the alt-right and that's their hypothesis and they claim to they have evidence to support this and I claim that there is a simple solution namely so first of all let me state I don't like the alt-right I think their ideas are despicable I should go without saying though I have said it now so you know just as a disclaimer I'm not defending anyone here I'm simply saying this paper has a simpler explanation for their data namely what I think is happening here is YouTube again is channels each dot here is a channel channels can be clustered as such right there as we saw before I'm just drawing more of them right now but channels channels channels channels channels channels channels so what I think is happening is there is a control group what they call the control group it's over here it's large control right it's a bunch of channels then which is kind of mainstream media then over here there is let's say alternative media where all of these three groups belong into so at some point you will have the i.d.w then maybe a bit further away from the control group but very close to the i.d.w would have the alt-light and very close to the two maybe here you would have the alt-right right but so notably the in my model the i.d.w the alt-light are kind of close together they are in terms of comparative distance so if you cluster these channels but let's say audience or topics or and so on it will it will turn out that all of these three are far far away from the control group those two are very close to each other and then there here there is some distance but how much you know how much distance it's it's a question but of course it's going to be smaller distance than the distance to the control group here I mean I could I could draw the alt-right maybe a more accurate picture would be something like this alt-right here so whatever I mean it doesn't it doesn't matter the details but the distance here is smaller than the distance to the control group right and in in this model a second thing is also important namely the alt-right as you can see here is much much smaller than the i.d.w and the alt-light and these again are much smaller than the control group and this I think accounts for most so the distance relations between these and the the size of the of the clusters account for most so that with size I mean mainly channel number of channels and also audience this accounts for most or most of the data better than their model so just keep this in mind right and my model of course doesn't include any kind of pipeline that they suggest so first of all they go ahead and they say all right we collect channels so they collect the data for this and you know we could go over how they collect the data and criticize that and so on they do human annotation and they start from already published reports and so on which themselves can be criticized I don't I'm not going to go into their data collection methodology it can have mistakes but then any collection methodology can have mistakes what they end up with is a number of channels and here are the top channels from each category and you can as you can see alt-right, alt-light, intellectual dark web and control so already here you can see pretty clearly the model I have in mind namely and they acknowledge all of this by the way look at the size of the alt-right channels the biggest ones compared to the size of the alt-light and the 
intellectual dark web there's much much smaller in number of views and then compare this to the size of the control group the control group again is again larger than the other two groups so just keep in mind second thing to keep in mind look at these channels maybe you know some of them Joe Rogan, Sargon of a card of these Paul Joseph Watson, Stix Hexenhammer these are YouTubers like these are individuals making YouTube clips creating content for YouTube being on this platform whereas if you compare it with the control group what's here Vox, GQ, Wyard, Business Insider these aren't YouTubers these are websites or traditional media companies or their own kind of blogs and so on that have a YouTube channel where a YouTube is one of the outlets of this media company so I think there's a giant discrepancy here in the control group that can explain also some of this data that you see so keep that in mind I think the control group they say they don't try to capture the user dynamic with the control group but I think that there's many problems with this control group including the fact that these are these are kind of traditional mainstream media that just have YouTube as an outlet and moreover a lot of these like Vox or Vice they are clickbait media and ragebait media that it has worked for a number of years but it's the algorithms are becoming more attuned to kind of clickbait and these are crashing fast whereas the kind of more YouTuber people they are they're not susceptible to that much to kind of the abolishment of clickbait all right so these are this is the data they have all these channels they have all these videos and they first of all give some some stats on it here you see on the bottom is always the year so they do this over time and you see the active channels which are channels that have uploaded videos in some time see the control group again is larger but has started to flatten out in the last few years whereas the these communities they are relatively flourishing another interesting point is that the paper somehow tries to tie this to the election of Donald Trump in 2016 but I again I think this is just kind of in there to gain relevance a lot of these kind of trends and so on you'll see already start before that so in so the start of the rise here if you see these these bumps here and so on a lot of them start before 2016 so as we go through this makeup your own mind of how much this is actually tie to the to the election or not I think it's much more the years when kind of clickbait started to go down as a as a business model nevermind though so the active channels growing though the control group not growing as much videos published even though the control group isn't growing so much they still publish the the most videos but you can see generally the site is growing generally YouTube is growing like counts and here you see something interesting starting to have namely these communities especially the alt light and the intellectual dark web they're starting to catch up and this is one of the things that the paper also states is that if you look at for example comments per video this the the light and the intellectual dark web outperform the control group vastly right that also if you look at views per video and likes per video the control group simply don't have an engaged audience which I think first of all is because they produce clickbait second of all they're just not that interesting and third of all they're not youtubers like this isn't their thing they're just simply 
an outlet but yeah so that's that's kind of a one just kind of a bunch of metrics that the that they show here the next table is a bit more interesting in the next table they do a user intersection so what they do is they collect all these videos and then they collect all the comments of these videos and the comment of course always comes with a user name you need to be logged into youtube to make a comment and they see which users comment on multiple videos or videos of multiple categories and then they can look at aha okay for user how many users of category a also comment category b and vice versa so there are two metrics here jacquard similarity which is for for two communities a and b users commenting number of users commenting on a and b divided by number of users commenting on a or b and the second the overlap coefficient is number of users commenting on a and b divided by the minimum size of a and b they say that the overlap coefficient is more useful to compare communities of different sizes so we'll look at that the top graphs are always always the card difference and the or the card similarity in the bottom one are overlap coefficient first graphs though are number of commenting users per year and you already see that even though the control group has much more views and probably much more videos much larger the comments don't so the again the the users of the all light and the intellectual dark web are much more engaged also comments per user this is the cumulative distribution function most people that comment on the on control group videos maybe comment once and then but but the these other communities they comment more self similarity means year after year so always compared to the year before how many users are similar so how how well do these communities retain users and you can already see here the control group is actually very bad at retaining users it does have this overlap coefficient high but it has the jacquard self similarity low which basically if you think of the formula of the jacquard similarity means that the this number is small and this number is high which means that a and b are very disjoint which means that the last year's users aren't this year's users basically so they they constantly have to appeal to new users because they're losing old users because well I guess they're boring whereas the all light and intellectual dark web are much more are much better at retaining users interestingly the alt right not as good as retaining users as the other two this could also be an effect of size like if your community is smaller the users might wander away more quickly but I think this already speaks against their radicalization pipeline if the if the alt right if YouTube was radicalizing people towards the alt right we I think we would see a the alt right being on top of user retention then here they have intersections between communities so green here is alt light and idw while the blue is alt right and alt light and the other blue is alt right and idd so basically the green is alt light and idw and the blues are the other two and we see that the overlap in terms of overlap proficient is similar the overlap in terms of jacquard similarity the alt light and the idw are very much more sharing users which in the picture I painted make sense if you think my model is valid my model explains this very well in that these two communities are quite close together therefore share a similar user base the alt right smaller and a bit further apart therefore not as similar 
Though still more similar than to the control group, which is the last graph. The last graph is, sorry, the last graph is how similar these communities are to the control group, and here we see the IDW and the alt-light are kind of similar, the alt-right not as similar, though in the overlap coefficient they're about the same. So the paper here claims: oh, look at the similarity, this is definitely radicalization. Well, they don't claim yet that this is a radicalization pipeline, but they claim that there's a higher similarity. If you actually look at the numbers, it's not really so. I mean, here you're around the 50% similarity, and here at the end you're also around the 50% similarity with the control group; so this is within these groups, and this is here with the control group. Also here, if I look at the kind of mean, you're at, whatever, 18 or 20%, and here you're maybe a bit lower, but you're also going towards this. What it looks like to me, rather than there being a radicalization pipeline: if you look at the shape of this, and kind of where it starts, in 2013, 2014 it starts to go up here, and you look at the shape of this, it's simply the same shape, delayed. And I mean, there's no reason why this graph wouldn't go up here in the future and reach the exact same numbers as here; it seems the graph is simply shifted, which makes total sense if you think about how these communities are arranged. I'm going to draw the same picture here, all right: IDW, alt-light, and over here control. If you think they're like that, if you simply think, well, YouTube is growing, users are growing, users are starting somewhere here and then spreading out pretty much randomly: they're spreading out, spreading out; users start here, spreading out; users start here, spreading out; here, spreading out; everywhere. If there's just kind of a diffusion process going on, not in a particular direction like they claim, what would you expect? You would expect users that started here to reach the IDW and the alt-right much sooner than they reach the control group, but ultimately, as the diffusion continues, all users will have commented on most videos if you run YouTube infinitely, and that's why these numbers go up, right? If you just let it go, the diffusion process will go along, and it simply takes a longer time to go from here all the way over here than it takes between these communities. So to me, we're looking at a simple diffusion process here that is shifted in time, and that explains very well not only the discrepancy in the numbers but also the shape of the curve, which is exactly the same but shifted. Their model does not explain the shape of the curve; they simply say, well, here it's 75% and here it's only 50%, and that means these communities are kind of shipping users towards each other. So I think this explanation is easier.
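As a rough illustration of that alternative reading, here is a tiny toy simulation; the walk, the distances and all numbers are invented and this is not the paper's model. Random walkers start inside one community, and we record when they first touch a nearby community versus a far-away one: the far curve climbs in the same way, just later.

```python
# Toy diffusion sketch: when do random walkers first "touch" a nearby vs. a far community?
# Everything here is invented for illustration; it is not the paper's data or model.
import random

STEPS, WALKERS = 400, 2000
NEAR, FAR = 8, 24  # made-up distances to a close community and a far one

first_hit_near, first_hit_far = [], []
for _ in range(WALKERS):
    pos, hit_near, hit_far = 0, None, None
    for t in range(STEPS):
        pos += random.choice((-1, 1))
        if hit_near is None and abs(pos) >= NEAR:
            hit_near = t
        if hit_far is None and abs(pos) >= FAR:
            hit_far = t
    first_hit_near.append(hit_near)
    first_hit_far.append(hit_far)

def reached_by(hits, t):
    # cumulative fraction of walkers that have touched the community by step t
    return sum(h is not None and h <= t for h in hits) / WALKERS

for t in (50, 100, 200, 399):
    print(t, round(reached_by(first_hit_near, t), 2), round(reached_by(first_hit_far, t), 2))
```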
So they claim this alone does not show that there's a pipeline; what they do now, however, is supposed to show it: they claim this is the experiment that really shows that there is this pipeline. What they do is they define what they call an infection. So what they say is: okay, for example in this row here, we're taking users that are alt-light users at the beginning, in this time window, so basically they only comment on alt-light videos during this time; so discard all users that comment on anything else, and just retain the ones that only comment on alt-light videos during this time. Then we're going to follow them over time and see how many of them have at least one comment on an alt-right video. So this is only directed from the community over here towards the alt-right. And then they call a user infected; specifically, if they comment on one or two alt-right videos they're lightly infected, if they comment on three to five they're mildly infected, and if they comment on more they're severely infected. So as you can see, users starting from the alt-light, or from the IDW, or from both, some of them will become infected over time. And since the tendencies between the groups are similar, we'll simply look at the light infections here. So they say, okay, by 2018 about 8 to 10 percent of the users have become infected in these groups, you see here, with the same trajectories, whereas in the control group it's less. Though honestly, I don't think it's that much less; again, I think there's a normal diffusion process here. And they do this similarly with the other ones. To them, this makes total sense: like, oh yeah, users that start in these communities, they migrate, they get infected by the alt-right, they go towards the alt-right because you can find it so easily. And to me, this simply looks like a normal diffusion process, and by the way, the control group isn't that much different. Here's what you need if you want to show that there is a pipeline in this direction: you need this exact same graph in the other direction, and you need to show that people who started in the alt-right do not go back in the same fashion towards the alt-light or the IDW, and that they especially do not go to the control group. You need to show this basically between each pair of these, and you need to show that the direction of infection is only in a single direction, namely towards radicalization. Otherwise you're just looking at a normal diffusion process between differently distanced and differently sized groups.
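For concreteness, here is a rough sketch of that "infection" labelling; the function name, the data structure and the example counts are mine, and the thresholds just follow the description above.

```python
# Rough sketch of the "infection" labels described above (thresholds as described in the video).
# alt_right_comments = number of comments a user later leaves on alt-right videos,
# counted after an initial window in which they commented only on, say, alt-light videos.
def infection_level(alt_right_comments: int) -> str:
    if alt_right_comments == 0:
        return "not infected"
    if alt_right_comments <= 2:
        return "lightly infected"
    if alt_right_comments <= 5:
        return "mildly infected"
    return "severely infected"

later_alt_right_comments = {"userA": 0, "userB": 1, "userC": 4, "userD": 9}  # invented counts
for user, count in later_alt_right_comments.items():
    print(user, infection_level(count))
```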
So they go on to analyze, and they ask: basically, how much of the alt-right audience is made up of people that have been radicalized, that have been infected? So this infection is kind of their proxy for what they call radicalization, and if you become infected, then basically you're now part of the alt-right, or something, even though you might actually have commented something negative; you might engage with their ideas and call them crap, but in any case, you're now infected. And they ask themselves how much of the alt-right audience consists of these infected users, so basically how much of the alt-right audience are people that in the past have not been alt-righters, that have been exclusively commenting on alt-light or IDW videos. And they find that, for example, for the alt-light, 23% of the alt-right audience are former alt-lighters that have now made one comment on an alt-right video. So their claim is: well, there is a sizable portion of the alt-right that at the beginning wasn't alt-right, that basically became infected, and therefore that kind of shows this radicalization pipeline, that the alt-right audience largely consists of people that have not been alt-right previously but have become so. And to me, again, this is simply a function of the size of these communities. If you think of this again, and you start randomly somewhere on YouTube, let's make this assumption: people start randomly somewhere on YouTube. What's the probability that you're going to start in the alt-right? Very small, right? So the kind of natural, let's say the natural size of the alt-right, before users go and migrate, is very tiny. So not many users are going to be what you would call originally alt-righters, wherever their first comment is. Basically, what this thing measures is: where is your first comment, and are any of your subsequent comments alt-right? If your first comment is not in the alt-right, then you become a potential candidate for infection, and if any later comment is on the alt-right, then you're infected. So what's the probability that your first comment is not alt-right? Well, you're going to land somewhere on YouTube, YouTube is huge, the alt-right is very small, thus that probability is extremely small. And then you simply let people diffuse, let them diffuse, let them diffuse; some will end up in the alt-right, and since the alt-right is so small to begin with, actually most people that will comment at some point on an alt-right video will have their first comment from somewhere outside the alt-right videos. It's simply a numbers game: the alt-right is so small that this is virtually guaranteed. So what they find here is, again, simply evidence of a regular diffusion process between these differently sized groups, and the claims they make from this are just over the top. Again, their comparison to the control group: if you look at the numbers, they're actually not that different from the IDW numbers; they are different from the alt-right numbers here, substantially different, but again, in my opinion, that's simply a function of the distance between these clusters. Lastly, they look at the YouTube recommender system, and they say: okay, we look at these videos and the channels, and we look at, on these videos, what other videos are recommended and what other channels are recommended. So if you have a video on YouTube, you have the video here, and here you have the recommended videos. Similarly, when you have a channel, you have a channel, this is a person, yay, and this person can, first of all, have featured channels, where they say, look, these are channels that I find cool, go check them out; and then they also have recommended channels that are given by YouTube as recommendations. So here YouTube controls basically everything, here the creator controls part and YouTube controls the other part. So they look at both. First of all, the channel recommendations, so these are both sections here, and they look at: if you start at alt-light, how likely, if you do a random walk, are you to end up at the alt-right, or at the intellectual dark web, or at the control group, after one step, two steps, three steps, four steps? The big line is the random walker, and the dashed line is the distance if you were to go in a targeted way towards such a video, like, what's the minimum number of clicks you need. And you can see here, if you start at alt-light, after one or two steps the random walker has about a two percent chance to end up at an alt-right video, and about a 25 percent chance of ending up at an intellectual dark web video, and about a 50 percent chance of ending up again at an alt-light video. The scales here are really different, so it's very difficult to judge how it compares to the control group, which is basically at zero here, but to me, again, this is a reflection of the size of these communities.
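The random-walker experiment itself is easy to picture with a sketch like the one below; the communities, the transition probabilities and the counts are all invented, and the paper of course uses the real recommendation graph rather than these made-up numbers.

```python
# Toy random walk over an invented recommendation graph between communities.
# The transition probabilities are made up for illustration only.
import random
from collections import Counter

recs = {
    "alt_light": [("alt_light", 0.55), ("idw", 0.30), ("control", 0.10), ("alt_right", 0.05)],
    "idw":       [("idw", 0.50), ("alt_light", 0.30), ("control", 0.15), ("alt_right", 0.05)],
    "alt_right": [("alt_right", 0.45), ("alt_light", 0.35), ("idw", 0.15), ("control", 0.05)],
    "control":   [("control", 0.85), ("idw", 0.08), ("alt_light", 0.05), ("alt_right", 0.02)],
}

def step(state):
    options, weights = zip(*recs[state])
    return random.choices(options, weights)[0]

def k_step_distribution(start, k, n=50_000):
    # where does a random walker starting at `start` sit after k recommendation clicks?
    counts = Counter()
    for _ in range(n):
        s = start
        for _ in range(k):
            s = step(s)
        counts[s] += 1
    return {c: round(counts[c] / n, 3) for c in recs}

for k in (1, 2, 4):
    print(k, k_step_distribution("alt_light", k))
```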
And I think it's a bit, you know, weird to then claim, oh, these are reachable, basically. So, a two percent chance of landing on an alt-right video, I'm not sure; but again, if you start from the control group, there's almost no chance you'll end up at an alt-right video, so I guess the comparison is okay if you compare to the control group. If you look at videos, however: again, if you start at alt-light, after one step you are approximately 25 percent likely to be at an IDW video, and a bit over 50 percent likely to stay at an alt-light video. However, compare this to channels: you're super unlikely to end at a control channel if you start at an alt-light channel, but in video recommendations you actually also have about a 25 percent chance of ending at a control group video, whereas, look at the scale here, you're only about 0.03 percent likely to end up at an alt-right video. And also here, even, look at this: if you start at an IDW video, the chance that you're going to end up at a control video is again super high, much higher than at an alt-light video, whereas with the channel recommendations this was completely turned around. So we see the alt-right completely loses when it comes to video recommendations, and mainly the control group gains, compared to the channel recommendations. Here's what I think: I think this is due to this section here, where the creators have power, and also this section here, YouTube recommending. I think they're putting a lot of work into the video recommendations and not that much work into these channel recommendations, and by work I mean actually manually intervening and deciding what are kind of good videos and bad videos. And the control group, there's probably big advertisement money in that, so they might be pushed up a bit in the video recommendations, since most people are going by video recommendations; I've actually never used the channel recommendations feature. And for the channel recommendations, first of all the creator has power over part of it, and then also YouTube maybe doesn't put as much work into these related channels. So both have the effect that, I would say, the data here, first of all, doesn't convince me of a radicalization pipeline; it simply convinces me that some communities are larger, some smaller, and some closer together. But second of all, this down here, if you forget about the alt-right for a moment, yeah, they're irrelevant, this down here compared to up here actually shows maybe a bit of evidence of an algorithmic promotion of these mainstream media channels, compared to how the communities are actually clustering, of which I think this up here might be a much more accurate picture. So, you know, it's just kind of a funky thing in the data. Yeah, the alt-right is irrelevant to this part because they're just too small. So this is kind of my take on this: the recommendations, and is this a pipeline, and so on; I don't think so. You've now heard my idea and you've heard their idea, decide for yourself, but I think it's a good example of how, if you are convinced of an underlying mechanism, you're going to collect evidence in support of that mechanism, and if you catch yourself doing that, really, really think: isn't there an easier explanation for this? All right, that was it for me, have fun. | [{"start": 0.0, "end": 7.32, "text": " Hi there! 
Today we're going to look at auditing radicalization pathways on YouTube by Manuel"}, {"start": 7.32, "end": 14.200000000000001, "text": " Horta Riberio at Al. So this paper is a bit different than the one we're usually looking"}, {"start": 14.200000000000001, "end": 22.44, "text": " at, but since I'm a YouTuber and this is in the kind of a data science realm, I thought"}, {"start": 22.44, "end": 29.96, "text": " it fits neatly. So, yeah, we'll have a look and this is mostly going to be an analysis"}, {"start": 29.96, "end": 41.4, "text": " and my opinion on it, so take that for what it is. This is in my opinion a paper where you can see"}, {"start": 41.4, "end": 52.64, "text": " very well what it looks like when you deceive yourself. So when you have a hypothesis of something"}, {"start": 52.64, "end": 61.32, "text": " and then only collect data that matches that and you don't think like you don't think of simpler"}, {"start": 61.32, "end": 68.0, "text": " solutions for that explain the data and therefore you don't think of experiments that could differentiate"}, {"start": 68.0, "end": 73.28, "text": " the simple solutions from what you propose. So it's a good example of how you can kind of trick"}, {"start": 73.28, "end": 79.2, "text": " yourself into believing you found something and this isn't this isn't now about YouTube or"}, {"start": 79.2, "end": 86.08, "text": " anything. This happened to me so many times. It always pays off to take a step back and say,"}, {"start": 86.08, "end": 92.96000000000001, "text": " is there a simpler explanation for what's happening and this is what I think is exactly happening"}, {"start": 92.96000000000001, "end": 101.64, "text": " here. So I'll present to you their hypothesis and then I'll present to you my kind of what I"}, {"start": 101.64, "end": 109.56, "text": " think is going on and a model that explains the data much much much easier and simpler and actually"}, {"start": 109.56, "end": 121.4, "text": " better. So let's dive in. This paper basically claims a following. So on YouTube there are channels"}, {"start": 121.4, "end": 128.92000000000002, "text": " and channels are you know independent channels they make videos and you can actually arrange these"}, {"start": 128.92, "end": 137.0, "text": " channels. So each dot here is a channel. You can arrange these channels in kind of a network and"}, {"start": 137.0, "end": 141.95999999999998, "text": " two channels you can claim they're connected and there can be a connection strength or whatever"}, {"start": 141.95999999999998, "end": 149.56, "text": " for simplicity they can be connected if for example their topics are similar if they reference"}, {"start": 149.56, "end": 155.72, "text": " each other if they are recommended by YouTube from each other if they have the same users watching"}, {"start": 155.72, "end": 161.4, "text": " those same channels or the videos of these channels. There are a number of metrics where it could"}, {"start": 163.16, "end": 169.88, "text": " you could you could make channels connected but all of them will turn out similar like we'll give"}, {"start": 169.88, "end": 178.68, "text": " you the similar structure of channels being connected. Oh that's connected twice. 
So you can kind"}, {"start": 178.68, "end": 185.4, "text": " of build a graph of how these channels are connected and what you can do then is you can cluster them."}, {"start": 185.4, "end": 190.84, "text": " You don't have to build a graph to cluster them but you can cluster the channels and what will"}, {"start": 190.84, "end": 199.32, "text": " emerge are parts of the graph that are very well connected. Here this might be connected with this"}, {"start": 199.32, "end": 207.96, "text": " and with this parts that are very well connected and are kind of within well connected within and"}, {"start": 207.96, "end": 216.12, "text": " more sparsely connected to to others like also have a larger distance in between them. So if you start"}, {"start": 216.12, "end": 222.04000000000002, "text": " out from one channel and you're kind of watching recommend videos and recommend channels and so on"}, {"start": 222.04000000000002, "end": 227.56, "text": " you'll you'll stroll along here. You will get much faster to these things than to the other things."}, {"start": 227.56, "end": 233.8, "text": " So these these are called communities usually in these kind of net social network analysis. So on"}, {"start": 233.8, "end": 241.24, "text": " YouTube you know there is a community for makeup there's a community for sports within sports"}, {"start": 241.24, "end": 246.28, "text": " there is a community for soccer there's one for basketball and so on. So these are all these"}, {"start": 246.28, "end": 252.36, "text": " kind of communities that you can discover by clustering. This paper mainly deals with three communities"}, {"start": 253.16000000000003, "end": 260.36, "text": " namely the first of all is the the IDW which is the Intellectual Dark Web. They discuss this here."}, {"start": 260.36, "end": 271.16, "text": " So the Intellectual Dark Web is they describe as a group of individuals that are in a rolling"}, {"start": 271.16, "end": 278.84000000000003, "text": " conversation with each other about topics that are let's say usually kind of difficult to to talk"}, {"start": 278.84000000000003, "end": 288.28000000000003, "text": " about such as gender differences or intelligence research in certain areas or even you know regular"}, {"start": 288.28, "end": 297.4, "text": " politics but kind of the the Intellectual Dark Web are a wide variety of people that basically are"}, {"start": 298.84, "end": 306.91999999999996, "text": " conversing with each other about topics. The description is a bit vague but the main aspect is"}, {"start": 306.91999999999996, "end": 315.4, "text": " conversation and maybe topics that are kind of on the on the edge of what's acceptable to talk about."}, {"start": 315.4, "end": 324.2, "text": " But the opinions range widely on these topics. The second group is the alt-right and the alt-right"}, {"start": 324.2, "end": 336.03999999999996, "text": " here is kind of the the they're defined as ethno-nationalists. For example here is an example the"}, {"start": 336.03999999999996, "end": 343.79999999999995, "text": " fringe ideas such as white ethno-state, white supremacist ideology and so on. 
So specifically"}, {"start": 343.8, "end": 351.64, "text": " ethno-nationalists that I think nations should be organized too along the lines of ethnicity."}, {"start": 352.36, "end": 360.28000000000003, "text": " And the goal of the paper is actually to to show that there is a kind of a dangerous pipeline"}, {"start": 360.28000000000003, "end": 367.24, "text": " on YouTube that will drive people to the alt-right and drive people into these radical ideas of"}, {"start": 367.24, "end": 376.04, "text": " the alt-right. Kind of in between is the alt-light which is here defined as civic nationalists"}, {"start": 376.04, "end": 381.40000000000003, "text": " which is simply as I understand it means that people should be organized into nations"}, {"start": 382.28000000000003, "end": 387.72, "text": " not along ethnicity but just should organize themselves into sovereign communities and"}, {"start": 387.72, "end": 397.48, "text": " they would be more of your libertarian, classically liberal people. Whereas the alt-right would be"}, {"start": 397.48, "end": 407.72, "text": " more of your let's say authoritarian right wing person. So these three communities they have a"}, {"start": 407.72, "end": 412.84000000000003, "text": " fourth community which is a they call a control group and the control group consists of what they"}, {"start": 412.84, "end": 421.15999999999997, "text": " say are kind of mainstream channels on YouTube simply to differentiate them from these three and"}, {"start": 421.15999999999997, "end": 429.4, "text": " two see what's going on with them and if there is a difference. So this is kind of the setup and"}, {"start": 429.4, "end": 437.23999999999995, "text": " as I said the hypothesis is the following people go on YouTube so YouTube is here YouTube people"}, {"start": 437.24, "end": 443.40000000000003, "text": " come on YouTube they go around right they they explore a bit and all of a sudden they find i.d.w."}, {"start": 443.40000000000003, "end": 449.32, "text": " videos these are recommended by YouTube on a fairly regular basis right that mean they're"}, {"start": 449.32, "end": 454.84000000000003, "text": " interesting people find it they find it interesting and so and then there from the i.d.w there are"}, {"start": 454.84000000000003, "end": 463.96000000000004, "text": " recommendations and links to the alt-light and the alt-light are still so as I read this paper"}, {"start": 463.96, "end": 470.28, "text": " there is kind of an undertone kind of the i.d.w. and the alt-light are still okay like they"}, {"start": 470.28, "end": 475.79999999999995, "text": " they are they discuss ideas that are you know sometimes political and so on but the real the"}, {"start": 475.79999999999995, "end": 486.03999999999996, "text": " worry is the alt-right the kind of radical right wing ethnic nationalists and I mean yes the"}, {"start": 486.03999999999996, "end": 493.08, "text": " formulation I can I can agree with and then they claim so you find the i.d.w. that they have"}, {"start": 493.08, "end": 499.88, "text": " links to the alt-light or links I mean recommendations and so on and from the alt-light and to a certain"}, {"start": 499.88, "end": 509.32, "text": " degree also from the i.d.w. 
you can then find the alt-right so even though a user that goes on YouTube"}, {"start": 509.32, "end": 517.8, "text": " at first isn't likely to find the alt-right videos because it's fringe it's extreme and so on"}, {"start": 517.8, "end": 525.0, "text": " by through the YouTube recommendation algorithm basically by going to the i.d.w. finding this then"}, {"start": 525.0, "end": 532.28, "text": " from there they'll find the alt-light and from there and from the i.d.w. they will then find the"}, {"start": 532.28, "end": 542.8399999999999, "text": " alt-right so they claim that there's this pathway of radicalization here that kind of pushes people"}, {"start": 542.84, "end": 553.8000000000001, "text": " towards the alt-right and that's their hypothesis and they claim to they have evidence to support this"}, {"start": 554.52, "end": 564.0400000000001, "text": " and I claim that there is a simple solution namely so first of all let me state I don't like the"}, {"start": 564.04, "end": 575.0799999999999, "text": " alt-right I think their ideas are despicable I should go without saying though I have said it now so"}, {"start": 576.1999999999999, "end": 582.04, "text": " you know just as a disclaimer I'm not defending anyone here I'm simply saying this paper"}, {"start": 583.0799999999999, "end": 587.3199999999999, "text": " has a simpler explanation for their data namely what I think is happening here"}, {"start": 587.32, "end": 599.96, "text": " is YouTube again is channels each dot here is a channel channels can be clustered as such right there"}, {"start": 599.96, "end": 605.48, "text": " as we saw before I'm just drawing more of them right now but channels channels channels channels channels channels"}, {"start": 605.48, "end": 614.7600000000001, "text": " channels so what I think is happening is there is a control group what they call the control group"}, {"start": 614.76, "end": 626.52, "text": " it's over here it's large control right it's a bunch of channels then which is kind of mainstream media"}, {"start": 626.52, "end": 635.08, "text": " then over here there is let's say alternative media where all of these three groups belong into"}, {"start": 635.08, "end": 642.6, "text": " so at some point you will have the i.d.w then maybe a bit further away from the control group but"}, {"start": 642.6, "end": 649.16, "text": " very close to the i.d.w would have the alt-light and very close to the two maybe here you would have"}, {"start": 649.16, "end": 659.32, "text": " the alt-right right but so notably the in my model the i.d.w the alt-light are kind of close"}, {"start": 659.32, "end": 667.24, "text": " together they are in terms of comparative distance so if you cluster these channels but let's say"}, {"start": 667.24, "end": 675.4, "text": " audience or topics or and so on it will it will turn out that all of these three are far far away"}, {"start": 675.4, "end": 681.4, "text": " from the control group those two are very close to each other and then there here there is"}, {"start": 681.4, "end": 687.88, "text": " some distance but how much you know how much distance it's it's a question but of course it's"}, {"start": 687.88, "end": 694.52, "text": " going to be smaller distance than the distance to the control group here I mean I could I could draw"}, {"start": 694.52, "end": 704.6, "text": " the alt-right maybe a more accurate picture would be something like this alt-right here so whatever"}, {"start": 704.6, "end": 711.72, "text": " I mean it doesn't it doesn't matter the details 
but the distance here is smaller than the distance"}, {"start": 711.72, "end": 720.84, "text": " to the control group right and in in this model a second thing is also important namely the"}, {"start": 720.84, "end": 730.36, "text": " alt-right as you can see here is much much smaller than the i.d.w and the alt-light and these again"}, {"start": 730.36, "end": 737.48, "text": " are much smaller than the control group and this I think accounts for most so the distance"}, {"start": 737.48, "end": 749.96, "text": " relations between these and the the size of the of the clusters account for most so that with size"}, {"start": 749.96, "end": 756.9200000000001, "text": " I mean mainly channel number of channels and also audience this accounts for most or most of the"}, {"start": 756.9200000000001, "end": 764.84, "text": " data better than their model so just keep this in mind right and my model of course doesn't include"}, {"start": 764.84, "end": 775.0, "text": " any kind of pipeline that they suggest so first of all they go ahead and they say all right"}, {"start": 775.0, "end": 781.56, "text": " we collect channels so they collect the data for this and you know we could go over how they"}, {"start": 781.56, "end": 788.04, "text": " collect the data and criticize that and so on they do human annotation and they start from already"}, {"start": 788.04, "end": 793.96, "text": " published reports and so on which themselves can be criticized I don't I'm not going to go into"}, {"start": 793.96, "end": 799.56, "text": " their data collection methodology it can have mistakes but then any collection methodology"}, {"start": 799.56, "end": 807.2399999999999, "text": " can have mistakes what they end up with is a number of channels and here are the top channels"}, {"start": 807.2399999999999, "end": 814.5999999999999, "text": " from each category and you can as you can see alt-right, alt-light, intellectual dark web and"}, {"start": 814.5999999999999, "end": 822.3599999999999, "text": " control so already here you can see pretty clearly the model I have in mind namely and they"}, {"start": 822.3599999999999, "end": 828.68, "text": " acknowledge all of this by the way look at the size of the alt-right channels the biggest ones"}, {"start": 828.68, "end": 836.12, "text": " compared to the size of the alt-light and the intellectual dark web there's much much smaller"}, {"start": 836.12, "end": 842.52, "text": " in number of views and then compare this to the size of the control group the control group again"}, {"start": 842.52, "end": 850.4399999999999, "text": " is again larger than the other two groups so just keep in mind second thing to keep in mind"}, {"start": 850.44, "end": 859.1600000000001, "text": " look at these channels maybe you know some of them Joe Rogan, Sargon of a card of these"}, {"start": 859.96, "end": 868.6, "text": " Paul Joseph Watson, Stix Hexenhammer these are YouTubers like these are individuals making"}, {"start": 868.6, "end": 875.72, "text": " YouTube clips creating content for YouTube being on this platform whereas if you compare it with"}, {"start": 875.72, "end": 884.9200000000001, "text": " the control group what's here Vox, GQ, Wyard, Business Insider these aren't YouTubers these are"}, {"start": 885.88, "end": 893.64, "text": " websites or traditional media companies or their own kind of blogs and so on that have a YouTube"}, {"start": 893.64, "end": 903.48, "text": " channel where a YouTube is one of the outlets of this media company so I think there's a giant"}, 
{"start": 903.48, "end": 909.64, "text": " discrepancy here in the control group that can explain also some of this data that you see"}, {"start": 910.52, "end": 916.6, "text": " so keep that in mind I think the control group they say they don't try to capture the user dynamic"}, {"start": 916.6, "end": 922.12, "text": " with the control group but I think that there's many problems with this control group including"}, {"start": 922.12, "end": 929.16, "text": " the fact that these are these are kind of traditional mainstream media that just have YouTube as an"}, {"start": 929.16, "end": 937.9599999999999, "text": " outlet and moreover a lot of these like Vox or Vice they are clickbait media and ragebait media"}, {"start": 938.68, "end": 945.3199999999999, "text": " that it has worked for a number of years but it's the algorithms are becoming more attuned to"}, {"start": 945.3199999999999, "end": 955.24, "text": " kind of clickbait and these are crashing fast whereas the kind of more YouTuber people they are"}, {"start": 955.24, "end": 964.76, "text": " they're not susceptible to that much to kind of the abolishment of clickbait all right so these are"}, {"start": 964.76, "end": 971.16, "text": " this is the data they have all these channels they have all these videos and they first of all give"}, {"start": 972.52, "end": 981.72, "text": " some some stats on it here you see on the bottom is always the year so they do this over time"}, {"start": 981.72, "end": 991.96, "text": " and you see the active channels which are channels that have uploaded videos in some time see the"}, {"start": 991.96, "end": 1001.48, "text": " control group again is larger but has started to flatten out in the last few years whereas the"}, {"start": 1001.48, "end": 1008.84, "text": " these communities they are relatively flourishing another interesting point is that the paper somehow"}, {"start": 1008.84, "end": 1021.0, "text": " tries to tie this to the election of Donald Trump in 2016 but I again I think this is just kind of"}, {"start": 1021.0, "end": 1028.6000000000001, "text": " in there to gain relevance a lot of these kind of trends and so on you'll see already start before"}, {"start": 1028.6000000000001, "end": 1038.44, "text": " that so in so the start of the rise here if you see these these bumps here and so on a lot of them"}, {"start": 1038.44, "end": 1044.92, "text": " start before 2016 so as we go through this makeup your own mind of how much this is actually tie to"}, {"start": 1044.92, "end": 1055.0, "text": " the to the election or not I think it's much more the years when kind of clickbait started to go"}, {"start": 1055.0, "end": 1063.64, "text": " down as a as a business model nevermind though so the active channels growing though the control"}, {"start": 1063.64, "end": 1071.48, "text": " group not growing as much videos published even though the control group isn't growing so much they"}, {"start": 1071.48, "end": 1079.4, "text": " still publish the the most videos but you can see generally the site is growing generally YouTube"}, {"start": 1079.4, "end": 1086.5200000000002, "text": " is growing like counts and here you see something interesting starting to have namely these"}, {"start": 1086.5200000000002, "end": 1091.72, "text": " communities especially the alt light and the intellectual dark web they're starting to catch up"}, {"start": 1091.72, "end": 1097.64, "text": " and this is one of the things that the paper also states is that if you look at for example comments"}, {"start": 
1097.64, "end": 1107.72, "text": " per video this the the light and the intellectual dark web outperform the control group vastly right"}, {"start": 1107.72, "end": 1118.44, "text": " that also if you look at views per video and likes per video the control group simply don't have"}, {"start": 1118.44, "end": 1126.04, "text": " an engaged audience which I think first of all is because they produce clickbait second of all"}, {"start": 1126.04, "end": 1131.48, "text": " they're just not that interesting and third of all they're not youtubers like this isn't their"}, {"start": 1131.48, "end": 1140.68, "text": " thing they're just simply an outlet but yeah so that's that's kind of a one just kind of a bunch"}, {"start": 1140.68, "end": 1152.3600000000001, "text": " of metrics that the that they show here the next table is a bit more interesting in the next table"}, {"start": 1152.3600000000001, "end": 1158.04, "text": " they do a user intersection so what they do is they collect all these videos and then they"}, {"start": 1158.04, "end": 1164.8400000000001, "text": " collect all the comments of these videos and the comment of course always comes with a user name"}, {"start": 1164.84, "end": 1171.72, "text": " you need to be logged into youtube to make a comment and they see which users comment on multiple"}, {"start": 1171.72, "end": 1178.4399999999998, "text": " videos or videos of multiple categories and then they can look at aha okay for user how many"}, {"start": 1178.4399999999998, "end": 1183.8799999999999, "text": " users of category a also comment category b and vice versa so there are two metrics here"}, {"start": 1184.6, "end": 1190.6, "text": " jacquard similarity which is for for two communities a and b users commenting"}, {"start": 1190.6, "end": 1197.1599999999999, "text": " number of users commenting on a and b divided by number of users commenting on a or b and the"}, {"start": 1197.1599999999999, "end": 1203.32, "text": " second the overlap coefficient is number of users commenting on a and b divided by the minimum"}, {"start": 1203.32, "end": 1212.28, "text": " size of a and b they say that the overlap coefficient is more useful to compare communities"}, {"start": 1212.28, "end": 1221.16, "text": " of different sizes so we'll look at that the top graphs are always always the card difference"}, {"start": 1221.8799999999999, "end": 1229.96, "text": " and the or the card similarity in the bottom one are overlap coefficient first graphs though"}, {"start": 1229.96, "end": 1237.56, "text": " are number of commenting users per year and you already see that even though the control group"}, {"start": 1237.56, "end": 1243.96, "text": " has much more views and probably much more videos much larger the comments don't"}, {"start": 1244.6799999999998, "end": 1250.6799999999998, "text": " so the again the the users of the all light and the intellectual dark web are much more engaged"}, {"start": 1254.44, "end": 1260.6, "text": " also comments per user this is the cumulative distribution function most people that comment on"}, {"start": 1260.6, "end": 1268.36, "text": " the on control group videos maybe comment once and then but but the these other communities"}, {"start": 1268.36, "end": 1275.32, "text": " they comment more self similarity means year after year so always compared to the year before"}, {"start": 1276.04, "end": 1283.6399999999999, "text": " how many users are similar so how how well do these communities retain users and you can"}, {"start": 1283.6399999999999, "end": 
1290.12, "text": " already see here the control group is actually very bad at retaining users it does have this"}, {"start": 1290.12, "end": 1297.08, "text": " overlap coefficient high but it has the jacquard self similarity low which basically if you think"}, {"start": 1297.08, "end": 1305.8799999999999, "text": " of the formula of the jacquard similarity means that the this number is small and this number is"}, {"start": 1305.8799999999999, "end": 1314.4399999999998, "text": " high which means that a and b are very disjoint which means that the last year's users aren't"}, {"start": 1314.44, "end": 1321.96, "text": " this year's users basically so they they constantly have to appeal to new users because they're"}, {"start": 1321.96, "end": 1329.72, "text": " losing old users because well I guess they're boring whereas the all light and intellectual dark"}, {"start": 1329.72, "end": 1338.52, "text": " web are much more are much better at retaining users interestingly the alt right not as good as"}, {"start": 1338.52, "end": 1346.2, "text": " retaining users as the other two this could also be an effect of size like if your community is"}, {"start": 1346.2, "end": 1354.2, "text": " smaller the users might wander away more quickly but I think this already speaks against their"}, {"start": 1354.2, "end": 1362.12, "text": " radicalization pipeline if the if the alt right if YouTube was radicalizing people towards the alt"}, {"start": 1362.12, "end": 1371.3999999999999, "text": " right we I think we would see a the alt right being on top of user retention"}, {"start": 1373.7199999999998, "end": 1383.8, "text": " then here they have intersections between communities so green here is alt light and idw"}, {"start": 1383.8, "end": 1393.24, "text": " while the blue is alt right and alt light and the other blue is alt right and idd"}, {"start": 1393.24, "end": 1403.1599999999999, "text": " so basically the green is alt light and idw and the blues are the other two and we see that the"}, {"start": 1403.1599999999999, "end": 1410.52, "text": " overlap in terms of overlap proficient is similar the overlap in terms of jacquard similarity"}, {"start": 1410.52, "end": 1420.68, "text": " the alt light and the idw are very much more sharing users which in the picture I painted make sense"}, {"start": 1420.68, "end": 1429.72, "text": " if you think my model is valid my model explains this very well in that these two communities are"}, {"start": 1430.52, "end": 1439.0, "text": " quite close together therefore share a similar user base the alt right smaller and a bit further apart"}, {"start": 1439.0, "end": 1446.68, "text": " therefore not as similar though more similar than the control group which is the last graph the last"}, {"start": 1446.68, "end": 1453.96, "text": " graph is sorry the last graph is how similar are these communities to the control group and here"}, {"start": 1455.72, "end": 1467.4, "text": " we see the idw and the alt light kind of similar the alt right not as similar though in the overlap"}, {"start": 1467.4, "end": 1477.48, "text": " proficient they're about the same so the paper here claims oh look at look at the the similarity"}, {"start": 1477.48, "end": 1483.4, "text": " this is definitely a radicalized so they don't claim yet this is a radicalization pipeline but they"}, {"start": 1483.4, "end": 1489.24, "text": " they claim that there's a higher similarity if you actually look at the numbers it's not it's not so"}, {"start": 1490.44, "end": 1497.0, "text": " mean here you're 
around around the 50% similarity and here at the end you're also around the 50%"}, {"start": 1497.0, "end": 1502.04, "text": " similarity with the control group so this is within these groups and this is here with the control group"}, {"start": 1503.24, "end": 1511.8, "text": " also here you're if I look at the kind of mean here you're at whatever 2018% and here you're also"}, {"start": 1512.6, "end": 1517.64, "text": " you're maybe a bit lower but you're also going towards this what it looks to me like"}, {"start": 1518.68, "end": 1523.64, "text": " rather than there being a radicalization pipeline if you look at the shape of this"}, {"start": 1523.64, "end": 1533.3200000000002, "text": " and kind of where it starts in 2013 2014 it starts to go up here and you look at the shape of this"}, {"start": 1533.3200000000002, "end": 1541.48, "text": " it's simply the same shape delayed and I mean there's no reason why this graph wouldn't go up here"}, {"start": 1542.44, "end": 1549.72, "text": " um wouldn't go up here in the future and reach the exact same numbers as here it seems the"}, {"start": 1549.72, "end": 1556.76, "text": " graph is simply shifted which makes total sense if you think these communities are I'm going to draw"}, {"start": 1556.76, "end": 1569.0, "text": " the same picture here all right IDW all light and over here control if you think they're they're"}, {"start": 1569.0, "end": 1576.68, "text": " they're like that if you simply think well YouTube is growing users are growing users are starting"}, {"start": 1576.68, "end": 1583.24, "text": " somewhere here and then spreading out pretty much randomly right they're spreading out spreading out"}, {"start": 1583.24, "end": 1588.76, "text": " spreading out users start here spreading out users start here spreading out here spreading out"}, {"start": 1588.76, "end": 1594.68, "text": " everywhere users just if kind of there's diffusion process going on not in a particular direction"}, {"start": 1594.68, "end": 1599.8, "text": " like they claim if there is just a diffusion process going on what would you expect you would"}, {"start": 1599.8, "end": 1609.96, "text": " expect users that started here to reach the IDW and the altbrite much sooner than they reach the"}, {"start": 1609.96, "end": 1617.1599999999999, "text": " control group but ultimately as the diffusion continues all users will have commented on most"}, {"start": 1617.1599999999999, "end": 1622.76, "text": " videos if you run YouTube infinitely and these numbers would go that's why the numbers go up"}, {"start": 1622.76, "end": 1629.6399999999999, "text": " right if you just let it go the diffusion process will go along and it simply takes a longer"}, {"start": 1629.64, "end": 1635.96, "text": " time to go from here all the way over here then it goes then between these communities so"}, {"start": 1637.3200000000002, "end": 1645.0, "text": " to me we're looking at a simple diffusion process here that is shifted in time and that explains"}, {"start": 1645.0, "end": 1651.64, "text": " very much the discrepancy number but also the shape of the curve that is exactly the same but"}, {"start": 1651.64, "end": 1658.76, "text": " shifted their model does not explain the shape of the curve they simply say well here it's 75%"}, {"start": 1658.76, "end": 1666.04, "text": " and here it's only 50% that means that these communities are kind of shipping users towards each"}, {"start": 1666.04, "end": 1676.92, "text": " other so I think the explanation is easier then so they claim this 
does not alone kind of show that"}, {"start": 1676.92, "end": 1683.64, "text": " there's a pipeline what they now do however will show that basically so they claim this is the"}, {"start": 1683.64, "end": 1691.88, "text": " experiment that really shows that there is it is pipeline so what they do is they define what they"}, {"start": 1691.88, "end": 1702.6000000000001, "text": " call an infection so what they say is okay we are for example this this row here we're taking users"}, {"start": 1703.72, "end": 1712.0400000000002, "text": " that are altlight users at the beginning in this time so basically they only comment on"}, {"start": 1712.04, "end": 1720.6, "text": " the only comment on altlight videos during this time right so it discard all users that comment on"}, {"start": 1720.6, "end": 1727.0, "text": " anything else just retain the ones that only comment on altlight videos during this time then"}, {"start": 1727.0, "end": 1734.6, "text": " we're going to follow them over time and see how many of them have at least one comment"}, {"start": 1734.6, "end": 1742.84, "text": " in an alt-right video so this is only directed from the community over here towards the alt-right"}, {"start": 1743.48, "end": 1749.8, "text": " and then they call a user infected specifically if they comment on one or two"}, {"start": 1749.8, "end": 1757.48, "text": " alt-right videos they're likely infected if they comment on three to five they're mildly infected"}, {"start": 1757.48, "end": 1768.68, "text": " and if they comment on more they're severely infected so as you can see users starting from the alt-light"}, {"start": 1770.28, "end": 1778.1200000000001, "text": " or from the IDW or from both they will become in some will become infected over time namely"}, {"start": 1779.56, "end": 1785.88, "text": " and I postulate we simply look at the since the tendencies between the groups are similar we'll"}, {"start": 1785.88, "end": 1795.96, "text": " simply look at the light infections here so they say okay after you know in 2018 about 8 to 10"}, {"start": 1795.96, "end": 1802.44, "text": " percent of the users become infected in these groups you see here here but the same trajectories"}, {"start": 1802.44, "end": 1812.3600000000001, "text": " whereas it um so whereas in the control group it's less here though honestly"}, {"start": 1812.36, "end": 1823.0, "text": " I don't think it's that much less right I think that again I think there's a normal diffusion process"}, {"start": 1823.0, "end": 1831.9599999999998, "text": " here and they do this similarly with the with the other ones and to me like to them this makes total"}, {"start": 1831.9599999999998, "end": 1837.56, "text": " sense like oh yeah users that start in these communities they migrate they get infected by the"}, {"start": 1837.56, "end": 1842.36, "text": " alt-right they go towards the alt-right because you can find it so easily and to me this simply"}, {"start": 1842.36, "end": 1850.04, "text": " looks like a normal diffusion process here's what you need if you want and by the way the control"}, {"start": 1850.04, "end": 1856.76, "text": " group isn't that much different here's what you need if you want to show that there is a pipeline"}, {"start": 1856.76, "end": 1865.1599999999999, "text": " in this direction you need this exact same graph in the other direction and you need to show"}, {"start": 1865.16, "end": 1874.52, "text": " the people that started in the alt-right do not go back in the same fashion towards the"}, {"start": 1874.52, "end": 
1881.8000000000002, "text": " alt-light or the i.e.w and they do especially not go to the control group you need to show this"}, {"start": 1881.8000000000002, "end": 1888.68, "text": " basically between each pair of these and you need to show that the direction of infection is"}, {"start": 1888.68, "end": 1896.3600000000001, "text": " only in a single direction namely towards radicalization otherwise you're just looking at a normal"}, {"start": 1896.3600000000001, "end": 1905.88, "text": " diffusion process between differently distance and differently sized groups so they go on to analyze"}, {"start": 1905.88, "end": 1914.3600000000001, "text": " and they say well how much basically how much of the alt-right audience makes is made up by people"}, {"start": 1914.36, "end": 1919.9599999999998, "text": " that have been radicalized that have been infected so that this infection is kind of their proxy"}, {"start": 1919.9599999999998, "end": 1928.76, "text": " for what they call a radicalization and if you become infected then basically you're not part of"}, {"start": 1928.76, "end": 1934.28, "text": " the alt-right or something even though you might have commented something negative actually"}, {"start": 1934.28, "end": 1943.24, "text": " the you might engage with their ideas and call them their crap but in any case you're now infected"}, {"start": 1943.24, "end": 1951.32, "text": " and they ask themselves how much of the alt-right audience has are of these infected so basically"}, {"start": 1951.32, "end": 1960.04, "text": " how much of the alt-right audience have are people that in the past have been not alt-ridors have"}, {"start": 1960.04, "end": 1970.68, "text": " been exclusively commenting on all light or i.d.w videos and they find that for example for"}, {"start": 1970.68, "end": 1979.8, "text": " all light 23% of the alt-right audience are former alt-liders and have our former alt-liders"}, {"start": 1979.8, "end": 1987.72, "text": " that have now made one comment on an alt-right video so their claim is well there is a sizable"}, {"start": 1987.72, "end": 1996.8400000000001, "text": " portion of the alt-right that at the beginning wasn't alt-right that basically became infected"}, {"start": 1996.84, "end": 2003.3999999999999, "text": " and therefore that that kind of shows this radicalization pipeline that the alt-right audience"}, {"start": 2003.3999999999999, "end": 2012.6, "text": " is mainly consistent of people that have not been alt-right previously but have become so"}, {"start": 2013.1599999999999, "end": 2022.28, "text": " and to me again this is simply a function of the size of these communities right if you think"}, {"start": 2022.28, "end": 2028.36, "text": " of this again and you start randomly somewhere on youtube let's let's make this assumption people"}, {"start": 2028.36, "end": 2033.8799999999999, "text": " start randomly somewhere on youtube what's the probability that you're going to start in the"}, {"start": 2033.8799999999999, "end": 2041.56, "text": " alt-right very small right so what's the the kind of natural let's say the natural"}, {"start": 2043.16, "end": 2050.68, "text": " size of alt-right before users go and migrate is very tiny right so not many users are going to be"}, {"start": 2050.68, "end": 2057.08, "text": " what you would consult originally alt-righters wherever they're their first comment basically what"}, {"start": 2057.08, "end": 2063.48, "text": " this thing measures is where is your first comment and are any of your subsequent comments all 
right"}, {"start": 2064.2, "end": 2069.8799999999997, "text": " if your first comment is not in the alt-right then you become a potential candidate for infection and"}, {"start": 2069.8799999999997, "end": 2075.72, "text": " if any comment is on the alt-right then you're infected so what's the probability that your first"}, {"start": 2075.72, "end": 2080.68, "text": " comment is not alt-right well you're gonna land somewhere on youtube youtube is huge the alt-right is"}, {"start": 2080.68, "end": 2089.3999999999996, "text": " very small thus that probability is extremely small and then you let's you simply let people"}, {"start": 2089.3999999999996, "end": 2097.24, "text": " diffuse let them diffuse let them diffuse some will end up in the alt-right and since the alt-right"}, {"start": 2097.24, "end": 2103.56, "text": " is so small to begin with actually most people that will comment at some point on the not-right video"}, {"start": 2103.56, "end": 2114.2799999999997, "text": " will will have their first comment from somewhere outside the the alt-right videos um simply simply"}, {"start": 2114.2799999999997, "end": 2121.08, "text": " a numbers game right simply the alt-right is so small that this is virtually guaranteed so what"}, {"start": 2121.08, "end": 2126.92, "text": " they find here is again simply an evidence of a regular diffusion process between these"}, {"start": 2126.92, "end": 2134.92, "text": " differently sized groups and the claims they make from this are just over the top again that"}, {"start": 2134.92, "end": 2140.2000000000003, "text": " they're comparison to the control group if you if you look at the numbers they're actually not"}, {"start": 2140.2000000000003, "end": 2150.44, "text": " that different from this from the idw numbers they're they're different than the alt-right here"}, {"start": 2150.44, "end": 2158.6, "text": " substantially different but again it's simply a function of distance in my opinion in these in"}, {"start": 2158.6, "end": 2171.4, "text": " these clusters um lastly they look at the youtube recommender system and they say okay if we look"}, {"start": 2171.4, "end": 2179.48, "text": " at these videos and the channels and we look at on these videos what other videos are recommended"}, {"start": 2179.48, "end": 2184.12, "text": " and what other channels are recommended so if you have like a video on youtube you have the video"}, {"start": 2184.12, "end": 2189.2400000000002, "text": " here and here you have like recommended videos similarly when you have a channel right you have a"}, {"start": 2189.2400000000002, "end": 2194.68, "text": " channel this is a person yay of this person the person can have first of all they can have features"}, {"start": 2194.68, "end": 2200.92, "text": " channels where they say look these are channels that I find cool I go check them out and then they"}, {"start": 2200.92, "end": 2208.36, "text": " also have recommended channels that are kind of given by youtube as recommendations so here youtube"}, {"start": 2208.36, "end": 2214.6800000000003, "text": " controls basically everything here the creator controls part and the youtube controls dollar part"}, {"start": 2215.4, "end": 2224.92, "text": " so they they look to both first of all the channels channels recommend recommendation so these"}, {"start": 2224.92, "end": 2234.6800000000003, "text": " are both sections here and they look at if you start on a alt-light video how likely if you do a"}, {"start": 2234.68, "end": 2242.12, "text": " random walk are you to end up 
in the alt-right or in the in touch with our web or control group"}, {"start": 2242.12, "end": 2248.9199999999996, "text": " after one step two steps three steps four steps so that the the big line is the random walker"}, {"start": 2248.9199999999996, "end": 2256.12, "text": " and actually the dashed line is the distance if you were to targetly go into the direction of"}, {"start": 2256.12, "end": 2265.24, "text": " such a video like what's the minimum number of clicks you need and you can see here the the"}, {"start": 2267.72, "end": 2275.08, "text": " if you start an alt-light after one or two steps the random walker is kind of a two percent chance"}, {"start": 2275.08, "end": 2284.2799999999997, "text": " to end up at an alt-right video and about a 25 percent chance here of ending up in a intellectual"}, {"start": 2284.28, "end": 2291.7200000000003, "text": " dark web video and about a 50 percent chance of ending up again at an alt-light video the scales"}, {"start": 2291.7200000000003, "end": 2297.96, "text": " here are really different so it's it's very difficult to judge how it compares to the control group"}, {"start": 2297.96, "end": 2305.2400000000002, "text": " which is playing at zero here but to me again this is a reflection of the size of these communities"}, {"start": 2305.24, "end": 2314.3599999999997, "text": " and I think it's a bit you know weird to to then claim oh these are reachable basically so two"}, {"start": 2314.3599999999997, "end": 2323.24, "text": " percent chance of landing on an alt-right video I'm I'm not sure but again if you compare if you"}, {"start": 2323.24, "end": 2331.0, "text": " start from the control group there's almost no chance you'll end up in a alt-right video so I"}, {"start": 2331.0, "end": 2342.2, "text": " guess the comparison is is okay if you compare to control group if you start look at videos however"}, {"start": 2343.64, "end": 2353.32, "text": " again if you start at alt-light after one step you are approximately 25 percent likely to be in an"}, {"start": 2353.32, "end": 2361.96, "text": " IDW video you're a bit over 50 percent likely to stay in an alt-light video however compare this"}, {"start": 2361.96, "end": 2369.88, "text": " to channels you're almost super unlikely to end at a control channel if you start at an alt-light"}, {"start": 2369.88, "end": 2377.1600000000003, "text": " channel but in video recommendations you're actually also about 25 percent chance of ending in a"}, {"start": 2377.16, "end": 2387.08, "text": " control group video where as look at the scale here you're only about 0.03 percent likely to end"}, {"start": 2387.08, "end": 2400.52, "text": " up in an alt-right video and also here so here even look at this if you start an IDW video the"}, {"start": 2400.52, "end": 2410.36, "text": " chance that you're going to end up in a control again super high much higher than an alt-light video"}, {"start": 2411.24, "end": 2417.24, "text": " whereas with the channel recommendations this was completely turned around so we see the alt-right"}, {"start": 2417.24, "end": 2423.32, "text": " completely loses when it comes to video recommendations and mainly the control group gains"}, {"start": 2423.32, "end": 2431.48, "text": " compared to the channel recommendations I think here's what I think I think this is due to this"}, {"start": 2431.48, "end": 2438.92, "text": " section here this section here where the creators have power and also this section here"}, {"start": 2438.92, "end": 2445.32, "text": " YouTube recommending I 
think they're putting a lot of work into the video recommendations I think"}, {"start": 2445.32, "end": 2450.92, "text": " they're putting not that much work into these recommendations and by work I mean actually manually"}, {"start": 2450.92, "end": 2457.56, "text": " intervening and deciding what's kind of good videos and bad videos and the control group they're"}, {"start": 2457.56, "end": 2465.48, "text": " probably there's probably big advertisement money in that so they might be pushed up a bit in"}, {"start": 2465.48, "end": 2471.32, "text": " the video recommendations since most people are going by video recommendations I've actually never"}, {"start": 2471.32, "end": 2476.6, "text": " used the channel recommendations feature and the channel recommendations first of all the creator"}, {"start": 2476.6, "end": 2483.72, "text": " has power over part of it and then also YouTube maybe not put as much work into these related"}, {"start": 2483.72, "end": 2494.8399999999997, "text": " channels so both have in the effect that I would say that the data here first of all it doesn't"}, {"start": 2495.56, "end": 2499.56, "text": " doesn't convince me of a radicalization pipeline it simply convinces me that some"}, {"start": 2499.56, "end": 2506.92, "text": " communities are larger smaller and closer together but second of all that this down here if you"}, {"start": 2506.92, "end": 2513.64, "text": " forget about the alt right for a moment yeah they're irrelevant this down here actually compared to"}, {"start": 2513.64, "end": 2522.44, "text": " up here shows maybe a bit of evidence of an algorithmic promotion of these mainstream media"}, {"start": 2522.44, "end": 2531.16, "text": " channels compared to how the communities are actually clustering which I think this this up here"}, {"start": 2531.16, "end": 2539.48, "text": " might be a much more accurate picture so you know that it's just kind of a funky thing in the data"}, {"start": 2541.96, "end": 2547.48, "text": " yeah that they'll write this irrelevant to this part because they're they're just too small"}, {"start": 2547.48, "end": 2559.56, "text": " so this is this is kind of my take on this they didn't give recommendations and is this a pipeline"}, {"start": 2559.56, "end": 2565.96, "text": " and so on and I don't think so you've now heard my idea and you've heard their idea"}, {"start": 2565.96, "end": 2577.96, "text": " decide for yourself but I think it's a good example of how if you are convinced of an underlying"}, {"start": 2577.96, "end": 2585.2400000000002, "text": " mechanism you're going to collect evidence in support of that mechanism and if you catch yourself"}, {"start": 2585.2400000000002, "end": 2591.08, "text": " doing that really really think isn't there an easier explanation for this all right that was it"}, {"start": 2591.08, "end": 2598.52, "text": " for me have fun"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=wZWn7Hm8osA | Gauge Equivariant Convolutional Networks and the Icosahedral CNN | Ever wanted to do a convolution on a Klein Bottle? This paper defines CNNs over manifolds such that they are independent of which coordinate frame you choose. Amazingly, this then results in an efficient practical method to achieve state-of-the-art in several tasks!
https://arxiv.org/abs/1902.04615
Abstract:
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design. Equivariant networks have shown excellent performance and data efficiency on vision and medical imaging problems that exhibit symmetries. Here we show how this principle can be extended beyond global symmetries to local gauge transformations. This enables the development of a very general class of convolutional neural networks on manifolds that depend only on the intrinsic geometry, and which includes many popular methods from equivariant and geometric deep learning. We implement gauge equivariant CNNs for signals defined on the surface of the icosahedron, which provides a reasonable approximation of the sphere. By choosing to work with this very regular manifold, we are able to implement the gauge equivariant convolution using a single conv2d call, making it a highly scalable and practical alternative to Spherical CNNs. Using this method, we demonstrate substantial improvements over previous methods on the task of segmenting omnidirectional images and global climate patterns.
Authors: Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling | What you're looking at here are manifolds. Specifically, you're looking at 2D manifolds embedded in a 3D space. So, naturally, these are some kind of bodies that have a surface. And one of the things you might want to do with a manifold like this is to define a convolutional neural network to work on this surface. So, usually, we have convolutional neural network working on flat surfaces, such as images. But what if you could actually work on a manifold like this? An easy example is a sphere. You might want to work on a sphere. Why is that useful? Maybe you want to predict the climate. And then you actually want to work on the Earth's surface, which is approximated by a sphere. So, today we'll look at the following paper. Gage-equivariant convolutional networks and the ICO-SAHEDRAL CNN by Tako Kohen, Maurice Weiler, Burkai Kei-chang Kei-chang No Glue and Max Welling. So, as I already said, this paper tries to define convolutional neural networks on any kind of manifold. So, what's the problem inherently when you're doing this? Can't you just, you know, place a filter, move it around like you do in a regular CNN? That's exactly the problem, actually. So, if you have a picture and let me draw a picture of a cat. Right? Cat here. Here, here, here. Hi, hi. All right. Cat smiling. This is a terrible cat. What you do is you have your filter, right? And that's a little patch in the image. You're just going to move this filter, move it around, move it around. And at each point, you convolve the filter. If this is larger, you convolve each of the elements of the filter. Here, maybe you have nine elements. So, each of these elements here is convolved with the underlying image. At the end, you aggregate all of them into a single point, usually by adding them up. And there, you, from this image, you produce a new image that is a different thing. So, if this kernel here, for example, is a specific kernel that detects lines you might end up with, or that detects specifically up down lines, you might end up with just the lines that go up and down in this. So, the eyes here, here, right? So, this might be the result of this convolution. Of course, in C and then these convolutional kernels then are learned as parameters. So, it seems pretty easy, right? You just simply take a kernel and kind of shift it around each point you convolve the underlying image. And that's it. Well, it's not so easy if you work on a manifold. And why is that illustrated here on a sphere? So, if you have a sphere and you place a kernel, it really matters which direction you place the kernel in. So, I mean, it does on an image, but bear with me. So, here you place a kernel in the direction of this arrow, right? You place the kernel, maybe like this here, you place your little kernel on it, and you say, up, basically, up is here. And then you move that kernel around, and ultimately, you want to move it all the way to the other side of the sphere. So, back here, you want to move it over there, you want to move it all around the sphere, right? Now, what happens? If you move it this way, right? You convolve here, you move it this way, you convolve here. You see already by the red arrows, where is up? Up is where the red arrow is point, right? If you move it along here, the red arrow is always point up, up, up, up, up. 
Okay, so you arrive back here with your kernel. I'm going to try to draw this dashed, with the up in the kernel being this direction, because you've moved it around like this. But if you, for some reason, choose to move your kernel in another direction, namely in this direction up here, then, as you can see, if you place it here, and then here, and here, and back here, and ultimately here, where is up? If you just keep track of where up is in your kernel, it's always going to be to the front of the sphere. So, on one hand, you have up pointing to the back here, and on the other hand, you have up pointing to the front here. So, this doesn't match, so it actually depends on which path you take from this original point to any other point. It depends which path you take, how your kernel is going to end up there. And that's of course very unfortunate, because we're not used to this on a 2D image: if I move it down first and then over here, sorry, where is up? If up is here, then down here up is still here, and over here up is still here. And if I move it straight over here, and then down, and then here, and then here, you see up is always the same direction. There is no problem on a flat surface. That's why we can simply define it as we do, but on a sphere, or any kind of manifold, parallel transport is path dependent, in technical terms. The way you transport a thing from one place to another really depends on the path you take. So, this paper is trying to address this problem and define a convolution on any manifold. So, how is this done? First of all, to define a convolution on the curved surface, what they do is they say, okay, we have a convolutional filter, and the convolutional filter is actually sort of a flat object. It works in what's called the tangent space of the manifold. The tangent space is a flat space that you can define at any point on the manifold. So, here the manifold is the sphere. At some point P, you define the tangent space as simply the tangent plane: imagine a straight sheet touching the surface at point P. So, this is now a flat space where we can define, let's say, a regular convolutional kernel, as we did, laying it up here. The question is, how do you map points from the sphere to this tangent space and back? And that's happening via the exponential map. The exponential map in this sense is not the same as the exponential map that you are used to, as in simply exponentiating things. The exponential map here basically means: if I want to go from a point in the tangent space to a point on the manifold, what I do is I take this vector here, which is a straight vector in the tangent space, and I go on the manifold in this direction for a predefined length, usually a length of one. I walk in this direction along the geodesic, so along the shortest path, and then I stop, and where I end up, that's the corresponding point to this point here in the tangent space. So, to define a convolution, it means that first you lay down your kernel, and then for each element in the kernel (let me use a blue here) you multiply that kernel entry by the corresponding point on the manifold itself.
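In symbols, my reading of this naive tangent-plane construction is roughly the following (the notation is mine, so take it as a sketch rather than the paper's exact formula):

```latex
% Naive manifold convolution via the exponential map: the kernel K lives on the flat
% tangent space T_pM at point p, and exp_p(v) walks from p along the geodesic in direction v.
(K \star f)(p) \;=\; \int_{T_pM} K(v)\, f\!\left(\exp_p(v)\right)\, dv
```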
So, by mapping this point in the tangent space to the manifold, or you can also say you basically back-project from the manifold to the tangent space, and there you do your regular convolution. So, that's how you define a convolution in the classic sense, if you have, for example, a sphere. And what the authors here, of course, notice already is that this is dependent on how you get there. In technical terms, this is dependent on your gauge. So, the gauge basically is defining this coordinate frame in the tangent space. This tangent vector here is an abstract object, it's just a vector, but in order to do something with it, in order to do something with a kernel and a convolution and so on, you have to express it in numbers. And numbers are expressed with respect to a basis, usually. If you have a vector v here, you can express it with respect to these two basis vectors. So, maybe v is two along this one and three along that one, so v can be represented as the vector (two, three) with respect to the basis E1, E2. And this choice of basis is basically what's called a gauge. Now, I'm probably butchering this topic completely for any physicists or mathematicians listening, but this should give you an impression. So, this choice of basis is called a gauge, and we can imagine a different choice of basis. Let me draw another basis here: E1 is here, E2 is here. The new coordinates would be something like: v can also be expressed in this new basis as, let's say, one in this direction, and this one is very far, so maybe five in that direction. And to transform between the two, there are formulas, you know them from linear algebra, from vector spaces; in general, they're called gauge transformations. And if we want our convolution to be invariant to the chosen coordinate frame, what we mean in technical terms is that the convolution should be gauge equivariant. That means no matter which basis we choose, whether we choose this one or that one, the result should basically be the same. So, within the computation of the convolution, we must account for which gauge is chosen and then basically have the result be invariant. And with the result, we don't mean the numbers of the result, because those will change, but the actual object that results, the geometric object that results, should be equivariant under gauge transformations. It sounds very technical, but the way I understand it is: you want to define a convolution on these manifolds such that the result is not dependent on exactly how you shift the kernel around, as long as you account for the fact that you shifted it around this way. It should give you the same result. So, for this they define a condition, and the condition is that the kernel must behave as such. So, v is the input here, and g inverse is a transformation of the gauge, as I understand it. And basically, if you transform the input by a different coordinate frame, then the kernel applied to that transformed input must behave exactly as the kernel applied to the original object, and then perturbed by these two operations.
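Written out, the condition I believe they impose on the kernel looks roughly like this (this is my hedged reading of the paper, with rho_in and rho_out the representations acting on the input and output feature channels):

```latex
% Gauge equivariance condition on the kernel: for every gauge transformation g
% (e.g. a rotation of the tangent-plane coordinate frame) and every tangent vector v,
K\!\left(g^{-1} v\right) \;=\; \rho_{\mathrm{out}}(g)^{-1}\, K(v)\, \rho_{\mathrm{in}}(g)
```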
So, this you might recognize: you might know things like this from discussions, maybe, of what it means for a function to be linear or something similar, where the function applied to a transformed version of the input must correspond to the function applied to the original version of the input, with the result then transformed by some operation. So, this is a condition on the kernel of the convolution. And if you define your convolution in this way, which is a modification to the convolution on the tangent space that we had, then your result will be gauge equivariant. So what is this new convolution they define? They say if you do the convolution this way, then these things will hold. So, what is this way? Basically, again, you convolve the kernel with the input, where f here is the input and K is the kernel. But, if we come up here again, you have to do a slight modification: with your kernel here, if you want to convolve, let's say, this point here, you would not combine this kernel entry with the point along the exponential map corresponding to it, this point here; what you would do is you would transport that point back along the geodesic to here, and then you would compute your regular convolution. That, sorry, is what this term here means, technically. If you don't fully understand it, don't worry, I guess I don't either; it is simply saying that if you perform convolutions on a manifold in this way, and you have the appropriate kernel, then they will be gauge equivariant. So this is pretty cool, because what they do next is they define the convolution on an icosahedron, and an icosahedron is a 3D geometric shape that's made of triangles; I can try to, maybe they have drawn it, I think they have, yes. So, all right, this is an icosahedron, and they can now define a convolution on this, where a filter basically looks like this, it's this kind of hexagon. And the filter is shifted around, and of course, the problem is: whenever it shifts over one of these boundaries here, or over these corners here, what do you do then? Because if you look at it, you can't really flatten the corner; if you try to flatten the corner, you're going to have this wedge sticking out, and that's terrible. You're going to have a wedge here sticking out if you try to flatten the corner.
So, you have to define the convolution on this, and they do it in their framework. Specifically, what they do is they flatten and pad the icosahedron to this representation: they cut it into five pieces and they have to pad a bit; you see here, each colored edge corresponds to this colored edge, so that would be padded from here, and then they put this into a regular 2D image, where the colored things are sometimes repeated. And then they define the filters in the following way. So, these are the filters for, basically, a six-channel input image, and what they have to do is weight sharing between the filters in a very specific way. In order for the kernel to have these properties, they need to replicate these filters down here, and if you look at the different colors in these different channels, they each have different intensities, and if you look down here, they are all slightly different, which means they are all slightly different linear combinations, or rotations basically, of the filter up here; they are all differently arranged, but basically this blue field here is this blue field, but is also, let's see, this one, and this one, and this one. So the weights of these original filters are arranged such that the weights are shared in this form down here. And if you arrange them like this, when you replicate each filter six times, because you also want six output orientation channels, then the filter will have the desired properties, and your convolution will be gauge equivariant. So the complete algorithm is actually down here: if they pad the image in the correct way into the 2D image, and they expand the kernel and arrange it as we just saw, they can use a regular 2D convolution to compute their result, and that's pretty cool, and this means this is also very, very efficient on this icosahedron. So what they do is they apply this to IcoMNIST, where they project MNIST onto an icosahedron, so they take the MNIST image and project it onto this, and then they try to classify it on that, and they can actually show that their method outperforms other methods and learns these invariances, so it learns the symmetries of the icosahedron, or, sorry, is invariant to them. Being invariant to the symmetries means you don't have to learn them anymore; if you're not invariant to symmetries, it means you have to learn each one of them separately, right, but if you're invariant to symmetries, then you only have to learn one thing once, and then if the icosahedron is rotated, you just say, well, that's the same thing as this other thing. They also apply this, interestingly, to climate pattern segmentation, and also a kind of 2D-3D omnidirectional segmentation, where you're in a 3D room and you have an omnidirectional picture, a spherical picture from everywhere, and you're asked to segment things in the room, and they actually outperform all other methods on these datasets. So I find this extremely cool, that this kind of ultra-theoretical work, starting out as ultra-theoretical, then gets implemented into something that beats state-of-the-art methods on relevant tasks. Alright, so that was just a brief overview and a quick and dirty look at these things, but I hope you got something out of it, and thus far, that was it from me.
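To make that last recipe a bit more concrete, here is a toy PyTorch sketch of the core trick: expand a hexagonal kernel into its six rotated copies and apply them with a single conv2d call. The five-chart unfolding, the boundary padding and the full weight sharing over input orientation channels from the paper are left out, and the hexagon-to-3x3 embedding below is an assumption of mine, so treat this as an illustration rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def hex_rotations(base):
    # base: (C_out, C_in, 7) = one centre weight plus 6 neighbour weights of a hexagonal kernel.
    # Rotating the kernel by 60 degrees just cyclically permutes the 6 neighbour weights.
    centre, neigh = base[..., :1], base[..., 1:]
    return [torch.cat([centre, neigh.roll(shifts=r, dims=-1)], dim=-1) for r in range(6)]

def hex_to_square(k):
    # Embed the 7 hexagonal weights into a 3x3 grid; on a skewed hex grid two corners of the
    # 3x3 neighbourhood stay unused (assumed layout here, masked to zero).
    c_out, c_in, _ = k.shape
    sq = torch.zeros(c_out, c_in, 3, 3)
    pos = [(1, 1), (0, 1), (0, 2), (1, 2), (2, 1), (2, 0), (1, 0)]  # centre + 6 neighbours
    for i, (r, c) in enumerate(pos):
        sq[:, :, r, c] = k[:, :, i]
    return sq

c_in, c_out = 3, 8
base = torch.randn(c_out, c_in, 7)
# Stack the 6 rotated copies along the output channels -> (6 * c_out, c_in, 3, 3).
weight = torch.cat([hex_to_square(k) for k in hex_rotations(base)], dim=0)

x = torch.randn(1, c_in, 64, 320)   # stand-in for a padded, unfolded icosahedron signal
y = F.conv2d(x, weight, padding=1)  # the single conv2d call
print(y.shape)                      # torch.Size([1, 48, 64, 320]): 6 orientation copies per feature
```

The point of the construction is just that, once the kernel is expanded this way, the whole gauge equivariant convolution on the unfolded icosahedron reduces to one ordinary 2D convolution, which is why it is so scalable.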
Bye-bye. | [{"start": 0.0, "end": 4.0, "text": " What you're looking at here are manifolds."}, {"start": 4.0, "end": 9.0, "text": " Specifically, you're looking at 2D manifolds embedded in a 3D space."}, {"start": 9.0, "end": 15.0, "text": " So, naturally, these are some kind of bodies that have a surface."}, {"start": 15.0, "end": 21.0, "text": " And one of the things you might want to do with a manifold like this is to"}, {"start": 21.0, "end": 26.0, "text": " define a convolutional neural network to work on this surface."}, {"start": 26.0, "end": 31.0, "text": " So, usually, we have convolutional neural network working on flat surfaces,"}, {"start": 31.0, "end": 37.0, "text": " such as images. But what if you could actually work on a manifold like this?"}, {"start": 37.0, "end": 42.0, "text": " An easy example is a sphere. You might want to work on a sphere."}, {"start": 42.0, "end": 46.0, "text": " Why is that useful? Maybe you want to predict the climate."}, {"start": 46.0, "end": 52.0, "text": " And then you actually want to work on the Earth's surface, which is approximated by a sphere."}, {"start": 52.0, "end": 56.0, "text": " So, today we'll look at the following paper."}, {"start": 56.0, "end": 63.0, "text": " Gage-equivariant convolutional networks and the ICO-SAHEDRAL CNN by Tako Kohen,"}, {"start": 63.0, "end": 71.0, "text": " Maurice Weiler, Burkai Kei-chang Kei-chang No Glue and Max Welling."}, {"start": 71.0, "end": 81.0, "text": " So, as I already said, this paper tries to define convolutional neural networks on any kind of manifold."}, {"start": 81.0, "end": 85.0, "text": " So, what's the problem inherently when you're doing this?"}, {"start": 85.0, "end": 91.0, "text": " Can't you just, you know, place a filter, move it around like you do in a regular CNN?"}, {"start": 91.0, "end": 99.0, "text": " That's exactly the problem, actually. So, if you have a picture and let me draw a picture"}, {"start": 99.0, "end": 107.0, "text": " of a cat. Right? Cat here. Here, here, here. Hi, hi. All right. Cat smiling."}, {"start": 107.0, "end": 113.0, "text": " This is a terrible cat. What you do is you have your filter, right?"}, {"start": 113.0, "end": 119.0, "text": " And that's a little patch in the image. You're just going to move this filter, move it around, move it around."}, {"start": 119.0, "end": 126.0, "text": " And at each point, you convolve the filter. If this is larger, you convolve each of the elements of the filter."}, {"start": 126.0, "end": 133.0, "text": " Here, maybe you have nine elements. So, each of these elements here is convolved with the underlying image."}, {"start": 133.0, "end": 139.0, "text": " At the end, you aggregate all of them into a single point, usually by adding them up."}, {"start": 139.0, "end": 147.0, "text": " And there, you, from this image, you produce a new image that is a different thing."}, {"start": 147.0, "end": 154.0, "text": " So, if this kernel here, for example, is a specific kernel that detects lines you might end up with,"}, {"start": 154.0, "end": 164.0, "text": " or that detects specifically up down lines, you might end up with just the lines that go up and down in this."}, {"start": 164.0, "end": 171.0, "text": " So, the eyes here, here, right? So, this might be the result of this convolution."}, {"start": 171.0, "end": 177.0, "text": " Of course, in C and then these convolutional kernels then are learned as parameters."}, {"start": 177.0, "end": 188.0, "text": " So, it seems pretty easy, right? 
You just simply take a kernel and kind of shift it around each point you convolve the underlying image."}, {"start": 188.0, "end": 197.0, "text": " And that's it. Well, it's not so easy if you work on a manifold. And why is that illustrated here on a sphere?"}, {"start": 197.0, "end": 204.0, "text": " So, if you have a sphere and you place a kernel, it really matters which direction you place the kernel in."}, {"start": 204.0, "end": 210.0, "text": " So, I mean, it does on an image, but bear with me. So, here you place a kernel in the direction of this arrow, right?"}, {"start": 210.0, "end": 220.0, "text": " You place the kernel, maybe like this here, you place your little kernel on it, and you say, up, basically, up is here."}, {"start": 220.0, "end": 226.0, "text": " And then you move that kernel around, and ultimately, you want to move it all the way to the other side of the sphere."}, {"start": 226.0, "end": 232.0, "text": " So, back here, you want to move it over there, you want to move it all around the sphere, right?"}, {"start": 232.0, "end": 239.0, "text": " Now, what happens? If you move it this way, right? You convolve here, you move it this way, you convolve here."}, {"start": 239.0, "end": 246.0, "text": " You see already by the red arrows, where is up? Up is where the red arrow is point, right?"}, {"start": 246.0, "end": 252.0, "text": " If you move it along here, the red arrow is always point up, up, up, up, up."}, {"start": 252.0, "end": 266.0, "text": " Okay, so you arrive back here with your kernel, I'm going to draw this, try to draw this dashed with the up in the kernel, being this direction, because you've moved it around like this so."}, {"start": 266.0, "end": 275.0, "text": " But if you, for some reason, choose to move your kernel in another direction, namely in this direction up here."}, {"start": 275.0, "end": 286.0, "text": " Then, as you can see, if you place it here, and then you place it here, you place it here, you place it back here, and ultimately here, where is up?"}, {"start": 286.0, "end": 294.0, "text": " If you just keep track of where up is in your kernel, it's always going to be to the front of the sphere."}, {"start": 294.0, "end": 302.0, "text": " So, on one hand, you have up being to the back here, and on the other hand, you have one up being to the front here."}, {"start": 302.0, "end": 312.0, "text": " So, this doesn't match, so it actually depends on which path you take from this original point to any other point."}, {"start": 312.0, "end": 318.0, "text": " It depends which path you take, how your kernel is going to end up there."}, {"start": 318.0, "end": 331.0, "text": " And that's of course very unfortunate, because we're not used to this on this 2D thing, because if I move it down first and then up here, over here, sorry, where is up in my,"}, {"start": 331.0, "end": 337.0, "text": " so if up is here, if it's down here, up is here, and over here, up is here."}, {"start": 337.0, "end": 346.0, "text": " And if I kind of move it straight over here, and then down, and then here, and then here, you see up is always the same direction."}, {"start": 346.0, "end": 351.0, "text": " There is no problem in a flat surface."}, {"start": 351.0, "end": 363.0, "text": " That's why we can simply define it as we do, but in on a sphere or any kind of manifold, it's called parallel transport is path dependent in technical terms."}, {"start": 363.0, "end": 370.0, "text": " The way you transport it, thing from one place to another really depends on the path you 
take."}, {"start": 370.0, "end": 379.0, "text": " So, this paper is trying to address this problem and define a convolution on any manifold."}, {"start": 379.0, "end": 382.0, "text": " So, how is this done?"}, {"start": 382.0, "end": 396.0, "text": " First of all, to define a convolution on the curved surface, what they do is they say, okay, we have a convolutional filter, and the convolutional filter is actually sort of a flat object."}, {"start": 396.0, "end": 406.0, "text": " And it works in what's called the tangent space of the manifold, the tangent space, is a flat space that you can define at any point on the manifold."}, {"start": 406.0, "end": 421.0, "text": " So, here the manifold is the sphere. At some point P, you define the tangent space as simply the tangent, kind of a, imagine a sheet, a straight sheet, touching the surface at point P."}, {"start": 421.0, "end": 432.0, "text": " So, this is now a flat space where we can define a, let's say, a regular convolutional kernel, as we did, laying it up here."}, {"start": 432.0, "end": 440.0, "text": " The way, the question is, how do you map points from the sphere to this tangent space and back? And that's happening via this exponential map."}, {"start": 440.0, "end": 451.0, "text": " The exponential map in this sense is not the same as the exponential map that you are used to. I simply, you know, exponentuating things."}, {"start": 451.0, "end": 467.0, "text": " The exponential map here basically means if I want to go from the, from a point in the tangent space to a point on the manifold, what I do is I take this vector here, which is a straight vector in the tangent space."}, {"start": 467.0, "end": 487.0, "text": " And I go on the manifold in this direction for a, for a predefined length. So, this is usually a length of one on the manifold for a predefined length. I walk into this direction along the geodesic."}, {"start": 487.0, "end": 500.0, "text": " So, along the shortest path into this direction, and then I stop and where I end up, that's where I basically end up. So, that's the corresponding point to this point here on the tangent space."}, {"start": 500.0, "end": 524.0, "text": " So, to define a convolution fully, it means that first you lay your kernel, and then for each element in the kernel, you will multiply that kernel entry. Let me use a blue here. And multiply that kernel entry by the corresponding point on the manifold itself."}, {"start": 524.0, "end": 535.0, "text": " So, by mapping this point in the tangent space to the manifold, or you can also say you basically back project from the manifold to the tangent space, and there you do your regular convolution."}, {"start": 535.0, "end": 553.0, "text": " So, that's how you define a convolution in the classic sense, if you have, for example, a sphere. And what the author is here, of course, notice already is that this is dependent on how you get there."}, {"start": 553.0, "end": 564.0, "text": " And in technical terms, it's called this is dependent on your gauge. So, the gauge basically is defining this coordinate frame in the tangent space."}, {"start": 564.0, "end": 576.0, "text": " So, this tangent vector here is an abstract object, it's just a vector, but in order to do something with it, in order to do something with a kernel and convolution, and so on, you have to express it in numbers."}, {"start": 576.0, "end": 589.0, "text": " And numbers are expressed with respect to a base, usually. 
If you have a vector v here, you can express it with respect to this two basis vectors."}, {"start": 589.0, "end": 610.0, "text": " So, maybe v is here is two, and here is three. So, v can be represented as the vector two, three, with respect to the base E1, E2. And so, this choice of base basically is what's called a gauge."}, {"start": 610.0, "end": 620.0, "text": " Now, I'm probably butchering this topic completely for any physicists or mathematicians listening, but just kind of give you an impression."}, {"start": 620.0, "end": 637.0, "text": " So, this choice of basis is called a gauge. And we can imagine a different choice of basis. So, let me draw another basis here. So, another basis might be one, two."}, {"start": 637.0, "end": 657.0, "text": " So, E1 is here, E2 is here. So, the new coordinates here would be something like v can also be expressed in this new basis as, let's say one, here's maybe one, and this is very far, so this is maybe five, so five in this direction."}, {"start": 657.0, "end": 669.0, "text": " And to transform between the two, there is formulas basically from, you know them from linear algebra from vector spaces. In general, they're called gauge transformations."}, {"start": 669.0, "end": 693.0, "text": " And if we want our convolution to be invariant to the basically chosen coordinate frames, we have to say in technical terms what we mean is the convolution should be gauge equivariants. That means no matter which base we choose if we choose this space or if we choose this."}, {"start": 693.0, "end": 709.0, "text": " The result should basically be the same. So, within the computation of the convolution, we must account for the fact of which gauge is chosen and then basically have the result be invariant."}, {"start": 709.0, "end": 725.0, "text": " And with the result, we don't mean the numbers of the result because these will change, but we mean the actual object that is resulting, the geometric object that is resulting should be equivariant under gauge, gauge transformations."}, {"start": 725.0, "end": 754.0, "text": " So, this is a, it sounds very technical, but the way I understand it is basically you want to define a convolution on these manifolds such that you, it's such that the result is not dependent on exactly how you shift the kernel around as long as you account for the fact that you shifted it around this way."}, {"start": 754.0, "end": 770.0, "text": " It should give you the same, the same result. So, for this day, define a condition and the condition is that the kernel must behave as such."}, {"start": 770.0, "end": 799.0, "text": " So, the V is the input here and G minus 1 is a transformation of the gauge as I understand it. And basically if you transform the input by a different coordinate frame, then the kernel applied to that different input must behave exactly as the kernel applied to the original object."}, {"start": 799.0, "end": 828.0, "text": " And then perturbed by these two operations. So, this is, this you might notice this, you might know things like this from discussions, maybe of what it means for a function to be linear or something where the function applied to a transformed version must correspond to the function applied to the original version of the input."}, {"start": 828.0, "end": 857.0, "text": " So, the result transformed by some, some operation. So, if this holds, so this is a condition on the kernel of the convolution. 
And if you, so if you define your convolution in this way, this is a modification to the convolution on the tangent space that we had, then your resulting, your result will be gauge equivariant."}, {"start": 857.0, "end": 878.0, "text": " What does this transformation, what is this new convolution they define, they say if you do the convolution this way, then these things will hold. So, what is this, this way? Basically, again, you convolve the kernel with the input, but you, the F here is the input K is the kernel."}, {"start": 878.0, "end": 902.0, "text": " But what you do, if we come up here again, what you do, you have to do a slight modification, you kernel here, if you want to convolve it, let's say this point here, you would not combine this point with the point along the exponential map corresponding to it."}, {"start": 902.0, "end": 919.0, "text": " Right, this point here, but what you would do is you would transport this point back along the geodesic to here, and then you would, and then you would compute your regular convolution."}, {"start": 919.0, "end": 948.0, "text": " So this means, sorry, this is what this term here means, technically. If you don't understand it, don't worry, I don't either, I guess, this is simply saying that if you perform convolutions in on manifold in this way, and you have the appropriate kernel, then they will be gauge equivariant."}, {"start": 948.0, "end": 972.0, "text": " So this is pretty cool, because what they do next is they define the convolution on an icosahedron, and an icosahedron is a shape, a 3D geometric shape that's made of like triangles, and I can try to, maybe they have drawn it, I think they have it, yes."}, {"start": 972.0, "end": 987.0, "text": " So, all right, this is an icosahedron, and so they cannot define a convolution on this, where a filter is basically, a filter looks like this, it's this kind of hexagon."}, {"start": 987.0, "end": 1008.0, "text": " Yes, and the filter is kind of shifted around, and of course, the problem is whenever it shifts over one of these boundaries here, or whenever it shifts over these corners here, what do you do, what do you do then?"}, {"start": 1008.0, "end": 1021.0, "text": " Because if you look at it, you can't basically flatten the corner, if you try to flatten the corner, you're going to have this wedge sticking out, that's terrible."}, {"start": 1021.0, "end": 1027.0, "text": " You're going to have a wedge here sticking out if you try to flatten the corner."}, {"start": 1027.0, "end": 1055.0, "text": " So, you have to define basically the convolution on this, they do it in their framework, and specifically what they do is they flatten and pad the icosahedron to this representation, so they put it into five pieces, they have to pad a bit, you see here each colored edge here, this colored edge corresponds to this colored edge, so that would be padded from here,"}, {"start": 1055.0, "end": 1084.0, "text": " and then they put this into a regular 2D image, with the color things they are sometimes repeated in this image, and then they define the filters in this following way, so these are the filters for basically for a six channel input image, and what they have to do,"}, {"start": 1084.0, "end": 1111.0, "text": " is they have to do a weight sharing between the filters in a very specific way, and in order for the kernel to have these properties, they need to replicate these filters down here, and if you look the different colors in these different channels, they each have different 
intensities, and if you look down here,"}, {"start": 1111.0, "end": 1139.0, "text": " they are all slightly different, which means they are all slightly different linear combinations of the filter up here, or rotations basically, they are all differently arranged, but they are basically this blue field here, is this blue field, but is also, let's see, this one, and this one, and this one, so the weights here are"}, {"start": 1139.0, "end": 1165.0, "text": " these original filters are basically arranged, such that the weights are shared in this form down here, but if you do this, if you arrange them like this, when you replicate each filter basically, six times, because you also want six output channels, then the filter will have the desired properties, and your convolution will be gauge-equivariant,"}, {"start": 1165.0, "end": 1194.0, "text": " so the applied is to, to, to, ICO-Mness, so the complete algorithm is actually down here, they can actually use, if they pat the image and the correct way to the 2D image, and they expand the kernel to arrange it as we just saw, they can use a regular 2D convolution to compute their result, and that's pretty cool, and this means this also is very, very, very efficient,"}, {"start": 1194.0, "end": 1220.0, "text": " on this ICO-Sahedron, so what they do is, they apply this to ICO-Mness, where they project, basically, they project M-NIST on an ICO-Sahedron, so they take the image M-NIST, and they project it onto this, and then they try to classify it on that, and they can actually show that their method outperforms other method, and learns these invariances,"}, {"start": 1220.0, "end": 1249.0, "text": " so learns the symmetries of the ICO-Sahedron, or base, sorry, is invariant to them, being invariant to the symmetries means you don't have to learn them anymore, if you're not invariant to symmetries, it means you have to learn each one of them separately, right, but if you're invariant to symmetries, then you have only have to learn one thing once, and then if the ICO-Sahedron is rotated, you're just like, Matt, that's just the same thing as this other thing."}, {"start": 1249.0, "end": 1275.0, "text": " They also do this interestingly to climate pattern segmentation, and also a kind of 2 or 3D omni-directional segmentation, where you're in a room, a 3D room, and you have an omni-directional picture, sorry, from everywhere, you have a picture of 3D sphere picture from everywhere, you're asked to segment things in the room,"}, {"start": 1275.0, "end": 1297.0, "text": " and actually outperform all other methods on these data sets, so I find this extremely cool, that kind of this ultra-theoretical work, starting out as ultra-theoretical, then gets implemented into something that beats state of the art methods on relevant tasks."}, {"start": 1297.0, "end": 1308.0, "text": " Alright, so that was just a brief overview and a very dirty look at these things, but I hope you got something out of it, and thus far, that was it from me."}, {"start": 1308.0, "end": 1335.0, "text": " Bye-bye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=H6Qiegq_36c | Processing Megapixel Images with Deep Attention-Sampling Models | Current CNNs have to downsample large images before processing them, which can lose a lot of detail information. This paper proposes attention sampling, which learns to selectively process parts of any large image in full resolution, while discarding uninteresting bits. This leads to enormous gains in speed and memory consumption.
https://arxiv.org/abs/1905.03711
Abstract:
Existing deep architectures cannot operate on very large signals such as megapixel images due to computational and memory constraints. To tackle this limitation, we propose a fully differentiable end-to-end trainable model that samples and processes only a fraction of the full resolution input image. The locations to process are sampled from an attention distribution computed from a low resolution view of the input. We refer to our method as attention sampling and it can process images of several megapixels with a standard single GPU setup. We show that sampling from the attention distribution results in an unbiased estimator of the full model with minimal variance, and we derive an unbiased estimator of the gradient that we use to train our model end-to-end with a normal SGD procedure. This new method is evaluated on three classification tasks, where we show that it allows to reduce computation and memory footprint by an order of magnitude for the same accuracy as classical architectures. We also show the consistency of the sampling that indeed focuses on informative parts of the input images.
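A minimal sketch of what this abstract describes might look as follows; the attention_net, feature_net and classifier modules and the patch-coordinate mapping are placeholders I made up, not the authors' code, and the paper additionally covers sampling without replacement with the appropriate reweighting.

```python
import torch
import torch.nn.functional as F

def attention_sampling_forward(x_full, attention_net, feature_net, classifier,
                               n_patches=5, patch=100, scale=0.1):
    """One forward pass of the attention-sampling idea for a single image (1, C, H, W)."""
    # 1) Attention distribution computed from a low-resolution view of the input.
    x_low = F.interpolate(x_full, scale_factor=scale, mode="bilinear", align_corners=False)
    logits = attention_net(x_low)                     # placeholder CNN -> (1, 1, h, w) scores
    h, w = logits.shape[-2:]
    a = logits.flatten(1).softmax(dim=1)[0]           # (h * w,) normalised attention
    # 2) Sample a handful of locations i.i.d. from that distribution.
    idx = torch.multinomial(a, n_patches, replacement=True)
    H, W = x_full.shape[-2:]                          # assumes the image is larger than the patch
    feats = []
    for i in idx:
        r, c = divmod(int(i), w)
        r0 = min(int(r / h * H), H - patch)           # low-res location -> full-res patch corner
        c0 = min(int(c / w * W), W - patch)
        feats.append(feature_net(x_full[..., r0:r0 + patch, c0:c0 + patch]))  # (1, d) each
    # 3) Because the locations were drawn from the attention distribution itself, the plain
    #    average of the patch features is an unbiased Monte-Carlo estimate of the full
    #    attention-weighted sum over all locations.
    return classifier(torch.stack(feats).mean(dim=0))
```

The number of sampled patches and the patch size are the knobs that trade accuracy against compute and memory, which is what makes the method practical on megapixel inputs.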
Authors: Angelos Katharopoulos, François Fleuret | Hi there. Today we're looking at processing megapixel images with deep attention sampling models by Angela's Cattaro-Poulouse and Francois Fleure. So this another paper that I saw they talk of at ICML and it's a pretty cool idea, it's pretty simple and apparently it works very well. So consider the following image here of a street situation and ask yourself if a self-driving car sees this. What are the kind of things it needs to be aware of? So of course one of the things it needs to be aware of is like the road, the cars and so on but also what's in circle in red here, the street sign and the street sign especially is important because there's a number on it and you want to see what the number is otherwise you won't be able to adjust your speed. So if this is now a really large image so if the camera is really good and the dimensions of this image are really large then current machine learning methods have a problem because current machine learning methods kind of go up to maybe something like 200 by 200 pixels or current image net models some down samples on so if this is much larger than this what current machine learning models would do is they would simply down sample like compress the size just compress it a bit and so on and by that as you see here on the right if the original patch in the image you could you know you could cut it out and large it it would look like this if you compress the whole image the same patch would now look like this blurred so in the bottom half you'd be able to recognize the number in the top half you wouldn't so as standard CNN might be able to recognize the road and the cars still at the lower resolution but not the speed sign what we want is a method that can selectively pay attention to parts of the image that it finds interesting and then look at those parts in full detail while basically deciding to discard other parts completely such as the sky here so this paper is one that does this and does so in a very efficient manner so the basic premise is very simple all right I'm going to show it on this on the same image so what you do is first you actually compress the image so this image would become a smaller image right so here maybe this is 1000 by 2000 you compress it down to maybe 100 by 200 still the same image but compressed on here's the road here's a bunch of trees I'm very good at drawing trees and here's this street sign and here is a car and here is another car all right so and there's a sky up here so now what you do is on this smaller version you classify every location I guess you could classify you could sub-sample but you want to classify every single location on it on how interesting is it and what they do is they take this and just put it through what they call an attention network which is just just a neural network in their case it's a CNN that for each location here for each blue location outputs a function a of a and let's call it a x y at coordinates x and y of this image x okay this is stupid notation that's a of x so the image is x at coordinates i j right so all of these blue things here are eyes and j's different eyes and j's and then does this gives you now if you normalize correctly so if you normalize over all the a's and i j i j if you normalize this gives you a distribution over this image so if we look at it in like 1d this gives you like a distribution not to continue this one in this case a discrete one how interesting is each patch and at the end if you have this 
distribution, so let's finish here. What you want to do is say which are the most interesting locations. So this one's pretty high and these are very high, so that might correspond to over here, might correspond to some location, so this location is very high and these locations are very interesting, and only these locations you take out, and then only those you process in full resolution. So you might have extracted, let's say, four patches, so now you have four of these patches, and each of them individually you run through a second neural network, another CNN, which is called F, the feature network. So the feature network will take a patch and output a vector of features, right, so you'll feed those in and get the feature vectors out. And then your final output, which they call G, let me colorize this, so G is now the final output, or actually that's not called G, let's call it O, the output, is: you sum over all the patches you have extracted down here, so the patch number P over all your patches, and you sum these features F of patch P, right, and P might be at location IJ, let's put IJ here, so IJ in the extracted patches, and you weigh each feature by how much attention it got at that location. Right, so it looks more complicated than it is: what you do is you simply compute these features by using this neural network only at the positions where the attention network says it's interesting, then you get the features from the interesting positions and you basically just weigh them by how much attention they got in the attention distribution, and that will be your final output of the network. And it makes intuitive sense, like one network decides what is interesting, the other network decides what are we going to do with the interesting things in this image. And the cool thing about this is you can basically decide how many of these patches you want to extract, you can decide at what resolution you want to process this image, and all of these are parameters that you set by how much time you have for computation and how much memory you have for your computation, so that's pretty cool, pretty modular, scale up and scale down. And another cool thing is the theoretical guarantees that they give. So basically here they prove that the way they do it, especially by extracting the patches, especially if they have an unbiased, sorry, especially if they have sampling without replacement, is that if they weigh the things correctly and if they do the things correctly, they show that this is actually an unbiased estimator of the true neural network, if you were to evaluate on the full image, basically on each patch in full resolution. So only taking the ones where the attention focuses is an unbiased estimator, and not only is it an unbiased estimator, it is in fact the estimator with the smallest variance, and that's what they prove here, so the minimum variance estimator, and this is pretty interesting, pretty cool, and works pretty well. They also show how to derive the gradient update when you train with this attention sampling, so now you train your machine learning system not on the whole image but only on a subset of the image patches, but it still behaves in expectation as if you were to train on the entire images, so pretty neat. So here they show how this compares to a full CNN, in this case we have the full CNN where the picture is simply downsampled and then classified, and this is on what's called megapixel MNIST. So in megapixel MNIST you have
a large image and you put three digits in there that are the same, for example five five five, from the MNIST dataset, you put in two random other digits, like two and three, and you also put a bunch of noise patches somewhere, right. So the task is to recognize which is the dominant digit, here in this case it would be five, right, five, five, where was the other one, five here. So if you give this to a regular CNN, you see it does about this well, this is the training loss here, and this is the test loss, and it takes this much time, right, time per epoch here, and this much time to evaluate. If you now use this attention sampling, and as I said you can actually modulate how many patches you want to take, so as you go down you take more patches, we would expect it to take more time, and this is exactly what happens. You see for example down here in the test error, if you take five patches per image it takes very little time, but the error, I mean the error is still better than if you use the CNN, simply because the CNN cannot pay attention to details. As you use more patches, your test error drops, and your training loss also drops, so using more patches will actually give you a better and better performing model, but you sacrifice a little bit of time, though still never as slow as with the full CNN, even though that is a downsampled CNN, right. So that is very interesting and very cool, that not only do they beat the baseline in terms of error but also by a lot in terms of speed. If you look at what the model does as it learns, here you see, for a given image, and this is always the same image from the dataset, at the beginning they have actually marked where the three relevant digits are in the picture with the red circles. So if you look at how this distribution evolves over the training of this model, it's pretty interesting: yellow basically means high attention, so at the beginning you have high attention everywhere in the image, right, and then as you go on and on you see, for example here, it pays attention to all the locations where there is something in the image, right, this could be one of these three digits but it could also be one of the things that is trying to distract the model, like the false digits or the noise patches, and as you go more and more it really learns to only pay attention to the relevant digits and then classify those at full resolution. So this really shows that this kind of attention distribution learns something very meaningful. They do more experiments on two datasets: this is a histopathology dataset right here, where the goal is, I think, to recognize these epithelial cells, this type of cell, and you can see that this here is the baseline and this here is the new method, and the baseline basically does a similar thing, namely it processes the image in patches, but it processes every single patch, maybe in succession, but still every single patch, whereas the attention sampling only processes the patches that the attention distribution suggests. And this other dataset here is a street sign dataset, the one you saw at the beginning right here, and again I think this is the baseline and this is the attention sampling, so both learn to pay attention to the street signs, but again the attention sampling is much more efficient. So here you see the baseline performance, the attention sampling performance is similar in terms of test error, but if you look at how
much time the baseline uses per sample and how much memory and then compare this to the attention sampling you see that they save at least an order of magnitude in time and memory and the same thing goes for the street sign data set you see test error here and then test error is similar for the attention sampling but again time memory much much lower so the attention sampling is faster and is more memory efficient than the baseline and that makes it makes it easy to process these megapixel images even on here they say process megapixel images in a single CPU or GPU and that really I like this because it kind of brings the research back to let's say regular people or maybe universities that don't have as much money as large companies and so all in all very cool paper very neat experiments they have a lot in the appendix check it out where they show their attention distribution in these images their theoretical analysis is pretty easy to follow if you want to check that out and with that thanks for listening and bye bye | [{"start": 0.0, "end": 5.0200000000000005, "text": " Hi there. Today we're looking at processing megapixel images with deep"}, {"start": 5.0200000000000005, "end": 13.02, "text": " attention sampling models by Angela's Cattaro-Poulouse and Francois Fleure. So this"}, {"start": 13.02, "end": 21.36, "text": " another paper that I saw they talk of at ICML and it's a pretty cool idea, it's"}, {"start": 21.36, "end": 27.02, "text": " pretty simple and apparently it works very well. So consider the following"}, {"start": 27.02, "end": 38.0, "text": " image here of a street situation and ask yourself if a self-driving car sees"}, {"start": 38.0, "end": 44.86, "text": " this. What are the kind of things it needs to be aware of? So of course one of"}, {"start": 44.86, "end": 49.7, "text": " the things it needs to be aware of is like the road, the cars and so on but also"}, {"start": 49.7, "end": 56.1, "text": " what's in circle in red here, the street sign and the street sign especially"}, {"start": 56.1, "end": 60.64, "text": " is important because there's a number on it and you want to see what the"}, {"start": 60.64, "end": 66.48, "text": " number is otherwise you won't be able to adjust your speed. 
So if this is now a"}, {"start": 66.48, "end": 71.28, "text": " really large image so if the camera is really good and the dimensions of this"}, {"start": 71.28, "end": 77.1, "text": " image are really large then current machine learning methods have a problem"}, {"start": 77.1, "end": 84.86, "text": " because current machine learning methods kind of go up to maybe something"}, {"start": 84.86, "end": 91.3, "text": " like 200 by 200 pixels or current image net models some down samples on so if"}, {"start": 91.3, "end": 95.5, "text": " this is much larger than this what current machine learning models would do"}, {"start": 95.5, "end": 101.5, "text": " is they would simply down sample like compress the size just compress it a bit"}, {"start": 101.5, "end": 108.26, "text": " and so on and by that as you see here on the right if the original patch in the"}, {"start": 108.26, "end": 112.52, "text": " image you could you know you could cut it out and large it it would look like"}, {"start": 112.52, "end": 118.78, "text": " this if you compress the whole image the same patch would now look like this"}, {"start": 118.78, "end": 124.06, "text": " blurred so in the bottom half you'd be able to recognize the number in the top"}, {"start": 124.06, "end": 129.94, "text": " half you wouldn't so as standard CNN might be able to recognize the road and the"}, {"start": 129.94, "end": 135.14, "text": " cars still at the lower resolution but not the speed sign what we want is a"}, {"start": 135.14, "end": 141.74, "text": " method that can selectively pay attention to parts of the image that it"}, {"start": 141.74, "end": 148.46, "text": " finds interesting and then look at those parts in full detail while basically"}, {"start": 148.46, "end": 154.42000000000002, "text": " deciding to discard other parts completely such as the sky here so this"}, {"start": 154.42000000000002, "end": 160.82000000000002, "text": " paper is one that does this and does so in a very efficient manner so the"}, {"start": 160.82000000000002, "end": 168.22, "text": " basic premise is very simple all right I'm going to show it on this on the same"}, {"start": 168.22, "end": 175.06, "text": " image so what you do is first you actually compress the image so this image"}, {"start": 175.06, "end": 183.94, "text": " would become a smaller image right so here maybe this is 1000 by 2000 you"}, {"start": 183.94, "end": 189.9, "text": " compress it down to maybe 100 by 200 still the same image but compressed on"}, {"start": 189.9, "end": 196.54, "text": " here's the road here's a bunch of trees I'm very good at drawing trees and here's"}, {"start": 196.54, "end": 203.14, "text": " this street sign and here is a car and here is another car all right so and"}, {"start": 203.14, "end": 211.06, "text": " there's a sky up here so now what you do is on this smaller version you"}, {"start": 211.06, "end": 218.82, "text": " classify every location I guess you could classify you could sub-sample but you"}, {"start": 218.82, "end": 226.85999999999999, "text": " want to classify every single location on it on how interesting is it and what"}, {"start": 226.85999999999999, "end": 232.9, "text": " they do is they take this and just put it through what they call an attention"}, {"start": 232.9, "end": 240.66, "text": " network which is just just a neural network in their case it's a CNN that for"}, {"start": 240.66, "end": 252.57999999999998, "text": " each location here for each blue location outputs a function a of a and let's"}, {"start": 
252.57999999999998, "end": 262.14, "text": " call it a x y at coordinates x and y of this image x okay this is stupid"}, {"start": 262.14, "end": 271.14, "text": " notation that's a of x so the image is x at coordinates i j right so all of these"}, {"start": 271.14, "end": 278.58, "text": " blue things here are eyes and j's different eyes and j's and then does this"}, {"start": 278.58, "end": 282.58, "text": " gives you now if you normalize correctly so if you normalize over all the a's"}, {"start": 282.58, "end": 291.06, "text": " and i j i j if you normalize this gives you a distribution over this image so"}, {"start": 291.06, "end": 297.98, "text": " if we look at it in like 1d this gives you like a distribution not to"}, {"start": 297.98, "end": 307.02, "text": " continue this one in this case a discrete one how interesting is each patch and"}, {"start": 307.02, "end": 314.9, "text": " at the end if you have this distribution so let's finish here what you want to"}, {"start": 314.9, "end": 318.98, "text": " do is you want to say which are the most interesting location so this one's"}, {"start": 318.98, "end": 327.3, "text": " pretty high and these are very high so that might correspond to over here at"}, {"start": 327.3, "end": 332.46000000000004, "text": " my correspond to some location so this location is very high and these"}, {"start": 332.46000000000004, "end": 339.78000000000003, "text": " locations are very interesting and only in these locations you take them out"}, {"start": 339.78000000000003, "end": 343.78000000000003, "text": " and then only those you process and full resolution so you might have"}, {"start": 343.78, "end": 353.46, "text": " extracted let's say four patches so now you have four of these patches and each"}, {"start": 353.46, "end": 359.9, "text": " of them individually you run through a second neural network which is called"}, {"start": 359.9, "end": 367.14, "text": " another CNN which is called F the feature network so the feature network will"}, {"start": 367.14, "end": 377.58, "text": " take a patch and output a vector of features right so you'll feed those in and"}, {"start": 377.58, "end": 386.09999999999997, "text": " output the vector features and then what you do is you simply your final output"}, {"start": 386.1, "end": 404.38, "text": " which they call G let me colorize this so G which is G is now the final output"}, {"start": 404.38, "end": 415.62, "text": " that's not called a G let's call it O output is you sum over all the patches you"}, {"start": 415.62, "end": 425.5, "text": " have extracted down here so the patch number P over all your patches and you"}, {"start": 425.5, "end": 437.3, "text": " sum these features F of patch P right and P might be at location IJ let's put IJ"}, {"start": 437.3, "end": 449.66, "text": " here so IJ in the extracted patches and you weigh each feature by how much"}, {"start": 449.66, "end": 456.74, "text": " attention it got at that location right so it looks more complicated than it is"}, {"start": 456.74, "end": 462.38, "text": " what you do is you simply determine these features by using this neural network"}, {"start": 462.38, "end": 467.02, "text": " only at the position where this neural network says are interesting then you get"}, {"start": 467.02, "end": 473.41999999999996, "text": " the features from the interesting positions and you basically just weigh them"}, {"start": 473.41999999999996, "end": 478.78, "text": " by how much attention they got in the attention distribution and that will"}, {"start": 
478.78, "end": 483.9, "text": " be your final output of the network and it makes intuitive sense like one"}, {"start": 483.9, "end": 488.68, "text": " network decides what is interesting the other network decides what are we"}, {"start": 488.68, "end": 496.41999999999996, "text": " going to do with the interesting things in this image and the cool thing about"}, {"start": 496.42, "end": 502.86, "text": " this is you can basically decide how many of these patches here how many you"}, {"start": 502.86, "end": 508.46000000000004, "text": " want to extract you can decide at what resolution you want to process this"}, {"start": 508.46000000000004, "end": 516.54, "text": " image and all of this are parameters that you said by how much time you have"}, {"start": 516.54, "end": 522.0600000000001, "text": " for computation and how much memory you have for your computation so that's"}, {"start": 522.06, "end": 527.06, "text": " pretty cool pretty modular scale up and scale down and the another cool thing is"}, {"start": 527.06, "end": 535.26, "text": " that theoretical guarantees that they give so basically here they prove that the"}, {"start": 535.26, "end": 540.6199999999999, "text": " way they do it especially by extracting the patch especially if they have"}, {"start": 540.6199999999999, "end": 548.9, "text": " an on biased sorry especially if they have sampling with that replacement is that"}, {"start": 548.9, "end": 554.42, "text": " if they weigh the things correctly and if they do the things correctly they show"}, {"start": 554.42, "end": 562.38, "text": " that this is actually an on biased estimator of the true neural network if you"}, {"start": 562.38, "end": 570.5, "text": " were to evaluate on the full image basically on each patch in full resolution so"}, {"start": 570.5, "end": 578.5, "text": " only taking the ones where the attention focuses is an on biased estimator"}, {"start": 578.5, "end": 585.22, "text": " and not only is it an on biased estimator it is in fact the estimator with"}, {"start": 585.22, "end": 591.34, "text": " the smallest variance and that's what they prove here so the minimum variance"}, {"start": 591.34, "end": 600.74, "text": " estimator and this is this is pretty pretty interesting pretty cool and works"}, {"start": 600.74, "end": 606.34, "text": " pretty well they also show how to derive the gradient update when you train"}, {"start": 606.34, "end": 611.6600000000001, "text": " with this attention sampling so now you train your neural you train your machine"}, {"start": 611.6600000000001, "end": 616.9, "text": " learning system not on the whole image but only on a sub set of the image"}, {"start": 616.9, "end": 622.38, "text": " patches but it still behaves in expectation as if you were to train on the"}, {"start": 622.38, "end": 630.7, "text": " entire images so pretty neat so here they show how this compares to full CNN in"}, {"start": 630.7, "end": 637.3000000000001, "text": " this case we have the full CNN where the picture is simply down sampled and"}, {"start": 637.3000000000001, "end": 642.86, "text": " then classified and this is what's called megapixel amnesty so in megapixel"}, {"start": 642.86, "end": 647.6600000000001, "text": " amnesty you have a large image and you put three digits in there there's a"}, {"start": 647.6600000000001, "end": 654.5400000000001, "text": " same for example five five five from the amnesty dataset you put two random digits"}, {"start": 654.54, "end": 661.3, "text": " others like two three and you put also a bunch of 
noise noise patches somewhere"}, {"start": 661.3, "end": 667.98, "text": " right so the task is to recognize which is the dominant digit here in this"}, {"start": 667.98, "end": 675.4599999999999, "text": " case it would be five right five five where was the other one five here so if"}, {"start": 675.4599999999999, "end": 680.66, "text": " you give this to a regular CNN you see it does about this well this is the"}, {"start": 680.66, "end": 687.2199999999999, "text": " training loss here training loss and this is the test loss and it takes this"}, {"start": 687.2199999999999, "end": 696.26, "text": " much time right time per epoch here and this much time to evaluate sorry if you"}, {"start": 696.26, "end": 700.42, "text": " now use this attention sampling and as I said you can actually modulate how"}, {"start": 700.42, "end": 706.5, "text": " many patches you want to take so as you go down you take more patches we would"}, {"start": 706.5, "end": 710.5, "text": " expect it to take more time this is exactly what happens you see for example"}, {"start": 710.5, "end": 716.62, "text": " down here in the test error if you take five patches per image it takes very"}, {"start": 716.62, "end": 722.42, "text": " little time but the error I mean the error is still better than if you use the"}, {"start": 722.42, "end": 729.86, "text": " CNN simply because you cannot pay attention to details much more as you use more"}, {"start": 729.86, "end": 736.5, "text": " patches your test error drops also your training loss if they drop so using more"}, {"start": 736.5, "end": 741.06, "text": " patches will actually give you a better and better and better performing model"}, {"start": 741.06, "end": 748.34, "text": " but you sacrifice a little bit of time but still not never as as slow as with"}, {"start": 748.34, "end": 757.46, "text": " the full with the CNN so even though it's a downsample CNN right so that is"}, {"start": 757.46, "end": 763.14, "text": " very interesting and very cool that not only do they beat the baseline in terms"}, {"start": 763.14, "end": 770.9, "text": " of error but also a lot in terms of speed if you look at what the model does as"}, {"start": 770.9, "end": 775.6999999999999, "text": " it learns here you see for a given image this is always the same image from the"}, {"start": 775.6999999999999, "end": 780.8199999999999, "text": " data set at the beginning they have actually marked where the relevant the"}, {"start": 780.8199999999999, "end": 787.9399999999999, "text": " three relevant digits are in the picture with the red circle so if you look at"}, {"start": 787.94, "end": 795.46, "text": " how over the training of this model how this distribution evolves is pretty"}, {"start": 795.46, "end": 799.46, "text": " interesting yellow basically means high attention so at the beginning you have"}, {"start": 799.46, "end": 807.3000000000001, "text": " high attention everywhere in the image right and then as you go on and on and on"}, {"start": 807.3000000000001, "end": 813.3000000000001, "text": " you see for example here it pays attention to all the locations where"}, {"start": 813.3, "end": 818.8199999999999, "text": " basically where there is something in the image right this could be one of"}, {"start": 818.8199999999999, "end": 823.3, "text": " these three digits but it could also be one of the digits that it's trying to"}, {"start": 823.3, "end": 828.42, "text": " that is trying to distract the model like the false digits or the noise patches"}, {"start": 828.42, "end": 835.3, 
"text": " and as you go more and more and more it really learns to only pay attention to"}, {"start": 835.3, "end": 841.3, "text": " the relevant digits and then classify those at full resolution so this really"}, {"start": 841.3, "end": 846.8199999999999, "text": " shows that the this this kind of attention distribution learns something very"}, {"start": 846.8199999999999, "end": 857.3, "text": " meaningful they do more experiments on two data sets namely this is a histopathology"}, {"start": 857.3, "end": 863.8599999999999, "text": " data set right here where the goal is I think to recognize this epithelial cells"}, {"start": 863.86, "end": 878.1, "text": " this type of cell and you can see that this here is the baseline and this here is"}, {"start": 878.1, "end": 884.5, "text": " the new method and the baseline basically what it does is it does a similar"}, {"start": 884.5, "end": 889.22, "text": " thing namely it processes the image in patches but it processes every single"}, {"start": 889.22, "end": 895.94, "text": " patch maybe in succession but it still processes every single patch where"}, {"start": 895.94, "end": 900.98, "text": " the attention sampling only processes the patches that the attention"}, {"start": 900.98, "end": 907.3000000000001, "text": " sampling distribution suggests and this other update to set here is a"}, {"start": 907.3000000000001, "end": 910.1800000000001, "text": " street sign date to set that you saw at the beginning"}, {"start": 910.18, "end": 919.9399999999999, "text": " right here and the again I think this is the baseline and this is the attention"}, {"start": 919.9399999999999, "end": 924.5, "text": " sampling so both learn to pay attention to the street signs but again the"}, {"start": 924.5, "end": 930.66, "text": " attention sampling much more efficient so here you see the"}, {"start": 930.66, "end": 935.8599999999999, "text": " baseline performance the attention sampling performance is similar"}, {"start": 935.86, "end": 942.42, "text": " in terms of test error but if you look at how much time the baseline uses"}, {"start": 942.42, "end": 946.9, "text": " per sample and how much memory and then compare this to the attention"}, {"start": 946.9, "end": 953.14, "text": " sampling you see that they save at least an order of magnitude in time and"}, {"start": 953.14, "end": 959.3000000000001, "text": " memory and the same thing goes for the street sign data set you see test error"}, {"start": 959.3, "end": 967.54, "text": " here and then test error is similar for the attention sampling but again time"}, {"start": 967.54, "end": 975.38, "text": " memory much much lower so the attention sampling is faster"}, {"start": 975.38, "end": 981.6999999999999, "text": " and is more memory efficient than the baseline"}, {"start": 981.6999999999999, "end": 988.02, "text": " and that makes it makes it easy to process these megapixel images even on"}, {"start": 988.02, "end": 993.54, "text": " here they say process megapixel images in a single CPU or GPU"}, {"start": 993.54, "end": 999.38, "text": " and that really I like this because it kind of brings the research back to"}, {"start": 999.38, "end": 1005.38, "text": " let's say regular people or maybe universities that don't have as much money"}, {"start": 1005.38, "end": 1012.18, "text": " as large companies and so all in all very cool paper very"}, {"start": 1012.18, "end": 1016.5, "text": " neat experiments they have a lot in the appendix"}, {"start": 1016.5, "end": 1021.86, "text": " check it out where they 
show their attention distribution in these images"}, {"start": 1021.86, "end": 1025.86, "text": " their theoretical analysis is pretty easy to follow if you want to check that"}, {"start": 1025.86, "end": 1055.6999999999998, "text": " out and with that thanks for listening and bye bye"}] |
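The transcript in the row above walks through the attention-sampling pipeline step by step: downsample the megapixel image, score every low-resolution location with an attention network, extract only the highest-attention locations as full-resolution patches, and combine the patch features weighted by their attention. Below is a minimal PyTorch-style sketch of that pipeline. Everything concrete in it is an assumption made for illustration, including the tiny attention_net and feature_net, the patch size, the down-scaling factor, and especially the top-k selection; the paper itself samples patch locations from the attention distribution without replacement and reweights them so the estimator stays unbiased, which this simplified version does not do.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Small placeholder networks: attention_net scores low-resolution locations,
# feature_net embeds full-resolution patches. Sizes are illustrative only.
attention_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
classifier = nn.Linear(16, 2)

def attention_sample_forward(full_image, scale=8, patch=64, k=4):
    """full_image: (1, 3, H, W). Returns class logits from k high-attention patches."""
    _, _, H, W = full_image.shape
    small = F.interpolate(full_image, scale_factor=1.0 / scale,
                          mode="bilinear", align_corners=False)
    scores = attention_net(small).flatten(1)        # one score per low-res location
    attn = F.softmax(scores, dim=1)                 # normalized attention distribution
    w_small = small.shape[-1]

    top = torch.topk(attn[0], k).indices            # keep the k most interesting locations
    feats, weights = [], []
    for idx in top.tolist():
        i, j = divmod(idx, w_small)
        # Map the low-resolution location back to a full-resolution patch centre.
        ci = min(max(i * scale, patch // 2), H - patch // 2)
        cj = min(max(j * scale, patch // 2), W - patch // 2)
        crop = full_image[:, :, ci - patch // 2: ci + patch // 2,
                                 cj - patch // 2: cj + patch // 2]
        feats.append(feature_net(crop))              # one feature vector per patch
        weights.append(attn[0, idx])

    weights = torch.stack(weights)
    weights = weights / weights.sum()                # renormalize over the kept patches
    pooled = (torch.cat(feats, 0) * weights[:, None]).sum(0, keepdim=True)
    return classifier(pooled)

logits = attention_sample_forward(torch.randn(1, 3, 512, 512))
```

As the transcript notes, k, the patch size, and the down-scaling factor are the knobs that trade accuracy against time and memory: only the small image and the k selected patches ever pass through a network at full resolution.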
Yannic Kilcher | https://www.youtube.com/watch?v=1L83tM8nwHU | Manifold Mixup: Better Representations by Interpolating Hidden States | Standard neural networks suffer from problems such as non-smooth classification boundaries and overconfidence. Manifold Mixup is an easy regularization technique that rectifies these problems. It works by interpolating hidden representations of different data points and then training them to predict equally interpolated labels.
https://arxiv.org/abs/1806.05236
Abstract:
Deep neural networks excel at learning the training data, but often provide incorrect and confident predictions when evaluated on slightly different test examples. This includes distribution shifts, outliers, and adversarial examples. To address these issues, we propose Manifold Mixup, a simple regularizer that encourages neural networks to predict less confidently on interpolations of hidden representations. Manifold Mixup leverages semantic interpolations as additional training signal, obtaining neural networks with smoother decision boundaries at multiple levels of representation. As a result, neural networks trained with Manifold Mixup learn class-representations with fewer directions of variance. We prove theory on why this flattening happens under ideal conditions, validate it on practical situations, and connect it to previous works on information theory and generalization. In spite of incurring no significant computation and being implemented in a few lines of code, Manifold Mixup improves strong baselines in supervised learning, robustness to single-step adversarial attacks, and test log-likelihood.
Authors:
Vikas Verma, Alex Lamb, Christopher Beckham, Amir Najafi, Ioannis Mitliagkas, Aaron Courville, David Lopez-Paz, Yoshua Bengio | Hi there. Today we're looking at manifold mixup, better representations by interpolating hidden states, by Vikus Verma at all. A number of big names on this paper as you can see and I also saw this at ICML so I was intrigued by it. They propose manifold mixup which is sort of a regularizer of neural networks, specifically of supervised learning and it's actually a pretty simple concept and they kind of show that it has some nice properties and outperforms other regularizers. So what's the problem? The problem is that if you look at this spiral problem here which is often kind of used to show properties of neural networks, what you have are blue points and the blue points are one class and the red points are another class and you see the two classes here are in this kind of spiral pattern. It's just the data space is just too dimensional. So you see here this is one class, this is the other class. So this is pretty difficult for a model to learn because of course the easy models would be like linear classifiers but there's no way to like put a line through this such that one class is on one side mostly. So neural networks if you train them they will give you something like you see here they will try to kind of bound the regions with the red points from the blue points but then it gets you know there's some weird things like here is a weird thing here is a weird thing so you'd imagine a correct model would actually classify this area as blue but the neural network has no concept of of let's say that the spiral should continue that it's in places. Ah here's blue here's blue here's a bit of a gap in the training data so it in this case it assigns a red class to it. So this is one problem that the decision boundaries are rather let's say squiggly and irregular and the second one if you look at the actual colors. Full blue means very confident blue class. Full red means very confident red class and in between you kind of see going into the the white so if you look very closely I can actually zoom in more here. If you look very closely you'll see that the blue gets lighter and lighter until it reaches white and from here the red goes lighter and lighter until it reaches white and white means not confident white means like 50 50 so you see the the area of not confident is actually very small right if you consider a point here is actually still very confident that it's a blue point and the area of non-conference is very small even though maybe as humans we would judge like a relatively large band in the middle to be not confident like if we get a point like this and the third problem is that you can see in multiple locations like here or here or here that the decision boundary is very close to the data points unnecessarily close so especially if you look here the decision boundary could be much more optimally placed probably something like this right given the training data but the neural networks because they only see training data they they have no basically no incentive to do this. All right one might think of you know something like a support vector machine that actually has an incentive to to put the decision boundary away from the from the training data but neural networks currently they're not SVMs they're basically logistic regressions and as such have no no incentive to do this. 
So these these are the problems the other problems are this is the input space if you look at the hidden space so they build neural networks specifically they have like 2d input and then that goes through a bunch of layers and then at one point there's a bottleneck layer with just two hidden nodes and then I guess that goes again and then it goes into a classifier so in this bottleneck layer they analyze the hidden representations of the data points and in this case for this spiral data set what happens is so in red you see again the red classes in blue the blue classes it's 2d so you can plot it what it does is it bunches up the hidden representations fairly fairly so it bunches them kind of up it spreads them out in directions here here here most are bunched up here and it does these kind of weird arrangements here with the pockets of those and of course the neural network is powerful enough such that it can actually you know separate all of this from each other but it's not ideal and the black dots they represent kind of points in between or points from the input space that are not part of the training data so they say they sample uniformly in the range of the input space they see that the black dots are all over the place right some are a confidence blue some are confident red some are like somewhere right what you would expect from a good model is that if you input something that's kind of in between or not really sure not even part of the input distribution that it assigns like a low confidence to it that it says well I'm not sure about this this must be somewhere in the middle so just to jump and jump forward to the results what does manifold mix up doing without knowing what it is in the same data set it gives you a picture like this you see the decision boundaries are much more smooth right the region of no confidence or of low confidence indicated by the light color is here is much larger and also the decision boundary here we had specifically this data point here you see the decision boundary is pushed away low you could argue about that particular point but the decision boundary is generally pushed away from the data points you also see no more kind of these squiggles here it doesn't happen in in here also if you look at the hidden representations the hidden representations now are spread out the classes are bunched up so not all the points are bunched up but the the points of individual classes are bunched up together and the randomly sampled points are in the middle as they should be say only confident red is down here confident blue is up here and everything in between is unconfident and third if you look at the singular value decomposition of the hidden layer and that's kind of a measure of how spread out in the different dimensions that it said is you see that the manifold mix up here in green it concentrates or it it it lowers the singular values of the kind of lower indexes so the first singular value is large which means that there is like a dominant direction in the in the data and this is done for each class separately as I understand it puts a lot of weight on the first singular vector and then it pushes down the contributions of the other singular vector which means that the data set that is analyzed is is concentrated into fewer directions of variance this is layer one and here is layer three I mean so you see it happens in both that the manifold mix up compared to the baseline model does this so now you might ask what is manifold mix up it's actually pretty pretty 
simple concept right here is another comparing it to other kind of regularization techniques and showing that none of them really does this so manifold mix up is this basically what you do is when you train a neural network you have input data and you take mini batches of input data specifically you take too many batches x and y and x prime y prime and then what you do is if I have the draw the neural network here so here is the inputs like a picture of a cat it goes through layers right and then what you do is you say at some particular you say stop stop right you take the representation out you and you do this with two different mini batches so here is this is cat one down down back here is cat two whatever or dog that's a cat you pass it in right here you take it out here you pass it through the network and you take it out so you now have two different forward paths of two different mini batches and then you define a lambda I guess they randomly sample a lambda in 0 1 right in the range of 0 1 so this is a mixing coefficient and then you mix you say lambda times hidden representation of batch one plus one minus lambda of hidden representation of batch two and that is what you pass through the rest of the network right so basically you for propagate two different batches until a certain layer here then you mix them with a random coefficient and then you pass it through the rest and then the only thing you also have to do is then at the end if you think of the labels of these two things you want to mix the labels in the same fashion so you want to mix lambda times y of batch one plus one minus lambda of y of batch two and then this is your training signal for whatever comes out here right so it's it's um these are these are one hot labels so if it's class three it's 0 0 1 0 0 and if y 2 is class five it's 0 0 0 0 1 and then you simply mix the two right and that becomes your training signal so in a practical example if let's just have a mini batch size of one so just one sample if this is cat and this is dog you would pass them forward right you would mix so in the hidden representation it would kind of become a cat dog maybe you do it 50 50 but then you would also mix the labels of cat and dog 50 50 and tell the network this is a mixture of 50% cat 50% dog and then you would train the network to predict that 50 50 coefficient so they do this the question is at which layer do you do this and they simply I think for each mini batch sample one hidden layer at random they might have some waiting or something but the way they describe it is they simply sample one layer per mini batch and then do the mixing there and then you can actually backprop through everything everything is differentiable this mixing is differentiable so you can backprop through everything and there's even you know kind of an engineering trick to only use a single mini batch by mixing it with itself so that's that's pretty neat so this manifold mix up as you can see here is the that's kind of the description you mix the hidden representations with lambda and you mix the labels with the same lambda and that will become your actual training signal all right so they give some theory to it that it flattens representations and specifically they say under some conditions namely if the network is large enough so if if the dimension of the hidden representation is of a certain size then if you optimize this manifold mix up like if you optimize over every lambda and over the entire training data set what you will end up is 
actually a linear rep a linear function of the input this is not too surprising that if you because what you do is you mix linearly this mixture happens in a linear fashion so if you optimize for and you not only optimize for the training set but you optimize for every possible mixture of the training set linear mixture your minimization your minimizer function will actually become a linear function it's not surprising but they have a formal proof of this and they also have a proof that if certain assumptions are given then the minimizers if you apply the minimizers the hidden representations will actually fall on a low dimensional subspace which is also not surprising but it's kind of the theoretical analog to what they show with the singular value distribution that it basically suppresses low singular values that means the data set is much more into a single direction the hidden representations all right so this the theory part is you can read it if you if you want to it's yeah it's to the results are to be expected I would say from what they do and the last thing they give a pictorial example of why manifold mix up flatens representations so both of these things the fact that the minimizers will become linear functions and the fact that the singular value spectrum is more concentrated on the first singular value means basically that representations are flattened and here is a pictorial representation so in this case what happens if you if you basically have these four data points a1 a2 b1 and b2 where a1 and a2 are blue class and b1 and b2 are red class and if you now look at an interpolation point between the two so if you look at this interpolation point between a1 and b2 what happens is that in this case this should be 50 50 blue and red but if you now look at the points that it where it's not interpolated on this is very close to a2 in this case it's probably should be more like 95 blue and 5 red they say here well if you use manifold mix up to learn the network what you'll actually do is you say okay actually this hidden representation needs to be pushed outward and you will achieve something over here where any mixture of two points of the opposite class will actually give you a 50 50 so all the midpoints here will give you a 50 50 mixture between the labels which basically means what you end up with is a line between this data and this data and it means that basically the network becomes more linear and the representations become more flat because flat is the optimal if your distributions are flat all the distances to the line are the same and this objective is optimized and this is basically my my kind of biggest problem with the method is that it kind of mixes the input with a linear function where we know that that is kind of not the shape of the true data manifold the input manifolds as you can see here the input manifold here isn't linear or flat it's actually very very tangled and we know that neural networks as you continue and the layers will flatten those representations because ultimately at the end it needs to classify the data set linearly because the last layer is a softmax layer but the the idea that you could apply this to any layer seems a bit shady to me of course it works and they show it works and it's really nice that it works but applying this to low layers and neural networks seems a bit not principled to me so I think this is not the end of the story of this line of work and there is kind of more that can be done in a more principled fashion but in any case 
they show that this actually works in terms of performance on generalization on kind of standard data sets so they have results on c4 10 and c4 100 which are famous image data sets and they show that they're regularizer outperforms others and they also show that they can withstand one step single step adversarial attacks more kind of better so they have a better performance against single step adversarial attacks after regularizing mostly again giving kind of an idea that the if you push if you push if you have a two points this is x this is x x 1 x 2 there are different classes if you put the decision boundary really close to x 2 then an adversarial attack can simply move the point across the decision boundary with a very small step but if you actually have the decision boundary pushed away from both data points then the an adversarial attack must go a very long way to the decision boundary and thus if you limit the size of adversarial attacks which is what you usually do you can maybe not reach this decision boundary and thus you mitigate some of the problem so that's pretty cool I think yeah there's work to be done but I think this is pretty cool it's implemented pretty easy I've seen there is a lot of libraries already available with it in and yeah won't hurt to add this to your code make your work better and more robust all right that was it from me bye bye | [{"start": 0.0, "end": 5.64, "text": " Hi there. Today we're looking at manifold mixup, better representations by"}, {"start": 5.64, "end": 11.88, "text": " interpolating hidden states, by Vikus Verma at all. A number of big names on this"}, {"start": 11.88, "end": 19.04, "text": " paper as you can see and I also saw this at ICML so I was intrigued by it. They"}, {"start": 19.04, "end": 26.44, "text": " propose manifold mixup which is sort of a regularizer of neural networks,"}, {"start": 26.44, "end": 32.64, "text": " specifically of supervised learning and it's actually a pretty simple concept"}, {"start": 32.64, "end": 37.8, "text": " and they kind of show that it has some nice properties and outperforms"}, {"start": 37.8, "end": 44.16, "text": " other regularizers. So what's the problem? The problem is that if you look at"}, {"start": 44.16, "end": 51.28, "text": " this spiral problem here which is often kind of used to show properties of"}, {"start": 51.28, "end": 57.2, "text": " neural networks, what you have are blue points and the blue points are one"}, {"start": 57.2, "end": 60.68, "text": " class and the red points are another class and you see the two classes here are"}, {"start": 60.68, "end": 65.36, "text": " in this kind of spiral pattern. It's just the data space is just too"}, {"start": 65.36, "end": 69.16, "text": " dimensional. So you see here this is one class, this is the other class. So this"}, {"start": 69.16, "end": 76.88, "text": " is pretty difficult for a model to learn because of course the easy models would"}, {"start": 76.88, "end": 82.16, "text": " be like linear classifiers but there's no way to like put a line through this"}, {"start": 82.16, "end": 88.16, "text": " such that one class is on one side mostly. 
So neural networks if you train them"}, {"start": 88.16, "end": 93.24, "text": " they will give you something like you see here they will try to kind of bound"}, {"start": 93.24, "end": 99.19999999999999, "text": " the regions with the red points from the blue points but then it gets you"}, {"start": 99.19999999999999, "end": 104.08, "text": " know there's some weird things like here is a weird thing here is a weird thing"}, {"start": 104.08, "end": 109.4, "text": " so you'd imagine a correct model would actually classify this area as blue but"}, {"start": 109.4, "end": 116.6, "text": " the neural network has no concept of of let's say that the spiral should"}, {"start": 116.6, "end": 120.92, "text": " continue that it's in places. Ah here's blue here's blue here's a bit of a gap"}, {"start": 120.92, "end": 128.44, "text": " in the training data so it in this case it assigns a red class to it. So this is"}, {"start": 128.44, "end": 134.32, "text": " one problem that the decision boundaries are rather let's say squiggly and irregular"}, {"start": 134.32, "end": 140.24, "text": " and the second one if you look at the actual colors. Full blue means very"}, {"start": 140.24, "end": 145.72, "text": " confident blue class. Full red means very confident red class and in between you"}, {"start": 145.72, "end": 150.96, "text": " kind of see going into the the white so if you look very closely I can actually"}, {"start": 150.96, "end": 155.92, "text": " zoom in more here. If you look very closely you'll see that the blue gets lighter"}, {"start": 155.92, "end": 161.04, "text": " and lighter until it reaches white and from here the red goes lighter and lighter"}, {"start": 161.04, "end": 166.32, "text": " until it reaches white and white means not confident white means like 50 50"}, {"start": 166.32, "end": 173.72, "text": " so you see the the area of not confident is actually very small right if you"}, {"start": 173.72, "end": 179.92, "text": " consider a point here is actually still very confident that it's a blue point"}, {"start": 179.92, "end": 187.48, "text": " and the area of non-conference is very small even though maybe as humans we"}, {"start": 187.48, "end": 193.23999999999998, "text": " would judge like a relatively large band in the middle to be not confident like"}, {"start": 193.23999999999998, "end": 198.16, "text": " if we get a point like this and the third problem is that you can see in"}, {"start": 198.16, "end": 205.2, "text": " multiple locations like here or here or here that the decision boundary is very"}, {"start": 205.2, "end": 212.04, "text": " close to the data points unnecessarily close so especially if you look here"}, {"start": 212.04, "end": 217.39999999999998, "text": " the decision boundary could be much more optimally placed probably something"}, {"start": 217.39999999999998, "end": 223.85999999999999, "text": " like this right given the training data but the neural networks because they"}, {"start": 223.85999999999999, "end": 230.76, "text": " only see training data they they have no basically no incentive to do this."}, {"start": 230.76, "end": 235.72, "text": " All right one might think of you know something like a support vector machine"}, {"start": 235.72, "end": 240.64, "text": " that actually has an incentive to to put the decision boundary away from the"}, {"start": 240.64, "end": 247.68, "text": " from the training data but neural networks currently they're not SVMs they're"}, {"start": 247.68, "end": 255.0, "text": " basically logistic regressions and as 
such have no no incentive to do this. So"}, {"start": 255.0, "end": 260.88, "text": " these these are the problems the other problems are this is the input space if"}, {"start": 260.88, "end": 265.44, "text": " you look at the hidden space so they build neural networks specifically they have"}, {"start": 265.44, "end": 269.64, "text": " like 2d input and then that goes through a bunch of layers and then at one"}, {"start": 269.64, "end": 274.28, "text": " point there's a bottleneck layer with just two hidden nodes and then I guess that"}, {"start": 274.28, "end": 279.36, "text": " goes again and then it goes into a classifier so in this bottleneck layer they"}, {"start": 279.36, "end": 286.8, "text": " analyze the hidden representations of the data points and in this case for this"}, {"start": 286.8, "end": 292.08000000000004, "text": " spiral data set what happens is so in red you see again the red classes in"}, {"start": 292.08000000000004, "end": 296.84000000000003, "text": " blue the blue classes it's 2d so you can plot it what it does is it bunches up"}, {"start": 296.84000000000003, "end": 302.40000000000003, "text": " the hidden representations fairly fairly so it bunches them kind of up it"}, {"start": 302.40000000000003, "end": 308.40000000000003, "text": " spreads them out in directions here here here most are bunched up here and it"}, {"start": 308.4, "end": 314.76, "text": " does these kind of weird arrangements here with the pockets of those and of"}, {"start": 314.76, "end": 318.35999999999996, "text": " course the neural network is powerful enough such that it can actually you"}, {"start": 318.35999999999996, "end": 323.52, "text": " know separate all of this from each other but it's not ideal and the black"}, {"start": 323.52, "end": 328.28, "text": " dots they represent kind of points in between or points from the input space"}, {"start": 328.28, "end": 332.56, "text": " that are not part of the training data so they say they sample uniformly in"}, {"start": 332.56, "end": 338.0, "text": " the range of the input space they see that the black dots are all over the"}, {"start": 338.0, "end": 343.52, "text": " place right some are a confidence blue some are confident red some are like"}, {"start": 343.52, "end": 349.04, "text": " somewhere right what you would expect from a good model is that if you input"}, {"start": 349.04, "end": 353.04, "text": " something that's kind of in between or not really sure not even part of the"}, {"start": 353.04, "end": 359.0, "text": " input distribution that it assigns like a low confidence to it that it says well"}, {"start": 359.0, "end": 364.76, "text": " I'm not sure about this this must be somewhere in the middle so just to jump"}, {"start": 364.76, "end": 369.44, "text": " and jump forward to the results what does manifold mix up doing without knowing"}, {"start": 369.44, "end": 374.56, "text": " what it is in the same data set it gives you a picture like this you see the"}, {"start": 374.56, "end": 380.48, "text": " decision boundaries are much more smooth right the region of no confidence or"}, {"start": 380.48, "end": 387.32, "text": " of low confidence indicated by the light color is here is much larger and also"}, {"start": 387.32, "end": 393.2, "text": " the decision boundary here we had specifically this data point here you see the"}, {"start": 393.2, "end": 398.32, "text": " decision boundary is pushed away low you could argue about that particular"}, {"start": 398.32, "end": 403.2, "text": " point but the decision boundary is 
generally pushed away from the data points"}, {"start": 403.2, "end": 410.84, "text": " you also see no more kind of these squiggles here it doesn't happen in in here"}, {"start": 410.84, "end": 418.0, "text": " also if you look at the hidden representations the hidden representations now are"}, {"start": 418.0, "end": 424.0, "text": " spread out the classes are bunched up so not all the points are bunched up but"}, {"start": 424.0, "end": 429.2, "text": " the the points of individual classes are bunched up together and the"}, {"start": 429.2, "end": 435.92, "text": " randomly sampled points are in the middle as they should be say only confident"}, {"start": 435.92, "end": 441.8, "text": " red is down here confident blue is up here and everything in between is"}, {"start": 441.8, "end": 450.2, "text": " unconfident and third if you look at the singular value decomposition of the"}, {"start": 450.2, "end": 456.2, "text": " hidden layer and that's kind of a measure of how spread out in the"}, {"start": 456.2, "end": 461.24, "text": " different dimensions that it said is you see that the manifold mix up here in"}, {"start": 461.24, "end": 471.0, "text": " green it concentrates or it it it lowers the singular values of the kind of"}, {"start": 471.0, "end": 476.92, "text": " lower indexes so the first singular value is large which means that there is"}, {"start": 476.92, "end": 482.12, "text": " like a dominant direction in the in the data and this is done for each class"}, {"start": 482.12, "end": 488.92, "text": " separately as I understand it puts a lot of weight on the first singular"}, {"start": 488.92, "end": 492.44, "text": " vector and then it pushes down the contributions of the other singular"}, {"start": 492.44, "end": 498.72, "text": " vector which means that the data set that is analyzed is is concentrated into"}, {"start": 498.72, "end": 509.56, "text": " fewer directions of variance this is layer one and here is layer three I mean"}, {"start": 509.56, "end": 513.96, "text": " so you see it happens in both that the manifold mix up compared to the baseline"}, {"start": 513.96, "end": 520.9200000000001, "text": " model does this so now you might ask what is manifold mix up it's actually"}, {"start": 520.9200000000001, "end": 527.4, "text": " pretty pretty simple concept right here is another comparing it to other kind"}, {"start": 527.4, "end": 534.3199999999999, "text": " of regularization techniques and showing that none of them really does this"}, {"start": 535.52, "end": 544.64, "text": " so manifold mix up is this basically what you do is when you train a neural"}, {"start": 544.64, "end": 549.88, "text": " network you have input data and you take mini batches of input data specifically"}, {"start": 549.88, "end": 558.36, "text": " you take too many batches x and y and x prime y prime and then what you do is if"}, {"start": 558.36, "end": 562.88, "text": " I have the draw the neural network here so here is the inputs like a picture of"}, {"start": 562.88, "end": 572.76, "text": " a cat it goes through layers right and then what you do is you say at some"}, {"start": 572.76, "end": 579.16, "text": " particular you say stop stop right you take the representation out you"}, {"start": 579.16, "end": 585.52, "text": " and you do this with two different mini batches so here is this is cat one"}, {"start": 585.52, "end": 594.36, "text": " down down back here is cat two whatever or dog that's a cat you pass it in"}, {"start": 594.36, "end": 600.64, "text": " right here you take it out 
here you pass it through the network and you take it"}, {"start": 600.64, "end": 606.64, "text": " out so you now have two different forward paths of two different mini batches"}, {"start": 606.64, "end": 617.48, "text": " and then you define a lambda I guess they randomly sample a lambda in 0 1 right"}, {"start": 617.48, "end": 623.24, "text": " in the range of 0 1 so this is a mixing coefficient and then you mix you say"}, {"start": 623.24, "end": 632.4, "text": " lambda times hidden representation of batch one plus one minus lambda of"}, {"start": 632.4, "end": 638.12, "text": " hidden representation of batch two and that is what you pass through the rest of"}, {"start": 638.12, "end": 644.48, "text": " the network right so basically you for propagate two different batches until a"}, {"start": 644.48, "end": 651.72, "text": " certain layer here then you mix them with a random coefficient and then you"}, {"start": 651.72, "end": 658.4399999999999, "text": " pass it through the rest and then the only thing you also have to do is then at"}, {"start": 658.44, "end": 665.9200000000001, "text": " the end if you think of the labels of these two things you want to mix the"}, {"start": 665.9200000000001, "end": 672.0400000000001, "text": " labels in the same fashion so you want to mix lambda times y of batch one plus"}, {"start": 672.0400000000001, "end": 680.72, "text": " one minus lambda of y of batch two and then this is your training signal for"}, {"start": 680.72, "end": 688.32, "text": " whatever comes out here right so it's it's um these are these are one hot labels"}, {"start": 688.32, "end": 696.8000000000001, "text": " so if it's class three it's 0 0 1 0 0 and if y 2 is class five it's 0 0 0 0 1"}, {"start": 696.8000000000001, "end": 703.1600000000001, "text": " and then you simply mix the two right and that becomes your training signal"}, {"start": 703.1600000000001, "end": 709.32, "text": " so in a practical example if let's just have a mini batch size of one so just"}, {"start": 709.32, "end": 715.6400000000001, "text": " one sample if this is cat and this is dog you would pass them forward right you"}, {"start": 715.6400000000001, "end": 720.12, "text": " would mix so in the hidden representation it would kind of become a cat dog"}, {"start": 720.12, "end": 725.72, "text": " maybe you do it 50 50 but then you would also mix the labels of cat and dog 50"}, {"start": 725.72, "end": 732.36, "text": " 50 and tell the network this is a mixture of 50% cat 50% dog and then you would"}, {"start": 732.36, "end": 739.08, "text": " train the network to predict that 50 50 coefficient so they do this the"}, {"start": 739.08, "end": 743.9200000000001, "text": " question is at which layer do you do this and they simply I think for each"}, {"start": 743.9200000000001, "end": 750.44, "text": " mini batch sample one hidden layer at random they might have some waiting or"}, {"start": 750.44, "end": 756.2800000000001, "text": " something but the way they describe it is they simply sample one layer per mini"}, {"start": 756.2800000000001, "end": 761.48, "text": " batch and then do the mixing there and then you can actually backprop through"}, {"start": 761.48, "end": 764.76, "text": " everything everything is differentiable this mixing is differentiable so you"}, {"start": 764.76, "end": 769.3199999999999, "text": " can backprop through everything and there's even you know kind of an engineering"}, {"start": 769.3199999999999, "end": 774.3199999999999, "text": " trick to only use a single mini batch by 
mixing it with itself so that's"}, {"start": 774.3199999999999, "end": 779.6, "text": " that's pretty neat so this manifold mix up as you can see here is the that's"}, {"start": 779.6, "end": 784.3199999999999, "text": " kind of the description you mix the hidden representations with lambda and you"}, {"start": 784.3199999999999, "end": 789.12, "text": " mix the labels with the same lambda and that will become your actual training"}, {"start": 789.12, "end": 798.2, "text": " signal all right so they give some theory to it that it flattens"}, {"start": 798.2, "end": 805.2, "text": " representations and specifically they say under some conditions namely if the"}, {"start": 805.2, "end": 811.16, "text": " network is large enough so if if the dimension of the hidden representation is of"}, {"start": 811.16, "end": 817.52, "text": " a certain size then if you optimize this manifold mix up like if you optimize"}, {"start": 817.52, "end": 823.0799999999999, "text": " over every lambda and over the entire training data set what you will end up"}, {"start": 823.0799999999999, "end": 832.52, "text": " is actually a linear rep a linear function of the input this is not too"}, {"start": 832.52, "end": 838.96, "text": " surprising that if you because what you do is you mix linearly this mixture"}, {"start": 838.96, "end": 846.64, "text": " happens in a linear fashion so if you optimize for and you not only optimize"}, {"start": 846.64, "end": 850.0, "text": " for the training set but you optimize for every possible mixture of the"}, {"start": 850.0, "end": 855.6, "text": " training set linear mixture your minimization your minimizer function will"}, {"start": 855.6, "end": 860.76, "text": " actually become a linear function it's not surprising but they have a formal"}, {"start": 860.76, "end": 870.48, "text": " proof of this and they also have a proof that if certain assumptions are given"}, {"start": 870.48, "end": 876.9200000000001, "text": " then the minimizers if you apply the minimizers the hidden representations will"}, {"start": 876.9200000000001, "end": 882.52, "text": " actually fall on a low dimensional subspace which is also not surprising but"}, {"start": 882.52, "end": 889.5600000000001, "text": " it's kind of the theoretical analog to what they show with the singular value"}, {"start": 889.5600000000001, "end": 894.76, "text": " distribution that it basically suppresses low singular values that means the"}, {"start": 894.76, "end": 903.0, "text": " data set is much more into a single direction the hidden representations all right"}, {"start": 903.0, "end": 912.12, "text": " so this the theory part is you can read it if you if you want to it's yeah it's"}, {"start": 912.12, "end": 918.72, "text": " to the results are to be expected I would say from what they do and the last"}, {"start": 918.72, "end": 926.52, "text": " thing they give a pictorial example of why manifold mix up flatens"}, {"start": 926.52, "end": 930.6800000000001, "text": " representations so both of these things the fact that the minimizers will"}, {"start": 930.6800000000001, "end": 936.2, "text": " become linear functions and the fact that the singular value spectrum is more"}, {"start": 936.2, "end": 940.6800000000001, "text": " concentrated on the first singular value means basically that representations"}, {"start": 940.68, "end": 951.4799999999999, "text": " are flattened and here is a pictorial representation so in this case what"}, {"start": 951.4799999999999, "end": 961.92, "text": " happens if you if you 
basically have these four data points a1 a2 b1 and b2"}, {"start": 961.92, "end": 971.3199999999999, "text": " where a1 and a2 are blue class and b1 and b2 are red class and if you now"}, {"start": 971.3199999999999, "end": 976.12, "text": " look at an interpolation point between the two so if you look at this"}, {"start": 976.12, "end": 984.64, "text": " interpolation point between a1 and b2 what happens is that in this case this"}, {"start": 984.64, "end": 991.68, "text": " should be 50 50 blue and red but if you now look at the points that it where"}, {"start": 991.68, "end": 999.1999999999999, "text": " it's not interpolated on this is very close to a2 in this case it's probably"}, {"start": 999.1999999999999, "end": 1007.68, "text": " should be more like 95 blue and 5 red they say here well if you use manifold"}, {"start": 1007.68, "end": 1013.28, "text": " mix up to learn the network what you'll actually do is you say okay actually"}, {"start": 1013.28, "end": 1019.52, "text": " this hidden representation needs to be pushed outward and you will achieve"}, {"start": 1019.52, "end": 1029.28, "text": " something over here where any mixture of two points of the opposite class will"}, {"start": 1029.28, "end": 1037.68, "text": " actually give you a 50 50 so all the midpoints here will give you a 50 50 mixture"}, {"start": 1037.68, "end": 1043.8799999999999, "text": " between the labels which basically means what you end up with is a line"}, {"start": 1043.88, "end": 1051.2, "text": " between this data and this data and it means that basically the network becomes"}, {"start": 1051.2, "end": 1055.7600000000002, "text": " more linear and the representations become more flat because flat is the"}, {"start": 1055.7600000000002, "end": 1061.8400000000001, "text": " optimal if your distributions are flat all the distances to the line are the"}, {"start": 1061.8400000000001, "end": 1069.0800000000002, "text": " same and this objective is optimized and this is basically my my kind of"}, {"start": 1069.08, "end": 1080.04, "text": " biggest problem with the method is that it kind of mixes the input with a"}, {"start": 1080.04, "end": 1087.08, "text": " linear function where we know that that is kind of not the shape of the true"}, {"start": 1087.08, "end": 1095.36, "text": " data manifold the input manifolds as you can see here the input manifold here"}, {"start": 1095.36, "end": 1102.9199999999998, "text": " isn't linear or flat it's actually very very tangled and we know that neural"}, {"start": 1102.9199999999998, "end": 1107.9199999999998, "text": " networks as you continue and the layers will flatten those representations"}, {"start": 1107.9199999999998, "end": 1113.28, "text": " because ultimately at the end it needs to classify the data set linearly"}, {"start": 1113.28, "end": 1118.7199999999998, "text": " because the last layer is a softmax layer but the the idea that you could"}, {"start": 1118.7199999999998, "end": 1124.6399999999999, "text": " apply this to any layer seems a bit shady to me of course it works and they"}, {"start": 1124.64, "end": 1130.0800000000002, "text": " show it works and it's really nice that it works but applying this to low"}, {"start": 1130.0800000000002, "end": 1139.1200000000001, "text": " layers and neural networks seems a bit not principled to me so I think this is"}, {"start": 1139.1200000000001, "end": 1144.92, "text": " not the end of the story of this line of work and there is kind of more that can"}, {"start": 1144.92, "end": 1149.6000000000001, "text": 
" be done in a more principled fashion but in any case they show that this"}, {"start": 1149.6, "end": 1156.56, "text": " actually works in terms of performance on generalization on kind of standard"}, {"start": 1156.56, "end": 1164.3999999999999, "text": " data sets so they have results on c4 10 and c4 100 which are famous image"}, {"start": 1164.3999999999999, "end": 1170.76, "text": " data sets and they show that they're regularizer outperforms others and they"}, {"start": 1170.76, "end": 1178.8, "text": " also show that they can withstand one step single step adversarial attacks"}, {"start": 1178.8, "end": 1186.6399999999999, "text": " more kind of better so they have a better performance against single step"}, {"start": 1186.6399999999999, "end": 1194.84, "text": " adversarial attacks after regularizing mostly again giving kind of an idea"}, {"start": 1194.84, "end": 1204.0, "text": " that the if you push if you push if you have a two points this is x this is x x"}, {"start": 1204.0, "end": 1209.16, "text": " 1 x 2 there are different classes if you put the decision boundary really"}, {"start": 1209.16, "end": 1216.16, "text": " close to x 2 then an adversarial attack can simply move the point across the"}, {"start": 1216.16, "end": 1222.2, "text": " decision boundary with a very small step but if you actually have the decision"}, {"start": 1222.2, "end": 1228.84, "text": " boundary pushed away from both data points then the an adversarial attack must"}, {"start": 1228.84, "end": 1235.6799999999998, "text": " go a very long way to the decision boundary and thus if you limit the size of"}, {"start": 1235.6799999999998, "end": 1240.76, "text": " adversarial attacks which is what you usually do you can maybe not reach this"}, {"start": 1240.76, "end": 1246.24, "text": " decision boundary and thus you mitigate some of the problem so that's pretty"}, {"start": 1246.24, "end": 1250.9199999999998, "text": " cool I think yeah there's work to be done but I think this is pretty cool it's"}, {"start": 1250.9199999999998, "end": 1255.32, "text": " implemented pretty easy I've seen there is a lot of libraries already"}, {"start": 1255.32, "end": 1263.24, "text": " available with it in and yeah won't hurt to add this to your code make your"}, {"start": 1263.24, "end": 1293.2, "text": " work better and more robust all right that was it from me bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=Qk4lJdp7ZAs | Learning World Graphs to Accelerate Hierarchical Reinforcement Learning | The goal of hierarchical reinforcement learning is to divide a task into different levels of coarseness with the top-level agent planning only over a high-level view of the world and each subsequent layer having a more detailed view. This paper proposes to learn a set of important states as well as their connections to each other as a high-level abstraction.
https://arxiv.org/abs/1907.00664
Abstract:
In many real-world scenarios, an autonomous agent often encounters various tasks within a single complex environment. We propose to build a graph abstraction over the environment structure to accelerate the learning of these tasks. Here, nodes are important points of interest (pivotal states) and edges represent feasible traversals between them. Our approach has two stages. First, we jointly train a latent pivotal state model and a curiosity-driven goal-conditioned policy in a task-agnostic manner. Second, provided with the information from the world graph, a high-level Manager quickly finds solution to new tasks and expresses subgoals in reference to pivotal states to a low-level Worker. The Worker can then also leverage the graph to easily traverse to the pivotal states of interest, even across long distance, and explore non-locally. We perform a thorough ablation study to evaluate our approach on a suite of challenging maze tasks, demonstrating significant advantages from the proposed framework over baselines that lack world graph knowledge in terms of performance and efficiency.
Authors: Wenling Shang, Alex Trott, Stephan Zheng, Caiming Xiong, Richard Socher | Hi there. Today we're looking at Learning World Graphs to Accelerate Hierarchical Reinforcement Learning by Wenling Shang et al. from Salesforce Research. This work is based in the world of reinforcement learning, and especially hierarchical reinforcement learning. So in hierarchical reinforcement learning, the setting is that you have to perform some task; in this case they perform all of their experiments on mazes like this. Imagine you have this maze: this red thing here is the agent, the goal is the green square, the gray things are walls, and the black cells are everywhere the agent can move. The agent can always move one step in any direction it wants that isn't blocked by a wall. So in order to fulfill such a task the agent needs to take many, many steps, go here, here, here and so on, and each one of those is a step. In addition, this specific maze has an extra property, namely that there's a locked door here, and you first need to pick up the key in order to open the locked door. So in order to reach the goal the agent needs to first pick up the key, then open the door, then go to the goal, and for each of these it has to traverse many, many steps. The idea in hierarchical reinforcement learning is that the agent has two parts. Your agent, which is this entire box here, is divided into what's called a manager and a worker. What does the manager see? I'll do an example here (they do it differently in the paper), but the manager could see the world only in these large chunks: it cares what is in the chunks, but it doesn't distinguish points within a chunk, it just knows about the chunks. The manager will say: first I need to go to this chunk here, because the key is in this chunk, then I need to go to this chunk here, because there is the door, and then I need to go to this chunk here, because there's the goal. So in the view of the manager, which has a very high-level view of the world, the action sequence is: down here, over here, then over here. These are just three actions, which is pretty simple. The manager then passes this information to the worker and says: hey worker, please go to this state here, please go to the first chunk. The worker is then tasked with taking the individual steps, not to the final goal, but only to that chunk, and then within that chunk the worker would go to the key. Once it has the key, the manager says: good job, now please perform the second action, which is to go to this chunk here. So you basically get the idea that the worker and the manager work together: the manager has a high-level view of the world, and the worker executes the actions the manager has decided on in a fine-grained way. This gives you several advantages: the manager can plan over high-level and far-away things, and the worker really only has to care about its close neighborhood, because each step the manager proposes is fairly short-range, so the worker can implement it. 
They do this in a somewhat different way, so let's actually start from the back of this paper, which I find a bit more explanatory and which makes a bit more sense to look at. What they propose is to learn a world graph. So what is a world graph? A world graph consists of two things. First, a set of states, the blue states here, which are the so-called pivotal states or important states. These are states in the world that are very important, as determined by some measure. If you look at where they are, they're often at narrow passages, you see them here and here at these narrow passages. So basically, if you reach those states as an intermediary goal, then you can go to a lot of places from there. These are, let's say, powerful states, and these states are connected by a neighborhood graph, which says which of these states are close to each other. For example, here you would of course connect those two because they're neighbors, and you would probably connect those. So I'm attempting to draw the world graph: you might connect those as well. It doesn't need to be a tree, it can look like this. You see that the graph kind of takes shape; these states are fairly reachable from one another. Whenever one of these important states is fairly easily reachable from some other important state, it is designated as a neighbor. So this world graph is what you get: an abstraction, a set of states with connections between them that say how easy or hard it is to reach one state from the other. If you have these things, then you can easily imagine a hierarchical reinforcement learning algorithm that incorporates this information, namely the manager will only use the important states to plan. 
So for example, the goal isn't drawn in here, but let's say the goal is here, the locked door is here, and the key is here. What would the manager do? The manager would say: okay, the key is here, so this important state would be a good state to reach; the manager is only allowed to plan over important states. Because it has the graph, it says: aha, this state is easily reachable from, let's say, this state, and that state is easily reachable from this state, so it plans: go here, then go here, then go here, then get the key (getting the key is a kind of micro-action that is not itself an important state). Then I need to go over here: this is reachable from this state, that's reachable from this state, and that's reachable from where I am, so from the key: next go here, go here, go here, go here, then open the door, and then of course go here and solve the task. The worker then only ever needs to implement the following: it starts here and says, aha, I need to go here, what do I need to do? I need to go, for example, down and over. Once I've done this, I need to go here, so I need to go right, then down. You see, the worker only ever has to care about going from one hop to the next hop, which makes it really easy for the worker, while the manager only has these blue states available, which makes its search space much more condensed and much easier to survey, especially with the connections in between given by the world graph. So that's if you have the world graph: if you have this set of states and how easily reachable they are from each other, you can very easily do a reinforcement learning approach that is hierarchical, where the manager plans on the world graph and the worker implements the fine-grained actions. And there is already a method that does this; this paper uses Feudal Networks, so we won't go into that here, just saying it's pretty easy once you have those things. So the real question is: how do they learn the world graph? What they do is the following, and they describe it in this figure. What they ultimately want to learn is a prior that tells them, for a given state, how important it is, and that's a Beta prior; a Beta distribution is a continuous relaxation of a binary zero-one variable. So how do they do it? They use an LSTM to encode trajectories, where the trajectories come from rollouts of a policy, and for each step the LSTM outputs a posterior over these latent variables that say how important a state is. So these are the posteriors, whereas this over here is the prior, and the posterior of course only makes sense in the context of a trajectory; that's why the ultimate decision happens in the prior, because a state needs to be important or not important independent of any single trajectory. So what they do is they roll out policies, and they have certain methods of doing this: they have random exploration, they have curiosity-driven goals, but they also train this continuously, updating it via what's called a goal-conditioned policy. What a goal-conditioned policy basically is: you put the agent somewhere in the maze, let's use this maze over here, you put the agent somewhere, let's say here, you make a random exploration, let's say to here, so you know these two points are reachable from each other, and then you train the agent: go from here to here, this is your goal. Now the agent tries to reconstruct this random walk to there. This is how you train an agent to go between any two reachable states, from here to here and so on. You won't train it to go directly from here to way over there, because it would be very hard for a random walk to find its way over there, but what you end up with is an agent that is able to reach close-by states, and that's exactly what the worker is supposed to do. From these trajectories you can then decide on the pivotal states. How do you do that? This is where the top part here comes in. Down here you input the trajectory, and you output how important each state is. In this example, the light color means the LSTM decides the state isn't important, and the darker orange color means the LSTM decides the state is important. What you do next is take the states where it decides the state is important (and notice the beginning and the end are always important) and feed them to a second LSTM as input, you see, here, here, here. So in this case, of the six states in the trajectory, three are important, namely the start, the end, and this one here where the LSTM decides: hey, that's important. That goes into a second LSTM, which is a generator; so this here is an encoder and this here is a decoder, and what the decoder does is decode a sequence of actions given nothing but this, and at the end what you want is that the actions output here reconstruct the actions that were input. This might sound a little confusing, but the core of it is: you want to reconstruct the actions of the trajectory taken, given only the important states. What does this mean in our example? It means: if I have to go from here to here, and for example I took the following path, right, right, down, down, right, these were my actions, now if I only have the start, the end and one state in between, let's say this one, can I reconstruct which actions were taken? If I erase the blue path and I tell you I went from here via here to here, then you could pretty much reconstruct the actions, so this state here is a good candidate for being an important state. Whereas if it were a different state, if for example I told you I went from over here to here and then to here, you'd say, well, this could be something like this or it could be a path like this; there could be many, many paths leading from here to here, so this state is probably not very important. That's how they learn which ones are the important states: by encoding trajectories in an LSTM and trying to reconstruct the actions taken in the trajectory given only the states that were deemed important by the LSTM. That's how you train the LSTM to recognize important states, and once you've recognized the important states in a trajectory, you can then use those to learn 
a prior so basically you ask over all possible trajectories which of the states are generally important and that's how you end up with these blue states right and then the last part is to connect the blue states and that is fairly easily done in their approach what they say is all right we have blue states we should pick one and we do a random walk from it right random walk random walk if we hit another blue state like this one here in the random walk we simply say well they're probably neighbors so we do this a bunch of times if you hit the blue states of course without hitting another blue state first then you connect the two in a graph so these would be connected these would probably be connected what we ended up at the beginning right you have this graph maybe these two are connected and so on so this gives you this world graph and now you end up with a set of important states and connections between them that tell you which ones are easily reachable from each other so you can train the manager on that you can train the worker as we said before to you simply select two close by states train it to go from one to the other that by the worker will learn that so in essence that's how they do it you can look at the experiments themselves they show that this basically transfers so if you train like this pre-trained then you can give more specific and more complicated tasks and this will this will rapidly accelerate the learning of this yeah look at the experiments if you have time that was it for me thank you for listening | [{"start": 0.0, "end": 6.2, "text": " Hi there. Today we're looking at learning world graphs to accelerate hierarchical reinforcement"}, {"start": 6.2, "end": 15.56, "text": " learning by Wendling Chung at all from Salesforce research. This work is based in the world of"}, {"start": 15.56, "end": 21.44, "text": " reinforcement learning and especially hierarchical reinforcement learning. So in hierarchical reinforcement"}, {"start": 21.44, "end": 29.8, "text": " learning, the idea is that in order to perform a task like in this case they perform all"}, {"start": 29.8, "end": 37.4, "text": " of their experiments on mazes like this. So imagine you have this maze and this red thing here"}, {"start": 37.4, "end": 45.84, "text": " is the agent and the goal is the green square and the gray things obviously are walls and the black"}, {"start": 45.84, "end": 53.56, "text": " things are everywhere the agent can move. The agent can always move one step in any direction"}, {"start": 53.56, "end": 62.160000000000004, "text": " that it wants and that isn't blocked by a wall. So in order to fulfill such a task the agent needs"}, {"start": 62.160000000000004, "end": 67.72, "text": " to take many many steps like go here here here here here here each one of those is a step."}, {"start": 67.72, "end": 75.6, "text": " In addition this specific maze has an additional property namely that there's a locked door here"}, {"start": 75.6, "end": 84.0, "text": " and first you need to pick up the key to basically to open the locked door. So in order to reach"}, {"start": 84.0, "end": 90.0, "text": " the goal the agent needs first to pick up the key then open the door then go to the goal and each"}, {"start": 90.0, "end": 97.52, "text": " one of these it has to traverse many many steps. So the idea in hierarchical reinforcement learning"}, {"start": 97.52, "end": 105.56, "text": " is that you have two parts to it to the agent. 
So your agent which is this entire box here is"}, {"start": 105.56, "end": 116.04, "text": " divided into what's called a manager and a worker and this is a divide. So what the manager sees"}, {"start": 116.04, "end": 123.0, "text": " the manager sees basically I do an example here they do it differently but the manager could see"}, {"start": 123.0, "end": 132.2, "text": " large could see the world basically only in these large chunks right and it doesn't really care"}, {"start": 132.2, "end": 139.0, "text": " what is in or it cares what is in the chunks but it doesn't distinguish points within the chunks"}, {"start": 139.0, "end": 147.79999999999998, "text": " it just knows about these these chunks basically and what the manager will say oh first I need to go to"}, {"start": 147.79999999999998, "end": 154.76, "text": " this chunk here then because there's the key in this chunk and then I need to go to this chunk here"}, {"start": 154.76, "end": 160.28, "text": " because there is the door and then I need to go to this chunk here because there's the goal. So the"}, {"start": 160.28, "end": 167.72, "text": " in the view of the manager which has a very high level view of the world is the the action sequence is"}, {"start": 168.2, "end": 174.92000000000002, "text": " down here over here then over here. These are like three actions that's a pretty simple and then"}, {"start": 175.4, "end": 181.4, "text": " the manager would pass this information to the worker and it would say hey worker please go"}, {"start": 182.52, "end": 190.12, "text": " to this state here please go to the first state and then the worker would be tasked with basically"}, {"start": 190.12, "end": 199.16, "text": " moving the individual steps to go not to the final goal but only to go to that chunk and then in"}, {"start": 199.16, "end": 205.48000000000002, "text": " that chunk the worker would go to the key and then once it has the key the manager would say good job"}, {"start": 205.48000000000002, "end": 212.36, "text": " now please perform the second action which is go to to this chunk here. So the second action"}, {"start": 212.36, "end": 217.4, "text": " that the worker would so you basically get the idea whoops I am doing something here"}, {"start": 217.4, "end": 225.4, "text": " you get the idea that the I'm creating text boxes that the worker and the manager work together"}, {"start": 225.4, "end": 231.08, "text": " and that the manager has a high level view of the world and then the worker can basically"}, {"start": 231.96, "end": 238.12, "text": " execute the action that the manager has decided on in a fine-grained way."}, {"start": 239.48000000000002, "end": 245.88, "text": " So this is gives you several advantages namely the manager can plan high level and far away"}, {"start": 245.88, "end": 252.6, "text": " things and then the worker really only has to care about its close neighborhood because each step"}, {"start": 252.6, "end": 259.96, "text": " the manager proposes is a fairly short range so the worker can implement it. They do this in a"}, {"start": 259.96, "end": 269.48, "text": " kind of different way so let's actually start from the back from of this paper which is I find"}, {"start": 269.48, "end": 276.28000000000003, "text": " is a bit more explanatory and it makes a bit more sense to look at it. What they propose is to"}, {"start": 276.28000000000003, "end": 285.16, "text": " learn a world graph. 
So in a world graph what is a world graph a world graph consists of two things."}, {"start": 285.8, "end": 293.32, "text": " First a set of states which is the or the blue states here so all these blue states which are"}, {"start": 293.32, "end": 303.24, "text": " so-called pivot states or important states. So these are states in the world that are very important"}, {"start": 304.44, "end": 313.88, "text": " determined by some measure. So these are basically states that look at where they are. They're"}, {"start": 313.88, "end": 320.36, "text": " often like narrow passes you see here here they're at these narrow passes. So basically if you"}, {"start": 320.36, "end": 328.52000000000004, "text": " if you reach those states as an intermediary goal then you can basically go a lot of places from here."}, {"start": 328.52000000000004, "end": 336.28000000000003, "text": " So these are very let's say powerful states and these states are connected by a neighborhood graph."}, {"start": 336.28000000000003, "end": 343.16, "text": " So basically which states of these are close to each other and for example here you would connect"}, {"start": 343.16, "end": 349.64, "text": " of course those two because they're neighbors those you would probably connect those. So I'm attempting"}, {"start": 349.64, "end": 356.76, "text": " to kind of draw the world graph you might connect those. It doesn't need to be like a tree it can be"}, {"start": 359.4, "end": 369.4, "text": " like such. So you see that the graph kind of takes shape these are fairly reachable. So whenever"}, {"start": 371.0, "end": 376.76, "text": " a node in the graph whenever one of these important states is fairly easily reachable by some"}, {"start": 376.76, "end": 385.8, "text": " other state it's designated as a neighbor. So with that with this world graph here this is what"}, {"start": 385.8, "end": 391.4, "text": " you get you get an abstraction basically you get a set of states with connections between them that"}, {"start": 391.4, "end": 399.32, "text": " says how easy or hard is it to reach from one state to the other. If you have these things then you"}, {"start": 399.32, "end": 407.4, "text": " can really easily imagine a hierarchical reinforcement learning algorithm that now incorporates"}, {"start": 407.4, "end": 415.32, "text": " this information namely the manager will only use the important states to plan. 
So for example if the"}, {"start": 415.32, "end": 424.76, "text": " goal the goal isn't drawn in here but let's say the goal is here and then the door the door is"}, {"start": 424.76, "end": 432.68, "text": " here it's a locked door here and then the key let's draw in the key come on"}, {"start": 434.44, "end": 441.32, "text": " okay this doesn't want to all right the key is somewhere let's say here there's the key"}, {"start": 441.32, "end": 453.96, "text": " he is this all right then the no let's put the key further away um come on door here um I'm off with"}, {"start": 453.96, "end": 463.71999999999997, "text": " the colors and key here all right so what would the manager do the manager would then say ah okay"}, {"start": 463.71999999999997, "end": 470.28, "text": " the key is here so this would be a good state to reach of my importance it the manager is only"}, {"start": 470.28, "end": 477.47999999999996, "text": " allowed to go important states right so the manager says because it has the graph right it says"}, {"start": 477.47999999999996, "end": 483.55999999999995, "text": " aha this state is easily reachable from let's say this state then this state is easily reachable"}, {"start": 483.55999999999995, "end": 491.55999999999995, "text": " from this state so it plans go here then go here then go here then get the key right this is a kind"}, {"start": 491.55999999999995, "end": 498.67999999999995, "text": " of a micro action that is not in the importance then I need to you know go here this is reachable"}, {"start": 498.68, "end": 507.88, "text": " from this state that's reachable from this state and from this state and that's reachable from"}, {"start": 507.88, "end": 515.48, "text": " my audience so from the key then next go here go here go here go here and then open the door"}, {"start": 515.48, "end": 527.32, "text": " and then of course go here and solve the the task um the worker then would only ever need to"}, {"start": 527.32, "end": 536.0400000000001, "text": " implement the following it starts here and it says aha I need to go here what do I need to do"}, {"start": 536.0400000000001, "end": 542.6, "text": " I need to go for example down and over and now once I've done this I need to go here so I need to go"}, {"start": 542.6, "end": 549.48, "text": " right down right so you see the the worker only ever has to care about going from one hop to the"}, {"start": 549.48, "end": 556.44, "text": " next hop making it um really easy for the worker while the manager only has these blue states"}, {"start": 556.44, "end": 564.36, "text": " available which makes it search space much more um much more condensed and much more um"}, {"start": 565.8800000000001, "end": 572.6, "text": " much more over-viewable especially with the nodes in between the world graph so"}, {"start": 574.36, "end": 580.36, "text": " that's if you have the world graph right if you have this set of states and how important are how"}, {"start": 580.36, "end": 587.88, "text": " easily they reachable reachable they are between each other you can very easily do a reinforcement"}, {"start": 587.88, "end": 593.72, "text": " learning approach that that is hierarchical has the manager plan on the world graph has and then"}, {"start": 593.72, "end": 600.04, "text": " has the worker implement the fine grained actions and there is already a method that does this uh"}, {"start": 600.04, "end": 605.88, "text": " this paper here uses feudal networks so we won't go into that later just saying it's pretty easy"}, {"start": 605.88, 
"end": 611.48, "text": " if you have those things so the real question is how do they learn the world graph um"}, {"start": 612.92, "end": 619.88, "text": " and what they do is the following and they describe it here in kind of this oh sorry"}, {"start": 621.16, "end": 633.24, "text": " this way what they want to uh to finally learn is a uh prior that tells them for a given state"}, {"start": 633.24, "end": 640.44, "text": " how important is it and that's a beta prior a beta distribution is a continuous approximation on"}, {"start": 640.44, "end": 652.84, "text": " a on a kind of a binary zero one variable um so how do they do it they use an LSTM to encode"}, {"start": 652.84, "end": 663.16, "text": " trajectory so these are trajectories from kind of rollouts of a policy um and then the LSTM encodes"}, {"start": 663.16, "end": 671.4, "text": " it and for each step it outputs this um posterior over the what's called these latent variables here"}, {"start": 671.4, "end": 678.12, "text": " they say how important is a state so these are the post erias whereas this over here is the prior"}, {"start": 678.76, "end": 686.04, "text": " and the posterior of course only makes sense in context of a trajectory um that's why the ultimate"}, {"start": 686.04, "end": 691.8, "text": " decision happens for the prior because the state needs to be important or not important to any"}, {"start": 691.8, "end": 700.52, "text": " trajectory so what they do is they roll out policies and they have certain methods of um"}, {"start": 701.9599999999999, "end": 709.0, "text": " of doing this so um they have uh they have uh random expression they have curiosity goals but"}, {"start": 709.0, "end": 715.0799999999999, "text": " they also train this continuously so they update it continuously via this what's called a"}, {"start": 715.08, "end": 722.76, "text": " goal condition policy and what a goal condition policy is basically is you put the agent somewhere"}, {"start": 722.76, "end": 729.0, "text": " in the maze actually let's use this maze over here um you put the agent somewhere in the maze"}, {"start": 729.88, "end": 739.5600000000001, "text": " let's say here you um for example make a bunch of random make a random exploration let's say here"}, {"start": 739.56, "end": 746.04, "text": " so you know these two things are reachable and then you train the agent say go from here to here"}, {"start": 746.04, "end": 752.4399999999999, "text": " right this is your goal now the agent tries to kind of reconstruct this random walk to there"}, {"start": 753.0, "end": 758.68, "text": " and um you can riff so so this is how you train an agent to go basically go from any to"}, {"start": 759.7199999999999, "end": 765.4, "text": " well reachable states to each other right from here to here and so on now you you won't train it to"}, {"start": 765.4, "end": 771.48, "text": " go directly from here to over here because a random walk would be very hard for a random walk to"}, {"start": 771.48, "end": 778.4399999999999, "text": " find its way over there um but what you end up with is is somehow an agent that is able to reach"}, {"start": 779.16, "end": 787.3199999999999, "text": " close by states and that's exactly what the worker is supposed to do right here and um so"}, {"start": 787.32, "end": 797.24, "text": " of of these trajectories you can then unroll them and decide on the kind of on these on these pivotal"}, {"start": 797.24, "end": 806.36, "text": " states so how do you do that and this is where this top part here comes in so 
down here you input"}, {"start": 806.36, "end": 813.96, "text": " to trajectory and you output how important is each state all right and now you see in this example"}, {"start": 813.96, "end": 822.36, "text": " here the light color means that the LSTM decides the state isn't important and the darker orange color"}, {"start": 822.36, "end": 830.6, "text": " means the LSTM decides the state is important so what you do next is the states where it decides"}, {"start": 830.6, "end": 838.6800000000001, "text": " it is important and notice the beginning at the end are always important um it feeds to a second LSTM"}, {"start": 838.68, "end": 846.12, "text": " as an input you see here here here so in this case of these um two of these six states in the"}, {"start": 846.12, "end": 852.68, "text": " trajectory three are important namely the start the end and this one here where the LSTM decides"}, {"start": 852.68, "end": 861.2399999999999, "text": " hey that's important um that goes into a second LSTM which is generator so this here is an encoder"}, {"start": 861.88, "end": 866.76, "text": " and this here is a decoder and what it does is it decodes the sequence of actions"}, {"start": 866.76, "end": 875.48, "text": " right here given nothing just given this it decodes a sequence of actions and at the end what you"}, {"start": 875.48, "end": 882.12, "text": " want is that the actions output here reconstruct the actions input this might sound a little confusing"}, {"start": 882.12, "end": 891.8, "text": " but the core value of this is what you want is to reconstruct the actions of the trajectory"}, {"start": 891.8, "end": 898.4399999999999, "text": " taken given only the important states what does this mean in our example in our example"}, {"start": 899.0799999999999, "end": 907.88, "text": " here this means if if I have to go from here to here right and for example I took the following path"}, {"start": 907.88, "end": 914.12, "text": " this this this so write write down down write this is these were macrons equals now if I only have"}, {"start": 914.12, "end": 926.28, "text": " the start the end and one state in between let's say this one right then can I reconstruct what"}, {"start": 926.28, "end": 935.5600000000001, "text": " actions were taken and if I erase the blue thing and I tell you I went from here via here to here"}, {"start": 935.56, "end": 944.3599999999999, "text": " then you could very much reconstruct the actions here so this state here is a good candidate for"}, {"start": 944.3599999999999, "end": 950.28, "text": " being an important state whereas if if it were a different state if it were for example if I"}, {"start": 950.28, "end": 956.8399999999999, "text": " told you I went from over here to here and then to here you'd say well this could be either"}, {"start": 956.8399999999999, "end": 962.5999999999999, "text": " something like this or it could be a path like this right it could be many many paths or it"}, {"start": 962.6, "end": 970.6, "text": " like this could be many paths leading from here here so this state here is not probably not very"}, {"start": 970.6, "end": 979.4, "text": " important so that's kind of how they how they learn which one are the important states via"}, {"start": 979.4, "end": 987.5600000000001, "text": " this encoding trajectories in an LSTM and trying to reconstruct the state the actions taken"}, {"start": 987.56, "end": 994.28, "text": " in the trajectory given only the states that were deemed important by the LSTM so that's how you"}, {"start": 
994.28, "end": 1001.2399999999999, "text": " train the LSTM to recognize important states and once you've recognized the important states in"}, {"start": 1001.2399999999999, "end": 1011.3199999999999, "text": " a trajectory you can then use those to learn a prior so basically you ask over all possible trajectories"}, {"start": 1011.32, "end": 1019.08, "text": " which of the states are generally important and that's how you end up with these blue states"}, {"start": 1020.2800000000001, "end": 1027.88, "text": " right and then the last part is to connect the blue states and that is fairly easily done in their"}, {"start": 1027.88, "end": 1034.8400000000001, "text": " approach what they say is all right we have blue states we should pick one and we do a random walk"}, {"start": 1034.84, "end": 1042.6, "text": " from it right random walk random walk if we hit another blue state like this one here in the random"}, {"start": 1042.6, "end": 1048.04, "text": " walk we simply say well they're probably neighbors so we do this a bunch of times if you hit the"}, {"start": 1048.04, "end": 1054.4399999999998, "text": " blue states of course without hitting another blue state first then you connect the two in a graph"}, {"start": 1054.4399999999998, "end": 1060.36, "text": " so these would be connected these would probably be connected what we ended up at the beginning"}, {"start": 1060.36, "end": 1066.6799999999998, "text": " right you have this graph maybe these two are connected and so on so this gives you this"}, {"start": 1066.6799999999998, "end": 1074.28, "text": " world graph and now you end up with a set of important states and connections between them that"}, {"start": 1074.28, "end": 1081.32, "text": " tell you which ones are easily reachable from each other so you can train the manager on that"}, {"start": 1081.32, "end": 1087.32, "text": " you can train the worker as we said before to you simply select two close by states train it to"}, {"start": 1087.32, "end": 1095.24, "text": " go from one to the other that by the worker will learn that so in essence that's how they do it"}, {"start": 1095.24, "end": 1102.2, "text": " you can look at the experiments themselves they show that this basically transfers so if you"}, {"start": 1102.76, "end": 1108.9199999999998, "text": " train like this pre-trained then you can give more specific and more complicated tasks and this"}, {"start": 1108.9199999999998, "end": 1115.8799999999999, "text": " will this will rapidly accelerate the learning of this yeah look at the experiments if you have time"}, {"start": 1115.88, "end": 1126.0400000000002, "text": " that was it for me thank you for listening"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=ZAW9EyNo2fw | Reconciling modern machine learning and the bias-variance trade-off | It turns out that the classic view of generalization and overfitting is incomplete! If you add parameters beyond the number of points in your dataset, generalization performance might increase again due to the increased smoothness of overparameterized functions.
Abstract:
The question of generalization in machine learning---how algorithms are able to learn predictors from a training sample to make accurate predictions out-of-sample---is revisited in light of the recent breakthroughs in modern machine learning technology.
The classical approach to understanding generalization is based on bias-variance trade-offs, where model complexity is carefully calibrated so that the fit on the training sample reflects performance out-of-sample.
However, it is now common practice to fit highly complex models like deep neural networks to data with (nearly) zero training error, and yet these interpolating predictors are observed to have good out-of-sample accuracy even for noisy data.
How can the classical understanding of generalization be reconciled with these observations from modern machine learning practice?
In this paper, we bridge the two regimes by exhibiting a new "double descent" risk curve that extends the traditional U-shaped bias-variance curve beyond the point of interpolation.
Specifically, the curve shows that as soon as the model complexity is high enough to achieve interpolation on the training sample---a point that we call the "interpolation threshold"---the risk of suitably chosen interpolating predictors from these models can, in fact, be decreasing as the model complexity increases, often below the risk achieved using non-interpolating models.
The double descent risk curve is demonstrated for a broad range of models, including neural networks and random forests, and a mechanism for producing this behavior is posited.
Authors: Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal
https://arxiv.org/abs/1812.11118 | Hi there. Today we're looking at Reconciling Modern Machine Learning and the Bias-Variance Trade-off by Mikhail Belkin et al. This paper struck me as interesting at ICML when I heard a talk by Mikhail Belkin, and the paper is very interesting in terms of what it proposes about modern machine learning. So what's the problem? They contrast classical machine learning, and how it is usually understood, namely in terms of bias-variance trade-offs, with modern machine learning, for example deep neural networks, which have very different properties. The best way to describe it is probably with an example. Let's say we have four data points. Here is a coordinate system in two dimensions: 1, 2, 3, 4, four data points. To these four data points we want to fit a function from x to y; y is our target, so it's kind of a regression problem. And let's say we have just one parameter which we can use to describe our function. Probably the best thing we could do is something like this, a line through the origin, where the only parameter is the slope of that line. So our model would be this one line, and it would pass basically through the data and describe it fairly well, as you can see. If we have two parameters, we can for example introduce a bias term and not force the line through the origin. For this line here, we now have the bias, the intercept, as well as the slope of the line as parameters, so two parameters. And if you look at this line, it describes the data a bit better than before; it passes through the center of the data. Now let's go to four parameters. It's well known that if I have the same number of parameters as I have data points, I can actually fit the data perfectly, and to do this you would use something like an order-four polynomial. Let's see if I can draw an order-four polynomial; okay, that's more wiggles than order four, but in any case, I can actually fit the data perfectly. Now let's contrast all of these functions and look at what the data distribution probably is. If I fill in the rest of the data that is not in our training set, it's maybe something like this. So which of these functions generalizes well to this unseen data? The first function isn't doing poorly, it's actually doing okay; the second function is doing even better, as we saw. So if we add a parameter to the first function it gets better, but if we then add more parameters it gets worse. This is what is taught in current machine learning classes as the phenomenon of overfitting: here the function that has the most parameters actually doesn't fit the unseen data well. What is troubling now is that modern architectures like neural networks often have even more parameters than there are data points in the data set. So they can fit the training data perfectly and still have spare capacity, and yet these models actually generalize fairly well. So this paper asks what's going on here, and what they propose is the following picture. 
So here we have the classical view of machine learning. On the x axis is the complexity of H, where H is the model class, the class of all the models you could fit. If, for example, it is every linear model with one parameter, that was our first model, and it would sit somewhere here at complexity one; here we'd have a complexity of two, where we added a parameter, then three parameters and four parameters. This is what we saw at the beginning: with one parameter we had some training risk here (risk is simply another term for loss), and then as we added a parameter the training loss decreased, the fit got better, and also the test loss on the unseen data decreased. So it got better on the test set as well as we added a parameter, but then, as we added more parameters, the model was able to fit the training data better and better, going to almost zero risk here, while on unseen data the performance actually got worse again. This, again, is what we teach as overfitting. These authors propose that this picture is incomplete, namely that it actually looks like this, and all we've done so far is look at the left-hand side: there is a peak here, which is called the interpolation threshold, and the interpolation threshold is roughly at the point where you have as many parameters as you have data points. After the interpolation threshold, if you add even more parameters, the training risk of course stays low, because you can fit the training data perfectly from the interpolation threshold onward, but the test risk actually decreases again, and this is really interesting. Let me just preempt this and say this is not due to regularization; it's not because people regularize their models or anything like this. In any case, regularization would actually move you towards less complexity of your model class, because if you regularize you're no longer able to fit certain models as easily or converge to them. So they propose that this is happening, they give some reasons why it might happen, and they give evidence that it is happening. Here is the evidence, and they show it, for example, with a random Fourier features classifier. So what are random Fourier features? They describe them here: if you have a data point x, you push it through a function, or rather through many of them; you sample capital N of these vectors v, and for each of the vectors you take the inner product with x and take the exponential function of it, and then you aggregate them. These are the random Fourier features, and on top of these features you then learn weights, so this is basically a linear classifier, not of the original features but of intermediary features which are fixed for a given random seed. The good thing is that you can decide how many intermediary features you want; the other good thing is that if you let N go to infinity, this actually becomes an infinite-dimensional kernel machine, a kernel SVM with a Gaussian kernel operating in an infinite-dimensional space, and if you don't go that far, it's an approximation to that. So it's a cool model where you can choose how many parameters you want, which makes it a perfect model to explore this phenomenon. So what are they doing? They are doing the following: they take MNIST and they just apply this model. On the x axis here is the number of parameters, that is, the number of random Fourier features that they construct, and here you can see the mean squared error on the test set. As you can see, at the beginning the error goes down, as proposed; then here is probably the sweet spot of classical machine learning; after that you start to overfit and it goes up again, there's a giant peak, and then it goes down again. Here at around 10,000 (I think they do it with a subset of MNIST, and around 10,000 is exactly the number of data points they use, or that multiplied by the number of classes, I don't remember exactly) you have roughly the same number of parameters as data points, and after that the test error decreases again. So as you give more and more features, every single classifier on this line is able to fit the training data perfectly, but they successively get less and less error on the test set. You can see it approaches this dotted line here, which is what you get if you perfectly solve the infinite-dimensional problem, so if you actually use a kernel SVM to solve this problem; this gives you kind of a lower bound. It shows really nicely that the random Fourier features classifier approximates the kernel SVM as you go higher and higher with capital N, and it is really interesting that this actually happens in practice. What they also look at here is the norm of the solution. Ideally they would want to use the norm in the Hilbert space, but they can't because it's hard to compute, so a proxy for this is simply the norm of the weight vector that you learn. As you add more parameters, the norm of the solution at first goes up, because you add more parameters, you fit each of them, and they each have some value; it peaks at the interpolation threshold, where you have a really high-norm solution, and after that the norm of the solution goes down again, and again it approaches the norm of the perfectly solved kernel machine. That's extremely interesting, and it's part of the explanation they give for why this is happening, namely the following: if you have too many parameters, what you might do, with the correct inductive bias, is find a low-norm solution. What does a low-norm solution mean? A low-norm solution means a relatively simple function; so as you add parameters, your model is better and better able to find a simple function that describes the training data, simple not in terms of fewer parameters, but simple in terms of how it moves between the training data points. If you imagine the training data again from before and you imagine fitting it perfectly: with the polynomial we drew before it looks one way, but if I have many, many more parameters I can do something like this, something that moves smoothly between the training data. It has many parameters, because there are many, many squiggles here, but it's a low-norm solution, and the low norm causes the solution to be smooth, whereas a high-norm solution that perfectly interpolates the training data would look something like this. So the authors 
here say if your inductive bias is able to find a low norm solution that perfectly fits the training data then that will generalize well and it turns out that modern architectures tend to find low norm solutions if you train them for example with SGD and and that's a so the combination of many parameters and low norm solutions will give you a smooth function and the smoothness of the function will be the thing that generalizes to unseen data because the smoothness kind of ensures that everything in between the data will be nicely kind of interpolated here here all right so that's the the perspective they go on from these random Fourier features to neural networks and what they do here is they train a neural network on M-nist with a one hidden layer so there's two weight layers now and again you can see as the as the number of parameters so this means basically the number of hidden no the increased the number of hidden nodes in the hidden layer and as the increased this the training and test error go down training error continues to go down test error goes up until the interpolation threshold again and then the test error drops again while the the training error continues to be almost zero and they do the same thing with decision trees and random forests and show the exact same thing that there is this interpolation threshold after which the test error drops even though the training error is almost zero so to me this is really remarkable and they show this in the appendix have many many more experiments where they they show this phenomenon happening on different data sets and on different architectures here random relu features and so on and it kind of gives a new perspective on generalization and why our models generalize so well they finally conclude with why hasn't this not been seen yet and they give some nice reasons basically that for example models where you can choose the models where you can choose the complexity for example random Fourier features are originally proposed as an approximation to kernel machines if you have too many data points and don't want to compute as many features so they they're basically only ever used in this regime where the classical paradigm holds and then neural networks in the other hand often are simply made super large and they say this peak here that they show is very localized and you might if you increase your neural network maybe you try one at this size this size this size and this size and all you then see is kind of a downward trajectory you kind of miss this peak so it leads to the impression that simply oh bigger neural networks perform better yeah so I found this interesting I hope you did as well and definitely check out more of this group's work that was it for now have a nice day | [{"start": 0.0, "end": 5.2, "text": " Hi there. Today we're looking at reconciling modern machine learning and the"}, {"start": 5.2, "end": 11.72, "text": " bias variance trade off by Mikhail Belkin at all. So this paper struck me as"}, {"start": 11.72, "end": 19.5, "text": " interesting at ICML when I heard a talk by Mike Mikhail Belkin and the kind of"}, {"start": 19.5, "end": 26.0, "text": " the paper is very interesting in terms of what it proposes about modern"}, {"start": 26.0, "end": 31.2, "text": " machine learning. So what's the problem? 
The problem is they contrast what they"}, {"start": 31.2, "end": 38.68, "text": " call classical machine learning and how to understand machine learning namely"}, {"start": 38.68, "end": 45.36, "text": " in terms of bias variance trade-offs and modern machine learning where it's for"}, {"start": 45.36, "end": 52.32, "text": " example deep neural networks which have very different properties. So basically"}, {"start": 52.32, "end": 56.8, "text": " the best way to describe it is probably with an example. So let's say we have"}, {"start": 56.8, "end": 66.36, "text": " four data points. Here is a coordinate system in two dimensions. So 1, 2, 3, 4,"}, {"start": 66.36, "end": 79.72, "text": " four data points. Yeah, why not? All right. So these four data points we want to"}, {"start": 79.72, "end": 85.88, "text": " fit a function from x to y. Why is our target? So it's kind of a regression"}, {"start": 85.88, "end": 93.52, "text": " problem. And let's say we have just one parameter which which we can use to"}, {"start": 93.52, "end": 98.08, "text": " describe our function. Probably the best thing we could do is to do something"}, {"start": 98.08, "end": 106.16, "text": " like this, right, which is a line and the only parameter here is the slope of"}, {"start": 106.16, "end": 114.11999999999999, "text": " that line. So the kind of our model would be this one line and it would pass"}, {"start": 114.11999999999999, "end": 119.28, "text": " basically through the data and would describe the data fairly well as you can"}, {"start": 119.28, "end": 125.03999999999999, "text": " see. If we have two parameters now we can introduce for example a bias term"}, {"start": 125.03999999999999, "end": 132.16, "text": " and not have the line at the origin. So this line here now we have the bias"}, {"start": 132.16, "end": 137.92, "text": " which is the distance to this point to describe it as well as the slope of"}, {"start": 137.92, "end": 143.51999999999998, "text": " this line as parameters. So two parameters. And if you look at this line here it"}, {"start": 143.51999999999998, "end": 149.04, "text": " describes the data a bit better than before, right? It passes kind of through"}, {"start": 149.04, "end": 155.12, "text": " the center of the data. Now if we go to three or four parameters let's go to"}, {"start": 155.12, "end": 159.88, "text": " four parameters. It's well known that if I have the same number of parameters as"}, {"start": 159.88, "end": 167.07999999999998, "text": " I have the as I have data points actually fit the data perfectly and how to do"}, {"start": 167.07999999999998, "end": 175.68, "text": " this it would be like an order for polynomial which let's let's see if I can"}, {"start": 175.68, "end": 187.92, "text": " draw an order for polynomial it needs to go. It needs to rip and then okay well"}, {"start": 187.92, "end": 198.95999999999998, "text": " no that's okay that's more than order for. In any case I can fit actually the"}, {"start": 198.95999999999998, "end": 204.48, "text": " data perfectly. Now if you think about all of these functions let's contrast"}, {"start": 204.48, "end": 211.51999999999998, "text": " these right? Let's contrast them and let's look at what is the what is the"}, {"start": 211.51999999999998, "end": 217.67999999999998, "text": " the data distribution probably right? 
Data distribution is probably if I fill in"}, {"start": 217.68, "end": 223.32, "text": " the rest of the data that is not in our training set maybe something like this"}, {"start": 223.32, "end": 232.4, "text": " right? So which of these functions generalizes well to this general data the"}, {"start": 232.4, "end": 237.64000000000001, "text": " unseen data? Probably the first function not doing very poorly the first"}, {"start": 237.64000000000001, "end": 242.96, "text": " function actually doing okay the second function doing even better as we saw"}, {"start": 242.96, "end": 248.12, "text": " right? And then if we so if we add a parameter to the first function it gets"}, {"start": 248.12, "end": 253.44, "text": " better but if we then add more parameters it gets worse. So this is kind of"}, {"start": 253.44, "end": 257.52, "text": " taught in current machine learning classes as the phenomenon of overfitting"}, {"start": 257.52, "end": 263.76, "text": " whereas here here the function that has the most parameters actually doesn't fit"}, {"start": 263.76, "end": 269.84000000000003, "text": " well. What is troubling now is that if you think of things like neural networks"}, {"start": 269.84, "end": 275.32, "text": " modern architectures they actually have even more they have more oftentimes more"}, {"start": 275.32, "end": 281.23999999999995, "text": " parameters than there are data points in the data set. So they can fit the"}, {"start": 281.23999999999995, "end": 288.32, "text": " training data perfectly and still have kind of spare room spare capacity and"}, {"start": 288.32, "end": 295.64, "text": " these models actually generalize fairly well. So this paper asks what's going"}, {"start": 295.64, "end": 301.47999999999996, "text": " on here and what they propose is the following picture. So here we have a"}, {"start": 301.47999999999996, "end": 309.2, "text": " classical view of machine learning on the x axis is the complexity of h and you"}, {"start": 309.2, "end": 315.36, "text": " can think of the complexity of the this is h is the model class h is the class of"}, {"start": 315.36, "end": 323.56, "text": " all the models you could fit. So if if for example it would be every linear"}, {"start": 323.56, "end": 328.36, "text": " model with one parameter this this was our first model right. Our first model"}, {"start": 328.36, "end": 333.04, "text": " would be somewhere here one the complexity is one and then here we'd have a"}, {"start": 333.04, "end": 337.6, "text": " complexity of two where we added a parameter three parameters and four"}, {"start": 337.6, "end": 342.8, "text": " parameters and this is what we saw right at the beginning one parameter we had"}, {"start": 342.8, "end": 349.12, "text": " some some a training risk risk here simply another term for loss but some"}, {"start": 349.12, "end": 354.08, "text": " training loss right fit and then as we added a parameter the training loss"}, {"start": 354.08, "end": 361.08, "text": " decreased right they got better and also the test the test loss on the"}, {"start": 361.08, "end": 365.96, "text": " unseen data decreased. 
So it got better on the test set as well as we added"}, {"start": 365.96, "end": 370.64, "text": " parameter but then as we added more parameters it was able to fit the training"}, {"start": 370.64, "end": 378.36, "text": " data better and better going to almost zero risk here but on unseen data the"}, {"start": 378.36, "end": 384.08000000000004, "text": " performance actually got worse again and that's the that again this is the"}, {"start": 384.08000000000004, "end": 388.6, "text": " what we teach as overfitting these authors proposed this is incomplete"}, {"start": 388.6, "end": 393.64, "text": " namely the picture actually looks like this and all we've done so far is"}, {"start": 393.64, "end": 401.12, "text": " look at this left hand side here namely that there is a peak here and this is"}, {"start": 401.12, "end": 404.72, "text": " called the interpolation threshold and the interpolation threshold is"}, {"start": 404.72, "end": 409.28000000000003, "text": " roughly at the point where you have as many parameters as you have data"}, {"start": 409.28000000000003, "end": 415.24, "text": " points and after the interpolation threshold if you give even more"}, {"start": 415.24, "end": 419.52000000000004, "text": " parameters the training risk of course stays low because you can fit the"}, {"start": 419.52000000000004, "end": 425.72, "text": " training data perfectly from the interpolation threshold forward but the test"}, {"start": 425.72, "end": 432.56, "text": " risk actually decreases again and this is really this is really interesting and"}, {"start": 432.56, "end": 439.28000000000003, "text": " let me just preempt this and say this is not due to regularization so it's"}, {"start": 439.28000000000003, "end": 444.24, "text": " not because people regularize their models or anything like this in any case"}, {"start": 444.24, "end": 449.8, "text": " regularization would actually move you to less of a complexity of your model"}, {"start": 449.8, "end": 455.2, "text": " class because now if you regularize you're no longer able to fit certain"}, {"start": 455.2, "end": 464.47999999999996, "text": " models as easily or converge to them so they they propose that this is"}, {"start": 464.47999999999996, "end": 468.15999999999997, "text": " happening and they give some reason why this might happen and they give some"}, {"start": 468.15999999999997, "end": 473.68, "text": " evidence that this is happening so here is the evidence that this is happening"}, {"start": 473.68, "end": 481.03999999999996, "text": " and they do this here for example this is a random Fourier features"}, {"start": 481.04, "end": 486.20000000000005, "text": " classifier so what are random Fourier features they describe them here so if you"}, {"start": 486.20000000000005, "end": 498.24, "text": " have a data point x what you do is you push this through a function which or"}, {"start": 498.24, "end": 504.48, "text": " you push this through many of them you sample capital in of these vectors v and"}, {"start": 504.48, "end": 511.20000000000005, "text": " and of each of the vectors we take the inner product and raise it raise it take the"}, {"start": 511.20000000000005, "end": 520.08, "text": " exponential function of it and then aggregate them and these these random Fourier"}, {"start": 520.08, "end": 523.28, "text": " features these are the random Fourier features and these then are the weights"}, {"start": 523.28, "end": 530.6, "text": " that you learn so this is basically a linear classifier but not of the original"}, 
{"start": 530.6, "end": 536.5600000000001, "text": " features but of intermediary features which are fixed for a given random"}, {"start": 536.5600000000001, "end": 541.5600000000001, "text": " seed and the good thing is here you can sample you can decide how many"}, {"start": 541.5600000000001, "end": 547.4, "text": " intermediary features that you want the other good thing is if you let n go to"}, {"start": 547.4, "end": 554.6800000000001, "text": " infinity this actually becomes a infinite dimensional kernel machine so it"}, {"start": 554.68, "end": 560.88, "text": " becomes a kernel SVM with Gaussian kernel which is operating in an infinite"}, {"start": 560.88, "end": 568.16, "text": " dimensional space but if you don't go as far then it's just an approximation to"}, {"start": 568.16, "end": 571.88, "text": " that so this it's a cool it's a cool model where you can choose how many"}, {"start": 571.88, "end": 578.9599999999999, "text": " parameters you want so it's a perfect model to explore this this phenomenon so"}, {"start": 578.96, "end": 585.76, "text": " what are they doing they are doing the following they take Mnist and they just"}, {"start": 585.76, "end": 592.32, "text": " apply this model and on the x axis here are the number of parameters that they"}, {"start": 592.32, "end": 600.1600000000001, "text": " and the number of random Fourier features that they construct and here you can"}, {"start": 600.16, "end": 609.24, "text": " see the mean squared error on the test set so as you can see on at the beginning"}, {"start": 609.24, "end": 616.28, "text": " the error goes down as proposed right but then here is probably this sweet"}, {"start": 616.28, "end": 621.3199999999999, "text": " spot of classical machine learning after that you start to overfit it goes up"}, {"start": 621.3199999999999, "end": 629.9599999999999, "text": " again there's a giant peak and then it goes down again as you so here"}, {"start": 629.96, "end": 636.0, "text": " 10,000 they I think they do it with a subset of Mnist if I remember correctly"}, {"start": 636.0, "end": 642.2, "text": " and 10 around 10,000 is exactly the number of data points they use or"}, {"start": 642.2, "end": 648.8000000000001, "text": " multiplied by the classes I don't remember correctly but in any case at this"}, {"start": 648.8000000000001, "end": 658.2, "text": " number you have the same amount of parameters as data points roughly or and"}, {"start": 658.2, "end": 664.96, "text": " after that the test error decreases again so as you give more and more and"}, {"start": 664.96, "end": 670.44, "text": " more features every every single classifier on this line is able to fit the"}, {"start": 670.44, "end": 676.32, "text": " training data perfectly but they successfully get less and less error on the"}, {"start": 676.32, "end": 683.84, "text": " test set you can see it approaches this this dotted line here which is if you"}, {"start": 683.84, "end": 688.2800000000001, "text": " perfectly solve the infinite dimensional problem so if you actually use a"}, {"start": 688.2800000000001, "end": 695.24, "text": " kernel SVM to solve this problem that that is kind of you can see this gives you"}, {"start": 695.24, "end": 701.6800000000001, "text": " a lower bound so you can really be can really shows nicely that the random"}, {"start": 701.6800000000001, "end": 707.1600000000001, "text": " Fourier features classifier approximates this as you go higher and higher with"}, {"start": 707.16, "end": 714.48, "text": " capital and it actually 
approximates the kernel SVM and this is really"}, {"start": 714.48, "end": 720.68, "text": " interesting that this actually happens in practice and what they also see here"}, {"start": 720.68, "end": 726.68, "text": " is when they look at the norm of the solution so the norm of the solution they"}, {"start": 726.68, "end": 733.9599999999999, "text": " calculate as basically the they want to use ideally the norm in the"}, {"start": 733.96, "end": 739.12, "text": " Hilbert space but they can't because it's hard to compute so a proxy for this"}, {"start": 739.12, "end": 746.76, "text": " is simply the norm of the weight vector that you learn and the norm of the"}, {"start": 746.76, "end": 752.6800000000001, "text": " solution as you add more parameters of course the first it goes up because you"}, {"start": 752.6800000000001, "end": 758.8000000000001, "text": " add more kind of more parameters you fit each of them they have some value and"}, {"start": 758.8, "end": 768.04, "text": " then it goes up and it peaks at this interpolation threshold there you have a"}, {"start": 768.04, "end": 773.4399999999999, "text": " really high norm solution and after that the norm goes down again of the"}, {"start": 773.4399999999999, "end": 781.8399999999999, "text": " solution and again it approximates the norm of the of the perfectly solved"}, {"start": 781.8399999999999, "end": 787.4, "text": " kernel machine so that's extremely interesting and it's a part of an"}, {"start": 787.4, "end": 795.64, "text": " explanation they give why this is happening namely the following if you have"}, {"start": 795.64, "end": 802.52, "text": " too many parameters what you might do with the correct inductive bias is find a"}, {"start": 802.52, "end": 807.4, "text": " low norm solution and what does a low norm solution mean a low norm solution"}, {"start": 807.4, "end": 813.4, "text": " means a relatively simple function so as you add parameters your model is"}, {"start": 813.4, "end": 819.72, "text": " better and better able to find a simple function that describes the training"}, {"start": 819.72, "end": 826.92, "text": " data not in terms of not in terms of simple of less parameters but simple in"}, {"start": 826.92, "end": 833.04, "text": " terms of how it moves between the training data so if you imagine the the"}, {"start": 833.04, "end": 842.28, "text": " training data again from before it's actually and you imagine it perfectly"}, {"start": 842.28, "end": 847.68, "text": " fit this polynomial here right that we drew before parameters if I have many"}, {"start": 847.68, "end": 854.64, "text": " many many more parameters I can do something like yeah I have many parameters"}, {"start": 854.64, "end": 860.52, "text": " but I can be the kind of pretty good but they have late right so this something"}, {"start": 860.52, "end": 866.64, "text": " like this here grab this here I grab this something like this and this moves"}, {"start": 866.64, "end": 870.72, "text": " smoothly between the training data as many parameters because there's many"}, {"start": 870.72, "end": 875.6, "text": " many squiggles here but it's a low norm solution the low norm will cause the"}, {"start": 875.6, "end": 882.6, "text": " solution to kind of be smooth whereas a high norm solution that perfectly"}, {"start": 882.6, "end": 891.48, "text": " interpolates the training data would look something like this right so the"}, {"start": 891.48, "end": 898.88, "text": " authors here say if your inductive bias is able to find a low norm solution that"}, 
{"start": 898.88, "end": 906.4399999999999, "text": " perfectly fits the training data then that will generalize well and it turns out"}, {"start": 906.4399999999999, "end": 913.0, "text": " that modern architectures tend to find low norm solutions if you train them for"}, {"start": 913.0, "end": 920.88, "text": " example with SGD and and that's a so the combination of many parameters and"}, {"start": 920.88, "end": 926.04, "text": " low norm solutions will give you a smooth function and the smoothness of the"}, {"start": 926.04, "end": 933.0799999999999, "text": " function will be the thing that generalizes to unseen data because the smoothness"}, {"start": 933.0799999999999, "end": 941.9599999999999, "text": " kind of ensures that everything in between the data will be nicely kind of"}, {"start": 941.9599999999999, "end": 950.12, "text": " interpolated here here all right so that's the the perspective they go on from"}, {"start": 950.12, "end": 956.2, "text": " these random Fourier features to neural networks and what they do here is they"}, {"start": 956.2, "end": 962.88, "text": " train a neural network on M-nist with a one hidden layer so there's two weight"}, {"start": 962.88, "end": 971.24, "text": " layers now and again you can see as the as the number of parameters so this"}, {"start": 971.24, "end": 974.6800000000001, "text": " means basically the number of hidden no the increased the number of hidden"}, {"start": 974.6800000000001, "end": 979.72, "text": " nodes in the hidden layer and as the increased this the training and test"}, {"start": 979.72, "end": 984.9200000000001, "text": " error go down training error continues to go down test error goes up until the"}, {"start": 984.9200000000001, "end": 991.28, "text": " interpolation threshold again and then the test error drops again while the"}, {"start": 991.28, "end": 1002.6800000000001, "text": " the training error continues to be almost zero and they do the same thing with"}, {"start": 1002.6800000000001, "end": 1008.08, "text": " decision trees and random forests and show the exact same thing that there is"}, {"start": 1008.08, "end": 1014.2, "text": " this interpolation threshold after which the test error drops even though the"}, {"start": 1014.2, "end": 1023.76, "text": " training error is almost zero so to me this is really remarkable and they show"}, {"start": 1023.76, "end": 1029.24, "text": " this in the appendix have many many more experiments where they they show"}, {"start": 1029.24, "end": 1033.68, "text": " this phenomenon happening on different data sets and on different"}, {"start": 1033.68, "end": 1042.48, "text": " architectures here random relu features and so on and it kind of gives a new"}, {"start": 1042.48, "end": 1050.6000000000001, "text": " perspective on generalization and why our models generalize so well they"}, {"start": 1050.6000000000001, "end": 1057.48, "text": " finally conclude with why hasn't this not been seen yet and they give some"}, {"start": 1057.48, "end": 1067.84, "text": " nice reasons basically that for example models where you can choose the models"}, {"start": 1067.84, "end": 1075.52, "text": " where you can choose the complexity for example random Fourier features are"}, {"start": 1075.52, "end": 1080.2, "text": " originally proposed as an approximation to kernel machines if you have too many"}, {"start": 1080.2, "end": 1085.08, "text": " data points and don't want to compute as many features so they they're basically"}, {"start": 1085.08, "end": 1090.0, "text": " only ever used 
in this regime where the classical paradigm holds and then"}, {"start": 1090.0, "end": 1096.76, "text": " neural networks in the other hand often are simply made super large and they say"}, {"start": 1096.76, "end": 1103.4399999999998, "text": " this peak here that they show is very localized and you might if you increase"}, {"start": 1103.4399999999998, "end": 1108.4399999999998, "text": " your neural network maybe you try one at this size this size this size and this"}, {"start": 1108.4399999999998, "end": 1113.1599999999999, "text": " size and all you then see is kind of a downward trajectory you kind of miss"}, {"start": 1113.16, "end": 1117.8000000000002, "text": " this peak so it leads to the impression that simply oh bigger neural networks"}, {"start": 1117.8000000000002, "end": 1126.3200000000002, "text": " perform better yeah so I found this interesting I hope you did as well and"}, {"start": 1126.3200000000002, "end": 1132.4, "text": " definitely check out more of this group's work that was it for now have a nice"}, {"start": 1132.4, "end": 1146.24, "text": " day"}] |
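A rough, self-contained sketch of the random-Fourier-features double-descent sweep described in the segments above: fit a minimum-norm least-squares classifier on a growing number of random features and track test error and weight norm across the interpolation threshold. The dataset (scikit-learn's small digits set standing in for the MNIST subset), the cos/sin feature map, and the gamma value are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical sketch of the random-Fourier-features double-descent sweep:
# fix a training set, grow the number N of random features past the
# interpolation threshold, and watch test error and weight norm.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)          # small stand-in for the MNIST subset
Y = np.eye(10)[y]                            # one-hot targets for a least-squares fit
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, train_size=1000, random_state=0)

def random_fourier_features(X, V, gamma=0.01):
    # Real (cos/sin) form of the random Fourier feature map; as N grows this
    # approximates a Gaussian-kernel machine. gamma is an arbitrary choice here.
    Z = gamma * X @ V.T
    return np.hstack([np.cos(Z), np.sin(Z)]) / np.sqrt(V.shape[0])

for N in [10, 100, 500, 1000, 2000, 10000]:  # sweep across the interpolation threshold
    V = rng.normal(size=(N, X.shape[1]))
    Phi_tr = random_fourier_features(X_tr, V)
    Phi_te = random_fourier_features(X_te, V)
    # Minimum-norm least-squares solution (pinv), mirroring the "low-norm" story.
    W = np.linalg.pinv(Phi_tr) @ Y_tr
    train_mse = np.mean((Phi_tr @ W - Y_tr) ** 2)
    test_mse = np.mean((Phi_te @ W - Y_te) ** 2)
    print(f"N={N:6d}  ||W||={np.linalg.norm(W):8.2f}  train={train_mse:.4f}  test={test_mse:.4f}")
```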
Yannic Kilcher | https://www.youtube.com/watch?v=l8JeokY5NsU | Conversation about Population-Based Methods (Re-upload) | Being interviewed by Connor Shorten of Henry AI Labs (https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw) on the topic of population-based methods and open-ended learning.
Tutorial: https://www.facebook.com/icml.imls/videos/481758745967365/
Book: https://www.amazon.com/dp/B00X57B4JG/ | Hi there. I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten. And what follows is the resulting conversation we had about population-based methods and open-ended learning, things like that, basically topics of the ICML tutorial that we both saw. It's important to note that none of us is really an expert on the topic, but we are trying to make sense of it. And, maybe just kind of talking about the ideas. So, please enjoy the conversation with Connor Shorten. Definitely check out the Henry AI Labs channel. And, yeah, have a good time. You're watching the Henry AI Labs Deep Learning Podcast. Today I'm joined with Yannic Kilcher. Yannic is working in the Data Analytics Lab at ETH. He has a great YouTube channel. I really enjoy watching his Paper Summary videos. If you like any of the videos that I'm making, you'll definitely also like checking out this channel. I'm going to put the link in the description at the end of the talk. So, Yannic, thanks for doing this and we really appreciate it. Thanks for having me. It's cool. So, what we're going to talk about is population-based search and a presentation at ICML that I really thought was interesting about emphasizing diversity and novelty in search. So, the first question I just wanted to start by generally talking about your opinion on population-based search and the differences between population-based search and gradient descent going straight for one solution. Yeah, so the kind of main difference is that in population-based search, as the name implies, you maintain kind of a large population of solutions. So, you don't want to limit yourself to just one trajectory, say I start here, and then I run towards my goal. But you can maintain a lot of hypotheses of what the solution could be, and then you kind of want to update all of them at the same time. And so, there's many different variants of population-based search, but they all have this thing in common where you maintain many solutions and you kind of bet on one of them becoming a good one, basically. Yeah, so one other thing they present in their paper where they have the robot walking and if it breaks one of its legs, for example, it can go back to the MAP-Elites table and say, OK, well, I lost this leg, but I think maybe this solution, I wasn't too clear on how that would really be related. So, maybe wondering if you had more insight on that. Yeah, so the context is, yeah, you want to teach a robot to walk, and the robot had six legs, I believe. And if you think of what's a solution to the problem, the solution is kind of an algorithm that takes the current sensor input and outputs how to move the motors. And if you just have like some gradient descent algorithm converging on the best solution of how to move the robot, it's just going to be like, oh, these are the sensors, OK, I'm going to move like this, like this, like this. But if one leg breaks, of course, you've lost, you know, this one way of moving, and now the, sorry. So you only know this one way of moving, basically. And that's it. But in population based search, if you think of the solution as a way to move, you maintain many, many ways to move. So you basically, the objective, if you can call it like this, is: algorithm, find me a lot of different ways to move, right, with my six legs. And now if one of my legs breaks, I still can evaluate all of them. I still can find, OK, which one's the best?
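A minimal sketch of the population-based idea just described, assuming a toy gait representation and a stand-in fitness function rather than a real robot simulator; the point is only that a whole repertoire of ways to move is kept around and can be re-scored when a leg breaks.

```python
# Minimal sketch (not the tutorial's algorithm) of keeping a population of
# candidate gaits and re-evaluating all of them when the environment changes.
import random

random.seed(0)

def simulate_walk(gait, broken_leg=None):
    # Stand-in fitness: a real implementation would run a physics simulator.
    # Here each gait is just a list of 6 leg amplitudes in [0, 1].
    legs = [a if i != broken_leg else 0.0 for i, a in enumerate(gait)]
    return sum(legs) - 2.0 * (max(legs) - min(legs))  # toy reward for using legs evenly

def mutate(gait):
    return [min(1.0, max(0.0, a + random.gauss(0, 0.1))) for a in gait]

# Maintain many hypotheses instead of one converged controller.
population = [[random.random() for _ in range(6)] for _ in range(50)]
for _ in range(200):
    child = mutate(random.choice(population))
    worst = min(range(len(population)), key=lambda i: simulate_walk(population[i]))
    if simulate_walk(child) > simulate_walk(population[worst]):
        population[worst] = child

# Environment change: leg 3 breaks. Re-score the whole repertoire and pick the
# best surviving behaviour instead of being stuck with one way to move.
best_after_damage = max(population, key=lambda g: simulate_walk(g, broken_leg=3))
print("fitness with broken leg:", simulate_walk(best_after_damage, broken_leg=3))
```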
But if now one of them falls away, I have all these other solutions that I can try. Right? So then what they would do is like this life was with, now they just re-evaluate all of those solutions while only having five legs. And the best of those, like, is much more likely to kind of work than if you had just your single solution. So that kind of, that's the, it's population based because you maintain many different ways of solving the problem. Yes, I was also thinking about like using the search algorithms that control neural architecture search and things like that. So it's trying to think of how you might extend these ideas from the robot walking with six legs to the RNM controller designing the convolutional network. Like maybe I might have like more of a storage constraint or more of a latency constraint. And I could jump to a different solution like that. I'm just wondering how you think like these ideas of population based search translate into the neural architecture search. And specifically if it really is important because like you've got, I feel like in neural architecture search, you have such a direct signal with the classification accuracy. Like I don't see as much variance that was in the objective function. Yeah, I really think this population based approach is they shine in. So they shine in multiple different areas, but one area where they shine is definitely when the environment changes. So when you know something about whatever your input changes like the robot losing a leg. So in kind of neural architecture search, you might, you might find these methods working if you then go for let's say transfer learning. So you can not train your network on one task. You want to actually do it in another task, right? And then if you maintain many solutions and you can evaluate all of them in a in this transfer setting, it's much more likely that one of them, you know, it's going to be, is going to be fine. So, but you're right of I also believe that directly in architecture search, maybe it's not. Maybe it doesn't yield that many great results though the other, of course, the other. Area where these methods shine and this is with respect to algorithms like novelty search. Which can be implemented as a population based method is they gave this really good example of deception in a search problem. So a deception would be like if you have a robot walking a maze and the robot just wants to get to the goal, right? And you would program it the robot to be rewarded the closer it gets to the goal. But if like there's a wall in between and you actually need to go around the wall kind of then for a while, you would need to move away from the goal in order to reach it. So if you have like a pure objective driven approach, you just go straight to the goal. You would always get stuck at the wall. But if you then kind of do what is called a novelty search where you basically reward the robot for things it has never done before, it would actually find its way around the wall. So you can maintain population of solutions that all kind of explore the space. And that in or neural architecture search, maybe it's of a benefit that actually, you know, if I probably always benefit from like adding more layers or neurons or something like this. But maybe I actually want to prune some stuff first and then add some more stuff. So I maybe want to get worse first before I can get even better, right? So so it reached where I can imagine happening, but I don't know. 
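A small sketch of the novelty-search idea from the maze example, assuming the behaviour of a policy is summarised as its final (x, y) position and novelty is the average distance to the nearest behaviours already stored in an archive; the rollout function is a stand-in, not a real maze simulator.

```python
# Hedged sketch of novelty search: score a candidate not by distance to the
# goal, but by how far its resulting behaviour is from behaviours seen before.
import random

random.seed(0)

def novelty(behaviour, archive, k=5):
    if not archive:
        return float("inf")
    dists = sorted(((behaviour[0] - b[0]) ** 2 + (behaviour[1] - b[1]) ** 2) ** 0.5
                   for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def run_policy(params):
    # Stand-in for the maze rollout: a real version would simulate the robot
    # and return where it ended up; here the behaviour is just a noisy 2-D point.
    return (params[0] + random.gauss(0, 0.05), params[1] + random.gauss(0, 0.05))

archive = []
population = [[random.random(), random.random()] for _ in range(20)]
for _ in range(100):
    child = [p + random.gauss(0, 0.1) for p in random.choice(population)]
    b = run_policy(child)
    if novelty(b, archive) > 0.1:   # only genuinely new behaviours enter the archive
        archive.append(b)
        population.append(child)    # keep exploring from novel individuals
print("archive size:", len(archive))
```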
Yeah, I was thinking of the changing environment. I definitely think like when you deploy a model and then you're getting new data that you could frame that as a changing environment. And then also thinking about like in the context of GANs, which is something that I think is really interesting that the discriminator classifying the GAN, the generator samples, it's a changing environment because of the generator's updates. So maybe having some kind of population based GAN or discriminator model might help it avoid that like continual learning problem, I guess, is sort of an eat. Yeah, that that could that might as might very well be there are approaches to GANs, I believe, where you have like many discriminators and each one kind of only has, let's say, has its own limited view on the day turn, you're trying to kind of fool all of them at the same time, but it's not the same thing. But yes, I think that I might make sense. Yeah, I've seen that multiple generator multiple discriminator model too. I think that's really interesting as well. So then one other thing I was curious about is this idea of goal switching and how that might relate to the autoML on our existing. I'm like heavily studied things like classification, localization, semantic segmentation, like how do you think goal switching could be important? Like one idea I had is maybe if you've got like multi classification and it's got like a really low false positive rate or something on like one class, you might say, well, you somehow learn a decision boundary on that class. Or do you think that that wouldn't generalize and that there's no sense in goal switching in like a multi class classification problem? So in general, well, you think of goal switching in general, how they introduced it was also in the context of like this population based search of these mappolites. Maybe it's kind of so what mappolites the algorithm does basically is it says, okay, I have a number of dimensions that I could solve the problem on and they introduced, okay, let's take life on earth needs to whatever survive. So I can either be like a super tall creature right to reach food that no one else can reach. I could be a super fast creature right to kind of run away from everything or it can be a super heavy creature so that no one can attack me. And so these are kind of the dimensions that you can solve the problem of reproduction and survival and within. So what mappolites does it it would segment this area. So let's say size and speed it would segment this into a grid and in each grid it would kind of maintain the best solution so far that is within that grid. And then what they see is when they then kind of evolve this over time and improve each each grid is that inventions let's say inventions algorithm discoveries in one grid say for a very fast creature they would then kind of be adapted to the very let's say the very heavy creature. So it's like fast creature kind of discovers all longer legs make me even faster maybe the longer legs can be combined in the heavy creature to do something else so this kind of goal switching it's think of like feathers being first kind of developed or evolved for warmth for temperature regulation then being goal switched over to adapted for flight. 
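A rough sketch of the MAP-Elites bookkeeping described above, assuming two made-up behaviour dimensions ("size" and "speed") and a toy fitness function; the archive keeps only the best solution found so far in each grid cell, and mutating elites from one cell can fill or improve other cells, which is the goal-switching effect in the tutorial's story.

```python
# Rough MAP-Elites sketch: bin solutions by behaviour descriptors and keep
# only the best-performing elite per bin.
import random

random.seed(0)

def evaluate(genome):
    # Stand-in for the real problem: returns (fitness, size, speed).
    size, speed = genome[0], genome[1]
    fitness = 1.0 - abs(size - speed)          # arbitrary toy objective
    return fitness, size, speed

def cell(size, speed, bins=10):
    return (min(int(size * bins), bins - 1), min(int(speed * bins), bins - 1))

archive = {}                                    # cell -> (fitness, genome)
for _ in range(5000):
    if archive and random.random() < 0.9:
        # Mutate a random elite: inventions made in one cell can seed others.
        parent = random.choice(list(archive.values()))[1]
        genome = [min(1.0, max(0.0, g + random.gauss(0, 0.1))) for g in parent]
    else:
        genome = [random.random(), random.random()]
    fitness, size, speed = evaluate(genome)
    c = cell(size, speed)
    if c not in archive or fitness > archive[c][0]:
        archive[c] = (fitness, genome)
print("filled cells:", len(archive), "best:", max(f for f, _ in archive.values()))
```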
So in the in terms of all T class classification I guess it's a bit of a different problem if you just have one classifier we can definitely make the argument that since you know you're learning maybe to classify one class really well the low falls positive rate you have learned very good features for that class and if some other class kind of like the zebra is a horse with stripes and then. The horse is a horse but with the features stripes being really low you can probably classify that better something making stuff up here but it's a bit of a different context I feel the if you have a single class fair do multi class classification but definitely the logic applies in the features space I would say where you learn features for one class and they might become useful for another class. Yeah, I have this other thoughts sort of when you're discussing as like what about like multi class multi task learning like maybe my intermediate features get map to a classifier get map to a segmentation get map to again like could goal switching improve multi task learning. Yeah, I would definitely say so I think that that's exactly what we're seeing when you look at for example pre training so if you think of like this these newest big language models like Bert or something they're really good at tasks I don't know what it was an on the task labeling of sentiment sentiment classification is the classic right. If they evaluate on that because it's so easy but let's say birds is really good at sentiment classification but if you were to just to train it outright on sentiment classification is probably not going to work because there's just too little signal but then what happens is you pre train it as a language model as this mask language model and it kind of gets really good at simply comprehending language and that skill can then be kind of adapted over into the language. So I think if you look at the up something like pre training or multi task as you say then definitely one the addition of a task might give rise to certain features that then all of a sudden can be adapted by another task. If you just trained the latter task by itself that maybe would have been too difficult. So yeah, there's definitely an analogy. So then what I think about is so I'm going from my pre training language model into sentiment classification maybe I also add like question answering document summarization named entity like this like vector of task that it can go do. I'm then curious like when your goal switching it's like how do you then combine the features later on or you just like take it as if I need this task go go to this model likes yeah. Well the question here is do you whether or not you implement this as a single model and kind of refer to the goal switching of features within that model or whether you also do this now as a population based method where basically you maintain you maintain different neural networks for different combination of these tasks. Then you did actually need a method to kind of combine and reproduce the neural networks themselves which I yeah I see that's that's going to be a bit of a different task like some cross distillation or some something crazy. I don't know how that work exactly. 
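A hedged sketch of the feature-sharing idea in a multi-task setting, assuming a small PyTorch model with one shared encoder and a few hypothetical task heads (the head names are placeholders, not tasks from the talk); features learned because of one task are directly available to the others.

```python
# Sketch of a shared encoder with several task heads: features trained for one
# task (e.g. a pre-training objective) can be reused by the other heads.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=128, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden), nn.ReLU())
        # Hypothetical task heads; names and output sizes are placeholders.
        self.heads = nn.ModuleDict({
            "sentiment": nn.Linear(hidden, 2),
            "question_answering": nn.Linear(hidden, 10),
            "ner": nn.Linear(hidden, 5),
        })

    def forward(self, x, task):
        return self.heads[task](self.encoder(x))

model = MultiTaskModel()
x = torch.randn(4, 128)
for task in model.heads:
    print(task, model(x, task).shape)   # same shared features, different heads
```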
Yeah, I just wonder about two things it's like do from a population based search could you have like the weights be the population like different sets of weights or would it necessarily need to be like taking apart the layers and designing new internal like cells is in the architecture search like because if I just have the weights maybe I could treat the diversity search or goal switching as like stochastic weight averaging and just like mesh them all together when I'm finished with my goal switching at the end. But it's yeah. It definitely be if you wanted to if you if you wanted to if you wanted to implement your multitask multitask tasking as a population based approach where yeah you could definitely give you an easier time if you keep the architecture of your neural networks the same and simply have different weights and then you could indeed consider something like weight averaging or or yeah I guess more modern approach will be like distillation from the two teacher models into one child months actually good metaphor for for reproduction kind of distillation from multiple teacher model don't know if anyone's done that yet but yeah. I guess that might be the way to do it if you also maintain different architectures for different problems that might be a bit of a. Yeah that's an interesting thing too if you have the goal switching and then you model distillate all into one model that is yes well if you think of map elites right you it's simply you simply distill it into the appropriate I don't even know what the what the axis would be. Probably I can imagine okay you have like three tasks so you have three axis and then you'd mix the task maybe in accordance on how far up your of these axis you are or something like this. It's not exactly map elites because your actual objectives are on the axis but. Yes pretty cool so just to backtrack one step I want to talk about like diversity centric search novelty like when I was thinking about that I was like can you just initialize it such that it has maximum diversity like can you just initialize the population such that they're all like uniformly spaced and then search locally from there so I just wonder what you think on that and how this is different from that. So yeah in this in this diversity search algorithms basically what you're you're doing is your only goal is or your main goal depends on the album but let's say your only goal is to find diverse behaviors or diverse solutions diverse I think the main problem with that is is that the search space is so extremely large. That you're going to have a hard time even even defining what a kind of a uniform distribution is because it's such a high dimensional space that even if you sample uniformly it's it's almost empty like you're almost you're not you're not getting anywhere because you have finite. You have a finite computer you need to implement an algorithm even if you even if my computer can hold a hundred thousand different members of a population in high dimensions that is nothing right so. To me yet the initialization might be definitely important but I don't think you'll you get around some sort of iterative procedure and getting around weeding out weeding out things such that you have space for interesting things because ultimately what you want to find is something interesting in the robot maze example the. 
The novelty search basically is here's a robot you started right and then you want to do something that you haven't done yet right so if the robots crashes into a wall the first time that's a good thing you say cool you haven't done that yet but if it crashes into the wall the second time you're like you've done that already right so you. You basically need a measure of saying how close to behaviors are but if the robot has crashed into every wall once the only thing it can do if it wants to do something new is actually go around the wall and then you're like oh cool you've done something new but the space of behaviors often is so large that you can simply enumerate all the all the behaviors so you I think that's the main problem why you can't just. Make it diverse from the beginning. Yeah when I think about that I think that maybe the like reward function if you're like navigating the maze it needs to be more refined so like the crashes into the wall that needs to be like. I don't know plus three some some like unique signal I feel like in order to create that kind of because like thinking of if it's just like reward zero everywhere but one if you hit that finish line and then maybe some kind of like. Discounting for how long it takes you to get there is I don't see how it could interpret that it's done a new behavior if all it has is it so that so to me it feels like it's all about the design of the reward space now to implement such a thing. Yes absolutely so the definitely if you wanted to do novelty search you would need to implement a measure of how close to behaviors are so there's no way around and I think that's kind of crux of the of this method is that by specifying. How close to behaviors are so what what constitutes novelty and what doesn't you already implicitly kind of telling the robot something about the nature of the world so I think that the kind of the objective because they now say oh we don't give the robot the objecting of reaching the target we simply give it the objective of not doing the same thing twice I think the kind of objective sneaks in in like again. Through the specification of how of you how close are to be a risk but definitely. This is just kind of a really simple example what they want to say is that these methods really become important when you have ambitious objectives right in the maze we can all agree if we just design the reward crashing walls bad you don't have to actually go straight to the goal you can you know but. Go around walls good and so on then it's easy right but in really ambitious objectives like I don't know flying reaching the moon in the in the 1960s designing general AI curing cancer and so on we don't actually know how to design the reward right because we don't know which steps need to be fulfilled in order to to fly to the moon I guess now we do in hindsight. Right but we you can have predicted we don't know which steps need to be discovered in order to cure cancer and it's very very probable if you look at history that. The fundamental discoveries that lead to us here in cancer will not directly come from cancer research that's that's their entire point right it's not like you can have a goal go straight towards it if it's like a really ambitious goal very probably. 
The solutions will come in part from extremely non related fields and they and you kind of have to make advances everywhere and in order to solve that problem so the the question of it's all design it's all a design in the reward yes but we would have to know how the reward must be must look and in these really ambitious objectives we don't and that's that's where we're going to go. That's that's where they argue well the best thing actually you can do is to just explore and you just find interesting things along the way and you kind of hope that these interesting things will come no you know the interesting things will combine to form new interesting things right but you just don't know where you're going to end up right. Yes, I guess maybe you could just keep a trip like the trajectory of states and use that as your signal novelty but then I think like if you've got like a robotic arm with like X degrees of freedom it's like the state space would be too infinite to really like say oh this is significantly this is a significantly different sequential procedure of states and this other thing. So then the next thing yes I think this is a good transition into their pick breeder experiment and so anyone who listens to this who hasn't watched their talk the pick breeder is like they've got these generator neural networks with sets of weights and they have like humans go on and they pick two of the generated images to blend together and derive a new image and so this repeats on and on until it goes from like just like a spiral. Pattern into like a school faced drawing or a butterfly drawing or something like that and they so this idea is supposed to represent open endedness in an environment and not so it just generally I just found it to be really interesting I think it's one of the things in their talk that you look at it and you're like oh it's interesting what what is going on here. But it's like the the mutation is really guided by the human search which is so complex I feel like I was just wondering what you thought of that pick breeder experiment. Yeah it's really cool and it's it's it's actually the the basis for their entire books I've read the book the white greatness cannot be planned I believe I've got the title. So that this they actually they kind of start out with this as a motivational example of what if what if the only goal is to do something interesting and without any objective so all you do is kind of choose slight variations on a current picture and you see what you end up with and I thought I thought it illustrates their points extremely well so it illustrates for example goal switching is that so if you were done with your sequence of image manipulations you can then save it into the database and someone else could pick up from it and then kind of continue it and since every human finds slightly different things interesting right you could take someone else's final result and say you know that that kind of looks weird but then you your modifications to it will be different than that. 
Human continued breeding the picture so what you you end up is they show this for example one picture ends up being a car and it had been adapted from an alien face where the eyes of the alien face became the wheels of the car and so the first the first person might have been like this this looks more more like an alien face I'm going to make it more like an alien face and then the second person is like other kind of looks nice I'm going to modify it in a different so they they basically they basically give this example of if you have an ambitious goal like getting to a car just from these very simple picture generation networks then the stepping stones to get there have nothing to do with cars and the people that did it didn't have a car in mind while going there and the second thing is that if you try to get a car from the beginning I believe they right they've done this if you try to you you can't like it's just the sequence of things that you have to go through is so complicated and convoluted if you were to try to end up with a result it's it's basically impossible. So these kind of illustrate their points very very nicely and any I mean it's a cool experiment in itself but they use it kind of as a basis metaphor for then going on jumping off. Yeah I just think it's so interesting this idea that it's like you can't design a car unless unless you don't try unless you just happen to come across that it's sort of like I think about like if I was to fire up garage band and start trying to make it a song it's like I don't know exactly what it's going to sound like I'm just going to kind of explore until I come across something. So then I was thinking about like with the GANS and the way that the GANS design images like so this is like sort of a design I I drew up that I'm curious what you think of it's like what if the generator just like tries to make some object and then I preaching classifier says I think it looks like this maybe and then you send it to like a refining network. So the GANS just sort of searches for objects and then some classifiers like I think it looks like sort of like how the pick reader sort of like how we're like I think this looks like a school or whatever so I'm going to try to you know refine it now do you think that would be an interesting thing or. You'd have like a two stage process first you do something general and then it gets classified and then you'd have like a special generator just for the skull class and the special discriminator just for that yeah I don't see why not might be hard it might be hard to get the first generator to to be sufficiently diverse so you might might need some kind of discriminator signal. At the even at the beginning. So yes you're like how do you think the pick reader experiment could become fully automated such that there's no human in the loop. Yeah that's that's a thought I had as well because it to me it seems that the the kind of of course the resulting pictures the fact that they look like human objects or you recognize a object is it a result from them being. Bread by humans like the fact that it looks like a car or a skull or something like this is is very much but also I guess that that could be abstract is in we just not expect the results to be like human recognizable objects but maybe something else the much more deeper. 
Construction in pick reader is the fact that the measure of interestingness is provided by the humans right so the humans they click on a picture and then they get variance of the picture and they click on the one that they most like this this sense of interestingness of I like this one is that's what's that's the fundamental core that's provided by the humans as an input to the system that's what drives the entire thing that's exactly the same as before it's when you right when you teach the robot which to behaviors are close enough like no that's too close to before that's not novel or yes that's sufficiently different than before that is novel right this this sense is somehow you either need to specify it or you need to have the human in the loop to provide it I feel it's very very hard to capture that in an algorithm as as of today. Yeah like something I think about is like maybe I'd have like my 1000 class image net classifier and then maybe I'd have like a like a style classifier like a neural style transfer network that I've like chopped off the like some intermediate feature I'm going to take that as my style and so maybe I'm like classifying I think it's like an airplane and then I kind of like this style for it that sort of like my like how I would think about trying to automate that like like I don't know I guess like I don't know if I I guess it's interesting but I also feel like when you're doing the pick reader you're kind of like oh I'm going to try it now that I see this vision I'm going to try to make it like look like that now I suppose like I yeah I think yeah yeah I think I think I could hold this into a school and then you start doing yes yeah yes they're very much so they're not they're not advocating random exploration what they're advocating is basically if you have an ambitious goal then you basically don't know the stepping stones but from stepping stone to stepping stone that's where objectives are very handy so when you want to say I this already kind of looks like something I want to make it more like that I want to make it more into a skull right already has like two circles and kind of the shape but I'm going to drive it there that that is very that can be very objective driven but in the grand scheme of things you don't know then once you have the skull right someone else can develop that into an even new thing so yeah indeed if if you if you're in kind of a local search in this space then an objective driven behavior like what you're saying like I want to make it as much this as possible that's very that's actually a thing they're advocating for but then from their end result yeah you would need to then restart again do the same thing with like something else yeah it's really interesting I just thinking about yeah I'm thinking about like the stepping stones and like is how would you define the space of stepping stones to such a to any kind of thing I guess it's like you could still design some kind of maybe it's discrete or maybe you have some kind of signal you can get back from it and I guess it's is just a lot to think of that directly take it yeah they give this they give this great analogy I feel like if you have a really ambitious objective it's like crossing a lake but the lake is covered in fog so you basically can't really see very far but you can always kind of see the next stepping stones right and you can then you can then try to go from stepping stone to stepping stone but you don't know which one to take if there's like a fork and there's two 
ways you don't know which one right so all you can do is basically go the most interesting one and they relate this to scientific research so yeah if we wanted to accomplish some really great research goal like artificial general intelligence we don't like we don't know but we can see the next stepping stones right we can see oh from what we have right now what interesting combination could we make that's still kind of it still kind of makes that's not total garbage right so in the local search I can try to say I want to I don't know I want to do this I want to do multiple generators and it's multi stage and then this thing right this this is kind of a stepping stone and maybe that will then lead to something more interesting and so on so yeah that's kind of how they really I like this metaphor of the lake yeah yeah I just like could like a metacontroller try to put the stones down and then the objective is like or is the space too enormous that that idea of having a metacontroller guide the stepping stone placement is just like absurd and then and there's no way that that would work that's sort of where I'm thinking with this now I was like so they actually that's that's exactly the question right of what I so I believe you need such a meta whatever because the space is too large you somehow need a way to choose the stepping stones in the first place right you some a need a way to do this now what they're saying is that if you're if your goal is really ambitious then a metacontroller that simply wants to reach the goal is bad because right because what we discussed before you might need a lot of inventions from other fields in order to make goal happen and if you simply go your field maximum power towards your goal that's not going to happen now if your metacontroller is actually just something that wants to produce interesting things then that's actually something they they advocate for that is exactly what they're there algorithms are trying to capture they're trying to capture locally yeah we want to get better at a particular thing what those particular things are and the order of these that should be novelty driven instead of gold ribbon that yeah yeah yeah yeah the interesting component I'm I guess I'm sort of bias towards liking the objective design and now I'm thinking like okay let's abstract those metacontrollers one level up and have a metameta controller and just repeat this into a hierarchy makes sense and that if you if you're if you're a bit cynical that is what you will also hear out of and they have to argue in the in their book a lot against that like isn't the question isn't the kind of isn't the implementation of a metacontroller that just searches for novelty in itself an objective again and then they give some good reasons why actually you don't you it is different it's more like a constraint on your search if you think of natural evolution for example it isn't really doesn't really have an objective and if you think reproduction and survival is the objective of natural evolution it doesn't really the good the good reason they give is the objective has already been fulfilled by the very first organism to ever live right why didn't it stop there right why didn't stop very first cell OK we will fulfill the objective it's it's more of a it's more of an actually a constrained optimization where the constraint is you need to be able to survive that's kind of a minimum bar of two being on this planet and then what I'm saying constrained optimization but it's it's not it's 
not an optimization it's more like a constraint constraint search. OK I think yeah I guess it's just like I definitely think I'm closed in this world of trying to think of these constrained problems and I haven't really like thought more generally about just like exploration as a whole but but anyway so I just wanted to ask you generally like you're deep learning researcher I want to ask like what areas of deep learning are you really interested in right now and what do you think is promising in the near future. So I'm currently working in adversarial examples that is a really interesting topic there's lots of questions still still open but I'm generally interested in pretty much any anything that is not I'm not too interested in like the newest the newest fine technique on getting the latest state of the art numbers even though that's probably super important for practitioners basically agreeing more with the authors of this tutorial of that let's just try to do interesting things and to me these these these these areas in in terms of open ended open ended search open ended learning are very interesting I think reinforcement learning still has a long way to go I think actually NLP still has a long way to go because I don't believe it's the current models are the end of it so I think it's really exciting time. Yeah I'll love to think about adversarial examples because it definitely flips the CNN idea on its head and and then I so I have one other thing about adversarial examples that I'm interested in is there is like an interview with Elon Musk and this Lex Friedman researcher where he asks him about adversarial examples on self driving cars and he seems dismissive of it he says he thinks basically you could just average different patches of like test time augmentation to overcome adversarial examples so like in your research do you think that like the example where they add the noise mass to the panda and they're like oh it's a given now if they just perturbed it like nine more times do you think the prediction would average out to pandas still that is a that is a very difficult question and in from experience simply adding noise and then feeding it to the class far even if you average after that usually will it will defend against adversarial examples to a point but it will also degrade your your classification performance because so maybe I understood it wrong but my understanding is I have my input right and I simply add noise to it and then feed it through the network and I could do this many times right and then average the prediction but usually this will help against adversarial examples but it will also degrade the accuracy of that classifier so it might actually make yourself driving car worse in the in the overall because how often is it going to be attacked against a adversarial example but it's going to be attacked maybe I don't know once or twice a year maybe it drives by some some hackers house right stick around a stop sign or something but the rest of the time I would actually like to retain retain the best possible classifier and if I always have to add noise then that that's not possible so the research we're doing is actually into the direction of can we retain the original accuracy while still kind of detecting these these samples it's I mean it's you somehow have to get a trade of somewhere but just adding noise isn't the isn't the the final solution yeah I was like so with these adversarial examples there are only going to make misscossifications like that if it 
really is adversarial sought after it's not just like the noise perturbation would be such an enormous base to find it otherwise so yes you really need to try so it's very unlikely that some random thing of course these networks can be confused by random noise but I think one of the self driving cars once drove into a big white truck because it was large and white so it thought it was sky but other like like other than these failures yeah you really have to try to find an adversarial example really cool yeah thanks so much for doing this anybody watching listening definitely check out your YouTube channel he has really great paper summaries and all sorts of things thank you hey thanks so much for having me | [{"start": 0.0, "end": 8.0, "text": " Hi there. I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten."}, {"start": 8.0, "end": 17.0, "text": " And what follows is the resulting conversation we had about population-based methods and open-ended learning,"}, {"start": 17.0, "end": 23.0, "text": " things like that, basically topics of the ICML tutorial that we both saw."}, {"start": 23.0, "end": 27.0, "text": " It's important to note that none of us is really an expert on the topic,"}, {"start": 27.0, "end": 34.0, "text": " but we are trying to make sense of it. And, maybe just kind of talking about the ideas."}, {"start": 34.0, "end": 41.0, "text": " So, please enjoy the conversation with Connor Shorten. Definitely check out the Henry AI Labs channel."}, {"start": 41.0, "end": 44.0, "text": " And, yeah, have a good time."}, {"start": 44.0, "end": 47.0, "text": " This is watching the Henry AI Labs Deep Learning Podcast."}, {"start": 47.0, "end": 52.0, "text": " Today I'm joined with Yannick Kiltcher. Yannick is working the Data Analytics Lab at ETH."}, {"start": 52.0, "end": 57.0, "text": " He has a great YouTube channel. I really enjoy watching his Paper Summary videos."}, {"start": 57.0, "end": 61.0, "text": " If you like any of the videos that I'm making, you'll definitely also like checking out this channel."}, {"start": 61.0, "end": 64.0, "text": " I'm going to put the link in the description at the end of the talk."}, {"start": 64.0, "end": 67.0, "text": " So, Yannick, thanks for doing this and we really appreciate it."}, {"start": 67.0, "end": 70.0, "text": " Thanks for having me. 
It's cool."}, {"start": 70.0, "end": 77.0, "text": " So, what we're going to talk about is population-based search and presentation at ICML"}, {"start": 77.0, "end": 83.0, "text": " that I really thought was interesting about emphasizing diversity and novelty in search."}, {"start": 83.0, "end": 89.0, "text": " So, the first question I just wanted to start by generally talking about your opinion on population-based search"}, {"start": 89.0, "end": 98.0, "text": " and the differences between population-based search and gradient descent going straight for one solution."}, {"start": 98.0, "end": 103.0, "text": " Yeah, so the kind of main difference is that in population-based search,"}, {"start": 103.0, "end": 108.0, "text": " as the name implies, you maintain kind of a large population of solutions."}, {"start": 108.0, "end": 112.0, "text": " So, you don't want to limit yourself to just one trajectory, say I start here,"}, {"start": 112.0, "end": 114.0, "text": " and then I run towards my goal."}, {"start": 114.0, "end": 119.0, "text": " But you can maintain a lot of hypotheses of what the solution could be,"}, {"start": 119.0, "end": 124.0, "text": " and then you kind of want to update all of them at the same time."}, {"start": 124.0, "end": 128.0, "text": " And so, there's many different variants of population-based search,"}, {"start": 128.0, "end": 133.0, "text": " but they all have this thing in common where you maintain many solutions"}, {"start": 133.0, "end": 139.0, "text": " and you kind of bet on one of them becoming a good one, basically."}, {"start": 139.0, "end": 146.0, "text": " Yeah, so one other thing they present their paper where they have the robot walking"}, {"start": 146.0, "end": 148.0, "text": " and if it breaks one of its legs, for example,"}, {"start": 148.0, "end": 151.0, "text": " it can go back to the map elites table and say,"}, {"start": 151.0, "end": 159.0, "text": " OK, well, I lost this leg, but I think maybe this solution, I wasn't too clear on how that would really be related."}, {"start": 159.0, "end": 163.0, "text": " So, maybe wondering if you had more insight on that."}, {"start": 163.0, "end": 171.0, "text": " Yeah, so the context is, yeah, you want to teach a robot to walk,"}, {"start": 171.0, "end": 174.0, "text": " and the robot had six legs, I believe."}, {"start": 174.0, "end": 177.0, "text": " And if you think of what's a solution to the problem,"}, {"start": 177.0, "end": 182.0, "text": " the solution is kind of an algorithm that takes the current sensor input"}, {"start": 182.0, "end": 186.0, "text": " and outputs how to move the motors."}, {"start": 186.0, "end": 192.0, "text": " And if you just have like some gradient descent algorithm converging on the best solution"}, {"start": 192.0, "end": 195.0, "text": " of how to move the robot, it's just going to be like,"}, {"start": 195.0, "end": 199.0, "text": " oh, these are the sensors, OK, I'm going to move like this, like this, like this."}, {"start": 199.0, "end": 203.0, "text": " But if one leg breaks, of course, your loss,"}, {"start": 203.0, "end": 210.0, "text": " you know, this one way of moving, and now the, sorry."}, {"start": 210.0, "end": 213.0, "text": " So you only know this one way of moving, basically."}, {"start": 213.0, "end": 214.0, "text": " And that's it."}, {"start": 214.0, "end": 219.0, "text": " But in population based search, if you think of the solution as a way to move,"}, {"start": 219.0, "end": 223.0, "text": " you maintain many, many ways to move."}, {"start": 223.0, 
"end": 230.0, "text": " So you basically, the objective, if you can call it like this, is algorithm."}, {"start": 230.0, "end": 236.0, "text": " Find me a lot of different ways to move, right, with my six legs."}, {"start": 236.0, "end": 240.0, "text": " And now if one of my legs, I still can evaluate all of them."}, {"start": 240.0, "end": 242.0, "text": " I still can find, OK, which one's the best?"}, {"start": 242.0, "end": 248.0, "text": " But if now one of them falls away, I have all these other solutions that I can try."}, {"start": 248.0, "end": 251.0, "text": " Right? So then what they would do is like this life was with,"}, {"start": 251.0, "end": 257.0, "text": " now they just re-evaluate all of those solutions while only having five legs."}, {"start": 257.0, "end": 266.0, "text": " And the best of those, like, is much more likely to kind of work than if you had just your single solution."}, {"start": 266.0, "end": 273.0, "text": " So that kind of, that's the, it's population based because you maintain many different ways of solving the problem."}, {"start": 273.0, "end": 281.0, "text": " Yes, I was also thinking about like using the search algorithms that control neural architecture search and things like that."}, {"start": 281.0, "end": 290.0, "text": " So it's trying to think of how you might extend these ideas from the robot walking with six legs to the RNM controller designing the convolutional network."}, {"start": 290.0, "end": 297.0, "text": " Like maybe I might have like more of a storage constraint or more of a latency constraint."}, {"start": 297.0, "end": 300.0, "text": " And I could jump to a different solution like that."}, {"start": 300.0, "end": 307.0, "text": " I'm just wondering how you think like these ideas of population based search translate into the neural architecture search."}, {"start": 307.0, "end": 313.0, "text": " And specifically if it really is important because like you've got, I feel like in neural architecture search,"}, {"start": 313.0, "end": 317.0, "text": " you have such a direct signal with the classification accuracy."}, {"start": 317.0, "end": 325.0, "text": " Like I don't see as much variance that was in the objective function."}, {"start": 325.0, "end": 330.0, "text": " Yeah, I really think this population based approach is they shine in."}, {"start": 330.0, "end": 337.0, "text": " So they shine in multiple different areas, but one area where they shine is definitely when the environment changes."}, {"start": 337.0, "end": 342.0, "text": " So when you know something about whatever your input changes like the robot losing a leg."}, {"start": 342.0, "end": 351.0, "text": " So in kind of neural architecture search, you might, you might find these methods working if you then go for let's say transfer learning."}, {"start": 351.0, "end": 354.0, "text": " So you can not train your network on one task."}, {"start": 354.0, "end": 358.0, "text": " You want to actually do it in another task, right?"}, {"start": 358.0, "end": 364.0, "text": " And then if you maintain many solutions and you can evaluate all of them in a in this transfer setting,"}, {"start": 364.0, "end": 369.0, "text": " it's much more likely that one of them, you know, it's going to be, is going to be fine."}, {"start": 369.0, "end": 377.0, "text": " So, but you're right of I also believe that directly in architecture search, maybe it's not."}, {"start": 377.0, "end": 384.0, "text": " Maybe it doesn't yield that many great results though the other, of course, the other."}, 
{"start": 384.0, "end": 392.0, "text": " Area where these methods shine and this is with respect to algorithms like novelty search."}, {"start": 392.0, "end": 402.0, "text": " Which can be implemented as a population based method is they gave this really good example of deception in a search problem."}, {"start": 402.0, "end": 408.0, "text": " So a deception would be like if you have a robot walking a maze and the robot just wants to get to the goal, right?"}, {"start": 408.0, "end": 414.0, "text": " And you would program it the robot to be rewarded the closer it gets to the goal."}, {"start": 414.0, "end": 421.0, "text": " But if like there's a wall in between and you actually need to go around the wall kind of then for a while,"}, {"start": 421.0, "end": 424.0, "text": " you would need to move away from the goal in order to reach it."}, {"start": 424.0, "end": 429.0, "text": " So if you have like a pure objective driven approach, you just go straight to the goal."}, {"start": 429.0, "end": 431.0, "text": " You would always get stuck at the wall."}, {"start": 431.0, "end": 439.0, "text": " But if you then kind of do what is called a novelty search where you basically reward the robot for things it has never done before,"}, {"start": 439.0, "end": 442.0, "text": " it would actually find its way around the wall."}, {"start": 442.0, "end": 446.0, "text": " So you can maintain population of solutions that all kind of explore the space."}, {"start": 446.0, "end": 460.0, "text": " And that in or neural architecture search, maybe it's of a benefit that actually, you know, if I probably always benefit from like adding more layers or neurons or something like this."}, {"start": 460.0, "end": 464.0, "text": " But maybe I actually want to prune some stuff first and then add some more stuff."}, {"start": 464.0, "end": 469.0, "text": " So I maybe want to get worse first before I can get even better, right?"}, {"start": 469.0, "end": 475.0, "text": " So so it reached where I can imagine happening, but I don't know."}, {"start": 475.0, "end": 478.0, "text": " Yeah, I was thinking of the changing environment."}, {"start": 478.0, "end": 484.0, "text": " I definitely think like when you deploy a model and then you're getting new data that you could frame that as a changing environment."}, {"start": 484.0, "end": 498.0, "text": " And then also thinking about like in the context of GANs, which is something that I think is really interesting that the discriminator classifying the GAN, the generator samples, it's a changing environment because of the generator's updates."}, {"start": 498.0, "end": 512.0, "text": " So maybe having some kind of population based GAN or discriminator model might help it avoid that like continual learning problem, I guess, is sort of an eat."}, {"start": 512.0, "end": 530.0, "text": " Yeah, that that could that might as might very well be there are approaches to GANs, I believe, where you have like many discriminators and each one kind of only has, let's say, has its own limited view on the day turn, you're trying to kind of fool all of them at the same time, but it's not the same thing."}, {"start": 530.0, "end": 534.0, "text": " But yes, I think that I might make sense."}, {"start": 534.0, "end": 538.0, "text": " Yeah, I've seen that multiple generator multiple discriminator model too."}, {"start": 538.0, "end": 550.0, "text": " I think that's really interesting as well. 
So then one other thing I was curious about is this idea of goal switching and how that might relate to the autoML on our existing."}, {"start": 550.0, "end": 558.0, "text": " I'm like heavily studied things like classification, localization, semantic segmentation, like how do you think goal switching could be important?"}, {"start": 558.0, "end": 570.0, "text": " Like one idea I had is maybe if you've got like multi classification and it's got like a really low false positive rate or something on like one class, you might say, well, you somehow learn a decision boundary on that class."}, {"start": 570.0, "end": 578.0, "text": " Or do you think that that wouldn't generalize and that there's no sense in goal switching in like a multi class classification problem?"}, {"start": 578.0, "end": 590.0, "text": " So in general, well, you think of goal switching in general, how they introduced it was also in the context of like this population based search of these mappolites."}, {"start": 590.0, "end": 606.0, "text": " Maybe it's kind of so what mappolites the algorithm does basically is it says, okay, I have a number of dimensions that I could solve the problem on and they introduced, okay, let's take life on earth needs to whatever survive."}, {"start": 606.0, "end": 619.0, "text": " So I can either be like a super tall creature right to reach food that no one else can reach. I could be a super fast creature right to kind of run away from everything or it can be a super heavy creature so that no one can attack me."}, {"start": 619.0, "end": 629.0, "text": " And so these are kind of the dimensions that you can solve the problem of reproduction and survival and within."}, {"start": 629.0, "end": 645.0, "text": " So what mappolites does it it would segment this area. 
So let's say size and speed it would segment this into a grid and in each grid it would kind of maintain the best solution so far that is within that grid."}, {"start": 645.0, "end": 666.0, "text": " And then what they see is when they then kind of evolve this over time and improve each each grid is that inventions let's say inventions algorithm discoveries in one grid say for a very fast creature they would then kind of be adapted to the very let's say the very heavy creature."}, {"start": 666.0, "end": 694.0, "text": " So it's like fast creature kind of discovers all longer legs make me even faster maybe the longer legs can be combined in the heavy creature to do something else so this kind of goal switching it's think of like feathers being first kind of developed or evolved for warmth for temperature regulation then being goal switched over to adapted for flight."}, {"start": 694.0, "end": 722.0, "text": " So in the in terms of all T class classification I guess it's a bit of a different problem if you just have one classifier we can definitely make the argument that since you know you're learning maybe to classify one class really well the low falls positive rate you have learned very good features for that class and if some other class kind of like the zebra is a horse with stripes and then."}, {"start": 722.0, "end": 750.0, "text": " The horse is a horse but with the features stripes being really low you can probably classify that better something making stuff up here but it's a bit of a different context I feel the if you have a single class fair do multi class classification but definitely the logic applies in the features space I would say where you learn features for one class and they might become useful for another class."}, {"start": 750.0, "end": 768.0, "text": " Yeah, I have this other thoughts sort of when you're discussing as like what about like multi class multi task learning like maybe my intermediate features get map to a classifier get map to a segmentation get map to again like could goal switching improve multi task learning."}, {"start": 768.0, "end": 795.0, "text": " Yeah, I would definitely say so I think that that's exactly what we're seeing when you look at for example pre training so if you think of like this these newest big language models like Bert or something they're really good at tasks I don't know what it was an on the task labeling of sentiment sentiment classification is the classic right."}, {"start": 795.0, "end": 824.0, "text": " If they evaluate on that because it's so easy but let's say birds is really good at sentiment classification but if you were to just to train it outright on sentiment classification is probably not going to work because there's just too little signal but then what happens is you pre train it as a language model as this mask language model and it kind of gets really good at simply comprehending language and that skill can then be kind of adapted over into the language."}, {"start": 824.0, "end": 850.0, "text": " So I think if you look at the up something like pre training or multi task as you say then definitely one the addition of a task might give rise to certain features that then all of a sudden can be adapted by another task."}, {"start": 850.0, "end": 856.0, "text": " If you just trained the latter task by itself that maybe would have been too difficult."}, {"start": 856.0, "end": 859.0, "text": " So yeah, there's definitely an analogy."}, {"start": 859.0, "end": 873.0, "text": " So then what I think about is so I'm 
going from my pre training language model into sentiment classification maybe I also add like question answering document summarization named entity like this like vector of task that it can go do."}, {"start": 873.0, "end": 887.0, "text": " I'm then curious like when your goal switching it's like how do you then combine the features later on or you just like take it as if I need this task go go to this model likes yeah."}, {"start": 887.0, "end": 907.0, "text": " Well the question here is do you whether or not you implement this as a single model and kind of refer to the goal switching of features within that model or whether you also do this now as a population based method where basically you maintain you maintain different neural networks for different combination of these tasks."}, {"start": 907.0, "end": 925.0, "text": " Then you did actually need a method to kind of combine and reproduce the neural networks themselves which I yeah I see that's that's going to be a bit of a different task like some cross distillation or some something crazy."}, {"start": 925.0, "end": 928.0, "text": " I don't know how that work exactly."}, {"start": 928.0, "end": 957.0, "text": " Yeah, I just wonder about two things it's like do from a population based search could you have like the weights be the population like different sets of weights or would it necessarily need to be like taking apart the layers and designing new internal like cells is in the architecture search like because if I just have the weights maybe I could treat the diversity search or goal switching as like stochastic weight averaging and just like mesh them all together when I'm finished with my goal switching at the end."}, {"start": 957.0, "end": 960.0, "text": " But it's yeah."}, {"start": 960.0, "end": 986.0, "text": " It definitely be if you wanted to if you if you wanted to if you wanted to implement your multitask multitask tasking as a population based approach where yeah you could definitely give you an easier time if you keep the"}, {"start": 986.0, "end": 1015.0, "text": " architecture of your neural networks the same and simply have different weights and then you could indeed consider something like weight averaging or or yeah I guess more modern approach will be like distillation from the two teacher models into one child months actually good metaphor for for reproduction kind of distillation from multiple teacher model don't know if anyone's done that yet but yeah."}, {"start": 1015.0, "end": 1025.0, "text": " I guess that might be the way to do it if you also maintain different architectures for different problems that might be a bit of a."}, {"start": 1025.0, "end": 1043.0, "text": " Yeah that's an interesting thing too if you have the goal switching and then you model distillate all into one model that is yes well if you think of map elites right you it's simply you simply distill it into the appropriate I don't even know what the what the axis would be."}, {"start": 1043.0, "end": 1059.0, "text": " Probably I can imagine okay you have like three tasks so you have three axis and then you'd mix the task maybe in accordance on how far up your of these axis you are or something like this."}, {"start": 1059.0, "end": 1066.0, "text": " It's not exactly map elites because your actual objectives are on the axis but."}, {"start": 1066.0, "end": 1093.0, "text": " Yes pretty cool so just to backtrack one step I want to talk about like diversity centric search novelty like when I was thinking about that I was like can you just 
initialize it such that it has maximum diversity like can you just initialize the population such that they're all like uniformly spaced and then search locally from there so I just wonder what you think on that and how this is different from that."}, {"start": 1093.0, "end": 1122.0, "text": " So yeah in this in this diversity search algorithms basically what you're you're doing is your only goal is or your main goal depends on the album but let's say your only goal is to find diverse behaviors or diverse solutions diverse I think the main problem with that is is that the search space is so extremely large."}, {"start": 1122.0, "end": 1145.0, "text": " That you're going to have a hard time even even defining what a kind of a uniform distribution is because it's such a high dimensional space that even if you sample uniformly it's it's almost empty like you're almost you're not you're not getting anywhere because you have finite."}, {"start": 1145.0, "end": 1160.0, "text": " You have a finite computer you need to implement an algorithm even if you even if my computer can hold a hundred thousand different members of a population in high dimensions that is nothing right so."}, {"start": 1160.0, "end": 1189.0, "text": " To me yet the initialization might be definitely important but I don't think you'll you get around some sort of iterative procedure and getting around weeding out weeding out things such that you have space for interesting things because ultimately what you want to find is something interesting in the robot maze example the."}, {"start": 1189.0, "end": 1215.0, "text": " The novelty search basically is here's a robot you started right and then you want to do something that you haven't done yet right so if the robots crashes into a wall the first time that's a good thing you say cool you haven't done that yet but if it crashes into the wall the second time you're like you've done that already right so you."}, {"start": 1215.0, "end": 1244.0, "text": " You basically need a measure of saying how close to behaviors are but if the robot has crashed into every wall once the only thing it can do if it wants to do something new is actually go around the wall and then you're like oh cool you've done something new but the space of behaviors often is so large that you can simply enumerate all the all the behaviors so you I think that's the main problem why you can't just."}, {"start": 1244.0, "end": 1247.0, "text": " Make it diverse from the beginning."}, {"start": 1247.0, "end": 1258.0, "text": " Yeah when I think about that I think that maybe the like reward function if you're like navigating the maze it needs to be more refined so like the crashes into the wall that needs to be like."}, {"start": 1258.0, "end": 1271.0, "text": " I don't know plus three some some like unique signal I feel like in order to create that kind of because like thinking of if it's just like reward zero everywhere but one if you hit that finish line and then maybe some kind of like."}, {"start": 1271.0, "end": 1286.0, "text": " Discounting for how long it takes you to get there is I don't see how it could interpret that it's done a new behavior if all it has is it so that so to me it feels like it's all about the design of the reward space now to implement such a thing."}, {"start": 1286.0, "end": 1303.0, "text": " Yes absolutely so the definitely if you wanted to do novelty search you would need to implement a measure of how close to behaviors are so there's no way around and I think that's kind of crux of the 
of this method is that by specifying."}, {"start": 1303.0, "end": 1332.0, "text": " How close to behaviors are so what what constitutes novelty and what doesn't you already implicitly kind of telling the robot something about the nature of the world so I think that the kind of the objective because they now say oh we don't give the robot the objecting of reaching the target we simply give it the objective of not doing the same thing twice I think the kind of objective sneaks in in like again."}, {"start": 1332.0, "end": 1340.0, "text": " Through the specification of how of you how close are to be a risk but definitely."}, {"start": 1340.0, "end": 1360.0, "text": " This is just kind of a really simple example what they want to say is that these methods really become important when you have ambitious objectives right in the maze we can all agree if we just design the reward crashing walls bad you don't have to actually go straight to the goal you can you know but."}, {"start": 1360.0, "end": 1389.0, "text": " Go around walls good and so on then it's easy right but in really ambitious objectives like I don't know flying reaching the moon in the in the 1960s designing general AI curing cancer and so on we don't actually know how to design the reward right because we don't know which steps need to be fulfilled in order to to fly to the moon I guess now we do in hindsight."}, {"start": 1389.0, "end": 1402.0, "text": " Right but we you can have predicted we don't know which steps need to be discovered in order to cure cancer and it's very very probable if you look at history that."}, {"start": 1402.0, "end": 1418.0, "text": " The fundamental discoveries that lead to us here in cancer will not directly come from cancer research that's that's their entire point right it's not like you can have a goal go straight towards it if it's like a really ambitious goal very probably."}, {"start": 1418.0, "end": 1447.0, "text": " The solutions will come in part from extremely non related fields and they and you kind of have to make advances everywhere and in order to solve that problem so the the question of it's all design it's all a design in the reward yes but we would have to know how the reward must be must look and in these really ambitious objectives we don't and that's that's where we're going to go."}, {"start": 1447.0, "end": 1470.0, "text": " That's that's where they argue well the best thing actually you can do is to just explore and you just find interesting things along the way and you kind of hope that these interesting things will come no you know the interesting things will combine to form new interesting things right but you just don't know where you're going to end up right."}, {"start": 1470.0, "end": 1495.0, "text": " Yes, I guess maybe you could just keep a trip like the trajectory of states and use that as your signal novelty but then I think like if you've got like a robotic arm with like X degrees of freedom it's like the state space would be too infinite to really like say oh this is significantly this is a significantly different sequential procedure of states and this other thing."}, {"start": 1495.0, "end": 1524.0, "text": " So then the next thing yes I think this is a good transition into their pick breeder experiment and so anyone who listens to this who hasn't watched their talk the pick breeder is like they've got these generator neural networks with sets of weights and they have like humans go on and they pick two of the generated images to blend together and derive a new 
image and so this repeats on and on until it goes from like just like a spiral."}, {"start": 1524.0, "end": 1546.0, "text": " Pattern into like a school faced drawing or a butterfly drawing or something like that and they so this idea is supposed to represent open endedness in an environment and not so it just generally I just found it to be really interesting I think it's one of the things in their talk that you look at it and you're like oh it's interesting what what is going on here."}, {"start": 1546.0, "end": 1559.0, "text": " But it's like the the mutation is really guided by the human search which is so complex I feel like I was just wondering what you thought of that pick breeder experiment."}, {"start": 1559.0, "end": 1577.0, "text": " Yeah it's really cool and it's it's it's actually the the basis for their entire books I've read the book the white greatness cannot be planned I believe I've got the title."}, {"start": 1577.0, "end": 1606.0, "text": " So that this they actually they kind of start out with this as a motivational example of what if what if the only goal is to do something interesting and without any objective so all you do is kind of choose slight variations on a current picture and you see what you end up with and I thought I thought it illustrates their points extremely well so it illustrates for example"}, {"start": 1606.0, "end": 1635.0, "text": " goal switching is that so if you were done with your sequence of image manipulations you can then save it into the database and someone else could pick up from it and then kind of continue it and since every human finds slightly different things interesting right you could take someone else's final result and say you know that that kind of looks weird but then you your modifications to it will be different than that."}, {"start": 1635.0, "end": 1664.0, "text": " Human continued breeding the picture so what you you end up is they show this for example one picture ends up being a car and it had been adapted from an alien face where the eyes of the alien face became the wheels of the car and so the first the first person might have been like this this looks more more like an alien face I'm going to make it more like an alien"}, {"start": 1664.0, "end": 1693.0, "text": " face and then the second person is like other kind of looks nice I'm going to modify it in a different so they they basically they basically give this example of if you have an ambitious goal like getting to a car just from these very simple picture generation networks then the stepping stones to get there have nothing to do with cars and the people that did it didn't have a car in mind while going there and the second thing is"}, {"start": 1693.0, "end": 1714.0, "text": " that if you try to get a car from the beginning I believe they right they've done this if you try to you you can't like it's just the sequence of things that you have to go through is so complicated and convoluted if you were to try to end up with a result it's it's basically impossible."}, {"start": 1714.0, "end": 1729.0, "text": " So these kind of illustrate their points very very nicely and any I mean it's a cool experiment in itself but they use it kind of as a basis metaphor for then going on jumping off."}, {"start": 1729.0, "end": 1743.0, "text": " Yeah I just think it's so interesting this idea that it's like you can't design a car unless unless you don't try unless you just happen to come across that it's sort of like I think about like if I was to fire up garage band and start 
trying to make it"}, {"start": 1743.0, "end": 1749.0, "text": " a song it's like I don't know exactly what it's going to sound like I'm just going to kind of explore until I come across something."}, {"start": 1749.0, "end": 1770.0, "text": " So then I was thinking about like with the GANS and the way that the GANS design images like so this is like sort of a design I I drew up that I'm curious what you think of it's like what if the generator just like tries to make some object and then I preaching classifier says I think it looks like this maybe and then you send it to like a refining network."}, {"start": 1770.0, "end": 1786.0, "text": " So the GANS just sort of searches for objects and then some classifiers like I think it looks like sort of like how the pick reader sort of like how we're like I think this looks like a school or whatever so I'm going to try to you know refine it now do you think that would be an interesting thing or."}, {"start": 1786.0, "end": 1814.0, "text": " You'd have like a two stage process first you do something general and then it gets classified and then you'd have like a special generator just for the skull class and the special discriminator just for that yeah I don't see why not might be hard it might be hard to get the first generator to to be sufficiently diverse so you might might need some kind of discriminator signal."}, {"start": 1814.0, "end": 1819.0, "text": " At the even at the beginning."}, {"start": 1819.0, "end": 1831.0, "text": " So yes you're like how do you think the pick reader experiment could become fully automated such that there's no human in the loop."}, {"start": 1831.0, "end": 1847.0, "text": " Yeah that's that's a thought I had as well because it to me it seems that the the kind of of course the resulting pictures the fact that they look like human objects or you recognize a object is it a result from them being."}, {"start": 1847.0, "end": 1870.0, "text": " Bread by humans like the fact that it looks like a car or a skull or something like this is is very much but also I guess that that could be abstract is in we just not expect the results to be like human recognizable objects but maybe something else the much more deeper."}, {"start": 1870.0, "end": 1888.0, "text": " Construction in pick reader is the fact that the measure of interestingness is provided by the humans right so the humans they click on a picture and then they get variance of the picture and they click on the one that they most like this this sense of"}, {"start": 1888.0, "end": 1903.0, "text": " interestingness of I like this one is that's what's that's the fundamental core that's provided by the humans as an input to the system that's what drives the entire thing that's exactly the same as before it's when you"}, {"start": 1903.0, "end": 1932.0, "text": " right when you teach the robot which to behaviors are close enough like no that's too close to before that's not novel or yes that's sufficiently different than before that is novel right this this sense is somehow you either need to specify it or you need to have the human in the loop to provide it I feel it's very very hard to capture that in an algorithm as as of today."}, {"start": 1932.0, "end": 1961.0, "text": " Yeah like something I think about is like maybe I'd have like my 1000 class image net classifier and then maybe I'd have like a like a style classifier like a neural style transfer network that I've like chopped off the like some intermediate feature I'm going to take that as my style and so maybe 
I'm like classifying I think it's like an airplane and then I kind of like this style for it that sort of like my like how I would think about trying to automate that like"}, {"start": 1961.0, "end": 1979.0, "text": " like I don't know I guess like I don't know if I I guess it's interesting but I also feel like when you're doing the pick reader you're kind of like oh I'm going to try it now that I see this vision I'm going to try to make it like look like that now I suppose like I yeah I think yeah yeah I think"}, {"start": 1979.0, "end": 2000.0, "text": " I think I could hold this into a school and then you start doing yes yeah yes they're very much so they're not they're not advocating random exploration what they're advocating is basically if you have an ambitious goal then you basically don't know the stepping stones but from"}, {"start": 2000.0, "end": 2014.0, "text": " stepping stone to stepping stone that's where objectives are very handy so when you want to say I this already kind of looks like something I want to make it more like that I want to make it more into a skull right already has like two circles and"}, {"start": 2014.0, "end": 2027.0, "text": " kind of the shape but I'm going to drive it there that that is very that can be very objective driven but in the grand scheme of things you don't know then once you have the skull"}, {"start": 2027.0, "end": 2046.0, "text": " right someone else can develop that into an even new thing so yeah indeed if if you if you're in kind of a local search in this space then an objective driven behavior like what you're saying like I want to make it as much this as possible"}, {"start": 2046.0, "end": 2059.0, "text": " that's very that's actually a thing they're advocating for but then from their end result yeah you would need to then restart again do the same thing with like something else"}, {"start": 2059.0, "end": 2080.0, "text": " yeah it's really interesting I just thinking about yeah I'm thinking about like the stepping stones and like is how would you define the space of stepping stones to such a to any kind of thing I guess it's like you could still design some kind of maybe"}, {"start": 2080.0, "end": 2090.0, "text": " it's discrete or maybe you have some kind of signal you can get back from it and I guess it's is just a lot to think of that directly take it yeah"}, {"start": 2090.0, "end": 2099.0, "text": " they give this they give this great analogy I feel like if you have a really ambitious objective it's like crossing a lake"}, {"start": 2099.0, "end": 2119.0, "text": " but the lake is covered in fog so you basically can't really see very far but you can always kind of see the next stepping stones right and you can then you can then try to go from stepping stone to stepping stone but you don't know which one to take if there's like a fork and there's two ways"}, {"start": 2119.0, "end": 2133.0, "text": " you don't know which one right so all you can do is basically go the most interesting one and they relate this to scientific research so yeah if we wanted to accomplish some really great research goal like artificial general intelligence"}, {"start": 2133.0, "end": 2145.0, "text": " we don't like we don't know but we can see the next stepping stones right we can see oh from what we have right now what interesting combination could we make that's still kind of"}, {"start": 2145.0, "end": 2173.0, "text": " it still kind of makes that's not total garbage right so in the local search I can try to say I want to I don't know I want to 
do this I want to do multiple generators and it's multi stage and then this thing right this this is kind of a stepping stone and maybe that will then lead to something more interesting and so on so yeah that's kind of how they really I like this metaphor of the lake"}, {"start": 2173.0, "end": 2192.0, "text": " yeah yeah I just like could like a metacontroller try to put the stones down and then the objective is like or is the space too enormous that that idea of having a metacontroller guide the stepping stone placement is just like absurd and then and there's no way that that would work that's sort of where I'm thinking with this now I was like"}, {"start": 2192.0, "end": 2220.0, "text": " so they actually that's that's exactly the question right of what I so I believe you need such a meta whatever because the space is too large you somehow need a way to choose the stepping stones in the first place right you some a need a way to do this now what they're saying is that if you're if your goal is really ambitious then a metacontroller that simply wants to reach the goal is bad"}, {"start": 2220.0, "end": 2249.0, "text": " because right because what we discussed before you might need a lot of inventions from other fields in order to make goal happen and if you simply go your field maximum power towards your goal that's not going to happen now if your metacontroller is actually just something that wants to produce interesting things then that's actually something they they advocate for that is exactly what they're"}, {"start": 2249.0, "end": 2277.0, "text": " there algorithms are trying to capture they're trying to capture locally yeah we want to get better at a particular thing what those particular things are and the order of these that should be novelty driven instead of gold ribbon that yeah yeah yeah yeah the interesting component I'm I guess I'm sort of bias towards liking the objective design and now I'm thinking like okay"}, {"start": 2277.0, "end": 2294.0, "text": " let's abstract those metacontrollers one level up and have a metameta controller and just repeat this into a hierarchy makes sense and that if you if you're if you're a bit cynical that is what you will also hear out of"}, {"start": 2294.0, "end": 2323.0, "text": " and they have to argue in the in their book a lot against that like isn't the question isn't the kind of isn't the implementation of a metacontroller that just searches for novelty in itself an objective again and then they give some good reasons why actually you don't you it is different it's more like a constraint on your search if you think of natural evolution for example it isn't"}, {"start": 2323.0, "end": 2348.0, "text": " really doesn't really have an objective and if you think reproduction and survival is the objective of natural evolution it doesn't really the good the good reason they give is the objective has already been fulfilled by the very first organism to ever live right why didn't it stop there right why didn't stop very first cell OK"}, {"start": 2348.0, "end": 2362.0, "text": " we will fulfill the objective it's it's more of a it's more of an actually a constrained optimization where the constraint is you need to be able to survive that's kind of a minimum bar of two being on this planet and then"}, {"start": 2362.0, "end": 2371.0, "text": " what I'm saying constrained optimization but it's it's not it's not an optimization it's more like a constraint constraint search."}, {"start": 2371.0, "end": 2391.0, "text": " OK I think yeah I guess 
it's just like I definitely think I'm closed in this world of trying to think of these constrained problems and I haven't really like thought more generally about just like exploration as a whole but but anyway so I just wanted to ask you generally like you're deep learning researcher I want"}, {"start": 2391.0, "end": 2399.0, "text": " to ask like what areas of deep learning are you really interested in right now and what do you think is promising in the near future."}, {"start": 2399.0, "end": 2417.0, "text": " So I'm currently working in adversarial examples that is a really interesting topic there's lots of questions still still open but I'm generally interested in pretty much any anything that is not I'm not too"}, {"start": 2417.0, "end": 2441.0, "text": " interested in like the newest the newest fine technique on getting the latest state of the art numbers even though that's probably super important for practitioners basically agreeing more with the authors of this tutorial of that"}, {"start": 2441.0, "end": 2462.0, "text": " let's just try to do interesting things and to me these these these these areas in in terms of open ended open ended search open ended learning are very interesting I think reinforcement learning still has a long way to go I think actually NLP still has a long way to go"}, {"start": 2462.0, "end": 2480.0, "text": " because I don't believe it's the current models are the end of it so I think it's really exciting time. Yeah I'll love to think about adversarial examples because it definitely flips the CNN idea on its head and and then I so I have one other thing"}, {"start": 2480.0, "end": 2491.0, "text": " about adversarial examples that I'm interested in is there is like an interview with Elon Musk and this Lex Friedman researcher where he asks him about adversarial examples on"}, {"start": 2491.0, "end": 2507.0, "text": " self driving cars and he seems dismissive of it he says he thinks basically you could just average different patches of like test time augmentation to overcome adversarial examples so like in your research do you think that like the example where they add the noise mass to the panda and they're like oh it's"}, {"start": 2507.0, "end": 2524.0, "text": " a given now if they just perturbed it like nine more times do you think the prediction would average out to pandas still that is a that is a very difficult question and in from experience simply adding noise and then feeding it to the"}, {"start": 2524.0, "end": 2542.0, "text": " class far even if you average after that usually will it will defend against adversarial examples to a point but it will also degrade your your classification performance because so maybe I understood it wrong but"}, {"start": 2542.0, "end": 2557.0, "text": " my understanding is I have my input right and I simply add noise to it and then feed it through the network and I could do this many times right and then average the prediction but usually this will help against adversarial examples but it will also"}, {"start": 2557.0, "end": 2578.0, "text": " degrade the accuracy of that classifier so it might actually make yourself driving car worse in the in the overall because how often is it going to be attacked against a adversarial example but it's going to be attacked maybe I don't know once or twice a year maybe"}, {"start": 2578.0, "end": 2596.0, "text": " it drives by some some hackers house right stick around a stop sign or something but the rest of the time I would actually like to retain retain the best possible 
classifier and if I always have to add noise then that that's not possible so the research"}, {"start": 2596.0, "end": 2614.0, "text": " we're doing is actually into the direction of can we retain the original accuracy while still kind of detecting these these samples it's I mean it's you somehow have to get a trade of somewhere but just adding noise isn't the isn't the"}, {"start": 2614.0, "end": 2632.0, "text": " the final solution yeah I was like so with these adversarial examples there are only going to make misscossifications like that if it really is adversarial sought after it's not just like the noise perturbation would be such an enormous base to find it otherwise so"}, {"start": 2632.0, "end": 2653.0, "text": " yes you really need to try so it's very unlikely that some random thing of course these networks can be confused by random noise but I think one of the self driving cars once drove into a big white truck because it was large and white so it thought it was sky but other like"}, {"start": 2653.0, "end": 2671.0, "text": " like other than these failures yeah you really have to try to find an adversarial example really cool yeah thanks so much for doing this anybody watching listening definitely check out your YouTube channel he has really great paper summaries and all sorts of things thank you"}, {"start": 2671.0, "end": 2692.0, "text": " hey thanks so much for having me"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=H5vpBCLo74U | XLNet: Generalized Autoregressive Pretraining for Language Understanding | Abstract:
With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. Empirically, XLNet outperforms BERT on 20 tasks, often by a large margin, and achieves state-of-the-art results on 18 tasks including question answering, natural language inference, sentiment analysis, and document ranking.
Authors: Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
https://arxiv.org/abs/1906.08237 | Hi there. Today we're looking at XLNet: Generalized Autoregressive Pretraining for Language Understanding, by Zhilin Yang and colleagues from Carnegie Mellon University and Google Brain. This is the current elephant in the room, as XLNet is the first model to beat BERT, the previous state of the art, on a lot of NLP tasks. It outperforms BERT on 20 tasks and achieves new state-of-the-art results on 18 of them, including question answering, natural language inference, sentiment analysis and so on. Those are remarkable results, and even more remarkable is that the architecture of the network is fairly similar to BERT; the new contribution is a different pre-training procedure, and we'll look into that. So let's jump into their main points straight away. They observe that there are two kinds of pre-training methods currently used for these NLP tasks, and both can be understood as language modeling. Language modeling, for those who don't know, is predicting the next word in a sequence: if I give you the sequence "unsupervised representation learning has been" and ask you what comes next, you're supposed to say "highly". The first kind they call autoregressive language modeling, which is exactly that: I give you "unsupervised representation learning has been" and you predict "highly"; in the next step I give you "unsupervised representation learning has been highly" and you predict "successful", and so on. In each step I give you the entire sentence up to that point and you predict the next token. It's autoregressive because each prediction can look at all the previous tokens in the sequence, including the ones you have previously predicted (during training this is teacher-forced, so the actual words are put there). This is in contrast to what they call autoencoding, which is what BERT does. There I take the same sequence, "unsupervised representation learning has been highly successful in the domain of ...", delete, say, two of the words, and ask you to predict those two. The task is slightly different: you now have access to basically all of the sequence except the words you're asked to predict, but you're asked to predict them at the same time, not in any particular order. Autoregressive language modeling was what transformer models used until BERT, and then BERT really pushed this autoencoding pre-training, which made it so successful. Now this paper, XLNet, wants to combine the best of both.
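As a rough reference, the two pre-training objectives can be written like this (a sketch roughly following the paper's notation, where x-hat is the corrupted input and m_t marks the masked positions):

```latex
% Autoregressive LM: exact left-to-right factorization
\max_\theta \; \sum_{t=1}^{T} \log p_\theta\!\left(x_t \mid \mathbf{x}_{<t}\right)

% Autoencoding (BERT-style masked LM): masked tokens are predicted
% independently of one another, given the corrupted input \hat{\mathbf{x}}
\max_\theta \; \sum_{t=1}^{T} m_t \, \log p_\theta\!\left(x_t \mid \hat{\mathbf{x}}\right)
```

The question the paper asks is what each of these formulations buys you and what it costs.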
So what is good about BERT we've already seen: it can draw information from all of the context of the words it's trying to predict. But what is the pitfall of BERT? They put this really nicely in an example much further down, in the comparison-to-BERT section (I don't know why that isn't also in the introduction). The sentence is "New York is a city", and you're asked to predict the two words "New York". Compare BERT to what XLNet does: the context is "is a city" and you're asked to predict "New York". BERT simply masks out the two words and says: please fill in these two words. That means the objective is separated over the two words, so the prediction of "York" is completely independent of the prediction of "New". If you know any other city whose name is made of two words, for example San Francisco or Los Angeles, those would be just as valid, and so would any mixture. BERT might end up with "Los York is a city", and that would be perfectly fine for it, because "Los" is a perfectly fine prediction for the first word of a two-word city and "York" is a perfectly fine prediction for the last word of a two-word city. These are the kinds of mistakes BERT can make by not being autoregressive, by predicting all of the masked tokens at the same time, independently of each other. XLNet, in contrast, specifies an order: first predict the word "New" for "___ ___ is a city", and then, when predicting "York", actually take into account that "New" has already been predicted. That's the main advantage autoregressive training has over autoencoding. Now what are the pitfalls? Take the same sentence, "New York is a city", but now you're not asked to predict "New York"; you're asked to predict the words "a city". In autoregressive style, when you predict the word "a" you can only ever look at what comes before it, whereas if BERT were to predict just the word "a", it would be able to look at everything else. So the autoregressive model is bound to the order of the factorization of the sentence, bound to the order in which it has to predict the tokens: when predicting "a" it can only look at what comes before, and only once it gets to "city" can it see the entire rest of the sentence; before that it only ever has partial information about the context. It wouldn't be much better if we were predicting the two words "is" and "a": BERT would have access to the word "city", whereas the autoregressive model only sees what comes before. I hope that makes it clearer. So the main question behind XLNet is: where does this order dependence in the autoregressive model come from? It comes from the factorization of the sentence in the language model.
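To put the "New York" example in symbols (my own shorthand, not taken verbatim from the paper):

```latex
% BERT: the two masked words are scored independently, given the visible context
p(\text{New},\text{York} \mid \text{is a city})
  \approx p(\text{New} \mid \text{is a city}) \cdot p(\text{York} \mid \text{is a city})

% Autoregressive ordering "New" then "York": the second prediction sees the first
p(\text{New},\text{York} \mid \text{is a city})
  = p(\text{New} \mid \text{is a city}) \cdot p(\text{York} \mid \text{New},\,\text{is a city})
```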
So in a language model we're trying to model the probability distribution of sentences, p(x), where x is a sentence. This can be naturally factorized into a product over the words, where the probability of each word depends only on the words before it. That is an equality, not an approximation: the probability of a sequence decomposes exactly into a product of conditionals like this, and it is exactly what the autoregressive models implement, each word predicted from the words before it. There are other kinds of autoregressive models that go the other direction, where each word is predicted from the words after it, but the problem is the same: however you define the decoding order, from a given word you only ever have access to what comes before it in that order. So the main idea of XLNet is: why don't we consider all possible orderings? Let's go back to the example. When the sample "New York is a city" comes up, I define an ordering; say I always want to predict two words (BERT masks out about 15 percent of its input, and here 20 percent is two words out of five). The first time this sample comes up from the dataset, I might pick the order just classically, one, two, three, four, five, and predict the last two words: I give the model "New York is", let it predict "a", and in the next step I give it "New York is a" and let it predict "city". The pitfall: the word "a" only has access to the things before it and not to "city", while "city" gets to see everything. Then I continue training, and the next time this sample comes up I simply pick a different order, say one that ends with "city" and "York": in the first step I give the model "New", "is" and "a" and ask it to predict "city", and in the second step I give it that plus "city" and ask it to predict "York". So in this ordering, while predicting "city", we suddenly don't have access to the word "York" and have to learn to predict "city" from the rest of the context. And if we decide on yet another ordering, the first step might be: given "New York ___ ___ city", please predict "is", and the second step: given "New York is ___ city", please predict "a".
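Here is a tiny, self-contained sketch of that sampling idea (a toy illustration of mine, not the authors' code; the variable names are made up): sample a factorization order for the sentence, treat the last positions of that order as prediction targets, and print which context each target is allowed to see.

```python
import random

tokens = ["New", "York", "is", "a", "city"]
num_predict = 2  # roughly BERT's masking ratio for a five-token sentence

random.seed(0)
for step in range(3):  # each time the sample comes up, draw a fresh order
    order = list(range(len(tokens)))
    random.shuffle(order)                      # a factorization order, e.g. [2, 4, 3, 1, 0]
    context_pos, target_pos = order[:-num_predict], order[-num_predict:]

    print(f"order {order}:")
    seen = set(context_pos)
    for pos in target_pos:                     # targets are predicted autoregressively in this order
        visible = [tokens[i] if i in seen else "_" for i in range(len(tokens))]
        print(f"  predict {tokens[pos]!r:8} from: {' '.join(visible)}")
        seen.add(pos)                          # later targets may condition on earlier predictions
```

Averaged over many sampled orders, each target position gets to condition, in expectation, on every other position, while every single prediction is still autoregressive.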
Now you see: before, when we were asked to predict the word "a", the model only had access to the things to the left of it in the very first ordering, but in this last ordering it actually has access to the entire context. That's the idea: as we sample this data point multiple times, each time deciding on a different decoding order, the prediction of each token will have seen many different variants of the context, and in expectation it will have seen all of the context, just like BERT, but always in an autoregressive way. So you get the advantages of being autoregressive, namely that you decode step by step while always referring to everything in front of you in the ordering, so the predictions are not independent; and you also get the benefit of BERT, that in expectation each prediction can draw on all of the rest of the context. This is the main idea of XLNet. They formalize it as follows. The autoregressive models decompose the log-probability of a sentence into a sum of log-probabilities of each word conditioned on everything before it. BERT approximately factorizes the log-probability into a sum over each masked word given the unmasked context, and this is only an approximate factorization because you drop the dependencies between the masked tokens. What XLNet does is the same decomposition as the autoregressive models, a sum of log-probabilities of each word given the words before it, but not before it in the sequence: before it in a chosen permutation z, where z is sampled uniformly from the set of all possible permutations. So in expectation the model sees all of the context. The objective is written out below.
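Roughly following the paper's notation, the permutation language modeling objective looks like this:

```latex
% Z_T: the set of all permutations of {1, ..., T}; z_t: the t-th element of permutation z
\max_\theta \;
  \mathbb{E}_{\mathbf{z} \sim \mathcal{Z}_T}
  \left[ \sum_{t=1}^{T} \log p_\theta\!\left(x_{z_t} \mid \mathbf{x}_{\mathbf{z}_{<t}}\right) \right]
```

As I understand the paper, the token sequence itself is never reshuffled; only the factorization order is permuted, implemented through attention masks, so the model still uses the positional information of the original sequence order.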
Back to the picture: X2 and X4, as you see here, have weights into this column that is finally asked to predict X3. So this is now an easier task, right? You're allowed to look at the word to the left and to the right. If you have the following permutation order, 1, 4, 2, 3, you're actually allowed to look at all of the other words in order to produce X3, because X3 is at the end of the permutation order, and the fourth case is similar. So all of these four things will appear during training and you will learn from them. So in expectation, you'll basically have seen all the different versions of the context, which helps a lot, apparently. All right, so in order to achieve this, they had to make some architectural changes to the model. Namely, what you want is that in a single pass through the model you not only predict one token, but you do many predictions. This helps training a lot; BERT naturally always does this, it masks out 15% of the tokens or so, which puts it at like 40, 50 tokens, and it predicts them all at the same time. Now, you would like to do this here as well: you would like to predict, all at the same time, the ones that you're asked to predict. But of course the problem is, if in this factorization order, 2, 4, 3, 1, you're asked to predict X3, you're allowed to look at X2 and X4, and if you're asked to predict X1, you're allowed to look at X2, X4 and X3. So if you only have a single pass through the model, the question is, do you now input X3 or do you not? Because the prediction of X3 is not allowed to look at X3, while the prediction of X1 is allowed to look at X3. So they make an architectural change in order to achieve both things, so that you can have a single pass through the model, but the prediction of each token only depends on the things in front of it in the permutation order. And they do this by having this masked two-stream attention, where they have not one hidden representation like in classic transformers, but at each position two hidden representations, one they call H and one they call G. The Hs are initialized with the embeddings of the tokens, and the Gs are just initialized randomly, and then they get transformed up the layers. The point is, the H of the next layer is always able to look at everything in front of it, including its own position one layer down, while the G is only allowed to look at the Hs from before the current position in the permutation order, whereas the H is allowed to look at those, but also at the H at the current position. And at the last layer, you simply ask the model to predict the token from just the G. You can easily see that this results in the model only attending to things before it. The G, by the way, can also look at the G at the current position, but it cannot look at the H there. So there's never any information flowing from the current word embedding of the token you're trying to predict to the prediction layer. So basically, you're not telling the model the answer, yet you're still able to predict multiple things in a single pass through the model. Formally, this is described here in the attention layer.
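To make that masking concrete, here is a minimal numpy sketch of how one could build the two attention masks from a chosen factorization order. The function name and conventions are mine, not from the paper's code, so treat it as an illustration of the idea rather than the actual implementation.

import numpy as np

def two_stream_masks(perm):
    """Build attention masks for one permutation order.

    perm[k] is the position decoded at step k of the factorization order.
    content_mask[i, j] == True  means the content stream (H) at position i
    may attend to position j (everything at or before i in the order).
    query_mask[i, j] == True    means the query stream (G) at position i
    may attend to the content (H) at position j (strictly before i, so the
    token being predicted never sees its own embedding).
    """
    perm = np.asarray(perm)
    n = len(perm)
    rank = np.empty(n, dtype=int)
    rank[perm] = np.arange(n)            # rank[pos] = step at which pos is decoded
    content_mask = rank[None, :] <= rank[:, None]
    query_mask = rank[None, :] < rank[:, None]
    return content_mask, query_mask

# Example: factorization order 2, 4, 3, 1 over four tokens (0-indexed positions 1, 3, 2, 0)
content, query = two_stream_masks([1, 3, 2, 0])

In the real model these boolean masks would become additive attention masks, and, as far as I understand, the query stream additionally gets the position, but not the content, of the token it is asked to predict.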
So they divide how they produce the queries and how they produce the keys and values. Usually the queries and the keys and values are produced from the same hidden representation, but here they produce the keys and values from the Hs in both cases, while to update the Gs they produce the queries from the last layer's Gs, and to update the Hs they produce the queries from the last layer's Hs. And most importantly, when they produce the keys and values, the Hs you're allowed to look at to update the G are only the Hs before you in the permutation order, but to update the H, you're allowed to look at everything before, including the position you're currently at. So that's kind of an engineering solution to the problem introduced by their augmentation. I think it's a pretty neat solution, pretty cool. Yeah. So the rest of the paper is incorporating ideas from Transformer-XL. Transformer-XL is one of these classic transformers of this AR, this autoregressive style, but it has a few improvements over the classic vanilla transformer, and they incorporate a number of things here. First of all, they incorporate this memory mechanism. The memory allows you to input longer sequences. Let's say our transformer input length is a maximum of five tokens. What Transformer-XL allows you to do is: you input five tokens, you do your transformer thing, you encode them, and you save something into this memory. And then when you input the next five tokens, your transformer is allowed to look at the memory of the last sequence, and also to update it. So that's what these memory blocks you saw here are: you're always allowed to look at the memory blocks from the last sequence, and the hidden representations of the current sequence will then be stored in the memory block for the next sequence. So this is kind of a trick to carry over information. The updating of the memory isn't learned with the objective of making the next prediction better; it's just some information, kind of gradient-free information, to provide to the next step. And it apparently helps: you can incorporate longer sequences into this Transformer-XL. So they take this over and implement it in XLNet. They also do relative positional encodings and relative segment encodings. I won't go into this much more here because it's not the main idea, basically. So they do experiments and they compare to the BERT architecture, basically the same architecture with the same number of parameters and layers. And they beat BERT on most of these NLP tasks; I think they said on 20 tasks, and they reach a new state of the art on 18. So apparently their method works very well. The last thing I find important is an ablation study of the effects of their improvements. Because kind of my problem is, I never know: they have this new idea, okay, we do these random permutations, but then they also say, oh, and also we include the memory from Transformer-XL, and we do relative positional encodings, and so on. With these kinds of papers, of course you reach better numbers, you get a new state of the art, so it's kind of a landmark paper. But to me, a paper should more be like a single thing. So whatever, your idea is these orderings, and whatever you need to do to make that work, okay, fine.
But then why the additional transformer XL things, it's really then hard to estimate how much of the improvement comes from your idea and how much of the improvement simply comes from the fact that you already put these other things. Actually, I have nothing to do with it. So I appreciate these kind of analyses called ablation studies where they kind of try to take away the memory and these things and kind of look at what it's doing to the model. And you see here kind of the grades down here as, for example, this kind of the grades as you take stuff away while still being more kind of more successful than birth. So that that I would say also, yeah, here is more unclear, but also kind of seems to degrade a bit and while being more successful than birth. So I appreciate this, this, some kind of really trying to show that your gains really come from your new idea and not from some other stuff. All right, so the last thing I want to mention actually is this thing. So someone claiming or calculating that it costs to $45,000 to train the Excel net model the way they describe it in the paper. I'm sure this is going to be brought down because it was brought down that like the time to train was brought down with birth as well, but this is just, I mean, this is crazy. This is just training it. It kind of gives large questions about the state of research and the ability for kind of, let's say, more academic players to participate in research. On the one hand, of course, we, like, of course, these companies should be able to do this. And on the other hand, if it seems like currently in some fields, just putting more money on the table will get your weather result. Not this, this actually, like this paper is actually a cool idea, but it's still kind of prohibitively expensive to even reproduce it. Yeah, all right. So that was, that was that for this paper. I hope you enjoyed this and see you. | [{"start": 0.0, "end": 6.84, "text": " Hi there. Today we're looking at XLNet, generalized auto-regressive pre-training for language"}, {"start": 6.84, "end": 13.24, "text": " understanding by Jill and Yang and other people from Carnegie Mellon University as well as"}, {"start": 13.24, "end": 20.240000000000002, "text": " Google Brain. So this is kind of the elephant in the room currently as XLNet is the first"}, {"start": 20.240000000000002, "end": 26.84, "text": " model to beat BERT, which was the previous state of the art on a lot of NLP tasks. To"}, {"start": 26.84, "end": 33.32, "text": " beat BERT at a lot of these same NLP tasks, so the chief state of the art result on 18"}, {"start": 33.32, "end": 40.72, "text": " of 20 tasks, I believe. Maybe they test more. They have performed BERT on 20, the chief"}, {"start": 40.72, "end": 46.480000000000004, "text": " state of the art on 18, including things that's question answering natural language inference,"}, {"start": 46.480000000000004, "end": 53.08, "text": " sentiment analysis and so on. So those are kind of remarkable results and even more remarkable"}, {"start": 53.08, "end": 59.32, "text": " is that the architecture of the network is actually very fairly similar to BERT. The kind"}, {"start": 59.32, "end": 67.64, "text": " of new introduction is a pre-training, a different pre-training procedure and we'll look into"}, {"start": 67.64, "end": 74.44, "text": " that. So let's actually jump into their main points straight away. 
What they go into is there"}, {"start": 74.44, "end": 81.08, "text": " are two kinds of currently used pre-training methods for these NLP tasks and both can be"}, {"start": 81.08, "end": 88.84, "text": " understood as kind of language modeling. One, so language modeling for those you don't know is"}, {"start": 88.84, "end": 94.28, "text": " predict the next word in a sequence. So if I give you the sequence here, one supervised representation"}, {"start": 94.28, "end": 102.2, "text": " learning has been and then ask you what's next and then you're supposed to say highly, right?"}, {"start": 102.2, "end": 111.32000000000001, "text": " That's language modeling in a nutshell. So what they differentiate are two kinds of language"}, {"start": 111.32000000000001, "end": 117.56, "text": " modeling. The first one, they say is autoregressive language modeling. Now what autoregressive language modeling"}, {"start": 117.56, "end": 123.0, "text": " does is exactly what we've looked at. I give you on supervised learning has been, you're supposed"}, {"start": 123.0, "end": 129.24, "text": " to predict highly and then in the next step, I give you on supervised representation learning has"}, {"start": 129.24, "end": 135.8, "text": " been highly and you're supposed to predict successful and so on. So in the next step, I'm going to give"}, {"start": 135.8, "end": 142.76000000000002, "text": " you the entire sentence up until here and you're supposed to predict in autoregressive because each"}, {"start": 142.76000000000002, "end": 150.52, "text": " token can look at the kind of previous ones in the in the sequence. So when you, sorry, you can't"}, {"start": 150.52, "end": 159.32000000000002, "text": " see that. When you predict, when you predict, you can always kind of autoregressively look at what"}, {"start": 159.32000000000002, "end": 165.88, "text": " the previous ones were, including what you've previously predicted. Of course, during training,"}, {"start": 165.88, "end": 172.84, "text": " this is a teacher-forced, as I said, so you'll put the actual words there. But this is autoregressive"}, {"start": 172.84, "end": 181.56, "text": " modeling in contrast to what they call auto encoding. And auto encoding is what Bert does. And this"}, {"start": 181.56, "end": 188.12, "text": " is the following. So in contrast to that, let's say I have the same sequence on supervised"}, {"start": 188.12, "end": 198.2, "text": " representation learning has been highly successful in the domain of, yeah, something. And then I say,"}, {"start": 198.2, "end": 207.79999999999998, "text": " okay, I give you the sequence, but I am going to delete this and this. And now I ask you to predict"}, {"start": 208.44, "end": 216.04, "text": " these two. So you can see the task is slightly different. As you now have access to all of the"}, {"start": 216.04, "end": 222.04, "text": " sequence basically, except the ones that you're asked to predict, but you're you kind of ask to"}, {"start": 222.04, "end": 228.68, "text": " predict yet them not in any order, but you're asked to predict them at the same time basically."}, {"start": 228.68, "end": 239.48, "text": " So at the same time, you're asked to predict this word and this word. And so the first kind of"}, {"start": 239.48, "end": 245.95999999999998, "text": " these autoregressive language modeling has been used by transformer models until Bert. 
And then"}, {"start": 245.96, "end": 255.16, "text": " basically Bert really pushed this auto encoding language model pre-training, which made it so"}, {"start": 255.16, "end": 263.24, "text": " successful. And now this paper, Excel, that wants to like combine the best of both of them."}, {"start": 264.44, "end": 271.64, "text": " And in order to understand what's the best of both of them. So what's good at Bert we've already"}, {"start": 271.64, "end": 278.36, "text": " seen it can actually draw information from all of the context of the words it's trying to predict."}, {"start": 278.36, "end": 285.64, "text": " But what is the kind of pitfall of Bert? And they actually put this really nicely in an example,"}, {"start": 285.64, "end": 291.4, "text": " they gave way further down where they say comparison to Bert. I don't know why that is not"}, {"start": 291.4, "end": 298.2, "text": " like also in the introduction. But here they have the sentence New York is a city."}, {"start": 298.2, "end": 306.28, "text": " All right, New York is a city. This one. And you're asked to predict these two words. And if you"}, {"start": 306.28, "end": 317.15999999999997, "text": " now compare Bert to what Excel net does. If so the context is a city and you're asked to predict New"}, {"start": 317.15999999999997, "end": 322.84, "text": " York. What Bert does is it simply masks out the two words and says here please fill in these two"}, {"start": 322.84, "end": 331.4, "text": " words. Now this translates to the kind of objective being separated in the two words such that"}, {"start": 331.88, "end": 338.44, "text": " the prediction of York here is completely independent of the prediction of New. So if you know"}, {"start": 338.44, "end": 345.32, "text": " of any other city that is made of two words, for example San Francisco or Los Angeles, then"}, {"start": 345.32, "end": 352.2, "text": " these would be as valid and any mixture would be as valid. So you might Bert might end up with"}, {"start": 353.4, "end": 359.71999999999997, "text": " laws. York is a city and that will be perfectly fine for Bert because while it's predicting"}, {"start": 359.71999999999997, "end": 364.76, "text": " laws is a perfectly fine prediction for the first word of a two word city. And York is a"}, {"start": 364.76, "end": 373.08, "text": " perfectly fine prediction for the last word of a two word city. So these are the kind of mistakes"}, {"start": 373.08, "end": 378.84, "text": " that Bert can get into by not being order aggressive by basically predicting all of these tokens"}, {"start": 378.84, "end": 384.76, "text": " at the same time independently of each other. Whereas Excel net, what it would do is it would specify"}, {"start": 384.76, "end": 391.56, "text": " an order. Let's say, okay first I will predict the word New for the first word New something is"}, {"start": 391.56, "end": 396.84, "text": " a city. And then when I predict York, I will actually take into account that I previously have"}, {"start": 396.84, "end": 405.96, "text": " predicted the word New. So that's the main advantage that that order aggressive training has"}, {"start": 405.96, "end": 413.64, "text": " over auto encoding. Now what are the pitfalls? The pitfalls are if you have this sentence,"}, {"start": 413.64, "end": 426.59999999999997, "text": " let's look at it write it down New York is a city. Right. 
If you have this sentence and let's say"}, {"start": 426.6, "end": 434.36, "text": " actually you're not you're not asked to predict New York, you're asked to predict the word A here."}, {"start": 434.36, "end": 443.64000000000004, "text": " A. Right. You're asked to predict that. In order aggressive style or A city, it's a better example."}, {"start": 443.64000000000004, "end": 450.20000000000005, "text": " The two words A city in order aggressive style, if you predict the word A, you can only ever look at"}, {"start": 450.2, "end": 456.52, "text": " what comes beforehand. Whereas if Bert were to predict a just the word A, it would be able to look"}, {"start": 456.52, "end": 464.59999999999997, "text": " at all of it. Okay, let's not predict city. So you see the the kind of order aggressive model"}, {"start": 465.48, "end": 473.15999999999997, "text": " is bound to the order of the of the factorization of the sentence. That's why it's it's bound to the"}, {"start": 473.15999999999997, "end": 479.24, "text": " order in which it has to predict the tokens. So here if it's predicting A, it can only look at"}, {"start": 479.24, "end": 483.96000000000004, "text": " stuff that comes before it because it needs to do it in order. Right. Once it gets to city, you can"}, {"start": 483.96000000000004, "end": 490.44, "text": " actually look at the entire sentence here. But before that, it only ever has partial information about"}, {"start": 490.44, "end": 500.6, "text": " the about the context. So actually it wouldn't be much better if I had said we're trying to predict"}, {"start": 500.6, "end": 508.92, "text": " these two words is and A. Right. And once I predict so so Bert would actually have access to the"}, {"start": 508.92, "end": 514.6800000000001, "text": " word city here, whereas the order aggressive models only have access to the ones before it."}, {"start": 515.32, "end": 524.6800000000001, "text": " I hope that makes it clearer. So the main idea in excel net is where do where does this order"}, {"start": 524.6800000000001, "end": 529.32, "text": " dependence come from in the order aggressive model? The order dependence actually comes from the"}, {"start": 529.32, "end": 538.12, "text": " factorization of the sentence of the of the language model. So in a language model, we're actually"}, {"start": 538.12, "end": 547.16, "text": " trying to assess the probability distribution of sentences here. X is a sentence. Right. And this"}, {"start": 547.16, "end": 556.68, "text": " can be naturally factorized into a product over the words where the probability of each word"}, {"start": 556.68, "end": 562.36, "text": " is only dependent on the words before it. So this is a this is an equal is not an approximation."}, {"start": 562.36, "end": 568.84, "text": " This is an equality. The probability of a sequence can be decomposed into a product of"}, {"start": 568.84, "end": 576.6, "text": " probabilities like this exactly. So this here is exactly what these order aggressive models implement."}, {"start": 576.6, "end": 585.88, "text": " Each word is predicted from the words before it. Right. There are other kinds of order aggressive"}, {"start": 585.88, "end": 591.72, "text": " models that also do the other direction where here they say, okay, the probability of a sentence"}, {"start": 591.72, "end": 596.6800000000001, "text": " is a product and each word is predicted from the words after it. But it kind of is the same"}, {"start": 596.6800000000001, "end": 604.0400000000001, "text": " problem. 
You only ever have access into the one direction. Basically, however, you define the order"}, {"start": 604.0400000000001, "end": 611.72, "text": " of decoding. You only ever have access from a given word to what was before it in the order."}, {"start": 611.72, "end": 622.9200000000001, "text": " So the main idea of XL net is they say, hey, why don't we consider all possible orderings?"}, {"start": 622.9200000000001, "end": 632.44, "text": " Right. I mean, that that's kind of a it's an idea. So let's go back to our thing here. They say,"}, {"start": 632.44, "end": 639.32, "text": " why don't we consider all possible orderings? So basically what we will do is if this sample comes"}, {"start": 639.32, "end": 646.9200000000001, "text": " up, New York is a city. Right. What I can do is I can define the ordering. Let's say I always"}, {"start": 646.9200000000001, "end": 656.0400000000001, "text": " want to predict two words. So, but typically, masks out about 15% of its input that to be predicted."}, {"start": 656.6, "end": 662.2, "text": " And here, let's say, we'll mask out 20% which is two words. So of this sequence, we'll mask two"}, {"start": 662.2, "end": 666.9200000000001, "text": " words and ask them all to predict it. That's that will be our pre-training objective."}, {"start": 666.92, "end": 672.12, "text": " The first time this sample comes up from the dataset, I might specify the order just"}, {"start": 672.12, "end": 680.28, "text": " classically. Right. Just one, two, three, four, five. All right. I'll predict the last two words. I'll"}, {"start": 681.3199999999999, "end": 688.52, "text": " kind of mask them out. Right. I give the model New York is. And then I predict, let it predict"}, {"start": 688.52, "end": 694.12, "text": " A. And then in the next step, I'll give it New York is A and I'll let it predict City."}, {"start": 694.12, "end": 702.44, "text": " Cool. So now the pitfall is the word A here only has access to no things before it and not to"}, {"start": 702.44, "end": 708.68, "text": " city itself. City has access to everything. All right. So, but then I continue training."}, {"start": 708.68, "end": 714.44, "text": " And the next set, time, this sample, right. It's in my dataset. New York is a city. The next time"}, {"start": 714.44, "end": 725.08, "text": " it comes up, I simply go for a different order. Let's say one, two, three, four, five. Right. So now,"}, {"start": 726.44, "end": 736.6800000000001, "text": " again, I'm asking to predict the last two tokens which here are City and York. So in the first"}, {"start": 736.68, "end": 745.0799999999999, "text": " step, I would give it is A and New. And I will ask it what's here. And I'll ask it to predict City."}, {"start": 745.0799999999999, "end": 750.3599999999999, "text": " And then in the second step, I'll also give it that and I'll ask it, okay. Now what's here,"}, {"start": 750.3599999999999, "end": 756.3599999999999, "text": " given all of that. Right. So new is a city. All right. You're asked to predict the missing word."}, {"start": 757.8, "end": 764.68, "text": " So that that's pretty, so in the first step, it's new is a, you're asked to predict that the"}, {"start": 764.68, "end": 771.7199999999999, "text": " second. And then the second step is new is the city. 
And they're asked to predict the first."}, {"start": 774.04, "end": 781.4799999999999, "text": " So now, as you can see, while predicting City, here, all of a sudden, we didn't know"}, {"start": 781.4799999999999, "end": 786.04, "text": " long or in this ordering, we don't have access to the word York. So we'll have to learn to"}, {"start": 786.04, "end": 793.3199999999999, "text": " predict City from the rest of the context. Now, even more, even more, if we now decide,"}, {"start": 793.32, "end": 805.5600000000001, "text": " let's decide on a different ordering. Again, one, two, three, four, five. So now, we'll actually"}, {"start": 805.5600000000001, "end": 816.2800000000001, "text": " first step is to ask New York City. Please predict this thing here. All right. Model might,"}, {"start": 816.28, "end": 824.12, "text": " yeah, you might train the model to predict is. And then the second step, you say New York is City."}, {"start": 824.12, "end": 830.28, "text": " Please predict this. Now, you see before, before, when we are asked to predict the word A,"}, {"start": 830.28, "end": 837.3199999999999, "text": " it only had access to things to the left of it in the very first example. But now it actually has"}, {"start": 837.3199999999999, "end": 846.04, "text": " access to the entire context. So the, the, the idea is, as we sample this data point multiple"}, {"start": 846.04, "end": 852.28, "text": " times and each time we decide on a different ordering to decode, for each prediction of each"}, {"start": 852.28, "end": 859.3199999999999, "text": " token, token, sorry, will actually have seen many, many parts, many different variants of the"}, {"start": 859.3199999999999, "end": 867.0, "text": " context. And in expectation, we'll actually have seen all of the context just like Bert, but we'll"}, {"start": 867.0, "end": 873.4, "text": " always have done it in an order regressive way. So basically, you get all the advantages of being"}, {"start": 873.4, "end": 881.24, "text": " order regressive, namely, that you are able to decode step by step while always referring"}, {"start": 881.8, "end": 887.48, "text": " to everything in front of you and ordering. So the predictions are not independent,"}, {"start": 887.48, "end": 893.48, "text": " but you also get the benefit of Bert that it's able to basically look at all of the rest of the"}, {"start": 893.48, "end": 901.0, "text": " context in expectation in order to make this prediction. So this is, this is the main idea of,"}, {"start": 901.0, "end": 911.64, "text": " of Excel net. They formalized this jump up again. They formalized it in saying, okay, what Bert does"}, {"start": 911.64, "end": 917.8, "text": " here is it actually see it, it facturizes the log probability of a sentence into this, some,"}, {"start": 917.8, "end": 924.6, "text": " so the product in the log becomes a sum into the sum of log probabilities of, no, sorry, this is"}, {"start": 924.6, "end": 933.4, "text": " order regressive, confused. Into the words conditioned on everything in front of them."}, {"start": 934.44, "end": 943.64, "text": " What Bert does is it actually approximately factorizes the log probability into each word,"}, {"start": 943.64, "end": 948.84, "text": " and then everything in the context, everything that's not masked in the context."}, {"start": 948.84, "end": 956.52, "text": " And this is only an approximate factorization because now you basically dropping away all these"}, {"start": 956.52, "end": 968.0400000000001, "text": " masked tokens. 
And what they do now is they do the same as the AR as the order regressive models."}, {"start": 968.6, "end": 974.44, "text": " Here they decompose the log probability into a sum of log probabilities over each of the words,"}, {"start": 974.44, "end": 984.12, "text": " given all the words before it, but now not before it in the sequence, but before it in a chosen"}, {"start": 984.12, "end": 991.5600000000001, "text": " permutation Z. And Z is sampled uniformly from the set of all possible permutations."}, {"start": 992.44, "end": 997.4000000000001, "text": " So in expectation they'll see all of the context. So this is the, this is the main thing."}, {"start": 997.4, "end": 1007.3199999999999, "text": " They show this here in a kind of a picture with, so here is the neural network. This is the input"}, {"start": 1007.3199999999999, "end": 1014.36, "text": " layer. And these are the hidden layers as the attention layers go up and up here you're asked to"}, {"start": 1014.36, "end": 1023.16, "text": " predict the token. So here you're always asked to predict X3. So there is no, there's never going to be"}, {"start": 1023.16, "end": 1030.76, "text": " any way here. Since if you knew X3, you would be able to predict X3. One would hope."}, {"start": 1032.92, "end": 1040.12, "text": " All right. So in the first example, the factorization order chosen at random is 3241."}, {"start": 1040.12, "end": 1047.32, "text": " Now you're asked to predict X3 and we know, okay, we should only do this with things that are"}, {"start": 1047.32, "end": 1054.2, "text": " before it in the permutation order. Well, here since X3 is the first in the permutation order, we"}, {"start": 1054.2, "end": 1061.72, "text": " actually don't, we don't have anything to go on. We basically ask to predict X3 from scratch as if"}, {"start": 1061.72, "end": 1067.6399999999999, "text": " it were the start of the sentence. So we'll basically tell the model, I have a sentence that goes,"}, {"start": 1067.64, "end": 1075.4, "text": " hmm, hmm, hmm, hmm, hmm, hmm. Please predict the third. All right. It's a hard task. Yeah."}, {"start": 1076.0400000000002, "end": 1079.88, "text": " By the way, you're always able to look at this memory thing here. Don't worry about this"}, {"start": 1080.44, "end": 1087.3200000000002, "text": " for now. This is just, this is an augmentation they do on top of their idea. This is not the core idea."}, {"start": 1088.1200000000001, "end": 1092.5200000000002, "text": " So, okay, but now the second time this sample comes up from the training set, we decide on a"}, {"start": 1092.52, "end": 1100.76, "text": " different order. So the order here is 2431. Now again, we're asked to predict X3 and we're allowed"}, {"start": 1100.76, "end": 1107.8, "text": " to look at everything before it. So 2 and 4, as you see here, there are weights from X2 and X4"}, {"start": 1108.68, "end": 1117.08, "text": " into this column that finally is then asked to predict X3. So this is also, this is now an easier"}, {"start": 1117.08, "end": 1123.96, "text": " task, right? You're allowed to look at the work to the left and to the right. If you have the"}, {"start": 1123.96, "end": 1131.6399999999999, "text": " following permutation order 1423, you're actually allowed to look at all of the other words because"}, {"start": 1131.6399999999999, "end": 1138.36, "text": " X3 is at the end of the permutation order in order to produce X3. So all of these four and the"}, {"start": 1138.36, "end": 1143.8, "text": " four thing is similar. 
So all of these four things will appear during training and you will learn"}, {"start": 1143.8, "end": 1151.0, "text": " from them. So in expectations, you'll basically have seen all variants of different versions of"}, {"start": 1151.0, "end": 1164.6, "text": " the context, which helps a lot, apparently. All right, so in order to achieve this, they had to"}, {"start": 1164.6, "end": 1172.76, "text": " make some architectural changes to the model. Namely, what you want to do is in a single pass"}, {"start": 1172.76, "end": 1178.28, "text": " through the model here, you not only want to predict one token, but you want to do many predictions."}, {"start": 1178.28, "end": 1184.92, "text": " This helps training a lot, so bird and naturally always does like the 15th, the mass at 15% of the"}, {"start": 1184.92, "end": 1191.08, "text": " tokens or so, what puts that like 40, 50 tokens. So it masks them and it predicts them all at the"}, {"start": 1191.08, "end": 1196.36, "text": " same time. Now, you would like to do this here as well. You would like to predict all at the same"}, {"start": 1196.36, "end": 1203.9599999999998, "text": " time the ones that you're asked to predict. But of course, the problem is for here, if you're asked,"}, {"start": 1203.9599999999998, "end": 1210.4399999999998, "text": " if in this factorization order, two, four, three, one, if you're asked to predict X3, you're allowed"}, {"start": 1210.4399999999998, "end": 1218.6, "text": " to look at X2 and X4. If you're asked to predict X1, you're allowed to look at X2, X4 and X3."}, {"start": 1218.6, "end": 1226.6799999999998, "text": " So if you only have a single pass through the model, the question is, do you now input X3 or do you not?"}, {"start": 1226.6799999999998, "end": 1233.48, "text": " Because the prediction of X3 is not allowed to look at X3 while the prediction of X1 is allowed"}, {"start": 1233.48, "end": 1240.9199999999998, "text": " to look at X3. So they do an architectural change in order to achieve both things so that you can do"}, {"start": 1240.92, "end": 1248.28, "text": " have a single pass through the model, but the prediction of each token only depends on the things"}, {"start": 1249.0800000000002, "end": 1254.92, "text": " in front of it in the permutation order. And they do this by having these two stream,"}, {"start": 1256.28, "end": 1263.96, "text": " these masked two stream attention, where they basically have not one hidden representation,"}, {"start": 1263.96, "end": 1269.0800000000002, "text": " like in classic transformers, but they have at each step two hidden representations. One they"}, {"start": 1269.08, "end": 1277.6399999999999, "text": " call H, one they call G. So the Hs are initialized with the embeddings of the tokens,"}, {"start": 1278.28, "end": 1284.84, "text": " and the Gs are just initialized randomly, and then they get transformed. The point is the H"}, {"start": 1284.84, "end": 1292.4399999999998, "text": " of the next layer is always able to look at everything in front of it, including its own H,"}, {"start": 1292.44, "end": 1301.4, "text": " basically it's one layer down, its own position one layer down. While the G is only allowed to look at"}, {"start": 1302.2, "end": 1312.28, "text": " the Hs, it's allowed to look at the Hs, but the Hs from before. 
So all the Gs here are only ever"}, {"start": 1312.28, "end": 1319.48, "text": " able to look at the Hs from before the current position, whereas the H is always allowed here"}, {"start": 1319.48, "end": 1325.88, "text": " to look at the same, but also at the H at the current position. And now at the last layer,"}, {"start": 1325.88, "end": 1334.1200000000001, "text": " you simply ask the model to predict the token from just the G. And you can easily see that this results"}, {"start": 1334.1200000000001, "end": 1348.28, "text": " in this model only attending two things before it. The G, by the way, can also look at the G"}, {"start": 1348.28, "end": 1355.96, "text": " of the current layer. So that's also the thing, but it cannot look at the H. So there's never any"}, {"start": 1355.96, "end": 1366.36, "text": " information flowing from the current word embedding of the token you're trying to predict to the"}, {"start": 1366.36, "end": 1373.16, "text": " prediction layer. So basically that means the model can't just look like you're not telling the"}, {"start": 1373.16, "end": 1378.6000000000001, "text": " model the answer. Yet you're still able to predict multiple things in a single pass through the"}, {"start": 1378.6000000000001, "end": 1387.72, "text": " model. Formally, this is described here in the attention layer. So they divide how they produce"}, {"start": 1387.72, "end": 1393.88, "text": " the queries and how they produce the keys and values. Usually the queries and the keys and values"}, {"start": 1393.88, "end": 1399.3200000000002, "text": " are produced from the same hidden representation, but here they produce the keys and values"}, {"start": 1399.32, "end": 1408.6799999999998, "text": " from the Hs in both cases. But to update the Gs, they produce the queries from the last layers G"}, {"start": 1408.6799999999998, "end": 1415.1599999999999, "text": " and to produce the Hs, they produce the queries from the last layer Hs. And most importantly,"}, {"start": 1415.1599999999999, "end": 1424.2, "text": " when they produce the keys and values, the Hs they look at here to update the G, you're only allowed"}, {"start": 1424.2, "end": 1430.28, "text": " to look at Hs before you in the permutation order, but to update the H, you're allowed to look at"}, {"start": 1430.28, "end": 1436.52, "text": " everything before including the position you're currently at. So that's kind of the, it's an"}, {"start": 1436.52, "end": 1443.96, "text": " engineering solution to the problem introduced by their augmentation. I think it's a pretty neat"}, {"start": 1443.96, "end": 1454.04, "text": " solution, pretty cool. Yeah. So the rest of the paper here is incorporating"}, {"start": 1454.04, "end": 1459.96, "text": " ideas from transformer XL. So transformer XL is one of these classic transformers that"}, {"start": 1459.96, "end": 1467.8, "text": " that is like this AR, so this other regressive style of transformer, but that has a few"}, {"start": 1467.8, "end": 1475.48, "text": " improvements over the classic vanilla transformer. And they incorporate a number of things here,"}, {"start": 1475.48, "end": 1480.84, "text": " namely, first of all, they incorporate this memory thing. So the memory thing allows you"}, {"start": 1480.84, "end": 1490.1999999999998, "text": " to input longer sequences. Let's say our transformer input length is maximum of five tokens."}, {"start": 1490.1999999999998, "end": 1498.28, "text": " What the transformer XL allows you to do is you input five tokens. 
And then you save, you do your"}, {"start": 1498.28, "end": 1504.6799999999998, "text": " transformer thing, you encode it, and you save something into this memory. And then when you"}, {"start": 1504.68, "end": 1512.28, "text": " input the next five tokens, your transformer is then allowed to look at the memory of the last"}, {"start": 1512.28, "end": 1519.64, "text": " sequence, right? And also updated. So that that's kind of this, these Membloxi saw here. So you're"}, {"start": 1519.64, "end": 1525.16, "text": " always allowed to look at these Memblox from last sequence. And then the hidden representations"}, {"start": 1525.16, "end": 1531.48, "text": " here of this sequence, they will actually be stored in the Memblock for the next sequence."}, {"start": 1531.48, "end": 1542.1200000000001, "text": " So this is kind of a trick to carry over information. It's not the updating the memory part"}, {"start": 1542.1200000000001, "end": 1546.3600000000001, "text": " isn't learned with the objective to make the next prediction better, but it's just"}, {"start": 1548.04, "end": 1554.68, "text": " some information, it's kind of gradient free information to provide to the next step."}, {"start": 1554.68, "end": 1559.72, "text": " And it apparently helps you can incorporate longer sequences into this transformer XL. So they"}, {"start": 1559.72, "end": 1566.92, "text": " take this over and implement this into XL net. They also do relative positioning coatings,"}, {"start": 1566.92, "end": 1576.04, "text": " relative segment encodings. I won't go into this too much more here because it's not the main"}, {"start": 1576.04, "end": 1583.64, "text": " idea basically. So they do experiments and they compare to bird architecture with the same,"}, {"start": 1583.64, "end": 1590.92, "text": " basically same architecture with the same number of parameters or layers. And they beat bird"}, {"start": 1590.92, "end": 1599.88, "text": " in all of these kind of NLP tasks or most of, I think they said in 20. They reach new state"}, {"start": 1599.88, "end": 1610.3600000000001, "text": " of the art in 18 NLP tasks. So apparently their method works very well. So what they do here is"}, {"start": 1610.36, "end": 1616.28, "text": " the last thing I find important is an ablation study of the effects of their improvements."}, {"start": 1618.1999999999998, "end": 1625.6399999999999, "text": " So they were because kind of my problem is I never know. Like they have this new idea, okay,"}, {"start": 1625.6399999999999, "end": 1630.84, "text": " we do these random permutines. But then they also say, oh, and also we include"}, {"start": 1631.8799999999999, "end": 1637.56, "text": " memory from XL net and we do relative positioning coatings and so on. To me,"}, {"start": 1637.56, "end": 1641.8799999999999, "text": " these kind of papers, of course, you reach better numbers. You get a new state of the art. So it's"}, {"start": 1641.8799999999999, "end": 1649.72, "text": " kind of a landmark paper. But to me, a paper should more be like a single thing. So whatever,"}, {"start": 1649.72, "end": 1654.6, "text": " your idea is this, your idea is these orderings and whatever you need to do to make that work,"}, {"start": 1654.6, "end": 1665.24, "text": " okay, fine. 
But then why the additional transformer XL things, it's really then hard to estimate"}, {"start": 1665.24, "end": 1669.8, "text": " how much of the improvement comes from your idea and how much of the improvement simply comes"}, {"start": 1669.8, "end": 1674.44, "text": " from the fact that you already put these other things. Actually, I have nothing to do with it."}, {"start": 1674.44, "end": 1681.0, "text": " So I appreciate these kind of analyses called ablation studies where they kind of try to take"}, {"start": 1681.0, "end": 1689.0, "text": " away the memory and these things and kind of look at what it's doing to the model. And you see here"}, {"start": 1689.0, "end": 1696.36, "text": " kind of the grades down here as, for example, this kind of the grades as you take stuff away"}, {"start": 1698.28, "end": 1706.6, "text": " while still being more kind of more successful than birth. So that that I would say also,"}, {"start": 1707.8, "end": 1712.92, "text": " yeah, here is more unclear, but also kind of seems to degrade a bit"}, {"start": 1712.92, "end": 1720.92, "text": " and while being more successful than birth. So I appreciate this, this, some kind of really trying"}, {"start": 1720.92, "end": 1726.76, "text": " to show that your gains really come from your new idea and not from some other stuff."}, {"start": 1727.72, "end": 1735.64, "text": " All right, so the last thing I want to mention actually is this thing. So someone claiming or"}, {"start": 1735.64, "end": 1746.2, "text": " calculating that it costs to $45,000 to train the Excel net model the way they describe it in the"}, {"start": 1746.2, "end": 1751.16, "text": " paper. I'm sure this is going to be brought down because it was brought down that like the time"}, {"start": 1751.16, "end": 1756.5200000000002, "text": " to train was brought down with birth as well, but this is just, I mean, this is crazy. This is just"}, {"start": 1756.52, "end": 1766.36, "text": " training it. It kind of gives large questions about the state of research and the ability for kind"}, {"start": 1766.36, "end": 1773.56, "text": " of, let's say, more academic players to participate in research. On the one hand, of course, we,"}, {"start": 1773.56, "end": 1779.8799999999999, "text": " like, of course, these companies should be able to do this. And on the other hand, if it seems"}, {"start": 1779.88, "end": 1786.6000000000001, "text": " like currently in some fields, just putting more money on the table will get your weather result."}, {"start": 1788.3600000000001, "end": 1793.3200000000002, "text": " Not this, this actually, like this paper is actually a cool idea, but it's still kind of"}, {"start": 1793.3200000000002, "end": 1800.2800000000002, "text": " prohibitively expensive to even reproduce it. Yeah, all right. So that was, that was that for this"}, {"start": 1800.28, "end": 1813.24, "text": " paper. I hope you enjoyed this and see you."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=hkw-WDBipgo | Talking to companies at ICML19 | A short rant on sponsor companies at ICML and how to talk to them. | Alright, I quickly want to talk about the interaction with company reps at these conferences, because to me it's still a bit of a mystery, it's not really clear what to do. There are very different kinds of companies at these conferences. Some companies, I feel, are there to basically show off their technology, kind of wanting you to use it. One example is Graphcore, the kind of new kid on the block for AI hardware, in that they claim to have a chip specifically designed for the types of operations that machine learning applications do, so even more specialized than a GPU. And they also claim they are faster, for an equivalent amount of money, than an Nvidia GPU, like a classic GPU, so basically you get much more bang for the buck. For now they just offer a cloud solution, I believe, and they're going to sell their cards through Dell. The way it works is they have kind of a low-level compiler that will compile your model to these cards, and for now you can interact with it through C++, and then TensorFlow support will come later, something like this. The thing about their card is that they have an extremely large memory right next to the compute units; this would be kind of your traditional level-one cache. So that means you technically get much faster access to your local variables. But then they don't have any kind of RAM, which means their entire card only has something like 300 megabytes of memory. But they claim they can just distribute: if you have a large model, you can distribute it over many cards, and then you get basically the speed-up of the cards without having to sacrifice model size. Another company that shows off really cool technology is a company that does LiDAR. I forget the name right now, but I'll try to look it up. They do a LiDAR sensor that is super tiny and costs a fraction of a traditional LiDAR sensor; I think they said it costs about $12,000. And it's really tiny, and it has a couple of advantages compared to traditional sensors: as far as I understand, their lasers are mounted on the same chip and always point in the same direction, which reduces a lot of inaccuracies. I guess people would be interested in that for self-driving cars and so on. And these are kind of the hardware demonstrations that I've seen. Then there are other things, like there is a wellness center where you can get a massage, which is sponsored by the big companies, which is pretty nice of them. Probably too much for me, though; I don't like these kinds of things too much, maybe I'm just socially too awkward. Yeah, for some companies, I feel that they're just there to recruit, and they don't really want to talk about what they do too much. An indication of this would be a company where basically all of the reps at the booth are recruiters, so non-technical recruiters that basically just tell you what you can do as a career and not really what the company does as a whole. I never really know what to talk about then, because, I don't know, I feel like most people are drawn towards interesting work. If that comes with good working conditions, then that's a plus, but I don't feel that for many people that is the most important thing. Though I could be wrong.
Probably it's good that for some people it is because otherwise everyone would take my jobs. The ones that I like. Yeah, so these companies will usually, if there is an engineer, they will not talk about too much what they do. Like, oh, it's company secret and so on. So the funniest one was actually the NSA. Talking to the NSA was kind of painful because you kind of ask them, so what do you do? And they're like, yeah, machine learning. Because what I want to know as a researcher is like, is there anything I could do there that I couldn't do anywhere else. Is there any unique problems that the NSA faces that actually demand new research, like demand new machine learning methods or some kind of change? So I ask this and they're like, yes, there are problems like this. And you ask like, which problems? And they're like, yeah, there are problems we can't tell you. And everything's basically whatever. So I made it a game till I asked them more specific questions and watched them like, oh, this is classified. So yeah, if you're here, definitely check them out. It's a fun. It's just fun to talk to them. Yeah, the, I feel to most, most companies, they're really interesting. I don't know more than half of them. So just going up, ask them what they do. Kind of just get an overview of the landscape of what's needed currently in machine learning research. I think that's really useful. Because as an academic, I tend to be very disconnected from the, from the industry side of things and from what people actually need or want in practice. So talking to all these companies is really helpful to get an overview of that. Yeah, so, but if, you know, if you know a better way, I know people, some people are much more successful than me talking to companies at conferences. I'm definitely not the best at this. And, yeah, if you have a better strategy, let me know. So I'm pretty happy so far. All right. That was that. See ya. 
| [{"start": 0.0, "end": 13.0, "text": " Alright, I quickly want to talk about the interaction with corporation company reps at these conferences."}, {"start": 13.0, "end": 20.0, "text": " Because to me it's still a bit of a secret or a bit of a not really clear of what to do."}, {"start": 20.0, "end": 25.0, "text": " There's very different kinds of companies at these conferences."}, {"start": 25.0, "end": 35.0, "text": " So some companies I feel are there to basically show off their technology kind of wanting to use it."}, {"start": 35.0, "end": 39.0, "text": " One example is, for example, Graphcore."}, {"start": 39.0, "end": 54.0, "text": " The kind of new kid on the block for AI hardware in that they claim to have a chip specifically designed for the types of operations that machine learning applications do."}, {"start": 54.0, "end": 57.0, "text": " So even more specialized than a GPU."}, {"start": 57.0, "end": 69.0, "text": " And also they claim they are faster for equivalent kind of money spending than an Nvidia GPU like a classic GPU."}, {"start": 69.0, "end": 73.0, "text": " So basically you get much more bang for the buck."}, {"start": 73.0, "end": 77.0, "text": " Now for now they just offer a cloud solution, I believe."}, {"start": 77.0, "end": 82.0, "text": " And they're going to sell their cards through Dell."}, {"start": 82.0, "end": 92.0, "text": " The way it works is they have kind of a low level compiler that will compile your model to these cards."}, {"start": 92.0, "end": 96.0, "text": " And for now you can interact with it through C++."}, {"start": 96.0, "end": 100.0, "text": " And then TensorFlow will come later something like this."}, {"start": 100.0, "end": 116.0, "text": " The thing about their card is that they have an extremely large memory right next to the compute units. This would be kind of your traditional level one cache."}, {"start": 116.0, "end": 134.0, "text": " Yeah, so that means that you get much faster access technically to your local variables. But then they don't have any kind of RAM, which means their entire card only has somewhat like 300 megabytes of memory."}, {"start": 134.0, "end": 152.0, "text": " But they claim they can just basically distribute. If you have a large model, you can distribute that over many cards. And then you'll get a good basically the speed up of the cards without having to sacrifice a model size."}, {"start": 152.0, "end": 161.0, "text": " Another company that shows off really cool technology is a company that does light are."}, {"start": 161.0, "end": 179.0, "text": " And I forget the name right now, but when I try to look it up, they so they do a light are sensor basically that is super tiny and costs a fraction of like a traditional light are sensor."}, {"start": 179.0, "end": 197.0, "text": " So I think they said there's cost about $12,000. And it's really tiny. And as a couple of advantages compared to traditional sensors as far as I understand their lasers are mounted on the same chip."}, {"start": 197.0, "end": 211.0, "text": " They always point in the same direction, which reduces a lot of inaccuracies. I guess people would be interested in that for self-driving cars and so on."}, {"start": 211.0, "end": 229.0, "text": " And these are kind of the hardware demonstrations that I've seen. 
Then there's other things like there is a wellness center where you can get a like a massage, which is sponsored by the big companies, which is pretty nice with them a lot."}, {"start": 229.0, "end": 251.0, "text": " Probably too much. I don't like these kinds of things too much. Maybe I'm just socially too awkward. Yeah, for some companies, I feel that they're just there to recruit and they don't really want to want to talk about what they do too much."}, {"start": 251.0, "end": 275.0, "text": " So this would be an indication of this would be a company where basically all of the reps at the booth are recruiters. So non-technical recruiters that basically just kind of tell you what you can do as a career and not really what the company does as a whole."}, {"start": 275.0, "end": 296.0, "text": " I never really know what to talk about then because I don't know. I feel like most people are interested in drawn towards interesting work. If that comes with good working conditions, then that's a plus, but I don't feel for many people that that is the most important thing."}, {"start": 296.0, "end": 307.0, "text": " Though I could be wrong. Probably it's good that for some people it is because otherwise everyone would take my jobs. The ones that I like."}, {"start": 307.0, "end": 320.0, "text": " Yeah, so these companies will usually, if there is an engineer, they will not talk about too much what they do. Like, oh, it's company secret and so on. So the funniest one was actually the NSA."}, {"start": 320.0, "end": 331.0, "text": " Talking to the NSA was kind of painful because you kind of ask them, so what do you do? And they're like, yeah, machine learning."}, {"start": 331.0, "end": 340.0, "text": " Because what I want to know as a researcher is like, is there anything I could do there that I couldn't do anywhere else."}, {"start": 340.0, "end": 354.0, "text": " Is there any unique problems that the NSA faces that actually demand new research, like demand new machine learning methods or some kind of change?"}, {"start": 354.0, "end": 365.0, "text": " So I ask this and they're like, yes, there are problems like this. And you ask like, which problems? And they're like, yeah, there are problems we can't tell you."}, {"start": 365.0, "end": 375.0, "text": " And everything's basically whatever. So I made it a game till I asked them more specific questions and watched them like, oh, this is classified."}, {"start": 375.0, "end": 384.0, "text": " So yeah, if you're here, definitely check them out. It's a fun. It's just fun to talk to them."}, {"start": 384.0, "end": 396.0, "text": " Yeah, the, I feel to most, most companies, they're really interesting. I don't know more than half of them. So just going up, ask them what they do."}, {"start": 396.0, "end": 404.0, "text": " Kind of just get an overview of the landscape of what's needed currently in machine learning research. I think that's really useful."}, {"start": 404.0, "end": 417.0, "text": " Because as an academic, I tend to be very disconnected from the, from the industry side of things and from what people actually need or want in practice."}, {"start": 417.0, "end": 422.0, "text": " So talking to all these companies is really helpful to get an overview of that."}, {"start": 422.0, "end": 435.0, "text": " Yeah, so, but if, you know, if you know a better way, I know people, some people are much more successful than me talking to companies at conferences. 
I'm definitely not the best at this."}, {"start": 435.0, "end": 455.0, "text": " And, yeah, if you have a better strategy, let me know. So I'm pretty happy so far. All right. That was that. See ya."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=TFiZYA_JfJs | Population-Based Search and Open-Ended Algorithms | Comments on the ICML2019 tutorial on population-based search and open-ended learning.
Talk: https://www.facebook.com/icml.imls/videos/481758745967365/
Slides: http://www.cs.uwyo.edu/~jeffclune/share/2019_06_10_ICML_Tutorial.pdf
Book: https://www.amazon.com/dp/B00X57B4JG/
Event: https://icml.cc/Conferences/2019/ScheduleMultitrack?event=4336 | This is huge. This is just one hall and most people I guess are still waiting for registration. Yeah, but definitely the size of these things is ginormous. The tutorials have just started. I'll be going to find a place. Hi, so I just wanted to give a little update on a tutorial that I liked which was the population-based search and open-ended learning tutorial which happened on Monday here. I was pleasantly surprised by this tutorial because I knew almost nothing about these techniques and they seem really cool. Seems to be a really cool line of research. So started out with what is population-based search and basically in population-based search you don't want to just reach one solution of a problem but you want to kind of maintain a population of solutions that you develop over time. So natural evolution will be an example of that. So this can have many benefits that will kind of that were explored in the tutorial. So the kind of culprit of traditional optimization. Let's say you have a classification problem. You just train one classifier on it is what they call deception meaning that okay better example is an RL problem where you need to reach some goal but since the goal might be very hard to reach your algorithm has basically nothing to go on. So there's no stepping stones. So usually people go and construct a reward function in a very clever way but this can be overcome with these techniques as well. So just imagine like the hardest video game in the Atari suite this would be something like Montezuma's revenge where you first need to collect some key and then go to some door and only then you get a score right. So this reward function is too ambitious and would lead is a problem they call your deception. An observation they make is if you look at nature and natural evolution it is very successful even without a goal. So there's no goal in mind to natural evolution except reproduction creates other reproduction but there's not a goal that's that's simply a kind of underlying mechanic mechanism and if you look at nature all this variety of life was produced without a goal in mind and all this variety of life-filling different niches and basically reproducing at their own pace. So it's a very interesting observation that the goal of this entire field is kind of too model to go into the this direction of what if we don't really go after only the cost function but what if we so in the most extreme case what if we build a search algorithm that only wants to create novel things so where kind of novelty is the only goal what happens then and it turns out some some interesting things can be achieved with that. So they introduced this notion of quality diversity which basically means if if you look at let's again take life on earth you want all the achievable behaviors that there are so maybe one achievable behavior is a very fast life form that can hunt other life forms and another achievable behavior is one that camouflages very well and so on and you want to kind of find for each of these behaviors you want to find the best possible example. So that's the direction that these algorithms going to and an algorithm that they presented was map elites so MAP elites which goes as follows so let's say you have a bunch of dimensions you care about say how fast the creature is how tall it is how well it is camouflaged and so on. 
Now you want to discretize each of those dimensions so this will give you cells basically so each each of these discretization will introduce a grid of cells and what you now do is you want to keep the best examples of each cell so if a if you have a creature that's very fast but not very well camouflaged that's some cell you look at how well it's doing at the goal that you have in mind and you want to keep the best one of those you have a population and whichever ones are in that cell you keep the best and then you go ahead and you kind of change them you could do this via evolutionary process like you can mutate them or it could do it could be via gradient kind of gradient descent something so but you mutate them and they will I guess they will probably end up in a different cell so you go look at that cell are these new ones better than the ones that you remembered from that old cell and if so replace them kind of you want to yeah for each cell keep the best one and then kind of start continue developing from those sort of like dyke straws shortest path algorithm and this will so what it will return is like an entire landscape of possible behaviors and for each behavior will give you the best the best result now it doesn't mean they all do equally some will be better some cells will be not as good in your in with regards to your cost function but it will give you an entire landscape and you can could see then that there are many kind of modes in this landscape as I said some creatures are very fast hunters some camouflage very well so but then they're kind of slower so you be able to see these modes in that so that's I found this pretty pretty interesting and very kind of opens the door to a lot different applications so a principle they employ is what is called goal switching namely that means if like a line of development can benefit from inventions of another line so let's say the the very fast hunters they they're good at that but then maybe they don't reach quite optimal performance but then another line develops somewhere else and these are camouflage like the camouflage life forms develop so they invent kind of camouflage now because these the way this mutation and so on is you kind of keep the camouflage ones around and the hunters and now the camouflage can kind of jump over to the hunters it's very difficult to explain like this but they call this goal switching and what it means is that yeah the hunters can now adopt a little bit of camouflage through let's say mutating one of the camouflage ones into the hunters or vice versa and then can kind of benefit from that invention over there so a good example of that they mentioned is that in order to discover the microwave you first had to work on radar technology which had nothing to do with microwaves but because of the inventions made in radar technology you could then invent the microwave easily so kind of jumped over into the space of ovens basically before you all you had to make food warm was just put it in an oven and heated up now yet the microwave so that kind of these algorithms capture the spirit of this a book they that the the people who gave the tutorial wrote is why greatness cannot be planned I'll definitely get that and I can't recommend it since I haven't read it yet but I'm going to get and read it should be fairly interesting so they give then a number they give a number of examples of this for example robots that can recover from damage because so they had a robot with six legs they trained to move now they 
disabled one leg usually you have one solution like you trained your neural network I don't think it was even a neural network or you trained you like your system to move this robot as efficiently as possible and now because you only have one solution one legs broken it doesn't work anymore but since you have the entire landscape of solutions you can easily kind of jump to other not as good solutions if you have all legs but you can jump to other solutions in the solution space and try them out which ones do still work if I only now have five legs since you have the entire landscape you're very well able to do that so that's pretty cool another algorithm they planned it was go explore which is is an algorithm that kind of solved these really hard Atari games while while back and what they do in specific is they kind of have an archive of states that they have reached in the past so it's a video game and you do some things and then you are in certain states so it's an archive of states and you just you pick one of that right you pick like okay this state means I'm like my little person I control is somewhere over there and then you just explore from it right you do a population based you just kind of go around from it and so on and then you look at the state you end up in and if the state you end up in is a known state like you've been there before so it's also an your archive then you compare the two did you get faster to that state via the new route or did you get faster to that state via the route that was already in your archive and if you're faster in that state via the new route you will you replace the archived one with the new one so this again is kind of like a dike stress shortest path algorithm extrapolated to this to this kind of domain where you have to explore you don't actually have a graph so I think it's pretty cool it's all kind of the same principle but it can employ this goal switching thing right so you go to a certain state but then all of a sudden because you explore something else you find a much quicker way to that state which you never intended but it happens so this yeah this is a basic principle that kind of if you explore a lot then good things might happen so so kind of a serendipity discovery mechanism and you could use those good things incorporate them into the things that already work the last topic they covered was open-ended search so distinction from what they've already discussed two open-ended is now they give the example again life on earth if you consider it it's a single run of an algorithm it's not that for every life form a different optimization was started and kind of started and and finished optimized for a certain thing it's all one single run of the same algorithm and it doesn't really have a goal in mind so open-ended algorithms are like that they kind of define interesting notion is it still interesting if we were to just let it run for a billion years like would it still be interesting if yes consider it an open-ended algorithm which I find a really good kind of definition so the the fundamental property that open-ended algorithms have and research in this has defined is that constantly not only is the population shifting but also the environment is shifting so there's kind of a never static situation the environment's always shifting that also means there's always new opportunities opening up for kind of new in on life on earth for new creatures to evolve to kind of fill the niches that open up and the the research community around this 
the open-ended search open and learning community is considering exactly those types of environments like how can they how can they even describe those manufacture those and then learning those so pretty cool cool experiment they've shown was the pick reader experiment or basically it's human in the loop so they give human humans could cooperate so as a human you go to a website you you pick one picture and these pictures are procedurally generated so they start out with very simple pattern and you just have the opportunity to kind of you pick one and it gives you a bunch of random perturbations of the procedurally generated image and you pick the ones that you like and then you continue exploring from there and if you're happy you can just save that to the database and someone else can look through the database and then pick yours for example to continue and the things that the humans came up with or the result of that was extremely interesting so so not only could you perturb but you could also kind of mix pictures as far as I remember not sure anymore but yeah so but the things they end up with is yeah you could breed pictures right you could you could kind of also put pictures together so the procedural generation of them and what you end up with are remarkable remarkably interesting things and the point they made is it's really only from very few iterations these are like tens or hundreds of iterations of development not like a million like we're used to and there's real tree of phologies that emerge and the crucial lesson they say is people only find when they are not looking so if you had a certain goal in mind you would never be able to you know change the pictures in the way that this goal would appear but if you have no goal in mind you might discover all kinds of interesting things yeah so that that is a kind of all I'm going to say of this they discussed many more things but I think these are the main takeaways so population population based search is interesting because it can kind of overcome the problems that if you only had one optimizer one optimization run of one algorithm and if you employ quality diversity in the algorithm map elites this this enables this kind of goal switching gives you back an entire landscape of of the of learned actors or systems that for each one you know it's kind of the best performing one in that particular constraint of the of the dimensions you care about and yeah open-ended algorithms open-ended search is definitely a cool research direction and I encourage you to check it out all right that was it so far thanks for listening bye | [{"start": 0.0, "end": 7.32, "text": " This is huge. This is just one hall and most people I guess are still waiting for"}, {"start": 7.32, "end": 14.94, "text": " registration. Yeah, but definitely the size of these things is ginormous. The"}, {"start": 14.94, "end": 20.02, "text": " tutorials have just started. I'll be going to find a place."}, {"start": 20.02, "end": 26.8, "text": " Hi, so I just wanted to give a little update on a tutorial that I liked which"}, {"start": 26.8, "end": 31.34, "text": " was the population-based search and open-ended learning tutorial which"}, {"start": 31.34, "end": 37.56, "text": " happened on Monday here. I was pleasantly surprised by this tutorial because I"}, {"start": 37.56, "end": 42.28, "text": " knew almost nothing about these techniques and they seem really cool. Seems to be"}, {"start": 42.28, "end": 47.2, "text": " a really cool line of research. 
So started out with what is population-based"}, {"start": 47.2, "end": 51.760000000000005, "text": " search and basically in population-based search you don't want to just"}, {"start": 51.76, "end": 57.04, "text": " reach one solution of a problem but you want to kind of maintain a population"}, {"start": 57.04, "end": 64.12, "text": " of solutions that you develop over time. So natural evolution will be an"}, {"start": 64.12, "end": 71.6, "text": " example of that. So this can have many benefits that will kind of that were"}, {"start": 71.6, "end": 78.52, "text": " explored in the tutorial. So the kind of culprit of traditional optimization."}, {"start": 78.52, "end": 83.72, "text": " Let's say you have a classification problem. You just train one classifier on"}, {"start": 83.72, "end": 90.64, "text": " it is what they call deception meaning that okay better example is an RL"}, {"start": 90.64, "end": 98.36, "text": " problem where you need to reach some goal but since the goal might be very hard"}, {"start": 98.36, "end": 102.92, "text": " to reach your algorithm has basically nothing to go on. So there's no stepping"}, {"start": 102.92, "end": 107.6, "text": " stones. So usually people go and construct a reward function in a very clever"}, {"start": 107.6, "end": 114.83999999999999, "text": " way but this can be overcome with these techniques as well. So just imagine"}, {"start": 114.83999999999999, "end": 120.08, "text": " like the hardest video game in the Atari suite this would be something like"}, {"start": 120.08, "end": 124.39999999999999, "text": " Montezuma's revenge where you first need to collect some key and then go to"}, {"start": 124.39999999999999, "end": 129.12, "text": " some door and only then you get a score right. So this reward function is too"}, {"start": 129.12, "end": 136.24, "text": " ambitious and would lead is a problem they call your deception. An observation"}, {"start": 136.24, "end": 142.28, "text": " they make is if you look at nature and natural evolution it is very successful"}, {"start": 142.28, "end": 147.64000000000001, "text": " even without a goal. So there's no goal in mind to natural evolution except"}, {"start": 147.64000000000001, "end": 154.8, "text": " reproduction creates other reproduction but there's not a goal that's that's"}, {"start": 154.8, "end": 161.8, "text": " simply a kind of underlying mechanic mechanism and if you look at nature all"}, {"start": 161.8, "end": 166.44, "text": " this variety of life was produced without a goal in mind and all this variety of"}, {"start": 166.44, "end": 172.28, "text": " life-filling different niches and basically reproducing at their own pace."}, {"start": 172.28, "end": 178.16000000000003, "text": " So it's a very interesting observation that the goal of this entire field is"}, {"start": 178.16000000000003, "end": 183.60000000000002, "text": " kind of too model to go into the this direction of what if we don't really go"}, {"start": 183.60000000000002, "end": 190.16000000000003, "text": " after only the cost function but what if we so in the most extreme case what"}, {"start": 190.16, "end": 196.32, "text": " if we build a search algorithm that only wants to create novel things so"}, {"start": 196.32, "end": 204.12, "text": " where kind of novelty is the only goal what happens then and it turns out"}, {"start": 204.12, "end": 207.8, "text": " some some interesting things can be achieved with that. 
So they introduced this"}, {"start": 207.8, "end": 215.92, "text": " notion of quality diversity which basically means if if you look at let's"}, {"start": 215.92, "end": 222.51999999999998, "text": " again take life on earth you want all the achievable behaviors that there are"}, {"start": 222.51999999999998, "end": 229.67999999999998, "text": " so maybe one achievable behavior is a very fast life form that can hunt other"}, {"start": 229.67999999999998, "end": 234.56, "text": " life forms and another achievable behavior is one that camouflages very well"}, {"start": 234.56, "end": 239.92, "text": " and so on and you want to kind of find for each of these behaviors you want to"}, {"start": 239.92, "end": 245.67999999999998, "text": " find the best possible example. So that's the direction that these algorithms"}, {"start": 245.68, "end": 253.84, "text": " going to and an algorithm that they presented was map elites so MAP elites"}, {"start": 253.84, "end": 260.24, "text": " which goes as follows so let's say you have a bunch of dimensions you care"}, {"start": 260.24, "end": 265.72, "text": " about say how fast the creature is how tall it is how well it is camouflaged"}, {"start": 265.72, "end": 270.72, "text": " and so on. Now you want to discretize each of those dimensions so this will"}, {"start": 270.72, "end": 278.44000000000005, "text": " give you cells basically so each each of these discretization will introduce a"}, {"start": 278.44000000000005, "end": 285.08000000000004, "text": " grid of cells and what you now do is you want to keep the best examples of each"}, {"start": 285.08000000000004, "end": 290.20000000000005, "text": " cell so if a if you have a creature that's very fast but not very well camouflaged"}, {"start": 290.20000000000005, "end": 295.64000000000004, "text": " that's some cell you look at how well it's doing at the goal that you have in"}, {"start": 295.64, "end": 301.68, "text": " mind and you want to keep the best one of those you have a population and"}, {"start": 301.68, "end": 306.88, "text": " whichever ones are in that cell you keep the best and then you go ahead and"}, {"start": 306.88, "end": 311.2, "text": " you kind of change them you could do this via evolutionary process like you"}, {"start": 311.2, "end": 316.32, "text": " can mutate them or it could do it could be via gradient kind of gradient descent"}, {"start": 316.32, "end": 321.2, "text": " something so but you mutate them and they will I guess they will probably end up"}, {"start": 321.2, "end": 326.32, "text": " in a different cell so you go look at that cell are these new ones better than"}, {"start": 326.32, "end": 331.47999999999996, "text": " the ones that you remembered from that old cell and if so replace them kind of"}, {"start": 331.47999999999996, "end": 336.08, "text": " you want to yeah for each cell keep the best one and then kind of start"}, {"start": 336.08, "end": 341.68, "text": " continue developing from those sort of like dyke straws shortest path algorithm"}, {"start": 341.68, "end": 349.52, "text": " and this will so what it will return is like an entire landscape of possible"}, {"start": 349.52, "end": 356.47999999999996, "text": " behaviors and for each behavior will give you the best the best result now it"}, {"start": 356.47999999999996, "end": 361.52, "text": " doesn't mean they all do equally some will be better some cells will be not as"}, {"start": 361.52, "end": 366.32, "text": " good in your in with regards to your cost function but it will give you an"}, 
{"start": 366.32, "end": 371.56, "text": " entire landscape and you can could see then that there are many kind of modes in"}, {"start": 371.56, "end": 376.71999999999997, "text": " this landscape as I said some creatures are very fast hunters some camouflage"}, {"start": 376.72, "end": 381.92, "text": " very well so but then they're kind of slower so you be able to see these"}, {"start": 381.92, "end": 389.12, "text": " modes in that so that's I found this pretty pretty interesting and very kind"}, {"start": 389.12, "end": 394.44000000000005, "text": " of opens the door to a lot different applications so a principle they"}, {"start": 394.44000000000005, "end": 401.36, "text": " employ is what is called goal switching namely that means if like a line of"}, {"start": 401.36, "end": 407.84000000000003, "text": " development can benefit from inventions of another line so let's say the the"}, {"start": 407.84000000000003, "end": 416.44, "text": " very fast hunters they they're good at that but then maybe they don't reach"}, {"start": 416.44, "end": 422.0, "text": " quite optimal performance but then another line develops somewhere else and"}, {"start": 422.0, "end": 427.36, "text": " these are camouflage like the camouflage life forms develop so they invent"}, {"start": 427.36, "end": 433.64, "text": " kind of camouflage now because these the way this mutation and so on is you"}, {"start": 433.64, "end": 439.12, "text": " kind of keep the camouflage ones around and the hunters and now the camouflage"}, {"start": 439.12, "end": 445.08000000000004, "text": " can kind of jump over to the hunters it's very difficult to explain like this"}, {"start": 445.08000000000004, "end": 451.36, "text": " but they call this goal switching and what it means is that yeah the hunters"}, {"start": 451.36, "end": 457.52000000000004, "text": " can now adopt a little bit of camouflage through let's say mutating one of the"}, {"start": 457.52000000000004, "end": 463.12, "text": " camouflage ones into the hunters or vice versa and then can kind of benefit"}, {"start": 463.12, "end": 469.8, "text": " from that invention over there so a good example of that they mentioned is that"}, {"start": 469.8, "end": 476.12, "text": " in order to discover the microwave you first had to work on radar technology"}, {"start": 476.12, "end": 480.16, "text": " which had nothing to do with microwaves but because of the inventions made in"}, {"start": 480.16, "end": 486.04, "text": " radar technology you could then invent the microwave easily so kind of jumped"}, {"start": 486.04, "end": 490.84000000000003, "text": " over into the space of ovens basically before you all you had to make food"}, {"start": 490.84000000000003, "end": 496.48, "text": " warm was just put it in an oven and heated up now yet the microwave so that kind"}, {"start": 496.48, "end": 504.64000000000004, "text": " of these algorithms capture the spirit of this a book they that the the people"}, {"start": 504.64000000000004, "end": 509.04, "text": " who gave the tutorial wrote is why greatness cannot be planned I'll definitely"}, {"start": 509.04, "end": 514.64, "text": " get that and I can't recommend it since I haven't read it yet but I'm going to"}, {"start": 514.64, "end": 521.72, "text": " get and read it should be fairly interesting so they give then a number they"}, {"start": 521.72, "end": 526.84, "text": " give a number of examples of this for example robots that can recover from"}, {"start": 526.84, "end": 532.6800000000001, "text": " damage because so they 
had a robot with six legs they trained to move now"}, {"start": 532.6800000000001, "end": 538.8000000000001, "text": " they disabled one leg usually you have one solution like you trained your"}, {"start": 538.8, "end": 542.64, "text": " neural network I don't think it was even a neural network or you trained you"}, {"start": 542.64, "end": 547.8, "text": " like your system to move this robot as efficiently as possible and now because"}, {"start": 547.8, "end": 553.0, "text": " you only have one solution one legs broken it doesn't work anymore but since"}, {"start": 553.0, "end": 558.12, "text": " you have the entire landscape of solutions you can easily kind of jump to"}, {"start": 558.12, "end": 563.3599999999999, "text": " other not as good solutions if you have all legs but you can jump to other"}, {"start": 563.36, "end": 570.52, "text": " solutions in the solution space and try them out which ones do still work if I"}, {"start": 570.52, "end": 575.12, "text": " only now have five legs since you have the entire landscape you're very well"}, {"start": 575.12, "end": 581.44, "text": " able to do that so that's pretty cool another algorithm they planned it was"}, {"start": 581.44, "end": 588.5600000000001, "text": " go explore which is is an algorithm that kind of solved these really hard Atari"}, {"start": 588.56, "end": 595.88, "text": " games while while back and what they do in specific is they kind of have an"}, {"start": 595.88, "end": 601.4799999999999, "text": " archive of states that they have reached in the past so it's a video game and"}, {"start": 601.4799999999999, "end": 607.4799999999999, "text": " you do some things and then you are in certain states so it's an archive of"}, {"start": 607.4799999999999, "end": 613.52, "text": " states and you just you pick one of that right you pick like okay this state"}, {"start": 613.52, "end": 619.64, "text": " means I'm like my little person I control is somewhere over there and then you"}, {"start": 619.64, "end": 624.6, "text": " just explore from it right you do a population based you just kind of go"}, {"start": 624.6, "end": 630.64, "text": " around from it and so on and then you look at the state you end up in and if the"}, {"start": 630.64, "end": 636.84, "text": " state you end up in is a known state like you've been there before so it's also"}, {"start": 636.84, "end": 642.4399999999999, "text": " an your archive then you compare the two did you get faster to that state via"}, {"start": 642.44, "end": 647.8000000000001, "text": " the new route or did you get faster to that state via the route that was"}, {"start": 647.8000000000001, "end": 651.6, "text": " already in your archive and if you're faster in that state via the new route"}, {"start": 651.6, "end": 658.36, "text": " you will you replace the archived one with the new one so this again is kind of"}, {"start": 658.36, "end": 664.6, "text": " like a dike stress shortest path algorithm extrapolated to this to this kind of"}, {"start": 664.6, "end": 670.48, "text": " domain where you have to explore you don't actually have a graph so I think"}, {"start": 670.48, "end": 676.9200000000001, "text": " it's pretty cool it's all kind of the same principle but it can employ this"}, {"start": 676.9200000000001, "end": 681.88, "text": " goal switching thing right so you go to a certain state but then all of a sudden"}, {"start": 681.88, "end": 685.8000000000001, "text": " because you explore something else you find a much quicker way to that state"}, {"start": 685.8000000000001, 
"end": 693.5600000000001, "text": " which you never intended but it happens so this yeah this is a basic principle"}, {"start": 693.56, "end": 700.92, "text": " that kind of if you explore a lot then good things might happen so so kind of a"}, {"start": 700.92, "end": 706.3599999999999, "text": " serendipity discovery mechanism and you could use those good things"}, {"start": 706.3599999999999, "end": 713.4799999999999, "text": " incorporate them into the things that already work the last topic they covered"}, {"start": 713.4799999999999, "end": 720.28, "text": " was open-ended search so distinction from what they've already discussed two"}, {"start": 720.28, "end": 726.3199999999999, "text": " open-ended is now they give the example again life on earth if you consider it"}, {"start": 726.3199999999999, "end": 731.52, "text": " it's a single run of an algorithm it's not that for every life form a"}, {"start": 731.52, "end": 738.0, "text": " different optimization was started and kind of started and and finished"}, {"start": 738.0, "end": 744.0, "text": " optimized for a certain thing it's all one single run of the same algorithm and"}, {"start": 744.0, "end": 749.8, "text": " it doesn't really have a goal in mind so open-ended algorithms are like that"}, {"start": 749.8, "end": 755.0799999999999, "text": " they kind of define interesting notion is it still interesting if we were to"}, {"start": 755.0799999999999, "end": 759.0799999999999, "text": " just let it run for a billion years like would it still be interesting if yes"}, {"start": 759.0799999999999, "end": 764.3599999999999, "text": " consider it an open-ended algorithm which I find a really good kind of"}, {"start": 764.3599999999999, "end": 770.0799999999999, "text": " definition so the the fundamental property that open-ended algorithms have"}, {"start": 770.0799999999999, "end": 776.4, "text": " and research in this has defined is that constantly not only is the"}, {"start": 776.4, "end": 783.12, "text": " population shifting but also the environment is shifting so there's kind of a"}, {"start": 783.12, "end": 788.9599999999999, "text": " never static situation the environment's always shifting that also means"}, {"start": 788.9599999999999, "end": 795.68, "text": " there's always new opportunities opening up for kind of new in on life on"}, {"start": 795.68, "end": 802.8, "text": " earth for new creatures to evolve to kind of fill the niches that open up and the"}, {"start": 802.8, "end": 809.1999999999999, "text": " the research community around this the open-ended search open and learning"}, {"start": 809.1999999999999, "end": 815.1999999999999, "text": " community is considering exactly those types of environments like how can"}, {"start": 815.1999999999999, "end": 819.4399999999999, "text": " they how can they even describe those manufacture those and then learning"}, {"start": 819.4399999999999, "end": 824.76, "text": " those so pretty cool cool experiment they've shown was the pick reader experiment"}, {"start": 824.76, "end": 832.0, "text": " or basically it's human in the loop so they give human humans could cooperate"}, {"start": 832.0, "end": 837.56, "text": " so as a human you go to a website you you pick one picture and these pictures"}, {"start": 837.56, "end": 842.88, "text": " are procedurally generated so they start out with very simple pattern and you"}, {"start": 842.88, "end": 846.96, "text": " just have the opportunity to kind of you pick one and it gives you a bunch of"}, {"start": 846.96, "end": 852.08, 
"text": " random perturbations of the procedurally generated image and you pick the ones"}, {"start": 852.08, "end": 856.72, "text": " that you like and then you continue exploring from there and if you're happy"}, {"start": 856.72, "end": 860.56, "text": " you can just save that to the database and someone else can look through the"}, {"start": 860.56, "end": 866.5999999999999, "text": " database and then pick yours for example to continue and the things that the"}, {"start": 866.5999999999999, "end": 873.7199999999999, "text": " humans came up with or the result of that was extremely interesting so so"}, {"start": 873.7199999999999, "end": 878.4799999999999, "text": " not only could you perturb but you could also kind of mix pictures as far as I"}, {"start": 878.4799999999999, "end": 885.5999999999999, "text": " remember not sure anymore but yeah so but the things they end up with is yeah"}, {"start": 885.6, "end": 890.4, "text": " you could breed pictures right you could you could kind of also put pictures"}, {"start": 890.4, "end": 895.64, "text": " together so the procedural generation of them and what you end up with are"}, {"start": 895.64, "end": 903.24, "text": " remarkable remarkably interesting things and the point they made is it's really"}, {"start": 903.24, "end": 907.84, "text": " only from very few iterations these are like tens or hundreds of iterations of"}, {"start": 907.84, "end": 913.0400000000001, "text": " development not like a million like we're used to and there's real tree of"}, {"start": 913.04, "end": 920.76, "text": " phologies that emerge and the crucial lesson they say is people only find when"}, {"start": 920.76, "end": 925.76, "text": " they are not looking so if you had a certain goal in mind you would never be"}, {"start": 925.76, "end": 930.88, "text": " able to you know change the pictures in the way that this goal would appear"}, {"start": 930.88, "end": 935.36, "text": " but if you have no goal in mind you might discover all kinds of interesting"}, {"start": 935.36, "end": 944.52, "text": " things yeah so that that is a kind of all I'm going to say of this they"}, {"start": 944.52, "end": 948.4, "text": " discussed many more things but I think these are the main takeaways so"}, {"start": 948.4, "end": 953.48, "text": " population population based search is interesting because it can kind of"}, {"start": 953.48, "end": 959.52, "text": " overcome the problems that if you only had one optimizer one optimization run of"}, {"start": 959.52, "end": 964.88, "text": " one algorithm and if you employ quality diversity in the algorithm map elites"}, {"start": 964.88, "end": 969.04, "text": " this this enables this kind of goal switching gives you back an entire"}, {"start": 969.04, "end": 980.28, "text": " landscape of of the of learned actors or systems that for each one you know"}, {"start": 980.28, "end": 985.84, "text": " it's kind of the best performing one in that particular constraint of the"}, {"start": 985.84, "end": 992.68, "text": " of the dimensions you care about and yeah open-ended algorithms open-ended"}, {"start": 992.68, "end": 998.52, "text": " search is definitely a cool research direction and I encourage you to check"}, {"start": 998.52, "end": 1028.48, "text": " it out all right that was it so far thanks for listening bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=EA96xh9qog0 | I'm at ICML19 :) | Short intro to the International Conference on Machine Learning in Long Beach, CA.
I'll be making some updates from the conference. | Hi there, it's day one of ICML and we'll be attending the conference here and just quickly pre-video to let everyone know I'll be trying to report from here kind of what papers are cool what I liked, what are kind of the trends and so hopefully get this conference out to a broader community so everyone's conglomerating here the line's probably gonna be huge I'm already registered so that's pretty good it's beautiful weather and looking forward to five days of of conference so today's tutorial day and I'll think I'll be attending some some cool tutorials yeah look how pretty it is here nice all right bye everyone see you later | [{"start": 0.0, "end": 10.96, "text": " Hi there, it's day one of ICML and we'll be attending the conference here and"}, {"start": 10.96, "end": 18.16, "text": " just quickly pre-video to let everyone know I'll be trying to report from here"}, {"start": 18.16, "end": 25.16, "text": " kind of what papers are cool what I liked, what are kind of the trends and so"}, {"start": 25.16, "end": 29.84, "text": " hopefully get this conference out to a broader community so everyone's"}, {"start": 29.84, "end": 33.56, "text": " conglomerating here the line's probably gonna be huge I'm already registered"}, {"start": 33.56, "end": 39.519999999999996, "text": " so that's pretty good it's beautiful weather and looking forward to five days"}, {"start": 39.519999999999996, "end": 46.8, "text": " of of conference so today's tutorial day and I'll think I'll be attending some"}, {"start": 46.8, "end": 58.08, "text": " some cool tutorials yeah look how pretty it is here nice all right bye"}, {"start": 58.08, "end": 84.8, "text": " everyone see you later"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=hMO6rbMAPew | Adversarial Examples Are Not Bugs, They Are Features | Abstract:
Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. After capturing these features within a theoretical framework, we establish their widespread existence in standard datasets. Finally, we present a simple setting where we can rigorously tie the phenomena we observe in practice to a misalignment between the (human-specified) notion of robustness and the inherent geometry of the data.
Authors: Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
https://arxiv.org/abs/1905.02175 | Hi there. Today we're looking at adversarial examples aren't bugs. They are features by Andrew Ilias et al. So this paper is pretty interesting as a catchy title and we try to kind of dissect what it says. So first of all, in the abstract, they say adversarial examples have attracted significant attention, but the reasons for their existence and pervasiveness remain unclear. So if you don't know what an adversarial example is, an adversarial example is basically the following. Say you have an image classifier, right? Classifier, boom, neural network, image here and the image is of a, let's say a cat. This is my best attempt at a cat, bang cat. And you feed it through the classifier and the classifier says cat. Now if you perturb this image, if you derive an image from it and you perturb it just very slightly, very subtly. So you introduce some pixels here, there, here, there, right? You change some pixels in a very targeted way. And you feed that new image through here. Then the classifier will say dog or something really, you can make it say anything like airplane or I don't know, sky or whatever you want. These are called adversarial examples. And it's true, their existence and the reasons for their existence and pervasiveness remain unclear. They say we demonstrate that adversarial examples can be directly attributed to the presence of non robust features. So they're basically, their paper is about these non robust features and they they define later what they mean exactly. But here they say features derive from patterns in the data distribution that are highly predictive, yet brittle and incomprehensible to humans. And this is this pretty neat. So the fundamental idea as I understand it and I'm going to take this away right here that if you have images, let's say here of cats and I'm going to draw another one over here. If you have an image, say of cats, there is multiple features in this image. And the feature is something that the classifier can pick up on and kind of learn to this is a horrible cat learn to learn to classify images from. So features that we humans generally use are a cat has ear ear eyes whiskers, right? And the general relationship to each other of these things, this is what constitutes a cat. And that's how we classify it. But also they say there are other features that are also very indicative, right? If you think what differentiates a cat from a dog and a dog here, let's big fluffy ears, also eyes, yeah, not going to go further with the dog too much. The what differentiates cat from a dog and we of course we would say well the head shape is different and the ears are different and the relationship to them to each other are different. But it could also be and this is a simplistic right now, right? But it's also that cats for example have different fur than dogs. And yeah, being overly simplistic here, but bear with me. So let's say in our hypothetical world cats have fur that all goes like this left to right, right? Every hair is basically vertical, sorry horizontal. If you look at it like that and dog fur on the other hand is always like this, right? Is is vertical, right? Talk to bottom. And so the the classifier might just as well pick up on the fur direction in order to classify images, right? Since all cats have that type of fur and all dogs have that other type of fur, the classifier might just as well pick up on that, right? And to us humans, we don't really pay attention to these things because they're minute, right? 
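The transcript describes adversarial examples only in words (a tiny, targeted pixel perturbation that flips the predicted class). For concreteness, here is a minimal sketch of a one-step targeted fast-gradient-sign perturbation; the model, the epsilon budget, the [0, 1] pixel range, and the example shapes are illustrative assumptions, and the paper's own experiments use stronger, iterative (PGD-style) attacks.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, eps=8 / 255):
    """One-step targeted attack: nudge every pixel by at most +-eps in the
    direction that makes the (hypothetical) classifier prefer `target_class`."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), target_class)   # loss w.r.t. the *desired* label
    loss.backward()
    # Step against the gradient to decrease the targeted loss, then clip back
    # to a valid image; the change per pixel is bounded by eps.
    x_adv = (x_adv - eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

# Example usage (assumed shapes: a batch of 32x32 RGB images, e.g. CIFAR-10):
# model = ...                                  # any trained image classifier
# x = torch.rand(1, 3, 32, 32)                 # stand-in for a "cat" image
# y_target = torch.tensor([5])                 # index of the class we want to force
# x_adv = targeted_fgsm(model, x, y_target)    # looks like x, classified as y_target
```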
They don't look at the directions of the individual hairs to classify in an animal to cat or dog. You much rather go for these kind of large features like where the ears, how do they look and so on? But the classifier, there's actually you can make an argument that the classifier would more likely pick up on the fur direction, right? In order to in order to classify, since we're using convolutional neural networks and they're generally neighborhood pixel neighborhood operators, it can much easier pick up on these patterns than it can on the general relationship of the of the large features. So if a classifier now learns that cats always have fur like this and dogs always have fur like that, what we could do is we can go over here to the dog and change its fur, right? Change in the image, change its fur to this direction. Now to us humans, that would still very much look like a dog because the fur direction is almost imperceptible, but to the classifier that has only learned, hey, a cat always has this type of fur and the dog always has that type of fur, that new image would totally look like a cat. Right? So this paper argues exactly that, this paper argues that in the data set, there are features and these are real features like this, this actually could be the case that cats fur is always like that and dogs fur is always like this. It could be the case and the classifier could pick up on this, right? And then the adversarial examples, the reason why they exist is because the classifier has picked up on these imperceptible features and so by changing the features, we can change the classifier's decision without changing the image in a large scale. So they say that they make this hypothesis and they kind of, they say, okay, we established a widespread existence in standard data sets, so they kind of give supporting evidence for their hypothesis and then they say, finally, we present a simple setting, which is a theoretical setting where we can rigorously tie the phenomena we observe to a misalignment between the human specified notion of robustness and the inherent geometry of the data. All right, so it's kind of different pieces of the of this paper and we're going to look at them in succession. So the introduction we largely skip except that their main claim here is specifically, we claim that adversarial of vulnerability is a direct result of our model's sensitivity to well generalizing features in the data. So that's the the core point, I think, is well generalizing features, which is what we mentioned. These are features that actually describe the data well, but, but features that are kind of imperceptibly small to humans or don't fit our notion of robustness. All right, so they go on and they define more clearly what they mean. Here, whenever we talk of a feature, right, remember we had the our classifier here and we input an image and the image, it's called x, right, and that classifier, usually if we look at it closer consists of multiple layers of interconnected neurons, whatever, and the last layer will be an output layer into different classes. Right. And so the features when they say a feature, what they mean specifically is the last here, the last representation before it goes into the classifier. 
So the way you would classify them and here they just establish a two class setting, the way you would establish that is you have feature one, feature two, feature three, and you have a weight vector w one for each feature, w two, w three, you make the inner product and that will give you a y hat basically. If that is high, you say it's class one, if that is low, you say it's class minus one. So the classes here are plus one and minus one just to make things simple. So but you see the features are basically what comes out after these layers, what is then used to make a linear classification. This last thing is basically just a logistic regression. So you can think of the features as the output of the neural network, but before it goes into the classifier. So a feature basically, since then it's linearly classified. If the feature is high, it will give a signal for one class and if a feature is lower, it will give a signal for the other class, depending on of course if this w is negative or positive. Alright. So they say we call a feature row useful. And if this thing holds here, what is this thing? This thing means so the expectation over the dates. So generally in the data set, this must hold y times the feature. So why is the class? Remember, it's plus or minus one. And the feature as we see in some number, y times a feature must be higher than some number. So does it mean when a product is high? It means either both are high or both are low. So they're correlated. That's what that means. So basically, this is says feature f is useful. If whenever it an example x is of class one, if it's class class one, or let's, if it if y is one plus one, then f is high. And whenever y is minus one, then f is low, which means it's high in the negative direction. Alright. So this is our, this is intuitive. Right. If a feature is useful, it means it should say one thing in samples of class one, then it should say another thing in samples of class two. Then I can actually use the feature to make a decision when it's, you know, very correlated with the class. So that, you know, that makes perfect sense. So that's kind of when is a feature useful if it correlates with the class label? Yes. Cool. But the usefulness simply any feature, basically, that a class fair will extract will be useful. That's an assumption we can make otherwise the class if I wouldn't extract it. So the neural network here, that's an assumption will only extract useful features. Right. Because the non useful features, there would simply be no reason for it to extract them because they don't contribute to solving the task because they're not correlated with an output class. Right. So next, they define robust, robustly useful features. So in addition to being useful, they're now also robust. What does it mean? Again, we want a correlation of why and the feature to be higher than some constant, but not only the feature of the image X, but the feature of the image X that has been perturbed by a small perturbation. So and we take the the infinum here over a class of perturbations. Of course, this class of perturbations is exactly the adversarial perturbations. Basically what this means is it says that however we try to perturb X, right, and the infinum here means the minimum correlation. However, we try to make the feature not correlated with Y. However much we try, we can't get it lower than some some gamma, some number, right. We can't we can't get it down. 
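The linear read-out over features and the two definitions paraphrased in the last two paragraphs can be written compactly. This is my rendering from the spoken description (binary labels y ∈ {−1, +1}, feature f, allowed perturbation set Δ(x)); the exact notation in the paper may differ slightly.

```latex
% linear classification over features f_i with weights w_i
\hat{y} \;=\; \operatorname{sgn}\Big(\sum_i w_i \, f_i(x)\Big)

% rho-useful feature: correlated with the label in expectation
\mathbb{E}_{(x,y)\sim\mathcal{D}}\big[\, y \cdot f(x) \,\big] \;\ge\; \rho

% gamma-robustly useful feature: the correlation survives the worst-case allowed perturbation
\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big[\, \inf_{\delta \in \Delta(x)} y \cdot f(x+\delta) \,\Big] \;\ge\; \gamma
```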
So whatever we try to make the feature bad for the classifier, basically, we can't. Right. If this holds for a feature, if this is the case, then we call that feature a robust feature, right. Then that feature is robustly useful. If it correlates, no matter how hard we try to make it not correlate. And of course, a non robust features. So a useful non robust feature is a feature which is useful. You see here is useful, but is not gamma robust feature for any gamma. So it's a feature that is useful like the cat fur, right. So this here, an example of this would be that the cat's eyes and ear position, right. We can't just make a small perturbation for the image and make the ears be somewhere completely else. That's just that would require a large perturbation of the image. So the position of the ears and eyes are pretty robust features. But here the cat's fur, no matter how, no matter how small we make this this gamma, we can always kind of change the fur to make the feature not rogue to make the feature not useful, right. If we can change the cat fur into a dog fur and the dog fur into a cat fur, then the feature will become not useful anymore because we can, no, we can change that arbitrarily for any image and then the classifier will have no clue. It can't be like, well, this fur could be of any of any class, right. So the feature is not useful anymore. So this is a non robust feature. The technique you can say any feature that is useful, but not robust is a non robust feature. All right. So this is kind of the definition of what robust and non robust features are. Yeah. Remember, maybe remember robust features like position of the ears and their shape and non robust features would be which direction are the individual hairs in the fur going, right. And in our world where cat fur is going different ways than dog fur. Yeah. So they now going to experimental evidence for their hypothesis. And he here you have to understand they do two experiments, which give pretty good indication that their hypothesis is actually correct. And what you have to understand before this is two things. First of all, here you basically, you just have to assume that they already, they have some procedure where they can do the following where they can take an image of the training data set and they can decompose it into its robust and non robust features. Right. Don't I mean, don't ask yet how they do this, but they can decompose it into these two parts. Right. So that's assumption one, they have a procedure that can actually do that. And then number two is what they what they do here is basically the general theme of these experiments is they they have a training data set. Right. What's this is original training is they created derived version of it. So let's put a tick here. This is a derived version of the date set. Then they train a regular neural network with that. So what you can do with a neural network if you train one, all right. What you usually do is you feed images X, you feed images in, it gives you some output by hat and you say, well, but I know why is the true label. So I feed an image of a cat that the network says airplane. You say, well, but this should be a cat. So please make this by more to be more to be, um, please make this why hat more be like why. And then you have a loss function. Here you say this is wrong. Please correct this. You back propagate and all the network in here will update to make that a bit more likely. That's how you train usually in a network. 
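The ordinary supervised training step described at the end of this passage is kept completely unchanged in the experiments; only the training data is swapped out. A minimal PyTorch version for reference, where the model and optimizer are placeholders:

```python
import torch.nn.functional as F

def standard_training_step(model, optimizer, x, y):
    """Plain supervised step: the experiments keep exactly this procedure and
    only swap in modified training data (robust-only or non-robust-only versions)."""
    optimizer.zero_grad()
    y_hat = model(x)                      # logits
    loss = F.cross_entropy(y_hat, y)      # "please make y_hat more like y"
    loss.backward()                       # backpropagate
    optimizer.step()                      # update the network
    return loss.item()
```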
Now what you can do is if you want to become robust to adversarial examples, you can do what is called adversarial training, which means that you have the same network here, but of each of the training data points, you create a derived version, an adversarial example to that, right, to this X, you feed the adversarial examples through the network together with the original examples. Then this will give you some why hat, uh, two. And then you say, but this should also be equal to why basically you train the classifier also on adversarial examples, right? Since the hypothesis is if you train on an image data set, then you can teach the classifier about that data set, right? Like you do with the regular data set, say, well, okay, I can now just train on adversarial examples. And my classifier will be able to better classify these correctly, right? This usually works it's called adversarial training and it's been a kind of a standard method to make your classifier robust. They don't do that here. They don't do this. They simply want to say, okay, we now have, we have a regular training procedure, right? Like this, except for what we change is here, the training data set, we change this to, in one case, for example, only robust images. So we've changed all the X to be only robust. And we do the regular training procedure. And then we evaluate that result in classifier here. This thing, we evaluate that, um, how does that behave? So it's kind of a new approach where you modify the, the date, the original data set. So what did they do? First of all, they decompose this training data set into a version that is only robust features, right? We, we, we assume we have such a procedure, we then train a regular neural network on that, right? We train a regular neural network on this, um, on this data set. And what we get is two things. First of all, good standard accuracy. What does good standard accuracy mean? It means that, um, we, we can test it on what's called the unmodified test set. So the, the test set, the original test set of the date set, the test set belonging to this training data set, we can test it on that. And it works just fine, right? So that basically means that the robust features are predictive of the, of the kind of, um, they generalize well. It means that if I train a classifier only on robust features, that can actually classify well to, um, to the, to the test set, right? So that means that's standard accuracy. Standard accuracy is how well do I classify the test set, just an unmodified test set. So they also obtain good robust accuracy, which means that what is robust accuracy, robust accuracy means your accuracy on adversarial examples of the test set. And usually classifiers are vulnerable to this. So I classifiers usually obtain good standard accuracy, but bad robust accuracy. But if I only train my classifier on what they call robust features, then I all of a sudden have retained good standard accuracy, but I also get good robust accuracy, um, which means that it gives pretty good support to their hypothesis that the adversarial examples are abusing the fact that the classifiers learn the non robust features. Since if I don't have any non robust features, it means my classifier can't learn any non robust features, which in turn means my classifier isn't vulnerable to adversarial attacks because they would abuse the fact that the classifier has learned about the non robust features. So that's pretty good, um, evidence for their hypothesis. 
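For contrast, the adversarial-training procedure mentioned at the start of this passage (train on adversarially perturbed versions of each batch alongside the clean ones, which the robust-dataset experiment deliberately does not use) might look roughly like this; the PGD step sizes, iteration count, and the equal weighting of clean and adversarial losses are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps=8 / 255, step=2 / 255, iters=7):
    """Multi-step (PGD-style) untargeted attack inside an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()                 # increase the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)           # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                      # stay a valid image
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One optimization step on clean + adversarial versions of the same batch."""
    x_adv = pgd(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```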
Second thing they do is they now create this on this modified data set where they only have non robust features. Right. So the only thing they have is non robust features. Again, they train a standard neural network. They train just a regular neural network on that. And they also get good standard accuracy. So this means that also the non robust features as we seen like the cats, uh, third direction can lead to you generalize well to the test sets since in the test set also the cats will have that, um, that property. But you get bad robust accuracy. And this, this gives further support to their hypothesis if you train a classifier on only non robust features. And they are features because they generalize well, but they are very vulnerable because they're non robust. Right. So the classifier that has learned about non robust features is vulnerable. They didn't do a third experiment, which I find pretty cool, where they take, they take the training image. And of course it's a non modified training image. So it's robust features will basically say this is a dog. It's non robust features will also say this is a dog because it, you know, it's a, it's a training image of a dog. Um, and what they then do is they derive from this dog and adversarial example towards the cat class. Right. So what does it mean in their hypothesis, if their hypothesis is correct, it now means that the robust features still say it's a dog. We can also see this here, right. The, the kind of big shape of the image still is a dog to us humans. But the non robust features will say it's a cat. Right. This hinges on their hypothesis that adversarial examples actually abuse the non robust features. Right. They create an adversarial example. So if their hypothesis is correct, the non robust features now say that's a cat. Um, so they derive a, an entire data set where they change every image to another image. And they also change the labels accordingly. And then they train again a regular neural network on this. And they look what happens on the unmodified test set. So the unmodified test set will, so imagine if you're the, you're this classifier and what you get is an image X. And it has robust features. That's a dog. And has non robust features say cat and it's label your ask to predict cat. Right. And then we see the next image and the next image X2, the non robust features. Maybe it's derived from some other class. It will say plain. But the, the robust, the non robust features against a cat. Right. And you're asked to predict cat. So basically the constructed data set where the non robust features always agree with, with the label, but the robust features they don't. Um, so naturally what you can expect is the classifier will learn to disregard the robust features because they're no longer useful. Right. But it will actually only will learn to view these features. It's different from before before we only had these features. Now we, these features are still in there. Right. But they're not informative. So the classifier will naturally learn to pick up on the non robust features and classify and classify according to them. So much that if we now test on the test set and we feed in an actual cat. Right. It, of course, it's robust features will say cat and it's non robust features will say cat. And the classifier is able to accurately predict this is a cat. Even though the all the images of cats it has seen during training were actually of basically of non cats of here a dog. 
So this is pretty cool and shows that these non-robust features, the ones adversarial examples abuse (since the data set is created with adversarial examples), are actually predictive and generalize to the test set. That's pretty good evidence for their hypothesis so far. Now the final remaining question is: what is the procedure by which they create a robust and, respectively, a non-robust version of the data set? And here is where we get into the part that I find a bit more questionable. Here you see examples: this is originally an image of a ship from the CIFAR-10 data set, I believe. This is a robust sample, so it contains only robust features of the ship. And this is a ship made with only non-robust features: you see it's actually a moose, but the non-robust features have been changed to say ship. So, how do they construct a robust version of the data set? They have a formal definition, but the way they do it is as follows. Imagine we have a classifier. The classifier, given some input X, outputs features, which here they call G, the representation (it can be somewhat more general than just the features, but in essence G is the features), and that then goes into the final classification layer and produces the labels and so on. Now what if I have another input, say X prime, which I simply initialize with random noise? I feed it through, get G prime, and I try to make G and G prime as close as possible by changing X prime. Basically, I change that image such that the two feature outputs match as closely as possible, and I do this via backpropagation: I compare the representations and backpropagate to X prime, which I can do with gradient descent. What happens is that X prime will come to match X in all the ways that are relevant for the features; basically, I transfer all of the features from X to X prime, but nothing else, since I started from random noise. Now, and this is what they do, what if the classifier is a robust classifier? Remember, we can robustify a classifier by doing adversarial training. If I have such a robust classifier and I input an X, the feature representation it outputs will only contain robust features. So if I take a second image X_R, start it from random noise, and match its representation to the representation of X by changing X_R, I will transfer all of the robust features from X, but nothing else. Given that I start from random noise, and assuming random noise has no features of its own, what I end up with is an image that I know has no non-robust features and only the robust features of X. That's how they derive a robustified version of X.
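Roughly, that robustification step could look like the following sketch, assuming `robust_model.features(x)` returns the penultimate-layer representation of an adversarially trained network; the optimizer, step count and learning rate are my own illustrative choices, not the paper's exact recipe.

```python
import torch

def robustify(robust_model, x, steps=1000, lr=0.1):
    """Transfer only the robust features of x onto an image started from noise."""
    g_target = robust_model.features(x).detach()   # robust representation of x
    x_r = torch.rand_like(x, requires_grad=True)   # start from random noise
    opt = torch.optim.Adam([x_r], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((robust_model.features(x_r) - g_target) ** 2).mean()
        loss.backward()                            # gradient flows back to the pixels of x_r
        opt.step()
        x_r.data.clamp_(0, 1)                      # keep it a valid image
    return x_r.detach()
```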
Second, how do they derive a non-robust version? That's even easier. If I have a regular classifier and I want a non-robust version of X, I take X, feed it in, get G, get some label, and then I simply derive an adversarial example of X, like we did before: adversarial example in, prediction out, and that gives me some Y2 which is different from Y. With that adversarial example I've transferred the non-robust features that lead to class Y2 onto the image, while still maintaining the robust features of the original. If this is too abstract, imagine X is an image of a dog, and I derive from it an adversarial image that now says airplane. The robust features will still be those of the original dog image, but the non-robust features will be of the airplane class. So that's how I derive a non-robust version that has robust features of one class but non-robust features of the other class. That's what you see up here with the moose: it clearly started from the image of a moose and then received non-robust features from the ship class. And that's just your classic adversarial-example procedure. So that's the procedure. And here is my first criticism. Look at the first part, where, in order to determine what the robust features are, they actually need a classifier that's already robust. We've seen before: we have a data set, we robustify the data set into a robust data set, we train a standard neural network, and that gives us good robust accuracy, which is really cool because we don't do anything special during training and still get good robust accuracy. But in order to do this robustification procedure, you actually have to have a robust classifier, an already robustified classifier, which you obtained by adversarially training it. So basically you take the adversarial training procedure, even though the whole point here is supposed to be that you don't do anything different during training, and via training the robust classifier and via changing the data set you get good robust accuracy. To me that is just a reflection of the fact that you obtained the data set using this robust classifier in the first place. Of course, their method gives a hint that this is actually due to things in the data set itself, and that is really important, because it means it's not so much a property of the classifier as a property of the data set. It also explains why adversarial examples transfer between classifiers: if I have two different classifiers that classify the same thing and they're vulnerable to the same adversarial example, that basically means it must be some property of the data set that both of them learn. But to then say "we have a procedure to extract the robust features, and if we only train on the robust features, we become robust", when you obtain the robust features by using a robustified classifier that you have adversarially trained, to me that is kind of backdooring adversarial training into the whole procedure. So that's my first criticism.
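As an aside, the "classic adversarial-example procedure" used for the non-robust version above is essentially a targeted projected-gradient-descent attack; here is a minimal sketch (one possible concrete form of the `targeted_attack` placeholder used earlier), with the L-infinity budget, step size and iteration count chosen purely for illustration.

```python
import torch
import torch.nn.functional as F

def targeted_attack(model, x, target, eps=8 / 255, step=2 / 255, iters=20):
    """Nudge x within an L-infinity ball of radius eps towards class `target`."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the loss of the target class (targeted attack).
        x_adv = x_adv.detach() - step * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                           # keep valid pixel values
    return x_adv.detach()
```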
My second criticism is that, while it's an interesting take, this whole notion of seeing some features as robust and others as non-robust is basically just reframing the problem of adversarial examples in terms of features. It says nothing about why these features are there, except simply postulating that they are. It says nothing about why classifiers pick up on them, or how they do it, or how this is to be mitigated without first having a robustly trained network to extract the robust features. So it basically shoves the problem from the realm of the classifier into the realm of the data set, without saying much that is fundamentally new about these examples. It's just a reframing of the problem, I feel. The experiments are cool, and they do show a lot about these adversarial examples, but it's certainly not an explanation, I find. At least that's my opinion. All right, so down here they construct a kind of simplified, theoretical version of the setting, where they can analyze this. They basically say: this is what happens at the fundamental level. At the fundamental level you have classes, and let's say the classes are distributed like this; these are the examples in the data set. You have a mean and you have some covariance; these are Gaussians. If I have two classes distributed like that and I fit a separator, a linear classifier, it will classify like this; this is the best linear classifier, and we can calculate it exactly. But what happens when I ask for an adversarial example? An adversarial example means that I can shift my example by a little bit but achieve a big change in output. If I have a sample here, I need to go a long way to reach the boundary in one direction, but if I go in another direction, down here, I only need to go a very short way. And adversarial examples, as they're usually specified, say: we want to go a short way, and a short way is characterized by going a short way in any direction, like a small ball around the sample. And you see that with this any-direction property, there are actually directions where the classification boundary is very, very close. So that's what they say: there is a fundamental misalignment between the geometry of the data, which is elongated like this, and the geometry of how we specify adversarial examples, which is equal in each direction, and that is what leads to adversarial vulnerability. And then they say: what if I now robustify, what if I adversarially train my network to be robust? It basically means that I expand my data, because I add adversarial examples from that ball around each sample, so my data distribution will effectively look more like this, my separating hyperplane will change, and the geometry of the adversarial examples will be much more aligned with my separating hyperplane. So this is a toy example of what they say is fundamentally going on.
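For reference, a rough sketch of this toy setting in formulas (the notation is mine and the paper's exact formulation may differ in detail):

```latex
% Two-class Gaussian toy model as described above (notation mine):
\[
  y \sim \mathrm{Uniform}\{-1,+1\}, \qquad
  x \mid y \;\sim\; \mathcal{N}(y\,\mu,\ \Sigma).
\]
% With equal priors, the Bayes-optimal linear classifier is
\[
  \hat{y}(x) \;=\; \operatorname{sign}\!\big(\mu^{\top}\Sigma^{-1}x\big),
\]
% while the adversary may replace x by any x + \delta with \|\delta\|_2 \le \varepsilon.
% If \Sigma is strongly anisotropic, the decision boundary lies very close to
% typical samples along low-variance directions, so a tiny \delta in such a
% direction flips the prediction: the isotropic \varepsilon-ball is misaligned
% with the elongated data geometry.
```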
There's a misalignment between the geometry of the adversarial examples and the inherent geometry of the data. So that's kind of the theoretical analysis they do. And with that, I finish here. And I hope this was clear enough. And goodbye. | [{"start": 0.0, "end": 7.08, "text": " Hi there. Today we're looking at adversarial examples aren't bugs. They are features by Andrew"}, {"start": 7.08, "end": 17.240000000000002, "text": " Ilias et al. So this paper is pretty interesting as a catchy title and we try to kind of dissect"}, {"start": 17.240000000000002, "end": 23.96, "text": " what it says. So first of all, in the abstract, they say adversarial examples have attracted"}, {"start": 23.96, "end": 29.96, "text": " significant attention, but the reasons for their existence and pervasiveness remain unclear."}, {"start": 29.96, "end": 34.28, "text": " So if you don't know what an adversarial example is, an adversarial example is basically the"}, {"start": 34.28, "end": 40.44, "text": " following. Say you have an image classifier, right? Classifier, boom, neural network, image"}, {"start": 40.44, "end": 52.92, "text": " here and the image is of a, let's say a cat. This is my best attempt at a cat, bang cat."}, {"start": 52.92, "end": 60.24, "text": " And you feed it through the classifier and the classifier says cat. Now if you perturb"}, {"start": 60.24, "end": 66.12, "text": " this image, if you derive an image from it and you perturb it just very slightly, very"}, {"start": 66.12, "end": 73.68, "text": " subtly. So you introduce some pixels here, there, here, there, right? You change some pixels"}, {"start": 73.68, "end": 79.56, "text": " in a very targeted way. And you feed that new image through here. Then the classifier"}, {"start": 79.56, "end": 85.92, "text": " will say dog or something really, you can make it say anything like airplane or I don't"}, {"start": 85.92, "end": 94.44, "text": " know, sky or whatever you want. These are called adversarial examples. And it's true,"}, {"start": 94.44, "end": 100.68, "text": " their existence and the reasons for their existence and pervasiveness remain unclear."}, {"start": 100.68, "end": 105.48, "text": " They say we demonstrate that adversarial examples can be directly attributed to the presence"}, {"start": 105.48, "end": 111.48, "text": " of non robust features. So they're basically, their paper is about these non robust features"}, {"start": 111.48, "end": 117.44, "text": " and they they define later what they mean exactly. But here they say features derive from"}, {"start": 117.44, "end": 122.80000000000001, "text": " patterns in the data distribution that are highly predictive, yet brittle and incomprehensible"}, {"start": 122.80000000000001, "end": 132.52, "text": " to humans. And this is this pretty neat. So the fundamental idea as I understand it and"}, {"start": 132.52, "end": 138.88000000000002, "text": " I'm going to take this away right here that if you have images, let's say here of cats"}, {"start": 138.88000000000002, "end": 147.60000000000002, "text": " and I'm going to draw another one over here. If you have an image, say of cats, there"}, {"start": 147.60000000000002, "end": 153.92000000000002, "text": " is multiple features in this image. 
And the feature is something that the classifier can"}, {"start": 153.92, "end": 164.32, "text": " pick up on and kind of learn to this is a horrible cat learn to learn to classify images from."}, {"start": 164.32, "end": 173.64, "text": " So features that we humans generally use are a cat has ear ear eyes whiskers, right? And"}, {"start": 173.64, "end": 180.83999999999997, "text": " the general relationship to each other of these things, this is what constitutes a cat. And"}, {"start": 180.84, "end": 186.4, "text": " that's how we classify it. But also they say there are other features that are also very"}, {"start": 186.4, "end": 194.8, "text": " indicative, right? If you think what differentiates a cat from a dog and a dog here, let's"}, {"start": 194.8, "end": 206.96, "text": " big fluffy ears, also eyes, yeah, not going to go further with the dog too much. The what"}, {"start": 206.96, "end": 211.32000000000002, "text": " differentiates cat from a dog and we of course we would say well the head shape is different"}, {"start": 211.32000000000002, "end": 216.16, "text": " and the ears are different and the relationship to them to each other are different. But it"}, {"start": 216.16, "end": 220.88, "text": " could also be and this is a simplistic right now, right? But it's also that cats for"}, {"start": 220.88, "end": 226.08, "text": " example have different fur than dogs. And yeah, being overly simplistic here, but bear"}, {"start": 226.08, "end": 232.56, "text": " with me. So let's say in our hypothetical world cats have fur that all goes like this"}, {"start": 232.56, "end": 240.72, "text": " left to right, right? Every hair is basically vertical, sorry horizontal. If you look at"}, {"start": 240.72, "end": 249.32, "text": " it like that and dog fur on the other hand is always like this, right? Is is vertical,"}, {"start": 249.32, "end": 258.52, "text": " right? Talk to bottom. And so the the classifier might just as well pick up on the fur direction"}, {"start": 258.52, "end": 263.28, "text": " in order to classify images, right? Since all cats have that type of fur and all dogs"}, {"start": 263.28, "end": 268.24, "text": " have that other type of fur, the classifier might just as well pick up on that, right?"}, {"start": 268.24, "end": 272.79999999999995, "text": " And to us humans, we don't really pay attention to these things because they're minute, right?"}, {"start": 272.79999999999995, "end": 281.4, "text": " They don't look at the directions of the individual hairs to classify in an animal to cat or dog."}, {"start": 281.4, "end": 286.0, "text": " You much rather go for these kind of large features like where the ears, how do they look"}, {"start": 286.0, "end": 291.04, "text": " and so on? But the classifier, there's actually you can make an argument that the classifier"}, {"start": 291.04, "end": 297.8, "text": " would more likely pick up on the fur direction, right? In order to in order to classify,"}, {"start": 297.8, "end": 302.64, "text": " since we're using convolutional neural networks and they're generally neighborhood pixel"}, {"start": 302.64, "end": 308.12, "text": " neighborhood operators, it can much easier pick up on these patterns than it can on the"}, {"start": 308.12, "end": 317.04, "text": " general relationship of the of the large features. 
So if a classifier now learns that cats"}, {"start": 317.04, "end": 320.64, "text": " always have fur like this and dogs always have fur like that, what we could do is we can"}, {"start": 320.64, "end": 327.8, "text": " go over here to the dog and change its fur, right? Change in the image, change its fur"}, {"start": 327.8, "end": 332.92, "text": " to this direction. Now to us humans, that would still very much look like a dog because"}, {"start": 332.92, "end": 337.08, "text": " the fur direction is almost imperceptible, but to the classifier that has only learned,"}, {"start": 337.08, "end": 342.4, "text": " hey, a cat always has this type of fur and the dog always has that type of fur, that"}, {"start": 342.4, "end": 348.84, "text": " new image would totally look like a cat. Right? So this paper argues exactly that, this"}, {"start": 348.84, "end": 354.52, "text": " paper argues that in the data set, there are features and these are real features like"}, {"start": 354.52, "end": 360.08, "text": " this, this actually could be the case that cats fur is always like that and dogs fur is"}, {"start": 360.08, "end": 366.15999999999997, "text": " always like this. It could be the case and the classifier could pick up on this, right?"}, {"start": 366.16, "end": 372.96000000000004, "text": " And then the adversarial examples, the reason why they exist is because the classifier has"}, {"start": 372.96000000000004, "end": 379.24, "text": " picked up on these imperceptible features and so by changing the features, we can change"}, {"start": 379.24, "end": 388.72, "text": " the classifier's decision without changing the image in a large scale. So they say that"}, {"start": 388.72, "end": 395.6, "text": " they make this hypothesis and they kind of, they say, okay, we established a widespread"}, {"start": 395.6, "end": 401.64000000000004, "text": " existence in standard data sets, so they kind of give supporting evidence for their hypothesis"}, {"start": 401.64000000000004, "end": 406.64000000000004, "text": " and then they say, finally, we present a simple setting, which is a theoretical setting where"}, {"start": 406.64000000000004, "end": 414.6, "text": " we can rigorously tie the phenomena we observe to a misalignment between the human specified"}, {"start": 414.6, "end": 419.68, "text": " notion of robustness and the inherent geometry of the data. All right, so it's kind of different"}, {"start": 419.68, "end": 425.36, "text": " pieces of the of this paper and we're going to look at them in succession. So the introduction"}, {"start": 425.36, "end": 431.04, "text": " we largely skip except that their main claim here is specifically, we claim that adversarial"}, {"start": 431.04, "end": 437.96000000000004, "text": " of vulnerability is a direct result of our model's sensitivity to well generalizing features"}, {"start": 437.96000000000004, "end": 444.92, "text": " in the data. So that's the the core point, I think, is well generalizing features, which"}, {"start": 444.92, "end": 452.0, "text": " is what we mentioned. These are features that actually describe the data well, but, but"}, {"start": 452.0, "end": 458.6, "text": " features that are kind of imperceptibly small to humans or don't fit our notion of robustness."}, {"start": 458.6, "end": 466.04, "text": " All right, so they go on and they define more clearly what they mean. 
Here, whenever we"}, {"start": 466.04, "end": 473.08, "text": " talk of a feature, right, remember we had the our classifier here and we input an image"}, {"start": 473.08, "end": 478.68, "text": " and the image, it's called x, right, and that classifier, usually if we look at it closer"}, {"start": 478.68, "end": 485.88, "text": " consists of multiple layers of interconnected neurons, whatever, and the last layer will"}, {"start": 485.88, "end": 494.88, "text": " be an output layer into different classes. Right. And so the features when they say a feature,"}, {"start": 494.88, "end": 502.36, "text": " what they mean specifically is the last here, the last representation before it goes into"}, {"start": 502.36, "end": 509.64, "text": " the classifier. So the way you would classify them and here they just establish a two class"}, {"start": 509.64, "end": 514.76, "text": " setting, the way you would establish that is you have feature one, feature two, feature"}, {"start": 514.76, "end": 521.6800000000001, "text": " three, and you have a weight vector w one for each feature, w two, w three, you make the"}, {"start": 521.6800000000001, "end": 529.0, "text": " inner product and that will give you a y hat basically. If that is high, you say it's"}, {"start": 529.0, "end": 535.68, "text": " class one, if that is low, you say it's class minus one. So the classes here are plus one"}, {"start": 535.68, "end": 542.16, "text": " and minus one just to make things simple. So but you see the features are basically what"}, {"start": 542.16, "end": 548.24, "text": " comes out after these layers, what is then used to make a linear classification. This last"}, {"start": 548.24, "end": 554.16, "text": " thing is basically just a logistic regression. So you can think of the features as the output"}, {"start": 554.16, "end": 562.04, "text": " of the neural network, but before it goes into the classifier. So a feature basically, since"}, {"start": 562.04, "end": 569.0799999999999, "text": " then it's linearly classified. If the feature is high, it will give a signal for one class"}, {"start": 569.0799999999999, "end": 573.92, "text": " and if a feature is lower, it will give a signal for the other class, depending on of course"}, {"start": 573.92, "end": 583.88, "text": " if this w is negative or positive. Alright. So they say we call a feature row useful."}, {"start": 583.88, "end": 590.56, "text": " And if this thing holds here, what is this thing? This thing means so the expectation over"}, {"start": 590.56, "end": 598.72, "text": " the dates. So generally in the data set, this must hold y times the feature. So why is"}, {"start": 598.72, "end": 606.12, "text": " the class? Remember, it's plus or minus one. And the feature as we see in some number,"}, {"start": 606.12, "end": 616.04, "text": " y times a feature must be higher than some number. So does it mean when a product is high?"}, {"start": 616.04, "end": 621.92, "text": " It means either both are high or both are low. So they're correlated. That's what that"}, {"start": 621.92, "end": 630.92, "text": " means. So basically, this is says feature f is useful. If whenever it an example x is"}, {"start": 630.92, "end": 642.68, "text": " of class one, if it's class class one, or let's, if it if y is one plus one, then f is"}, {"start": 642.68, "end": 650.8, "text": " high. And whenever y is minus one, then f is low, which means it's high in the negative"}, {"start": 650.8, "end": 658.52, "text": " direction. Alright. So this is our, this is intuitive. Right. 
If a feature is useful,"}, {"start": 658.52, "end": 663.4399999999999, "text": " it means it should say one thing in samples of class one, then it should say another"}, {"start": 663.4399999999999, "end": 668.12, "text": " thing in samples of class two. Then I can actually use the feature to make a decision when"}, {"start": 668.12, "end": 677.12, "text": " it's, you know, very correlated with the class. So that, you know, that makes perfect sense."}, {"start": 677.12, "end": 682.56, "text": " So that's kind of when is a feature useful if it correlates with the class label? Yes."}, {"start": 682.56, "end": 688.0, "text": " Cool. But the usefulness simply any feature, basically, that a class fair will extract"}, {"start": 688.0, "end": 692.76, "text": " will be useful. That's an assumption we can make otherwise the class if I wouldn't extract"}, {"start": 692.76, "end": 701.2, "text": " it. So the neural network here, that's an assumption will only extract useful features."}, {"start": 701.2, "end": 707.48, "text": " Right. Because the non useful features, there would simply be no reason for it to extract"}, {"start": 707.48, "end": 712.64, "text": " them because they don't contribute to solving the task because they're not correlated with"}, {"start": 712.64, "end": 722.64, "text": " an output class. Right. So next, they define robust, robustly useful features. So in addition"}, {"start": 722.64, "end": 730.4399999999999, "text": " to being useful, they're now also robust. What does it mean? Again, we want a correlation"}, {"start": 730.4399999999999, "end": 738.04, "text": " of why and the feature to be higher than some constant, but not only the feature of the"}, {"start": 738.04, "end": 746.16, "text": " image X, but the feature of the image X that has been perturbed by a small perturbation."}, {"start": 746.16, "end": 751.64, "text": " So and we take the the infinum here over a class of perturbations. Of course, this class"}, {"start": 751.64, "end": 756.8399999999999, "text": " of perturbations is exactly the adversarial perturbations. Basically what this means"}, {"start": 756.8399999999999, "end": 763.9599999999999, "text": " is it says that however we try to perturb X, right, and the infinum here means the minimum"}, {"start": 763.96, "end": 771.0, "text": " correlation. However, we try to make the feature not correlated with Y. However much we"}, {"start": 771.0, "end": 778.52, "text": " try, we can't get it lower than some some gamma, some number, right. We can't we can't"}, {"start": 778.52, "end": 785.6800000000001, "text": " get it down. So whatever we try to make the feature bad for the classifier, basically,"}, {"start": 785.6800000000001, "end": 792.9200000000001, "text": " we can't. Right. If this holds for a feature, if this is the case, then we call that feature"}, {"start": 792.92, "end": 800.36, "text": " a robust feature, right. Then that feature is robustly useful. If it correlates, no matter"}, {"start": 800.36, "end": 808.16, "text": " how hard we try to make it not correlate. And of course, a non robust features. So a useful"}, {"start": 808.16, "end": 818.56, "text": " non robust feature is a feature which is useful. You see here is useful, but is not gamma"}, {"start": 818.56, "end": 825.76, "text": " robust feature for any gamma. 
So it's a feature that is useful like the cat fur, right."}, {"start": 825.76, "end": 830.9599999999999, "text": " So this here, an example of this would be that the cat's eyes and ear position, right."}, {"start": 830.9599999999999, "end": 838.1199999999999, "text": " We can't just make a small perturbation for the image and make the ears be somewhere"}, {"start": 838.1199999999999, "end": 843.0, "text": " completely else. That's just that would require a large perturbation of the image. So the"}, {"start": 843.0, "end": 850.8, "text": " position of the ears and eyes are pretty robust features. But here the cat's fur, no matter"}, {"start": 850.8, "end": 860.08, "text": " how, no matter how small we make this this gamma, we can always kind of change the fur"}, {"start": 860.08, "end": 866.08, "text": " to make the feature not rogue to make the feature not useful, right. If we can change the cat"}, {"start": 866.08, "end": 872.16, "text": " fur into a dog fur and the dog fur into a cat fur, then the feature will become not"}, {"start": 872.16, "end": 877.9599999999999, "text": " useful anymore because we can, no, we can change that arbitrarily for any image and then"}, {"start": 877.9599999999999, "end": 883.04, "text": " the classifier will have no clue. It can't be like, well, this fur could be of any of"}, {"start": 883.04, "end": 888.6, "text": " any class, right. So the feature is not useful anymore. So this is a non robust feature."}, {"start": 888.6, "end": 894.04, "text": " The technique you can say any feature that is useful, but not robust is a non robust"}, {"start": 894.04, "end": 900.72, "text": " feature. All right. So this is kind of the definition of what robust and non robust"}, {"start": 900.72, "end": 906.5600000000001, "text": " features are. Yeah. Remember, maybe remember robust features like position of the ears"}, {"start": 906.5600000000001, "end": 913.1600000000001, "text": " and their shape and non robust features would be which direction are the individual hairs"}, {"start": 913.1600000000001, "end": 919.6800000000001, "text": " in the fur going, right. And in our world where cat fur is going different ways than dog"}, {"start": 919.6800000000001, "end": 930.6800000000001, "text": " fur. Yeah. So they now going to experimental evidence for their hypothesis. And he"}, {"start": 930.68, "end": 935.8, "text": " here you have to understand they do two experiments, which give pretty good indication that"}, {"start": 935.8, "end": 943.56, "text": " their hypothesis is actually correct. And what you have to understand before this is two"}, {"start": 943.56, "end": 949.28, "text": " things. First of all, here you basically, you just have to assume that they already,"}, {"start": 949.28, "end": 955.4799999999999, "text": " they have some procedure where they can do the following where they can take an image"}, {"start": 955.48, "end": 962.4, "text": " of the training data set and they can decompose it into its robust and non robust features."}, {"start": 962.4, "end": 969.48, "text": " Right. Don't I mean, don't ask yet how they do this, but they can decompose it into"}, {"start": 969.48, "end": 974.0, "text": " these two parts. Right. So that's assumption one, they have a procedure that can actually"}, {"start": 974.0, "end": 981.6, "text": " do that. And then number two is what they what they do here is basically the general"}, {"start": 981.6, "end": 986.44, "text": " theme of these experiments is they they have a training data set. Right. 
What's this"}, {"start": 986.44, "end": 993.84, "text": " is original training is they created derived version of it. So let's put a tick here."}, {"start": 993.84, "end": 1003.84, "text": " This is a derived version of the date set. Then they train a regular neural network with"}, {"start": 1003.84, "end": 1010.64, "text": " that. So what you can do with a neural network if you train one, all right. What you usually"}, {"start": 1010.64, "end": 1018.1999999999999, "text": " do is you feed images X, you feed images in, it gives you some output by hat and you say,"}, {"start": 1018.1999999999999, "end": 1023.76, "text": " well, but I know why is the true label. So I feed an image of a cat that the network says"}, {"start": 1023.76, "end": 1031.4, "text": " airplane. You say, well, but this should be a cat. So please make this by more to be more"}, {"start": 1031.4, "end": 1040.0, "text": " to be, um, please make this why hat more be like why. And then you have a loss function."}, {"start": 1040.0, "end": 1044.44, "text": " Here you say this is wrong. Please correct this. You back propagate and all the network"}, {"start": 1044.44, "end": 1048.72, "text": " in here will update to make that a bit more likely. That's how you train usually in"}, {"start": 1048.72, "end": 1055.6, "text": " a network. Now what you can do is if you want to become robust to adversarial examples,"}, {"start": 1055.6, "end": 1063.0, "text": " you can do what is called adversarial training, which means that you have the same network"}, {"start": 1063.0, "end": 1071.36, "text": " here, but of each of the training data points, you create a derived version, an adversarial"}, {"start": 1071.36, "end": 1077.76, "text": " example to that, right, to this X, you feed the adversarial examples through the network"}, {"start": 1077.76, "end": 1087.36, "text": " together with the original examples. Then this will give you some why hat, uh, two. And"}, {"start": 1087.36, "end": 1093.84, "text": " then you say, but this should also be equal to why basically you train the classifier"}, {"start": 1093.84, "end": 1101.08, "text": " also on adversarial examples, right? Since the hypothesis is if you train on an image"}, {"start": 1101.08, "end": 1107.1999999999998, "text": " data set, then you can teach the classifier about that data set, right? Like you do with"}, {"start": 1107.1999999999998, "end": 1114.24, "text": " the regular data set, say, well, okay, I can now just train on adversarial examples."}, {"start": 1114.24, "end": 1119.28, "text": " And my classifier will be able to better classify these correctly, right? This usually works"}, {"start": 1119.28, "end": 1123.16, "text": " it's called adversarial training and it's been a kind of a standard method to make your"}, {"start": 1123.16, "end": 1128.8, "text": " classifier robust. They don't do that here. They don't do this. They simply want to say,"}, {"start": 1128.8, "end": 1136.96, "text": " okay, we now have, we have a regular training procedure, right? Like this, except for what"}, {"start": 1136.96, "end": 1145.24, "text": " we change is here, the training data set, we change this to, in one case, for example,"}, {"start": 1145.24, "end": 1150.68, "text": " only robust images. So we've changed all the X to be only robust. And we do the regular"}, {"start": 1150.68, "end": 1157.16, "text": " training procedure. And then we evaluate that result in classifier here. 
This thing,"}, {"start": 1157.16, "end": 1162.8400000000001, "text": " we evaluate that, um, how does that behave? So it's kind of a new approach where you modify"}, {"start": 1162.84, "end": 1170.56, "text": " the, the date, the original data set. So what did they do? First of all, they decompose"}, {"start": 1170.56, "end": 1177.6399999999999, "text": " this training data set into a version that is only robust features, right? We, we,"}, {"start": 1177.6399999999999, "end": 1184.84, "text": " we assume we have such a procedure, we then train a regular neural network on that,"}, {"start": 1184.84, "end": 1194.4399999999998, "text": " right? We train a regular neural network on this, um, on this data set. And what we get"}, {"start": 1194.4399999999998, "end": 1200.1999999999998, "text": " is two things. First of all, good standard accuracy. What does good standard accuracy mean?"}, {"start": 1200.1999999999998, "end": 1209.28, "text": " It means that, um, we, we can test it on what's called the unmodified test set. So the,"}, {"start": 1209.28, "end": 1214.1599999999999, "text": " the test set, the original test set of the date set, the test set belonging to this training"}, {"start": 1214.16, "end": 1220.48, "text": " data set, we can test it on that. And it works just fine, right? So that basically means"}, {"start": 1220.48, "end": 1228.3200000000002, "text": " that the robust features are predictive of the, of the kind of, um, they generalize well."}, {"start": 1228.3200000000002, "end": 1234.88, "text": " It means that if I train a classifier only on robust features, that can actually classify"}, {"start": 1234.88, "end": 1243.0800000000002, "text": " well to, um, to the, to the test set, right? So that means that's standard accuracy."}, {"start": 1243.08, "end": 1248.84, "text": " Standard accuracy is how well do I classify the test set, just an unmodified test set."}, {"start": 1248.84, "end": 1255.32, "text": " So they also obtain good robust accuracy, which means that what is robust accuracy, robust"}, {"start": 1255.32, "end": 1262.96, "text": " accuracy means your accuracy on adversarial examples of the test set. And usually classifiers"}, {"start": 1262.96, "end": 1268.36, "text": " are vulnerable to this. So I classifiers usually obtain good standard accuracy, but bad"}, {"start": 1268.36, "end": 1275.9599999999998, "text": " robust accuracy. But if I only train my classifier on what they call robust features, then I"}, {"start": 1275.9599999999998, "end": 1283.8799999999999, "text": " all of a sudden have retained good standard accuracy, but I also get good robust accuracy,"}, {"start": 1283.8799999999999, "end": 1291.52, "text": " um, which means that it gives pretty good support to their hypothesis that the adversarial"}, {"start": 1291.52, "end": 1297.08, "text": " examples are abusing the fact that the classifiers learn the non robust features. Since if I don't"}, {"start": 1297.08, "end": 1303.6399999999999, "text": " have any non robust features, it means my classifier can't learn any non robust features, which in"}, {"start": 1303.6399999999999, "end": 1309.0, "text": " turn means my classifier isn't vulnerable to adversarial attacks because they would abuse the"}, {"start": 1309.0, "end": 1315.1599999999999, "text": " fact that the classifier has learned about the non robust features. So that's pretty good, um,"}, {"start": 1316.12, "end": 1322.04, "text": " evidence for their hypothesis. 
Second thing they do is they now create this"}, {"start": 1322.04, "end": 1330.76, "text": " on this modified data set where they only have non robust features. Right. So the only thing they"}, {"start": 1330.76, "end": 1337.48, "text": " have is non robust features. Again, they train a standard neural network. They train just a regular"}, {"start": 1337.48, "end": 1344.36, "text": " neural network on that. And they also get good standard accuracy. So this means that also the"}, {"start": 1344.36, "end": 1351.56, "text": " non robust features as we seen like the cats, uh, third direction can lead to you generalize well"}, {"start": 1351.56, "end": 1359.0, "text": " to the test sets since in the test set also the cats will have that, um, that property. But you get"}, {"start": 1359.0, "end": 1365.96, "text": " bad robust accuracy. And this, this gives further support to their hypothesis if you train a classifier"}, {"start": 1365.96, "end": 1373.8, "text": " on only non robust features. And they are features because they generalize well, but they are very"}, {"start": 1373.8, "end": 1380.76, "text": " vulnerable because they're non robust. Right. So the classifier that has learned about non robust"}, {"start": 1380.76, "end": 1388.44, "text": " features is vulnerable. They didn't do a third experiment, which I find pretty cool,"}, {"start": 1388.92, "end": 1394.68, "text": " where they take, they take the training image. And of course it's a non modified training image."}, {"start": 1394.68, "end": 1401.72, "text": " So it's robust features will basically say this is a dog. It's non robust features will also say"}, {"start": 1401.72, "end": 1408.04, "text": " this is a dog because it, you know, it's a, it's a training image of a dog. Um, and what they"}, {"start": 1408.04, "end": 1414.6, "text": " then do is they derive from this dog and adversarial example towards the cat class."}, {"start": 1415.8799999999999, "end": 1423.3999999999999, "text": " Right. So what does it mean in their hypothesis, if their hypothesis is correct, it now means that"}, {"start": 1423.3999999999999, "end": 1429.96, "text": " the robust features still say it's a dog. We can also see this here, right. The, the kind of"}, {"start": 1429.96, "end": 1440.8400000000001, "text": " big shape of the image still is a dog to us humans. But the non robust features will say it's a cat."}, {"start": 1440.8400000000001, "end": 1447.08, "text": " Right. This hinges on their hypothesis that adversarial examples actually abuse the non robust"}, {"start": 1447.08, "end": 1452.28, "text": " features. Right. They create an adversarial example. So if their hypothesis is correct, the non robust"}, {"start": 1452.28, "end": 1461.56, "text": " features now say that's a cat. Um, so they derive a, an entire data set where they change every"}, {"start": 1461.56, "end": 1467.72, "text": " image to another image. And they also change the labels accordingly. And then they train again a"}, {"start": 1467.72, "end": 1475.72, "text": " regular neural network on this. And they look what happens on the unmodified test set. So"}, {"start": 1475.72, "end": 1482.76, "text": " the unmodified test set will, so imagine if you're the, you're this classifier and what you get is"}, {"start": 1482.76, "end": 1492.28, "text": " an image X. And it has robust features. That's a dog. And has non robust features say cat and"}, {"start": 1492.28, "end": 1499.32, "text": " it's label your ask to predict cat. Right. 
And then we see the next image and the next image X2,"}, {"start": 1499.88, "end": 1503.32, "text": " the non robust features. Maybe it's derived from some other class. It will say plain."}, {"start": 1503.32, "end": 1511.6399999999999, "text": " But the, the robust, the non robust features against a cat. Right. And you're asked to predict cat."}, {"start": 1511.6399999999999, "end": 1518.4399999999998, "text": " So basically the constructed data set where the non robust features always agree with, with the"}, {"start": 1518.4399999999998, "end": 1527.1599999999999, "text": " label, but the robust features they don't. Um, so naturally what you can expect is the classifier"}, {"start": 1527.1599999999999, "end": 1533.1599999999999, "text": " will learn to disregard the robust features because they're no longer useful. Right. But it will"}, {"start": 1533.16, "end": 1540.92, "text": " actually only will learn to view these features. It's different from before before we only had these"}, {"start": 1540.92, "end": 1546.1200000000001, "text": " features. Now we, these features are still in there. Right. But they're not informative. So the"}, {"start": 1546.1200000000001, "end": 1553.16, "text": " classifier will naturally learn to pick up on the non robust features and classify and classify"}, {"start": 1553.16, "end": 1559.72, "text": " according to them. So much that if we now test on the test set and we feed in an actual cat. Right."}, {"start": 1559.72, "end": 1565.32, "text": " It, of course, it's robust features will say cat and it's non robust features will say cat. And"}, {"start": 1565.32, "end": 1571.16, "text": " the classifier is able to accurately predict this is a cat. Even though the all the images of"}, {"start": 1571.16, "end": 1577.88, "text": " cats it has seen during training were actually of basically of non cats of here a dog."}, {"start": 1578.92, "end": 1586.2, "text": " So this is pretty cool and shows that kind of these, these features that these non robust features"}, {"start": 1586.2, "end": 1594.2, "text": " that adversarial examples abuse since they're created by adversarial examples. They, they are"}, {"start": 1594.2, "end": 1601.32, "text": " actually predictive and generalize to the test set. So that's pretty, pretty good evidence for"}, {"start": 1601.32, "end": 1608.76, "text": " their hypothesis so far. Now the kind of final remaining question is how do they create? What is"}, {"start": 1608.76, "end": 1616.36, "text": " the procedure where they can create a robust and then basically non robust version of the data set."}, {"start": 1617.4, "end": 1624.68, "text": " And here is kind of where we get into the into the sort of what I find. Yeah. So here you see"}, {"start": 1624.68, "end": 1632.44, "text": " basically examples of so this is an originally image of a ship in that see for 10 data set I believe."}, {"start": 1632.44, "end": 1640.52, "text": " And this is a robust sample. So these are only robust features of the ship. And this is a ship"}, {"start": 1640.52, "end": 1646.2, "text": " made with only non robust features. You see it's actually a moose, but the non robust features have"}, {"start": 1646.2, "end": 1654.2, "text": " been changed to ship. All right. So the way they construct a robust version of the data set."}, {"start": 1654.2, "end": 1663.4, "text": " They have a formal definition, but the way they do it is as follows. So and then they say,"}, {"start": 1663.4, "end": 1669.16, "text": " okay, here is where we where we get into the details. 
They say, imagine we have a classifier."}, {"start": 1670.28, "end": 1676.6000000000001, "text": " Right. The classifier outputs features and here we call them, here they call them G,"}, {"start": 1676.6000000000001, "end": 1683.4, "text": " which is the representative. It can be larger than features. It can be a bigger class. But in essence,"}, {"start": 1683.4, "end": 1690.3600000000001, "text": " G is the features, which then goes into the into the classifier and into the labels and so on."}, {"start": 1690.3600000000001, "end": 1699.48, "text": " So the neural network outputs the features inputs some X. Now what if what if I have another X,"}, {"start": 1699.48, "end": 1708.1200000000001, "text": " let's say X prime. And I just initialize this with random noise. And if I feed this and I get G"}, {"start": 1708.12, "end": 1716.36, "text": " prime here and I try to make the two as close as possible by changing X. So I'm going to change"}, {"start": 1716.36, "end": 1723.7199999999998, "text": " my X here. Basically, I'm going to change my image such that the outputs, the features here match"}, {"start": 1723.7199999999998, "end": 1728.6, "text": " each other as close as possible. What does it mean? And I do this via back propagation. Right."}, {"start": 1728.6, "end": 1736.04, "text": " I match these and I back propagate to X. I can do that with gradient descent. What happens is that"}, {"start": 1736.04, "end": 1748.84, "text": " my image X will basically pick up will match the image my X prime will match the X in all the ways"}, {"start": 1748.84, "end": 1758.52, "text": " that are relevant for the features. Basically, I will transfer all of the features from X to X prime."}, {"start": 1758.52, "end": 1762.04, "text": " But nothing else. Right. Since I start with random. Now,"}, {"start": 1762.04, "end": 1770.76, "text": " what if my classifier and that's what they do? What if the classifier is a robust classifier?"}, {"start": 1770.76, "end": 1776.76, "text": " So remember, we talked about we can actually robustify a classifier by doing adversarial training."}, {"start": 1776.76, "end": 1784.12, "text": " What if I have a classifier? It's like such that is robust. If I input an X and it outputs me"}, {"start": 1784.12, "end": 1791.72, "text": " feature representation of X. If the classifier is robust, that representation will only contain"}, {"start": 1791.72, "end": 1799.88, "text": " robust features. And then if I have a second image X or and I started from random noise and I"}, {"start": 1799.88, "end": 1810.68, "text": " match the representation of X and by changing XR. Basically, I will transfer all of the robust"}, {"start": 1810.68, "end": 1816.44, "text": " features from X. But nothing else. Right. Given that I start from random noise here,"}, {"start": 1816.44, "end": 1821.16, "text": " this means random noise has no features. Let me just say assumption random noise has no features"}, {"start": 1821.16, "end": 1827.4, "text": " since it's random noise. And if I transfer only the robust features, basically what I've done is I've"}, {"start": 1828.8400000000001, "end": 1837.96, "text": " I've have now an image that I know has no non robust features and only robust features of X."}, {"start": 1837.96, "end": 1844.52, "text": " So that's how they derive a how they derive a robustified version of X."}, {"start": 1844.52, "end": 1857.56, "text": " Second, how do they derive a non robust version? 
And that's even even easier if I have a classifier."}, {"start": 1857.56, "end": 1864.04, "text": " Right. A regular classifier. And I want a non robust version of X."}, {"start": 1864.04, "end": 1874.12, "text": " Have X input output, G output, some label. What I do is I simply derive an adversarial"}, {"start": 1874.12, "end": 1883.9599999999998, "text": " example of X, like we did before adversarial example in here, out here. And that gives me some"}, {"start": 1883.9599999999998, "end": 1893.56, "text": " Y2, which is different from Y. Right. If I have an adversarial example, then basically I've transferred"}, {"start": 1895.9599999999998, "end": 1903.0, "text": " I've transferred the non robust features that lead to class Y2. I've transferred the non robust"}, {"start": 1903.0, "end": 1910.92, "text": " features here while still maintaining the robust features from here. So if this is to abstract,"}, {"start": 1910.92, "end": 1922.2, "text": " imagine here X is an image of a dog, right dog. And I derive from it an adversarial image that"}, {"start": 1922.2, "end": 1931.72, "text": " now says airplane. Right. So the robust features will still be of a dog will still be of the original"}, {"start": 1931.72, "end": 1938.92, "text": " image, but the non robust features will be of the airplane class. All right. So that's how I"}, {"start": 1938.92, "end": 1946.76, "text": " derive a non robust non robust version that has features of kind of one."}, {"start": 1948.68, "end": 1952.76, "text": " Robust features of one class, but non robust features of the other class. That's what you see up"}, {"start": 1952.76, "end": 1958.92, "text": " here with the moose, right. The moose clearly has been started from the image of a moose and then"}, {"start": 1958.92, "end": 1965.16, "text": " has been has received non robust features from the ship class. And that's just your classic"}, {"start": 1965.16, "end": 1975.0, "text": " adversarial example procedure. So that's the that's the kind of procedure. And so what's kind of my"}, {"start": 1975.5600000000002, "end": 1981.0800000000002, "text": " criticism here, if you look at the first part, the first part where they say, well, in order to"}, {"start": 1981.0800000000002, "end": 1986.04, "text": " determine what the robust features are, we actually need a classifier that's already robust."}, {"start": 1986.04, "end": 1992.2, "text": " So we've seen before we have a we have a data set. Sorry, let's go up here."}, {"start": 1993.8799999999999, "end": 2000.36, "text": " They say, aha, here we have a data set, right. And we can disentangle this and then it will"}, {"start": 2001.8799999999999, "end": 2008.28, "text": " which color have we not used. We have a data set. We only we robustify the data set to a"}, {"start": 2008.28, "end": 2013.0, "text": " robust data set. We train a standard neural network. And that gives us good robust accuracy. Why?"}, {"start": 2013.0, "end": 2017.48, "text": " Which is really cool because we don't do anything special during training and we still get good"}, {"start": 2017.48, "end": 2028.68, "text": " robust accuracy. But in order to do this procedure here, this one, you actually have to have a robust"}, {"start": 2028.68, "end": 2037.88, "text": " classifier, right. You have to have this already robustified classifier, which you have obtained"}, {"start": 2037.88, "end": 2046.7600000000002, "text": " by adversarially training the robust classifier. 
Basically what you're doing now is you take this"}, {"start": 2046.7600000000002, "end": 2051.88, "text": " adversarial training procedure, which the point here is that you don't do anything different during"}, {"start": 2051.88, "end": 2057.0, "text": " training, right. But here you take the adversarial training procedure and via training the robust"}, {"start": 2057.0, "end": 2062.76, "text": " classifier and via changing the state to set here, you basically get good robust accuracy. Which to"}, {"start": 2062.76, "end": 2069.0, "text": " me is just a reflection that you've obtained the data set using this robust classifier in the first"}, {"start": 2069.0, "end": 2077.7200000000003, "text": " place. I mean, yeah, of course, their method gives a hint that I can actually, this is actually"}, {"start": 2077.7200000000003, "end": 2084.6800000000003, "text": " due to things in the data set themselves, right. But they're, I mean, that's really important because"}, {"start": 2084.68, "end": 2093.48, "text": " it surely means that it's not a point of let's say the classifier itself, but it's a point of"}, {"start": 2093.48, "end": 2101.0, "text": " the data set, which also say, okay, it also explains why these adversarial examples transfer between"}, {"start": 2101.0, "end": 2105.96, "text": " classifiers. If I have two classifiers that are different, but classify the same thing, they're"}, {"start": 2105.96, "end": 2111.16, "text": " vulnerable to the same adversarial example, which basically means it must be some property of the"}, {"start": 2111.16, "end": 2118.7599999999998, "text": " data set that these things learn. But to, to then say we have a procedure to extract the robust"}, {"start": 2118.7599999999998, "end": 2123.24, "text": " features, and if we only train on the robust features, we become robust, right, as here."}, {"start": 2124.2, "end": 2129.3199999999997, "text": " But you obtain the robust features by using a robustified classifier, which you have adversarially"}, {"start": 2129.3199999999997, "end": 2136.52, "text": " trained. To me, that's kind of kind of back-doring adversarial training into this whole procedure."}, {"start": 2136.52, "end": 2145.4, "text": " And, yeah. So that's kind of my first criticism. My second criticism is the fact that,"}, {"start": 2146.12, "end": 2150.84, "text": " in the, I mean, it's an interesting take on this, but this whole notion, this whole"}, {"start": 2151.56, "end": 2157.16, "text": " seeing of these features or robust, these features are non-robust, is basically just reframing"}, {"start": 2157.16, "end": 2162.2, "text": " the problem of adversarial examples in terms of features. It says nothing"}, {"start": 2162.2, "end": 2169.56, "text": " why these features are there, except simply postulating that they're there. It says nothing"}, {"start": 2169.56, "end": 2176.04, "text": " why they're there. It says nothing about why the classifiers pick up on them or how they do it,"}, {"start": 2176.04, "end": 2181.3199999999997, "text": " or how, you know, how this is to be mitigated without first having a robustly trained"}, {"start": 2182.2, "end": 2188.2799999999997, "text": " network to extract the robust features. So it basically just shoves the problem into the realm"}, {"start": 2188.28, "end": 2197.0, "text": " of the dataset rather than from the realm of the classifiers. So it's very much widely, or not,"}, {"start": 2198.44, "end": 2203.0, "text": " things are very much widely known about these samples. 
It's just a reframing of the problem,"}, {"start": 2203.0, "end": 2210.1200000000003, "text": " I feel. And it's cool experiments. I mean, it does show a lot of things about these adversarial"}, {"start": 2210.12, "end": 2220.12, "text": " examples, but it's certainly not an explanation, I find. At least that's my opinion. All right,"}, {"start": 2220.12, "end": 2231.96, "text": " so down here then, they show that they make an kind of simplified version of this a theoretical"}, {"start": 2231.96, "end": 2237.96, "text": " setting where they can analyze this. And they basically say, okay, this is generally what happens"}, {"start": 2237.96, "end": 2244.68, "text": " at the fundamental level. At the fundamental level, you have classes, and let's say the classes"}, {"start": 2244.68, "end": 2251.96, "text": " are distributed like this, right? These are the examples in the dataset, and they're distributed"}, {"start": 2251.96, "end": 2259.16, "text": " like that, right? You have them. Mean, and you have some covariance. These are Gaussian's here."}, {"start": 2259.16, "end": 2264.92, "text": " So they're distributed like that. If I have two classes like this, such as here, right,"}, {"start": 2264.92, "end": 2270.6, "text": " and they're distributed like that, and I create like the separator, a linear classifier,"}, {"start": 2270.6, "end": 2276.12, "text": " the linear classifier will classify like this. It will be like super. This is the best linear"}, {"start": 2276.12, "end": 2282.44, "text": " classifier, right? We can calculate this accurately. But what do I say when I say, okay,"}, {"start": 2283.56, "end": 2288.76, "text": " I want an adversarial example. Adversarial examples means that I can shift my"}, {"start": 2288.76, "end": 2296.92, "text": " examples by a little bit, but achieve a big change in output. And since this distance here,"}, {"start": 2298.6000000000004, "end": 2305.0, "text": " right? So if I have a sample here, I need to go a long way to the boundary to achieve another"}, {"start": 2305.0, "end": 2311.32, "text": " output, but if I go into another direction, right? If I go down here, I only need to go a very short"}, {"start": 2311.32, "end": 2319.2400000000002, "text": " way. And since adversarial examples as they're specified, they say, okay, we want to go a short"}, {"start": 2319.2400000000002, "end": 2325.2400000000002, "text": " way and a short way is characterized by going a short way in any direction, right? This is a"}, {"start": 2325.2400000000002, "end": 2331.4, "text": " terrible circle. In any direction, we want to go a short way. That's another adversarial example."}, {"start": 2331.4, "end": 2337.2400000000002, "text": " And you see that if I have this any direction property, there's actually directions where this"}, {"start": 2337.24, "end": 2343.7999999999997, "text": " classification boundary is very, very close. And so that's what they say. This is a fundamental"}, {"start": 2343.7999999999997, "end": 2349.3199999999997, "text": " misalignment between the geometry of the data, which is like this. And the geometry of how we"}, {"start": 2349.3199999999997, "end": 2355.16, "text": " specify adversarial examples, which is, you know, kind of equal in each direction, which leads to that."}, {"start": 2356.04, "end": 2363.56, "text": " And they say, okay, what if I now robust parameter? So what if I adversarily train my network"}, {"start": 2363.56, "end": 2371.08, "text": " to be robust? 
It basically means that I expand my data because I add adversarial examples,"}, {"start": 2371.08, "end": 2378.36, "text": " right? Of this circle here, I actually add adversarial examples. So my class, my data distribution"}, {"start": 2378.36, "end": 2387.48, "text": " will actually more like this. And my separating hyperplane will change here. And the geometry"}, {"start": 2387.48, "end": 2393.96, "text": " of the adversarial examples will be much more aligned with my separating hyperplane. So this"}, {"start": 2393.96, "end": 2399.48, "text": " is kind of a toy example of where they say this is fundamentally what's going on. There's a misalignment"}, {"start": 2399.48, "end": 2405.4, "text": " between the geometry of the adversarial examples and the inherent geometry of the data."}, {"start": 2407.88, "end": 2417.08, "text": " So that's kind of the theoretical analysis they do. And with that, I finish here. And I hope"}, {"start": 2417.08, "end": 2419.08, "text": " this was clear enough. And goodbye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=_N_nFzMtWkA | Reinforcement Learning, Fast and Slow | Abstract:
Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. However, the concern has been raised that deep RL may be too sample-inefficient – that is, it may simply be too slow – to provide a plausible model of how humans learn. In the present review, we counter this critique by describing recently developed techniques that allow deep RL to operate more nimbly, solving problems much more quickly than previous methods. Although these techniques were developed in an AI context, we propose that they may have rich implications for psychology and neuroscience. A key insight, arising from these AI methods, concerns the fundamental connection between fast RL and slower, more incremental forms of learning.
Authors: Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell, Demis Hassabis
https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0 | Hi there. Today we're looking at Reinforcement Learning, Fast and Slow by Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people are from Google DeepMind and this is a review of developments in reinforcement learning, especially as it pertains to how humans learn, or what we can understand from the RL world that translates over to human learning. Alright, so basically their argument here is that the first wave of deep RL, as you see here, is powerful but slow, and they give examples of this. So in box one, box one is this. So they believe there's an image missing here. This is backgammon, TD-gammon. This is the famous DeepMind Atari-playing bot, and this is the 3D labyrinth-playing bot. So there have been a number of advances in RL, and what they talk about especially is deep RL. So when we talk about reinforcement learning, the easiest case is where you have an agent and an environment. Alright, so the agent will observe some observation from the environment, and then based on that the agent will perform an action a, and then the environment will give back a reward and also the next observation. So this is o_0, o_1. Alright, and then this is a_0, and then here you have a_1, and so on. So basically this goes back and forth and back and forth: the agent performs an action, the environment gives a reward and the next observation. So this could be, for example, here in the Atari world: the observation is the screen itself. Alright, and then the agent needs to perform an action, which is an input of the joystick or pressing some button. You can see the individual actions actually listed here, and then the reward will be given to the agent via a number, which I guess is the same number as up here. So the task is to maximize the reward, and the difference is that you're not doing this in a supervised manner. So you're not telling the agent what would be the correct action to do. You simply tell it whether what it did was good or bad by giving it a high or low reward. Alright, so that's reinforcement learning. So what is deep reinforcement learning? Deep reinforcement learning simply means the agent maps the observation to the action via a deep neural network. That's deep reinforcement learning: the mapping, or some part of the agent, consists of a deep neural network. And you see, for example, here there's a deep neural network mapping the observation to the action, as well as down here, but it's a bit more complicated. Alright, so they argue that the first wave of this was powerful but slow, meaning you need a lot of samples, and they give two sources of why it's slow, why you need a lot of samples. They say the two factors are incremental parameter adjustment and weak inductive bias. So incremental parameter adjustment basically means that you have to update or train your neural network in a very small, incremental way, because you train it one by one, right? You train your neural network step by step. You have to make small steps in order to not forget what came before. You can't fundamentally readjust your neural network to every new batch of observations, because then that's going to destroy all the information you've learned from the old ones. And then weak inductive bias here basically comes from an understanding of these neural networks.
They are general function approximators and they can approximate any function. So if you just think in terms of kind of I don't know let's say polynomials and what kind of polynomials are this polynomial this polynomial this polynomial this weird polynomial. If I have a function that can approximate all of these then I have a weak inductive bias whereas if I kind of know okay all my polynomials are the polynomial that I'm looking for ultimately I'm very sure it's a third degree polynomial right so something like this or like this or like this. So this is much less of a much less of a class of functions that I can fit. But if I'm sure that the function that I'm trying to fit falls in this category then I'm much faster. So this is then called a strong inductive bias is where I build into the model basically I tell it beforehand. Here is a very restricted class of functions that you can fit. Whereas in a weak inductive bias I won't tell it that I'll simply say well model you could fit any function you want and I'm just giving you training samples. So this is a classic example of a bias variance trade off where there is a lot of variance in these models meaning you can fit also a lot of functions but here because you bias the model towards a certain set of functions it can lower this variance and in this case here they argue it speeds up learning because you don't have you don't have as much variance that means you can basically go faster while learning. Alright so they propose two solutions to this problem of this kind of to mitigate these problems that make reinforcement learning faster or have made reinforcement learning faster. This is a review remember. So the first one is episodic deep reinforcement learning and this episodic deep reinforcement learning is specified here fast learning through episodic memory. So the suggestion in this field of research is to augment the neural network or the agent by a memory and the memory could look something like this. So in a lot of these RL frameworks what you what a principal component of the agent is so the agent will get an observation O and one of the things it has to do is estimate the value of this observation of this state. So basically the agent isn't some state let's say you play pong right and you are here down and the ball comes your way up there right this little arrow sorry. So the ball flies away from you and you're all the way down which basically means I'm gonna draw this bigger. So here you are down here and the ball is here flying up there. So one task in these in these agents that occurs often is to estimate the value of this observation basically means how much reward am I expecting from this state going into the future. In this case I probably will not expect a lot of rewards since I can't move up fast enough right to catch the ball. So this I would assign this state a pretty low value whereas if I were up here I would assign this state quite a high value. 
So as we've already seen this is a deep neural network mapping we learn to assign value to different states and this is one of the parts that takes a long time and these methods they are the one that's depicted here replaces this value estimation by saying okay we have an observation we somehow need to estimate its value why why don't we look for similar observation so we have some kind of memory right and we go with our observation and we retrieve O prime 1 O prime 2 O prime 3 that are somehow similar right so in our in our pong example I'm down I'm up here ball moves here I could be looking now at at states where I was here or where I was here like really close or about the ball flew a bit differently but still in the same direction or down here right so all these states are kind of close to my state and I can I already have I already have played these since they're in my memory right so with every one of them I can also retrieve the rewards that I got so I owe it because I already know the problem in reinforcement learning is before you do an action you don't know what the reward will be but here I already know because I've played it I've already experienced it it's in the past so I know what reward I got right so and this is exactly what they say over here they basically say here we have time time runs this way we're in state 1 then in state 2 and so on and we perform actions and and get rewards and what we can do is we can save these states into this memory as along with their sum of discounted rewards that we collect from that state on and then later this is like a SpongeBob reference if we want to estimate the value of some new state right what we do is we retrieve all of these states from memory calculate a similarity score over them and with with we wait basically we add their rewards waited by how similar they are to the state that we want to compute so this basically amounts to averaging over states respective by how close they are to the current state right this is kind of a soft a soft way of saying I only select the states which are close and that gives you a value estimate for the new states so basically this means you just got rid of having to train a value function and this will speed up your reinforcement learning quite a bit if you don't have to train that if you already have good value estimations from your previous experience that's great of course there are a number of problems associated with that namely if this memory here for example becomes stale it doesn't represent the future rewards quite as well there is also a question of which states do you keep in memory just the good ones or do you have to have a certain property do you have to have some diversity in there and of course the biggest problem here the biggest problem is how do you know when two states are similar or when they aren't it might be easy in a situation like pong where I only have like three variables like position at y position of my of my paddle and position of the ball and velocity of the ball those are like I can specify those in five numbers but if it gets harder than that if it's like this labyrinth setting full 3d environment then we have no clue which states are similar to each other and what these what most end up doing is they will train you guessed it they will train a deep neural network to give you this similarity score between states right how they do it is is a different question but presumably you can train this network offline basically meaning you can pre-train it you could 
pre-train it and then the so we have two stages stage one pre-train train similarity dnn right this and then once we've done that second stage do reinforcement learning using this and the claim here is that by having this done this this second stage will become faster so it it doesn't really solve the problem of the sample efficiency but what it says is okay the actual reinforcement learning part will become faster because we've already done the work previously but basically by by including this similarity score sorry whatever dnn by including this in the language of the review here we have successfully introduced an inductive bias into the rl procedure because the rl procedure now can't just fit any function we say we tell it your value function is one that conforms to our notion of similarity that we've pre-trained this restricts the rl algorithm and we give it an inductive bias and as long as our similarity score is useful for the rl algorithm it can speed up its learning because it doesn't have to learn the value function itself all right cool so the second part here is a bit more abstract it's called meta reinforcement learning speeding up deep rl by learning to learn it's kind of learning to learn approaches are quite abundant in the literature people try this usually there's a I mean it's it's very large scale experiments basically you have I think I believe they show it somewhere here yeah you have like some some outer loop where you you'd say that's this thing here what the outer loop does is in each loop it samples one environment so it samples one environment from a distribution of environments so now you not only have one environment but you say okay if I'm going to navigate this maze want to trying to learn to navigate this maze I'm going actually to learn to learn to navigate many mazes right so it's not like you train one agent to learn at you train one agent to navigate many mazes that would just be classic reinforcement learning but you want to train an algorithm that helps an agent learners a particular maze and you do that by training your helper algorithm on a variety of agent maze combinations so in each step you sample one environment like this this here and you then have an inner loop here you fully reinforcement learn train an agent in the classic sense on this environment right you see here action action observation reward right but the agent receives some kind of signal from outside so the outside algorithm will kind of tell the agent how to approach the problem right this could be that it initializes the the weights I'm here you see that the outer loop trains the parameter weights which determine the inner learner that interacts with an environment during the duration of the episode for every cycle of the outer loop a new environment environment is sampled from a distribution of environments which share some common structure so basically the one would expect when you train this that these parameters here this could be for example it could be the initial weights of the network that the agent uses that is one possibility right this is very abstract here this metary reinforcement learning it could be literally anything that the outer model teaches the inner model or gives to the inner model all right and you you train both of these with reinforcement learning so the inner you train with reinforcement learning on the individual rewards and then you can train the outer loop on the reward that the entire agent environment episode achieved so the that's kind of a two 
loop situation and yeah so that's metary reinforcement learning again it's very unspecified what it does but as you can already see if you now have such an algorithm that kind of tells the the inner agent just as an example how to initialize its weights right how to initialize the weights of its deep neural network if you have that here then the agent you will technically bias it this is again an inductive bias so you will give it inductive bias towards what you think are good weights to generally learn these maze structured environments right since the the outer loop you can update it way slower because it needs to learn over a longer time horizon and it needs to learn things for a different variety of environments but once you have good kind of initial weights for a particular environment then this agent in here can learn much faster given an individual environment so the agent you instantiate it and then you give it good starting weights or some other kind of signal about the environment and then it can go much much faster at learning the environment thereby you have just sped up this inner agent by providing it an inductive bias and that's basically what the what the the these what the claim of the review is that by providing these models with a larger inductive bias you may then speed up their learning because you've kind of told them what good functions are on the out from the outset of course you see the problem again here what the problem the problem is of course you actually need to train this outer sorry this this outer loop and the outer loop may actually take much much longer to train than a signal un and unbiased reinforcement learning thing but again what you could do is you could pre-train on a distribution of environments and then once a new environment shows up that is similar to this distribution you can then have the agent instantiated and learn much faster so again kind of this two step process you could pre-train this outer loop and then the inner loop will be much faster than if you didn't have the outer loop all right so those are basically the kind of the kind of outlines they do here they then kind of do a connection to like the brain and so on and they relate this to biology and biological learning but ultimately their conclusion is here that whenever you want to do whenever you have slow or this is at least my conclusion from their article whenever you have slow or you can transform it to fast or l or l but you have to outsource the slow or l slow something else slow x you have to outsource the the slowness to some other part so if you want to do fast or l you have to outsource the slowness and what the slowness provides is an inductive bias which means yeah if you want to do like faster with episodic memory you have to learn the similarity function which again which might be slow in itself but then the or l will be fast and if you want to do this via kind of an outer meta learner again this learning of the outer meta learner might be slow but then the inner learner will be fast in a connection to the kind of biological aspect of this they do make a connection which I find is appropriate in that for example the human brain the reason we can learn things fast let's say in the physical world picking things up dropping things down or navigating our paths we're incredibly good at this and navigating through like a weird terrain with rocks in the way is because of course our brains have been adapted to these kinds of environment over generations so there is an outer 
process like evolution which is this kind of outer loop and it instantiates the inner loop which are the humans that kind of live or die by their ability to navigate better so if the outer loop does a good job of only keeping the humans alive that can navigate well then the individual human in here that does this the individual human given a landscape with rocks will then be much faster at learning to navigate it all right so that was it for that I it's an interesting article to read especially the connections to the kind of biological aspects and without having a nice day | [{"start": 0.0, "end": 7.84, "text": " Hi there. Today we're looking at reinforcement learning fast and slow by Matthew Botvenek,"}, {"start": 7.84, "end": 17.48, "text": " Sam Ritter, Jane X Wang, Zeb Kurt Nielsen, Charles Spendel and Demis Hassabis. These people"}, {"start": 17.48, "end": 26.080000000000002, "text": " are from Google Deep Mind and this is a review of kind of development in reinforcement"}, {"start": 26.08, "end": 33.28, "text": " learning, especially as it pertains to kind of how humans learn or what we can understand"}, {"start": 33.28, "end": 43.36, "text": " from the RL world that translates over to human learning. Alright, so basically their"}, {"start": 43.36, "end": 54.92, "text": " argument here is that the first wave of deep RL as you see here is powerful but slow"}, {"start": 54.92, "end": 62.400000000000006, "text": " and they give examples of this. So in box one, box one is this. So they believe there's"}, {"start": 62.400000000000006, "end": 72.04, "text": " an image missing here. This is backgammon, TD gammon. This is the famous Deep Mind Atari"}, {"start": 72.04, "end": 81.0, "text": " playing Bot and this is kind of the 3D labyrinth playing Bot. So there's been a number of advances"}, {"start": 81.0, "end": 86.28, "text": " in RL and especially what they talk about is deep RL. So when we talk about reinforcement"}, {"start": 86.28, "end": 98.52, "text": " learning, the easiest case is where you have an agent, agent and an environment. Alright,"}, {"start": 98.52, "end": 105.68, "text": " so the agent will observe some observation from the environment and then based on that"}, {"start": 105.68, "end": 112.52000000000001, "text": " the agent will perform an action A and then the environment will give back a reward and"}, {"start": 112.52000000000001, "end": 121.92000000000002, "text": " also a next observation. So this is 0, 0, 1. Alright, and then this is A, 0 and then"}, {"start": 121.92000000000002, "end": 129.08, "text": " here you give A1, AI. So basically this goes back and forth and back and forth. The agent"}, {"start": 129.08, "end": 133.84, "text": " performs an action, the environment gives a reward and the next observation. So this could"}, {"start": 133.84, "end": 142.28, "text": " be for example here in the Atari world. The observation is the screen itself. Alright,"}, {"start": 142.28, "end": 148.84, "text": " and then the agent needs to perform an action which is an input of the joystick or pressing"}, {"start": 148.84, "end": 156.4, "text": " some button. You can see the individual actions actually listed here and then the reward will"}, {"start": 156.4, "end": 164.16, "text": " be given to the agent via a number which I guess is the same number as up here. 
So the task"}, {"start": 164.16, "end": 170.48000000000002, "text": " is to maximize the reward simply by, so the difference is you're not doing this in a"}, {"start": 170.48000000000002, "end": 175.48000000000002, "text": " supervised manner. So you're not telling the agent what would be the correct action to"}, {"start": 175.48000000000002, "end": 183.08, "text": " do. You simply tell it that whether what it did was good or bad by giving it a high"}, {"start": 183.08, "end": 188.92000000000002, "text": " or low reward. Alright, so that's reinforcement learning. So what is deeper enforcement"}, {"start": 188.92000000000002, "end": 195.8, "text": " learning? Deep reinforcement learning simply means the agent maps the observation to the"}, {"start": 195.8, "end": 204.68, "text": " action via a deep neural network. So deep neural network. That's that's deeper enforcement"}, {"start": 204.68, "end": 211.72000000000003, "text": " learning where the mapping or some part of the agent consists of a deep neural network."}, {"start": 211.72, "end": 218.8, "text": " And you see for example here there's a deep neural network mapping the observation to"}, {"start": 218.8, "end": 225.56, "text": " the action as well as as down here but it's a bit more complicated. Alright, so they"}, {"start": 225.56, "end": 233.72, "text": " argue that the first wave of this was powerful but slow meaning kind of you need a lot of"}, {"start": 233.72, "end": 242.28, "text": " samples and they give two sources of why it's slow. Why you need a lot of samples. They"}, {"start": 242.28, "end": 250.48, "text": " say the two factors are incremental parameter adjustment and weak inductive bias. So incremental"}, {"start": 250.48, "end": 257.68, "text": " parameter adjustment means basically that you have to update or train your neural network"}, {"start": 257.68, "end": 267.0, "text": " in a very small incremental way in order to basically because you train it one by one, right?"}, {"start": 267.0, "end": 272.76, "text": " You train your neural network step by step. You have to make small steps in order to not"}, {"start": 272.76, "end": 280.6, "text": " forget what what came before. You can't fundamentally readjust your neural network to every new batch"}, {"start": 280.6, "end": 284.96000000000004, "text": " of observations because then that's going to destroy all the information you've learned"}, {"start": 284.96, "end": 293.79999999999995, "text": " of the old one. And then weak inductive bias here is a basically an understanding of these"}, {"start": 293.79999999999995, "end": 299.52, "text": " neural networks. They are general function approximators and they can approximate any"}, {"start": 299.52, "end": 305.12, "text": " function. So if you just think in terms of kind of I don't know let's say polynomials"}, {"start": 305.12, "end": 314.15999999999997, "text": " and what kind of polynomials are this polynomial this polynomial this polynomial this weird polynomial."}, {"start": 314.16, "end": 320.48, "text": " If I have a function that can approximate all of these then I have a weak inductive bias"}, {"start": 320.48, "end": 329.32000000000005, "text": " whereas if I kind of know okay all my polynomials are the polynomial that I'm looking for ultimately"}, {"start": 329.32000000000005, "end": 336.72, "text": " I'm very sure it's a third degree polynomial right so something like this or like this"}, {"start": 336.72, "end": 343.48, "text": " or like this. 
So this is much less of a much less of a class of functions that I can fit."}, {"start": 343.48, "end": 350.84000000000003, "text": " But if I'm sure that the function that I'm trying to fit falls in this category then I'm"}, {"start": 350.84000000000003, "end": 356.76, "text": " much faster. So this is then called a strong inductive bias is where I build into the model"}, {"start": 356.76, "end": 363.32, "text": " basically I tell it beforehand. Here is a very restricted class of functions that you can"}, {"start": 363.32, "end": 370.24, "text": " fit. Whereas in a weak inductive bias I won't tell it that I'll simply say well model"}, {"start": 370.24, "end": 375.2, "text": " you could fit any function you want and I'm just giving you training samples. So this is"}, {"start": 375.2, "end": 383.6, "text": " a classic example of a bias variance trade off where there is a lot of variance in these"}, {"start": 383.6, "end": 389.72, "text": " models meaning you can fit also a lot of functions but here because you bias the model towards"}, {"start": 389.72, "end": 396.32, "text": " a certain set of functions it can lower this variance and in this case here they argue"}, {"start": 396.32, "end": 402.48, "text": " it speeds up learning because you don't have you don't have as much variance that means"}, {"start": 402.48, "end": 412.64, "text": " you can basically go faster while learning. Alright so they propose two solutions to this"}, {"start": 412.64, "end": 421.03999999999996, "text": " problem of this kind of to mitigate these problems that make reinforcement learning faster"}, {"start": 421.04, "end": 428.92, "text": " or have made reinforcement learning faster. This is a review remember. So the first one is"}, {"start": 428.92, "end": 434.28000000000003, "text": " episodic deep reinforcement learning and this episodic deep reinforcement learning is"}, {"start": 434.28000000000003, "end": 441.96000000000004, "text": " specified here fast learning through episodic memory. So the suggestion in this field of"}, {"start": 441.96, "end": 451.52, "text": " research is to augment the neural network or the agent by a memory and the memory could"}, {"start": 451.52, "end": 460.24, "text": " look something like this. So in a lot of these RL frameworks what you what a principal component"}, {"start": 460.24, "end": 469.2, "text": " of the agent is so the agent will get an observation O and one of the things it has to do is estimate"}, {"start": 469.2, "end": 475.28, "text": " the value of this observation of this state. So basically the agent isn't some state let's"}, {"start": 475.28, "end": 485.24, "text": " say you play pong right and you are here down and the ball comes your way up there right"}, {"start": 485.24, "end": 490.28, "text": " this little arrow sorry. So the ball flies away from you and you're all the way down which"}, {"start": 490.28, "end": 500.96, "text": " basically means I'm gonna draw this bigger. So here you are down here and the ball is here"}, {"start": 500.96, "end": 509.4, "text": " flying up there. So one task in these in these agents that occurs often is to estimate the"}, {"start": 509.4, "end": 516.16, "text": " value of this observation basically means how much reward am I expecting from this state going"}, {"start": 516.16, "end": 522.76, "text": " into the future. In this case I probably will not expect a lot of rewards since I can't move"}, {"start": 522.76, "end": 530.16, "text": " up fast enough right to catch the ball. 
So this I would assign this state a pretty low value"}, {"start": 530.16, "end": 539.4399999999999, "text": " whereas if I were up here I would assign this state quite a high value. So as we've already"}, {"start": 539.4399999999999, "end": 545.9599999999999, "text": " seen this is a deep neural network mapping we learn to assign value to different states"}, {"start": 545.96, "end": 554.44, "text": " and this is one of the parts that takes a long time and these methods they are the one that's"}, {"start": 554.44, "end": 561.6800000000001, "text": " depicted here replaces this value estimation by saying okay we have an observation we somehow"}, {"start": 561.6800000000001, "end": 569.0400000000001, "text": " need to estimate its value why why don't we look for similar observation so we have some kind"}, {"start": 569.04, "end": 581.12, "text": " of memory right and we go with our observation and we retrieve O prime 1 O prime 2 O prime 3 that"}, {"start": 581.12, "end": 589.0799999999999, "text": " are somehow similar right so in our in our pong example I'm down I'm up here ball moves here I"}, {"start": 589.0799999999999, "end": 597.4399999999999, "text": " could be looking now at at states where I was here or where I was here like really close or"}, {"start": 597.44, "end": 604.12, "text": " about the ball flew a bit differently but still in the same direction or down here right so all"}, {"start": 604.12, "end": 610.72, "text": " these states are kind of close to my state and I can I already have I already have played these"}, {"start": 610.72, "end": 617.6, "text": " since they're in my memory right so with every one of them I can also retrieve the rewards that I"}, {"start": 617.6, "end": 623.32, "text": " got so I owe it because I already know the problem in reinforcement learning is before you do an"}, {"start": 623.32, "end": 629.6, "text": " action you don't know what the reward will be but here I already know because I've played it I've"}, {"start": 629.6, "end": 637.0, "text": " already experienced it it's in the past so I know what reward I got right so and this is exactly"}, {"start": 637.0, "end": 645.36, "text": " what they say over here they basically say here we have time time runs this way we're in state"}, {"start": 645.36, "end": 653.84, "text": " 1 then in state 2 and so on and we perform actions and and get rewards and what we can do is we"}, {"start": 653.84, "end": 662.6800000000001, "text": " can save these states into this memory as along with their sum of discounted rewards that we collect"}, {"start": 662.6800000000001, "end": 672.76, "text": " from that state on and then later this is like a SpongeBob reference if we want to estimate the"}, {"start": 672.76, "end": 682.92, "text": " value of some new state right what we do is we retrieve all of these states from memory calculate a"}, {"start": 682.92, "end": 690.52, "text": " similarity score over them and with with we wait basically we add their rewards waited by how"}, {"start": 690.52, "end": 699.4, "text": " similar they are to the state that we want to compute so this basically amounts to averaging over"}, {"start": 699.4, "end": 706.4399999999999, "text": " states respective by how close they are to the current state right this is kind of a soft a"}, {"start": 706.4399999999999, "end": 711.8, "text": " soft way of saying I only select the states which are close and that gives you a value estimate"}, {"start": 711.8, "end": 718.1999999999999, "text": " for the new states so basically this means you just got 
rid of having to train a value function"}, {"start": 718.1999999999999, "end": 724.92, "text": " and this will speed up your reinforcement learning quite a bit if you don't have to train that if"}, {"start": 724.92, "end": 731.4799999999999, "text": " you already have good value estimations from your previous experience that's great of course there"}, {"start": 731.4799999999999, "end": 736.04, "text": " are a number of problems associated with that namely if this memory here for example becomes"}, {"start": 736.04, "end": 746.28, "text": " stale it doesn't represent the future rewards quite as well there is also a question of which states do"}, {"start": 746.28, "end": 750.68, "text": " you keep in memory just the good ones or do you have to have a certain property do you have to have"}, {"start": 750.68, "end": 758.92, "text": " some diversity in there and of course the biggest problem here the biggest problem is how do you know"}, {"start": 760.8399999999999, "end": 766.52, "text": " when two states are similar or when they aren't it might be easy in a situation like pong where"}, {"start": 766.52, "end": 775.3199999999999, "text": " I only have like three variables like position at y position of my of my paddle and position of"}, {"start": 775.32, "end": 783.0, "text": " the ball and velocity of the ball those are like I can specify those in five numbers but if it gets"}, {"start": 783.88, "end": 791.48, "text": " harder than that if it's like this labyrinth setting full 3d environment then we have no clue"}, {"start": 791.48, "end": 798.7600000000001, "text": " which states are similar to each other and what these what most end up doing is they will train"}, {"start": 798.76, "end": 806.04, "text": " you guessed it they will train a deep neural network to give you this similarity score between states"}, {"start": 807.0, "end": 814.04, "text": " right how they do it is is a different question but presumably you can train this network offline"}, {"start": 814.04, "end": 821.72, "text": " basically meaning you can pre-train it you could pre-train it and then the so we have two stages"}, {"start": 821.72, "end": 835.1600000000001, "text": " stage one pre-train train similarity dnn right this and then once we've done that second stage"}, {"start": 835.1600000000001, "end": 844.12, "text": " do reinforcement learning using this and the claim here is that by having this done this this"}, {"start": 844.12, "end": 851.8, "text": " second stage will become faster so it it doesn't really solve the problem of the sample efficiency but"}, {"start": 851.8, "end": 856.44, "text": " what it says is okay the actual reinforcement learning part will become faster because we've"}, {"start": 856.44, "end": 863.16, "text": " already done the work previously but basically by by including this similarity score sorry"}, {"start": 864.2, "end": 871.08, "text": " whatever dnn by including this in the language of the review here we have successfully"}, {"start": 871.08, "end": 880.44, "text": " introduced an inductive bias into the rl procedure because the rl procedure now can't just fit any"}, {"start": 880.44, "end": 888.12, "text": " function we say we tell it your value function is one that conforms to our notion of similarity"}, {"start": 888.12, "end": 895.0, "text": " that we've pre-trained this restricts the rl algorithm and we give it an inductive bias and as"}, {"start": 895.0, "end": 903.08, "text": " long as our similarity score is useful for the rl algorithm it can speed up its learning because"}, 
{"start": 903.08, "end": 911.08, "text": " it doesn't have to learn the value function itself all right cool so the second part here is a"}, {"start": 911.08, "end": 918.04, "text": " bit more abstract it's called meta reinforcement learning speeding up deep rl by learning to learn"}, {"start": 918.04, "end": 924.44, "text": " it's kind of learning to learn approaches are quite abundant in the literature people try this"}, {"start": 924.44, "end": 931.24, "text": " usually there's a I mean it's it's very large scale experiments basically you have"}, {"start": 933.1600000000001, "end": 940.2, "text": " I think I believe they show it somewhere here yeah you have like some some outer loop where you"}, {"start": 940.2, "end": 947.6400000000001, "text": " you'd say that's this thing here what the outer loop does is in each loop it samples one environment"}, {"start": 947.6400000000001, "end": 953.08, "text": " so it samples one environment from a distribution of environments so now you not only have one"}, {"start": 953.08, "end": 960.12, "text": " environment but you say okay if I'm going to navigate this maze want to trying to learn to"}, {"start": 960.12, "end": 971.4000000000001, "text": " navigate this maze I'm going actually to learn to learn to navigate many mazes right so it's not"}, {"start": 971.4000000000001, "end": 978.44, "text": " like you train one agent to learn at you train one agent to navigate many mazes that would just be"}, {"start": 978.44, "end": 986.36, "text": " classic reinforcement learning but you want to train an algorithm that helps an agent"}, {"start": 986.36, "end": 994.9200000000001, "text": " learners a particular maze and you do that by training your helper algorithm on a variety"}, {"start": 994.9200000000001, "end": 1001.4000000000001, "text": " of agent maze combinations so in each step you sample one environment like this this here"}, {"start": 1001.4, "end": 1009.72, "text": " and you then have an inner loop here you fully reinforcement learn train an agent"}, {"start": 1010.76, "end": 1017.16, "text": " in the classic sense on this environment right you see here action action observation reward"}, {"start": 1017.16, "end": 1026.36, "text": " right but the agent receives some kind of signal from outside so the outside algorithm will kind"}, {"start": 1026.36, "end": 1034.36, "text": " of tell the agent how to approach the problem right this could be that it initializes the"}, {"start": 1035.08, "end": 1042.9199999999998, "text": " the weights I'm here you see that the outer loop trains the parameter weights which determine the"}, {"start": 1042.9199999999998, "end": 1050.28, "text": " inner learner that interacts with an environment during the duration of the episode for every cycle"}, {"start": 1050.28, "end": 1055.4799999999998, "text": " of the outer loop a new environment environment is sampled from a distribution of environments which"}, {"start": 1055.48, "end": 1061.96, "text": " share some common structure so basically the one would expect when you train this that these parameters"}, {"start": 1061.96, "end": 1070.1200000000001, "text": " here this could be for example it could be the initial weights of the network that the agent uses"}, {"start": 1070.1200000000001, "end": 1075.96, "text": " that is one possibility right this is very abstract here this metary reinforcement learning it could"}, {"start": 1075.96, "end": 1083.4, "text": " be literally anything that the outer model teaches the inner model or gives to the inner model"}, {"start": 
1083.4, "end": 1089.72, "text": " all right and you you train both of these with reinforcement learning so the inner you train with"}, {"start": 1089.72, "end": 1096.52, "text": " reinforcement learning on the individual rewards and then you can train the outer loop on the reward"}, {"start": 1096.52, "end": 1105.48, "text": " that the entire agent environment episode achieved so the that's kind of a two loop situation"}, {"start": 1106.2800000000002, "end": 1111.64, "text": " and yeah so that's metary reinforcement learning again it's very unspecified what it does"}, {"start": 1111.64, "end": 1122.2800000000002, "text": " but as you can already see if you now have such an algorithm that kind of tells the the inner agent"}, {"start": 1123.0800000000002, "end": 1129.0, "text": " just as an example how to initialize its weights right how to initialize the weights of its"}, {"start": 1129.0, "end": 1137.4, "text": " deep neural network if you have that here then the agent you will technically bias it this is"}, {"start": 1137.4, "end": 1147.16, "text": " again an inductive bias so you will give it inductive bias towards what you think are good"}, {"start": 1147.16, "end": 1157.48, "text": " weights to generally learn these maze structured environments right since the the outer loop you can"}, {"start": 1157.48, "end": 1164.0400000000002, "text": " update it way slower because it needs to learn over a longer time horizon and it needs to learn"}, {"start": 1164.04, "end": 1170.44, "text": " things for a different variety of environments but once you have good kind of initial weights for"}, {"start": 1170.44, "end": 1177.08, "text": " a particular environment then this agent in here can learn much faster given an individual"}, {"start": 1177.08, "end": 1182.76, "text": " environment so the agent you instantiate it and then you give it good starting weights or some"}, {"start": 1182.76, "end": 1189.0, "text": " other kind of signal about the environment and then it can go much much faster at learning the"}, {"start": 1189.0, "end": 1196.6, "text": " environment thereby you have just sped up this inner agent by providing it an inductive bias and"}, {"start": 1196.6, "end": 1207.24, "text": " that's basically what the what the the these what the claim of the review is that by providing these"}, {"start": 1207.24, "end": 1213.8, "text": " models with a larger inductive bias you may then speed up their learning because you've kind of"}, {"start": 1213.8, "end": 1220.76, "text": " told them what good functions are on the out from the outset of course you see the problem again"}, {"start": 1220.76, "end": 1226.9199999999998, "text": " here what the problem the problem is of course you actually need to train this outer sorry this"}, {"start": 1226.9199999999998, "end": 1232.12, "text": " this outer loop and the outer loop may actually take much much longer to train than a signal"}, {"start": 1232.9199999999998, "end": 1241.1599999999999, "text": " un and unbiased reinforcement learning thing but again what you could do is you could pre-train"}, {"start": 1241.16, "end": 1246.76, "text": " on a distribution of environments and then once a new environment shows up that is similar to"}, {"start": 1246.76, "end": 1255.48, "text": " this distribution you can then have the agent instantiated and learn much faster so again kind of"}, {"start": 1255.48, "end": 1261.72, "text": " this two step process you could pre-train this outer loop and then the inner loop will be much"}, {"start": 1261.72, "end": 1271.48, 
"text": " faster than if you didn't have the outer loop all right so those are basically the kind of the"}, {"start": 1271.48, "end": 1279.32, "text": " kind of outlines they do here they then kind of do a connection to like the brain and so on"}, {"start": 1281.88, "end": 1291.08, "text": " and they relate this to biology and biological learning but ultimately their conclusion is here"}, {"start": 1291.08, "end": 1299.96, "text": " that whenever you want to do whenever you have slow or this is at least my conclusion from their"}, {"start": 1299.96, "end": 1309.08, "text": " article whenever you have slow or you can transform it to fast or l or l but you have to"}, {"start": 1309.08, "end": 1319.0, "text": " outsource the slow or l slow something else slow x you have to outsource the the slowness to"}, {"start": 1319.0, "end": 1325.08, "text": " some other part so if you want to do fast or l you have to outsource the slowness and what the"}, {"start": 1325.08, "end": 1334.68, "text": " slowness provides is an inductive bias which means yeah if you want to do like faster with"}, {"start": 1334.68, "end": 1341.24, "text": " episodic memory you have to learn the similarity function which again which might be slow in itself"}, {"start": 1341.24, "end": 1347.64, "text": " but then the or l will be fast and if you want to do this via kind of an outer"}, {"start": 1347.64, "end": 1353.5600000000002, "text": " meta learner again this learning of the outer meta learner might be slow but then the inner learner"}, {"start": 1353.5600000000002, "end": 1363.0, "text": " will be fast in a connection to the kind of biological aspect of this they do make a connection which"}, {"start": 1363.0, "end": 1372.2800000000002, "text": " I find is appropriate in that for example the human brain the reason we can learn things fast"}, {"start": 1372.28, "end": 1378.28, "text": " let's say in the physical world picking things up dropping things down or navigating our paths"}, {"start": 1378.28, "end": 1385.48, "text": " we're incredibly good at this and navigating through like a weird terrain with rocks in the way"}, {"start": 1386.76, "end": 1394.04, "text": " is because of course our brains have been adapted to these kinds of environment over generations"}, {"start": 1394.04, "end": 1401.8799999999999, "text": " so there is an outer process like evolution which is this kind of outer loop and it instantiates"}, {"start": 1401.88, "end": 1411.4, "text": " the inner loop which are the humans that kind of live or die by their ability to navigate better"}, {"start": 1412.2800000000002, "end": 1420.44, "text": " so if the outer loop does a good job of only keeping the humans alive that can navigate well"}, {"start": 1420.44, "end": 1428.6000000000001, "text": " then the individual human in here that does this the individual human given a landscape with rocks"}, {"start": 1428.6, "end": 1435.7199999999998, "text": " will then be much faster at learning to navigate it all right so that was it for that I"}, {"start": 1436.6, "end": 1441.7199999999998, "text": " it's an interesting article to read especially the connections to the kind of biological aspects"}, {"start": 1441.72, "end": 1467.4, "text": " and without having a nice day"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=F5mxzvgl_oU | S.H.E. - Search. Human. Equalizer. | Short opinion on Pantene's tool to de-bias Google search results.
https://www.apnews.com/Business%20Wire/c53a0e8f5fe04bf68e8311f214c806cf
https://shetransforms.us/ | Hi everyone, just a quick, more of a news update in the AI world, which is the following: Pantene launches SHE, the Search Human Equalizer, to shine a light on bias in search. So Pantene, the cosmetics corporation, launches this thing, which is supposed to correct your search. And it's introduced here in this YouTube video, which as you can see down here has 400 likes, has 3.5K dislikes, and of course comments are disabled. So that's kind of already weird. Let's say weird. If you go to the website here that they made, basically let me refresh this and you can see the intro. They say let's take the bias out of search. So if you search for greatest engineers, you'll get all men. If you search for school girl, you'll get these kind of sexualized images. If you search for Asian women in Spanish, same. So basically they have a browser extension that modifies your search results to do that; for example, school girl then looks like this. Of course, I don't know. If I were to do this, I would actually let people explore the search box right here. But of course they want you to download this extension. So to me, the interesting part is, how does this work? So you're asked to install a Chrome extension, which I won't do. But basically down here they say view the terms that SHE is equalizing. If you click on that, you get to a list. So it very much seems like this is absolutely manual, handcrafted work. Because there's a lot of work on correcting bias, for example, in search, in machine learning, and so on. These approaches usually have some data-driven approach that actually will change the models and so on, or will re-rank based on some kind of learned input. But this here is simply a list of terms, for example, famous actor, famous athletes, and so on, that it will then re-rank. And I'm pretty sure this is just human manual labor. Someone comes up with a new term, like, oh, this term, you can actually flag yourself in the Chrome extension. So they say here, flag this search, there's a button. So you can suggest one and they will say, oh yeah, okay, that is really biased. We'll now re-rank the search results for you. I mean, academically, this is a terrible idea, absolutely terrible. Because how are you going to do this, like, manually replace every single search? Like, I don't know. It reminds me a bit of Newspeak. But yeah, this approach is doomed to fail. And of course, it's just a company trying to sell you stuff. I mean, this is a PR gag, not really trying to do anything state of the art or meaningful or even effective, right? If you search a slightly different thing, then this will still show you the old kind of result. Yeah, so from the terms you can also pretty clearly see where they're coming from. They have their own name, they have Pantene. I didn't see this yet. They have Pantene in here. Yeah. So yeah, if you want less biased search results for these exact terms, then install the extension. I do not recommend you do so. But I would like them to take on one more query that I came up with, that is pretty, pretty biased, I found. And that's the most dangerous criminals. All men. Goodbye.
| [{"start": 0.0, "end": 8.48, "text": " Hi everyone, just a quick more of a news update in the air world, which is the following"}, {"start": 8.48, "end": 16.8, "text": " Pantene launches SHE, the search human equalizer, to shine a light on bias in search."}, {"start": 16.8, "end": 24.32, "text": " So Pantene, the kind of cosmetic corporation launches this thing, which is supposed to"}, {"start": 24.32, "end": 26.92, "text": " correct your search."}, {"start": 26.92, "end": 35.68, "text": " And it's introduced here in this YouTube video, which as you can see down here has of 400"}, {"start": 35.68, "end": 41.96, "text": " likes, has 3.5K dislikes, and of course comments are disabled."}, {"start": 41.96, "end": 47.160000000000004, "text": " So that's kind of already weird."}, {"start": 47.160000000000004, "end": 50.120000000000005, "text": " Let's say weird."}, {"start": 50.12, "end": 57.519999999999996, "text": " If you go to the website here that they made, basically let me refresh this and you can"}, {"start": 57.519999999999996, "end": 59.199999999999996, "text": " see the intro."}, {"start": 59.199999999999996, "end": 62.76, "text": " They say let's take the bias out of search."}, {"start": 62.76, "end": 68.16, "text": " So if you search for greatest engineers, you'll get all men."}, {"start": 68.16, "end": 76.12, "text": " If you search for school girl, you'll get like this kind of sexualized images."}, {"start": 76.12, "end": 85.24000000000001, "text": " If you search for Asian women in Spanish, same."}, {"start": 85.24000000000001, "end": 91.84, "text": " So basically they have a browser extension that modifies your search results, do that,"}, {"start": 91.84, "end": 96.16, "text": " for example, school girl looks like this."}, {"start": 96.16, "end": 98.16, "text": " Of course, I don't know."}, {"start": 98.16, "end": 104.68, "text": " If I were to do this, I would actually let people explore the search box right here."}, {"start": 104.68, "end": 110.16000000000001, "text": " But of course they want you to download this extension."}, {"start": 110.16000000000001, "end": 116.36000000000001, "text": " So to me, what the interesting part is, how does this work?"}, {"start": 116.36000000000001, "end": 123.60000000000001, "text": " So you're asked to install a Chrome extension, which I won't do."}, {"start": 123.60000000000001, "end": 131.44, "text": " But basically down here they say view the terms that SHE is equalizing."}, {"start": 131.44, "end": 133.92000000000002, "text": " If you click on that, you get to a list."}, {"start": 133.92, "end": 139.35999999999999, "text": " So it very much seems like this is absolutely manual, handcrafted work."}, {"start": 139.35999999999999, "end": 144.64, "text": " Because there's a lot of work in kind of correcting bias in, for example, in search, in machine"}, {"start": 144.64, "end": 146.0, "text": " learning, and so on."}, {"start": 146.0, "end": 152.48, "text": " These approaches usually have some data-driven approach that actually will change the models"}, {"start": 152.48, "end": 158.48, "text": " and so on or will re-rank based on some kind of learned input."}, {"start": 158.48, "end": 166.76, "text": " But this here is simply a list of terms, for example, famous actor, famous athletes, and"}, {"start": 166.76, "end": 169.23999999999998, "text": " so on, that it will then re-rank."}, {"start": 169.23999999999998, "end": 172.44, "text": " And I'm pretty sure this is just human manual labor."}, {"start": 172.44, "end": 178.83999999999997, 
"text": " Someone comes up with a new term, like, oh, this term, you can actually flag yourself"}, {"start": 178.83999999999997, "end": 180.12, "text": " in the Chrome extension."}, {"start": 180.12, "end": 184.92, "text": " So they say here, flag this search, there's a button."}, {"start": 184.92, "end": 188.44, "text": " So you can suggest one and they will say, oh yeah, okay, that is really not."}, {"start": 188.44, "end": 191.28, "text": " That is really biased."}, {"start": 191.28, "end": 195.68, "text": " Well now, re-rank the search results for you."}, {"start": 195.68, "end": 200.84, "text": " I mean, academically, this is a terrible idea, absolutely terrible."}, {"start": 200.84, "end": 207.0, "text": " Because how are you going to do this, like, manually replace every single series?"}, {"start": 207.0, "end": 209.0, "text": " Like, I don't know."}, {"start": 209.0, "end": 213.32, "text": " It reminds a bit of new speak."}, {"start": 213.32, "end": 215.96, "text": " But yeah, this approach is doomed to fail."}, {"start": 215.96, "end": 219.44, "text": " And of course, it's just a company trying to sell you stuff."}, {"start": 219.44, "end": 228.92000000000002, "text": " It's not, I mean, this is not a, this is a PR gag, not really trying to do anything, anything"}, {"start": 228.92000000000002, "end": 232.68, "text": " state of the art or meaningful or even effective, right?"}, {"start": 232.68, "end": 239.96, "text": " If you search a little different thing, then this will still show you that old kind of"}, {"start": 239.96, "end": 241.96, "text": " result."}, {"start": 241.96, "end": 248.28, "text": " Yeah, so from the terms you can also pretty clearly see where I come from."}, {"start": 248.28, "end": 250.04000000000002, "text": " They have their own name, they have panting."}, {"start": 250.04000000000002, "end": 251.44, "text": " I didn't see this yet."}, {"start": 251.44, "end": 254.04000000000002, "text": " They have panting in here."}, {"start": 254.04000000000002, "end": 256.16, "text": " Yeah."}, {"start": 256.16, "end": 264.56, "text": " So yeah, if you want less biased search results for these exact terms, then install the"}, {"start": 264.56, "end": 265.56, "text": " extension."}, {"start": 265.56, "end": 268.72, "text": " I do not recommend you do so."}, {"start": 268.72, "end": 275.12, "text": " But I would like them to take on one more query that I came up with."}, {"start": 275.12, "end": 277.48, "text": " That is pretty, pretty biased, I found."}, {"start": 277.48, "end": 281.20000000000005, "text": " And that's the most dangerous criminals."}, {"start": 281.20000000000005, "end": 282.20000000000005, "text": " All men."}, {"start": 282.2, "end": 307.68, "text": " Goodbye."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=3Tqp_B2G6u0 | Blockwise Parallel Decoding for Deep Autoregressive Models | https://arxiv.org/abs/1811.03115
Abstract:
Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding.
Authors: Mitchell Stern, Noam Shazeer, Jakob Uszkoreit | Hi there. Today we'll look at Blockwise Parallel Decoding for Deep Autoregressive Models by Mitchell Stern, Noam Shazeer and Jakob Uszkoreit of UC Berkeley and Google Brain. This is a bit more of an engineering paper than usual, which I find cool. It's basically an engineering trick to get these autoregressive models to decode faster, where you can either fully preserve their performance or suffer a bit of a drop in performance while speeding them up even more. Alright, so let's dive in. The paper starts out with a description of what autoregressive models are and what decoding means in them, so let me try to quickly explain this. An autoregressive model: basically we're talking about, let's say, language models. Language models are the classic example of these models. A language model is a model that simply predicts the next word in a sequence. So you could have something like "a cat sits on the" and then a blank, and the language model is asked to predict which word follows. The language model does this by predicting a probability distribution over the next word, w at position t plus one, given all the words w with index smaller or equal to t, so all the words that come before should lead to the next word being predicted. The language model is asked: what is the next word in the sequence, or what's the probability distribution over the next word? And then you can simply pick the maximum-probability word or something like this. That's pretty standard so far. So what is the autoregressive part here? The autoregressive part means that in order to find the next word, I look at all of the previous words. And what does it mean when I want to use this language model to generate a sentence? Let's say I've trained the language model and it's really good at predicting the next word, and now I want to actually use it to do something more interesting: I want it to generate a full sentence. What I do is pick the first word, say "the", and simply ask the language model: what's the next word? The language model can do this; it can assess the probability distribution over words and give me some distribution, and I pick the maximum one, say "house", so I have "the house". Then I go back and ask the language model: what's the next word? Clearly, based on the two previous words, it can give me the next word, and it would maybe say "the house is", and so on. So you can see how you can generate a sentence by feeding the answer that the language model gives back in as input for the next prediction step. Once you've predicted "the house is on", you can take that, together with everything you've predicted so far, to predict the next step. So you can use a language model that is trained to predict the next word to generate an entire sentence. And the autoregressive part basically means that its own predictions serve as the basis for the next predictions.
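To make this concrete, here is a minimal Python sketch of greedy autoregressive generation as described above. The language_model function and its interface (a callable returning a word-to-probability dict) are hypothetical stand-ins for whatever model you use; this is an illustration of the idea, not code from the paper.

```python
def greedy_generate(language_model, first_word, max_len=20, eos="<eos>"):
    """Generate a sentence one word at a time with a next-word model.

    language_model(prefix) is assumed to return a dict mapping each
    vocabulary word to its probability of being the next word.
    """
    sentence = [first_word]
    for _ in range(max_len):
        # The model's own previous outputs are fed back in as input.
        next_word_probs = language_model(sentence)
        # Greedy choice: take the single most likely next word.
        next_word = max(next_word_probs, key=next_word_probs.get)
        if next_word == eos:
            break
        sentence.append(next_word)
    return sentence
```

Note that each loop iteration has to wait for the previous one to finish, which is exactly the sequential bottleneck discussed next.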
And this introduces a fundamental problem, namely that I have to wait for one prediction before I can make the next. So I have to wait here for "is" before I can predict "on". This means that if this is my language model, it's a box, I can't help but go to the language model, wait for a response, okay, then go to the language model again, wait for a response again. There is an inherently sequential nature here, where I have to do M steps if M is the length of the sentence I want, and we can't make use of batching. Normally, what you do during training is you have a whole bunch of data, right? You have "the cat sits on the mat", you have "the house is blue". Just from these two sentences I can generate a bunch of training examples: this is a training example where the input is "the cat" and the target is "sits", and this is a training example where the input is "the cat sits" and the language model has to predict "on". This here is a training example, this is a training example, and so on. So I can chunk this up into a whole bunch of training examples, and all of those I can feed in parallel into a big matrix; I can put them all here and run them through my language model in training mode, because each of them is already in the corpus. So I can batch the training, but I can't batch the prediction, because, as we've seen before, predicting the next word inherently depends on the last word that the model itself has output, and there is no training corpus around, since we're not training. So this is the fundamental problem, and these authors tackle it: how can we make this decoding faster? They introduce greedy decoding here, where, as we've just seen, the next word is the one with the maximum log probability given the words we've input so far. And this x here (this is for example a machine translation task) would be the source-language sentence, maybe a French sentence; the y up to position j would be the English sentence decoded so far, if we're trying to translate to English; and y at position j plus one would be the next word we're trying to predict in the English sentence, given the English sentence so far and the total French sentence.
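In symbols, and as my own restatement rather than a formula copied verbatim from the paper, the greedy decoding rule described here is

\( y_{j+1} = \arg\max_{y} \, \log p(y \mid y_{\le j}, x) \)

that is, at each position you take the single word with the highest log probability given the source sentence x and the target prefix decoded so far.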
So greedy decoding just does this one step after another, and now we'll try to go to what they call blockwise parallel decoding. We can jump to the graphics straight away, because what they do is pretty straightforward and is best illustrated in this graphic. They start from the situation where they already have "I saw a dog ride"; this is the sentence that has been decoded so far, and we have to try to complete it. Naturally, we'd ask what the next word is, but they say: okay, what if we could predict not only the next word from this, but also the word two positions away, or three positions away, all at the same time? I mean, I can certainly build a language model that doesn't only predict the next word but also predicts the word after that, though of course the predictor for that word still only gets this prefix as an input. So this is the important thing here: the part of the model that predicts the word two positions away isn't being informed that this word is being produced here. So naturally you would expect the quality to be worse, because the words one, two and three positions away are each predicted basically independently of each other, just from the source context, so you can't expect much coherency between the words. This is the fundamental trade-off with such a model: you can predict farther into the future at the same time, but then these predictions can't depend on each other, and this degrades your performance quite a bit. So what these authors do to remedy that is they say: well, we can produce a bunch of these, right? Since all that's required as an input is this prefix, we can actually produce a batch of them at the same time; we can produce one, two and three words into the future, and we can do this like a hundred times in parallel, no problem. And we can sample this; we don't have to always take the most likely word, we can actually sample a bunch into the future. And this now gets smarter, because now I have a list of, say, a hundred suggestions of what the continuation here could be. I take these not as a given but as suggestions, and then I can have another model, this is called verify here, that scores all of these different decodings in parallel. Both of these can be done by the same model: we saw that the language model can be used either to predict or to score, since it inherently predicts the probability of sequences, of following words, and we can let it output this probability all in parallel, so this also counts as a score. What I'm trying to say is: since the language model is constructed to output probabilities anyway, we can use it both to predict the next word and, if we have a suggestion, to score that suggestion and say, okay, how likely is it? And then what we look for is the suggestion that has the highest score; and if you want to be really true to your original model, you say: I want the suggestion that would have had the maximum score had I decoded one word at a time. So then you basically retain the original performance, and you gain a speedup as long as what the greedy decoding would have produced is among the suggestions in your box of suggestions.
As long as that's in there, you gain a speedup. If it's not in there, then you still always have the one-word-ahead model, because you predict the next word anyway; so in case none of the suggestions works out, you still have this one-word prediction, which is basically the model you started with. So in the worst case you're as fast as the greedy model, and in the best case your suggestions are so good that they're always the one that would have been decoded anyway, so in this case you can do three steps at once. This verify step is shown here. This is just one suggestion, and they can produce many suggestions at the same time if there's memory, and they can score each of them: they can score this, this and this independently, as a batch, so this can be done in parallel, and here you see it is executed in parallel. So the model will score this word "in" and say: oh, this is the argmax of the greedy decoding anyway. It can also score the next step and say: aha, given that there is an "in", "the" is the argmax anyway. And it can score this step and say: given that there's "in the", the argmax would have been "car", and that's not "bus", so we reject that part of the suggestion, but we keep the earlier part and say, okay, "in the" is what would have been decoded anyway according to greedy decoding, so we can accept it and continue from there. This is the accept step. So you can see that in this one step, which we'll call one decoding step, we have basically done two of the greedy decoding steps in one go, by predicting into the future and then selecting what agrees with the original model. The fundamental thing is that we can score in parallel, but we cannot greedily produce in parallel.
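As a rough summary of the predict, verify and accept scheme described above, here is a minimal Python sketch of one blockwise decoding iteration. The functions propose_block (guesses the next k words at once, each conditioned only on the prefix) and greedy_next (the ordinary one-step argmax of the base model) are hypothetical placeholders, and the k verification calls are written as a loop for clarity even though in practice they run as a single parallel batch; this is a sketch of the idea under those assumptions, not the authors' implementation.

```python
def blockwise_decode_step(prefix, propose_block, greedy_next, k=3):
    """One predict / verify / accept iteration of blockwise parallel decoding."""
    # Predict: propose k future words in one shot.
    proposal = propose_block(prefix, k)

    # Verify: keep the longest prefix of the proposal that matches what
    # ordinary greedy decoding would have produced anyway.
    accepted = []
    for i in range(k):
        if proposal[i] == greedy_next(prefix + accepted):
            accepted.append(proposal[i])
        else:
            break

    # Accept: if nothing matched, fall back to a single greedy step, so the
    # worst case is exactly as fast (and as good) as greedy decoding.
    if not accepted:
        accepted = [greedy_next(prefix)]
    return prefix + accepted
```

In the best case all k proposed words are accepted and the sentence advances k positions per iteration; in the worst case this degenerates to ordinary greedy decoding, one word per iteration.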
They actually push this further by eliminating one of the evaluations, by combining the next predict step with the previous verify step, and it's pretty cool to look at. So we're in the same situation: you have this prefix, you suggest this continuation, and the scoring model will again go here. But while you verify, you also do the next predict at the same time, since it's the same model, and this model, every time you execute it, outputs a distribution over the next set of positions, so you might as well take those outputs. When you then decide to accept this prefix, you will already have the outputs computed for the next three positions, and you can feed them directly into the next predict step. You basically don't have to execute it: you simply go to the one you've accepted, look at the outputs you get anyway from this model, and use them. So you might ask: how does a model look that scores and predicts into the future like this? The answer is here; it's a bit out of order, I would maybe have put it earlier, but in any case this is what they do. They use a Transformer architecture, and you have to imagine it starts down here: there is a huge network down here, this is just the output layer, so there's a giant Transformer network down below, and it produces this output representation. Now, normally from this representation you would go to what's called the p layer here, an output vocabulary projection. This has one entry for each word in your vocabulary ("the", "a", "cat" and so on), and for each one you predict a probability; so with this representation you basically project onto the vocabulary and predict a probability distribution over the next word. But what they do is say: no, we don't only need the next word, we need the next three words, so let's actually split this output signal into three output signals. They do this by introducing a hidden feedforward layer here; they insert a single feedforward layer with some hidden size, and then they also add these skip connections, which basically just means they feed this output through directly to here and add it. So the feedforward layer needs to transform this output into the vocabulary projection input one step ahead, two steps ahead and three steps ahead. And you can see here that those are independent: they don't depend on each other, there's nothing feeding p1 back into the decision of p2, so they can be executed in parallel, but they lose the dependence on each other. So that's the architecture, and you can clearly see it's able to predict three steps into the future at the same time. They also do different adjustments, where they say: we can also sacrifice a bit of fidelity to the original model by not requiring that we only accept when the suggestion is exactly the best suggestion that would have been decoded by the greedy model. What we could do instead is accept a suggestion if it's in the top k, if it's basically good enough, or, when you have some sort of distance metric, they say the distance between our suggestion and the maximum, what would have been best by the greedy model, should be smaller than some constant epsilon. That way you sacrifice a bit of performance, but your suggestions will be accepted much more often, and thereby your speedup will be much higher. They also experiment with whether or not they should fine-tune the original model along with their model, and with knowledge distillation, where you basically have some teacher model and you train your model on the output of the teacher model. I don't want to go too far into this, since these are mostly things to make it work even better. And you can see here that, for example, on a machine translation task, the WMT 2014 English-German translation, regularly they get a BLEU score of 26, and here higher is better. You can see they get fairly sizable speedups while keeping the BLEU score fairly constant, so they almost speed up by 2x; but if they allow the BLEU score to go down a bit, they get a much higher speedup of like three; and if they do distillation and fine-tuning, they actually manage to keep up the performance even though they get very high speedups, up to like 5x, without dropping the BLEU score very much. So that's pretty impressive. Another experiment they do is image super-resolution, where you can see
here with regular they try to really keep exactly the original model output and it doesn't it doesn't speed it up too much but when they allow for a bit of a mistake to be made um so here this is image super resolution so values are between 0 and 255 and they allow epsilon equals to two um of that so that's this kind of less than 1% error on the individual pixel then they get a speed ups of 7x or something like this and you can see in this region here that when the k is 4 and k is the number of steps that you decode ahead so and the mini mean block size is 3.75 that means on average 3.75 steps ahead or accepted which means basically their their suggestions are almost always good enough to be accepted so they get this massive speed up by basically being able to jump uh these decoding steps um yeah so they have a bunch of other results here they're show their wall clock time speed up since all iteration speed up is well but if you have to pay in huge computational cost it's not so good but they also show that they have a big kind of wall clock speed up up to up to 4x here in super resolution and over 3x in translation so that's a pretty cool paper they give some examples here onto more tables uh some examples of their super resolution and yeah if this might be something for you then use it it's i think it's a pretty neat trick and yeah especially for production systems all right that was it bye bye | [{"start": 0.0, "end": 5.0600000000000005, "text": " Hi there, today we'll look at blockwise parallel decoding for deep"}, {"start": 5.0600000000000005, "end": 12.4, "text": " autoregressive models by Mitchell Stern, Noem Shazir and Jakob Ushkurai of UC"}, {"start": 12.4, "end": 17.44, "text": " Berkeley and Google Brain. So this is a bit more of an engineering paper than"}, {"start": 17.44, "end": 24.16, "text": " usual, which is which I find cool. It's basically an engineering trick to get"}, {"start": 24.16, "end": 30.32, "text": " these autoregressive models to decode faster while you can either preserve"}, {"start": 30.32, "end": 37.0, "text": " fully their performance or suffer a bit of a drop in performance while even"}, {"start": 37.0, "end": 47.8, "text": " speeding them up more. Alright, so let's dive in actually. The paper starts out"}, {"start": 47.8, "end": 52.6, "text": " with the description of what autoregressive models are and what decoding is"}, {"start": 52.6, "end": 60.120000000000005, "text": " in them. So let me try to quickly quickly explain this. So an autoregressive"}, {"start": 60.120000000000005, "end": 68.12, "text": " model. So basically we're talking about, let's say, language models. So"}, {"start": 68.12, "end": 72.24000000000001, "text": " language models are the classic examples of these models where you have a"}, {"start": 72.24000000000001, "end": 77.2, "text": " language model is a model that simply predicts the next word in a sequence. So"}, {"start": 77.2, "end": 88.64, "text": " you could have something like a cat sits on the and then here is blank. So the"}, {"start": 88.64, "end": 95.60000000000001, "text": " language model is asked to predict which word is the word that follows. The"}, {"start": 95.60000000000001, "end": 100.96000000000001, "text": " language model basically does this by predicting the probability distribution"}, {"start": 100.96, "end": 108.36, "text": " over the next word. So w t plus one, if this is t here, this is t minus one"}, {"start": 108.36, "end": 117.32, "text": " and so on, w t is given all the w's smaller equal than t. 
So all the words that"}, {"start": 117.32, "end": 123.24, "text": " come before should lead to the next word being predicted. So the language model"}, {"start": 123.24, "end": 129.51999999999998, "text": " is task to ask what is the next word in the sequence or what's the probability"}, {"start": 129.52, "end": 134.4, "text": " distribution over the next word and then you can simply pick the maximum"}, {"start": 134.4, "end": 143.20000000000002, "text": " probability word or something like this. So that's pretty standard so far. So"}, {"start": 143.20000000000002, "end": 148.4, "text": " what is the autoregressive part in here? So basically the autoregressive part"}, {"start": 148.4, "end": 155.4, "text": " means that in order for me to find this word here, this next word, I will look"}, {"start": 155.4, "end": 160.84, "text": " at all of these words here. And what does it mean then when I want to use this"}, {"start": 160.84, "end": 168.04000000000002, "text": " language model for generating a sentence? Let's say so now I've trained the"}, {"start": 168.04000000000002, "end": 171.6, "text": " language model. It's really good at predicting the next word. Now I want to"}, {"start": 171.6, "end": 178.28, "text": " actually use it to do something more interesting. So I I want it to generate a"}, {"start": 178.28, "end": 184.28, "text": " full sentence. What I do, let's say I pick the first word, the right I pick the"}, {"start": 184.28, "end": 188.44, "text": " first word. And I simply ask the language model, why what's the next word?"}, {"start": 189.0, "end": 194.64, "text": " Right? And the language model can do this. It can assess what's the"}, {"start": 194.64, "end": 199.72, "text": " probability distribution here? Overwards, and for example, give me some"}, {"start": 199.72, "end": 204.12, "text": " some distribution over words. And I pick the maximum one, I say, okay, the"}, {"start": 204.12, "end": 213.28, "text": " maximum one here is house, okay, the house. The house. And then I go back and"}, {"start": 213.28, "end": 218.08, "text": " ask the language model, well, what's the next word? Then, see clearly, you're a"}, {"start": 218.08, "end": 221.96, "text": " language model. So you can give me based on the two previous words, you can give me"}, {"start": 221.96, "end": 228.12, "text": " the next word, what's the next word? And the language model was maybe say the"}, {"start": 228.12, "end": 234.44, "text": " house is and so on. So you can see how you can generate a sentence by simply"}, {"start": 234.44, "end": 240.12, "text": " basically feeding the answer that the language model gives feeding it into the"}, {"start": 240.12, "end": 245.88, "text": " next step of predicting. So all of these now go into the next step. And once you've"}, {"start": 245.88, "end": 252.76, "text": " predicted the next step, the house is on. Once you've predicted that, then you can"}, {"start": 252.76, "end": 257.76, "text": " take that and in conjunction with everything you've predicted so far to predict"}, {"start": 257.76, "end": 261.76, "text": " the next step. So you can use the language model that is trying to predict the"}, {"start": 261.76, "end": 266.44, "text": " next word to predict an entire sentence. 
And the other aggressive part basically"}, {"start": 266.44, "end": 271.76, "text": " means that its own predictions will serve as the basis for the next predictions."}, {"start": 271.76, "end": 278.56, "text": " And this introduces a fundamental problem, namely that I have to basically wait"}, {"start": 278.56, "end": 286.92, "text": " for one prediction. So I have to wait here for is before I can predict on. And this"}, {"start": 286.92, "end": 294.28, "text": " means if I have a, I basically can't help but so if this is my language model,"}, {"start": 294.28, "end": 298.96, "text": " it's a box. I can't help but go to the language model. Wait for a response."}, {"start": 298.96, "end": 303.52, "text": " Okay, then go to the language model again. Wait for a response again. This is"}, {"start": 303.52, "end": 309.55999999999995, "text": " inherently sequential nature here where I have to do like M steps if if M is the"}, {"start": 309.55999999999995, "end": 317.67999999999995, "text": " length of the sentence that I want. And we can't make use of batching. Normally."}, {"start": 317.67999999999995, "end": 322.47999999999996, "text": " So usually what you do during training during training, you have a whole bunch"}, {"start": 322.48, "end": 341.48, "text": " of data, right? You have the cat sits on the mat. You have the house. The house is blue."}, {"start": 341.48, "end": 346.20000000000005, "text": " So I can generate just from these two sentences, I can generate a bunch of"}, {"start": 346.20000000000005, "end": 351.36, "text": " training example. I can ask I can this is a training example where the input is"}, {"start": 351.36, "end": 358.72, "text": " the cat and it's meaning to predict sits. And then this is a training example where"}, {"start": 358.72, "end": 364.28000000000003, "text": " the input is the cat sits and the language model has to predict on. This here is a"}, {"start": 364.28000000000003, "end": 369.72, "text": " training example. This is a training example. So I can junk this up into a whole"}, {"start": 369.72, "end": 375.52000000000004, "text": " bunch of training examples and all of those I can write I can feed in parallel"}, {"start": 375.52, "end": 382.44, "text": " into a big matrix. I can all put them here and then run this thing through my"}, {"start": 382.44, "end": 387.28, "text": " language model in training mode because each of them is already like is in the"}, {"start": 387.28, "end": 393.08, "text": " corpus. I can I can I can I can batch the training but I can't batch the"}, {"start": 393.08, "end": 397.28, "text": " prediction because what we've seen before because inherently the next"}, {"start": 397.28, "end": 401.91999999999996, "text": " predicting the next word depends on the last word that the model itself has"}, {"start": 401.92, "end": 407.44, "text": " output. So there is no no training corpus around since we're we're not training."}, {"start": 407.44, "end": 411.92, "text": " Yeah, so this is the fundamental problem and these authors tackle this problem."}, {"start": 411.92, "end": 419.76, "text": " Say how can we make this faster this decoding? 
So they introduce greedy decoding"}, {"start": 419.76, "end": 427.40000000000003, "text": " here where they say okay this will just we've just seen the probability of the"}, {"start": 427.4, "end": 434.71999999999997, "text": " next word is like the maximum the maximum log probability here in that case"}, {"start": 434.71999999999997, "end": 441.12, "text": " if the model predicts a lot of probability over the words that we've input"}, {"start": 441.12, "end": 446.88, "text": " so far right and this X here is so this is for example a translation task a"}, {"start": 446.88, "end": 452.2, "text": " machine translation task of the X would be the source language sentence so maybe"}, {"start": 452.2, "end": 459.8, "text": " like a French sentence and the Y smaller equal to J would be the so far decoded"}, {"start": 459.8, "end": 465.24, "text": " English sentence if we're trying to translate to English and the Y J plus one"}, {"start": 465.24, "end": 468.76, "text": " would be the next word that we're trying to predict in the English sentence"}, {"start": 468.76, "end": 473.84, "text": " given the English sentence so far and the French sentence the total French"}, {"start": 473.84, "end": 481.15999999999997, "text": " sentence. So greedy decoding just does this one step after another and we'll"}, {"start": 481.16, "end": 489.40000000000003, "text": " we try to go to what they call blockwise parallel decoding so we can just"}, {"start": 489.40000000000003, "end": 494.24, "text": " jump to the graphics straight away because what they do is pretty straightforward"}, {"start": 494.24, "end": 499.96000000000004, "text": " and is best illustrated in this graphic actually so they go from the from this"}, {"start": 499.96000000000004, "end": 506.92, "text": " situation where they already have this here they have a saw a dog ride this is"}, {"start": 506.92, "end": 515.48, "text": " the sentence that has been decoded so far and we'll we have to try to complete"}, {"start": 515.48, "end": 520.5600000000001, "text": " it of naturally we'll ask what's the next word but they say okay what if we"}, {"start": 520.5600000000001, "end": 525.5600000000001, "text": " could predict not only the next word from this but the word two positions"}, {"start": 525.5600000000001, "end": 531.52, "text": " away or three positions away we could do this all at the same time right I mean"}, {"start": 531.52, "end": 535.32, "text": " I can certainly build a model a language model that doesn't only predict the"}, {"start": 535.32, "end": 542.7600000000001, "text": " next word predicts the word after that as well though of course if then the"}, {"start": 542.7600000000001, "end": 548.2800000000001, "text": " this word the predictor for this word still only gets this as an input so this"}, {"start": 548.2800000000001, "end": 554.0400000000001, "text": " is the important thing here so the the part of the model that predicts the is two"}, {"start": 554.0400000000001, "end": 560.84, "text": " words away doesn't isn't being informed that this word is being produced here"}, {"start": 560.84, "end": 567.44, "text": " so naturally you would expect the quality to be worse because the the word one"}, {"start": 567.44, "end": 571.48, "text": " position away two positions away and three positions away are each predicted"}, {"start": 571.48, "end": 576.6800000000001, "text": " basically independently of each other just from the from the source context so"}, {"start": 576.6800000000001, "end": 583.2, "text": " there's no there is no you can't 
expect like a coherency between the words"}, {"start": 583.2, "end": 590.4000000000001, "text": " or not not a lot so this is the the fundamental trade-off with such a model"}, {"start": 590.4, "end": 594.0, "text": " you can predict farther into the future at the same time but then these"}, {"start": 594.0, "end": 599.24, "text": " predictions can't basically depend on each other and this degrades your"}, {"start": 599.24, "end": 604.04, "text": " performance quite a bit so what these authors do is to remedy that they say"}, {"start": 604.04, "end": 609.12, "text": " well these these things here we can I mean we can produce a bunch of them"}, {"start": 609.12, "end": 614.92, "text": " right since all that's required as an input is this we can actually produce"}, {"start": 614.92, "end": 620.8399999999999, "text": " like we can produce a batch of them at the same time so we can produce one two"}, {"start": 620.8399999999999, "end": 625.12, "text": " and three words into the future and we can do this like a hundred times in"}, {"start": 625.12, "end": 630.64, "text": " parallel no problem all right and we can we can sample this we don't have to"}, {"start": 630.64, "end": 635.88, "text": " always take the most likely word we can actually sample a bunch into the future"}, {"start": 635.88, "end": 645.64, "text": " and this now gets smarter because now I have a list of 100 basically suggestions"}, {"start": 645.64, "end": 651.0, "text": " of what the continuation here could be right I have I take this not as a"}, {"start": 651.0, "end": 658.36, "text": " given but I take these outputs as suggestions and then I can have another model"}, {"start": 658.36, "end": 665.68, "text": " that this is called verify here can have another model that scores all of these"}, {"start": 665.68, "end": 671.56, "text": " different all of these different decodings in parallel both of these can be done"}, {"start": 671.56, "end": 677.16, "text": " by the same model we saw the language model can be either used to predict or to"}, {"start": 677.16, "end": 683.5999999999999, "text": " score something since it inherently predicts the probability of sequences or of"}, {"start": 683.5999999999999, "end": 692.56, "text": " following words we can we can let it output this probability all in parallel so"}, {"start": 692.56, "end": 696.8, "text": " this this also can count as a score what I'm trying to say you can you can since"}, {"start": 696.8, "end": 703.28, "text": " the language model is constructed as a as output in probabilities anyway like"}, {"start": 703.28, "end": 709.5999999999999, "text": " such we can use it both to predict the next word and also if we have a"}, {"start": 709.5999999999999, "end": 717.1199999999999, "text": " suggestion we can use it to score that and to say okay how likely is that and"}, {"start": 717.12, "end": 724.68, "text": " then what we can make sure is that the suggestion we are looking for the"}, {"start": 724.68, "end": 731.52, "text": " suggestion basically that has the highest score and if you want to be really"}, {"start": 731.52, "end": 736.96, "text": " true to your original model you say I want to look for the suggestion that"}, {"start": 736.96, "end": 743.72, "text": " has the the maximum that would have had the maximum score had I decoded one by"}, {"start": 743.72, "end": 751.32, "text": " one so then basically you you retain the original performance and you gain a"}, {"start": 751.32, "end": 758.12, "text": " speed up as long as the what the greedy decoding would have 
produced is in your"}, {"start": 758.12, "end": 762.9200000000001, "text": " suggestion in your box of suggestions that you produce as long as that's in"}, {"start": 762.9200000000001, "end": 767.08, "text": " there you gain a speed up if that's not in there then you can always you always"}, {"start": 767.08, "end": 771.32, "text": " have the the one word ahead model because that's you have that anyway you"}, {"start": 771.32, "end": 777.2800000000001, "text": " predict the next word anyway so in case none of these suggestions work out you"}, {"start": 777.2800000000001, "end": 783.12, "text": " still have this one word prediction basically which is the the model you"}, {"start": 783.12, "end": 791.5600000000001, "text": " started with so at worst case you're as fast as the the greedy model and in best"}, {"start": 791.5600000000001, "end": 798.2, "text": " case you always your your suggestions are so good that they're always the one"}, {"start": 798.2, "end": 803.4000000000001, "text": " that would have been decoded anyway so you can basically in this case do three"}, {"start": 803.4000000000001, "end": 811.72, "text": " steps at once all right so this verify step here is is shown here and you see"}, {"start": 811.72, "end": 816.0400000000001, "text": " it will decode now this is just one suggestion keep him and they can produce"}, {"start": 816.0400000000001, "end": 823.24, "text": " many suggestions at the same time if if there's memory or and they can actually"}, {"start": 823.24, "end": 828.44, "text": " they can score each of this so they can score this they can score this and they"}, {"start": 828.44, "end": 837.72, "text": " can score this also independently as a batch so they can do this in parallel and"}, {"start": 837.72, "end": 843.16, "text": " here you see yeah here is the executed in parallel so the model will go and"}, {"start": 843.16, "end": 847.64, "text": " will score this word in and say oh this would have been this is the argmax of"}, {"start": 847.64, "end": 853.0, "text": " the greedy decoding anyway and it can also score this step and say aha"}, {"start": 853.0, "end": 858.76, "text": " given that there is an in that this the the is the argmax anyway right and you"}, {"start": 858.76, "end": 864.6, "text": " can score this step and say a given that there's in the the argmax would have been"}, {"start": 864.6, "end": 870.6, "text": " car and so that's not bus so we reject the suggestion but we keep that part"}, {"start": 870.6, "end": 877.56, "text": " of the suggestion and say okay the in the is basically what would have been"}, {"start": 877.56, "end": 886.5999999999999, "text": " decoded anyway according to the greedy decoding so we can basically accept"}, {"start": 886.5999999999999, "end": 895.4799999999999, "text": " this here and continue from there this is the accept step here so this is basically"}, {"start": 896.1999999999999, "end": 902.1999999999999, "text": " so you can see in this one step which yeah we'll call one decoding step we have"}, {"start": 902.2, "end": 910.6800000000001, "text": " basically done two of the greedy decoding steps in one go so by predicting"}, {"start": 910.6800000000001, "end": 915.6400000000001, "text": " into the future and then selecting the one that agrees with the original model"}, {"start": 916.6800000000001, "end": 922.0400000000001, "text": " because we can the fundamental thing is we can score in parallel but we can"}, {"start": 922.04, "end": 932.36, "text": " greedily produce not in parallel all right so they 
actually push this further by"}, {"start": 932.36, "end": 940.52, "text": " also eliminating one of the one of the evaluations here by combining basically"}, {"start": 940.52, "end": 947.64, "text": " the the next predict step with the previous verify step and it's it's pretty"}, {"start": 947.64, "end": 955.08, "text": " cool to look at that so we're in the same situation you have this and you suggest"}, {"start": 955.08, "end": 963.0, "text": " this continuation and then the score if model again we'll go we'll go here"}, {"start": 963.56, "end": 970.4399999999999, "text": " but while you verify you also do the next predict at the same time since you"}, {"start": 970.4399999999999, "end": 975.48, "text": " you've built your model since it's the same model and this model every time you"}, {"start": 975.48, "end": 982.9200000000001, "text": " executed it outputs a distribution over the next set of positions you might as"}, {"start": 982.9200000000001, "end": 990.2, "text": " well take the outputs of it right so when you then decide to accept this here you"}, {"start": 990.2, "end": 996.6, "text": " will already have the outputs computed for the next three positions so this you can"}, {"start": 996.6, "end": 1002.52, "text": " feed directly into this next predict step you basically don't have to execute it you simply"}, {"start": 1002.52, "end": 1008.6, "text": " go to the one you've accepted and then you look at the outputs that you get anyway"}, {"start": 1009.64, "end": 1019.24, "text": " from this model and use them so you might ask okay which which how does a model look that like"}, {"start": 1019.24, "end": 1024.92, "text": " scores and predicts into the future and this the answer is here it's a bit out of order I would"}, {"start": 1024.92, "end": 1031.24, "text": " have maybe like this more previously but in any case this is what they do so they they use a"}, {"start": 1031.24, "end": 1036.92, "text": " transformer architecture and you have to imagine it starts down here actually there is a huge network"}, {"start": 1036.92, "end": 1044.36, "text": " down here right this is just the output layer so there's a giant transformer network down below"}, {"start": 1044.36, "end": 1051.16, "text": " and it produces this output representation now normally from this representation you would go"}, {"start": 1051.16, "end": 1058.84, "text": " to this what's called p layer here this is a output vocabulary projection so this has one entry"}, {"start": 1058.84, "end": 1068.36, "text": " for each of the words in your vocabulary so the a cat and so on and you would then for each one"}, {"start": 1068.36, "end": 1077.24, "text": " predict a probability so with this representation you basically project it onto this vocabulary and"}, {"start": 1077.24, "end": 1083.72, "text": " predicted probability distribution over the next word but what they do is they say no no we"}, {"start": 1083.72, "end": 1089.16, "text": " not only need the next word we need the next three words so let's actually split this output"}, {"start": 1089.16, "end": 1097.16, "text": " signal into three output signals and they do this by introducing this this hidden feed forward"}, {"start": 1097.16, "end": 1104.6000000000001, "text": " layer here or a hidden transformer layer it's a it's a hidden layer yeah we insert a signal feed"}, {"start": 1104.6, "end": 1114.76, "text": " forward layer with hidden size okay so they insert a hidden layer and then they also add these"}, {"start": 1114.76, "end": 1121.08, "text": " skip connections 
here right they add the skip connections which basically just means they feed"}, {"start": 1121.08, "end": 1129.9599999999998, "text": " through this output directly to to here and add it to that so basically the feed forward layer"}, {"start": 1129.96, "end": 1140.28, "text": " needs to transform this this output here into the vocabulary input one step ahead two steps ahead"}, {"start": 1140.28, "end": 1145.72, "text": " and three steps and and you can see here the those are independent right they don't depend on"}, {"start": 1145.72, "end": 1150.68, "text": " each other there's nothing feeding back p1 here into the decision of p2 so they can be executed"}, {"start": 1150.68, "end": 1158.8400000000001, "text": " in parallel but they lose the the dependence on each other all right so that's um that's the"}, {"start": 1158.84, "end": 1164.6, "text": " architecture and you can clearly see here it's able to predict three steps into the future at"}, {"start": 1164.6, "end": 1174.84, "text": " the same time so yeah all right so they they they also do different adjustments where they say"}, {"start": 1174.84, "end": 1183.56, "text": " now yeah we can also kind of sacrifice um a bit of a bit of the fidelity to the original model by"}, {"start": 1183.56, "end": 1191.72, "text": " not requiring that the basically we don't we don't only accept when the suggestion is the perfect"}, {"start": 1191.72, "end": 1197.08, "text": " best suggestion that would have been decoded by the greedy model but what we could do is we could"}, {"start": 1197.08, "end": 1204.76, "text": " just if it's in the top k we could accept it if it's in the if it's good enough basically one of"}, {"start": 1204.76, "end": 1209.8799999999999, "text": " the suggestions that we have is good enough then we'll accept it or when you have like some sort"}, {"start": 1209.88, "end": 1215.8000000000002, "text": " of distance metric they say here so the distance between our suggestion and the maximum so the"}, {"start": 1215.8000000000002, "end": 1222.3600000000001, "text": " what would have been best by the greedy should be smaller than some constant epsilon and that way"}, {"start": 1222.3600000000001, "end": 1227.72, "text": " you can sacrifice a bit of performance but your suggestions will be accepted much more often and"}, {"start": 1227.72, "end": 1233.3200000000002, "text": " thereby your your speed up will be much higher and they also experiment with whether or not they"}, {"start": 1233.32, "end": 1240.52, "text": " should fine tune the original model along with their model and also the experiment with knowledge"}, {"start": 1240.52, "end": 1249.32, "text": " distillation where they basically um have like some some teacher model and you train the their"}, {"start": 1249.32, "end": 1253.6399999999999, "text": " your model on the output of the teacher model don't want to go too far into this since these these"}, {"start": 1253.64, "end": 1263.5600000000002, "text": " are mostly kind of um things to make it work even better and you can see here that um this is for"}, {"start": 1263.5600000000002, "end": 1271.48, "text": " example a machine translation task so this is the WMT 2014 English German translation and"}, {"start": 1271.88, "end": 1277.96, "text": " there's a regularly they get a bluestaur of 26 and here a higher is better and if you can see"}, {"start": 1277.96, "end": 1285.88, "text": " they get a fairly sizable speed ups by keeping the bluestcores fairly constant so they they"}, {"start": 1285.88, "end": 1293.88, "text": " 
almost speed up by 2x but if they allow the bluestcores to go down a bit um they get a much higher"}, {"start": 1293.88, "end": 1300.04, "text": " speed up of like three and then if they do like distillation and fine tuning they actually manage to"}, {"start": 1300.04, "end": 1307.4, "text": " keep up the performance even though they get um very very high speed up so they get speed ups"}, {"start": 1307.4, "end": 1315.3200000000002, "text": " until like five x by not dropping the bluestcores very much so that's that's pretty impressive"}, {"start": 1317.0800000000002, "end": 1324.6000000000001, "text": " another experiment they do is image super resolution where you can see here with regular they try"}, {"start": 1324.6000000000001, "end": 1330.52, "text": " to really keep exactly the original model output and it doesn't it doesn't speed it up too much"}, {"start": 1330.52, "end": 1338.76, "text": " but when they allow for a bit of a mistake to be made um so here this is image super resolution"}, {"start": 1338.76, "end": 1346.68, "text": " so values are between 0 and 255 and they allow epsilon equals to two um of that so that's"}, {"start": 1346.68, "end": 1354.92, "text": " this kind of less than 1% error on the individual pixel then they get a speed ups of 7x or something"}, {"start": 1354.92, "end": 1361.72, "text": " like this and you can see in this region here that when the k is 4 and k is the number of steps"}, {"start": 1361.72, "end": 1367.96, "text": " that you decode ahead so and the mini mean block size is 3.75 that means on average"}, {"start": 1369.0800000000002, "end": 1376.44, "text": " 3.75 steps ahead or accepted which means basically their their suggestions are almost always"}, {"start": 1376.44, "end": 1382.2, "text": " good enough to be accepted so they get this massive speed up by basically being able to jump"}, {"start": 1382.2, "end": 1390.3600000000001, "text": " uh these decoding steps um yeah so they have a bunch of other results here they're show their"}, {"start": 1390.3600000000001, "end": 1396.04, "text": " wall clock time speed up since all iteration speed up is well but if you have to pay in huge"}, {"start": 1396.04, "end": 1402.04, "text": " computational cost it's not so good but they also show that they have a big kind of wall clock"}, {"start": 1402.04, "end": 1410.44, "text": " speed up up to up to 4x here in super resolution and over 3x in translation so that's a pretty"}, {"start": 1410.44, "end": 1416.44, "text": " cool paper they give some examples here onto more tables uh some examples of their super resolution"}, {"start": 1417.24, "end": 1425.64, "text": " and yeah if this might be something for you then use it it's i think it's a pretty neat trick"}, {"start": 1425.64, "end": 1440.8400000000001, "text": " and yeah especially for production systems all right that was it bye bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=pPBqM4CKjUU | Discriminating Systems - Gender, Race, and Power in AI | TL;DR:
- There exists both an unequal representation of people in the AI workforce and examples of societal bias in AI systems.
- The authors claim that the former causally leads to the latter and vice versa.
- To me, the report does not manage to make a strong enough argument for that claim.
- I find the statements made quite dishonest at times.
https://ainowinstitute.org/discriminatingsystems.pdf
Authors:
Sarah Myers West, Meredith Whittaker, Kate Crawford | Hi there. Today we're looking at Discriminating Systems: Gender, Race and Power in AI by Sarah Myers West, Meredith Whittaker and Kate Crawford of the AI Now Institute, which is part of, or associated with, New York University. This is not so much a paper as a report: it summarizes current literature and is also kind of an opinion piece slash recommendation-giving document. So we'll dive into it. As you can see from the index it's quite a long report and we don't have time to go into all of it; actually we don't have time to go into most of it. I just hope to point out what the main arguments and themes in the report are, what it's trying to say, pick out some interesting things, summarize it to the best of my ability, and also give a little critique. Let me actually go ahead and try to state the core argument that the report is trying to make, because it's not really clear from reading it; you have to read the whole thing and then it kind of becomes clear what the argument is, I feel, though they somehow state it in the introduction numerous times in various ways, so I might just not be an attentive enough reader the first time. But all right, here's the argument, and I really hope I'm representing this correctly. We currently have a problem that sometimes AI systems can exhibit what we usually call bias, and we don't mean mathematical bias like the bias-variance trade-off; we mean bias in a societal sense, let's say bias against certain types of people where it shouldn't exist. So for example, let me draw an AI system; I'll just draw a little computer screen with a light bulb, because it's smart. This is an AI system, and they give numerous examples. One example they give is a face recognition algorithm that is much more accurate on faces of white males as opposed to darker-skinned females. So let me draw two curves to represent that these distributions are unequal, and so the AI system exhibits some bias with respect to some kinds of people, especially along protected attributes; in this report they focus mainly on gender and race, so that's what we're going to talk about. That is observation one. The second thing they observe (I'm going to draw some generic people here that represent the workforce of AI; the AI workforce is all the people that work on AI, be that university researchers or people within companies building or deploying AI products) is that there is an unequal distribution among the AI workforce: most notably, it's predominantly males who work on AI, and white people are also over-represented compared to the world population at large. So those are the two observations they make. And now what they claim is that the unequal representation in the workforce is causing the bias in the AI systems; they're basically saying these AI systems are biased because the workforce is unequally distributed. And they also claim, in a less powerful sense I feel, that there is a loop: the bias in the AI systems then leads back to an even more unequal distribution of the workforce.
So the core argument, as they set out to do in the introduction and also claim to have done in the conclusion, is to demonstrate these two directions in a causal way: the systems are biased because there is an unequal representation in the workforce, and that feeds back. The argument is that if you want to fix the bias here, you will have to fix it by making the workforce more what they call diverse, so less unilaterally distributed towards white males. That's kind of the final conclusion: if you read the report and the recommendations, that's mainly what they're going for. My opinion, having read the report a couple of times, is that as I see it they really don't demonstrate these links. They give examples of this and examples of that: they show that the workforce is unequally distributed, they show that AI systems can exhibit such bias, but they never actually show these links, in my opinion. If you make the claim that in order to fix the bias in AI systems you must fix the unequal representation in the workforce, I would need an argument that says: because there is unequal representation, therefore A, therefore B, therefore C, therefore bias, an actual chain of reasoning to follow. It's just not there. They simply show parallels: they show that these two things exist and list example after example of each, but I don't think they make this argument; and for the other direction they don't really make the argument either, except in like one case, if you give them the benefit of the doubt. What I also think is that the article (if you read it, and I encourage you to read it if you have some time) makes a lot of sense if you have already accepted its conclusion. I feel this is a text where the confirmation bias is so high, just in the way it's written, that it must make a lot of sense to someone who's already in on the conclusion; but to someone who isn't sold yet, like myself, it's just not convincing at all. The second thing is that it very much feels like this isn't a discovery, but rather that someone set out with the goal of: I want companies to hire or promote more of certain kinds of people, to become more diverse, and now I'm going to find reasons for this; and the reason is, oh, look at this bias here, it's caused by this other thing, and therefore we must fix this other thing. It very much feels like someone setting out with the conclusion already in mind rather than an honest investigation. But read it for yourself. I can't prove the absence of an argument without reading every single line, and I can't go through every single line here because it would get very long and boring, but I've read it numerous times with an open mind, really willing to be convinced that there is an argument in there, and I don't think there is, or at least not a very strong one.
Alright. This first part, the research findings, is more or less a summary, and we'll get to these things as they become important. Then they state the recommendations right at the beginning, so actually you'd have to read the article first; this is kind of more of an abstract section, but since it's right here, we'll jump right into it. So these are the recommendations, and yes, I've claimed they don't really show a connection: they just show examples of this and examples of that and draw parallels, and this is reflected in every single section, including here in the recommendations, where they have recommendations for improving workplace diversity and recommendations for addressing bias and discrimination in AI systems. In my view, if you make this argument, you should also make recommendations for breaking these links, or argue why they can't be broken. Alright, let's jump into some of them, and it really is a mixed bag. Some recommendations I am really in favor of right from the start; you don't even need the article for those. Here: publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted and the actions taken. It's known that, especially in these larger companies, sexual harassment claims often go down in bureaucracy or are kind of hushed under the table. What you have to recognize is that a human resources department of a large company isn't there to serve the human resources; it's there to serve the company providing human resources. That's why a sexual harassment claim to an HR department is just a potential lawsuit, and that's why they don't want to take it seriously, except that it must go away really quickly. So I think to force, or to ask, companies to be more transparent and to take accusations of sexual harassment, assault and discrimination more seriously is a very valuable goal, and I fully support this. Also here: commit to transparency around hiring practices, particularly regarding how candidates are leveled, compensated and promoted. The larger the company gets, the less transparent this process usually becomes, the more bureaucratic, the more people are able to game it and distort it, so I feel it's always good to be transparent about: okay, this person provides this much value to the company, therefore they should be compensated according to that, or at least be transparent about it. So these are the kinds of recommendations I like. Then there are recommendations going in a really different direction, something like this: change hiring practices to maximize diversity; and this is reflected in other points: increase the number of people of color, women and other underrepresented groups at senior leadership levels of AI companies across all departments. These things are usually within company diversity goals and so on, and they don't really say how to do it, so as such they're not really recommendations yet, they're more like goals. But recommendation seven, I think, is the crucial one: ensure executive incentive structures are tied to increases in hiring and retention of underrepresented groups.
This is a bit of coded language, but executive incentive structures tied to hiring and retention of underrepresented groups basically means: if you are a manager or someone in charge of hiring or promoting, and you hire or promote an underrepresented person, which here, since they talk about gender and race, means a person of color or a woman, you will be compensated more. At the end of the year you will somehow have more money, more bonus, more base compensation, more equity. So this recommendation is a direct call to hire based on race and gender; it is a direct call to racist and sexist hiring, to discriminating between people according to their skin color and their gender. How is this okay with anyone? How are people able to state this in a high-profile report like this and get away with it without being criticized? It directly calls for people to be treated according to their gender and race, about as directly as you can go without getting into actual legal trouble. I am really against such practices; I just don't know how this can ever be thought of as a good thing by anyone. In my mind this recommendation and the transparency recommendation also run counter to each other: if I commit to transparency about how people are compensated and promoted, I suppose I can now transparently commit to being racist, but if I say I am going to compensate and promote people based on how much value they provide to the company, I would much rather have that than compensating and promoting people based on skin color. Alright, so let's actually jump into the report. The recommendations for addressing bias and discrimination in systems are fairly general and common, so as with much of the report, we will skip them. The introduction starts out with: there is a diversity crisis in the AI industry. They give some numbers, that women make up about 15% of AI research staff at Facebook and 10% at Google, fairly well-known statistics about how the AI field is currently skewed in terms of gender and race. Then they claim, in bold: the diversity problem is not just about women, it's about gender, race, and most fundamentally about power; it affects how AI companies work, what products get built, who they are designed to serve, and who benefits from their development. I find this word power, and this notion of power, a lot in this report; it appears again and again, power dynamics, power dynamics among groups. It paints a worldview in which different gender and race groups struggle against each other to gain power, and whoever is in power tries to remain in power in alliance with their gender and race group and to keep the other groups down. I am not sure that is the correct view of the world. In my mind the world is made up of individual people who want to achieve something for themselves and who would like to prop themselves up.
And this is not discrediting that some people have it harder because of their gender or race, but to see the entire world as a power struggle between these groups seems wrong to me. I am not going to point out every place this power wording appears, but it appears a lot and it really shapes how the report reads. Consider: if you are a white male, and the field is currently, say, 90% white males, and you have ten hours to spend, you can either choose to put down groups that you are not part of, or you can invest those ten hours in propping up yourself. If I am a white male I profit minimally from keeping other groups down, because I still have to compete with the roughly one billion other white males out there; keeping anyone else down doesn't help me. And who, except the most fringe people, feels allegiance to their race or gender rather than to the people they admire, respect and like to work with? If I have ten hours today, I would rather spend them on propping up myself compared to everyone else, and I don't care what gender or race anyone else is. To me that is a much more plausible worldview, but be aware that this report really takes on the language of groups, of power between groups, of groups trying to gain power, keep power, and keep others from having it. Alright. They say: to date, the diversity problems of the industry and the issues of bias in the systems it builds have tended to be considered separately; we suggest that these are two versions of the same problem; issues of discrimination in the workforce and in system building are deeply intertwined challenges, and moreover, tackling the challenges of bias within technical systems requires addressing workforce diversity, and vice versa. I think this is exactly the argument as I described it, and they restate it multiple times in slightly different ways, but this is the core, and I really don't think I am misrepresenting the article here: they set out to say that the unequal representation in the workforce and the bias in some AI systems are causally linked to each other, and that tackling one requires tackling the other. If I am misrepresenting them, let me know, but I really think I am representing their argument. What they then do, as I said, is give examples of the one and of the other, and they also spend a lot of effort on discrediting attempts to solve the problem of bias in a different way. They point to this a little in the introduction: in the face of growing evidence, the AI research community and the industry producing AI products have begun addressing the problem of bias by building on the body of work on fairness, accountability and transparency. Fairness, accountability and transparency research concerns exactly these issues: on one side there is research showing that some products are unfair or untransparent, and on the other side there is research trying to devise algorithms that are more fair according to some notion, or more accountable and transparent, meaning the algorithm can say why it made a certain decision rather than being a deep learning system you have no real insight into. These are active fields of research and definitely very interesting to look into.
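To make concrete what "fair according to one of various mathematical definitions" can mean, here is a minimal sketch, not taken from the report and not from any particular production system, that checks two standard group-fairness notions on made-up model decisions; all numbers and variable names are purely illustrative.

```python
# A minimal sketch of two common group-fairness checks from the fairness
# literature; the decisions, outcomes and protected attribute below are invented.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # model decisions (e.g. "hire")
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 0])   # actual outcomes
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # protected attribute (two groups)

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_pred, y_true, group):
    """Difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

print("demographic parity gap:", demographic_parity_gap(y_pred, group))
print("equal opportunity gap :", equal_opportunity_gap(y_pred, y_true, group))
```

A "proof" of fairness in the sense discussed below would amount to showing that gaps like these are zero, or provably bounded, for the model that is actually deployed.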
They then write, roughly, that we have been adjusting AI systems to produce results deemed fair by one of various mathematical definitions. You can already see in the language that they don't really like this research, and in this report they try to discredit it, or at least claim that it doesn't solve the whole problem, because their point is of course that you have to address the diversity issue in the workforce in order to fix the problems. To this I just want to say: no. You can criticize the fairness, accountability and transparency field for not having fully solved the problem yet, but in principle, if I am delivered an algorithm, and the fairness literature has been applied to it, and someone tells me, I guarantee you, here is a proof that the algorithm is fair, then I really don't care who made that algorithm. If the bias is gone, the problem is fixed, and I don't care who fixed it, whether that person is black, white or purple. They really try to make the counterargument that this is not enough, but I claim that if you can actually solve the fairness problem technically, then you have solved the fairness problem. The only thing you can claim is that the field isn't good enough yet, not that it is a fundamentally flawed approach; that is the argument they would have to make, and I don't think they succeed in making it here. They go on to say we should expand our view to consider not only how AI tools can be biased technically, but how they are shaped by the environments in which they are built and the people who build them. Again this focus on who builds the AI system. I don't care who builds it, I care what it does, in the same way that if I hear an argument for or against something, I don't care who makes the argument, I care what the argument says. This is like an ad hominem attack on an entire community; that is how this report comes across to me. They then say that currently, large-scale AI systems are developed almost exclusively in a handful of technology companies and a small set of university laboratories, spaces that in the West tend to be extremely white, affluent, technically oriented and male. So that is their fundamental problem: these spaces are skewed in one direction. Interestingly enough, their problem is not that these people all live within twenty miles of each other around San Francisco; that seems to be no problem at all, as long as enough people of color and women are inside those twenty miles. But that is the issue they point out. They go on, and I want to highlight this again: both within the spaces where AI is being created and in the logic of how AI systems are designed, so paralleling the two things, the costs of bias, harassment and discrimination are borne by the same people: gender minorities, people of color, and other underrepresented groups.
Then they also say: similarly, the benefits of such systems, from profit to efficiency, accrue primarily to those already in positions of power, who tend to be white, educated and male. They say this points to a systematic relationship between patterns of exclusion within the field of AI and the industry driving its production on the one hand, and the biases that manifest in the logics and applications of the technologies on the other. They try to make this connection by arguing that the costs and the benefits of these two things fall on overlapping groups of people. Again, it is just a parallel, and I don't even think it is true, because they end up arguing against themselves later. They shoot again against purely technically driven problem solving: they say that research requires looking at the gender and race categories within which humans think, and that, in short, studies of discriminatory systems need to ask who is harmed, who benefits, and who gets to decide. So it is who bears the cost, who gets the benefit, and who has the power. They also say: we seek to understand how AI disadvantages some, and we also consider how it works to the advantage of others; they want an analysis that acknowledges power relationships and centers equity and justice, the bigger picture. Keep that lens in mind, because it matters later. So they go into a section called which humans are in the loop: how workforces and AI systems interact. From the title you would think, okay, here is where they make the argument. They start by listing examples of how AI systems can be discriminatory, and the first is Amazon: Amazon had developed an experimental hiring tool to help rank job candidates; by learning from its past preferences, Amazon hoped the resume-scanning tool would efficiently identify qualified applicants by comparing their applications to previous hires; the system quickly began to downgrade resumes from candidates who attended all-women's colleges, along with any resumes that included the word "women's"; after uncovering this bias, Amazon engineers tried to fix the problem by directing the system to treat these terms in a neutral manner; the company eventually abandoned the tool when they were unable to ensure that the algorithm would not be biased against women; gender-based discrimination was built too deeply within the system, and into Amazon's past hiring practices, to be uprooted using a purely technical approach. Just the way this is written I find quite dishonest, so let's analyze what happened here. Their final claim is that gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach; this is one of their arguments that technical approaches don't help, because the Amazon engineers tried to fix the problem but were unable to ensure that the algorithm would not be biased against women. If you read this, you get a certain impression, but that is probably not what happened. What most probably happened is that Amazon built this tool and fed in its past hires, and we know about the issue of dataset bias, the bias inherent in datasets.
If your dataset is skewed, the AI tends to pick up on the skewed dataset and becomes skewed itself. I would actually argue that most or all of the examples they state here are examples of such biased datasets: the cause of the bias is the dataset the systems are trained on, not the person who ran the code, built the algorithm to train it, or built the deployment. But it doesn't even matter. You are Amazon, you built this tool, and you realize it discriminates against people with "women's" on their CV. That is pretty bad PR-wise, so you tell your engineers to fix the problem. The engineers go and fix the problem, come back and say, okay, we fixed it. Then you ask: engineers, can you ensure me that the algorithm is not biased against women? Because if even the slightest bias remains, if one journalist finds one example where a resume is down-ranked because it contains the word "women's", then we are screwed. And the engineers will say, no, we can't guarantee that, it is a deep learning system, we can't give you a proof that it is not biased. If you are a smart executive, at that point you scrap the tool, because the potential PR downsides are just huge, and probably you have also realized that the tool isn't that handy compared to your recruiters doing their job, because your recruiters might actually be good and have been doing this for a while. So the fact that this tool was scrapped is probably much more the result of a looming PR disaster. But independently of that, to say that gender-based discrimination was built too deeply within the system to be uprooted using a purely technical approach is just an attempt to discredit the technical way of going about solving this problem. I am pretty sure that if someone comes to me and says, here is this tool, and I can mathematically prove to you that it is not biased, then the problem is solved. And I really don't see how the person training the algorithm, or the person researching such an algorithm, has any influence over how the algorithm behaves, because they are not the ones making the dataset; or if they are, then they can make a better dataset, and if anyone comes along and makes a better dataset, that fixes the problem, and it doesn't matter what skin color the person who makes the better dataset has. So this link is just not demonstrated here, or anywhere, at all. But this Amazon example is the closest the report actually comes to making the point. Earlier I drew this picture, workforce and AI bias with arrows in both directions; here the AI system is used for hiring the workforce, so at least one could claim that this one direction is somewhat demonstrated. It is a weak case, I would argue, but it is the closest they come. And even to go in that direction you have to argue that the workforce somehow makes the AI system biased; no, the workforce influences the dataset. And how do you train a hiring AI? Optimally, you train it on performance: an employee has a performance over time, and the AI system looks at that performance over time. So even if the system is initially biased, because it learned from the recruiters, it will learn that if it always foregoes these women it gets less performance out of the workforce, and it should correct for that; if you train the AI system on a good metric, this problem tends to even itself out.
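Just to illustrate that point with a toy example: below is a minimal sketch, entirely synthetic and in no way Amazon's actual system, of a model that inherits bias when trained on biased past hiring decisions but not when trained on an outcome such as later job performance; all features, names and numbers are invented for illustration.

```python
# Toy demonstration: the same model, trained once on biased historical hiring
# decisions and once on an unbiased performance outcome. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                 # 0 = majority, 1 = minority
skill = rng.normal(0, 1, n)                   # true qualification, same distribution for both

# Historical decisions: partly skill, partly a penalty against group 1.
hired_in_the_past = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0
# Unbiased target: later performance depends only on skill.
performed_well = (skill + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])           # the model can even see the group feature

def hire_gap(labels):
    """Fit on the given labels, then compare hire probability for equally skilled people."""
    model = LogisticRegression().fit(X, labels)
    same_skill = np.zeros(1_000)
    p_majority = model.predict_proba(np.column_stack([same_skill, np.zeros(1_000)]))[:, 1].mean()
    p_minority = model.predict_proba(np.column_stack([same_skill, np.ones(1_000)]))[:, 1].mean()
    return p_majority - p_minority

print("gap, trained on biased past decisions:", round(hire_gap(hired_in_the_past), 3))
print("gap, trained on actual performance   :", round(hire_gap(performed_well), 3))
```

The only thing that changes between the two runs is the label the model is trained on, which is exactly the dataset argument made above: fix the data and the training target, and the bias largely disappears, regardless of who wrote the code.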
But again, this could be considered one point in the argument, although I think it is a very weak point, and it only works because the AI system here is actually used for hiring. The point they are making, I think, is a much larger one: that the general bias in AI systems contributes to the workforce imbalances. For that you somehow have to say that the AI system influences society at large, and that society at large then leads to the workforce being skewed, and that is just not strong enough in my opinion; the other direction isn't strong here either. And the examples only get weaker from here on. They go on to say this is just one of many examples that show how the function and the logic of a given technology echo the gender and racial dynamics of the industry that produced it. That is the claim they are making, that these systems echo the gender and racial dynamics, and they are actually making a stronger claim, namely a causal one. They give the other example of Amazon's Rekognition facial analysis service, which previously demonstrated gender and racial biases worse than those of comparable tools: it failed to detect darker-skinned women while being most proficient at detecting lighter-skinned men. They later go into this example again, where they basically also state that yes, this is an issue of the dataset, the dataset being comprised of many more white men. But then you have to make the turnaround argument and say, well, the dataset is a reflection of society, and part of society is the workforce, and again this argument only works if you already believe the conclusion; otherwise there is actually no argument there, or no solid one. Then they say that Amazon's initial response to such criticism has been to try and discredit the research behind it. Let's discuss this first. Amazon is of course the accused here, a multi-billion dollar company, and the criticism is very bad for them PR-wise, so they try to discredit the research. It is entirely understandable that this could be dishonest on Amazon's side; they are being attacked, and it is reminiscent of tobacco companies trying to discredit smoking research. But still, that doesn't by itself mean the criticism is right; it could actually be bad research. So you have to go and look at what Amazon is saying, what the research really does, and whether Amazon is right or wrong. I am completely open to Amazon being wrong here, but you still have to go look. And the citation given here does not point to Amazon's response; it points to a Medium article, and the Medium article does not include Amazon's response, and as far as I can tell does not even link it, maybe it links something that links it; basically the article only states that Amazon has been denying this or has been critical of it. If you write a sentence like "Amazon's initial response to such criticism has been to try and discredit the research behind it", I would at least expect the citation to lead me to Amazon's actual response, so that I can verify what they are saying.
That I am willing to chalk up to incompetence rather than malice. But then they go on to say: this reaction is evidence of the wider problem; the research was conducted by two well-regarded AI researchers who are women of color; by attempting to publicly discredit their expertise and research methods, Amazon is reinforcing the same kinds of prejudices and erasure that the research critiques. Here you go straight to the identity of the researchers, playing the race card straight out, and I think that is maximum dishonesty. Unless Amazon said something like, well, these women of color clearly have no idea what they are doing because they are women of color, this is just coded language, either for saying that you are not allowed to criticize people of color because they are a minority, or for saying that Amazon is racist and only criticizes the researchers because it doesn't take women of color seriously. Either way it is dishonest. Again, I am perfectly willing to accept that Amazon's critique of this research is wrong and not well intended, since they are the ones being attacked, but you still have to examine the critique, rather than saying, well, they are shooting against women of color, and therefore their counterargument is somehow irrelevant, or even racist. I find that dishonest. Moving on, they state a number of examples of bias and discrimination in the workforce, and a lot of the time they mix together the gender and race imbalance in the workforce with things like sexual harassment not being taken seriously by companies, and with gender or race pay gaps, which I am open to accepting exist and are even intertwined; I just want to tell you what is happening, because we are skipping a lot, and it is a mixture of these things. Then they say: these issues are systemic; there is a close relationship between these workplaces with discriminatory practices and discriminatory tools, a feedback loop that is shaping the AI industry and its tools. I think I have now demonstrated enough that I am really representing their argument as they intend it, namely that there are causal links and a loop between these two things. Again they shoot against the fairness literature, saying: from this perspective, locating individual biases within given technical systems and attempting to fix them by tweaking the system becomes an exercise in futility; only by examining discrimination through the lens of social logics, who benefits, who is harmed, and how, can we see the workings of these systems in the context of existing power relationships. So again they say these issues aren't technical, and technically fixing these systems won't help. And I agree that if that causal link actually existed, then technically fixing the system might not solve the problem; although I am not even sure about that, because if you technically fix a system like this, you technically break the causal link and thereby fix the problem. But in any case, this is based on the hypothesis that they have already demonstrated their conclusion, which they haven't, here or anywhere in the article.
So the next section goes into who makes AI. I don't know about you, but the previous section was titled how workforces and AI systems interact, and apart from the one instance of an AI system being used to hire the workforce, the one place where there could be a causal direction from the system's bias to the composition of the workforce, there isn't really anything in there that shows how the two interact, especially not in a causal way. Alright, the next section is called who makes AI, and it is broadly about the gender and race imbalances, the unequal representation, in the workforce, and we are going to skip it. There are diversity statistics, and a discussion that the diversity statistics companies publish aren't really accurate, or can be massaged by the companies, which is true; companies will always try to maximize their profits, even when they give out such a report, so critical thinking is definitely in order. The next section is called the discrimination feedback loop. If in the earlier section you felt like, here we are getting to the meat, then with this title you must feel like, okay, now we are actually going to see how this loop works, how the two things are really linked, how one causes the other and vice versa. So let's jump in. They say AI systems increasingly play a role in our social and political institutions, including education, healthcare, hiring and criminal justice; therefore we need to consider the relationship between the workplace diversity crisis and the problems with bias and discrimination in AI systems. No; I don't see the "therefore". If there is a relationship, then we need to consider whether there is a relationship, okay, granted. Then they say fairness, accountability and transparency research is playing an emerging role, and what they mean is the side of that research that shows there is a problem; as I said, there are two sides, one showing there is a problem in current systems and one trying to fix it, and they are very much fans of the side that shows there is a problem. They show some more of these problems, some of which we have already seen: Facebook's ad delivery system led to ads for housing and employment being shown in a discriminatory manner, and a 2019 study found significant racial bias in a widely used commercial algorithm used to determine whether patients will be enrolled in care management programs. These are examples of AI systems being biased. Then they say that taking a contextualized view, by which they mean anything beyond a purely technical approach to these problems, may enable a more extensive account of bias to emerge: future work could examine the politics of system design, study how AI systems work in situated realities, ask why a system was designed in a particular way, how it was constructed, whose interests it serves, and by which metrics its success or failure is assessed, rather than solely focusing on improving existing datasets or individual algorithms.
Yes, I agree, we always have to pay attention to these things, especially to the metrics by which success or failure is assessed. But a lot of the time this is rather straightforward. The metric, most often, especially in commercial applications, is money. If I have a system that recommends ads to people, shows them ads and personalizes them, I simply want to maximize my revenue; I want to sell someone something, and all I want to know is how likely it is that this person will buy that thing. Sometimes it is really valuable to consider what capitalism is. The system we are working in is a form of limited capitalism, but mostly capitalism, and capitalism is very greedy: all corporations basically want to do is make money. On the other side you have discrimination, meaning actively producing these unequal distributions. Sometimes the two go hand in hand; sometimes you can make more money by discriminating against a certain type of people, and that is a really bad scenario, that is really something where we need to take action. But a lot of the time these two things stand in opposition to each other. If I want to sell someone something, I maximize my profit by not caring, by accurately assessing how likely it is that the person buys the thing. If I start discriminating by skin color, saying I don't want a person with this skin color to be able to buy this product, I want to keep them down, then I forgo profit: even though this person could buy the thing, I give that up. The same with hiring: if I am in charge of hiring and I don't like people of a certain gender, even though they would actually be really good employees, I forgo that; I pay more for less qualified people just because I am biased and unjustifiably down-rank people of the gender I don't like. So you have to ask yourself: are people fundamentally more greedy or more discriminatory? If push comes to shove, would they rather have more money, or would they rather keep their own race and gender group in power? You have to ask this of corporations and of people, and in my experience and view, people are much, much more greedy than they are willing to give up money in order to discriminate. So if we look at the metrics by which the success or failure of AI systems is assessed, I would argue that a lot of the time the metrics are profit incentives, and especially if we look at dataset construction: if there is a skewed dataset that makes my AI system biased, that actually loses me money, and the company would profit a lot from building a better dataset. So looking at metrics actually makes a lot of sense to me, I am very much in favor of it, and I think that designing accurate metrics, and then getting the best possible data to maximize those metrics, will often by itself eliminate such forms of discrimination.
Again, there are situations where it doesn't, and we have to be very cognizant of those. They go into this and say we should also examine more thoroughly how societal discrimination surfaces in data provenance, examining the history and process of dataset construction and considering how cultural norms and stereotypes were enumerated and represented at the time of data creation. This is a big issue, yes. Dataset construction, and the conditions at the time of data creation, are a big issue in these systems, and a lot of the bias, I would argue most of the bias we have seen here, arises from corrupted datasets, datasets that were constructed in an already biased way; the AI system trained on such a dataset simply replicates the bias. I think they are very correct here. They give the example that the Labeled Faces in the Wild dataset contains over 15,000 images, but only 7% of the images are of black people, because the images were gathered from the news media of the early 2000s, which predominantly featured white men in positions of celebrity and power. Exactly: if you train a system on this dataset, the system will inherit this bias. This is a classic example of a corrupted dataset, and it is not only about race and gender. If you take pictures from IMDb, say the CelebA dataset that is used in a lot of current GAN research, you get overly pretty-faced people, so your generative model will produce mostly pretty-faced people, since movie stars tend to be a lot prettier than average humans. The dataset construction process, I think, is currently the biggest source of bias in AI. It is interesting that they go into this here, because they want to make the point that this happens because of society, because of power in society, which the dataset reflects; but I would argue that if someone makes a dataset that doesn't have this bias, then the problem is solved, and I don't care who makes the dataset. The link between the workforce and the bias is really broken by an argument like this, because as soon as we have a correct, unbiased dataset, we can mitigate the bias. And they go into this themselves: they say these researchers looked at facial recognition systems and assessed what we saw earlier, higher error rates for darker-skinned women than for any other group and the lowest error rates for lighter-skinned men, and to measure this disparity the researchers developed a new dataset that is more balanced, both in terms of gender and skin color. Good: make a better, more balanced dataset to measure and train on, and the problem is addressed. And I don't care at all what race or gender these researchers are; good people made a good dataset, well done, and that solves the problem. Why would you ever care what these people look like if they do good work? To me this actually breaks their own argument, and I don't know why they included it here, because it is obvious that if you fix the dataset, you can fix the recognition system.
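As a small illustration of what "make a more balanced dataset" means operationally, here is a sketch, with invented counts that only mimic the kind of skew described above, that audits the demographic composition of a hypothetical face dataset and rebalances it by subsampling.

```python
# Audit the composition of a hypothetical face dataset, then balance it by
# taking the same number of images from every (skin type, gender) subgroup.
from collections import Counter
import random

random.seed(0)

# Hypothetical records: (image_id, skin_type, gender). Counts are made up.
dataset = (
    [(f"img_{i}", "lighter", "male")   for i in range(0, 8300)] +
    [(f"img_{i}", "lighter", "female") for i in range(8300, 12800)] +
    [(f"img_{i}", "darker",  "male")   for i in range(12800, 14100)] +
    [(f"img_{i}", "darker",  "female") for i in range(14100, 15000)]
)

counts = Counter((skin, gender) for _, skin, gender in dataset)
print("original composition:", counts)        # heavily skewed towards lighter-skinned men

per_group = min(counts.values())               # subsample every subgroup to the smallest one
balanced = []
for key in counts:
    members = [r for r in dataset if (r[1], r[2]) == key]
    balanced += random.sample(members, per_group)

print("balanced composition:", Counter((s, g) for _, s, g in balanced))
```

Whether you rebalance by subsampling, by collecting more images for the underrepresented groups, or by reweighting during training is a design choice; the point is that the fix happens at the dataset level, independent of who performs it.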
Alright, so we will go on and jump a couple more paragraphs, except one. They shoot again against this approach, saying, to this point: a focus on fixing technical systems in isolation, without examining their broader context of use and the power dynamics that attend that use, is not only limited in its intervention, it can actively cause harm. So if you fix the problem in a technical manner, they argue here, it can actively cause harm, and the example they give is that facial and image recognition systems are often applied in the service of police surveillance, which disproportionately harms poor people and communities of color. There is a quote from a person who says, roughly: is this social progress, to make black people equally visible to software that will inevitably be further weaponized against us? We are considered criminal and more surveillable by orders of magnitude; whatever claim to a right of privacy we may have is diminished by a state that believes we must always be watched and seen. So this is an example where improving facial recognition for black people makes the police better at surveilling them, which is true, and it is an ethical problem that the police are able to use these facial recognition systems to surveil people; it is a massive privacy problem, a massive question of how much the state is allowed to overreach, and a discussion in itself. But here is the thing: at the very beginning I asked you to remember this notion that we always have to look at who benefits from the way the AI system is constructed, who is harmed, who benefits from how the metrics are shaped. In this case we have a perfect example where, if the face recognition system is very inaccurate for black people's faces, that actually helps them in this societal context. By the logic of this report, that must mean the bias somehow works in their favor, and thereby the system is good, and by fixing it you actually make things worse; they say it themselves, it can actively cause harm. So I think they are pretty much arguing against themselves here: earlier they say we always have to look at who benefits from the system, and here, if the face recognition system can't recognize you, you actually benefit. The argument doesn't work, except if you only apply it when you want to. Alright, so we are going to jump a couple of sections. The core thing here was the feedback loop, and again, the feedback loop isn't demonstrated at all; there are just examples of systems that are biased, and of datasets that are biased, but no demonstration of how the workforce causes this. In fact, take the previous argument: the workforce is supposedly overwhelmingly white, and it produces a face recognition system that performs poorly for darker-skinned people, which, in the context of police surveillance, actually helps darker-skinned people compared to lighter-skinned people. So that is an exact counterexample to the claim that the misrepresentation in the workforce leads to biases in the system against the underrepresented, at least if we interpret bias through the lens of who it costs and who it benefits. Alright, the next section is corporate diversity beyond the pipeline problem, and this seemed an odd inclusion when I first read it, to go against the pipeline problem here, but it makes sense once you know what these people set out to do.
What they set out to do is to argue that we must fix the workforce: we must hire more people of color and more women, and promote them more. So they have a real problem with the pipeline argument. The pipeline argument is the following. Consider the educational and career paths of people. At the beginning you have 100% of people; most of them go through school; some pursue higher education and some drop out, so the pool gets smaller; then very few go into computer science, and even fewer go into AI, so you end up with a tiny sliver of people who actually go into AI. This is called a pipeline, and there are various junctions, where you go into higher education, where you choose your major at university, where you go into a subfield of computer science, at which the volume of people drops significantly from one point to the next. Now compare: instead of considering all of society, consider on one side all men and on the other side all women. Both groups go through high school, then university, then a few go into CS and even fewer into AI, and what you find, comparing the two, is that a larger share of men than of women ends up in the AI field. Over time it looks like this: at the beginning you have roughly a 50-50 men-women distribution in society, I think slightly more boys are born but I could be wrong about that; through high school it stays roughly equal, depending on the country; at university there are actually slightly more women; then in computer science, and this is all relative, normalized to 100%, otherwise all of these numbers would shrink together, you have many more men than women; and for who chooses AI out of computer science I don't know of specific statistics, so let's just assume the ratio stays the same. So in the AI field you have many more men than women, presumably because you already have many more men than women choosing computer science, or any technical field, as their major. That is the so-called pipeline argument. Where does the hiring of AI companies come in? Companies hire at the end of this pipeline, after the university degree; there are exceptions, but let's say they hire after the degree, and therefore they have to choose from this distribution. And if they just say, we will take the top, I don't know, 10% of people, we will hire the good people out of this pool and we don't care what gender they are, then the hires will end up with the same distribution as the graduates, say 80 to 20.
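The arithmetic behind this is easy to check. Here is a tiny sketch with synthetic numbers, assuming a graduate pool that is 20% women and a completely gender-blind hiring score, showing that taking the top 10% reproduces roughly the same split; none of these numbers come from the report.

```python
# Pipeline arithmetic: gender-blind hiring from a skewed pool keeps the skew.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
is_woman = rng.random(n) < 0.20          # 20% of the graduate pool
score = rng.normal(0, 1, n)              # same skill distribution for everyone

cutoff = np.quantile(score, 0.90)        # hire the top 10%, ignoring gender entirely
hired = score >= cutoff

print("share of women in the pool :", round(float(is_woman.mean()), 3))
print("share of women among hires :", round(float(is_woman[hired].mean()), 3))
```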
So a company hiring from, let's say, an 80-20 distribution without looking at gender will end up with an 80-20 distribution; that is the pipeline argument of companies, and the authors don't like it, because it basically says that the problem is somewhere upstream: the problem isn't that companies hire wrongly, the problem isn't that companies deselect people at the hiring stage, the problem is earlier in the pipeline. And since they want to make the argument that companies should hire in a different way, they can't have that, so they argue against it. Now, arguing against this would actually be very easy if the argument were wrong. If the pipeline argument were wrong, all you would have to do is say: hey companies, look, in your company you have an 80-20 distribution of men to women, which is pretty unequal, and the pool of university graduates you hire from is actually 50-50, so obviously you are engaged in discriminatory hiring, because there is no reason why your hiring practices should cause this inequality; therefore we can clearly show your hiring is the problem, stop it, and hire more women and people of color. But that is not the case. How do I know? Because if it were the case, they would simply state it in this report; if you could actually show with numbers that the pipeline argument is wrong, they would absolutely do so. Instead they have to go around it and ramble about it for several pages, which we will mostly skip, mainly because it is the case that these companies hire from a pool of unequally represented people. The only argument you can then make is that equalizing things at the company end might fix the problem upstream. The argument often made is that if young girls choosing their majors have no one to look up to, no strong women in CEO roles, they will think this is not a climate for women and will elect not to go into these fields. That is a valid argument, and I am completely open to it. But it is the only argument you can make, and even if you determine this to be the cause, I would still not support racist and sexist hiring practices. Do something else: make it clear that the environment can be changed, or change the environment if it really is hostile to women, or change the perception if it is only perceived that way, but do not engage in discriminatory hiring practices, because someone always loses out unfairly under such practices, and that is something I am not willing to engage in and I don't think people should be engaging in; that is, after all, why it is illegal. So let's look at a few points. They go over these pipeline studies and say that the term is used in the industry to reference the absence of diverse candidates in the hiring pool and to justify the inability of large firms to achieve diversity due to scarcity. So they basically agree on the definition that I stated.
They then say that companies challenged on their lack of diversity frequently cite pipeline studies as proof of the persistent challenge of finding enough women and people of color to hire. Yes. But, they say, the evidence suggests otherwise: for example, in 2016 Facebook's chief diversity officer wrote that it has become clear that, at the most fundamental level, appropriate representation in technology or any other industry will depend upon more people having the opportunity to gain the necessary skills through the public education system. Well, yes, that is something I would agree with, and it clearly addresses the region of the pipeline where the actual problem is happening, so I would say that is a very good statement from Facebook's chief diversity officer. They counter: but as the Center for Investigative Reporting's study of tech company diversity data found, 91 large tech companies headquartered in Silicon Valley managed to hire higher percentages of black, Latina and multi-racial employees than Facebook that year. Well, just because other companies employ racist and sexist hiring to improve their diversity numbers doesn't mean that Facebook has to do the same; just because other companies do this doesn't mean it is a good thing to do, or that this is how you should go about it. Facebook is simply saying: if we want to hire without being racist or sexist, if we want to just hire the best people, then more of the best people have to be in the pipeline, more people have to gain access to educational opportunities so that we can then hire them; whereas these other companies presumably make a big effort to say, even if you are not as qualified as this other person, we will hire you because of your skin color. I don't think that is an argument in favor of what the report is claiming; I don't think that is evidence that the pipeline argument is invalid. Alright, then they go into core themes in pipeline research, and they give an overview of this research. Sometimes pipeline research examines why, for example, women don't choose to go into computer science as much; sometimes it focuses on their perception of the field, their perception of its stereotypes, of its culture and whether it is suited to them, their perception of how qualified they are for the field, and whether those perceptions are true or false, and so on. This research examines a whole variety of things, and it is actually very interesting to read through. I want to point out this passage: other studies suggest that gender is correlated with a person's motivations for pursuing a career in the field; women, and particularly women from low socioeconomic status or minority backgrounds, are more likely to see computing as a versatile profession that provides an opportunity for secure employment, higher pay and better social standing; moreover, their interests go beyond the technical aspects of computing, focusing instead on the purpose and application of software; however, such interests are often de-emphasized in computer science curricula, which prize technical skill and its applicability to industrial settings above all else. I find this really interesting, because it is basically saying that women, on average, have different interests than men.
That is almost heresy to say in this context; people will come after you if you suggest something like this, and yet they just state it here. Remember this for later; it is really funny that they say the interests could be different for women than for men, and that we might have to adjust our curricula to be more suited to these different interests, because usually this is forbidden to say. Alright, they go on to the limitations of pipeline research. These are fairly common limitations of social science studies in general, which I won't go into much. Again they basically say that we shouldn't only examine this part of the pipeline, that the problem is really the culture and the perpetrators; I don't remember exactly where it is stated, but they again say we have to examine who is underserved within the current tech ecology, who benefits from its present construction, and how these dynamics might be untangled, again framing everything in terms of power relationships between the different groups, which I don't agree is in large part what is happening. They say it is worth considering the scope of these studies: by and large, the recommendations they issue are limited, targeted at the administrators of university computer science programs seeking to broaden the diversity of their student body. Yes, because that is exactly where the problem appears to be. The reason they have a problem with these studies is that the studies focus on the point where the discrepancy actually appears, whereas they want to claim that no, you should focus on a different point, namely hiring and promotion inside the companies. They write: though important, and at least they acknowledge it is an important problem, this is a narrow frame through which to view potential solutions to barriers to inclusion; it does not address the companies that hire computer science students, the peers responsible for promulgating stereotyped views or engaging in hostile behavior, or the broader social conditions that may influence students' success in computer science programs. Actually, the research, including some of the examples they themselves cite, addresses all of this: it often addresses the stereotypes, how the peers act, how the companies act and hire, and how having something or nothing to look forward to influences people's decisions. They also say the studies are frequently cited by those within corporate environments to justify their own lack of diversity, as they situate the locus of change outside of the corporation itself, and that as such, pipeline studies are disproportionately emphasized as part of the broader research agenda on diversity and technology. So they state that companies use this research to get out of responsibility, and of course companies are going to try to use this to get out of responsibility; at least with that I agree. Alright, so the last section here is the pipeline dream: after years of research, and again this is about these pipeline studies.
Basically they say that the pipeline research hasn't borne fruit, that it hasn't led to meaningful change in the field despite years of study. Among the reasons they state is that it tends to place the onus of solving issues of discrimination in Silicon Valley on those who are discriminated against, rather than on the perpetrators. I find this choice of word really interesting: perpetrators. Again, the perspective the article takes is that the group of white men is trying to put down everyone else. It isn't even accurate, because this research a lot of the time actually says that the reason why, for example, women don't choose to go into computer science is the male-dominated culture within these corporations, the perception that it is not a woman-friendly environment, the fear of sexual harassment, and so on. But beyond that, I just wanted to point out the word perpetrators; I don't know how you arrive at this word, and it really shows the worldview of the authors, in my opinion. Alright, so they go on to say that these pipeline studies haven't been beneficial and that companies haven't done much, or haven't been successful, and then they turn to worker-led initiatives, which I am going to skip here; it is just a report of what happened at companies where the workers themselves organized. And then the last section is the pushback against diversity. In this section they document, and argue against, people who have stated counterarguments to their recommendations, the recommendations being, mainly, let's change hiring and promotion and so on to be based on race and gender; the pushback is characterized in different ways, and we will go through this last section. I know it is a long video already; if you are still here, the one person who is still here, hi, hope you are doing well, keep hydrated. They say it is a critical time: we now see diversity itself being weaponized; this growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo. So again, jumping straight to an attack on the person. I don't care who makes an argument against me; I am going to go on the content of the argument. But the first thing these authors state is that the resistance comes from the people who are benefiting, basically from the white men; straight to the identity of the person, and that is dishonesty right there. They continue: those questioning and even rejecting the idea that racism, misogyny and harassment are problems within the AI field and the tech industry have appropriated the language of diversity to argue that efforts to improve inclusion are in fact exclusionary, and that addressing the deeper structural challenges posed by racism, sexism and inequity is misguided. And yes, definitely, efforts to improve inclusion can be exclusionary. This is the thing: just because you are fixing a problem doesn't mean the method you are using to fix it is justified and is itself good. Methods to improve inclusion can be exclusionary, and some that have been proposed definitely are.
It depends on the method, and it doesn't mean these people are against the efforts themselves; it means that the measures, for example implementing a racist hiring policy, are the problem. I can definitely see that such a policy would lead to more equal representation within the workforce, but the tool itself is really bad, exclusionary and discriminating. So yes, I would say it is accurate that such efforts can be exclusionary. They write, for example, that some AI researchers greeted the announcement of the Black in AI workshop at NeurIPS, a leading machine learning conference, by questioning whether the event was necessary, arguing that it would be discriminatory. But can't they question whether the event was necessary? Is that already a problem? Here I would want a discussion: what is the event for, why is it happening, what does it do, and is it discriminatory? Any event can be discriminatory; does it discriminate based on race or gender, and does it do so unjustly? I just don't see why questioning this is bad in itself. You could question it and you could be wrong, but you should be taken on your argument; yet here, merely questioning it already puts you on the wrong side. And note, I don't necessarily agree with the people who questioned this workshop, I don't have a particular opinion on it, but I do have the opinion that you have to take arguments at their argument value, not judge them by who makes them or whether they go against a particular viewpoint. Alright, then they say such pushback often centers calls for cognitive diversity or viewpoint diversity, the idea that individual differences in the ways people think and understand the world are distinctions that should be counted alongside, or instead of, other identity categories such as race and gender. Well, yes; isn't that a very reasonable thing to say? Isn't it reasonable that differences in the ways people think and understand the world are distinctions that should be counted alongside identity categories such as race and gender? They continue: a dozen white men, so long as they were not raised in the same household and don't think identical thoughts, could be considered diverse. I don't know whether that is meant sarcastically, but it is clearly the counterpoint they are trying to make, and yet I would largely agree with the statement. A white man growing up in San Francisco, one growing up in rural Idaho, one in Florida, one in western Europe, one in Russia, and one growing up on the road with his circus parents in Mongolia would definitely be plenty diverse. They criticize this, but how can you not see that these are valid differences? People are going to think differently independent of how they look, they are going to have different thoughts, and it is important to recognize that other people think differently and to include them where that is relevant. The counterargument, which is basically what the authors are saying, is that a dozen people, as long as they don't look the same, could be considered diverse, even if they were all raised in the same place, basically all live in San Francisco, and think the exact same thing.
Yeah, to me that sounds just as absurd as the other way around. So here are my thoughts on this. I am not going to pretend that I know what life is like as a woman. I'm absolutely sure that for some areas of life it is definitely valuable to listen to the experience of a woman, or of multiple women, an aggregate of women, because life is just different as a woman. Life is also different as a black person. I absolutely can see that there are things I might not be able to draw from my own life experience because I am not of that skin color, that there are different problems those people face, and that's why it's important to have such an opinion at the table. But I'm also absolutely certain that I have no relation to someone who grew up as a child pop star from the age of twelve and then had that life. I have no relation to someone who grew up under a communist regime. I have no relation to someone who grew up in a Buddhist religious tradition. I just don't, and I don't care how they look: they have different experiences, they have different bodies of knowledge to draw on, and I don't see why we should draw the distinction along the exact lines of race and gender. But that's of course what they argue here: those arguments work by centering identity while flattening or ignoring power relationships. Here the Facebook VP of engineering said that the ultimate goal is cognitive diversity, and cognitive diversity is correlated with identity diversity; it's not just about getting women in tech, it's about broad voices, broad representation. This is exactly what I would say: the reason we want a woman or a black person at the table is because they have different knowledge, because they have different thoughts due to their different life experience, thoughts that they can bring in. So actually, by including these what they call bodies, it is about cognitive diversity in itself. But the authors really see this from a different angle; they see it in terms of power relationships between race and gender groups, and their arguments don't make sense if you don't view them through that lens. That lens, to me, is just such a sad view of the world, and also, I think, a very inaccurate and a very dangerous view of the world. Again, they say: instead of looking at historical patterns of marginalization, calls for cognitive diversity argue that all differences are equal. No. Calls for cognitive diversity do not argue that all differences are equal; they are well aware that some people have it harder, well aware that some differences are bigger, worse or better. All they're saying is that race and gender shouldn't be the only things to consider, and shouldn't in themselves be considered diversity. Just because someone is of a certain skin color doesn't mean anything; it doesn't actually tell you anything about that person. So why not consider people as individuals and look at what their life was like until this point and what they could contribute to the discussion we're having, rather than looking at the color of their skin? If the color of their skin played a role in their life, then obviously that would manifest in my suggestion as well, but to just look at people through this kind of group lens is so foreign to me.
I also feel it's quite dangerous. So again, this claim that calls for cognitive diversity argue that all differences are equal: the point where you have to start misrepresenting what the counterargument actually says is really how you know you're not dealing with a well-intentioned person on the other side of the discussion. This is really politics now; it isn't well-intended argumentation, it's someone trying to achieve some goal, because they have to misrepresent the other side. And it only gets worse from here. They say this was recently exemplified in the controversy over Google's appointment of Heritage Foundation CEO Kay Coles James to its Advanced Technology External Advisory Council; Google's reasoning for the appointment of James was ostensibly to ensure diversity of thought by including a conservative viewpoint on the council. All right, so Google has an advisory council of external people, and they've included a conservative. She is, by all metrics, let's say a standard conservative; this is not a far-right neo-Nazi type. This is someone whose opinions are similar to those of roughly half the US population; generally, at least in the western world, about half of a country's population tends to be conservative, more or less, with differences of course. So this is an opinion that a large portion of the population shares, and it would seem suitable to include at least someone of that opinion on an external advisory council, to have that on board. You don't have to listen to her; it's not like she's been made king. It's simply that she will have the opportunity to input her voice, representative of that very large percentage of people. They go on to say James is also a black woman, thus adding racial and gender diversity to the panel. So even further: this is a conservative black woman. But, they write, the pushback following James's inclusion focused on her policy positions, citing specifically her vocal anti-LGBTQ and anti-immigrant views, and highlighted why cognitive diversity is a particularly limited lens. Now, the pushback here was very much spearheaded by one of the authors of this article, so this isn't just reporting; I will also criticize the pushback itself, since it is argued for in this article rather than merely reported, and since the authors are the same. So they say she has vocal anti-LGBTQ and anti-immigrant views. I haven't actually gone and looked at what this person specifically has said, but given that she's a standard conservative and has been in public office, I believe under George W. Bush, I have trouble believing that she holds extremely hateful opinions, like that these people shouldn't exist or something of that nature. Often conservative people have issues with forcing people to adopt certain pronouns, or issues with which bathrooms people go to, and are generally tougher on immigration, especially illegal immigration, and so on. These are views that a large part of people hold, and these are discussions to be had, so including this person would seem a very sensible move. But, they say, in a letter opposing the appointment, a group of Google workers calling themselves Googlers Against Transphobia and Hate responded to the idea that
diversity of thought justified James's addition to the council: this is a weaponization of the language of diversity; by appointing James to the ATEAC, Google elevates and endorses her views, implying that hers is a valid perspective worthy of inclusion in its decision making; this is unacceptable. Again, one of the authors of this report was one of the organizers of that letter. And that's what they're saying here: if you don't share our views, your views are unacceptable. A valid perspective worthy of inclusion: what they're basically saying is that you shouldn't even talk to this person; even considering her opinion, even just to evaluate it, is already wrong. And that about a person who is a black woman. So basically the authors' idea of diversity is people who look different, people from race and gender groups that don't have much of what they call power right now, as long as they all think exactly as we think. As long as they share our thoughts, as long as they don't have dissenting opinions, we want the different-looking people, but don't you dare talk to anyone with a different opinion. These authors, in my opinion, really live in a bubble; they live in tiny Silicon Valley or Silicon Valley-influenced spaces, because they're basically saying that half the people in their greater community, in their country, aren't even worth listening to, aren't even worthy of inclusion or consideration. Well done; might as well discredit them all at once. I'm sure that's going to fly well with these people. You might start calling them deplorables and see what they do; maybe they'll return the favor and elect a moron just to stick it in your face. I mean, that's what happened. Then: the idea of cognitive diversity is mobilized by some in support of the claim that the AI field and the tech industry are already diverse, going as far as to support claims that not including identities like white and male constitutes discrimination. Yes, it can. If you include every single identity except white male, that constitutes discrimination; even if they're in the majority, it still constitutes discrimination. No one can help being born white and male; no one chose to be born like that. You mostly don't choose your sex, and the color of your skin you can only modulate a bit by going out into the sun, which computer science people statistically don't do very often, so there's not much leeway there. So yes, not including identities like that while including every other one can constitute discrimination. They continue: a July 2017 memo written by James Damore, a software engineer at Google, is illustrative of such pushback; titled Google's Ideological Echo Chamber and published on an internal mailing list, the memo critiqued the company's diversity policies, arguing that biological differences between men and women, rather than bias and discrimination, help explain gender disparities at the company. I feel you can leave out the "rather than" here; I think the memo simply stated that biological differences can help explain the gender disparities. Damore's objective in writing the memo was to make the case that policies designed to achieve equal representation are unfair, divisive and bad for business.
Well, some are, yes: especially the recommendations they've given at the beginning. Number seven is unfair and divisive, and I would also argue bad for business. They continue: supporters of Damore's point of view at times even drew on the rhetoric of the pipeline to make the case that diversity initiatives are in fact discriminatory; they argue, incorrectly, that if there aren't qualified candidates in the pipeline, then hiring those who are unqualified on the basis of identity discriminates against those who are qualified. No, I would say hiring anyone on the basis of identity discriminates, inherently so, against those who are qualified. I think that's the larger argument these people are making, and it is not incorrect; it is very correct. In an update to the memo, Damore himself asserted that he values diversity and inclusion but that his primary concern was cognitive diversity; he says he values diversity and inclusion, is not denying that sexism exists, and doesn't endorse using stereotypes. In fact, I've read the memo, and it directly says that these are population-level statistics, that there is more overlap than difference, and that you absolutely can't say anything about an individual by looking at these statistics; that's almost a quote from the memo. So he was very much concerned with considering people as individuals. He was also basically making the same argument as earlier, the one I told you to remember: the report describes a study that found that women's interests might be different and that we might shape the curriculum accordingly. That's basically what Damore said: women's interests might be different, so we might have to shape the way we do work, change the way we do software engineering, to attract more of them. That was one of his points, so he's saying exactly the same thing; but of course he's a misogynist, because he suggested this could be partly due to biological differences, and I think the way he was dragged through the mud is just crazy. The report here argues very strongly against what they call biological determinism; we'll see this briefly. They say diversity becomes an empty signifier, stripped of the histories and experiences of systemic discrimination, repurposed around ideology rather than bodies. I'd say diversity has nothing inherently to do with bodies as such; that's only the case if you are already convinced of this view. Within hours of the memo's publication, harassment targeting minority advocates who pushed back against the claims in the memo began, with a particular focus on queer and trans workers. That's bad. But also, I think the pushback against people who voiced support for the memo was pretty bad too, because one of them was fired, as already stated. Google's vice president of diversity even locked down her Twitter account shortly after Damore's firing, responding to the barrage of threats describing her as a police Nazi. Well, if you fire someone... I mean, undoubtedly Google fired this guy because they thought it was less of a PR disaster to fire him; it probably wasn't an ideological decision so much as a PR decision. But if you fire someone after they state something like this, it very much looks like you're firing them because you don't like their ideas and you don't like what they're saying, and I'm generally not in favor of censoring speech. That being said, harassment is
bad; don't harass people. Also, that being said, criticism isn't always harassment; don't conflate the two. The Damore memo, they write, also stated that the distribution of preferences and abilities of men and women differ in part due to biological causes, and that these differences may explain why we don't see equal representation of women in tech and leadership; this assertion, they say, hinges on a flawed assumption that identities like gender and race are essential and fixed biological attributes, and that inequalities are at least in part the product of such irreducible differences. Well, even if they're not fixed biological attributes, gender and race certainly have something like a 0.99 correlation with biology, and since your biology comes first, determined when you're conceived, that demonstrates a causal direction. So even if they're not exactly fixed, they are overwhelmingly fixed. And as for it being a flawed assumption that these inequalities are at least in part the product of such differences: they simply state that it's a flawed assumption. What you would have to do in order to show that it is a flawed assumption is show that gender and race, insofar as they are biologically determined, have no influence whatsoever on these differences. That's the counterclaim, because the claim is that they have at least in part something to do with it, and that is also, I believe, what Damore stated and what I take to be the predominant opinion. All the research points, for example, to a large difference in interests between genders as far as, say, career selection goes. Now, we can talk about why that is, but there is also a large consensus, I believe, that it is at least partly, to whatever degree, determined by biology. In order to show that this is flawed, you would need to show that it cannot have any influence; you would basically have to prove the impossibility of this having an influence, which no one has done so far, much to the contrary. So to simply state that this is a flawed assumption shows me that they are in a bubble and that they're expecting to speak to people in the same bubble. They go on to discredit this as biological determinism, which I don't think is a correct use of the term, but you can judge for yourself; all I think these people are saying is that biology might have some influence and that we could adjust for that. This comes up again here in the conclusion. So, conclusion, finally; I think it's been two hours, sorry. Conclusion: throughout this report we've outlined the scope and scale of the problem, tracing how the diversity crisis in the industry and the problems of bias in AI systems are interrelated aspects of the same issue; in the past these topics were commonly examined in isolation, but increasing evidence shows that they are closely intertwined. No. You've shown that they're parallel; you have absolutely not shown that they're interrelated aspects of the same issue. You have not shown that either one causally influences the other, that there is any feedback loop, or that fixing one leads to fixing the other. You could, for example, take a company that for some reason has a very different workforce and then show how their products, with the same datasets as the previous companies, don't end up
being biased. Probably not so easy, but again, none of that is in the report. There are many things you could actually do to show what you wanted to show, but it's just not the case in this article. They continue: our analysis surfaced two prominent responses to the diversity crisis; on one hand, a worker-driven movement, which we've skipped; on the other hand, we observe a small but vocal countermovement that actively resists diversity in the industry. Again, what does actively resisting diversity even mean? The thought that these people go around thinking, no, I don't like differently looking people, is just absurd. All they're saying is that either we don't understand the problem in the correct way, or our tools aren't appropriate to solve the problem. I think everyone has the same goal of the workplace and of AI systems being as fair and as non-discriminatory as possible. This misrepresentation of the other side is something that really bugs me, and it's something these authors do a lot, so maybe I lose my polite side. And, they write, this countermovement uses arguments from biological determinism to assert that women are inherently less suited to computer science and AI. What a load of crap, sorry, but: to assert that women are inherently less suited to computer science and AI? No one, okay, not no one, but no one that I know asserts that; absolutely no one who makes these arguments. Sorry, not no one: you can always find a sexist douchebag who makes that argument, but it is not a serious argument, and it is not the argument that most people in this countermovement make, not at all. To represent them as such is just dishonest. It's fitting that this is in the conclusion, because at the very end it completely destroys, for me, the credibility of these authors. The parts we skipped over I would say I'm mostly okay with: they mostly show parallels, they show that AI systems are biased, they show that there is unequal representation, and they show examples of discrimination, harassment and so on, problems in AI companies and universities. You can read the report for all of that; it's pretty interesting to read. But the points I've addressed I'm not happy with. So that was it for now. Sorry this took so long, but I felt that a thorough take was necessary. Have a nice rest of the day. | [{"start": 0.0, "end": 7.68, "text": " Hi there. Today we're looking at discriminating systems, gender, race and power in AI by Sarah"}, {"start": 7.68, "end": 14.52, "text": " Myers-West Meredith Whitaker and Kate Crawford of the AINow Institute, which is a part"}, {"start": 14.52, "end": 22.56, "text": " of New York University or associated with it. This is not as much a paper as it is a report"}, {"start": 22.56, "end": 29.28, "text": " kind of summarizing current literature and also kind of an opinion piece slash recommendation"}, {"start": 29.28, "end": 37.88, "text": " giving document. Yes, so we'll dive into it. As you can see from the index it's quite"}, {"start": 37.88, "end": 42.92, "text": " a long report and we don't have time to go into all of it. Actually we don't have time to"}, {"start": 42.92, "end": 48.96, "text": " go into most of it. I just hope to kind of point out what the main arguments and themes"}, {"start": 48.96, "end": 56.32, "text": " are in the report, kind of what it's trying to say. 
Pick out some interesting things and"}, {"start": 56.32, "end": 65.76, "text": " summarize it to the best of my ability. Also give a little critique. Let me actually go ahead and"}, {"start": 65.76, "end": 75.52, "text": " try to state the kind of core argument that the report is trying to make because it's not"}, {"start": 75.52, "end": 81.92, "text": " really clear from reading it and you have to kind of read the whole thing and then kind of"}, {"start": 81.92, "end": 87.08, "text": " becomes clear what the argument is I feel. Though they somehow stated in the introduction"}, {"start": 87.08, "end": 94.0, "text": " numerous times in various ways. So I might just be not as attentive reader at first"}, {"start": 94.0, "end": 100.6, "text": " time. But all right. So here's the argument and I really hope I'm representing this correctly."}, {"start": 100.6, "end": 107.6, "text": " We have a problem currently that sometimes AI systems can exhibit what we usually call"}, {"start": 107.6, "end": 115.16, "text": " bias and we don't mean mathematical bias like bias variance trade off. We mean bias in a"}, {"start": 115.16, "end": 122.11999999999999, "text": " societal sense. Let's say bias against certain types of people where they shouldn't exist."}, {"start": 122.11999999999999, "end": 129.32, "text": " So for example, let me draw an AI system and I'll just draw a little computer screen"}, {"start": 129.32, "end": 137.6, "text": " with a light bulb. All right. This is because it's smart. This is an AI system and the AI"}, {"start": 137.6, "end": 142.07999999999998, "text": " system and they give numerous examples. One example they give is like face recognition"}, {"start": 142.07999999999998, "end": 150.68, "text": " algorithm that is much more accurate on faces of white males as opposed to darker skin"}, {"start": 150.68, "end": 159.12, "text": " females. So let me draw like two curves to represent these distributions are unequal"}, {"start": 159.12, "end": 165.20000000000002, "text": " and so the AI system exhibits some bias with respect to some kinds of people with and"}, {"start": 165.20000000000002, "end": 171.52, "text": " especially protected attributes. And in this report they focus mainly on gender and race."}, {"start": 171.52, "end": 176.84, "text": " So that's what we're going to talk about. The second thing they observe. So this is observation"}, {"start": 176.84, "end": 183.08, "text": " one. The second thing they observe is I'm going to draw some generic people here that represent"}, {"start": 183.08, "end": 189.36, "text": " the workforce of AI. So the AI workforce is classified as all the people that work on"}, {"start": 189.36, "end": 196.32000000000002, "text": " AI be that university researchers or within companies building AI products or deploying"}, {"start": 196.32000000000002, "end": 202.84, "text": " them. So this is the workforce and they observe that there is an unequal distribution among"}, {"start": 202.84, "end": 211.96, "text": " the AI workforce. So this distribution I'm also going to do this for unequal distribution."}, {"start": 211.96, "end": 218.24, "text": " There's an unequal distribution in the AI workforce most notably it's predominantly males"}, {"start": 218.24, "end": 227.12, "text": " who work on AI and also white people are over represented compared to the world population"}, {"start": 227.12, "end": 234.68, "text": " at large. So that's kind of the two observations they make. 
And now what they claim is that the"}, {"start": 234.68, "end": 243.24, "text": " unequal or the unequal representation in the workforce is causing the bias in the AI systems."}, {"start": 243.24, "end": 250.64000000000001, "text": " So they're basically saying these AI systems are biased because that the workforce is unequally"}, {"start": 250.64000000000001, "end": 256.68, "text": " distributed. And also they claim in a less powerful sense I feel but they claim there"}, {"start": 256.68, "end": 264.56, "text": " is a loop that this then leads back that because there is bias in the AI system that again"}, {"start": 264.56, "end": 272.2, "text": " leads to an unequal more unequal distribution of the workforce. So the core argument really"}, {"start": 272.2, "end": 277.88, "text": " is as they set out to do like in the introduction and also claim that they have done in the"}, {"start": 277.88, "end": 285.8, "text": " conclusion is to demonstrate these two directions here in a causal way. So the systems are"}, {"start": 285.8, "end": 294.44, "text": " biased because there is an unequal representation in the workforce and that feeds back. So the"}, {"start": 294.44, "end": 300.76, "text": " argument is that if you want to fix the bias here, if you want to fix that, then you will"}, {"start": 300.76, "end": 310.0, "text": " have to fix it via making the workforce more what they call diverse. So less unilaterally"}, {"start": 310.0, "end": 315.72, "text": " distributed towards white males. That's that's kind of the the final conclusion. If you"}, {"start": 315.72, "end": 321.48, "text": " read the report and the recommendations that's that's mainly what they're going for."}, {"start": 321.48, "end": 331.88, "text": " Yeah, so my opinion or in my opinion having read the report a couple of times is that as"}, {"start": 331.88, "end": 338.04, "text": " I see it, they really don't demonstrate these links. So they get they give examples of"}, {"start": 338.04, "end": 344.20000000000005, "text": " this and they give examples of this right. They show that the workforce is unequally distributed."}, {"start": 344.20000000000005, "end": 350.12, "text": " They show that AI systems can exhibit such bias but they never actually show these links"}, {"start": 350.12, "end": 356.36, "text": " in my opinion. They they don't show this. So if if you make the claim that in order to fix"}, {"start": 356.36, "end": 361.44, "text": " the bias in AI systems, you must fix the unequal representation in the workforce. I would"}, {"start": 361.44, "end": 367.76, "text": " need a argument that says because there is unequal representation, therefore a therefore"}, {"start": 367.76, "end": 378.4, "text": " b therefore c therefore bias like an actual argument to follow that says because of this"}, {"start": 378.4, "end": 385.28, "text": " that because of that that and so on. Like it's just not there. Like they they they simply"}, {"start": 385.28, "end": 390.76, "text": " show parallels. They simply show that these two things exist and they just list example"}, {"start": 390.76, "end": 399.59999999999997, "text": " after example of that and the the I don't think they make this argument. What I think"}, {"start": 399.59999999999997, "end": 404.71999999999997, "text": " I think to also the other direction they don't really make this argument."}, {"start": 404.72, "end": 412.92, "text": " They accept in very in like one in one case where you know if you give them benefit of"}, {"start": 412.92, "end": 423.6, "text": " the doubt. 
What I also think is that it appears like like the article if you read it and"}, {"start": 423.6, "end": 428.8, "text": " I encourage you to read it if you have some time. It makes a lot of sense if you have"}, {"start": 428.8, "end": 434.16, "text": " already accepted this conclusion. Like if you've already accepted this then it's like"}, {"start": 434.16, "end": 441.40000000000003, "text": " oh yeah because I feel this is just a text where the confirmation bias is so high just"}, {"start": 441.40000000000003, "end": 446.76000000000005, "text": " the way it's written that it must make a lot of sense to someone who's already kind of"}, {"start": 446.76000000000005, "end": 454.76000000000005, "text": " in on this conclusion. But to someone who isn't sold yet like myself I am just not finding"}, {"start": 454.76000000000005, "end": 463.56, "text": " this convincing at all. The second thing is that it very much feels like this isn't like"}, {"start": 463.56, "end": 470.76, "text": " a discovery or something but someone actually set out with the goal to address this here"}, {"start": 470.76, "end": 478.72, "text": " with the goal of I want companies to hire more of these people or of certain kinds of people"}, {"start": 478.72, "end": 485.04, "text": " or to become more diverse or to promote more of a certain type of people. And now I'm going"}, {"start": 485.04, "end": 492.28, "text": " to find reasons for this and the reason is like oh look at look at this bias here. This"}, {"start": 492.28, "end": 499.52, "text": " is caused by this other thing and therefore we must fix this other thing. It very much"}, {"start": 499.52, "end": 506.03999999999996, "text": " feels like someone setting out with already the conclusion in mind rather than just being"}, {"start": 506.03999999999996, "end": 512.04, "text": " an honest investigation. But yeah I mean read it for yourself. I can't prove the absence"}, {"start": 512.04, "end": 516.28, "text": " of an argument by not reading every single line and I can't read every single line because"}, {"start": 516.28, "end": 523.9599999999999, "text": " it will just get very long and boring but read it yourself and I think I'm pretty I'm"}, {"start": 523.9599999999999, "end": 530.12, "text": " pretty I've read it numerous times with really an open mind to be convinced that there is"}, {"start": 530.12, "end": 534.8, "text": " an argument in there but I don't think there is or I don't think there is a very strong"}, {"start": 534.8, "end": 541.88, "text": " argument for this. Alright let this first party is more or less a summary so research findings"}, {"start": 541.88, "end": 547.8, "text": " is more or less a summary and we'll get to these things as they are important. Then they"}, {"start": 547.8, "end": 552.24, "text": " stayed recommendations right at the beginning. So actually you'd have to read the article"}, {"start": 552.24, "end": 557.12, "text": " first. This is kind of more of an abstract section but since it's right here will kind"}, {"start": 557.12, "end": 562.48, "text": " of jump right into it. 
So these are recommendations and yeah I've claimed they don't really show"}, {"start": 562.48, "end": 568.6, "text": " a connection but they actually just show examples examples of this and examples of this"}, {"start": 568.6, "end": 573.64, "text": " and parallel them and this is reflected in like every single section including here in"}, {"start": 573.64, "end": 579.28, "text": " the recommendations they have recommendations for improving workplace diversity and they"}, {"start": 579.28, "end": 584.5600000000001, "text": " have recommendations for addressing bias and discrimination in AI systems. Alright so"}, {"start": 584.5600000000001, "end": 592.6800000000001, "text": " alright in my case if you make this argument I would I would feel you also make recommendations"}, {"start": 592.68, "end": 600.64, "text": " for breaking these links or argue why they can't be broken. Alright let's jump into some"}, {"start": 600.64, "end": 607.3199999999999, "text": " of them and it is really a mixed back here really. So some recommendations I am really"}, {"start": 607.3199999999999, "end": 612.88, "text": " in favor of just from from the go not even you don't even need the article for those."}, {"start": 612.88, "end": 617.7199999999999, "text": " Here published herassment and discrimination transparency reports including number of"}, {"start": 617.72, "end": 623.36, "text": " claims over time. The types of claims submitted and actions taken. So it's known that especially"}, {"start": 623.36, "end": 630.4, "text": " in these larger companies sexual harassment claims often go down in either bureaucracy"}, {"start": 630.4, "end": 634.96, "text": " or are kind of hushed under the table or something like this. What you have to recognize"}, {"start": 634.96, "end": 639.76, "text": " is that a human resource department of a large company isn't there to serve the human"}, {"start": 639.76, "end": 646.84, "text": " resources. It's there to serve the company providing human resources. That's why a sexual"}, {"start": 646.84, "end": 653.64, "text": " harassment claim to an HR department is just a potential lawsuit and that's why they don't"}, {"start": 653.64, "end": 659.1600000000001, "text": " want to take it seriously except for it must go away really quickly. So I think two kind"}, {"start": 659.1600000000001, "end": 665.52, "text": " of force companies or two ask companies to be more transparent to take more seriously"}, {"start": 665.52, "end": 674.12, "text": " these the accusations of sexual harassment and assault and also discrimination is very"}, {"start": 674.12, "end": 684.64, "text": " valuable goal and I fully fully support this. Also the here commit to transparency around"}, {"start": 684.64, "end": 691.88, "text": " hiring practices especially higher regarding how candidates are level compensated and promoted."}, {"start": 691.88, "end": 697.4, "text": " That also is in the larger the company gets the less transparent this process usually"}, {"start": 697.4, "end": 704.0, "text": " becomes or the more bureaucratic the more people are able to game it and so on and distorted"}, {"start": 704.0, "end": 711.12, "text": " so I feel it's always good to be transparent around around okay this person provides this"}, {"start": 711.12, "end": 718.6, "text": " much value to the company therefore they should be compensated in according to that or"}, {"start": 718.6, "end": 725.96, "text": " at least be transparent about it. 
So these are kind of recommendations I like then recommendations"}, {"start": 725.96, "end": 731.28, "text": " that really going to a different direction is something like this here change hiring"}, {"start": 731.28, "end": 736.16, "text": " practices to maximize diversity and this is kind of reflect not going to go on this"}, {"start": 736.16, "end": 741.8399999999999, "text": " reflected in other in other points increase the number of people of color women and other"}, {"start": 741.8399999999999, "end": 747.04, "text": " underrepresented groups at senior leadership levels of AI companies across all departments."}, {"start": 747.04, "end": 752.68, "text": " So these things they are usually within like company diversity goals and so on and doesn't"}, {"start": 752.68, "end": 758.8399999999999, "text": " really say how to do it but then the I mean as such they're not really recommendations"}, {"start": 758.84, "end": 765.84, "text": " yet they're more like goals but here recommendations seven I think is the crucial one ensure executive"}, {"start": 765.84, "end": 771.32, "text": " incentive structures are tied to increases in hiring and retention of underrepresented"}, {"start": 771.32, "end": 779.64, "text": " groups. So this is it's a bit of coded language but here they talk about executive incentive"}, {"start": 779.64, "end": 785.96, "text": " structure tied to hiring and retention of under represent groups this basically means"}, {"start": 785.96, "end": 792.44, "text": " if you are a manager or someone in charge of hiring or promoting and you hire or promote"}, {"start": 792.44, "end": 796.9200000000001, "text": " a underrepresented person and since they're talking about gender and race here if you"}, {"start": 796.9200000000001, "end": 804.1600000000001, "text": " that means if you hire or promote a person of color or a woman in this case you will"}, {"start": 804.1600000000001, "end": 808.32, "text": " become compensated more so at the end of the year you'll somehow have more money like"}, {"start": 808.32, "end": 814.2, "text": " more bonuses or more base comp or more equity or something like you'll get more money."}, {"start": 814.2, "end": 824.08, "text": " So this recommendation is a direct call to hire based on race and gender so this is a"}, {"start": 824.08, "end": 830.9200000000001, "text": " direct call to racist and sexist hiring basically to discriminate people according to their"}, {"start": 830.9200000000001, "end": 840.24, "text": " skin color and according to their gender which I mean how is this okay with anyone like"}, {"start": 840.24, "end": 846.92, "text": " how can anyone how are people even able to state this and in like a high-profile report"}, {"start": 846.92, "end": 852.24, "text": " like this and get away with it and not have people criticize them this directly calls"}, {"start": 852.24, "end": 859.16, "text": " for people to be treated according to their gender and race and probably asked directly"}, {"start": 859.16, "end": 866.52, "text": " as you can go without getting into actual legal trouble but yeah I'm really really against"}, {"start": 866.52, "end": 873.72, "text": " such such practices I mean yeah that's I just I just don't know how this how this can"}, {"start": 873.72, "end": 883.4399999999999, "text": " ever how this can ever be thought of as a good thing by anyone alright so well yeah in"}, {"start": 883.4399999999999, "end": 891.04, "text": " my mind this recommendation and this recommendation kind of encounter to each other because if"}, 
{"start": 891.04, "end": 896.8, "text": " I commit to transparency how people are okay now I can I can transparently commit to to be"}, {"start": 896.8, "end": 903.52, "text": " racist I guess but if I say okay I'm gonna comp and promote people based on how much value"}, {"start": 903.52, "end": 910.0, "text": " they provide to the company then yeah I'd much rather have that than saying I'm going"}, {"start": 910.0, "end": 915.64, "text": " to comp and promote people based on their skin color alright so let's actually jump into"}, {"start": 915.64, "end": 919.68, "text": " the report I'm not gonna these recommendations for addressing bias and discrimination in"}, {"start": 919.68, "end": 926.16, "text": " systems this these are fairly general and common so as well as I said we'll jump most of"}, {"start": 926.16, "end": 933.28, "text": " the things in the report so introduction so they start out with there is a diversity crisis"}, {"start": 933.28, "end": 940.68, "text": " in the AI industry this they give like some numbers like 15% of AI research staff and"}, {"start": 940.68, "end": 949.24, "text": " 10% at Google so 15% of Facebook or women so these are some kind of fairly known statistics"}, {"start": 949.24, "end": 959.04, "text": " about how the AI field is a kind of gender and race skewed currently so they say they"}, {"start": 959.04, "end": 965.96, "text": " claim in bold diversity problem is not just about women it's about gender race and most"}, {"start": 965.96, "end": 972.44, "text": " fundamentally about power it affects how our companies work or products could build"}, {"start": 972.44, "end": 978.64, "text": " who they're designed to serve and who benefits from their development so this I find this"}, {"start": 978.64, "end": 987.72, "text": " this this word power and this notion of power a lot in this report it appears again and"}, {"start": 987.72, "end": 994.4399999999999, "text": " again and again in like power dynamics and power dynamics among groups it's like a world"}, {"start": 994.4399999999999, "end": 1002.64, "text": " view it paints like a world view where these different gender and race groups kind of"}, {"start": 1002.64, "end": 1009.64, "text": " struggle against each other to gain power over another and whoever's in power will try"}, {"start": 1009.64, "end": 1016.04, "text": " to remain in power in alliance with their gender and race group and try to keep the other"}, {"start": 1016.04, "end": 1025.48, "text": " groups down it's I don't I'm not sure that's the correct view of the world in my mind"}, {"start": 1025.48, "end": 1030.16, "text": " the world is comprised of individual people that want to achieve something for themselves"}, {"start": 1030.16, "end": 1036.3600000000001, "text": " and they would like to prop themselves up whereas in this world view it's like I'm going"}, {"start": 1036.3600000000001, "end": 1043.1200000000001, "text": " to use the power of my group to keep the other groups down I don't know which world"}, {"start": 1043.1200000000001, "end": 1050.8400000000001, "text": " view you subscribe to but I find the world is comprised of individuals yeah and this"}, {"start": 1050.8400000000001, "end": 1055.6000000000001, "text": " is not discrediting that some people have it harder because of their gender or race"}, {"start": 1055.6, "end": 1061.7199999999998, "text": " but to see the entire world as a power struggle between these groups to me it's it's a yeah"}, {"start": 1061.7199999999998, "end": 1068.52, "text": " and I'm not going to 
point out everywhere it appears this power wording but it appears"}, {"start": 1068.52, "end": 1075.08, "text": " a lot and it's really shapes how the report reads you have to you have to kind of remember"}, {"start": 1075.08, "end": 1083.3999999999999, "text": " if you're a white male and currently the field is comprised of 90% white males you if"}, {"start": 1083.4, "end": 1089.5600000000002, "text": " you have like 10 like 10 hours let's say you have to do they have to 10 hours to do something"}, {"start": 1089.5600000000002, "end": 1098.24, "text": " right you can either choose to put down some other groups like put down groups that you're"}, {"start": 1098.24, "end": 1106.8000000000002, "text": " not part of or you can choose to invest these 10 hours in putting up yourself you right"}, {"start": 1106.8, "end": 1113.8799999999999, "text": " so if I like I profit if I'm a white male I profit minimally from keeping the other groups"}, {"start": 1113.8799999999999, "end": 1121.24, "text": " down because guess what I still have to compete with the like one billion other white males"}, {"start": 1121.24, "end": 1130.1599999999999, "text": " there are that's that's not going to help me to keep down anyone else and especially"}, {"start": 1130.16, "end": 1136.64, "text": " like it's it's moronic like who who does that who who like has alliance except most fringe"}, {"start": 1136.64, "end": 1143.24, "text": " people like to their race or gender rather than to the people they admire and respect and"}, {"start": 1143.24, "end": 1148.72, "text": " like to work with so I'm gonna if I have like 10 hours today I'm gonna rather spend this"}, {"start": 1148.72, "end": 1154.44, "text": " in popping up myself compared to everyone else and I don't care what gender or race they"}, {"start": 1154.44, "end": 1162.2, "text": " are and so that to me that's a much more accurate or I don't know plausible or worldview"}, {"start": 1162.2, "end": 1167.1200000000001, "text": " but just be aware that this report really takes on the language of kind of groups and power"}, {"start": 1167.1200000000001, "end": 1174.04, "text": " between groups and groups trying to you know kind of gain power and keep in keep power"}, {"start": 1174.04, "end": 1181.52, "text": " and then keep others from having power all right say say to date the diversity problems"}, {"start": 1181.52, "end": 1187.12, "text": " of the industry and the issues of bias in the systems it builds have tended to be considered"}, {"start": 1187.12, "end": 1193.96, "text": " separately we suggest that these are two versions of the same problem issues of discrimination"}, {"start": 1193.96, "end": 1200.08, "text": " in the workforce and insistent buildings are deeply intertwined challenge and more over"}, {"start": 1200.08, "end": 1205.6, "text": " tackling the challenges of bias within technical systems requires addressing workforce diversity"}, {"start": 1205.6, "end": 1214.08, "text": " and vice versa so the the um I think this this here actually is like how I described the"}, {"start": 1214.08, "end": 1218.7199999999998, "text": " argument and they kind of restated multiple times in a bit different way but I think this"}, {"start": 1218.7199999999998, "end": 1224.36, "text": " is the core and I really think I'm not misrepresenting the article here in that this is what they"}, {"start": 1224.36, "end": 1231.8799999999999, "text": " are setting out to do they're setting out to say okay the diversity um the kind of unequal"}, {"start": 1231.88, "end": 
1238.6000000000001, "text": " representation in the workforce and the bias in some AI systems are causally linked to"}, {"start": 1238.6000000000001, "end": 1246.5600000000002, "text": " each other and tackling one requires tackling the other uh so yeah if if I'm misrepresenting"}, {"start": 1246.5600000000002, "end": 1256.0800000000002, "text": " them let let me know but I really think I'm actually representing their argument um so"}, {"start": 1256.08, "end": 1262.72, "text": " what they what they do as I said is they give examples of one and of the other and also they"}, {"start": 1262.72, "end": 1271.8799999999999, "text": " really they're really on kind of discrediting the the um kind of issues to solve problems of"}, {"start": 1271.8799999999999, "end": 1276.96, "text": " bias in a different way so they they point a little bit to this here in the introduction"}, {"start": 1276.96, "end": 1280.6399999999999, "text": " the same the face of growing evidence the air research community and the industry producing"}, {"start": 1280.64, "end": 1286.3200000000002, "text": " our products have begun addressing the problem of bias by building on a body of work of fairness"}, {"start": 1286.3200000000002, "end": 1293.2, "text": " accountability and transparency so fairness uh accountability and transparency research um concerns"}, {"start": 1293.2, "end": 1299.6000000000001, "text": " these these issues for one is research showing that some products are unfair or untransparented"}, {"start": 1299.6000000000001, "end": 1305.5200000000002, "text": " so on on on the other hand it's trying to devise algorithms that are more fair in according"}, {"start": 1305.52, "end": 1311.84, "text": " to some notions um um um or more accountable and transparent which means that the algorithm"}, {"start": 1311.84, "end": 1318.72, "text": " can kind of say why it made a certain decision rather than it being a deep learning system"}, {"start": 1318.72, "end": 1323.84, "text": " that you you don't really have an insight these fields are active fields of research definitely"}, {"start": 1323.84, "end": 1333.36, "text": " very interesting to look into uh so um but they they kind of it is not already here but they say"}, {"start": 1333.36, "end": 1338.8799999999999, "text": " yeah we have adjusting air systems that and produce a result that deemed fair by one of"}, {"start": 1338.8799999999999, "end": 1344.6399999999999, "text": " various mathematical definitions um you can already see in the language here they don't really"}, {"start": 1344.6399999999999, "end": 1351.4399999999998, "text": " like this research and they are trying in this report to kind of discredit it or at least claim"}, {"start": 1351.4399999999998, "end": 1357.12, "text": " that it doesn't solve the whole problem because their point is of course you have to address this"}, {"start": 1357.12, "end": 1366.4799999999998, "text": " diversity issue in the workforce in order to fix the problems um yeah so to to this i i just want to"}, {"start": 1366.4799999999998, "end": 1373.9199999999998, "text": " say uh no like if you can i mean you can criticize the the fairness and accountability and transparency"}, {"start": 1373.9199999999998, "end": 1380.56, "text": " research field in that they haven't solved the problem fully yet but in principle if i have an"}, {"start": 1380.56, "end": 1388.0, "text": " algorithm if i'm being delivered an algorithm right and the the fairness literature has been"}, {"start": 1388.0, "end": 1393.04, "text": " applied to that 
algorithm and someone tells me i guarantee you here is a proof the algorithm"}, {"start": 1393.04, "end": 1400.08, "text": " is fair right then i really don't care who made that algorithm as long as it's fair the problem"}, {"start": 1400.08, "end": 1406.0, "text": " is fixed if the bias is gone the problem is fixed and i don't care who fixed it i don't care if"}, {"start": 1406.0, "end": 1413.68, "text": " the person who fixed it is black or white or purple uh then the problem is fixed and they they"}, {"start": 1413.68, "end": 1419.84, "text": " really have to um they really try to just make the counter argument here is that no that's it's not"}, {"start": 1419.84, "end": 1428.08, "text": " enough um but i claim yes it if you can actually solve the the fairness problem technically then"}, {"start": 1428.08, "end": 1434.8, "text": " you have solved the fairness problem um yeah the only thing you can do is claim that it uh it's not"}, {"start": 1434.8, "end": 1439.44, "text": " good enough yet but not that it's fun they they kind of have to make the argument that it's"}, {"start": 1439.44, "end": 1445.2, "text": " fundamentally flawed approach and i don't think they succeed in doing that here um"}, {"start": 1446.6399999999999, "end": 1452.8799999999999, "text": " yeah so they they go on to say we should expand to consider not only how i tools can be biased"}, {"start": 1452.8799999999999, "end": 1456.96, "text": " technically but how they're shaped by the environments in which you're built in and the people"}, {"start": 1456.96, "end": 1463.6, "text": " that built them again this this focus like who builds the AI system i don't care i care what it does"}, {"start": 1463.6, "end": 1470.48, "text": " right as much as if if i hear an argument for or against something i don't care who makes the argument"}, {"start": 1470.48, "end": 1476.8, "text": " right i care what the argument says this is it's like an ad hominem attack for an entire community"}, {"start": 1477.6, "end": 1488.48, "text": " that's kind of how this this article this report um shows or it appears to me um so they say"}, {"start": 1488.48, "end": 1494.16, "text": " currently large scale AI systems are developed almost exclusively in a handful of technology"}, {"start": 1494.16, "end": 1499.44, "text": " companies in a small set of a university laboratories spaces that in the west tend to be extremely"}, {"start": 1499.44, "end": 1504.64, "text": " white affluent technically oriented and male so yeah they're they're problem that's"}, {"start": 1505.28, "end": 1511.28, "text": " their fundamental problem here that these these spaces are skewed in one direction uh"}, {"start": 1511.28, "end": 1516.64, "text": " interestingly enough their problem is not so much that it's that they're all in the same place"}, {"start": 1516.64, "end": 1524.4, "text": " right that they all live like 20 miles from each other in around San Francisco that's that seems"}, {"start": 1524.4, "end": 1530.16, "text": " to be not a problem at all as long as we get to like enough people of color and women into these 20"}, {"start": 1530.16, "end": 1540.96, "text": " miles um but yeah so that that's pointing out the the problem here or the yeah kind of issue they have"}, {"start": 1540.96, "end": 1553.3600000000001, "text": " all right so they go on um just kind of want to highlight again they say both within the spaces"}, {"start": 1553.3600000000001, "end": 1558.56, "text": " where AI is being created and the logic of hey how AI systems are being designed so 
paralleling"}, {"start": 1558.56, "end": 1564.24, "text": " the two things the cost of bias harassment and discrimination are born by the same people"}, {"start": 1564.24, "end": 1571.36, "text": " um gender minorities people of color other under present groups then they also say similarly"}, {"start": 1571.36, "end": 1578.08, "text": " the benefits of such systems from profit to efficiency a group primarily to those"}, {"start": 1578.08, "end": 1583.84, "text": " are already in positions of power tend to be white educated and male so um"}, {"start": 1585.92, "end": 1594.0, "text": " they again they say the this points to a systematic relationship between patterns of exclusion"}, {"start": 1594.0, "end": 1598.64, "text": " within the field of AI in the industry driving its production on the one hand and the biases"}, {"start": 1598.64, "end": 1604.0, "text": " that manifest in the logics and applications of the technologies on the other and they try to make"}, {"start": 1604.0, "end": 1612.4, "text": " this connection because they say the cost and the benefit of these two things are overlap"}, {"start": 1612.4, "end": 1618.08, "text": " in the people that worry costs any benefits and I really again it's just a parallel but I really"}, {"start": 1618.08, "end": 1625.9199999999998, "text": " even don't think that's true because they kind of um the kind of argue against themselves later"}, {"start": 1625.9199999999998, "end": 1633.04, "text": " so they always say we have to look at again they shoot against the take much more than the"}, {"start": 1633.04, "end": 1642.48, "text": " technically driven problem solving um they they point to this uh so a research requires looking at"}, {"start": 1642.48, "end": 1649.6, "text": " gender and racist categories within which you must think well in short sorry studies of discriminatory"}, {"start": 1649.6, "end": 1656.48, "text": " systems we need to ask who is harmed who benefits who gets to decide so it's it's kind of who"}, {"start": 1657.04, "end": 1666.96, "text": " bears the cost who bears the benefits and who has the power um so that's yeah and and again it's"}, {"start": 1666.96, "end": 1674.24, "text": " we seek to understand how AI disadvantages some we also consider how it works to the advantage of"}, {"start": 1674.24, "end": 1681.2, "text": " others so keep keep that in mind that's kind of the lens through how they analyze the this thing"}, {"start": 1681.2, "end": 1687.92, "text": " again one that acknowledges power relationships and centers equity and justice that's the they"}, {"start": 1687.92, "end": 1698.0800000000002, "text": " want to see this bigger picture um so that's yeah keep yeah again keep that in mind so they go into"}, {"start": 1698.0800000000002, "end": 1705.44, "text": " a section called which humans are in the loop how workforces and AI systems interact so this"}, {"start": 1705.44, "end": 1711.28, "text": " kind of uh from the title of this section you think okay here's where we get in here's where we"}, {"start": 1711.28, "end": 1719.52, "text": " make the argument and they start by um listing examples of how AI systems can be discriminatory"}, {"start": 1720.56, "end": 1729.2, "text": " and first they go into an example of amazon had developed an experimental hiring tool to help"}, {"start": 1729.2, "end": 1737.76, "text": " rank job candidates um by learning from its past references amazon hoped that the r\u00e9sum\u00e9"}, {"start": 1737.76, "end": 1743.44, "text": " scanning tool be able to efficiently identify 
qualified applicants comparing their applications"}, {"start": 1743.44, "end": 1750.16, "text": " to previous hires the system quickly began to downgrade r\u00e9sum\u00e9s from candidates who attended"}, {"start": 1750.16, "end": 1758.08, "text": " all women's colleges along with any r\u00e9sum\u00e9s that included the word women's after uncovering"}, {"start": 1758.08, "end": 1764.24, "text": " this bias amazon engineers tried to fix the problem by directing the system to treat these terms"}, {"start": 1764.24, "end": 1771.36, "text": " in a neutral manner the company eventually abandoned the tool when they were unable to ensure"}, {"start": 1771.36, "end": 1778.24, "text": " that the algorithm would not be biased against women gender-based discrimination was built too"}, {"start": 1778.24, "end": 1784.08, "text": " deeply within the system and in amazon's past hiring practices to be operated using a purely"}, {"start": 1784.08, "end": 1790.72, "text": " technical approach so this just the way is written i find to be i find to be quite dishonest but"}, {"start": 1790.72, "end": 1796.16, "text": " let's let's analyze what what happened here so their final claim is that gender-based"}, {"start": 1796.16, "end": 1802.88, "text": " discrimination was built too deeply within the system um to be operated using a purely"}, {"start": 1802.88, "end": 1808.88, "text": " technical approach so this is one of their arguments they say technical approaches they don't"}, {"start": 1808.88, "end": 1816.48, "text": " help because the amazon engineers tried to fix the problem all right but were when they were"}, {"start": 1816.48, "end": 1824.64, "text": " unable to ensure that the algorithm would not be biased against women um so if you if you read this"}, {"start": 1825.84, "end": 1831.2, "text": " you really i mean i really get the impression that's not what happened here what happened here"}, {"start": 1831.2, "end": 1840.0, "text": " most probably is amazon built this tool okay and it fed in its past hires and we know of issues"}, {"start": 1840.0, "end": 1847.92, "text": " of like data set bias bias inherent in data sets so if your data set is skewed the um the AI tends"}, {"start": 1847.92, "end": 1854.96, "text": " to pick up on the skewed data set and becomes skewed itself okay so i actually would argue that"}, {"start": 1855.76, "end": 1863.68, "text": " most or all of the examples they stay stayed in here um are examples of such biased data sets"}, {"start": 1863.68, "end": 1871.04, "text": " and not so the the cause of the bias is the data set that they are strained on and not the person"}, {"start": 1871.04, "end": 1879.52, "text": " that ran the code or built the algorithm to train it on or built the deployment um and so"}, {"start": 1881.28, "end": 1885.8400000000001, "text": " but it doesn't matter your your your your your amazon you built this tool and you realize oh"}, {"start": 1885.84, "end": 1895.76, "text": " oh it discriminates against people having women's on their cv um so this is a pretty bad PR-wise"}, {"start": 1895.76, "end": 1901.04, "text": " so you tell your engineers engineers fix the problem so the engineers go fix the problem"}, {"start": 1901.04, "end": 1906.48, "text": " they come back and say okay we fix the problem and then what you do is you say okay engineers"}, {"start": 1906.48, "end": 1911.1999999999998, "text": " can you ensure me that the algorithm would not be biased against women because"}, {"start": 1911.2, "end": 1919.68, "text": " if only the slightest bias 
exists if only it doesn't even have to be if one journalist finds one"}, {"start": 1919.68, "end": 1928.56, "text": " example where there is a down rank because i add the word women's um then we are screwed right"}, {"start": 1928.56, "end": 1933.52, "text": " and the engineers will say no we can't guarantee that it's a deep learning system or something right"}, {"start": 1933.52, "end": 1940.8, "text": " we we can't like give you a proof that it's not biased if you're smart executive at that point"}, {"start": 1940.8, "end": 1947.84, "text": " you'll scrap the tool because the potential PR downside are just huge and probably they've"}, {"start": 1947.84, "end": 1953.44, "text": " also realized it's not that handy to have this this tool compared to their recruiters doing their"}, {"start": 1953.44, "end": 1957.6, "text": " job because their recruiters might actually be you know good and have been doing this for a while"}, {"start": 1958.48, "end": 1965.68, "text": " so to the fact that this tool was scrapped is probably much more a result of a PR disaster"}, {"start": 1965.68, "end": 1973.3600000000001, "text": " um but it also independent of that to say gender based discrimination sorry"}, {"start": 1973.8400000000001, "end": 1980.0800000000002, "text": " gender based discrimination was built to deploy within the system to be uprooted using a purely"}, {"start": 1980.0800000000002, "end": 1990.0, "text": " technical approach it's just i mean what is what is this uh this is just trying to discredit this"}, {"start": 1990.0, "end": 1996.56, "text": " kind of technical technical going about solving this problem but i'm pretty sure if someone comes"}, {"start": 1996.56, "end": 2002.4, "text": " to me it says here i have this tool and i can mathematically prove to you that it's not biased then"}, {"start": 2002.72, "end": 2011.44, "text": " um it's not then the problem is solved and also i really don't see how the person training the"}, {"start": 2011.44, "end": 2018.24, "text": " algorithm or the to person researching such an algorithm has any influence over how the algorithm"}, {"start": 2018.24, "end": 2024.48, "text": " works because they're not the ones making the data set or if they are yeah then they can make a better"}, {"start": 2024.48, "end": 2031.84, "text": " data set also if a person comes and makes a better data set that will fix the problem and it doesn't"}, {"start": 2031.84, "end": 2038.08, "text": " matter what skin color the person has that makes the better data set so all of this this link is just"}, {"start": 2038.72, "end": 2046.48, "text": " not demonstrated here or anywhere here at all but this this here is the closest Amazon that this"}, {"start": 2046.48, "end": 2055.52, "text": " report actually comes to making this point so before um i drew this thing workforce AI bias right"}, {"start": 2055.52, "end": 2062.96, "text": " so this this link since it's here the AI system is used for hiring the workforce so at least one"}, {"start": 2062.96, "end": 2072.48, "text": " could make a claim that this link is somewhat demonstrated um but i um this it's a weak case"}, {"start": 2072.48, "end": 2078.64, "text": " i would agree but this is the closest they come so the and but then to go this direction you have"}, {"start": 2078.64, "end": 2085.44, "text": " to somehow argue well the the workforce somehow makes the AI system biased no the workforce"}, {"start": 2085.44, "end": 2092.56, "text": " influences the data set i if the AI is trained so if a hiring AI how do you train a 
hiring AI"}, {"start": 2092.56, "end": 2099.6, "text": " you optimally train it on the performance so this this employee here is going to have a performance"}, {"start": 2099.6, "end": 2105.6, "text": " over time right and the AI system will look at that performance over time so if the AI system"}, {"start": 2105.6, "end": 2112.64, "text": " even if it's initially biased because it learns from the recruiters it will learn that okay actually"}, {"start": 2112.64, "end": 2120.72, "text": " if i always forego these women um then i don't get as much performance of a workforce so i should"}, {"start": 2120.72, "end": 2129.52, "text": " correct for that so if you train AI system on a good metric then um and then this problem will"}, {"start": 2129.52, "end": 2137.36, "text": " leave even out itself so but again this yeah this this is this could be considered like one point"}, {"start": 2137.36, "end": 2143.04, "text": " in the argument but i think it's a very weak point and only because the AI system is actually"}, {"start": 2143.04, "end": 2149.52, "text": " used for hiring where i think the point they're making is a much larger one is the general bias in"}, {"start": 2149.52, "end": 2155.6, "text": " the AI systems contributes to the workforce imbalances and there there you somehow have to say"}, {"start": 2155.6, "end": 2161.92, "text": " that okay the AI system somehow influences society at large and society at large then"}, {"start": 2163.12, "end": 2170.88, "text": " go leads to the workforce being skewed and i don't yeah it's just not strong enough in my opinion"}, {"start": 2171.6, "end": 2178.16, "text": " and the other direction also isn't isn't strong here but again the examples only get weaker"}, {"start": 2178.16, "end": 2185.2, "text": " from here on um they go on they say this is just one of many examples that show how the function"}, {"start": 2185.2, "end": 2190.08, "text": " the logic of a given technology echo the gender and racial dynamics of the industry that produced it"}, {"start": 2190.08, "end": 2194.8799999999997, "text": " here yeah this that's the claim they're making echo the gender and racial dynamics and they're"}, {"start": 2194.8799999999997, "end": 2202.48, "text": " actually making a stronger claim namely a causal claim um they give the the other example of the"}, {"start": 2202.48, "end": 2208.3199999999997, "text": " amazon's recognition facial analysis service previously demonstrated gender and racial biases"}, {"start": 2208.3199999999997, "end": 2214.64, "text": " worse than those of comparable tools so it failed to see dark skinned women while being most proficient"}, {"start": 2214.64, "end": 2221.6, "text": " at detecting likes light skinned men um and they later going to this example again where they"}, {"start": 2222.16, "end": 2228.7999999999997, "text": " basically also stayed yes this is an issue of the data set uh the data set being much more"}, {"start": 2228.7999999999997, "end": 2234.96, "text": " comprised of white men um and they say but then they you have to kind of make the turnaround"}, {"start": 2234.96, "end": 2242.4, "text": " argument and say well the data set is a reflection of society and society you know part of society"}, {"start": 2242.4, "end": 2248.48, "text": " is the workforce and it's just not I mean it's it again this argument only works if you already"}, {"start": 2248.48, "end": 2255.44, "text": " believe the conclusion um otherwise it there's actually no argument there or no solid one um"}, {"start": 2256.96, "end": 
2263.28, "text": " but what they do here is they say amazon's initial response to such criticism has been to try"}, {"start": 2263.28, "end": 2271.52, "text": " and discredit the research behind it this reaction or let's let's first discuss this so the"}, {"start": 2271.52, "end": 2279.04, "text": " amazon yeah amazon of course being the accused here and a multi billion dollar company"}, {"start": 2279.04, "end": 2286.96, "text": " and the criticism is something that is PR wise very bad for them they discredit the research"}, {"start": 2286.96, "end": 2291.6, "text": " try to discredit the research behind it it's understandable that this could be dishonest from"}, {"start": 2291.6, "end": 2295.92, "text": " amazon's side right they i mean they're getting attacked it's like no the tobacco companies"}, {"start": 2295.92, "end": 2301.12, "text": " trying to discredit the smoking research but still i mean that doesn't mean it's wrong it could"}, {"start": 2301.12, "end": 2307.68, "text": " actually be bad research right so you have to actually go and look at what's amazon saying what"}, {"start": 2307.68, "end": 2314.7999999999997, "text": " is the research really doing is amazon right or wrong um i'm completely open that amazon is"}, {"start": 2314.7999999999997, "end": 2320.96, "text": " is wrong here but you still have to go look and this citation here i've tried this citation here"}, {"start": 2320.96, "end": 2328.64, "text": " this one isn't to a to amazon's response it's to like a a medium article and the medium article"}, {"start": 2328.64, "end": 2333.52, "text": " doesn't even include amazon's response i've i've looked maybe haven't seen it it doesn't also"}, {"start": 2333.52, "end": 2339.6, "text": " doesn't link amazon's response maybe link something that links something or that includes it"}, {"start": 2339.6, "end": 2346.24, "text": " in some way but basically this medium article only states here amazon has been denying this or"}, {"start": 2346.24, "end": 2352.4, "text": " amazon has been critical of this and if you state such a sentence amazon's initial response to"}, {"start": 2352.4, "end": 2358.56, "text": " such criticism has been to try and discredit the research behind it i at least expect the citation"}, {"start": 2358.56, "end": 2366.08, "text": " to lead me to amazon's response so that i can verify what they're saying right so this i mean"}, {"start": 2366.08, "end": 2372.96, "text": " i don't know willing to chalk it up to i don't know incompetence uh rather than malice"}, {"start": 2375.04, "end": 2382.16, "text": " right but then they go on and they say this reaction is evidence of the wider problem the research"}, {"start": 2382.16, "end": 2389.2799999999997, "text": " was conducted by two well-regarded AI researchers who are women of color by attempting to publicly"}, {"start": 2389.2799999999997, "end": 2395.52, "text": " discredit their expertise and research methods amazon is reinforcing the same kinds of prejudices"}, {"start": 2395.52, "end": 2402.72, "text": " and erasure that the research critiques yeah here you go straight to the identity of the researchers"}, {"start": 2403.2, "end": 2409.12, "text": " like play the race card straight out i mean this is this is maximum dishonesty right"}, {"start": 2409.12, "end": 2416.0, "text": " except if amazon said something like well these women of color clearly because they're women of color"}, {"start": 2416.0, "end": 2421.68, "text": " they have no idea what they're doing or something like this this is basically 
it's coded language"}, {"start": 2422.4, "end": 2427.3599999999997, "text": " for saying either saying you're not allowed to criticize people of color because"}, {"start": 2428.4, "end": 2437.3599999999997, "text": " they're a minority or you're basically saying amazon is racist and that's why they criticize them"}, {"start": 2437.36, "end": 2442.88, "text": " they they just don't take them seriously because they're women of color i mean both are both are"}, {"start": 2442.88, "end": 2451.36, "text": " apart this is just dishonesty uh really stated here to i mean again i'm perfectly willing to accept"}, {"start": 2451.36, "end": 2457.52, "text": " that amazon's critique of this research is wrong and is not well intended because they're the ones"}, {"start": 2457.52, "end": 2466.8, "text": " attacked but you still have to examine it rather than say well they they shoot against women of color"}, {"start": 2466.8, "end": 2473.28, "text": " and therefore somehow that makes their their counter argument irrelevant or even racist or"}, {"start": 2473.84, "end": 2481.04, "text": " something that's i don't know i find this dishonest um yeah i don't know about you"}, {"start": 2483.2000000000003, "end": 2493.92, "text": " going on so they um they go on and state a number of examples of kind of examples of bias"}, {"start": 2493.92, "end": 2500.16, "text": " and discrimination in the workforce and they a lot of time they make a mixture kind of of the"}, {"start": 2500.16, "end": 2508.48, "text": " kind of gender and race imbalance in a workforce and things like sexual harassment not being taken"}, {"start": 2508.48, "end": 2518.7200000000003, "text": " seriously by the companies and also the things like gender uh or race pay gaps which i'm open to"}, {"start": 2518.72, "end": 2526.24, "text": " to accept that the these things exist and are even intertwined um but just to tell you what's"}, {"start": 2526.24, "end": 2531.52, "text": " happening because we're we're kind of skipping but it's kind of a a mixture of these things"}, {"start": 2532.48, "end": 2538.3999999999996, "text": " so they say these issues are systemic there's a close relationship between these workplaces with"}, {"start": 2538.3999999999996, "end": 2545.2799999999997, "text": " discriminatory practices and discriminatory tools a feedback loop that is shaping the i industry"}, {"start": 2545.28, "end": 2552.0800000000004, "text": " and its tools so again here to state i think i've stated it enough now that or demonstrated enough"}, {"start": 2552.0800000000004, "end": 2558.5600000000004, "text": " that i'm really representing their arguments as they intended it to namely that there is this"}, {"start": 2558.5600000000004, "end": 2565.76, "text": " kind of causal links and loop between these two things um again they shoot against the fairness"}, {"start": 2565.76, "end": 2573.28, "text": " literature by saying from this perspective sorry from this perspective locating individual biases"}, {"start": 2573.28, "end": 2579.36, "text": " within giving giving technical systems and attempting to fix them by tweaking the system becomes an"}, {"start": 2579.36, "end": 2586.2400000000002, "text": " exercising futility um only by examining discrimination through the lens of social logics who"}, {"start": 2586.2400000000002, "end": 2592.0, "text": " benefits who it harms and how can we see the workings of these systems in the context of existing"}, {"start": 2592.0, "end": 2598.1600000000003, "text": " power relationships again they say these 
issues aren't technically fixing these systems won't help"}, {"start": 2598.16, "end": 2604.8799999999997, "text": " um if that's the problem and yeah i agree if that causal link actually exists then technically fixing"}, {"start": 2604.8799999999997, "end": 2611.6, "text": " the system might not solve the problem not even sure i mean if you technically fix a system"}, {"start": 2612.16, "end": 2618.08, "text": " like this then you technically break the causal link and thereby fix the problem i would"}, {"start": 2619.52, "end": 2624.8799999999997, "text": " oh sure but again this is based on the hypothesis that they've already reached like demonstrated"}, {"start": 2624.88, "end": 2631.44, "text": " their their conclusion at which they haven't and which they are not in the entire article"}, {"start": 2633.6800000000003, "end": 2641.6, "text": " uh yeah so the next section goes into who makes i so i don't know about you but this section was"}, {"start": 2641.6, "end": 2652.4, "text": " titled how workforces and AI systems interact and apart from one the AI system being used for"}, {"start": 2652.4, "end": 2658.1600000000003, "text": " hiring the workforce which is uh said this one instance where actually there could be"}, {"start": 2659.28, "end": 2665.28, "text": " one causal direction from bias to diff minister representation in the workforce other than that"}, {"start": 2665.92, "end": 2672.2400000000002, "text": " there isn't really anything in there that really shows how these two interact especially in"}, {"start": 2672.2400000000002, "end": 2679.6800000000003, "text": " in a causal way all right the next section is called who makes AI and it's broadly about the um"}, {"start": 2679.68, "end": 2689.04, "text": " about the gender and race imbalances or it's not unequal representation in the workforce and we're"}, {"start": 2689.04, "end": 2698.3199999999997, "text": " going to skip this um diversity statistics that kind of discuss that diversity statistics statistics"}, {"start": 2698.3199999999997, "end": 2707.44, "text": " of companies aren't really accurate or can we you know massaged kind of by the companies which you know"}, {"start": 2707.44, "end": 2715.6, "text": " is true uh definitely companies will always try to maximize their profits and even if they"}, {"start": 2716.2400000000002, "end": 2722.08, "text": " you know give out such a report um so that definitely critical thinking is an order"}, {"start": 2723.12, "end": 2731.04, "text": " all right so the next section is called the discrimination feedback loop right if so if in"}, {"start": 2731.04, "end": 2735.76, "text": " the earlier section you felt like here we're going to the meat then you must feel with this title"}, {"start": 2735.76, "end": 2743.0400000000004, "text": " like okay we're actually going to see how this loop works and how the two things are really"}, {"start": 2743.0400000000004, "end": 2752.2400000000002, "text": " linked like how one causes the other and vice versa so let's jump in they say AI systems"}, {"start": 2753.0400000000004, "end": 2759.36, "text": " increasingly play a role in our social and political institutions including education,"}, {"start": 2759.36, "end": 2766.1600000000003, "text": " healthcare, hiring, criminal justice um yes therefore we need to consider the relationship between"}, {"start": 2766.1600000000003, "end": 2773.44, "text": " the workplace diversity crisis and the problems with bias and discrimination in AI systems uh"}, {"start": 2774.88, "end": 2784.1600000000003, 
"text": " no why i don't see how therefore but yeah so i don't see how therefore we need to consider the"}, {"start": 2784.16, "end": 2789.44, "text": " relationship okay if there's a relationship we need to consider whether there's a relationship"}, {"start": 2789.44, "end": 2796.0, "text": " okay granted um so they say fairness accountability and transparency research is playing an"}, {"start": 2796.0, "end": 2801.3599999999997, "text": " emerging role and what they mean here is the aspect of fairness accountability and transparency"}, {"start": 2801.3599999999997, "end": 2807.3599999999997, "text": " research that shows that there is a problem so i told you there's two sides one side is showing"}, {"start": 2807.3599999999997, "end": 2812.3199999999997, "text": " there is a problem in current systems and the other side is trying to fix them so they're very"}, {"start": 2812.32, "end": 2819.84, "text": " much fans of the side that shows that there is a problem and they use show some of these problems"}, {"start": 2819.84, "end": 2826.0, "text": " here we've already seen some but they show some more like Facebook's ad delivery systems let users"}, {"start": 2826.0, "end": 2833.6000000000004, "text": " to be shown as for housing and employment in a discriminatory manner um so giving 2019 study"}, {"start": 2833.6000000000004, "end": 2838.1600000000003, "text": " found significant racial bias and a widely used commercial algorithm used to determine whether"}, {"start": 2838.16, "end": 2845.52, "text": " patients will be enrolled in care management programs um so the these are these are just examples"}, {"start": 2845.52, "end": 2858.3199999999997, "text": " of these AI systems being biased and um they so they go into this say taking a contextualized"}, {"start": 2858.3199999999997, "end": 2864.08, "text": " view may enable more extensifican and the contextualized view they when they say they say they"}, {"start": 2864.08, "end": 2870.56, "text": " mean anything more than just a technical approach at solving these problems a more extensif"}, {"start": 2870.56, "end": 2876.16, "text": " account of bias to emerge future work could examine the politics of system design study how AI"}, {"start": 2876.16, "end": 2883.68, "text": " systems insituated reality and study AI systems insituated realities ask why a system was designed"}, {"start": 2883.68, "end": 2890.16, "text": " in a particular way how it was constructed whose interest it shaped shaped by the metrics"}, {"start": 2890.16, "end": 2896.64, "text": " in which its success or failure is assessed rather than solely focusing on improving existing"}, {"start": 2896.64, "end": 2904.08, "text": " data sets or individual algorithms um yeah I agree I mean we always have to we always have to"}, {"start": 2904.08, "end": 2911.6, "text": " pay attention to these things especially like um looking at the metrics by which its success"}, {"start": 2911.6, "end": 2921.52, "text": " or failure is assessed but a lot of times this is this um is rather straightforward in in kind of"}, {"start": 2921.52, "end": 2928.24, "text": " if you look at the metric the metric most often especially in commercial applications is money"}, {"start": 2928.24, "end": 2937.2799999999997, "text": " right so the metric of like an ad showing system like if I have a system to recommend ads to people"}, {"start": 2937.28, "end": 2944.2400000000002, "text": " show people ads and personalize them and so on I simply want to maximize my revenue so I want"}, {"start": 
2944.2400000000002, "end": 2951.2000000000003, "text": " to sell someone something and everything I want to know is how likely is it that person is going"}, {"start": 2951.2000000000003, "end": 2961.84, "text": " to buy that thing right that's basically yeah so in essence sometimes it's really valuable to"}, {"start": 2961.84, "end": 2974.1600000000003, "text": " consider what capitalism is so in capitalism in so capitalism these kind of this the system"}, {"start": 2974.1600000000003, "end": 2980.2400000000002, "text": " we're working on is kind of a form of limited capitalism but mostly mostly capitalism and um"}, {"start": 2981.76, "end": 2989.52, "text": " capitalism is very greedy so capitalism all corporations want to do basically is make money"}, {"start": 2989.52, "end": 2993.92, "text": " and that is and on the other side you have discrimination"}, {"start": 2997.68, "end": 3003.04, "text": " so discrimination meaning these unequal represent like unequal distribution"}, {"start": 3003.6, "end": 3008.64, "text": " actively so and often sometimes these go hand in hand sometimes you can make more money by"}, {"start": 3008.64, "end": 3013.28, "text": " discriminating against a certain type of people and that's that's a really bad scenario like"}, {"start": 3013.28, "end": 3020.32, "text": " that's a very like this is really something where we need to take action but a lot of times a lot"}, {"start": 3020.32, "end": 3029.52, "text": " of times these two things stand in opposition to each other so a little arrow here non-compatible"}, {"start": 3030.5600000000004, "end": 3041.84, "text": " that means if I want to sell someone something then I maximize my profit by not caring by"}, {"start": 3041.84, "end": 3049.36, "text": " accurately assessing how likely is it that person buys that thing um if I want to discriminate"}, {"start": 3049.36, "end": 3055.1200000000003, "text": " here if I want to discriminate start discriminating according to skin color saying like no I don't"}, {"start": 3055.1200000000003, "end": 3062.0, "text": " like that this person with this skin color is able to buy this product I want to kind of keep"}, {"start": 3062.0, "end": 3068.96, "text": " them down and so on then I forego profit right then I actually even though this person could buy"}, {"start": 3068.96, "end": 3078.08, "text": " this thing I um forego that so often these things are in direct opposition to each other also if I"}, {"start": 3078.88, "end": 3085.12, "text": " in in charge of hiring and I don't like people of a certain gender but they would actually be"}, {"start": 3085.12, "end": 3092.2400000000002, "text": " really really good whatever good employees so I forego that that means I'm getting I pay more"}, {"start": 3092.24, "end": 3104.3199999999997, "text": " for less qualified people just because I'm I'm biased and I'm down ranking unjustifiably"}, {"start": 3104.3199999999997, "end": 3110.0, "text": " these people of the gender I don't like so oftentimes you have to ask yourself are people"}, {"start": 3110.64, "end": 3118.24, "text": " fundamentally greedy or discriminatory which are they more if push comes to shove would they"}, {"start": 3118.24, "end": 3127.2, "text": " rather have more money or would they rather keep their own race and gender group in power um and"}, {"start": 3128.08, "end": 3135.52, "text": " with just yeah so so the and you have to ask this of corporations you have to ask this of people"}, {"start": 3135.52, "end": 3144.72, "text": " and in my experience and view 
I'll like people are much much more greedy than they are willing to"}, {"start": 3144.72, "end": 3154.0, "text": " discriminate um and give up money for discrimination and so if if we look at metrics by which it's"}, {"start": 3154.0, "end": 3160.9599999999996, "text": " success or failure of AI systems are designed then I would I would argue a lot of the times metrics"}, {"start": 3160.9599999999996, "end": 3169.8399999999997, "text": " actually um profit incentives and especially if we look at data set construction if there is a"}, {"start": 3169.84, "end": 3176.0, "text": " skewed data set that makes my AI system be biased that actually loses me money and the company would"}, {"start": 3176.0, "end": 3183.6800000000003, "text": " profit a lot from building a better data set so looking at kind of metrics uh actually makes a lot"}, {"start": 3183.6800000000003, "end": 3190.56, "text": " of sense to me and very much in favor of that and I think by by designing accurate metrics and"}, {"start": 3190.56, "end": 3196.32, "text": " then getting the best possible information the best possible data sets to maximize these metrics"}, {"start": 3196.32, "end": 3201.84, "text": " will oftentimes actually eliminate such forms of discrimination again there are situations where"}, {"start": 3201.84, "end": 3209.84, "text": " they don't we have to be very cognizant of these they go into this and they say um also examine"}, {"start": 3209.84, "end": 3214.0, "text": " more thoroughly how societal discrimination surfaces in data provenance examining the history"}, {"start": 3214.0, "end": 3219.6800000000003, "text": " and process of data set construction um and considering how cultural norms and stereotypes were"}, {"start": 3219.68, "end": 3226.3999999999996, "text": " numerated and represented at the time of data creation this is a big issue yes so the data set construction"}, {"start": 3226.3999999999996, "end": 3232.56, "text": " kind of at the time of data creation and so on this is a big issue in these systems and a lot of bias"}, {"start": 3232.56, "end": 3238.64, "text": " and I would argue most of the bias we've seen here arises from corrupt data sets and from data sets"}, {"start": 3238.64, "end": 3244.0, "text": " that were constructed in an already biased way and the AI system trained on these data sets simply"}, {"start": 3244.0, "end": 3253.92, "text": " replicates this um this bias so they I think that's very correct here um they go into this example"}, {"start": 3253.92, "end": 3261.36, "text": " they say the labeled faces in the wild data set contains over 15,000 images only 7% of images"}, {"start": 3261.36, "end": 3269.84, "text": " are of black people and this is because um these the media landscape of the early 2000s these images"}, {"start": 3269.84, "end": 3275.04, "text": " were gathered from the news media at the time predominantly featured white men in positions of"}, {"start": 3275.04, "end": 3283.2000000000003, "text": " celebrity and power this exactly so if you train a system on these and this data set the system"}, {"start": 3283.2000000000003, "end": 3291.04, "text": " will inherit this bias yeah so this is a classic example of a corrupt data set um also I mean this"}, {"start": 3291.04, "end": 3297.52, "text": " doesn't only with with race and gender this is also if you'd like take pictures from IMDB yes"}, {"start": 3297.52, "end": 3302.72, "text": " you know a lot of this currently sell up a data set that is used in all the GAN research"}, {"start": 3302.72, "end": 
3309.04, "text": " is collected from IMDB you probably have overly beautiful like pretty face people"}, {"start": 3309.92, "end": 3316.24, "text": " on there like so the ureia system your generative model is only going to produce mostly pretty face"}, {"start": 3316.24, "end": 3325.44, "text": " people since movie stars tend to be a lot prettier than average humans um so the the kind of"}, {"start": 3325.44, "end": 3336.16, "text": " data set construction process I think is currently the biggest source of bias in AI but that also"}, {"start": 3336.16, "end": 3340.48, "text": " and it's interesting that they go into this here and they kind of want to make the point that"}, {"start": 3340.48, "end": 3348.08, "text": " this is you know this is because society and you know power in society the data set reflects that"}, {"start": 3348.08, "end": 3355.04, "text": " but I would argue if someone makes a data set that doesn't have this bias then the problem is"}, {"start": 3355.04, "end": 3360.96, "text": " solved and I don't care who makes the data set so the link between the workforce and the bias is"}, {"start": 3360.96, "end": 3366.48, "text": " really broken by an argument like this because as soon as we have a correct data set and on bias"}, {"start": 3366.48, "end": 3376.16, "text": " data set we can mitigate the bias and they go they go into this here um they say sorry"}, {"start": 3376.16, "end": 3391.92, "text": " yeah they say down here they say these people this um these researchers have looked at these facial"}, {"start": 3391.92, "end": 3397.6, "text": " recognition systems and they assessed this what we saw earlier higher error rates for darker"}, {"start": 3397.6, "end": 3403.2, "text": " skinned women than for any other group lowest error rates for light skinned men to measure this"}, {"start": 3403.2, "end": 3410.0, "text": " disparity these researchers developed a new data set that is more balanced both in terms of gender"}, {"start": 3410.0, "end": 3416.8799999999997, "text": " and skin color good like problem like make a larger data set to actually train on and then"}, {"start": 3417.4399999999996, "end": 3424.7999999999997, "text": " problem solved like and I don't I don't care at all what race and what gender these people are"}, {"start": 3424.8, "end": 3434.2400000000002, "text": " well done like good people make good data set like this and then we've solved the problem what's"}, {"start": 3434.2400000000002, "end": 3442.32, "text": " the problem here like why would you ever care what these people look like if they if they do good work"}, {"start": 3443.2000000000003, "end": 3448.4, "text": " that's it to me that's this just this this this actually breaks their own argument I don't know"}, {"start": 3448.4, "end": 3457.52, "text": " why they included here but um yeah to me that the to then suggest that there's a link to the"}, {"start": 3457.52, "end": 3466.4, "text": " workforces if it's if here is obvious that if you fix the data set you can fix the the recognition"}, {"start": 3466.4, "end": 3486.1600000000003, "text": " system all right so we'll go on here jump a couple more paragraphs except one they say they"}, {"start": 3486.1600000000003, "end": 3491.6, "text": " shoot again against this kind of say to this point a focus on fixing technical systems in isolation"}, {"start": 3491.6, "end": 3496.56, "text": " without examining their broader context of use and power and dynamics that a tensor choose is"}, {"start": 3496.56, "end": 3503.7599999999998, "text": " not 
limited in its intervention it can actively cause harm so if you fix the problem in a technical"}, {"start": 3503.7599999999998, "end": 3510.0, "text": " manner they argue here it can actively cause harm and the example they give is that facial and"}, {"start": 3510.0, "end": 3518.88, "text": " image recognition systems they are often applied in service of police surveillance which disproportionately"}, {"start": 3518.88, "end": 3527.6, "text": " harms poor people and communities of color so the this person this a quote from this person that"}, {"start": 3527.6, "end": 3534.08, "text": " says is this not social progress to make black people equally visible to software that will"}, {"start": 3534.08, "end": 3539.6800000000003, "text": " inevitably be further weaponized against us we are considered criminal and more"}, {"start": 3539.68, "end": 3548.3199999999997, "text": " available by orders of magnitude whatever claim to a ride of privacy that we may have is diminished"}, {"start": 3548.3199999999997, "end": 3553.68, "text": " by a state that believes we must always be watched and seen so this is an example where by improving"}, {"start": 3553.68, "end": 3558.72, "text": " the facial recognition for black people it makes them makes a police better at surveilling them"}, {"start": 3558.72, "end": 3565.2, "text": " which is true and then it is an ethical problem that the police is able to use these facial recognition"}, {"start": 3565.2, "end": 3571.2799999999997, "text": " systems to surveil people that's a massive privacy problem it's a massive problem in how much"}, {"start": 3571.2799999999997, "end": 3577.7599999999998, "text": " the state is allowed to overreach and so on so I think it's a discussion itself but here they"}, {"start": 3578.72, "end": 3586.16, "text": " they argue because at the very beginning I asked you to remember the this whole notion of we"}, {"start": 3586.16, "end": 3593.04, "text": " always have to look at who benefits from the way the AI system is constructed who who is harmed"}, {"start": 3593.04, "end": 3600.4, "text": " from that who is who benefits from how the metrics are shaped and so on in this case we actually"}, {"start": 3600.4, "end": 3609.84, "text": " have a perfect example where if the if the face recognition system is very inaccurate for black"}, {"start": 3609.84, "end": 3618.88, "text": " people's faces that actually helps them in the societal context so by logic of this report here"}, {"start": 3618.88, "end": 3628.2400000000002, "text": " that must mean that somehow the bias like the the bias works for them and thereby the system"}, {"start": 3628.8, "end": 3633.92, "text": " is good or something like this and by fixing it you actually make it worse yeah they say it can"}, {"start": 3633.92, "end": 3640.8, "text": " actively cause harm so I think this is a pretty much arguing against themselves earlier where they"}, {"start": 3640.8, "end": 3648.2400000000002, "text": " say oh we always have to look at who benefits from the system yeah here if face recognition system"}, {"start": 3648.24, "end": 3658.0, "text": " can't recognize you actually benefit so I don't think that argument works in any case except if"}, {"start": 3658.0, "end": 3669.52, "text": " you only look at it when you want to look at it all right so this we're we're gonna jump a couple"}, {"start": 3669.52, "end": 3678.64, "text": " of sections here but the the core thing here was the feedback loop and again the feedback loop"}, {"start": 3678.64, "end": 3685.2, "text": " isn't 
demonstrated at all here just examples of systems that are biased and of datasets that are"}, {"start": 3685.2, "end": 3692.56, "text": " biased or because of datasets that are biased but there's no demonstration of how the workforce"}, {"start": 3692.56, "end": 3700.56, "text": " how I mean yeah to take this previous argument so the workforce is supposedly supremely white"}, {"start": 3701.52, "end": 3712.16, "text": " and it makes a face recognition system that makes that is performing poorly for darker skin people"}, {"start": 3713.44, "end": 3718.48, "text": " and that actually in this context of police surveillance helps the darker skin people"}, {"start": 3718.48, "end": 3724.96, "text": " compared to the lighter skin people so that kind of existing exact counter example to the argument"}, {"start": 3724.96, "end": 3731.36, "text": " that the but that this misrepresentation in the workforce leads to the biases in the system"}, {"start": 3732.4, "end": 3740.88, "text": " if we interpret it through the lens who who it costs and who it benefits all right so the next"}, {"start": 3740.88, "end": 3747.36, "text": " section is corporate diversity beyond the pipeline problem and this is kind of an odd inclusion when"}, {"start": 3747.36, "end": 3755.6, "text": " I read it first to interpret to go against the pipeline problem here but it kind of makes sense if you"}, {"start": 3755.6, "end": 3762.0, "text": " if you know what these people set out to do so what these people set out to do is to argue we"}, {"start": 3762.0, "end": 3770.2400000000002, "text": " must fix the workforce right we must fix the we must hire more people of color more women and so on"}, {"start": 3770.24, "end": 3778.0, "text": " and promote them more so they have have a very much have a problem with this pipeline argument"}, {"start": 3778.0, "end": 3783.3599999999997, "text": " what the pipeline argument is is the following so at the beginning if you if you consider like the"}, {"start": 3783.3599999999997, "end": 3790.56, "text": " educational or career paths of people you have like 100% of people let's represent this at the"}, {"start": 3790.56, "end": 3796.64, "text": " beginning and then most of these people go through school so most of these go on this is kind of"}, {"start": 3796.64, "end": 3803.44, "text": " the area in here is the population and then some of them pursue higher education like some drop out"}, {"start": 3803.44, "end": 3810.08, "text": " so this gets a smaller amount go on so this is here this is time and this is kind of volume of"}, {"start": 3810.08, "end": 3819.04, "text": " people and then very few go into computer science right and then even fewer go into AI so what you"}, {"start": 3819.04, "end": 3826.56, "text": " end up is just a tiny sliver of people that actually go into AI um so this is called a a pipeline"}, {"start": 3826.56, "end": 3833.2799999999997, "text": " and um we have various junctions here like where you would go into higher education where you would"}, {"start": 3833.2799999999997, "end": 3840.72, "text": " choose your major in university where you would go into a subfield of computer science where the"}, {"start": 3841.2799999999997, "end": 3848.4, "text": " the kind of volume of people drop significantly from one point to the other and now if you compare this"}, {"start": 3850.48, "end": 3855.2799999999997, "text": " if you compare this and use it say we're not consider all of society but here over here we'll"}, {"start": 3855.28, "end": 3861.52, "text": " call 
consider all just men and over here we'll consider all women again they all go to high school"}, {"start": 3861.52, "end": 3869.92, "text": " and then university and then maybe very few go to CS even fewer go to AI what you'll find is and"}, {"start": 3869.92, "end": 3877.6000000000004, "text": " I've drawn it maybe wrong here is that this is smaller than this so if you comparatively look at how"}, {"start": 3877.6, "end": 3888.08, "text": " many males end up in the AI field you will find that fewer end up in more and will end up in"}, {"start": 3888.08, "end": 3896.72, "text": " the I feel than women if you comparatively look at it so at and this is over time like at the"}, {"start": 3896.72, "end": 3908.3199999999997, "text": " beginning you have 50 50 main women distribution in society almost I guess I think slightly more boys"}, {"start": 3908.3199999999997, "end": 3916.16, "text": " are born but I could be wrong about this and then as you go through time here um this skews I"}, {"start": 3916.16, "end": 3922.56, "text": " believe um so you go through high school and let's just assume like high school is still"}, {"start": 3922.56, "end": 3929.12, "text": " and that equal depends on the country then you go to university or is actually more more women"}, {"start": 3929.12, "end": 3935.52, "text": " at university slightly um and then you go into computer science and in computer science and this"}, {"start": 3935.52, "end": 3940.72, "text": " is just relative here that's why that kind of norm at 100% otherwise these things would go down"}, {"start": 3941.44, "end": 3947.68, "text": " all of them at the same time but comparatively you have then much more men than women in computer"}, {"start": 3947.68, "end": 3955.3599999999997, "text": " science right and then if you see who chooses AI I don't know if there's any statistics of"}, {"start": 3955.3599999999997, "end": 3961.52, "text": " specifically choosing AI from computer science I'm just gonna assume that remains the same so if"}, {"start": 3961.52, "end": 3968.96, "text": " you look into the AI field kind of this um this will stay the same so in the AI field you have"}, {"start": 3968.96, "end": 3977.2799999999997, "text": " much more men than women and presumably because you already have much more men than women choosing"}, {"start": 3977.28, "end": 3985.44, "text": " computer science as their major or choosing any technical field as their major um this this is"}, {"start": 3985.44, "end": 3991.36, "text": " uh kind of the so-called pipeline argument so where do AI companies hiring come in AI companies"}, {"start": 3991.36, "end": 4000.2400000000002, "text": " come in here they hire at this point after your university degree presumably um there's exceptions"}, {"start": 4000.24, "end": 4008.7999999999997, "text": " but we just say they hire after your university degree and therefore they basically have to choose"}, {"start": 4008.7999999999997, "end": 4014.3199999999997, "text": " from this distribution and if they just say okay we'll just take the the top I don't know 10%"}, {"start": 4014.3199999999997, "end": 4019.9199999999996, "text": " people will hire the good people of this we don't care what gender they are right so the top 10%"}, {"start": 4019.9199999999996, "end": 4028.8799999999997, "text": " here the top 10% here then this will end up being the same distribution as you have graduates"}, {"start": 4028.88, "end": 4036.2400000000002, "text": " all right so this is kind of the company company hiring from an let's say an 
80-20 distribution"}, {"start": 4036.2400000000002, "end": 4042.4, "text": " without looking at gender will end up with an 80-20 distribution that's the pipeline argument"}, {"start": 4042.96, "end": 4049.44, "text": " of companies and they don't like the pipeline argument because the pipeline argument basically"}, {"start": 4049.44, "end": 4057.52, "text": " says that the problem is somewhere here right the problem isn't the company's hiring um"}, {"start": 4057.52, "end": 4066.96, "text": " wrongly the problem isn't that the company's here uh deselected the problem is somewhere here"}, {"start": 4066.96, "end": 4071.52, "text": " and because they want to make the argument that the company should hire in a different way they"}, {"start": 4071.52, "end": 4078.96, "text": " can't have that um so they argue against it now to argue against this would actually be very easy"}, {"start": 4079.6, "end": 4086.16, "text": " if this argument were wrong like they claim the argument is is is not good the pipeline argument"}, {"start": 4086.16, "end": 4091.52, "text": " isn't good if the pipeline argument were wrong what you'd have to do is you would have to say"}, {"start": 4092.3999999999996, "end": 4102.8, "text": " you'd have to say hey companies look at that in in your company you have an 80-20 distribution"}, {"start": 4102.8, "end": 4110.72, "text": " men to women right that's pretty unequal and university graduates the pool you choose from"}, {"start": 4110.72, "end": 4117.2, "text": " is actually 50-50 so obviously you're engaged in discriminatory hiring because you know the"}, {"start": 4117.2, "end": 4126.08, "text": " pool is 50-50 there's no reason why it um why your hiring practices should cause this inequality"}, {"start": 4127.2, "end": 4132.400000000001, "text": " and therefore we can clearly show you do discriminatory hiring you should stop it you should"}, {"start": 4132.400000000001, "end": 4137.92, "text": " definitely hire more women and people of color more of these more of the minorities because"}, {"start": 4137.92, "end": 4144.72, "text": " your hiring practices are the problem but that's not the case how do I know because if it were the"}, {"start": 4144.72, "end": 4150.4, "text": " case they would simply state this definitely in this report if that were the case that you could"}, {"start": 4150.4, "end": 4155.92, "text": " actually show with numbers that the pipeline argument is wrong then they would absolutely do this"}, {"start": 4155.92, "end": 4163.28, "text": " instead they have to like go back and they have to like ramble around it for several pages which"}, {"start": 4163.28, "end": 4172.08, "text": " will mostly skip but mainly because this is the case it is the case that these companies higher"}, {"start": 4172.08, "end": 4182.08, "text": " from a pool of of unequaly represented people and the only argument that you can make is that"}, {"start": 4183.44, "end": 4192.719999999999, "text": " well if if you were to equalize this here then maybe here where the problem is that would fix like"}, {"start": 4192.72, "end": 4200.56, "text": " so the argument is often made if young girls choosing their majors have no no one to look up to"}, {"start": 4200.56, "end": 4208.8, "text": " like no strong women in in corporation CEO roles they will think that it's not a climate for women"}, {"start": 4208.8, "end": 4214.08, "text": " and they will elect not to go into these fields which is a valid argument like I'm completely open"}, {"start": 4214.08, "end": 4221.12, "text": " to 
that to that argument um but it's the only argument you can make and still then even if you"}, {"start": 4221.12, "end": 4227.5199999999995, "text": " determine this as the cause I would still not support racist and sexist hiring practice like"}, {"start": 4228.08, "end": 4234.72, "text": " do something else like make them clear that the environment can be changed or change the environment"}, {"start": 4234.72, "end": 4239.84, "text": " like change the if if it really is the case that it's kind of a"}, {"start": 4241.599999999999, "end": 4248.32, "text": " not an anti-woman environment change that um if it's just a case that they perceive it as such"}, {"start": 4248.32, "end": 4254.96, "text": " change the perception but do not engage in discriminatory hiring practices because there's"}, {"start": 4254.96, "end": 4262.16, "text": " always someone losing out unfairly on these practices and that's that's uh something I'm not"}, {"start": 4262.16, "end": 4268.32, "text": " willing to to uh go into like that's something I'm not willing to engage in and I don't think"}, {"start": 4269.2, "end": 4275.36, "text": " people should engage be engaging in that actually that's why it's illegal so let's"}, {"start": 4275.36, "end": 4282.48, "text": " let's actually look at very few points this is just why the so they they claim they go kind of go"}, {"start": 4282.48, "end": 4289.679999999999, "text": " over these um pipeline studies and they yeah they say the term used in the industry reference the"}, {"start": 4289.679999999999, "end": 4294.96, "text": " absence of diverse candidates in the hiring pool of to justify the inability of large firms to"}, {"start": 4294.96, "end": 4302.08, "text": " achieve diversity due to scarcity right so that's they they basically agree the of the on the"}, {"start": 4302.08, "end": 4310.16, "text": " definition that I stated here um so they companies that are challenged on their lack of diversity"}, {"start": 4310.16, "end": 4314.8, "text": " frequently site pipeline studies have proof of the persistent challenge of finding enough women"}, {"start": 4314.8, "end": 4323.2, "text": " and people of color to hire yes and and the yeah but they say but the evidence suggests otherwise"}, {"start": 4323.2, "end": 4328.24, "text": " for example in 2016's Facebook chief diversity officer wrote that it has become clear"}, {"start": 4328.24, "end": 4332.48, "text": " that at the most fundamental level appropriate representation technology or any other industry"}, {"start": 4332.48, "end": 4337.76, "text": " will depend upon more people having the opportunity to gain necessary skills through the public education"}, {"start": 4337.76, "end": 4346.24, "text": " system well yes that's something I would agree and that's something clearly that addresses this region"}, {"start": 4346.24, "end": 4355.76, "text": " here um then and where the actual problem is happening so I I would say that's a very very good"}, {"start": 4355.76, "end": 4361.92, "text": " statement from the Facebook's chief diversity officer they say but as the center for investigative"}, {"start": 4361.92, "end": 4368.56, "text": " reporting study of tech company diversity data found 91 large tech companies headquartered in"}, {"start": 4368.56, "end": 4374.320000000001, "text": " Silicon Valley managed to hire higher percentages of black Latina and multi-racial employees"}, {"start": 4374.320000000001, "end": 4384.4800000000005, "text": " and Facebook that year well just if other just just because other companies 
employ racist and"}, {"start": 4384.48, "end": 4392.32, "text": " sexist hiring um to improve their diversity numbers doesn't mean that Facebook has to do this"}, {"start": 4393.2, "end": 4401.5199999999995, "text": " right does it it like just because other companies do this uh doesn't mean that it's it's a it's a"}, {"start": 4401.5199999999995, "end": 4408.5599999999995, "text": " it's a good thing to do or that's how you should go about it Facebook simply says like if we want to"}, {"start": 4408.56, "end": 4419.04, "text": " hire without being racist or sexist if we want to just hire the best people then more of the best"}, {"start": 4419.04, "end": 4425.84, "text": " people have to be in the pipeline like more people have to gain access to educational opportunities"}, {"start": 4425.84, "end": 4433.360000000001, "text": " so we can then hire them whereas these other companies probably make a big effort to say well"}, {"start": 4433.36, "end": 4439.12, "text": " even if you are not as educated if you even if you're not as qualified as this other person will"}, {"start": 4439.12, "end": 4447.12, "text": " hire you because of your skin color I don't think that's that's an argument in that in the"}, {"start": 4447.12, "end": 4454.16, "text": " favor of what the report is claiming like I don't think that that is evidence that the pipeline"}, {"start": 4454.16, "end": 4460.32, "text": " argument is invalid all right so they're going to core themes in pipeline research and they"}, {"start": 4460.32, "end": 4469.84, "text": " do some they do some overview of the kind of pipeline research that um often so sometimes the"}, {"start": 4469.84, "end": 4476.24, "text": " pipeline research examines why why for example why women don't choose to go into computer science"}, {"start": 4476.24, "end": 4481.92, "text": " as much and sometimes they they focus on what is their perception of the field what are what is"}, {"start": 4481.92, "end": 4490.4, "text": " their perceptions of the stereotypes of the field what is their perceptions of the kind of culture"}, {"start": 4490.4, "end": 4495.52, "text": " the in the field is it suited to them what is their perception of how qualified they are for the"}, {"start": 4495.52, "end": 4500.72, "text": " field and is that true is that false and so on so this research examines a whole variety of things"}, {"start": 4500.72, "end": 4506.72, "text": " and it's very interesting actually to read through this research um I want to point out this here"}, {"start": 4506.72, "end": 4512.72, "text": " other studies suggest that gender is correlated with the person's motivations for pursuing a"}, {"start": 4512.72, "end": 4518.8, "text": " career in the field women and particularly women from low socioeconomic status or minority"}, {"start": 4518.8, "end": 4524.0, "text": " backgrounds are more likely to see computing as a versatile profession that provides an"}, {"start": 4524.0, "end": 4532.0, "text": " opportunity for secure employment higher pay and better social standing moreover their interests"}, {"start": 4532.0, "end": 4537.28, "text": " go beyond technical aspects of computing focusing instead on the purpose and application of software"}, {"start": 4537.92, "end": 4542.72, "text": " however such interests are often de-emphasized in computer science curricula"}, {"start": 4542.72, "end": 4549.12, "text": " a price technical skill and its ability to industrial it's applicability to industrial settings"}, {"start": 4549.12, "end": 4555.44, "text": " above all else 
so I find this really interesting because it's basically saying that women have"}, {"start": 4555.44, "end": 4562.48, "text": " different interests than men and on average right that's that's basically saying that which"}, {"start": 4562.48, "end": 4570.24, "text": " is almost heresy like to say this in this context like people will come after you if you suggest"}, {"start": 4570.24, "end": 4575.04, "text": " something at this and yet they're just stating it here and remember this for later this is"}, {"start": 4575.04, "end": 4581.04, "text": " this is really funny that they're like there yeah the interests could be different for women"}, {"start": 4581.04, "end": 4587.84, "text": " than for men and we might have to adjust our curriculum to be more suited to these different interests"}, {"start": 4589.36, "end": 4599.04, "text": " I mean yeah I'm sure that's uh yeah as I said you're like usually this is forbidden to say"}, {"start": 4601.36, "end": 4609.6, "text": " all right so they they go on um I say limitations of pipeline research"}, {"start": 4609.6, "end": 4611.360000000001, "text": " right"}, {"start": 4613.04, "end": 4614.320000000001, "text": " um"}, {"start": 4618.320000000001, "end": 4627.280000000001, "text": " these are fairly like common common limitations let's say of studies in general social science studies"}, {"start": 4627.76, "end": 4631.200000000001, "text": " which yeah I won't go into much um"}, {"start": 4631.2, "end": 4634.88, "text": " um"}, {"start": 4636.24, "end": 4645.76, "text": " again this the this state we have to examine we don't we don't only have to examine this but the"}, {"start": 4645.76, "end": 4653.28, "text": " problem they basically say the problem is actually the um the culture and the problem is actually"}, {"start": 4653.28, "end": 4662.08, "text": " the the perpetrators we're the same I don't remember where this is where this is stated"}, {"start": 4662.08, "end": 4666.5599999999995, "text": " but they again say like we have to examine who benefits from its present construction"}, {"start": 4667.36, "end": 4672.48, "text": " like who is underserved within the current tech ecology who benefits from its present"}, {"start": 4672.48, "end": 4680.08, "text": " construction how these dynamics might be untangled and so on so um again stating that in this kind"}, {"start": 4680.08, "end": 4687.68, "text": " of power relationships from for the different groups which I don't agree is in large part what's"}, {"start": 4687.68, "end": 4695.28, "text": " happening um they say it's worth considering the scope of these studies and the by and large the"}, {"start": 4695.28, "end": 4700.88, "text": " recommendations the issue are limited targeted at the administrators of university computer science"}, {"start": 4700.88, "end": 4706.08, "text": " programs seeking to broaden the diversity of their student body yes that's exactly where we saw"}, {"start": 4706.08, "end": 4712.4, "text": " the problem appears to be right so the the reason they have a problem with these studies is that"}, {"start": 4712.4, "end": 4720.16, "text": " they actually focus on the point where this discrepancy appears to happen um because they want to"}, {"start": 4720.16, "end": 4727.12, "text": " claim that no no no you should focus on a different point namely hiring um in in these companies"}, {"start": 4727.12, "end": 4736.32, "text": " hiring and promotion um they say though important oh so at least they they acknowledge that that's"}, {"start": 4736.32, "end": 4742.72, "text": " 
an important problem um this is a narrow frame through which potential solutions to barriers"}, {"start": 4742.72, "end": 4748.32, "text": " to inclusion it does not address the companies that hire computer science students the peers"}, {"start": 4748.32, "end": 4753.5199999999995, "text": " responsible for promulgating stereotyped views or engaging in hostile behavior or the broader"}, {"start": 4753.52, "end": 4758.96, "text": " social conditions that may influence students' success in computer science programs actually"}, {"start": 4758.96, "end": 4763.84, "text": " the research and even some of the examples they've included of this research addresses all of this"}, {"start": 4763.84, "end": 4772.56, "text": " like the research often addresses the kind of um stereotypes and how the peers act and"}, {"start": 4773.4400000000005, "end": 4781.4400000000005, "text": " how the companies act and also how the companies hire and how people have something"}, {"start": 4781.44, "end": 4787.12, "text": " to look forward to or nothing to look forward to and how that influences their decisions uh"}, {"start": 4788.0, "end": 4792.32, "text": " yeah again they say the studies are frequently cited by those within corporate environments to"}, {"start": 4792.32, "end": 4796.5599999999995, "text": " justify their own lack of diversity as they situate the locus of change outside of the"}, {"start": 4796.5599999999995, "end": 4802.48, "text": " corporation itself as such pipeline studies are disproportionately emphasized as a part of the"}, {"start": 4802.48, "end": 4809.2, "text": " broader research agenda on diversity and technology again they state companies use this to get out"}, {"start": 4809.2, "end": 4814.24, "text": " of course like companies are of course gonna use this to get out I mean I agree at least with"}, {"start": 4814.24, "end": 4821.599999999999, "text": " that I agree that companies are gonna try to use this to get out of responsibility um certainly"}, {"start": 4822.4, "end": 4831.679999999999, "text": " all right so the last section here is pipeline dreams after years of research again"}, {"start": 4831.679999999999, "end": 4837.28, "text": " this is on these pipeline studies basically they say the pipeline research"}, {"start": 4837.28, "end": 4847.5199999999995, "text": " hasn't shown like hasn't borne fruit it hasn't led to meaningful change in the field even though"}, {"start": 4847.5199999999995, "end": 4853.759999999999, "text": " we've researched this um and they state a number of reasons it tends to place the"}, {"start": 4853.759999999999, "end": 4859.36, "text": " onus to solve issues of discrimination in Silicon Valley on those who are discriminated against"}, {"start": 4859.36, "end": 4865.599999999999, "text": " rather than the perpetrators i find this word choice really interesting perpetrators right like"}, {"start": 4865.6, "end": 4873.04, "text": " again the group of white men is trying to put down everyone else that's the perspective that the"}, {"start": 4873.04, "end": 4884.08, "text": " article takes and um it's not even true this research a lot of times it actually says the reason why"}, {"start": 4884.08, "end": 4890.56, "text": " for example women don't choose to go into computer science is the male dominated culture within"}, {"start": 4890.56, "end": 4899.04, "text": " these corporations is the perception of you know this not being a woman friendly"}, {"start": 4899.04, "end": 
4906.080000000001, "text": " environment is the fear of sexual harassment and so on so it's not even true but"}, {"start": 4906.080000000001, "end": 4914.160000000001, "text": " moreover I just wanted to point out the choice of word here perpetrators I don't yeah I don't"}, {"start": 4914.16, "end": 4923.28, "text": " know how you get to this word um it really shows kind of a world view of the authors in my"}, {"start": 4923.28, "end": 4932.8, "text": " opinion all right so they go on and say okay these pipeline studies haven't been beneficial and"}, {"start": 4932.8, "end": 4940.4, "text": " companies haven't done much or it hasn't been successful um then they go into worker-led initiatives"}, {"start": 4940.4, "end": 4946.08, "text": " which I'm going to skip here there's just a kind of a reporting of what happened"}, {"start": 4947.28, "end": 4953.28, "text": " at companies where the workers themselves organized and then the last section here is"}, {"start": 4953.28, "end": 4960.0, "text": " the pushback against diversity so in this section they're kind of documenting and arguing against"}, {"start": 4960.639999999999, "end": 4968.0, "text": " people who have basically stated counter arguments to their recommendations mainly so their"}, {"start": 4968.0, "end": 4975.68, "text": " recommendations being let's change the hiring let's change the promotion and so on to be based"}, {"start": 4975.68, "end": 4985.84, "text": " on race and gender and the pushback here is yeah characterized in different ways so we'll go"}, {"start": 4986.48, "end": 4991.6, "text": " through this just the last section I know it's a long video already um if you're still here like"}, {"start": 4991.6, "end": 5000.96, "text": " the one person who's still here hi uh hope you're doing well good keep hydrated uh yeah so they say"}, {"start": 5000.96, "end": 5006.4800000000005, "text": " it's a critical time we now see diversity itself being weaponized um"}, {"start": 5010.4800000000005, "end": 5017.4400000000005, "text": " so they say this growing awareness accompanied by demands for inclusion and equity has led to some"}, {"start": 5017.44, "end": 5024.4, "text": " change but there has also been resistance especially among those implicitly privileged by the status quo"}, {"start": 5024.4, "end": 5033.2, "text": " so again jumping straight to an attack on the person like I don't care who makes an argument against"}, {"start": 5033.2, "end": 5039.679999999999, "text": " me I'm gonna go on the argument um I'm gonna go on the content of the argument but these"}, {"start": 5039.68, "end": 5047.6, "text": " people straight the first thing they state is that's just by the people who are you"}, {"start": 5047.6, "end": 5053.92, "text": " know benefiting it's just by the white men basically um straight to the identity of the person that's"}, {"start": 5053.92, "end": 5064.320000000001, "text": " dishonesty right there um yeah so those questioning and even rejecting the idea that racism and"}, {"start": 5064.32, "end": 5069.92, "text": " misogyny and harassment are problems within the AI field and the tech industry have appropriated"}, {"start": 5069.92, "end": 5076.639999999999, "text": " the language of diversity to argue that efforts to improve inclusion are in fact exclusionary"}, {"start": 5076.639999999999, "end": 5081.679999999999, "text": " and addressing the deeper structural challenges posed by racism sexism and inequity is misguided"}, {"start": 5082.4, "end": 
5091.36, "text": " and yes yes definitely efforts to improve inclusion can be exclusionary like just because so this"}, {"start": 5091.36, "end": 5098.32, "text": " is this is a thing just because you're a safe fixing let's let's say it's just because you're"}, {"start": 5098.32, "end": 5105.92, "text": " fixing a problem doesn't mean the method you're using to fixing it is justified and is itself good"}, {"start": 5106.639999999999, "end": 5113.12, "text": " right methods to improve inclusion can be exclusionary and some that have been proposed"}, {"start": 5113.12, "end": 5119.5199999999995, "text": " are exclusionary definitely uh it depends on the method it doesn't mean these people are against"}, {"start": 5119.52, "end": 5127.84, "text": " these efforts it means that the measures for example implementing racist hiring policy I can"}, {"start": 5127.84, "end": 5133.84, "text": " definitely see that this is going to lead to more equal representation within the workforce"}, {"start": 5133.84, "end": 5144.0, "text": " but the tool itself is really bad and exclusionary and discriminating um so yeah I would I would"}, {"start": 5144.0, "end": 5151.44, "text": " say that's it's it's it's accurate that they can be exclusionary I say for example"}, {"start": 5151.44, "end": 5156.48, "text": " some AI researchers greeted the announcement of black and AI workshop at NURP's leading machine"}, {"start": 5156.48, "end": 5160.4, "text": " learning conference by questioning whether the event was necessary arguing that it would be"}, {"start": 5160.4, "end": 5167.44, "text": " discriminatory but can't they can't they question whether the event was necessary like that would"}, {"start": 5167.44, "end": 5174.5599999999995, "text": " be a problem I would here I would need a discussion what is it for right what why is this event happening"}, {"start": 5175.44, "end": 5183.12, "text": " and what is it doing and is it is it discriminatory like it could be like any event can be discriminatory"}, {"start": 5183.12, "end": 5193.759999999999, "text": " does it discriminate based on race or gender or anything um is it you know does it do so unjustly"}, {"start": 5193.76, "end": 5200.16, "text": " and in all so I don't I don't just don't see why question it could still be wrong like you could"}, {"start": 5200.16, "end": 5207.2, "text": " question and then you could be wrong um but you should be taken on your argument but the argument"}, {"start": 5207.2, "end": 5215.2, "text": " here is just already questioning this is is already uh on the wrong side of the argument"}, {"start": 5215.84, "end": 5221.2, "text": " and I don't agree with this though I don't agree with these people that that question this workshop"}, {"start": 5221.2, "end": 5228.72, "text": " um don't have a particular opinion on these things but I have the opinion that you have to take"}, {"start": 5228.72, "end": 5236.48, "text": " arguments at their argument value and not just and who makes them or whether or not they're against"}, {"start": 5236.48, "end": 5244.08, "text": " a particular viewpoint all right so they say such pushback often centers calls for cognitive"}, {"start": 5244.08, "end": 5250.0, "text": " diversity or viewpoint diversity the idea that individual differences in the ways people think"}, {"start": 5250.0, "end": 5255.76, "text": " and understand the world are distinctions that should be counted alongside or instead of other"}, {"start": 5255.76, "end": 5262.96, "text": " identity categories such as race and gender well yes 
that's I mean isn't isn't that isn't that"}, {"start": 5262.96, "end": 5271.28, "text": " very reasonable thing to say isn't it very reasonable to say that differences in the ways people"}, {"start": 5271.28, "end": 5276.4, "text": " think and understand the world there are distinctions that should be counted alongside"}, {"start": 5276.4, "end": 5284.5599999999995, "text": " other identity categories such as race and gender um they say a dozen white men so long as they"}, {"start": 5284.5599999999995, "end": 5289.92, "text": " were not raised in the same household and don't think identical thoughts could be considered diverse"}, {"start": 5290.799999999999, "end": 5296.0, "text": " that's I don't know if this is a sarcastic statement or not but clearly it's it's kind of the"}, {"start": 5296.719999999999, "end": 5303.12, "text": " the counterpoint they're trying to make here that but yes I would I would totally agree with this"}, {"start": 5303.12, "end": 5311.68, "text": " statement in a way a white man growing up in San Francisco a white man growing up in I'd rule"}, {"start": 5311.68, "end": 5319.84, "text": " Idaho a white man growing up in Florida a white man growing up in western Europe one in Russia"}, {"start": 5320.5599999999995, "end": 5329.76, "text": " and one growing up on the road with its circus his circus parents in Mongolia would definitely"}, {"start": 5329.76, "end": 5339.92, "text": " be that plenty diverse right I mean they they criticize this here but this is is actually how can you"}, {"start": 5340.72, "end": 5347.4400000000005, "text": " how can you not see this that yes these are valid differences and people are going to think"}, {"start": 5347.4400000000005, "end": 5351.84, "text": " differently independent of how they look people are going to have different thoughts and it's"}, {"start": 5351.84, "end": 5359.52, "text": " important to recognize other people think differently and therefore you should you know include"}, {"start": 5359.52, "end": 5366.240000000001, "text": " them if it's relevant and the counter argument to this is of course what the authors are saying"}, {"start": 5366.240000000001, "end": 5375.76, "text": " basically is that 12 a dozen people as long as they are don't look the same"}, {"start": 5378.96, "end": 5383.52, "text": " could be considered diverse even if they all were raised in the same place and basically all"}, {"start": 5383.52, "end": 5392.160000000001, "text": " live in San Francisco and think the exact same thing yeah that's I mean it sounds to me it sounds"}, {"start": 5392.160000000001, "end": 5401.92, "text": " as absurd as the other way around it to me so here's here's my here's my thoughts on this I am"}, {"start": 5402.56, "end": 5411.360000000001, "text": " not going to pretend that I know what life is like as a woman right I'm absolutely sure that for"}, {"start": 5411.36, "end": 5421.679999999999, "text": " areas of life it is it is definitely valuable to listen to the experience of a woman or multiple"}, {"start": 5422.24, "end": 5430.48, "text": " women and aggregate of women because the life is just different as a woman life is also different"}, {"start": 5431.04, "end": 5439.12, "text": " as a black person I absolutely can see that there are things that I might not be able to draw from my"}, {"start": 5439.12, "end": 5446.48, "text": " life experience because I am not of that skin color that different problems that people face and"}, {"start": 5446.48, "end": 5453.599999999999, "text": " that's why it's important to 
have an an opinion of that at the table but I'm also absolutely certain"}, {"start": 5453.599999999999, "end": 5466.08, "text": " that I have no relation to someone who grew up as a child pop star from the age of 12 and then"}, {"start": 5466.08, "end": 5473.5199999999995, "text": " had that life I have no relation to someone growing up under a communist regime I have no relation"}, {"start": 5473.5199999999995, "end": 5482.0, "text": " to someone growing up in in kind of a buddhistic religious tradition I just don't and I don't care"}, {"start": 5482.0, "end": 5487.92, "text": " how they look they have different experiences they have different bodies of knowledge to draw on"}, {"start": 5487.92, "end": 5495.28, "text": " and I don't think why we should make the difference along the exact lines of race and gender"}, {"start": 5496.8, "end": 5502.08, "text": " yeah but that's what they that's of course what they argue here those arguments work"}, {"start": 5502.08, "end": 5510.08, "text": " by centering identity while flat-nare unregnoring power relationships here the VP the Facebook VP"}, {"start": 5510.08, "end": 5516.64, "text": " of engineering said that the ultimate goal is cognitive diversity and cognitive diversity is"}, {"start": 5516.64, "end": 5522.64, "text": " correlated with identity diversity that means it's not just about getting women in tech it's about"}, {"start": 5522.64, "end": 5535.12, "text": " broad voices broad representation right so the the this is exactly what I would say the reason"}, {"start": 5535.12, "end": 5540.8, "text": " why we want different what the reason why we want a woman or a black person at the table is because"}, {"start": 5540.8, "end": 5544.96, "text": " they have a different knowledge is because they have different thoughts because of their different"}, {"start": 5544.96, "end": 5551.52, "text": " life experience they have different thoughts that they can bring in so actually by including"}, {"start": 5551.52, "end": 5560.72, "text": " these what they call bodies it is about cognitive diversity even in itself but the authors here"}, {"start": 5560.72, "end": 5565.84, "text": " really see this from a different angle they really see this in terms of power relationships"}, {"start": 5565.84, "end": 5572.0, "text": " between race and gender groups and I yeah the arguments of the authors don't make sense if you"}, {"start": 5572.0, "end": 5578.72, "text": " don't view it through that lens and that lens to me is just such a it's such a I don't know it's"}, {"start": 5578.72, "end": 5586.56, "text": " just sad look on the world and also I think it's a very very inaccurate look on the world and it's"}, {"start": 5586.56, "end": 5595.84, "text": " I think a very dangerous look on the world yeah again they say instead of looking at historical"}, {"start": 5595.84, "end": 5601.6, "text": " patterns of marginalization calls for cognitive diversity argue that all differences are equal no"}, {"start": 5601.6, "end": 5608.72, "text": " we're not like no calls for cognitive diversity or don't argue that all differences are equal well"}, {"start": 5608.72, "end": 5615.6, "text": " aware that some people have it harder well aware that some differences are bigger worse or better"}, {"start": 5616.64, "end": 5622.64, "text": " that's absolutely well aware all they're saying is that race and gender shouldn't be"}, {"start": 5622.64, "end": 5631.4400000000005, "text": " the like only things to consider and shouldn't be in itself be considered diverse"}, {"start": 
5633.52, "end": 5639.84, "text": " just because someone is of a certain skin color it doesn't mean anything right it doesn't"}, {"start": 5639.84, "end": 5648.08, "text": " actually tell you anything about that person so why not consider people as individuals and look"}, {"start": 5648.08, "end": 5653.92, "text": " at what was their life like until this point and what could they contribute to the discussion"}, {"start": 5653.92, "end": 5660.48, "text": " we're having rather than looking at the color of their skin I mean if the color of their skin"}, {"start": 5660.48, "end": 5666.16, "text": " played a role in their life then obviously that would manifest in my suggestion as well"}, {"start": 5667.04, "end": 5671.76, "text": " but to just look at people through this kind of group lens is so foreign to me"}, {"start": 5671.76, "end": 5676.96, "text": " and yeah I feel it's it's quite dangerous um"}, {"start": 5681.280000000001, "end": 5682.24, "text": " yeah so so"}, {"start": 5686.16, "end": 5693.76, "text": " again and this this could argue that all differences are equal I mean the point where you have to start"}, {"start": 5693.76, "end": 5699.4400000000005, "text": " misrepresenting what the counter argument is saying that's really how you know you're dealing with"}, {"start": 5699.44, "end": 5706.24, "text": " a with not a well intention person on the other side of the of the discussion this is really politics"}, {"start": 5706.24, "end": 5713.36, "text": " now this isn't a well intended argumentation it's really someone to trying to achieve some goal"}, {"start": 5713.36, "end": 5717.679999999999, "text": " because they have to misrepresent the other side and this only gets worse from here"}, {"start": 5718.799999999999, "end": 5719.44, "text": " they say"}, {"start": 5722.08, "end": 5727.04, "text": " recently was exemplified in the controversy over Google's appointment of Heritage Foundation"}, {"start": 5727.04, "end": 5732.56, "text": " CEO Kay calls James to its advanced technology external advisory council"}, {"start": 5733.44, "end": 5739.12, "text": " Google's reasoning for the appointment of James was ostensibly to ensure diversity of thought"}, {"start": 5739.12, "end": 5747.2, "text": " by including a conservative viewpoint on the council all right so Google has a technology advisory"}, {"start": 5747.2, "end": 5756.4, "text": " board or council sorry of external people and they've included a conservative and she is by all"}, {"start": 5756.4, "end": 5763.839999999999, "text": " by all metrics let's say a standard conservative so this is not a far right neo-nazi type"}, {"start": 5764.639999999999, "end": 5773.36, "text": " um I don't know but this this is someone who has similar opinions than half the US country and in"}, {"start": 5773.36, "end": 5781.12, "text": " generally in at least in the western world generally half of of the of the country's population"}, {"start": 5781.12, "end": 5787.92, "text": " tends to be conservative um more or less I mean that there's differences but yeah so this this"}, {"start": 5787.92, "end": 5794.4, "text": " is a this is an opinion that a large portion of the population shares so it would be I don't know"}, {"start": 5794.4, "end": 5802.24, "text": " it would be suitable to include at least someone of that opinion in an external advisory council"}, {"start": 5802.24, "end": 5809.12, "text": " to to have that on board it you don't have to listen to her like it's not like she's made king"}, {"start": 5809.12, "end": 5818.64, "text": " 
it's simply that she will have the opportunity to input her voice representative of kind of that"}, {"start": 5819.5199999999995, "end": 5827.2, "text": " large very large percentage of people they go on to say James is also a black woman thus adding"}, {"start": 5827.2, "end": 5834.4, "text": " racial and gender diversity to the panel so even further right this is it's a conservative black"}, {"start": 5834.4, "end": 5841.679999999999, "text": " woman all right but the pushback following James's inclusion focused on her policy position"}, {"start": 5841.679999999999, "end": 5849.5199999999995, "text": " citing specifically her vocal anti-LGBTQ and anti-immigrant views and highlighted why cognitive"}, {"start": 5849.5199999999995, "end": 5859.36, "text": " diversity is a particularly limited lens and the the pushback here um was very much spearheaded by"}, {"start": 5859.36, "end": 5866.719999999999, "text": " one of the authors of this article so I um this isn't just reporting I will also I'll also"}, {"start": 5866.719999999999, "end": 5875.36, "text": " criticize the the this pushback here uh since it's you know it's kind of argued for in this article"}, {"start": 5875.36, "end": 5883.599999999999, "text": " it's not just reported and also because the authors are the same um so here they say they have"}, {"start": 5883.6, "end": 5889.92, "text": " vocal anti-LGBTQ and anti-immigrant views and I haven't actually gone specifically and looked"}, {"start": 5889.92, "end": 5897.6, "text": " at what this person particularly has said but given that she's a standard conservative and has"}, {"start": 5897.6, "end": 5905.52, "text": " been in public office I believe under George W Bush um she can't like I have trouble believing"}, {"start": 5905.52, "end": 5912.08, "text": " that she has like extremely hateful opinions like these people shouldn't exist or like"}, {"start": 5912.08, "end": 5921.36, "text": " something like that nature like often people like conservative people have have issues with um"}, {"start": 5922.8, "end": 5930.24, "text": " forcing people to adopt certain pronouns for people or issues with which bathrooms do people go in"}, {"start": 5930.24, "end": 5937.5199999999995, "text": " and you know generally are tougher on immigration um especially legal immigration and so on I mean"}, {"start": 5937.52, "end": 5945.84, "text": " these are these are views that people hold um and a large part of people and these are discussions"}, {"start": 5945.84, "end": 5952.96, "text": " to be had so including this this person would be a very sensible move but they say in a letter"}, {"start": 5952.96, "end": 5958.8, "text": " opposing the appointment a group of Google workers calling themselves Googlers against transphobia"}, {"start": 5958.8, "end": 5965.68, "text": " and hate we have transphobia and hate responded to the idea that diversity of thought justified"}, {"start": 5965.68, "end": 5970.56, "text": " James's addition to the council this is a weaponization of the language of diversity by appointing"}, {"start": 5970.56, "end": 5978.4800000000005, "text": " James to the ATAC Google Elevates and endorses her view implying that hers is a valid perspective"}, {"start": 5978.4800000000005, "end": 5983.84, "text": " worthy of inclusions in its decision making this is unacceptable here it says again the"}, {"start": 5984.96, "end": 5991.360000000001, "text": " the author was one of the organizers of of that so and that's what they're saying here this the views"}, {"start": 5991.36, "end": 
5999.04, "text": " if you don't have our views it's these are unacceptable views right it's a valid perspective worthy"}, {"start": 5999.04, "end": 6005.44, "text": " of inclusion it's what they're saying basically is you don't even talk to this to this person like"}, {"start": 6005.44, "end": 6011.28, "text": " talking to this person considering their opinion you can still evaluate the opinion but even"}, {"start": 6011.28, "end": 6019.12, "text": " considering their opinion is already wrong and that given that the person is a black woman so"}, {"start": 6019.12, "end": 6026.88, "text": " basically they're called the author's idea of diversity is people that look different that are"}, {"start": 6027.599999999999, "end": 6034.48, "text": " from race and gender groups that have don't have much power or perceived what they call power right"}, {"start": 6034.48, "end": 6041.76, "text": " now as long as they all think exactly as we think right then that's fine as long as they they share"}, {"start": 6041.76, "end": 6048.0, "text": " of thoughts as long as they don't have dissenting opinions we want the we want the different looking"}, {"start": 6048.0, "end": 6056.32, "text": " people but don't dare talk to anyone of a different opinion um yeah this I don't I don't see"}, {"start": 6056.32, "end": 6062.88, "text": " I mean this these authors in my opinion they really live in in a bubble they really live in the"}, {"start": 6063.52, "end": 6071.6, "text": " in a tiny Silicon Valley or Silicon Valley influenced spaces because this is this is"}, {"start": 6071.6, "end": 6078.64, "text": " half the people they basically saying half the people in their greater community in their country"}, {"start": 6078.64, "end": 6088.240000000001, "text": " aren't even worthy listening to their opinions aren't even worthy of inclusion in of consideration"}, {"start": 6088.24, "end": 6103.36, "text": " so yeah well well done might as well discredit them at once um I'm sure I'm sure I'm sure that's"}, {"start": 6103.36, "end": 6112.639999999999, "text": " going to fly well with these people all right uh yeah might might might start calling them"}, {"start": 6112.64, "end": 6120.400000000001, "text": " deplorables and see what they do maybe they'll return the favor and elect and moron just to stick it"}, {"start": 6121.12, "end": 6130.56, "text": " in your faith I mean that's what happened so the idea of cognitive diversity is mobilized by some"}, {"start": 6130.56, "end": 6139.360000000001, "text": " support in support that the AI field and the tech industry are already diverse uh going as far"}, {"start": 6139.36, "end": 6144.799999999999, "text": " as to support claims that not including identities like why the mail constitutes discrimination"}, {"start": 6145.839999999999, "end": 6155.5199999999995, "text": " yes it can like if if you include every single identity except why to mail that constitutes"}, {"start": 6155.5199999999995, "end": 6162.96, "text": " discrimination that's I mean yes even if they're in the majority is still constitutes discrimination"}, {"start": 6162.96, "end": 6172.0, "text": " like no one can help being born white to male no one white to male chose to be born like that um don't"}, {"start": 6172.0, "end": 6177.68, "text": " mostly don't choose the male and incontinent of your skin you can modulate it a bit by going"}, {"start": 6177.68, "end": 6185.52, "text": " to the sun which computer science people statistically don't do very often so there's not much"}, {"start": 6185.52, "end": 
6195.040000000001, "text": " leeway there uh so yeah it's too too not include identities like that if you include every other one"}, {"start": 6196.0, "end": 6203.68, "text": " can constitute discrimination true a July 2017 memo written by James Demor a software engineer at"}, {"start": 6203.68, "end": 6211.360000000001, "text": " Google is illustrative of such pushback titled google's ideological echo chamber and published"}, {"start": 6211.36, "end": 6216.88, "text": " in an internal mail links memo critiqued the company's diversity policies arguing the biological"}, {"start": 6216.88, "end": 6222.719999999999, "text": " differences between men and women rather than bias and discrimination help explain gender"}, {"start": 6222.719999999999, "end": 6231.28, "text": " disparities at the company I feel the you can leave out the rather than here I think the memo"}, {"start": 6231.28, "end": 6240.08, "text": " simply stated that biological differences can help explain the gender disparities um the"}, {"start": 6240.08, "end": 6244.64, "text": " more subjective writing the memo was to make the case that policies designed to achieve"}, {"start": 6244.64, "end": 6251.12, "text": " equal representation are unfair divisive and bad for business well some are yes especially"}, {"start": 6251.12, "end": 6257.84, "text": " the recommendations that you've given at the beginning number seven is unfair divisive and"}, {"start": 6257.84, "end": 6263.36, "text": " I would also argue bad for business yes um"}, {"start": 6263.36, "end": 6272.88, "text": " so supporters for the most point of view at times even drew on rather it of the pipeline to make"}, {"start": 6272.88, "end": 6278.639999999999, "text": " the case that diversity initiatives are in fact discriminatory they argue incorrectly that if"}, {"start": 6278.639999999999, "end": 6282.88, "text": " they're aren't qualified candidates in the pipeline then hiring those who are unqualified"}, {"start": 6282.88, "end": 6287.92, "text": " of the basis of identity discriminating discriminates against those who are qualified"}, {"start": 6287.92, "end": 6302.32, "text": " no I would I would say hiring anyone on the basis of identity discriminates I mean inherently"}, {"start": 6302.32, "end": 6308.32, "text": " so against I think that's the I think that's the larger argument that these people are making"}, {"start": 6308.32, "end": 6312.8, "text": " which is not incorrect is very correct um"}, {"start": 6312.8, "end": 6321.28, "text": " yeah so in an update to the memo the more himself asserted that he values diversity and inclusion"}, {"start": 6321.28, "end": 6329.6, "text": " uh but his primary concern was cognitive diversity he says diversity inclusion is not denying that sex"}, {"start": 6329.6, "end": 6335.92, "text": " isn't exists doesn't endorse using stereotypes and in specific I've read the memo and it directly"}, {"start": 6335.92, "end": 6342.24, "text": " says these are these are population level kind of statistics and there is more overlap than"}, {"start": 6342.24, "end": 6348.48, "text": " difference and you absolutely can't say anything about an individual by looking at these statistics"}, {"start": 6348.48, "end": 6355.92, "text": " that's almost a quote from this memo so he was very much concerned with considering people as"}, {"start": 6355.92, "end": 6362.96, "text": " individuals but also if you like he was basically making the same argument as earlier I told you to"}, {"start": 6362.96, "end": 6370.16, "text": " remember hey look the 
this one's they describe this one study that found that women's interests"}, {"start": 6370.16, "end": 6375.5199999999995, "text": " might be different and we might shape the curriculum that's basically what the more said he said"}, {"start": 6376.16, "end": 6382.0, "text": " women's interests might be different we'd have to maybe shape the way we do work like"}, {"start": 6382.5599999999995, "end": 6387.599999999999, "text": " change the way we do software engineering to attract more of them that's what was one of his"}, {"start": 6387.599999999999, "end": 6394.48, "text": " points so he's exactly the same thing but of course he's a misogynist because he suggested"}, {"start": 6394.48, "end": 6403.04, "text": " this could be due partly because of biological differences and I think the way he was dragged"}, {"start": 6403.04, "end": 6412.5599999999995, "text": " through the mud is just crazy and this they shoot here very much against this kind of biological"}, {"start": 6412.5599999999995, "end": 6419.759999999999, "text": " what they call biological determinism we'll see this very briefly they say diversity becomes an"}, {"start": 6419.76, "end": 6425.68, "text": " empty signifier stripped of the histories and experiences of systemic discrimination repurposed"}, {"start": 6425.68, "end": 6432.320000000001, "text": " around ideology rather than bodies yeah I'd say diversity has nothing inherently to do with bodies"}, {"start": 6434.8, "end": 6444.08, "text": " like as as such I think that only that's only the case if you are already convinced of this"}, {"start": 6444.08, "end": 6453.44, "text": " let's say yeah within hours of the memo's publication harassment targeting minority advocates who"}, {"start": 6453.44, "end": 6460.0, "text": " push back against the claims in the memo began with a particular focus on queer and trans workers"}, {"start": 6460.0, "end": 6468.24, "text": " whoop that's bad but also I think to push back against people who voiced support is also was"}, {"start": 6468.24, "end": 6475.2, "text": " also pretty bad because one of them was fired as you already stated google's vice president"}, {"start": 6475.2, "end": 6479.28, "text": " of diverse even locked down her twitter account shortly after the morse firing responding to the"}, {"start": 6479.28, "end": 6485.679999999999, "text": " barrage of threats describing her as a police Nazi well yeah if you fire something I mean undoubtedly"}, {"start": 6485.679999999999, "end": 6491.44, "text": " google fired this guy because they thought it was less of a PR disaster if they also fired him"}, {"start": 6491.44, "end": 6498.639999999999, "text": " now then I mean this wasn't the probably wasn't an ideological decision much much more a PR decision but"}, {"start": 6501.36, "end": 6508.4, "text": " yeah if you fire someone after stating like after stating something like this it very much looks"}, {"start": 6508.4, "end": 6513.44, "text": " like you're firing them because you don't like their ideas and you don't like what they're saying"}, {"start": 6513.44, "end": 6524.4, "text": " which are generally not in favor of censoring freedom of speech but yeah that being said harassment"}, {"start": 6524.4, "end": 6534.24, "text": " is bad don't harass people also that being said criticism isn't always harassment and don't"}, {"start": 6534.24, "end": 6543.599999999999, "text": " conflate the two the morse memory memo also stated that the distribution of preference abilities of"}, {"start": 6543.599999999999, "end": 6550.16, 
"text": " men and women differ in part due to biological causes and that these differences may explain why"}, {"start": 6550.16, "end": 6557.92, "text": " we don't see equal representation of women in tech and leadership this is certain hinges on a"}, {"start": 6557.92, "end": 6563.5199999999995, "text": " flawed assumption that identities like gender and race are essential and fixed biological attributes"}, {"start": 6563.52, "end": 6570.0, "text": " and that inequalities are at least in part the product of such irreducible differences well"}, {"start": 6571.200000000001, "end": 6576.4800000000005, "text": " I mean if they're not fixed biological attributes certainly gender and race have a"}, {"start": 6577.68, "end": 6586.4800000000005, "text": ".99 correlation with biology and since your biology is first and it's determined when you're"}, {"start": 6586.48, "end": 6596.24, "text": " conceived that that demonstrates a causal direction right so even if they're not exactly fixed they are"}, {"start": 6597.28, "end": 6606.719999999999, "text": " like overwhelmingly fixed and to to suggest that is a flawed assumption that these inequalities"}, {"start": 6606.719999999999, "end": 6613.04, "text": " are at least part product of such differences what you'd have to do they simply state it's a flawed"}, {"start": 6613.04, "end": 6618.96, "text": " assumption what you have to do in order to show this is a flawed assumption you have to show that"}, {"start": 6621.5199999999995, "end": 6628.88, "text": " gender and race as far as their biologically determined have no influence whatsoever on these"}, {"start": 6628.88, "end": 6633.68, "text": " differences that's what you have to show right that's the counter claim because the claim is they"}, {"start": 6633.68, "end": 6640.32, "text": " have at least in part something to do with it and that's also I believe what the more stated and what"}, {"start": 6640.32, "end": 6647.759999999999, "text": " I have to predominate opinion it like it's is very like all the research points to for example"}, {"start": 6648.719999999999, "end": 6656.08, "text": " there is a large difference in interest between genders as far as for example career selection"}, {"start": 6656.08, "end": 6663.28, "text": " goes and so on now we can we can talk about why that is but there's also a large consensus I believe"}, {"start": 6663.28, "end": 6671.5199999999995, "text": " that there's at least partly determined to however degree but it is at least partly determined"}, {"start": 6671.5199999999995, "end": 6680.16, "text": " by biology in order to show that this is flawed you need to show that it does not have it can't"}, {"start": 6680.16, "end": 6687.5199999999995, "text": " have any influence right you have to basically prove them the the impossibility of this having an"}, {"start": 6687.52, "end": 6694.96, "text": " influence which no one has done so far much to the contrary so simply state this is a flawed"}, {"start": 6694.96, "end": 6701.92, "text": " assumption kind of shows to me that they've already they are they're in a bubble and they're"}, {"start": 6701.92, "end": 6712.88, "text": " expecting to speak to people in the same bubble yeah so they they go on and kind of discredit this"}, {"start": 6712.88, "end": 6723.92, "text": " as the biological determinism which I don't think that's a I don't think that's a correct use of"}, {"start": 6723.92, "end": 6730.72, "text": " the term biological determinism but you can judge for yourself all I think these people are saying"}, 
{"start": 6730.72, "end": 6738.88, "text": " that biology might have some influence and we could adjust for that it's not even right it's not"}, {"start": 6738.88, "end": 6745.68, "text": " even yeah this this comes up here so conclusion conclusion finally I think it's been two hours sorry"}, {"start": 6745.68, "end": 6754.4800000000005, "text": " conclusion throughout this report we've outlined the scope and scale of the problem tracing how"}, {"start": 6754.4800000000005, "end": 6759.6, "text": " the diversity crisis in the industry and the problems of bias and AI systems are interrelated"}, {"start": 6759.6, "end": 6768.96, "text": " aspect of the same issue no in the past these topics are commonly examined in isolation but"}, {"start": 6768.96, "end": 6775.280000000001, "text": " increasing evidence shows that they are closely intertwined no they're you've shown that they're"}, {"start": 6775.280000000001, "end": 6782.96, "text": " parallel you have absolutely not shown that they're interrelated aspects of the same issue and you"}, {"start": 6782.96, "end": 6789.04, "text": " have not shown one any one of these causal influences the other that there is any feedback loop"}, {"start": 6789.04, "end": 6795.04, "text": " you have not shown that fixing one leads to fixing the other I mean you could also like take a"}, {"start": 6795.04, "end": 6805.2, "text": " company that extremely is focused on or like for some reason has a different workforce and then show"}, {"start": 6805.2, "end": 6812.64, "text": " how their products with the same datasets as the previous companies don't end up being being biased"}, {"start": 6812.64, "end": 6820.240000000001, "text": " probably not so easy but again not none of that is in the report there many things you could"}, {"start": 6820.240000000001, "end": 6827.4400000000005, "text": " actually do to show what you wanted to show but it's just not the case in this article"}, {"start": 6830.56, "end": 6836.72, "text": " our analysis surface two prominent responses to the diversity crisis on one hand a worker driven"}, {"start": 6836.72, "end": 6845.04, "text": " movement which we've skipped on the other hand we observe a small but vocal counter movement"}, {"start": 6845.04, "end": 6852.320000000001, "text": " that actively resists diversity in the industry I mean what again what does honesty actively"}, {"start": 6852.320000000001, "end": 6859.4400000000005, "text": " resists diversity I mean this the thought that these people stray around like no I don't like"}, {"start": 6859.44, "end": 6867.5199999999995, "text": " the I don't like to other looking people it's just so absurd all they're saying is that either"}, {"start": 6868.24, "end": 6873.44, "text": " we don't understand the problem in the correct way or our tools aren't appropriate to solve"}, {"start": 6873.44, "end": 6881.839999999999, "text": " the problem I think everyone has the same goal of the workplace and the AI systems being as fair"}, {"start": 6881.839999999999, "end": 6889.04, "text": " and as nondiscriminatory as possible to I don't do to misrepresentation of the other side is"}, {"start": 6889.04, "end": 6897.2, "text": " something that really bugs me and it's something that these authors do a lot so yeah I lose my"}, {"start": 6897.2, "end": 6906.4, "text": " polite side maybe and uses arguments from biological determinism to assert that women are"}, {"start": 6906.4, "end": 6912.56, "text": " inherently less suited to computer science in the AI what a load of crap sorry but"}, 
{"start": 6912.56, "end": 6919.52, "text": " uses to assert that women are inherently less suited to computer science in the AI no one"}, {"start": 6920.320000000001, "end": 6929.68, "text": " okay not no one but no one that I know asserts that absolutely no one that makes these arguments"}, {"start": 6929.68, "end": 6939.68, "text": " okay sorry not no one you can always find a sexist douchebag that makes that argument but"}, {"start": 6939.68, "end": 6947.92, "text": " this this is not a serious argument made and this is not the this this counter movement most"}, {"start": 6947.92, "end": 6953.52, "text": " people in the argument that most people in this counter movement make not at all and to represent"}, {"start": 6953.52, "end": 6963.92, "text": " them as such is just so dishonest that yeah this this this basically this is the it's nice that"}, {"start": 6963.92, "end": 6970.96, "text": " it's in the conclusion because it finally like at the end it completely destroys the credibility"}, {"start": 6971.68, "end": 6979.76, "text": " of me taking seriously these authors I felt the head so that the parts we skipped over I mostly"}, {"start": 6979.76, "end": 6987.68, "text": " would say I'm mostly okay with the mostly show parallels between the have that AI systems are"}, {"start": 6987.68, "end": 6992.64, "text": " biased and they also show that there is any poor representation they also show examples of"}, {"start": 6992.64, "end": 7000.320000000001, "text": " discrimination harassment and so on problems in AI companies and universities that all you can"}, {"start": 7000.320000000001, "end": 7005.76, "text": " read the report for this that's it's pretty interesting to read but the points I've addressed"}, {"start": 7005.76, "end": 7015.76, "text": " I'm not happy with yeah so that was it for now sorry this was took so long but I felt that a"}, {"start": 7015.76, "end": 7024.88, "text": " thorough take was necessary have a nice rest of the day"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=sbKaUc0tPaY | The Odds are Odd: A Statistical Test for Detecting Adversarial Examples | https://arxiv.org/abs/1902.04818
Abstract:
We investigate conditions under which test statistics exist that can reliably detect examples, which have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test time predictions for adversarial attacks with high accuracy.
Authors:
Kevin Roth, Yannic Kilcher, Thomas Hofmann | Hello and welcome. Today we're looking at The Odds are Odd, a statistical test for detecting adversarial examples. So, shameless self-promotion here, since this is me. This is on arXiv, and basically what we do is detect adversarial examples. For those who don't know, an adversarial example is basically a way of fooling a classifier in order to get it to do something weird. Let's look at it. Maybe you have an image of a cat, and yeah, I have no clue how a cat looks. Alright, so we have an image of a cat and you have a classifier. The classifier takes this image as an input, winds it down to some probabilities of classes, cat, dog and so on, and then gives you an estimate of how likely each class is. What the adversarial example does is it changes this image by adding a noise. This is a very specific noise, and you have a multiplier here, gamma, which is super small, so the noise is almost invisible, you basically can't see it with the human eye. It's so small, but it's able to perturb this image in a way that the probabilities change such that all of a sudden a different class is now the highest class. So basically you're able to fool these classifiers by adding just a very little bit of very, very specific noise. That's an adversarial example. This has many implications in, let's say, security applications, and also in understanding how these classifiers work. So our task is to explain and detect them: explain why they happen and detect when they happen. Right, so what do we do? Basically, let's just jump right into the thing here. We view a classifier as follows: you have a logit, what's called the logit l. This here is your neural network up to the last layer, which can be something like a convolutional neural network and so on, and it gives you a feature representation. So you extract from the image x a feature representation, which is this entire thing here, and then you multiply this feature representation, which is going to be some vector of dimension D, by this weight matrix. Okay, I've drawn it in the wrong direction here, let's draw W over here; it's going to be D by, let's say, K, with K the number of classes. Okay, still wrong. D by K. Right. And you output a vector of dimension K, which then is this cat, dog and so on. These are the logits, and the logits are transformed into probabilities by running them through a softmax layer. But basically, we view a classifier as having a feature representation and a weight matrix, and this here is a matrix multiplication, a dot product with the matrix. And this is kind of where adversarial examples happen. So when we look at this weight matrix, again, we look at the D-dimensional feature vector here, and we look at the weight matrix. It has columns. Let's say we have four classes here, so it has these four columns, and each of them is D-dimensional. Each of them is going to be multiplied by this feature vector, giving a score. So the final score for a class is going to be the multiplication of one of these columns, w1, w2, w3, w4, with this feature vector; let's call the feature vector little f. So your logit of class i is going to be the inner product of w_i and f. All right, we'll leave away biases for now.
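To make the decomposition just described concrete, here is a minimal sketch of the "feature extractor plus weight matrix" view of a classifier. It is illustrative only, not code from the paper: `feature_extractor`, `W` and the shapes are placeholder assumptions, with W holding one D-dimensional column per class.

```python
import torch

def logits_and_probs(feature_extractor, W, x):
    # f(x): the D-dimensional feature vector produced by the network up to the last layer
    f = feature_extractor(x)            # shape (D,)
    # W has one column per class (shape D x K); each logit is an inner product <w_i, f>
    z = W.t() @ f                       # shape (K,), z[i] = <W[:, i], f>
    probs = torch.softmax(z, dim=0)     # softmax turns the logits into class probabilities
    return z, probs
```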
Okay, we can introduce biases to make it a bit more complicated, but it changes nothing. So your logit is going to be this inner product, and whichever logit is the highest wins; that's going to be the predicted class. Now, in an adversarial example, what can you change? You can change this feature vector f by changing the x, that is, you change the output of the convolutional network, which is the feature vector. And what you have to do in order to make a logit as high as possible, basically make one class as high as possible, is make this inner product as high as possible. And what is an inner product? If you look at it in a classic vector space, if this is w_i and this is f, what you want to do is make f and w_i align as much as possible, because the inner product basically depends on the angle and the magnitude. Sure, you can stretch f, but then it just becomes more aligned, more positively or more negatively, with all the w's at once. So basically you want to rotate, you want to align f with w_i as much as possible, you want to go into this direction with f. Now, not only do you want to maximize the alignment of f with one particular w_i, you want to be adversarial. The adversarial task is often framed as either targeted, meaning the prediction needs to become one particular other class, or untargeted, which means just give me a perturbation that will make the classifier be fooled, and fooled means whatever it predicts right now, it should predict something different. So what you ultimately want is this inner product to be as high as possible for some i that is not the correct class, and at the same time you want this other quantity, the inner product with the correct class, to be as small as possible. Say the classifier is 100% correct, so w_y corresponds to the label y of x, and w_i is some other column; you want some column with i not equal to y to have maximum inner product, and the inner product with the correct class to be as small as possible, which ultimately means you want this entire quantity maximized. It's a pretty simple idea: let's say this is the logit of i minus the logit of y; we have slightly different notation in the paper, I think we call this z, but never mind. So you basically just want to make this as large as possible. And our point is: this is not the only thing. You want to maximize this, but you have a constraint, namely that your delta x can only be small, because the point of an adversarial example is that the perturbation is so small you can't see it. That means you don't have much wiggle room to do these perturbations, which means we should be able to detect a pattern like this in the latent space. This here is the latent-space feature vector, and if we can detect a pattern in the latent space, then we can build an adversarial example detector. Right, so how do we do this? We measure exactly this: we measure the alignment between the currently predicted class and all other classes.
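Before getting to that measurement, here is a rough sketch, under the same placeholder names as above, of the perturbation objective just described: gradient ascent on the gap between the best wrong-class logit and the true-class logit, with the perturbation kept small. The step size, bound and loop are illustrative choices, not the attack configuration used in the paper's experiments.

```python
import torch

def logit_gap_attack(feature_extractor, W, x, y, eps=0.03, steps=40, step_size=0.005):
    # Gradient ascent on max_{i != y} (z_i - z_y) under an L-infinity bound ||delta|| <= eps.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = W.t() @ feature_extractor(x + delta)      # logits of the perturbed input
        mask = torch.ones_like(z, dtype=torch.bool)
        mask[y] = False
        gap = z[mask].max() - z[y]                    # best wrong-class logit minus true-class logit
        gap.backward()
        with torch.no_grad():
            delta += step_size * delta.grad.sign()    # push the gap up
            delta.clamp_(-eps, eps)                   # keep the perturbation small (invisible)
            delta.grad.zero_()
    return (x + delta).detach()
```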
So in this graphic here you see this: it's a 10-class classifier, this is CIFAR-10, we only show six of the other classes, but we have the full graphic. This shows an adversarial example. The axis going along the top of each of the images is the alignment with the adversarial class; this is an adversarially perturbed sample, so this axis shows the alignment with the adversarial class. Of course you see the bright red dot: if you just focus on that, that is the adversarial example projected into this space, and of course the alignment with this class is going to be very high, since the classifier actually predicts this class. The blue is the sample that the adversarial sample was derived from, which means the original image, and you already see, without looking at any of the other dots, that the blue is around zero here, around zero here, around zero, around zero, but here it's very high on this axis. The axis to the right in each of these plots is one of the other classes, except for the currently predicted adversarial class; so the top axis is always the same, while the axis to the right is a different one for each plot. And you can already see, that's why we framed it in green, that this plot here is the one where the axis to the right corresponds to the original class of the sample. So don't look at the other plots yet; what you see here is basically that the blue is really high in this class, and the adversarial example procedure has driven it down on this class and up on this class, which is exactly saying it has made this inner product small and this inner product large. So where do we go from here? Let's skip ahead a bit and go to this graphic way down here. What we've done is we've taken an example out of the data set and then derived an adversarial example from it, so x is the example from the data set and x hat is the adversarial example derived from it. In the plot to the right, x would be sitting about here, and I'm going to explain what that means, and x hat would be sitting down here, about one third from the top. So what these axes represent is the following: we've gone from x to x hat in very small steps, and at each step we've asked the classifier, hey classifier, what's the probability of the class y, where y is the class of x, and the class of x hat is some other class, since it's an adversarial example. That probability is represented in white: the whiter, the higher the classifier thinks the probability of class y is. The direction down is the direction of x hat minus x, so it's going into the adversarial direction, and the direction across is some direction orthogonal to it; there we also went in tiny steps and asked the classifier, hey classifier, what do you think is the probability of y here?
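A sketch of the grid probe described above might look as follows; `model` is assumed to map a single image to a logit vector, and the grid resolution and span are arbitrary illustrative values rather than the settings behind the actual figure.

```python
import torch

def probability_landscape(model, x, x_adv, y, n=25, span=2.0):
    # Walk from x towards x_adv (rows) and along a random orthogonal direction (columns),
    # asking the model at each grid point how probable the original class y still is.
    step = (x_adv - x).flatten()
    scale = step.norm()
    d_adv = step / scale                              # unit vector towards the adversarial example
    d_rand = torch.randn_like(d_adv)
    d_rand = d_rand - (d_rand @ d_adv) * d_adv        # make the second direction orthogonal
    d_rand = d_rand / d_rand.norm()

    grid = torch.zeros(n, n)
    rows = torch.linspace(0.0, span, n)               # multiples of the adversarial step
    cols = torch.linspace(-span, span, n)             # steps along the orthogonal direction
    with torch.no_grad():
        for i, a in enumerate(rows):
            for j, b in enumerate(cols):
                probe = x + ((a * d_adv + b * d_rand) * scale).reshape(x.shape)
                grid[i, j] = torch.softmax(model(probe), dim=0)[y]  # "whiteness": P(original class)
    return grid
```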
So we've basically done this kind of grid sampling, and at each point we've asked the classifier, hey classifier, how probable do you think y is here; the classifier outputs some number, and we plot it in white. This direction, again, is always the same, while this other direction we randomize and then aggregate over lots and lots of samples; and we also aggregate the entire thing over the data set, so we get a comprehensive view of what adversarial examples look like from the view of the classifier. And what we find is pretty interesting. When you go away from the original example, basically in every direction here, the original class decreases smoothly; you see at the edges it gets black, so the further away you go, the shakier the classifier gets, it's like, yeah, I'm not so sure anymore that this is the class. But if you go into the direction of the adversarial example, first of all the drop-off is very steep, so all of a sudden you're in very dark territory, which means the classifier doesn't think y is probable at all anymore, and moreover you get this kind of cone here. So what we think is happening is that, given an example, there are these directions in image space, basically straight directions, that lead to adversarial examples, and we call these cones because they are kind of low-dimensional directions in the space where the adversarial examples lie. And what's really interesting, we have more plots here: if you start here and go this way, the original class starts out high, goes down rapidly, and stays down; even if you go super far into this direction, this class will stay down, whereas the adversarial class, let's say y hat, starts low, goes up, and then kind of fades. Here is where the adversarial example would sit, at about this distance. It means that as you go towards the adversarial example, the probability of the adversarial class rises and the probability of the original class drops; then, as you go further, and this is what's interesting, this probability here drops again, which means the classifier is kind of like, yeah, okay, there's too much noise, not so sure about this class anymore, but the original class stays low for a very, very long way, even if you keep going in this direction. So this tells us that adversarial examples are characterized by specific directions you can go in that suppress the original class and pump the new class up, which is exactly what we claimed with this inner product alignment. The next experiment we've done is: we've taken this adversarial example here and said, well, if we go into random directions, it's really just this one direction that's problematic, so going into random directions we should, you know, get back to the original class, since it's basically surrounded by the original class.
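One quick, rough way to check that intuition numerically is to add random noise to the adversarial example a number of times and count how often the prediction actually reverts to the source class; this is only an illustrative sketch, again assuming a hypothetical single-image `model`, and the noise level and sample count are made-up values.

```python
import torch

def recovery_rate_under_noise(model, x_adv, y_source, sigma=0.05, n_samples=32):
    # Fraction of noisy copies of the adversarial example that the model
    # classifies as the original (source) class again.
    recovered = 0
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x_adv + sigma * torch.randn_like(x_adv)
            if int(model(noisy).argmax()) == int(y_source):
                recovered += 1
    return recovered / n_samples
```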
This is just one direction, and this here represents all the other directions there are, and how many directions are there in pixel space? A lot. So we should be able to get back to the original class, but that's not the case. We found that's not the case, and we also found why. I still want to go back to this plot here: if you add noise, and this is the noise magnitude here, what you'll see is that the orange here is the adversarial class, so the orange goes down, down, down as you increase the noise. The blue is the source class, so the blue goes up, and it goes up faster, faster than the green, which is the highest other class, meaning whatever class is highest other than the adversarial and the source class. So the source class goes up quickly, but before it can overtake the adversarial class, which only happens way back there, the highest other class has already taken over, so the source class is basically too weak. And if you look at this plot again with an actual color picker, you see that the amount of white here and here is not high enough, it's like 0.3 or something out of one, or even lower. So the source class is not strong enough that by simply adding a bit of noise you can get back to it. But we thought, hey, if this is correct, we can actually detect this effect, this faster rising of the source class. So our plan is basically: we add noise, a particular amount of noise, just a little bit actually, and then we detect which class falls and which class rises, and the way we do this is we detect exactly the alignment I described before, under noise. So we form this quantity here for all classes other than y, where y is the currently predicted class, and we look at what happens to it under noise. And that's where we get to this graphic here: again, this axis is the adversarial class, or rather the currently predicted class, and the axis here is, for each plot, one of the other classes. When we add noise, the noise magnitude is encoded in the brightness of the dots, so the darker the red dots, the more noise we've added; here is the original adversarial sample, and then as we add noise you see, here, more noise, more noise, more noise.
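In code, the quantity being formed here could be sketched roughly as below: the expected shift, under input noise, of each pairwise logit gap relative to the currently predicted class, followed by a thresholding step. This is a simplified stand-in rather than the paper's exact test; the real procedure calibrates its thresholds on clean data per class pair and combines several noise sources in a voting scheme, and `thresholds`, `sigma` and `n_samples` here are placeholder assumptions.

```python
import torch

def noise_induced_shifts(feature_extractor, W, x, y_pred, sigma=0.05, n_samples=64):
    # Expected change, under Gaussian input noise, of the pairwise logit gaps z_i - z_{y_pred}.
    with torch.no_grad():
        z_clean = W.t() @ feature_extractor(x)
        gaps_clean = z_clean - z_clean[y_pred]
        shifts = torch.zeros_like(gaps_clean)
        for _ in range(n_samples):
            z_noisy = W.t() @ feature_extractor(x + sigma * torch.randn_like(x))
            shifts += (z_noisy - z_noisy[y_pred]) - gaps_clean
        return shifts / n_samples     # adversarial inputs: the source class shows an unusually large shift

def detect_and_correct(shifts, y_pred, thresholds):
    # thresholds: per-class cutoffs calibrated on clean data (hypothetical input here).
    scores = shifts - thresholds
    scores[y_pred] = float("-inf")    # the predicted class itself is not a candidate source class
    candidate = int(scores.argmax())
    is_adversarial = bool(scores[candidate] > 0)
    corrected = candidate if is_adversarial else int(y_pred)
    return is_adversarial, corrected
```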
Nothing much happens for a class that has nothing to do with the original class: its alignment simply goes down a bit, the classifier simply gets less sure about it. But for the original class, the one the adversarial example was derived from, the alignment really rises; at the same time that the adversarial class drops, it rises in that direction. So we're able to measure these deltas under noise, and we're able to devise statistics of what happens to these quantities for a clean sample versus what happens to them for an adversarial sample. Here you see pairings of source class and adversarial class; each of these histograms is collected from that. What you can see is, in blue, the alignments under noise of a non-perturbed sample, and in orange the alignments under noise of an adversarial sample, and what's cool is that these alignments are very different in all of these cases. So there is a clear signature in the adversarial sample in these noise-induced alignments with the weight matrix rows, which lets you build a detector: you can say, alright, anything to the left of the threshold is clean, anything to the right is adversarial. We can do this over many different types of noise, build a voting mechanism on top, and thereby detect adversarial examples (a rough code sketch of this test is included after the segments below). So we have a bunch of experiments; we mostly experiment on the CIFAR-10 and ImageNet data sets, and here you see one of the main results, the detection rates of our statistical test. On clean samples you want the detection rate to be low, on adversarial samples you want it to be high, and we achieve very high detection rates while having very low false positive rates, especially on ImageNet. So it seems the better tuned these models are, the better we are at detecting adversarial examples against them; this correlates pretty directly with how well the models perform in terms of clean accuracy. And what we can do now, since we can not only detect these things but detect them in a class-specific fashion: if you have a sample with a particular predicted class, and you look at the noise-induced alignments for each class it could have been derived from, you can then clearly say, well, if all of them say it's a clean sample, then alright, it's a clean sample; but if one of them says it's an adversarial sample, then not only do I know it's adversarial, I can say, aha, this must be the source class. This is exactly the effect we saw before: if we detect this pattern, we can also deduce the original class that the adversarial example was derived from. So we're not only able to build a detector, we're basically able to reconstruct the original class. And here you see, for these models, let's say on CIFAR-10, since ImageNet is a bit too large for our compute, but on these models that have pretty high clean accuracy on CIFAR-10, plus this kind of toy network here,
we're able to reconstruct the original class so basically this is defense against adversarial examples by by getting to almost clean accuracy back so this is really surprising actually and kind of nice so we do a bunch of other experiments including we defend against an attacker that's actually aware of this thing but the main the main point here is we don't say this is kind of the end all method of defending against adversarial examples we simply want to kind of encourage the way of thinking of of these kind of noise what what if you what if you noise induce perturbations how does your network react to that can you can you detect these effects here can you detect effects like this and are these unavoidable or are there architectures are there architectures we can basically build such that adversarial examples have no chance except doing something like this which we can then easily detect alright so that was a bit of an introduction if you like it check out the entire paper and goodbye | [{"start": 0.0, "end": 8.0, "text": " Hello and welcome. Today we're looking at the odds are odd, a statistical test for detecting adversarial examples."}, {"start": 8.0, "end": 15.0, "text": " So shameless self-promotion here since this is me."}, {"start": 15.0, "end": 22.0, "text": " So this is on archive. And basically what we do is we're detecting adversarial examples."}, {"start": 22.0, "end": 37.0, "text": " For those who don't know what an adversarial example is, is basically a way of fooling a classifier in order to kind of get it to do something weird."}, {"start": 37.0, "end": 42.0, "text": " Let's look at it. So maybe you have an image of a cat."}, {"start": 42.0, "end": 51.0, "text": " And yeah, I have no clue how a cat looks. Alright, so we have an image of a cat and you have a classifier."}, {"start": 51.0, "end": 63.0, "text": " So the classifier takes the same image as an input, kind of winds it down to some probabilities of classes and a cat dog and so on."}, {"start": 63.0, "end": 74.0, "text": " And it then gives you an estimate of how likely each class is. Right. And so on."}, {"start": 74.0, "end": 84.0, "text": " So what the netversarial example does is it changes this image and it adds a noise."}, {"start": 84.0, "end": 97.0, "text": " So this is just a very specific noise. And you have kind of a multiplier here, gamma, which is super small. So the noise is almost, you can't see it by with a human eye basically."}, {"start": 97.0, "end": 112.0, "text": " It's so small, but it's able to perturb this image in a way that the probabilities will change such that all of a sudden a different class now is the highest class."}, {"start": 112.0, "end": 120.0, "text": " So basically you're able to fool these classifiers by adding just very little bit of very, very specific noise."}, {"start": 120.0, "end": 130.0, "text": " So that's an adversarial example. These are many implications in, let's say, security applications and also in understanding how these classifier works."}, {"start": 130.0, "end": 146.0, "text": " So our task is to explain and detect them, explain why they happen and detect when they happen. Right. So what is what do we do? Right."}, {"start": 146.0, "end": 174.0, "text": " Basically, let's just jump right into the into the thing here. We view a classifier as an output. So you have log it. What's called log it L is this."}, {"start": 174.0, "end": 185.0, "text": " It's this here is your neural network up to the last layer. 
Basically, it can be like something like a convolutional neural network and so on."}, {"start": 185.0, "end": 197.0, "text": " Because you have feature representation. So you extract from the image X feature representation, which is this entire thing here. And then you multiply this feature representation."}, {"start": 197.0, "end": 214.0, "text": " So this is going to be some vector of dimension D. You multiply this by this weight matrix, which is going to be something like, okay, I've drawn it in the wrong direction here."}, {"start": 214.0, "end": 226.0, "text": " Let's draw W over here. It's called it's going to be D by, let's say K, with K is the number of classes. Okay. Still wrong."}, {"start": 226.0, "end": 244.0, "text": " D by K. Right. And output a vector of dimension K, which then is this cat dog and so on. So these are the log it's and the log it's transformed to the probabilities by running it through a softmax layer."}, {"start": 244.0, "end": 258.0, "text": " But basically, we view a classifier as having a feature representation and a weight matrix. And this here is a is a matrix multiplication adult product by matrix."}, {"start": 258.0, "end": 276.0, "text": " So what we see basically is this is this is kind of where that versatile examples happen. So when we look at this weight matrix, right. Again, we look at the D dimensional feature vector here. And we look at the weight matrix."}, {"start": 276.0, "end": 294.0, "text": " What it does is it has columns, right. Columns. Let's say we have four classes here. Right. So it has these four columns. And each of them is D dimensional."}, {"start": 294.0, "end": 306.0, "text": " So each of them is going to be multiplied by this thing and giving a score. So the final score for a class is going to be the multiplication of a row."}, {"start": 306.0, "end": 315.0, "text": " W one W two W three W four by this feature vector. Let's call the feature vector."}, {"start": 315.0, "end": 329.0, "text": " Little F. Right. So your your log it of class I is going to be the inner product of W I and F."}, {"start": 329.0, "end": 345.0, "text": " All right, we'll leave away biases for now. There's there's okay. We can introduce biases to make it a bit more complicated, but it changes nothing. So your log it's going to be the inner product. And whichever log it is the highest wins."}, {"start": 345.0, "end": 366.0, "text": " So that's going to be the prediction of the class for so since you since you can in an adversarial example, what can you change? You can change this feature vector here this F by changing the X. You can change the output of the of the convolutional network, which is the feature vector."}, {"start": 366.0, "end": 379.0, "text": " And what you have to do in order to make a log it as high as possible basically mean make one class as high as possible is you need to make this inner product as high as possible."}, {"start": 379.0, "end": 405.0, "text": " And what's an inner product if you're looking at in like a classic vector representation space if this is W I and this is F. 
What you want to do is you want to make F and W align as much as possible because the inner product is going to be basically dependent on the angle and the magnitude."}, {"start": 405.0, "end": 416.0, "text": " So you can you can stretch F for sure, but it's going to be kind of aligned with all the W's then more by stretching or more negatively in whatever way you want it."}, {"start": 416.0, "end": 429.0, "text": " But basically you want to rotate you want to align as much as possible the F with the W I so you want to kind of go into this direction with F."}, {"start": 429.0, "end": 439.0, "text": " So now not only do you want to kind of maximize F with a particular W I what you want to do is be adversarial."}, {"start": 439.0, "end": 456.0, "text": " The adversarial task is often framed as just it's either targeted so find a part so I need to be a particular other class or it's untargeted, which means just just give me a perturbation that will make the classifier be fooled."}, {"start": 456.0, "end": 462.0, "text": " And be fooled means whatever predicts right now it should predict something different."}, {"start": 462.0, "end": 487.0, "text": " So what you ultimately want to do is you want this as high as possible for some I that is not the correct I and you want sorry you want this other quantity."}, {"start": 487.0, "end": 505.0, "text": " So W I let's call it W I if let's say the classifier is 100% correct so W I why is the label of X right and W I is whatever column here current is currently predicted."}, {"start": 505.0, "end": 529.0, "text": " So I want the some column where I is not equal to Y to have maximum inner product and so this is not no longer I will get to that they have maximum inner product and you want this to you want this inner product with the correct class to be as small as possible."}, {"start": 529.0, "end": 535.0, "text": " So which ultimately means you want this entire quantity maximized."}, {"start": 535.0, "end": 550.0, "text": " So it's a pretty simple idea we'll call let's say this is the log at I minus the log at Y we have slightly different notation in the paper I think we call this Z but never mind."}, {"start": 550.0, "end": 565.0, "text": " So you basically just want to make this as as large as possible and so our point is since since you're this is not the only thing you want to maximize this."}, {"start": 565.0, "end": 583.0, "text": " But you have a constraint namely your constraint is that your delta X can only be small right your delta X can only be small because the point of an adversarial example is that the perturbation is so small you can't see it."}, {"start": 583.0, "end": 599.0, "text": " And that means that you basically don't have much wiggle room to do these to do these perturbations which means that we should be able to detect a pattern like this in the latent space."}, {"start": 599.0, "end": 615.0, "text": " So this this here is the latent space feature vector and if we can kind of detect a pattern in the latent space then we can get that adversarial example detector."}, {"start": 615.0, "end": 633.0, "text": " Right so how do we do this we measure exactly this what we do is we measure the alignment between the original between the currently predicted class and between all other classes."}, {"start": 633.0, "end": 650.0, "text": " So in this graphic here you see this it's a 10 class classifier this is C4 10 we only show one two we only show six of the other classes but we have the full graphic."}, {"start": 650.0, "end": 679.0, 
"text": " So this shows an adversarial example the axis going on top of each of the images is the alignment with the adversarial class so this has been an adversarial perturbed sample so this shows the alignment with the adversarial class and of course you see the bright red dot if you just focus on that that is the adversarial example."}, {"start": 679.0, "end": 708.0, "text": " Right projected into into this so of course the alignment is going to be very very high with this class since the classifier actually predicts this class the blue here is the sample that the adversarial sample was derived from which means the original image kind of and you already see without looking at any of the other dots that the blue is around zero here around zero here around zero here around zero."}, {"start": 708.0, "end": 730.0, "text": " Here around zero here around zero here here but here it's very high in this axis so the axis to the right is for each of these for each of these plots here it's one of the other classes except for the the currently predicted adversarial class so this axis is always the same axis"}, {"start": 730.0, "end": 747.0, "text": " while the axis to the right for each plot is a different one and you can already see then that's why we frame it in the green the this this plot here is where the axis to the right corresponds to the original class of the classifier."}, {"start": 747.0, "end": 776.0, "text": " Right so don't look yet at the other plots what you see here is basically the blue is is really high in this class right and the adversarial example procedure basically has driven it down this class and up this class which is exactly saying it has made this inner product small and this inner product large."}, {"start": 776.0, "end": 805.0, "text": " So where do we go from here let's actually jump this graphic a bit and go to this one that's way down here all right so what we've done is we've taken the example just out of the data set right and then we've taken an"}, {"start": 805.0, "end": 816.0, "text": " adversarial example so X is the example of the data set and then X hat is the adversarial example derived from this."}, {"start": 816.0, "end": 833.0, "text": " All right in this plot to the right X would be sitting about here I'm going to explain what the what the kind of meaning is and X hat would be sitting down here right is about one third from the top one third from the bottom"}, {"start": 833.0, "end": 836.0, "text": " let me draw this more."}, {"start": 836.0, "end": 861.0, "text": " All right so what this X is represent here is basically what we've done is we've gone from X to X hat in very small steps and at each step we've asked the classifier hey classifier what's the probability of the class Y."}, {"start": 861.0, "end": 890.0, "text": " So Y is the class of X right and the class of X X hat is some some different was some some other class right since it's an adversarial example we've asked so we've asked the classifier what's the class of X and basically we've asked what what's the probability that Y is the class of X and that's represented in white right so the more white"}, {"start": 890.0, "end": 898.0, "text": " it the higher the classifier things the class Y is probable."}, {"start": 898.0, "end": 913.0, "text": " So the direction down is going into the direction of X hat minus X so it's going into the into the adversarial direction."}, {"start": 913.0, "end": 931.0, "text": " And then the direction across is we've taken some direction that's orthogonal to this 
direction and then also went into tiny steps and ask the classifier hey classifier what do you think is the probability of Y here."}, {"start": 931.0, "end": 959.0, "text": " So we've basically done this kind of grid sampling and at each point we've asked the classifier what do you think which hat probable is Y and the classifier will always add put some number and we plotted in white and so this direction again is always the same in this direction we basically randomize it and then aggregate it over over lots and lots of different samples."}, {"start": 959.0, "end": 980.0, "text": " And we also aggregate this entire thing over the entire over the data set so we get a comprehensive view of what adversarial examples look like in the view of the classifier and what we find is pretty interesting so when you go from the original class."}, {"start": 980.0, "end": 1008.0, "text": " So you basically in every direction here in every direction it kind of the original class kind of decreases smoothly right you see at the edges here it kind of gets black so the further away you go from the original example that the more kind of shadier the classifier gets is like yeah I'm not so sure anymore that this is the class."}, {"start": 1008.0, "end": 1037.0, "text": " But if you go into the direction of here if you go into the direction of the adversarial example the kind of drop off is first of all it's very steep so all of a sudden here you're in very dark territory which means the classifier doesn't think why is probable at all anymore and moreover you get this kind of cone here."}, {"start": 1037.0, "end": 1065.0, "text": " So what we see is what we think is happening is that given an example there are these directions in late in in in image space basically straight directions that go to adversarial examples right and we call these cones because they they're kind of low dimensional directions in in the space where the adversarial example lies."}, {"start": 1065.0, "end": 1085.0, "text": " And what's really interesting is we have so plots here do we have more so what's what's quite interesting is that here."}, {"start": 1085.0, "end": 1114.0, "text": " If you if you go come on what this is kind of okay the quality of the of the plots is not is not very very good so I'm going to I may be able to draw this here so if you start here and you go here."}, {"start": 1114.0, "end": 1143.0, "text": " What happens to the original class is you start out high you go down rapidly and you stay down even if you go super far into this direction the this class will stay down whereas let's say this is why I had why I will start low go up and then kind of fade."}, {"start": 1143.0, "end": 1156.0, "text": " So here is where the adversarial example would sit sorry at about this distance that's this distance here."}, {"start": 1156.0, "end": 1185.0, "text": " It means as you go towards the adversarial example right here the probability of the adversarial class rises and the probability of the original class drops then as you go further this is what's what's interesting kind of this probability here drops which means the classifier is kind of like yeah okay there's too much noise not so sure about this class anymore but this this class here kind of stays low."}, {"start": 1185.0, "end": 1213.0, "text": " Very very long even if you go into this direction so this this gives us kind of a that adversarial examples are characterized by a specific directions that you go into that you can go into and kind of suppress the original 
class and pump the new class up which is kind of exactly what we've claimed with this inner inner product alignment."}, {"start": 1213.0, "end": 1241.0, "text": " Right. The next experiments we've done is we've taken this adversarial example here and said well if we go outside if we go into random directions right it's just really this one direction that's problematic if we go into random directions actually we should be you know go back to the original class right since it's basically surrounded by the original class."}, {"start": 1241.0, "end": 1254.0, "text": " This is just one direction and this here represents all the other directions there are and how many directions are there in pixel space like a lot so we should be able to get back to the original class but that's not the case."}, {"start": 1254.0, "end": 1269.0, "text": " Now that's we found that's not the case and we also found why so I still want to go back to this plot here if you do this if you add noise and"}, {"start": 1269.0, "end": 1284.0, "text": " this is the noise magnitude here what you'll see is the orange here is the adversarial class so orange will go down down down down down down down right as you increase the noise."}, {"start": 1284.0, "end": 1313.0, "text": " The blue is the source class so the blue goes up and it goes up faster it's easy goes up faster than the green which is the highest other class so green is whatever class is not the adversarial not the source of the highest class other than that so the source class goes up quickly but before the source class can overpass the adversarial class which happens back there the highest other class."}, {"start": 1313.0, "end": 1338.0, "text": " The highest other classes already kind of taken over so the source class is basically two week and if you again look at this this plot here if you go with an actual color picker you see that the amount of white here and here is is not high enough it's like 0.3 or something out of one or even lower."}, {"start": 1338.0, "end": 1366.0, "text": " So the kind of source class is not strong enough that by simply adding a bit of noise you can go back but we thought hey if this is correct we can actually detect this effect here this rising of the source class faster so our plan is basically we add noise."}, {"start": 1366.0, "end": 1394.0, "text": " Particular amount of noise just a little bit actually and then we detect which basically which class falls and which class rises and the way we do this is we detect the this exact alignment that I've described before under noise so we form this quantity here for all classes."}, {"start": 1394.0, "end": 1421.0, "text": " Other than why so why is the the class is currently predicted and we look at it what happens under under noise right so and that's where we get to this graphic here so again this axis is the adversarial class or the class is currently predicted right."}, {"start": 1421.0, "end": 1448.0, "text": " This axis here is all the other classes for each plot one and when we add noise what do you see is the noise magnitude is encoded in the brightness of the dots so the darker the red dots the more noise we've added here is the original adversarial sample then as we add noise you see see here here more noise more noise more noise."}, {"start": 1448.0, "end": 1476.0, "text": " Nothing's really happening for the if if if it's like one class that has nothing to do with the original class it simply kind of goes down simply kind of gets less sure about this class right but in case of 
the original class that adversarial example was derived from it really rises it really kind of"}, {"start": 1476.0, "end": 1501.0, "text": " at the same time that it drops it rises into that direction so we're able to measure these these delta's here under noise and we're able to to device basically statistics of what happens to these quantities under like if it's not an adversarial sample versus what happens to these quantities if it's an adversarial sample so here you see pairings"}, {"start": 1501.0, "end": 1519.0, "text": " of basically source class and adversarial class samples so each of these histograms is collected from that and what you can see is in blue the kind of alignments under noise of the source class"}, {"start": 1519.0, "end": 1548.0, "text": " sorry the alignments under noise of a non perturbed sample and in orange the alignments under noise of an adversarial sample and what's cool is that these these alignments you can see in all of these cases are very different so there's a clear signature in the adversarial sample in these noise induced alignments with the with the weight matrix rows"}, {"start": 1548.0, "end": 1569.0, "text": " that makes you able to basically build a detector you can say alright anything to the left is clean anything to the right is adversarial and we can do this over many different types of noises and then build basically a voting mechanism on that and thereby detect adversarial examples"}, {"start": 1569.0, "end": 1597.0, "text": " so we have a bunch of experiments we mostly experiment on the c4 10 and on the image net data set and you can see where the we see here so this is the main kind of one of the main results the detection rates of our statistical test so as you can see we are detection rate"}, {"start": 1597.0, "end": 1625.0, "text": " this is on clean samples on clean samples you want the detection rate to be low on adversarial samples you want the detection rate to be high and this we achieve very large detection rates while having very low false positive rates especially on image net so it seems like the more tuned these models are the better these models are the better we are detecting and adversarial examples to it"}, {"start": 1625.0, "end": 1652.0, "text": " this is kind of a direct correlation to how well the models perform on accuracy in a clean setting and what we can do is now since we cannot only detect these things but we can detect these things in a fashion so if you look at these things and you have like a sample of a particular class that's predicted"}, {"start": 1652.0, "end": 1668.0, "text": " let's say this class and you go and look at it at the position of the noise induced features over each of them so let's say here here here here here here here here here"}, {"start": 1668.0, "end": 1687.0, "text": " you can then clearly say well not only do I detect an adversarial example here I look at each of the class of the class that it could be derived from if all of them say it's a clean sample then all right it's a clean sample"}, {"start": 1687.0, "end": 1701.0, "text": " but if one of them says it's an adversarial sample then I don't not only do I know it's an adversarial sample but I say aha this must be the source class right this is exact effect we saw here"}, {"start": 1701.0, "end": 1718.0, "text": " all right we can if we detect this pattern here we can also back deduce basically aha so this must be the original class that the adversarial example was derived from"}, {"start": 1718.0, "end": 1742.0, "text": " so 
we're basically able to build a not only a detector but we're basically able to reconstruct the original class and here you see for these models let's say on c4 10 we imagine that is a bit too large as if yeah for for our compute but on these models that have clean accuracy is that are pretty high on c4 10 plus this this kind of toy network here"}, {"start": 1742.0, "end": 1760.0, "text": " we're able to reconstruct the original class so basically this is defense against adversarial examples by by getting to almost clean accuracy back so this is really surprising actually and kind of nice"}, {"start": 1760.0, "end": 1781.0, "text": " so we do a bunch of other experiments including we defend against an attacker that's actually aware of this thing but the main the main point here is we don't say this is kind of the end all method of defending against adversarial examples"}, {"start": 1781.0, "end": 1802.0, "text": " we simply want to kind of encourage the way of thinking of of these kind of noise what what if you what if you noise induce perturbations how does your network react to that can you can you detect these effects here can you detect effects like this"}, {"start": 1802.0, "end": 1817.0, "text": " and are these unavoidable or are there architectures are there architectures we can basically build such that adversarial examples have no chance except doing something like this which we can then easily detect"}, {"start": 1817.0, "end": 1832.0, "text": " alright so that was a bit of an introduction if you like it check out the entire paper and goodbye"}] |
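For readers who want to make the noise-induced alignment test from the transcript above concrete, here is a minimal, hypothetical sketch. It is not the authors' released code: the split into a `features` function and a final weight matrix `W`, the noise level `sigma`, and the per-class-pair `thresholds` are all assumptions made for illustration, and the actual test in the paper standardizes these statistics and votes over several noise types rather than using a single raw threshold.

```python
import numpy as np

def alignment_deltas(x, y_pred, features, W, sigma=0.05, n_samples=64, rng=None):
    """For each class z, measure how the logit gap g_z(x) = <w_z - w_y, f(x)>
    shifts on average when Gaussian noise is added to the input x."""
    rng = np.random.default_rng() if rng is None else rng
    f_clean = features(x)                                  # D-dim feature vector
    gap_clean = W @ f_clean - (W[y_pred] @ f_clean)        # shape (K,)
    shifts = np.zeros(W.shape[0])
    for _ in range(n_samples):
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        f_noisy = features(x_noisy)
        gap_noisy = W @ f_noisy - (W[y_pred] @ f_noisy)
        shifts += (gap_noisy - gap_clean) / n_samples      # average noise-induced shift
    return shifts                                          # entry z: E[delta g_z] under noise

def detect(x, y_pred, features, W, thresholds):
    """Flag x as adversarial if, for some class z, its logit gap rises under noise
    more than a per-(z, y_pred) threshold calibrated on clean data; any flagged z
    is also the candidate source class."""
    shifts = alignment_deltas(x, y_pred, features, W)
    suspects = [z for z in range(W.shape[0])
                if z != y_pred and shifts[z] > thresholds[z, y_pred]]
    return (len(suspects) > 0), suspects
```

In this sketch, `features` stands for the network up to its last layer and `W` (shape K by D) for the final weight matrix, matching the view of the classifier described in the transcript; both names are placeholders.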
Yannic Kilcher | https://www.youtube.com/watch?v=jltgNGt8Lpg | Neural Ordinary Differential Equations | https://arxiv.org/abs/1806.07366
Abstract:
We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
Authors:
Ricky T. Q. Chen, Yulia Rubanova, Jesse Bettencourt, David Duvenaud | Hello and welcome. Today we're going to look at Neural Ordinary Differential Equations by Ricky Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud. This has been quite an interesting paper to see because it's a bit special. We're going to go over parts of it, not the full paper, just the important parts, because the paper is quite packed and we'd rather explain it in pieces and get the gist of it. So basically they say: we introduce a new family of deep neural network models; instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. I mean, it sounds awesome, honestly. It sounds really cool and really new, so all right, let's jump in. What they say is: let's look at classic neural networks, especially residual neural networks. What residual neural networks do is, in each hidden layer they have a representation h_t, the hidden representation at layer t, and they then add something to it. So, if you don't know, a residual neural network works like this: you have your hidden state h_t; in a classic neural network you would just have a weight matrix W and do a matrix multiplication to get h_{t+1}, the next hidden state. In a residual neural network, you instead multiply by W to get a delta, delta h_{t+1}, and then you add the two: h_t plus delta h_{t+1} gives you h_{t+1}. So a residual network basically doesn't learn the transformation to the next layer; it learns how the next representation differs from the current one, i.e. what do I need to add to this representation to get to the next one. This is kind of the reason it works so well for deep networks: since each layer only does a little bit of transformation, we should bias each layer towards keeping the representation the same and only changing it a little bit. So the inherent bias is the identity transform. Right, that's a residual network. This is characterized by f with parameters theta applied to h_t; this f is what we called the delta, so f is the neural network layer and theta its parameters, the weight matrix in our case. Now they say: okay, what if you do many of those? What this really is is a kind of time process: a state, then the next state, then the next state, and you always learn how to get from one state to the next. What if you go very deep, look at this as a time process, and make these steps very small, super small? What if you have many, many, infinitely many layers? Well, okay, then this becomes a dynamic process, basically an ordinary differential equation, where my time is now continuous.
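To make the "residual update as a small time step" picture concrete, here is a tiny, hypothetical numpy sketch; the layer f, its weights, and the step sizes are made up for illustration and are not the paper's architecture. A residual block computes h plus f(h), and shrinking the step turns the same update into an Euler step of an ODE.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 0.1    # made-up layer weights, state dimension D = 4

def f(h, t=None):
    """One 'layer': the residual branch, which becomes the parameterized derivative dh/dt."""
    return np.tanh(W @ h)

h = rng.standard_normal(4)

# Residual network: one discrete step per layer, h_{t+1} = h_t + f(h_t).
h_res = h + f(h)

# Same update with a small step size dt: h_{t+dt} = h_t + dt * f(h_t, t).
# Letting dt -> 0 and stacking infinitely many such layers is exactly
# Euler integration of the ODE dh/dt = f(h, t).
dt, h_ode = 0.01, h.copy()
for step in range(100):                   # integrate from t = 0 to t = 1
    h_ode = h_ode + dt * f(h_ode, step * dt)
```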
And I look at it as a local linearization, basically. I specify how to get from this time to the next instant, the next infinitesimally small instance of time, by specifying this f. In the continuous case, this is to say that the derivative of the hidden state is now parameterized by a neural network. If you know what a differential equation is: you look at it as a start state, and then you specify, at each point in time t, what the gradient looks like. Maybe the gradient looks like this. Then the ODE solver says, okay, I'm going to take an infinitesimally small step in this direction, and then it goes back to f: hey f, what's the gradient at this next infinitesimally small point in time? And f says, well, the gradient is like this, and the ODE solver goes, okay, then I need to be a little bit flatter, so I go here; what's the gradient at this time? Okay, maybe it points up, so I need to go up here. So the ODE solver constructs the curve, and at each point it checks that whatever f says is the gradient actually is the gradient of the curve. That's roughly how solving an ODE works. And they say: you can actually look at residual networks as a discrete-time analog of such an ODE. So what we want to do, and this is the crazy part, or the cool part, is to do this for neural networks: we simply specify an ODE, and the start state is, let's say, if you want to build an MNIST classifier, our MNIST image. And we simply train a neural network such that if you solve the ODE, the curve at the end arrives at the correct class. I'm skipping a few parts here about dimensionalities and so on, because you need to keep the same dimension throughout. But in essence: we start out with our input and we train a neural network to give us the correct derivatives of this curve at each point in time, such that when you solve the ODE, at the end point you are at the correct label. So this is the input to your task, and this is the output. But instead of having a neural network go from input to output, you have a neural network that parameterizes how you go from each step in time to the next: what's the gradient at each point in time? That's the gist of it, and that's really cool, it's a really new approach. All right, so they give various advantages of this, and here it is demonstrated again: you are here, this is your input, and you want to go to the output; the loss that you specify can depend either on the output, as in an image classifier, or on intermediate states; it's kept general. Now, the neural network specifies how to get from one step to the next, and the neural network has parameters, so we need to train it such that the correct output is produced for a given input. We actually need to train it.
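As a concrete, and again hypothetical, illustration of the forward pass described above, the sketch below treats the flattened input as the start state z(t0), lets a small made-up network define dz/dt, and reads the class scores off z(t1). SciPy's generic `solve_ivp` stands in for the black-box solver; all shapes and weights are assumptions for illustration, not the paper's architecture.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
D, K = 64, 10                               # assumed state size and number of classes
W1 = rng.standard_normal((D, D)) * 0.1      # made-up parameters theta of f
W_out = rng.standard_normal((K, D)) * 0.1   # made-up linear read-out to class logits

def f(t, z):
    """Parameterized derivative dz/dt = f(z, t; theta)."""
    return np.tanh(W1 @ z)

def ode_net_logits(x_flat):
    """Forward pass: integrate the state from t = 0 to t = 1 with a black-box
    solver, then map the end state to class logits."""
    sol = solve_ivp(f, t_span=(0.0, 1.0), y0=x_flat, method="RK45", rtol=1e-3)
    z_t1 = sol.y[:, -1]                     # state at the end time t1
    return W_out @ z_t1

x = rng.standard_normal(D)                  # stand-in for a flattened input image
print(ode_net_logits(x).argmax())           # predicted class
```

The point of the sketch is only the shape of the computation: the "depth" of this model is whatever number of function evaluations the solver decides to spend, not a fixed stack of layers.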
So we need a way to train these parameters theta, and they say, okay, we do gradient descent on theta like in a classic neural network. But now it's not so easy: it's not one pass through this function, it's like infinitely many passes through this function until you arrive at the end. And then you need to somehow get a gradient with respect to these parameters. They state it like this: the loss of the end state is the loss of the start state plus the integral over time of this derivative, i.e. L(z(t1)) = L( z(t0) + the integral from t0 to t1 of f(z(t), t, theta) dt ), which is basically this curve, and the curve is given by an ODE solver into which we feed all of these things. So we need gradients with respect to that. How do we do it? They point out that we could backpropagate through the ODE solver itself, but that would depend on the particular solver and so on. But there's another method, what's called the adjoint. This is reverse-mode differentiation of an ODE solution: the adjoint sensitivity method solves an augmented ODE backwards in time. So basically, you forward propagate, you come to the end, and then you can solve a second ODE, generating a second curve, this one here, and don't worry about these little jumps for now. From the first curve together with the second curve you can then compute the gradients you need. The second curve is basically something like the application of the chain rule to the continuous domain. And you only need to account for these jumps when your loss depends on intermediate states; that's the offset caused by including or not including a loss term at that point. So let's dive a bit further into this adjoint state. What's the red curve? The red curve is called a, and this here is the differential equation for it; again, we specify the curve a by its start state and its derivative, and from those the ODE solver can construct the curve entirely. Now a(t), it says here, is dL/dz(t): how does the loss depend on the hidden state z(t) at time t? It doesn't even have to be one of these particular points; how does the loss depend on this hidden state here? To find that out directly, you would need to develop the curve until the end, calculate the loss, and then backpropagate through everything. But you can instead get it by computing this adjoint. Here is a demonstration, an example. The start state of a is simply given by the loss: how does the loss depend on the end state? Well, simply by plugging it into the loss equation; your loss is maybe a cross-entropy loss or something. How does the loss depend on an earlier state? Well, we start from this state that we already know, and we know how this sensitivity of the loss develops in reverse time, so backwards in time. We develop this curve until here and we say: aha, this point influences the loss by this much. And if the loss explicitly depends on this point, then we have to add in this offset, since this point only influences the loss from this time up until here, and then it changes.
So there's a discontinuity there, but don't worry about that too much. Basically, we can calculate the curve and the loss in a forward pass, and then do a second pass backwards, again via an ODE solve, to get how the loss depends on each of the hidden states. That's the second part. But that's not all, because we're ultimately not interested in how the loss depends on the states; we're interested in how the loss depends on the parameters that tell us how to get from one hidden state to the next. Luckily, we can then simply evaluate this integral, which as you can see depends on a and on z, and get the gradients for the parameters. I also have to say the parameters are static: they are the same over the entire duration; what changes is only time. All right, so this is how you get gradients with respect to the parameters, and the cool thing is that now you can train this. You can actually train the neural network that tells you how to go from one state to the next such that if you input the digit two as an image, you get out "two" (not exactly, but that's the point) by going through this motion, through this ODE solve. That's immensely cool. They actually define how to do all of this in one forward and one backward pass; you can solve everything at the same time, which is pretty cool. And they evaluate their net and compare it with a bunch of other nets. Interestingly, with an ODE solver you can never really tell how many evaluations it's going to do, because it gets increasingly accurate over time: you let it run, maybe it first generates a curve that looks something like this, then it says, crap, okay, I need to go back and refine, and then it maybe produces a curve like this, and so on, getting continually closer. For that it needs to query the function: you give it f as a function it can evaluate, and it goes, okay, I need to know the gradient here, got it; I need to know it here, hmm, that wasn't accurate enough, I also need to know it here, and so on. So you can never know in advance how many evaluations it will make, but you have a parameter to trade off accuracy against the number of evaluations. That's what they show here: the less error they want in the forward pass, the more function evaluations they need, that's this curve; the more evaluations, the more time they have to invest, that's this curve. Interestingly, as the number of forward evaluations increases, the number of evaluations required for the backward pass also increases, but not by much: the backward pass requires about half the number of evaluations of the forward pass, which is encouraging, since the backward pass doesn't blow up the way it might if you had to backpropagate through the operations of the ODE solver itself.
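To sketch what training with the adjoint looks like in practice, here is a hedged PyTorch example. It assumes the torchdiffeq package and its `odeint_adjoint` interface, which computes gradients by solving the augmented ODE backwards in time rather than backpropagating through the solver's operations; the tiny dynamics module, the integration times, and the placeholder batch are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint   # assumed interface of the torchdiffeq package

class ODEFunc(nn.Module):
    """The neural network that parameterizes dz/dt = f(z, t; theta)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
    def forward(self, t, z):
        return self.net(z)

func, readout = ODEFunc(), nn.Linear(64, 10)
opt = torch.optim.Adam(list(func.parameters()) + list(readout.parameters()), lr=1e-3)
t = torch.tensor([0.0, 1.0])                        # integrate from t0 = 0 to t1 = 1

for x, y in [(torch.randn(32, 64), torch.randint(0, 10, (32,)))]:   # placeholder batch
    z_t = odeint(func, x, t)                        # shape (len(t), batch, dim)
    loss = nn.functional.cross_entropy(readout(z_t[-1]), y)
    opt.zero_grad()
    loss.backward()                                 # gradients come from the adjoint pass
    opt.step()
```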
And they also show as your training epoch continues, the the ODE solver requests more and more evaluations for so for the same epoch basically, or the same samples within different epochs, which means as it gets more accurate kind of needs to know more and more and more about the the samples basically about the test at the training samples, which is all basically showing that this kind of works. Yeah, so they they kind of get into normalizing flows, which I don't want to get into here much because we haven't done a video on that yet. We'll do one, but they basically show that it's it's quite easy to do normalizing flows in a continuous fashion. And the top of normalizing flows it's in itself pretty cool. What they do at the end is they say, okay, what we can now do is we can actually take sequential data. So now we've just talked about let's input one data point, get out, let's say a label or something, which we can actually do sequential data. And let's for example, have an RNN encoder for our sequential data. So here, here, here, these are data points, right? These are measurements like a blood pressure of a person. And what we can do is we can do a variational auto encoder. We've talked about this. We can have an RNN encoder parameterize the distribution and then as a decoder have this ODE neural network. And basically what that allows us to do is that allows us to deal with time steps that are not regularly sampled. And so we can extrapolate from the data point at time, yeah, times not regular samplings like or with RNNs, you basically force to have always the same time step difference. Otherwise, you have a very tough time. But with this, since these are continuous flows, you can basically unroll them and evaluate them at whatever time you want. So they have pretty cool experiments here where they kind of try to learn these kind of spiraling behaviors. And you see on top the RNN decoder will get all jaggy and so on, where as the so basically as the neural ordinary differential equation will generate quite let's say smooth things. And also it can extrapolate as you can see here, it can go the red thing is the extrapolation. Only there's only data where the green dots are. So that's pretty cool. You can see the RNN sometimes isn't able to kind of continue the flow as you can see in here. It extrapolates wrongly. So this kind of, I mean, it's a toy, it's a toy example, but these kind of dynamics are pretty cool. And they also show here when they learn the spirals and vary one dimension of the latent code that is given by the encoder. Then the flow goes from clockwise. It goes from two counter clockwise as you see here. I've drawn this in wrong, but so it's pretty, pretty cool what these things learn. And since it's only small data right now, small models, but I'm pretty sure this is going to develop further and be a cool, just a cool way, cool alley of research, cool idea and looking forward to what they come up next. Alright, so that was it for today a bit shorter, but I hope this was somewhat clear enough. Alright, have a great day. | [{"start": 0.0, "end": 6.0, "text": " Hello and welcome. Today we're going to look at neural ordinary differential equations"}, {"start": 6.0, "end": 14.36, "text": " by Ricky Chen, Yulia Rubanova, Jesse Betencourt and David Devano. So this has been quite an"}, {"start": 14.36, "end": 19.84, "text": " interesting kind of paper to see because it's a bit special. 
We're going to go over parts"}, {"start": 19.84, "end": 25.36, "text": " of it, not the full paper, just kind of the important parts because the paper is quite"}, {"start": 25.36, "end": 34.4, "text": " packed and we'd rather explain it in parts and kind of get the gist of it. So basically what"}, {"start": 34.4, "end": 40.8, "text": " they do is they say, we introduce a new family of deep neural network models instead of"}, {"start": 40.8, "end": 45.84, "text": " specifying a discrete sequence of hidden layers. We parameterize the derivative of the hidden"}, {"start": 45.84, "end": 51.8, "text": " state using a neural network. The output of the network is computed using a black box differential"}, {"start": 51.8, "end": 56.599999999999994, "text": " equation solver. These continuous depth models have constant memory cost, adapt their"}, {"start": 56.599999999999994, "end": 62.599999999999994, "text": " evaluation strategy to each input, can explicitly trade numerical precision for speed. I mean,"}, {"start": 62.599999999999994, "end": 69.32, "text": " it sounds awesome, honestly. It sounds really cool and it sounds really new and yeah, all right,"}, {"start": 69.32, "end": 77.0, "text": " let's jump in. So what they say is let's look at kind of classic neural networks, especially"}, {"start": 77.0, "end": 82.8, "text": " residual neural networks. So what residual neural networks do is in each hidden layer, they"}, {"start": 82.8, "end": 89.48, "text": " kind of have a representation, HT. This is kind of their hidden representation at layer"}, {"start": 89.48, "end": 98.84, "text": " T. And what they do is they then add something. So if you don't know a recurrent neural network"}, {"start": 98.84, "end": 106.28, "text": " is where you have, let's say this is your hidden state, HT. And in a classic neural network,"}, {"start": 106.28, "end": 112.6, "text": " you would just have a weight matrix here blah, blah, blah, blah. You do a matrix multiplication"}, {"start": 112.6, "end": 119.32000000000001, "text": " to get HT plus one. So to get the next kind of the next hidden state, you do a matrix"}, {"start": 119.32000000000001, "end": 126.96000000000001, "text": " multiplication by a big weight matrix here, W, all right. In a residual neural network,"}, {"start": 126.96, "end": 137.92, "text": " what you do is you have this weight matrix W, you multiply it to get delta HT plus one."}, {"start": 137.92, "end": 144.6, "text": " And you take HT and you add the two, right. You add HT and delta HT plus one to arrive"}, {"start": 144.6, "end": 150.44, "text": " at HT plus one. So that's a residual network. It basically doesn't learn the transformation"}, {"start": 150.44, "end": 156.48, "text": " to the next layer, but it learns what, how is the next representation different from this"}, {"start": 156.48, "end": 161.2, "text": " representation, right. So what do I need to add to this representation to get to the next"}, {"start": 161.2, "end": 167.32, "text": " representation, which is kind of its reason that we're deep networks since each layer only"}, {"start": 167.32, "end": 174.04, "text": " does a little bit of transformation. We should basically bias it towards keeping the representation"}, {"start": 174.04, "end": 179.88, "text": " the same and just kind of changing it a little bit. So this is the inherent biases, the identity"}, {"start": 179.88, "end": 189.76, "text": " transform. Right. So that's a residual residual network. 
This here is characterized by F"}, {"start": 189.76, "end": 199.35999999999999, "text": " of kind of theta and an HT. So this is kind of the, this is what we called delta H. It's"}, {"start": 199.35999999999999, "end": 207.44, "text": " now called F. So this F would be the kind of neural network layer and the theta would"}, {"start": 207.44, "end": 215.56, "text": " be the parameters of it. So the weight matrix in our case, they say, okay, what if you do"}, {"start": 215.56, "end": 223.12, "text": " many of those, right. So they say basically what this is is kind of a time process, right."}, {"start": 223.12, "end": 227.12, "text": " It's kind of a state in the next state in the next state. And you always learn how to"}, {"start": 227.12, "end": 233.88, "text": " go to the next state to the next state and so on. What if you go very deep and what if"}, {"start": 233.88, "end": 240.04, "text": " you look at this as a time process and kind of make these steps very small, right. Make"}, {"start": 240.04, "end": 248.44, "text": " this super small. And basically what if you have many, many infinitely many layers, right."}, {"start": 248.44, "end": 257.8, "text": " And I say, well, okay, this then becomes a dynamic process, basically an ordinary differential"}, {"start": 257.8, "end": 267.36, "text": " equation where I say, okay, my time is now continuous. And I look at it as a linearization,"}, {"start": 267.36, "end": 279.08000000000004, "text": " as a local linearization, basically. And I say, okay, I basically specify how to get from"}, {"start": 279.08000000000004, "end": 285.16, "text": " this time to the next instance of time, the next instant, the next infinitesimally small"}, {"start": 285.16, "end": 295.04, "text": " instance of time by specifying this f. And in the continuous case, this is to say that"}, {"start": 295.04, "end": 305.0, "text": " the derivative of the hidden state is now parameterized by an neural network, right. So if"}, {"start": 305.0, "end": 309.28000000000003, "text": " you know what a differential equation is, it has like, it has like, look, it's looking"}, {"start": 309.28, "end": 315.84, "text": " at it here as like a start state. And then what do you do is you specify how in each,"}, {"start": 315.84, "end": 321.23999999999995, "text": " at each point in time. So that's T at each point in time, how does the gradient look? So"}, {"start": 321.23999999999995, "end": 327.91999999999996, "text": " maybe the gradient looks like this. And then what the node E solver will do is the ODE"}, {"start": 327.91999999999996, "end": 334.03999999999996, "text": " solver will say, okay, the gradient is going to do an infinite small step in this direction."}, {"start": 334.04, "end": 341.96000000000004, "text": " And then it goes back to f, hey, f, what's the gradient at this infinitely small step next"}, {"start": 341.96000000000004, "end": 348.04, "text": " in time. And then f would say, well, the gradient is like this. And then the ODE solver"}, {"start": 348.04, "end": 353.08000000000004, "text": " will go like, okay, I need to be a little bit flatter. So I go here. So what's the gradient"}, {"start": 353.08000000000004, "end": 359.72, "text": " at this time? Okay, maybe it's up this. I need to go up here. So the ODE solver will kind"}, {"start": 359.72, "end": 369.68, "text": " of construct the curve. And at each point, it needs to look that whatever f says is the"}, {"start": 369.68, "end": 375.32000000000005, "text": " gradient is actually the gradient, right? 
This is the gradient. This is the gradient. This"}, {"start": 375.32000000000005, "end": 383.72, "text": " is the gradient. So that's that's kind of how an ODE works. And let's they say, okay,"}, {"start": 383.72, "end": 391.16, "text": " you can actually look at residual networks here as being a discrete time analog to such"}, {"start": 391.16, "end": 396.96000000000004, "text": " an ODE. So what we want to do is actually we want to specify, we want to actually, and"}, {"start": 396.96000000000004, "end": 403.48, "text": " this is the crazy part, right? Or the cool part is we want to do this for neural networks"}, {"start": 403.48, "end": 414.32, "text": " basically, we simply specify an ODE and the start state here, the start state is let's"}, {"start": 414.32, "end": 421.24, "text": " say, if you want to build an MNIST class, it's a image, right? The start state is our MNIST"}, {"start": 421.24, "end": 431.48, "text": " image. And we're simply training a neural network such that the ODE equation, if you solve"}, {"start": 431.48, "end": 437.84000000000003, "text": " it, the curve at the end will arrive at the correct class. I mean, that's I'm skipping"}, {"start": 437.84000000000003, "end": 442.92, "text": " a few parts here about dimensionalities and so on, right? Because you need to keep"}, {"start": 442.92, "end": 449.20000000000005, "text": " and same dimension. But in essence, they say here, we start out with our input and we"}, {"start": 449.20000000000005, "end": 455.38, "text": " trained neural network to give us the correct gradients, the correct derivatives of this"}, {"start": 455.38, "end": 461.12, "text": " curve at each point in time, such that when you solve the ODE, at the end point, you are"}, {"start": 461.12, "end": 468.32, "text": " going to be at the correct label. So that's, this is the input to your task basically,"}, {"start": 468.32, "end": 473.84000000000003, "text": " and this is the output, right? But instead of having a neural network go from input to"}, {"start": 473.84000000000003, "end": 480.92, "text": " output, you have a neural network that parameterizes how you go from each step in time to the"}, {"start": 480.92, "end": 490.4, "text": " next one. What's the gradient at each point in time? That's the kind of gist of it and"}, {"start": 490.4, "end": 495.76, "text": " that's that's kind of really cool. It's a really new approach. All right, so they give"}, {"start": 495.76, "end": 504.88, "text": " various advantages of this. And so here is this demonstrated again, right? You are here."}, {"start": 504.88, "end": 511.88, "text": " This is your input and you want to go to the output and then the loss of the loss that"}, {"start": 511.88, "end": 518.16, "text": " you specify, it can depend on kind of either on the output as in like an image classifier"}, {"start": 518.16, "end": 525.6, "text": " or it can depend on intermediate states. This is, it's kept general, right? So the way"}, {"start": 525.6, "end": 530.3199999999999, "text": " they go out of this, they say, well, okay, but so the neural network now specifies how"}, {"start": 530.3199999999999, "end": 535.16, "text": " to get from one step to the next, right? Here. And the neural network has parameters,"}, {"start": 535.16, "end": 542.4399999999999, "text": " right? So we need to train this neural network such that the correct output is given to"}, {"start": 542.4399999999999, "end": 547.12, "text": " some input, right? We actually need to train it. 
So we need to, we need to somehow,"}, {"start": 547.12, "end": 551.5600000000001, "text": " way to train these parameters theta and they say, okay, we do gradient descent on theta"}, {"start": 551.5600000000001, "end": 558.5600000000001, "text": " like in a classic neural network. But now we need, it's not, it's not so easy, right?"}, {"start": 558.5600000000001, "end": 563.84, "text": " It's not one pass through this function. It's like infinitely many passes through this"}, {"start": 563.84, "end": 575.5600000000001, "text": " function until you arrive here. And then if you basically need to somehow get a gradient"}, {"start": 575.56, "end": 580.4799999999999, "text": " with respect to these parameters here, they say this again, the loss of the, this is"}, {"start": 580.4799999999999, "end": 589.0, "text": " the loss of the end state, right? Is the loss of the start state plus the integral over"}, {"start": 589.0, "end": 597.28, "text": " time of this derivative, which is basically this curve. And the curve is given by an ODE"}, {"start": 597.28, "end": 603.5999999999999, "text": " solver, we're input all these things. So we need gradients with respect to that. How do"}, {"start": 603.6, "end": 609.32, "text": " we do that? And they give away here of saying, okay, we could either kind of back propagate"}, {"start": 609.32, "end": 614.76, "text": " through the ODE solver, but that would depend on the ODE solver and so on. But there's"}, {"start": 614.76, "end": 622.32, "text": " another method. There's called what's called the, we need the, what's called the adjoint."}, {"start": 622.32, "end": 628.88, "text": " So this is reverse mode differentiation of an ODE solution. Adjoint sensitivity method"}, {"start": 628.88, "end": 633.5600000000001, "text": " solves an augmented ODE backwards in time. So basically what you need to do is, you"}, {"start": 633.56, "end": 639.64, "text": " forward propagate, you come here, right? And then what you can do is you can solve the"}, {"start": 639.64, "end": 645.3599999999999, "text": " second ODE. So you can generate a second curve here, this one. And don't worry about these"}, {"start": 645.3599999999999, "end": 651.3599999999999, "text": " little jumps here. You can solve the second curve and the second curve together with the"}, {"start": 651.3599999999999, "end": 658.68, "text": " first and the second curve. You can then compute the gradients you need, right? So the second"}, {"start": 658.68, "end": 665.8399999999999, "text": " curve is basically simply something like the application of the chain rule to the continuous"}, {"start": 665.8399999999999, "end": 673.9599999999999, "text": " domain. And you need to, you need to adjust these jumps here only when your loss depends"}, {"start": 673.9599999999999, "end": 681.9599999999999, "text": " on intermediate states. This is, this is kind of the offset caused by including or not"}, {"start": 681.96, "end": 689.44, "text": " including the loss. So let's dive a bit further into this adjoint state. What's the red curve?"}, {"start": 689.44, "end": 696.8000000000001, "text": " The red curve is called A. And what's A? A is a curve and this is the differential"}, {"start": 696.8000000000001, "end": 704.52, "text": " equation for it. Again, we specify the curve A by specifying its starch state and its"}, {"start": 704.52, "end": 709.84, "text": " derivative. 
And from its starch state and its derivative at each time the ODE solver"}, {"start": 709.84, "end": 724.72, "text": " is able to construct the curve entirely. So A t, it says here, is del L to del Zt. This"}, {"start": 724.72, "end": 734.52, "text": " means how does the loss depend on this Zt on the hidden state, right? How does the loss"}, {"start": 734.52, "end": 740.04, "text": " depend on the hidden state at time t? So it doesn't even have to be any of these points"}, {"start": 740.04, "end": 746.3199999999999, "text": " here. How does the loss depend on this hidden state here? And in order to find that out"}, {"start": 746.3199999999999, "end": 751.92, "text": " you would need to go, you would need to develop the curve until here, right? And then calculate"}, {"start": 751.92, "end": 758.8, "text": " the loss and then back propagate through here. But you can do this by calculating this adjoint"}, {"start": 758.8, "end": 767.68, "text": " thing. So as you can see here is a demonstration. It's an example, right? So the starch state"}, {"start": 767.68, "end": 775.64, "text": " here is simply given by the loss. How does the loss of this state, how does the loss depend"}, {"start": 775.64, "end": 782.24, "text": " on this state? Well, simply by plugging it into the loss equation, right? So your loss"}, {"start": 782.24, "end": 788.88, "text": " is maybe a cross entropy loss or something. How does the loss depend on this state here?"}, {"start": 788.88, "end": 797.28, "text": " Well, we go, we go from this state that we already know. And we know how in reverse"}, {"start": 797.28, "end": 805.36, "text": " time, so backwards and time, this sensitivity of the loss develops. So we go, we develop"}, {"start": 805.36, "end": 814.84, "text": " this curve until here. And we say, aha, this point influences this loss in this much"}, {"start": 814.84, "end": 824.88, "text": " basically, right? So, so, and if the loss explicitly depends on this point, then we have"}, {"start": 824.88, "end": 830.84, "text": " to we have to calculate in this offset since this point here only depends on this time"}, {"start": 830.84, "end": 837.6, "text": " up till here. And then it changes. So there's there's a discontinuation. But yeah, don't"}, {"start": 837.6, "end": 846.44, "text": " worry about that too much. Basically, what we can do is we can calculate the curve in"}, {"start": 846.44, "end": 856.64, "text": " a forward pass curve and the loss in the forward pass. Then we can do a second pass backward"}, {"start": 856.64, "end": 866.84, "text": " again by an ODE solve to say, how does the how does the loss depend on each one of the"}, {"start": 866.84, "end": 873.1999999999999, "text": " states here of the hidden states? Right? So that's the second point. But that's not all"}, {"start": 873.1999999999999, "end": 879.24, "text": " because we're ultimately not interested in the how the loss depends on the state. We're"}, {"start": 879.24, "end": 884.6, "text": " the we're interested in how the loss depends on these parameters that tell us how to get"}, {"start": 884.6, "end": 893.88, "text": " from one hidden state to the next. But luckily, we can then simply evaluate this integral"}, {"start": 893.88, "end": 902.2, "text": " that depends as you can see here on a and on z. We can evaluate this and get the gradients"}, {"start": 902.2, "end": 911.08, "text": " for the parameters. Right? So, I also have to say the parameters are static. 
So the parameters"}, {"start": 911.08, "end": 917.4000000000001, "text": " are given over the entire duration of this. They're the same. And it's simply what changes"}, {"start": 917.4000000000001, "end": 925.96, "text": " is time. All right. So this is how you can get this is how you can get gradients with respect"}, {"start": 925.96, "end": 930.4000000000001, "text": " to parameters. And the cool thing is now you can train these. You can actually train this"}, {"start": 930.4000000000001, "end": 936.1600000000001, "text": " neural network here that tells you how to go from one state to the next such that if you"}, {"start": 936.16, "end": 944.8, "text": " input the digit to as an image, you can output to not exactly, but that's that's the point."}, {"start": 944.8, "end": 952.7199999999999, "text": " You can buy by going through this motion by going through this od solve. So that's I mean,"}, {"start": 952.7199999999999, "end": 958.04, "text": " that's a man's the cool. They actually define how to do this here in one forward one kind"}, {"start": 958.04, "end": 964.28, "text": " of backward pass. You can solve everything at the same time. It's it's pretty cool. And"}, {"start": 964.28, "end": 973.0799999999999, "text": " they evaluate their their net and they compare it with a bunch of other nets. And they interestingly"}, {"start": 973.0799999999999, "end": 981.92, "text": " show that so basically with an od solver, you can never kind of tell how many evaluations"}, {"start": 981.92, "end": 988.72, "text": " it's is going to do because it's going to get increasingly like it's increasingly accurate"}, {"start": 988.72, "end": 995.0, "text": " over time. So you let it run and maybe it's going to first generate a curve that's like"}, {"start": 995.0, "end": 1002.36, "text": " something like this, right? And then it needs to say crap, okay. I need to go back and"}, {"start": 1002.36, "end": 1008.0400000000001, "text": " refine. And then it maybe goes to curve like this. And so on. So it gets continually closer"}, {"start": 1008.0400000000001, "end": 1013.08, "text": " over time. And for that, it needs to kind of query. It's like a query. It needs to query"}, {"start": 1013.08, "end": 1017.96, "text": " this this. So you need to give it the function as an valuable function. And it goes and just"}, {"start": 1017.96, "end": 1021.72, "text": " like, okay, I need to I need to know it here. Okay, I got it from here. Okay, I need to"}, {"start": 1021.72, "end": 1025.4, "text": " know it here. Okay, I got it from all. No, I didn't get it. Okay, I need also need to"}, {"start": 1025.4, "end": 1032.48, "text": " know it here. All right. And so you can never know how much they will evaluate, but you"}, {"start": 1032.48, "end": 1036.76, "text": " basically have a parameter to trade off accuracy and how much they evaluate. That's what they"}, {"start": 1036.76, "end": 1044.04, "text": " show here. So the the less error they want in their forward pass, the more forward passes"}, {"start": 1044.04, "end": 1049.96, "text": " they have to do. That's this curve. The more forward passes, they do the more time they"}, {"start": 1049.96, "end": 1056.0, "text": " have to invest. All right, that's this curve. Interestingly, the more forward passes, the"}, {"start": 1056.0, "end": 1063.84, "text": " time required for forward passes or the evaluations required for forward passes increases also"}, {"start": 1063.84, "end": 1068.44, "text": " the evaluation required for backward passes, but not by much. 
So the the back rep has to"}, {"start": 1068.44, "end": 1073.04, "text": " require about half the amount of evaluations. That's the forward passes, which is encouraging"}, {"start": 1073.04, "end": 1082.6, "text": " since the the backward passes don't go kind of overboard. Like if you had to back propagate"}, {"start": 1082.6, "end": 1089.96, "text": " through the operations of the ODE solver itself. And they also show as your training epoch"}, {"start": 1089.96, "end": 1098.1599999999999, "text": " continues, the the ODE solver requests more and more evaluations for so for the same epoch"}, {"start": 1098.1599999999999, "end": 1103.0, "text": " basically, or the same samples within different epochs, which means as it gets more accurate"}, {"start": 1103.0, "end": 1110.96, "text": " kind of needs to know more and more and more about the the samples basically about the test"}, {"start": 1110.96, "end": 1120.12, "text": " at the training samples, which is all basically showing that this kind of works. Yeah, so they"}, {"start": 1120.12, "end": 1125.52, "text": " they kind of get into normalizing flows, which I don't want to get into here much because"}, {"start": 1125.52, "end": 1130.0, "text": " we haven't done a video on that yet. We'll do one, but they basically show that it's it's"}, {"start": 1130.0, "end": 1139.24, "text": " quite easy to do normalizing flows in a continuous fashion. And the top of normalizing flows"}, {"start": 1139.24, "end": 1144.36, "text": " it's in itself pretty cool. What they do at the end is they say, okay, what we can now"}, {"start": 1144.36, "end": 1149.48, "text": " do is we can actually take sequential data. So now we've just talked about let's input"}, {"start": 1149.48, "end": 1154.28, "text": " one data point, get out, let's say a label or something, which we can actually do sequential"}, {"start": 1154.28, "end": 1164.96, "text": " data. And let's for example, have an RNN encoder for our sequential data. So here, here,"}, {"start": 1164.96, "end": 1170.0, "text": " here, these are data points, right? These are measurements like a blood pressure of a person."}, {"start": 1170.0, "end": 1174.44, "text": " And what we can do is we can do a variational auto encoder. We've talked about this. We"}, {"start": 1174.44, "end": 1182.0, "text": " can have an RNN encoder parameterize the distribution and then as a decoder have this ODE"}, {"start": 1182.0, "end": 1188.94, "text": " neural network. And basically what that allows us to do is that allows us to deal with time"}, {"start": 1188.94, "end": 1197.28, "text": " steps that are not regularly sampled. And so we can extrapolate from the data point at"}, {"start": 1197.28, "end": 1205.72, "text": " time, yeah, times not regular samplings like or with RNNs, you basically force to have"}, {"start": 1205.72, "end": 1213.32, "text": " always the same time step difference. Otherwise, you have a very tough time. But with this,"}, {"start": 1213.32, "end": 1218.6000000000001, "text": " since these are continuous flows, you can basically unroll them and evaluate them at"}, {"start": 1218.6000000000001, "end": 1223.6000000000001, "text": " whatever time you want. So they have pretty cool experiments here where they kind of try"}, {"start": 1223.6000000000001, "end": 1233.6000000000001, "text": " to learn these kind of spiraling behaviors. 
And you see on top the RNN decoder will get"}, {"start": 1233.6, "end": 1246.52, "text": " all jaggy and so on, where as the so basically as the neural ordinary differential equation"}, {"start": 1246.52, "end": 1254.24, "text": " will generate quite let's say smooth things. And also it can extrapolate as you can see here,"}, {"start": 1254.24, "end": 1263.4399999999998, "text": " it can go the red thing is the extrapolation. Only there's only data where the green"}, {"start": 1263.44, "end": 1271.0, "text": " dots are. So that's pretty cool. You can see the RNN sometimes isn't able to kind of"}, {"start": 1271.0, "end": 1281.0, "text": " continue the flow as you can see in here. It extrapolates wrongly. So this kind of,"}, {"start": 1281.0, "end": 1284.72, "text": " I mean, it's a toy, it's a toy example, but these kind of dynamics are pretty cool. And"}, {"start": 1284.72, "end": 1292.24, "text": " they also show here when they learn the spirals and vary one dimension of the latent code"}, {"start": 1292.24, "end": 1302.92, "text": " that is given by the encoder. Then the flow goes from clockwise. It goes from two counter"}, {"start": 1302.92, "end": 1309.56, "text": " clockwise as you see here. I've drawn this in wrong, but so it's pretty, pretty cool"}, {"start": 1309.56, "end": 1315.48, "text": " what these things learn. And since it's only small data right now, small models, but"}, {"start": 1315.48, "end": 1323.16, "text": " I'm pretty sure this is going to develop further and be a cool, just a cool way, cool"}, {"start": 1323.16, "end": 1328.2, "text": " alley of research, cool idea and looking forward to what they come up next. Alright, so"}, {"start": 1328.2, "end": 1337.2, "text": " that was it for today a bit shorter, but I hope this was somewhat clear enough. Alright,"}, {"start": 1337.2, "end": 1347.2, "text": " have a great day."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=u1_qMdb0kYU | GPT-2: Language Models are Unsupervised Multitask Learners | A look at OpenAI's new GPT-2 model and the surrounding controversy.
https://blog.openai.com/better-language-models/
Abstract:
Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Authors:
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever | Hi, today we're looking at language models are unsupervised multitask learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. This paper has generated a bit of hype in the last few days so I wanted to go over it basically take a look and take a look at the surrounding let's say controversy. So let's actually have a look at the blog post that OpenAI released along with this paper. They say we've trained a large-scale unsupervised language model which generates coherent paragraphs of text. It achieves state of the art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarization all without task specific training. So this sounds quite suspicious at the beginning but we're actually going to have to look at how they do this. It sounds really good being able to do rudimentary translation without any training on translation itself just learning a language model but this has been a continuing trend in recent kind of time where we see that the better your language model gets the better basically your model gets on all these these kind of language tasks. They go into this and we'll see how they do it so basically what they've done is they've taken a kind of a bigger data set of language model of language model data set which is about 40 gigabytes of internet text they say this is here on the top. So it's one of the largest kind of text data sets there is unsupervised and they've also taken one of the largest language models so they have their largest transformer based model has 1.5 billion parameters. So they take huge amount of data, huge model they train this on they train the model on the data and what comes out is like a giant super language model that is able to perform all these cool tasks. So here they have it like a bit of a sample. So what they can do is they can basically so the way a language model works is you always try to predict the next word based on what you've already seen. So you kind of query it by giving it some starting words and it needs to continue the text from there. So here system prompt on top you see in a shocking finding scientists discovered a herd of unicorns living in a remote previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that unicorns spoke perfect English and then the model continues. The scientists named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. And after almost two centuries the mystery of what sparked this odd phenomenon is finally solved. I mean you can even read this it's really really coherent text and it's quite surprising and I think it is slightly cherry picked but still the fact that a model is able to do this is unprecedented. Especially like since it's like a general language model not specific to the task of continuing news articles about unicorns or anything. So yeah they go into these findings we'll see them in the paper and they they also say that yeah they can they can now do these all these kind of tasks like question answering reading comprehension in the zero-shot fashion. Right so at the end here they say what it's capable of. So let's say AI writing assistants, more capable dialogue agents, unsupervised translation, blah blah blah.
They also say a few kind of let's say bad implications. Generating misleading news articles, impersonate others online, automate the production of abusive or fake content to post on social media, automate the production of spam or phishing content. They liken it to a system called deep fakes which generates really well-crafted let's say videos of people. So they frame it in a way as this could be used for dangerous things and they say they aren't releasing they're only releasing the small version of GPT-2 along with the code. They're not releasing the data set training code or the GPT-2 this is the big model the model weights. Right and they do this they cite safety concerns. So I mean the community basically is is going nuts over over this the statement this decision not to release basically the code or the model or the data set to the to the world and so if you search on Twitter for hashtag GPT-2 then everyone basically has an opinion of whether or not this is a good thing or not apart from people testing it out. So they've given access to like a selected set of journalists to an API where they can query the model. So there are some samples flying around but basically people people are just debating whether or not this is a good thing or not and I mean it's just hilarious to to go along with this and to just read all the all the people having opinions I mean I've given my opinion as well just chime in it's a fun it's fun ride especially like this post here on Reddit machine learning says should I release my MNIST model or keep it closed source, fearing malicious use. Today I trained a 23,000 layer ResNet, got a 99.6% accuracy on MNIST, I'd love to share the model but I fear it being used maliciously, what if it is used to read documents by the Russians, what are your thoughts. I mean yeah this is an essence I mean in essence it's it's that right it's like yeah come on.
I mean so I can just give my opinion up front it's it's I think a lot of a lot of things came together here and I think that this being open AI being kind of this initiative and he's never been there before they're not an academic institution institution they're not a company but still they you know their researchers they want to have a career so there's lots of pressures on them there's pressure to further publish so I think that that's kind of an underlying motivation to not release your model and your code and your data set is you actually you know there's a lot of work in this and you actually might want to get more than one paper out of it so keeping these things yourself is basically a surefire guarantee you're gonna get another two three four five papers out of this data or model or whatever it's also a good way to kind of generate press if you basically say oh we're not releasing but we have this really good model and there's there's one thing on Twitter right I mean you can't probably can't find it but it says like step one my model is state of the art step two my model is state of the art but generalize is better to other tasks step three my model does the same thing but with fewer parameters and step four my model is so good I can't even talk about it yeah so so basically I think a lot of things came together this this press generating the pressure to to create more kind of papers out of it and generally security concerns I think being open AI and the open AI kind of was established as a way to let's say the demo like their statutes pretty clearly say we want to open AI and research it in ethical use and you have backers like Elon Musk that talk all the time about kind of safety related issues in AI I think there's a lot of pressure on these people to with everything they do basically have an ethical component all right so so every everything they do is kind of scrutiny to this does this have ethical implications and where can we basically stand out from from the rest of the community by doing something well it doesn't need to be more ethical it just needs to be different with an ethical reason behind reasoning behind it and this I think this is it I think there's a lot of things coming together I don't I don't think anyone like maliciously thought oh this you know we're gonna do this it's gonna generate so much press or I don't think anyone actively thought ah well just keep it secret we're gonna get so much more papers out of it I honestly think that the reasoning was there's a lot of you know a lot of pressure to do this ethical things when there's there's not if you think about it it's yeah it's a good language model and it can generate text but you can also hire you know people to generate text to generate fake news to do phishing spam it's just a bit more efficient now right and yeah so it's it's unprecedented that you don't you don't release this research kind of cold war style so it's it's not really dangerous to release this and it's just delaying the inevitable basically but I think the the pressure the pressure was on and they made a decision that they thought was in line with their values and I think the this just neatly aligns with the underlying the other benefits to them that yeah all right so let's dive into the paper the paper is actually not you know too too much content there or they basically say so far is that a lot of a lot of these kind of papers on these tasks they they say the dominant approached creating ML systems is to collect the data set of training 
examples demonstrating correct behavior, training a system to imitate it, and testing its performance on IID samples so they basically say the there's kind of the single task training on single domain data sets is a major contributor to the lack of generalization observed in current systems so they basically say these language systems they don't generalize like a QA system might be trained on a QA task but it you know has nothing to to do with the task is basically a little bit different and even in multitask learning they say multitask learning is a promising framework but also it's kind of say it's still nascent and there's only very few different tasks right to do so they basically say you need basically a big big unsupervised model and it will implicitly learn all of the kind of special tasks yeah so they say there there are approaches that basically basically learn these language models but then still require supervised training so basically fine tuning this has been the this is for example the Bert paper we discussed in the in the last video or two two videos ago that learns a giant language model but then does fine tuning for each of these tasks and does really well what they want to do here is basically simply learn a language model and then investigate whether or not the language model can perform downstream tasks in a zero shot setting that means without any parameter or architecture modifications or no fine tuning all right so what they do so basically what a language model is if for those who don't know it's if you have a sequence of text let's say a b c d these are words let's say what's actually words the cat sat on the mat and so on and you and you you a language model is you can remove the end of the sentence at some point and ask the model what comes next right that's a language model I mean there's different kinds of language models specific language ones but that's the basic the basic thing so you just ask the model what's next so you can you can do a lot of unsupervised training because you don't need a labeled dataset for this you simply need a text corpus and that's basically all they do they use transformers which we've also discussed in attention is all you need paper so if you if you don't know what transformers are go back and look at that yeah all right so basically they say a lot of these special tasks like translation and question answering can be framed in a language model way for example if you simply input if you this is your text translate to French and then you English text and then the French text right and then at at test time basically you leave away the French text you simply ask the language model what comes next right if the input is translate to French and then English text this is translation framed as a language model task because you can specify the task that the language model has to do also as language so this is quite this is quite an interesting approach and one they exploit here and they basically say well since in a large and diverse corpus of web pages that they collect here there is going to be some websites that basically all do translation from English to French and the model can learn from that so here in this paragraph they basically list examples of naturally occurring demonstrations of English to French and French to English translation found throughout the training dataset so basically this is this is how the model could learn let's just look at one I hate the word perfume versus it's somewhat better in French right so that
there's a way in just an unsupervised setting where the language model could learn right if if you just cross out this word parfum at the end and you just ask a model what comes next right the model sees I hate the word perfume versus it's somewhat better in French here colon then the model has to put something there and the most logical continuation is to put the French word for perfume right so that that's kind of how they frame translation and these other tasks in a language model way all right so they talk about the training dataset which is a major component here they say they make a new training dataset because all of the current ones aren't sufficient they say a promising source of diverse and nearly unlimited text is web scrapes such as Common Crawl while these archives are many orders of magnitude larger than current language modeling datasets they have significant data quality issues so to say content are mostly unintelligible and so on so they basically describe here how they scrape a new web scrape which emphasizes document quality they go on Reddit basically and scrape all outbound links from Reddit that have received at least three karma which means that it yeah three upvotes for a post of a link which basically means that three humans agreed that this was a good link so so that's how they collected the dataset. The resulting dataset, WebText, contains the text subset of the 45 million links they then kind of clean this and scrape it down and remove some stuff and they they end up with a large corpus right and then they talk about how they represent the input which is byte pair encoding style it's not exactly byte pair encoding it's a byte pair encoding inspired encoding we want to make a video about this by itself because it's really interesting but basically it's you can think of it as tokenization and pre-processing right then they say they show their models so architecture hyper parameters basically these are these are their models this is the smallest one this second smallest one they say it's the same size as Bert so the the language model by Google that we've looked at and then the largest one 1.5 billion parameters I mean that's huge and yeah they say it's 10 times larger than the previous so the first one is their previous model and this now is this this is the GPT2 model that that gets all these these nice results so they do experiments first they do experiments on language modeling itself right so they train on their on their corpus and then they evaluate on a bunch of other language modeling corpus so these up here are language modeling corpuses and the state of the art is in this row and then you just look at basically the bottom row compared to their largest model this this is perplexity where it says ppl and the I think this here is is accuracy so perplexity lower is better which you can you can see here the previous state of the art was 39 on WikiText-2 they get to 18 with accuracy obviously higher is better so the the kind of previous accuracy in LAMBADA was 59 they get to 63 basically they improve everything except for this 1 billion word corpus and they also explain why they say this is the most heavily pre-processed text and so on so that basically they basically are really good at language modeling even though they train on a different data set that's the point right the point is they train on their own corpus and then they go and just evaluate on the test set of these of these new of these new tasks and they become better basically than the models that trained
on the training data set of that particular task all right so they do a number of further experiments where they basically show that the model has now learned kind of implicitly learned a number of different tasks so let's look at for example summarization I just want to show an example of how you can do this so summarization summarization task is you get a long text you need to produce a short text and that short text is then compared to short texts that humans wrote when the task was to summarize the long text and you get points on how much your text overlaps with these human texts so they they say we test GPT2's ability to perform summarization on the CNN and Daily Mail data set to induce summarization here's what I found interesting we add the text TLDR after the article and generate a hundred tokens right then they say they need to reduce repetition and so on but basically this this this is right this is the way you can frame summarization by text input so I find this just kind of a really nice way to think about these problems the fact that instructions of the task can be given as text this is a very nice example here so basically you put you as input you put the entire article right and so you here is the CNN article blah blah blah super long right and then here you put TLDR which is for those who don't know it's too long didn't read so people use this this phrase to indicate that then they will write a short summary of whatever was before here they will either put this at the beginning or at the end of a long text right to say to people okay if you don't want to read all this just read this down here gives you the gist of it which is a great summarization so if you then take this away and ask the language model what's here right basically throughout the training corpus the language model will have encountered such pieces of text with a TLDR in it and the language model might have learned that whatever is down here is a short version of whatever is up here and thereby if you then ask the language model what comes next here right the language model might learn I need to summarize whatever is above and that's my best shot at completing at at answering the question what comes next and yeah so they get you know surprisingly good results from from this so they say on the commonly reported ROUGE 1, 2, L metrics the generated summaries only begin to approach the performance of classic neural baselines just barely outperform selecting three random sentences from the article but still it it while qualitatively the generations resemble summaries they often focus on recent content from the article or confuse specific details so this is kind of a task where it kind of worked but not really but I just find it it's really interesting that it kind of how they frame the task and how this can still so it still kind of works and that's the the gist here in all of these tasks is also with like translation they obviously don't get near the performance of a system specifically trained to do this task but they all also always say it kind of works right it's sort of sort of it learns something and their entire point of this paper is to say well look yeah the the diversity of tasks the model is able to perform and I would say kind of perform in a zero-shot setting suggests that high capacity models trained to maximize the likelihood of a sufficiently varied text corpus begin to learn how to perform a surprising amount of tasks without the need for explicit supervision so yeah their entire point is if we train on
such very data that kind of that spans the entire range of human language expression the kind of tasks we want these systems to do will be learned implicitly so basically it points to let's get an even bigger corpus let's get even bigger models and we might get even better unsupervised zero-shot way in these kind of special tasks and general language understanding all right so that that was basically it I've jumped over a lot of points but I encourage you to look into this to look into the specific experiments they're really interesting we have they framed things and give just just shout your opinion around about whether or not the publishing is a good thing or not it's really funny I love it and without having a good day | [{"start": 0.0, "end": 6.22, "text": " Hi, today we're looking at language models are unsupervised multitask learners by"}, {"start": 6.22, "end": 12.56, "text": " Alec Radford, Jeffrey Wu, Reverend Child, David Luan, Daria Amadai and Ilya Satskiver"}, {"start": 12.56, "end": 19.62, "text": " from OpenAI. This paper has generated a bit of hype in the last few days so I"}, {"start": 19.62, "end": 24.36, "text": " wanted to go over it basically take a look and take a look at the surrounding"}, {"start": 24.36, "end": 30.34, "text": " let's say controversy. So let's actually have a look at the blog post that OpenAI"}, {"start": 30.34, "end": 38.36, "text": " released along with this paper. They say we've trained a large scale on"}, {"start": 38.36, "end": 41.980000000000004, "text": " supervised language model which generates coherent paragraphs of text."}, {"start": 41.980000000000004, "end": 45.82, "text": " A chief state of the art performance on many language modeling benchmarks and"}, {"start": 45.82, "end": 50.16, "text": " performs rudimentary reading comprehension, machine translation, question"}, {"start": 50.16, "end": 54.239999999999995, "text": " and sound string and summarization all without task specific training. So this"}, {"start": 54.239999999999995, "end": 58.31999999999999, "text": " sounds quite suspicious at the beginning but we're actually going to have to"}, {"start": 58.31999999999999, "end": 64.36, "text": " look at how they do this. It sounds really good being able to do rudimentary"}, {"start": 64.36, "end": 69.72, "text": " translation without any training on translation itself just learning a"}, {"start": 69.72, "end": 75.8, "text": " language model but this has been continuing of trend in recent kind of time"}, {"start": 75.8, "end": 80.96, "text": " where we see that the better your language model gets the better basically."}, {"start": 80.96, "end": 86.2, "text": " Your model gets on all these these kind of language tasks."}, {"start": 86.2, "end": 95.03999999999999, "text": " They go into this and we'll see how they do it so basically what they've done"}, {"start": 95.03999999999999, "end": 103.28, "text": " is they've taken a kind of a bigger data set of language model of language"}, {"start": 103.28, "end": 108.16, "text": " model data set which is about 40 gigabytes of internet text they say this is"}, {"start": 108.16, "end": 114.64, "text": " here on the top. So it's one of the largest kind of text data sets there is"}, {"start": 114.64, "end": 121.72, "text": " unsupervised and they also taken one of the largest language models so they"}, {"start": 121.72, "end": 130.12, "text": " have their largest transformer based model has 1.5 billion parameters. 
So they"}, {"start": 130.12, "end": 137.08, "text": " take huge amount of data, huge model they train this on they train the model on"}, {"start": 137.08, "end": 143.8, "text": " the data and what comes out is like a giant super language model that is able"}, {"start": 143.8, "end": 150.44, "text": " to perform all these cool tasks. So here they have it like a bit of a sample."}, {"start": 150.44, "end": 155.44, "text": " So what they can do is they can basically so the way a language model works is"}, {"start": 155.44, "end": 159.16, "text": " you always try to predict the next word based on what you've already seen. So"}, {"start": 159.16, "end": 164.92, "text": " you kind of clear it by giving it some starting words and it needs to"}, {"start": 164.92, "end": 170.2, "text": " continue the text from there. So here system prompt on top you see in a"}, {"start": 170.2, "end": 174.07999999999998, "text": " shocking finding scientists discovered a herd of unicorns living in a remote"}, {"start": 174.07999999999998, "end": 178.44, "text": " previously unexplored valley in the Andes Mountains. Even more surprising to"}, {"start": 178.44, "end": 184.24, "text": " the researcher the fact that unicorns spoke perfect English and then the model"}, {"start": 184.24, "end": 191.96, "text": " continues. The scientists named their population the population after their"}, {"start": 191.96, "end": 196.68, "text": " distinctive horn orbits unicorns. These four horns silver white unicorns were"}, {"start": 196.68, "end": 200.52, "text": " previously unknown to science. And after almost two centuries the mystery of"}, {"start": 200.52, "end": 205.0, "text": " what sparked this odd phenomenon phenomenon is solved. I mean you can even"}, {"start": 205.0, "end": 211.24, "text": " read this it's really really coherent text and it's quite surprising and I"}, {"start": 211.24, "end": 217.56, "text": " think it is slightly cherry picked but still the fact that I'm always able to"}, {"start": 217.56, "end": 225.24, "text": " do this is unprecedented. Especially like since it's like a general"}, {"start": 225.24, "end": 230.72, "text": " language model not specific to the task of continuing news articles about"}, {"start": 230.72, "end": 238.56, "text": " unicorns or anything. So yeah they're going to these findings we'll see"}, {"start": 238.56, "end": 245.32, "text": " them in the paper and they they also say that yeah they can they can now do"}, {"start": 245.32, "end": 249.32, "text": " these all these kind of tasks like question answering reading comprehension in"}, {"start": 249.32, "end": 257.72, "text": " the zero-shot fashion. Right so at the end here they say what it's capable of."}, {"start": 257.72, "end": 262.36, "text": " So let's say AI writing assistance more capable dialogue agents on"}, {"start": 262.36, "end": 267.32, "text": " super-estranslation blah blah blah. They also say a few kind of let's say bad"}, {"start": 267.32, "end": 273.08, "text": " implications. Generating misleading news articles impersonate others online"}, {"start": 273.08, "end": 276.92, "text": " automate the production of abusive or fake content to post on social media"}, {"start": 276.92, "end": 283.32, "text": " automate the production of spam or fishing content. They liken it to a system"}, {"start": 283.32, "end": 289.24, "text": " called deep fakes which generates really well-crafted let's say videos of"}, {"start": 289.24, "end": 300.48, "text": " people. 
So they frame it in a way as this could be used for dangerous things and"}, {"start": 300.48, "end": 307.6, "text": " they say they aren't releasing they're only releasing the small version of"}, {"start": 307.6, "end": 316.96000000000004, "text": " GPT-2 along with the code. They're not releasing the data set training code or"}, {"start": 316.96, "end": 324.44, "text": " the GPT-2 this is the big model the model weights. Right and they do this they"}, {"start": 324.44, "end": 331.32, "text": " cite safety concerns. So I mean the community basically is is going nuts over"}, {"start": 331.32, "end": 336.4, "text": " over this the statement this decision not to release basically the code or the"}, {"start": 336.4, "end": 344.67999999999995, "text": " model or the data set to the to the world and so if you search on Twitter for"}, {"start": 344.68, "end": 353.12, "text": " hashtag GPT-2 then everyone basically has an opinion of whether or not this is a"}, {"start": 353.12, "end": 360.76, "text": " good thing or not apart from people testing it out. So they've given access to"}, {"start": 360.76, "end": 367.92, "text": " like a selected set of journalists to an API where they can query the model."}, {"start": 367.92, "end": 377.04, "text": " So there are some samples flying around but basically people people are just"}, {"start": 377.04, "end": 381.52000000000004, "text": " debating whether or not this is a good thing or not and I mean it's just"}, {"start": 381.52000000000004, "end": 389.6, "text": " hilarious to to go along with this and to just read all the all the people"}, {"start": 389.6, "end": 396.88, "text": " having opinions I mean I've given my opinion as well just chime in it's a fun"}, {"start": 396.88, "end": 404.64, "text": " it's fun ride especially like this post here on Reddit machine learning says"}, {"start": 404.64, "end": 408.76, "text": " should I release my in this model or keep it close to or is fearing malicious"}, {"start": 408.76, "end": 417.6, "text": " use. Today I trained a 23,000 layer resident got a 99% 6% accuracy on"}, {"start": 417.6, "end": 422.32, "text": " evidence I'd love to share the model but if here it being used maliciously"}, {"start": 422.32, "end": 427.96, "text": " whatever it is used to read documents by the Russians what are your thoughts. I"}, {"start": 427.96, "end": 435.36, "text": " mean yeah this is an essence I mean in essence it's it's that right it's like"}, {"start": 435.36, "end": 443.92, "text": " yeah come on. 
I mean so I can just give my opinion up front it's it's I think a"}, {"start": 443.92, "end": 453.04, "text": " lot of a lot of things came together here and I think that this being open AI"}, {"start": 453.04, "end": 456.88, "text": " being kind of this initiative and he's never been there before they're not an"}, {"start": 456.88, "end": 462.8, "text": " academic institution institution they're not a company but still they you"}, {"start": 462.8, "end": 466.6, "text": " know their researchers they want to have a career so there's lots of"}, {"start": 466.6, "end": 472.84000000000003, "text": " pressures on them there's pressure to further publish so I think that that's"}, {"start": 472.84, "end": 477.15999999999997, "text": " kind of an underlying motivation to not release your model and your code and"}, {"start": 477.15999999999997, "end": 482.52, "text": " your data set is you actually you know there's a lot of work in this and you"}, {"start": 482.52, "end": 487.4, "text": " actually might want to get more than one paper out of it so keeping these"}, {"start": 487.4, "end": 493.12, "text": " things yourself is basically a surefire guarantee you're gonna get another two"}, {"start": 493.12, "end": 501.55999999999995, "text": " three four five papers out of this data or model or whatever it's also a good"}, {"start": 501.56, "end": 507.04, "text": " way to kind of generate press if you basically say oh we're not releasing but"}, {"start": 507.04, "end": 511.0, "text": " we have this really good model and there's there's one thing on Twitter right"}, {"start": 511.0, "end": 515.92, "text": " I mean you can't probably can't find it but it says like step one my model is"}, {"start": 515.92, "end": 521.08, "text": " state of the art step two my model is state of the art but generalize is better"}, {"start": 521.08, "end": 525.96, "text": " to other tasks step three my model does the same thing but with fewer parameters"}, {"start": 525.96, "end": 537.32, "text": " and step four my model is so good I can't even talk about it yeah so so"}, {"start": 537.32, "end": 545.5600000000001, "text": " basically I think a lot of things came together this this press generating"}, {"start": 545.5600000000001, "end": 552.36, "text": " the pressure to to create more kind of papers out of it and generally"}, {"start": 552.36, "end": 557.0, "text": " security concerns I think being open AI and the open AI kind of was"}, {"start": 557.0, "end": 563.72, "text": " established as a way to let's say the demo like their statutes pretty"}, {"start": 563.72, "end": 569.08, "text": " clearly say we want to open AI and research it in ethical use and you have"}, {"start": 569.08, "end": 575.04, "text": " backers like Elon Musk that talk all the time about kind of safety related"}, {"start": 575.04, "end": 579.24, "text": " issues in AI I think there's a lot of pressure on these people to with"}, {"start": 579.24, "end": 585.0, "text": " everything they do basically have an ethical component all right so so every"}, {"start": 585.0, "end": 591.52, "text": " everything they do is kind of scrutiny to this does this have ethical"}, {"start": 591.52, "end": 595.6800000000001, "text": " implications and where can we basically stand out from from the rest of the"}, {"start": 595.6800000000001, "end": 600.64, "text": " community by doing something well it doesn't need to be more ethical it just"}, {"start": 600.64, "end": 605.12, "text": " needs to be different with an ethical reason behind reasoning behind it and"}, {"start": 
605.12, "end": 609.44, "text": " this I think this is it I think there's a lot of things coming together I don't"}, {"start": 609.44, "end": 615.44, "text": " I don't think anyone like maliciously thought oh this you know we're gonna do"}, {"start": 615.44, "end": 620.72, "text": " this it's gonna generate so much press or I don't think anyone actively thought"}, {"start": 620.72, "end": 627.2, "text": " ah well just keep it secret we're gonna get so much more papers out of it I"}, {"start": 627.2, "end": 631.92, "text": " honestly think that the reasoning was there's a lot of you know a lot of"}, {"start": 631.92, "end": 637.52, "text": " pressure to do this ethical things when there's there's not if you think about"}, {"start": 637.52, "end": 641.92, "text": " it it's yeah it's a good language model and it can generate text but you can"}, {"start": 641.92, "end": 647.4399999999999, "text": " also hire you know people to generate text to generate fake news to do"}, {"start": 647.4399999999999, "end": 653.9599999999999, "text": " phishing spam it's just a bit more efficient now right and yeah so it's it's"}, {"start": 653.9599999999999, "end": 657.92, "text": " unprecedented that you don't you don't release this research kind of cold war"}, {"start": 657.92, "end": 666.92, "text": " style so it's it's not really dangerous to release this and it's just delaying"}, {"start": 666.92, "end": 672.4399999999999, "text": " the inevitable basically but I think the the pressure the pressure was on and"}, {"start": 672.4399999999999, "end": 677.36, "text": " they made a decision that they thought was in line with their values and I"}, {"start": 677.36, "end": 683.5999999999999, "text": " think the this just neatly aligns with the underlying the other benefits to"}, {"start": 683.6, "end": 690.28, "text": " them that yeah all right so let's dive into the paper the paper is actually"}, {"start": 690.28, "end": 697.12, "text": " not you know too too much content there or they basically say so far is that"}, {"start": 697.12, "end": 703.48, "text": " a lot of a lot of these kind of papers on these tasks they they say the"}, {"start": 703.48, "end": 706.8000000000001, "text": " dominant approached creating ML systems is to collect the data set of"}, {"start": 706.8000000000001, "end": 710.5600000000001, "text": " training examples demonstrate correct behavior training system to imitate"}, {"start": 710.56, "end": 717.8399999999999, "text": " test its performance in IID samples so they basically say the there's kind of"}, {"start": 717.8399999999999, "end": 722.76, "text": " the single task training on single domain data sets is a major contribute to"}, {"start": 722.76, "end": 725.76, "text": " the lack of generalization of their thinker and systems so they basically say"}, {"start": 725.76, "end": 729.0799999999999, "text": " these language systems they don't generalize like a QA system might be"}, {"start": 729.0799999999999, "end": 734.8, "text": " trained on a QA task but it you know has nothing to to do with the task is"}, {"start": 734.8, "end": 739.76, "text": " basically a little bit different and even in multitask learning they say"}, {"start": 739.76, "end": 745.04, "text": " multitask learning is a promising framework but also it's kind of say it's"}, {"start": 745.04, "end": 752.24, "text": " still nascent and there's only very few different tasks right to do so they"}, {"start": 752.24, "end": 757.76, "text": " basically say you need basically a big big unsupervised model and it will"}, 
{"start": 757.76, "end": 768.04, "text": " implicitly learn all of the kind of special tasks yeah so they say there"}, {"start": 768.04, "end": 774.52, "text": " there are approaches that basically basically learn these language models but"}, {"start": 774.52, "end": 780.68, "text": " then still require supervised training so basically fine tuning this has"}, {"start": 780.68, "end": 786.4, "text": " been the this is for example the bird paper we discussed in the in the last"}, {"start": 786.4, "end": 791.92, "text": " video or two two videos ago that learns a giant language model but then does"}, {"start": 791.92, "end": 796.24, "text": " fine tuning for each of these tasks and gets really well what they want to do"}, {"start": 796.24, "end": 802.72, "text": " here is basically simply learn a language model and then investigate whether"}, {"start": 802.72, "end": 807.28, "text": " or not the language model can perform downstream tasks in a zero shot"}, {"start": 807.28, "end": 812.5600000000001, "text": " setting that means without any parameter or architecture modifications or no"}, {"start": 812.5600000000001, "end": 820.96, "text": " fine tuning all right so what they do so basically what a language model is if"}, {"start": 820.96, "end": 828.2800000000001, "text": " for those who don't know it's if you have a sequence of text let's say a b c d"}, {"start": 828.2800000000001, "end": 837.2, "text": " these are words let's say what's actually words the cat set on the mat and so on"}, {"start": 837.2, "end": 842.6, "text": " and you and you you a language model is you can remove the end of the sentence at"}, {"start": 842.6, "end": 850.2800000000001, "text": " some point and ask the model what comes next right that's a language model"}, {"start": 850.28, "end": 855.56, "text": " I mean there's different kinds of language models specific language"}, {"start": 855.56, "end": 860.88, "text": " ones but that's the basic the basic thing so the you just asked the model what's"}, {"start": 860.88, "end": 864.3199999999999, "text": " next so you can you can do a lot of unsupervised training because you don't"}, {"start": 864.3199999999999, "end": 868.92, "text": " need a labeled dataset for this you simply need a text corpus and that's"}, {"start": 868.92, "end": 872.36, "text": " basically all they do they use transformers which we've also discussed in"}, {"start": 872.36, "end": 877.6, "text": " attention is all you need paper so if you if you don't know what"}, {"start": 877.6, "end": 886.48, "text": " transformers are go back and look at that yeah all right so basically they say"}, {"start": 886.48, "end": 891.72, "text": " a lot of these special tasks like translation and question answering can be"}, {"start": 891.72, "end": 898.76, "text": " framed in language model way for example if you simply input if you this is"}, {"start": 898.76, "end": 903.24, "text": " your text translate to French and then you English text and then the French"}, {"start": 903.24, "end": 910.36, "text": " text right and then at at test time basically you leave away the French text"}, {"start": 910.36, "end": 917.16, "text": " you simply ask the language model what comes next right if and it's an input"}, {"start": 917.16, "end": 922.76, "text": " is translate to French and then English text this is the translation framed as"}, {"start": 922.76, "end": 927.04, "text": " a language model task because you can specify the task that the language"}, {"start": 927.04, "end": 932.76, "text": " allows to do also as language 
so this is quite this is quite an interesting approach"}, {"start": 932.76, "end": 938.1999999999999, "text": " and one they exploit here and they basically say well since in a large"}, {"start": 938.1999999999999, "end": 945.04, "text": " and diverse corpus of web pages that they collect here some there is going to"}, {"start": 945.04, "end": 950.76, "text": " be some websites that basically all do translation from English to French"}, {"start": 950.76, "end": 957.2, "text": " and the model can learn from that so here in this paragraph they basically list"}, {"start": 957.2, "end": 961.92, "text": " examples of naturally occurring demonstrations of English to French and"}, {"start": 961.92, "end": 967.04, "text": " French to English translation found throughout the training dataset so basically"}, {"start": 967.04, "end": 973.48, "text": " this is this is how the model could learn let's just look at one I hate the word"}, {"start": 973.48, "end": 977.08, "text": " perfume versus it's somewhat better in French"}, {"start": 977.08, "end": 984.9200000000001, "text": " right so that there's a way in just an unsupervised setting where the"}, {"start": 984.9200000000001, "end": 989.84, "text": " language model could learn right if if you just cross out this word"}, {"start": 989.84, "end": 997.64, "text": " paffa at the end and you just ask a model what comes next right the model"}, {"start": 997.64, "end": 1001.44, "text": " sees I hate the word perfume versus it's somewhat better in French"}, {"start": 1001.44, "end": 1006.0, "text": " here colon then the model has to put something there in the most logical"}, {"start": 1006.0, "end": 1010.84, "text": " continuation is to put the French word for perfume right so that that's kind"}, {"start": 1010.84, "end": 1017.04, "text": " of have a frame translation and these other tasks in language model way all"}, {"start": 1017.04, "end": 1021.68, "text": " right so they talk about the training dataset which is a major component here"}, {"start": 1021.68, "end": 1027.96, "text": " they say they make a new training dataset because all of the current ones"}, {"start": 1027.96, "end": 1032.12, "text": " aren't sufficient they say most problem source of diverse nearly only"}, {"start": 1032.12, "end": 1037.08, "text": " text is web site such as common crawl while these archives are many orders of"}, {"start": 1037.08, "end": 1039.8, "text": " magnitude larger than current language modeling datasets they have"}, {"start": 1039.8, "end": 1045.3999999999999, "text": " significant data quality issues so to say content are mostly unintelligible"}, {"start": 1045.3999999999999, "end": 1051.8799999999999, "text": " and so on so they basically describe here how they scrape a new web"}, {"start": 1051.8799999999999, "end": 1061.52, "text": " scrape which emphasizes document quality they go on Reddit basically and"}, {"start": 1061.52, "end": 1067.04, "text": " scrape all outbound links from Reddit that have received at least three"}, {"start": 1067.04, "end": 1073.92, "text": " karma which means that it yeah three upvotes for a post of a link which"}, {"start": 1073.92, "end": 1083.84, "text": " basically means that three humans agreed that this was a good link so so they"}, {"start": 1083.84, "end": 1088.2, "text": " that's how they collected dataset resulting dataset web site web text"}, {"start": 1088.2, "end": 1097.0800000000002, "text": " contains the text subset of the 45 million links they then kind of clean this"}, {"start": 1097.0800000000002, "end": 
1103.64, "text": " and scrape it down and remove some stuff and they they end up with a large"}, {"start": 1103.64, "end": 1109.16, "text": " corpus right and then they talk about how they represent the input which is"}, {"start": 1109.16, "end": 1114.24, "text": " byte pair encoding style it's not exactly byte pair encoding it's a byte"}, {"start": 1114.24, "end": 1123.92, "text": " pair encoding inspired encoding we want make a video about this by itself"}, {"start": 1123.92, "end": 1127.92, "text": " because it's really interesting but basically it's you can think of it as"}, {"start": 1127.92, "end": 1136.36, "text": " tokenization and pre-processing right then they say they show their models so"}, {"start": 1136.36, "end": 1142.16, "text": " architecture hyper parameters basically these are these are their models this"}, {"start": 1142.16, "end": 1147.76, "text": " is the smallest one this second smallest one they say it's the same size as"}, {"start": 1147.76, "end": 1156.0800000000002, "text": " birthed so the the language model by Google that we've looked at and then the"}, {"start": 1156.0800000000002, "end": 1167.0800000000002, "text": " largest one 1.5 billion parameters I mean that's huge and yeah they say it's"}, {"start": 1167.08, "end": 1174.52, "text": " 10 times larger than the previous so the first one is their previous model and"}, {"start": 1174.52, "end": 1182.84, "text": " this now is this this is the GPT2 model that that gets all these these nice"}, {"start": 1182.84, "end": 1189.6799999999998, "text": " results so they do experiments first they do experiments on language modeling"}, {"start": 1189.6799999999998, "end": 1195.8, "text": " itself right so they train on their on their corpus and then they evaluate on a"}, {"start": 1195.8, "end": 1201.32, "text": " bunch of other language modeling corpus so these up here are language modeling"}, {"start": 1201.32, "end": 1209.8, "text": " corp corpuses and the state of the art isn't this row and then you just look at"}, {"start": 1209.8, "end": 1217.68, "text": " basically the bottom row compare to their largest model this this is perplexity"}, {"start": 1217.68, "end": 1229.28, "text": " where it says ppl and the I think this here is is accuracy"}, {"start": 1230.6000000000001, "end": 1237.3200000000002, "text": " so perplexity lower is better which you can you can see here the previous state"}, {"start": 1237.3200000000002, "end": 1243.6000000000001, "text": " of the art was 39 on week text 2 they get to 18 with accuracy obviously higher"}, {"start": 1243.6, "end": 1251.8799999999999, "text": " is better so the the kind of previous accuracy in lambada was 59 they get to 63"}, {"start": 1251.8799999999999, "end": 1257.52, "text": " basically they improve everything except for this 1 billion word corpus and"}, {"start": 1257.52, "end": 1263.1599999999999, "text": " they also explain why they say this is the most heavily pre-processed text and"}, {"start": 1263.1599999999999, "end": 1270.48, "text": " so on so that basically they basically are really good at language modeling"}, {"start": 1270.48, "end": 1275.56, "text": " even though they train on a different data set that's the point right the"}, {"start": 1275.56, "end": 1279.28, "text": " point is they train on their own corpus and then they go and just evaluate on"}, {"start": 1279.28, "end": 1285.92, "text": " the test set of these of these new of these new tasks and they become better"}, {"start": 1285.92, "end": 1289.68, "text": " basically than the models that 
trained on the training data set of that"}, {"start": 1289.68, "end": 1298.84, "text": " particular task all right so they do a number of further experiments where"}, {"start": 1298.84, "end": 1304.52, "text": " they basically show that the model has now learned kind of implicitly learned"}, {"start": 1304.52, "end": 1313.32, "text": " a number of different tasks so let's look at for example summarization this"}, {"start": 1313.32, "end": 1317.3999999999999, "text": " just want to show an example of how you can do this so summarization"}, {"start": 1317.3999999999999, "end": 1323.56, "text": " summarization task is you get a long text you need to produce a short text and"}, {"start": 1323.56, "end": 1330.36, "text": " that short text is then compared to short texts that humans wrote when the"}, {"start": 1330.36, "end": 1335.36, "text": " task was to summarize the long text and you get points on how much your"}, {"start": 1335.36, "end": 1341.3999999999999, "text": " text overlaps with these human texts so they they say we test GPT2's ability"}, {"start": 1341.3999999999999, "end": 1348.1599999999999, "text": " to perform summarization on the CNN and Daily Mail data set to induce"}, {"start": 1348.16, "end": 1355.24, "text": " summarization here's what I found interesting we add the text TLDR after the"}, {"start": 1355.24, "end": 1360.52, "text": " article and generate a hundred tokens right then they say they need to reduce"}, {"start": 1360.52, "end": 1368.4, "text": " repetition and so on but basically this this this is right this is the way you"}, {"start": 1368.4, "end": 1378.8000000000002, "text": " can frame summarization by text input so I find this just kind of a really nice"}, {"start": 1378.8000000000002, "end": 1385.52, "text": " way to think about these problems the fact that instructions of the task can"}, {"start": 1385.52, "end": 1391.2800000000002, "text": " be given as text this is a very nice example here so basically you put you as"}, {"start": 1391.28, "end": 1400.2, "text": " input you put the entire article right and so you here is the CNN article blah"}, {"start": 1400.2, "end": 1408.44, "text": " blah blah super long right and then here you put TLDR which is for those who"}, {"start": 1408.44, "end": 1414.3999999999999, "text": " don't know it's too long didn't read so people use this this phrase to"}, {"start": 1414.4, "end": 1421.16, "text": " indicate that then they will write a short summary of whatever was before here"}, {"start": 1421.16, "end": 1426.24, "text": " they will either put this at the beginning or at the end of a long text right to"}, {"start": 1426.24, "end": 1430.72, "text": " say to people okay if you don't want to read all this just read this down here"}, {"start": 1430.72, "end": 1435.3600000000001, "text": " gives you the gist of it which is a great summarization so if you then take this"}, {"start": 1435.3600000000001, "end": 1442.0800000000002, "text": " away and ask the language model what's here right basically throughout the"}, {"start": 1442.08, "end": 1447.84, "text": " training corpus the language model will have encountered such pieces of text"}, {"start": 1447.84, "end": 1452.6399999999999, "text": " with a TLDR in it and the language model might have learned that whatever is"}, {"start": 1452.6399999999999, "end": 1459.8, "text": " down here is a short version of whatever is up here and thereby if you then ask"}, {"start": 1459.8, "end": 1466.36, "text": " language model what comes next here right language model might learn I 
need to"}, {"start": 1466.36, "end": 1473.08, "text": " summarize whatever is above and that's my best shot at completing at at"}, {"start": 1473.08, "end": 1481.12, "text": " answering the question what comes next and yeah so they get you know surprisingly"}, {"start": 1481.12, "end": 1492.9199999999998, "text": " good results from from this so they say on the commonly reported Rouge 1-2"}, {"start": 1492.92, "end": 1496.44, "text": " metrics to generate the summaries only begin to approach the performance of"}, {"start": 1496.44, "end": 1500.28, "text": " classic neural baselines just barely outperform selecting three random"}, {"start": 1500.28, "end": 1510.92, "text": " sentences from the article but still it it while qualitatively the"}, {"start": 1510.92, "end": 1516.0800000000002, "text": " generations resemble summaries they often focus on recent content from the"}, {"start": 1516.0800000000002, "end": 1520.2, "text": " article of confused specific details so this is kind of a task where it kind of"}, {"start": 1520.2, "end": 1526.72, "text": " worked but not really but I just find it it's really interesting that it kind"}, {"start": 1526.72, "end": 1532.24, "text": " of how they frame the task and how this can still so it still kind of works"}, {"start": 1532.24, "end": 1536.6000000000001, "text": " and that's the the just here in all of these tasks is also with like"}, {"start": 1536.6000000000001, "end": 1541.3600000000001, "text": " translation they obviously don't get near the performance of a system"}, {"start": 1541.3600000000001, "end": 1547.88, "text": " specifically trained to do this task but they all also always say it kind of"}, {"start": 1547.88, "end": 1555.92, "text": " works right it's sort of sort of it learns something and their entire point of"}, {"start": 1555.92, "end": 1573.64, "text": " this paper is to say well look yeah the the diversity of tasks the model is"}, {"start": 1573.64, "end": 1578.2800000000002, "text": " able to perform and I would say kind of perform in a zero-shot setting suggests"}, {"start": 1578.2800000000002, "end": 1581.3200000000002, "text": " that high capacity models trained to maximize the likelihood of a"}, {"start": 1581.3200000000002, "end": 1586.72, "text": " sufficiently very text corpus begin to learn how to perform a surprising amount"}, {"start": 1586.72, "end": 1591.4, "text": " of tasks without the need for explicit supervision so yeah their entire point is"}, {"start": 1591.4, "end": 1601.0400000000002, "text": " if we train on such very data that kind of that spans the entire range of"}, {"start": 1601.04, "end": 1607.2, "text": " human language expression the kind of tasks we want these systems to do will"}, {"start": 1607.2, "end": 1612.8, "text": " be learned implicitly so basically it points to let's get an even bigger"}, {"start": 1612.8, "end": 1618.76, "text": " corpus let's get even bigger models and we might get even better unsupervised"}, {"start": 1618.76, "end": 1624.32, "text": " zero-shot way in these kind of special tasks and general language"}, {"start": 1624.32, "end": 1629.6, "text": " understanding all right so that that was basically it I've jumped over a lot of"}, {"start": 1629.6, "end": 1633.36, "text": " points but I encourage you to look into this to look into the specific"}, {"start": 1633.36, "end": 1639.1999999999998, "text": " experiments they're really interesting we have they framed things and give"}, {"start": 1639.1999999999998, "end": 1645.4399999999998, "text": " just just shout your 
opinion around about whether or not the publishing is a"}, {"start": 1645.44, "end": 1659.68, "text": " good thing or not it's really funny I love it and without having a good day"}] |
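The zero-shot idea described in the segments above — the task instruction is itself just text, so translation becomes "it's somewhat better in French: …" and summarization becomes "article … TL;DR:" — can be tried directly with a public GPT-2 checkpoint. This is only an illustrative sketch, not the paper's setup: it assumes the Hugging Face `transformers` package (a reasonably recent version), the small `"gpt2"` checkpoint, and prompts I made up in the spirit of the examples mentioned in the video; the small model's continuations will usually be rough.

```python
# Sketch of zero-shot prompting with a small GPT-2 checkpoint.
# Assumes the Hugging Face `transformers` package; checkpoint name and prompts are my own choices.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

def continue_text(prompt, max_new_tokens=30):
    """Ask the language model 'what comes next?' after the prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=False,                      # greedy decoding for reproducibility
        pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad-token warning
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

# Translation framed as language modeling: the instruction is part of the text itself.
print(continue_text("I hate the word perfume. It's somewhat better in French:"))

# Summarization framed as language modeling via the TL;DR convention (made-up article text).
article = "Batch normalization normalizes layer inputs per mini-batch, which lets you train with higher learning rates."
print(continue_text(article + "\nTL;DR:"))
```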
Yannic Kilcher | https://www.youtube.com/watch?v=OioFONrSETc | Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | https://arxiv.org/abs/1502.03167
Abstract:
Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters.
Authors:
Sergey Ioffe, Christian Szegedy | Hi, today we're looking at batch normalization, accelerating deep network training by reducing internal covariate shift by Sergey Ioffe and Christian Szegedy, yeah, not my, not the best pronouncer. Szegedy, close enough. Alright, so this is a bit of an older paper and I think it's still good to look at it, it's still relevant and people just kind of throw batch normalization into networks and maybe don't really know what it's doing. So let's look at it. So what these people argue is that in a network usually you have structures like this. So if something like that, it means that your, your loss, kind of, this is a two layer network, your loss is a composition of the first layer on the inputs with parameters theta one and the second layer with parameters theta two. So conceptually that would look something like this, you have your input, maybe it's an image, right, and you put it through the network and it becomes some intermediate representation, right, that's x zero, that's x one, or maybe we'll call it even h one, hidden representation, right, then that becomes, then through the layer becomes h two and so on, right, so this, this stuff here, this would be weight matrices w one, w two that transform the image into a new image or whatever. So what they're arguing is that well, if you only consider a single layer like the first layer here, it's kind of the same if you only consider the second layer with the h one now as the input, right, it's pretty natural to see each layer of the neural network is kind of like its own transformation taking inputs and producing some outputs. So what people usually do with the very first input here, with your data, in machine learning generally, is so called whitening the data, which means that they have this over here. Usually data is whitened, I can't find it, but what it means is you basically want to, if you have data, let's say here is a coordinate axis, you have 2d data and you might want to do a kind of a linear regression on it and you have data that's kind of like that, right, it suits you to transform this data by, first of all, looking where its mean is, mean is about here, and subtracting that. So here, here, and then kind of dividing by its standard deviation in each direction. So there's a standard deviation here and there is a standard deviation here. So you would transform this data into something like maybe something like this. So you see that the mean, the mean is now in the middle and it's not so elongated anymore. So you have a much easier time to kind of learn, learn something on this data than on this data over here, simply because our classifiers usually tend to rely on like inner products, and if you, if you do an inner product here, you have one of these vectors here and you do some inner product, it's always going to be far away from, from the mean and thereby the inner products are going to be large, no matter what, right, whereas here, if you take a random one and then another one, so if you take two random points here, their two vectors from the mean are almost the same, whereas if you take two random points here, they tend to look uniform in direction. So it's kind of, in that sense, we know that machine learning methods work better if you whiten the data first. So these people ask, hey, why do, why do we only do this at the very beginning, right?
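As a quick illustration of that whitening step, here is a minimal NumPy sketch of the per-dimension version (subtract each dimension's mean, divide by its standard deviation); the data and names are made up, and full whitening would additionally decorrelate the dimensions, which is not shown here.

```python
import numpy as np

# Elongated, off-center 2-D data, roughly like the drawing described above.
data = np.random.randn(500, 2) @ np.array([[3.0, 0.0], [0.0, 0.5]]) + np.array([4.0, -2.0])

# Per-dimension standardization: subtract the mean, divide by the standard deviation.
standardized = (data - data.mean(axis=0)) / data.std(axis=0)

print(standardized.mean(axis=0))  # roughly (0, 0)
print(standardized.std(axis=0))   # roughly (1, 1)
```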
Why don't we, why, if each layer basically takes its input and learns something, each layer is basically a machine learning method, why don't we just whiten the data at every single layer or, you know, every single subcomponent of a deep network. And that's the kind of basic step here. So they argue how this has been kind of tried before, or what kind of methods you would usually get and why these aren't so good, mainly because you kind of need to intermingle this whitening with training the network. And thereby, if you just go about this naively, then you would, you would kind of produce artifacts from training. So that's, that's this section, this section here, where they argue that you can't really go about this super naively, but what they do isn't super complicated, they just do it in a smart way. So we'll jump directly to that. What they say is, okay, let's look at what they call normalization of your mini batch statistics. All right, let's say we have some, some D dimensional input X, right? And we're just going to look at it per dimension. So we only care about per individual dimension normalization. All right. So what do we need to do? We're going to take the k-th dimension, we're going to subtract from it the mean of the k-th dimension within a mini batch, right, within a mini batch of data. So a mini batch may be something like 32 examples or 100 examples or something like this. And then we'll divide by the variance of that mini batch. So this is, this is done over here, basically. So you compute mu B, mu of the mini batch, which is simply the empirical mean of the data at that particular layer. And then you compute sigma squared B, which is simply the, the empirical estimate of the variance, computed on that particular mini batch. And then you transform your data by subtracting that and by dividing it by this. And this, this, this constant here is simply to prevent from dividing by, you know, too small values, so you don't get, like, numerical problems. So what does it do? It does basically what we, what we did above. But now what they say is, okay, we want to make sure that this transformation can potentially, you know, represent the identity, because sometimes, or like a natural, natural, if you had to do something with your input when giving it to the next layer, like the very baseline is to do nothing to it, right, to do the identity transform. But if you do, if you do this, you probably won't end up with the identity transform, except if the mean is exactly zero and the variance is exactly one, right. So what they say is, okay, we'll also introduce two new parameters to this, here, this, this, uh, gamma and this beta here. And these are learned like other parameters in the network. We learn the parameters gamma and beta, and gamma and beta are simply, gamma is simply a scalar that this transformed X is multiplied by, and beta is simply a scalar that is then added to it. So in each dimension of your hidden representation, you basically learn how to scale it and how to shift it, scale and shift, after you've done the normalization. So first, first you do the normalization, what is it? Right. First you go from this type of data to this type of data, and then you say, well, but maybe it's actually more beneficial to, you know, have it not centered or whatever. So, so that the network can actually learn to transform this somewhere.
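To make that concrete, here is a minimal NumPy sketch of one batch-norm layer's forward pass in training mode, following the normalize-then-scale-and-shift recipe just described; the function and variable names are mine, not taken from the paper.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch-normalize a mini-batch x of shape (N, D), per dimension.

    gamma and beta are learned (D,) vectors: after each dimension is shifted to
    zero mean and scaled to unit variance over the mini-batch, the layer
    multiplies by gamma and adds beta, so it could in principle also recover
    the identity transform.
    """
    mu = x.mean(axis=0)                      # empirical mini-batch mean, shape (D,)
    var = x.var(axis=0)                      # empirical mini-batch variance, shape (D,)
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized activations
    y = gamma * x_hat + beta                 # learned scale and shift
    return y, (x, x_hat, mu, var)            # cache what a backward pass will need

# Toy usage: a mini-batch of 32 examples with 4 badly scaled features.
x = 3.0 * np.random.randn(32, 4) + 5.0
gamma, beta = np.ones(4), np.zeros(4)
y, cache = batchnorm_forward(x, gamma, beta)
print(y.mean(axis=0))   # roughly zero in every dimension
print(y.var(axis=0))    # roughly one in every dimension
```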
This might seem, this might seem redundant, but it's really powerful because what you're basically saying is that, okay, this probably isn't the best, you know, distribution. This probably is better, but if the network kind of, if the back propagation algorithm or the training algorithm decides that this first representation was actually useful, it has the option of going back, but it also has the option of going to any other kind of form of distribution. So so it's it's pretty powerful in terms of what it does. Okay, it's not really correct here that it has the power to go to any distribution because it's only kind of per dimension scalar that it learns, but still it the potential to transform the distribution by these learned scalars is pretty big. Alright. So basically that's it. That's that's that's the whole that's the whole shebang. You normalize your inputs to each layer by this formula and then you introduce new parameters that you learn along with your network parameters. So this kind of has some implications. First of all, one implication is this here. If you build a batch norm into your network, it kind of learns this this plus beta, which is basically a bias parameter. If you think of a traditional kind of fully connected layer, this isn't a fully connected layer because this scalar here is only per dimension, but the bias in a fully connected layer is also just per dimension. So the beta is equal to a bias in a fully connected layer. So if you have a batch normalization after or after a after a fully connected or convolutional layer or anything that can or sometimes has a bias parameter, it's almost not worth it to kind of learn both. So you would rather just only have the one from the batch normalization and leave and use the convolution or fully connected layer without a bias. So that's kind of one implication. Another implication is we have just lost the kind of the ability to have deterministic test time inference. So much like dropout, which is kind of random dropping out of nodes here, we have quantities that depend on the mini batch. So not only the individuals sample, but they actually depend on what other samples are randomly selected to be trained with that particular sample. So that's kind of awkward if you kind of want to have some deterministic reproducible thing at test time. So what people do is, and here, this is discussed, what people do is while training, they use these quantities, the quantities we just discussed, but they keep kind of a running average over them. So what I would do is in each mini batch, I would compute this mini batch mean and this mini batch variance. And I would keep quantities, I would keep running averages of them. And at test time, I'm going to plug in these running averages. So there's nothing dependent on the mini batch anymore. So that's pretty neat trick, I think. And you can even imagine at the end of your network training, simply using these here to kind of fine tune the weights to these exact parameters. So that's one thing that's that's kind of you have to pay attention to. So usually in neural network libraries, there are there are parameters you can set whether or not this network is in train mode or in test mode. And depending on that, the batch norm layer will use the mini batch statistics or will use the kind of all over data sets statistics. All right, the second thing is training. So how do you actually train this thing? Because now you can't just, right, we we started with our with our multi layer network up here. 
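A minimal sketch of that train-versus-test behaviour: mini-batch statistics while training, stored running averages at test time. The exponential moving average with momentum 0.1 is a common library default (for example, what PyTorch's BatchNorm layers use), not something prescribed in this transcript, and the class below is my own simplification. The earlier remark about the bias also shows up in practice: a fully connected or convolutional layer placed right before batch norm is typically created without its own bias, since beta already plays that role.

```python
import numpy as np

class SimpleBatchNorm:
    """Toy batch-norm layer illustrating train vs. test mode (names and the
    exponential-moving-average update are assumptions, not from the paper)."""

    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.gamma, self.beta = np.ones(dim), np.zeros(dim)
        self.running_mean, self.running_var = np.zeros(dim), np.ones(dim)
        self.eps, self.momentum = eps, momentum
        self.training = True

    def __call__(self, x):
        if self.training:
            mu, var = x.mean(axis=0), x.var(axis=0)
            # Keep running averages for use at test time.
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mu
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mu, var = self.running_mean, self.running_var   # deterministic at test time
        x_hat = (x - mu) / np.sqrt(var + self.eps)
        return self.gamma * x_hat + self.beta

bn = SimpleBatchNorm(4)
_ = bn(np.random.randn(32, 4))   # training step: uses mini-batch statistics
bn.training = False
_ = bn(np.random.randn(1, 4))    # test step: uses the stored running averages
```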
Right. F2, F1, right. First, I'm going to put my things through F1 and then I'll put my things through F2, right. And the, the back propagation here is, is quite easy. So let me get rid of this. The back prop here is quite easy. You go to L and maybe you want to derive it by theta one. Right. So you're first going to derive it by the hidden representation one, and then the hidden representation one with respect to theta one. So the hidden representation would be whatever comes out of here, or H1, sorry, not I. And so on. So you kind of chain rule your way through here. But now in between these layers here, you have these batch norm things. And so the, the authors discuss how we now do back propagation in the face of these things. So here is basically what they discuss. It actually pays to, to have a graph of what's going on. So here is X. This is the input to our layer. Right. So what do we compute from X? We compute mu, let's just call it mu, or maybe it's called mu B here. Right. This is the mean of X, of all the X's. So this is X1 until, well, X1 until XN. This is the mini batch. We compute the mean. And then from this and from this, we can compute this estimate of the variance. Right. We need both. Right. So we now have the mean and the variance over the mini batch. So we're going to take one of these X's, just the i-th one, right. And we're going to use this and this to compute X... what? Compute X... is it called hat? Yeah, probably it's called X hat. Right. Yeah, we talked about X hat. So X hat, X hat I, is X I minus mu B divided by sigma squared B, the square root of it, plus this kind of little constant here. We're going to leave away the little, little constant for clarity's sake. Actually, it's in the calculations here. But right. So then we have a new parameter, gamma. Right. And we're going to use it and our X hat, and also this beta here, to compute Y hat... Y, or just Y. And of course, this is I. This is I. Right. So and this here is our final output of the layer. So you can see now the back propagation paths if you go through here. So the back propagation path, if we have some loss coming in here, we backprop through Y I. Right. So here is del L, the loss, to Y I, that's here. Right. So if we want, for example, the backprop with respect to beta, what we do is we simply, and this is, this is over the mini batch, of course, we simply backprop here through this path. So in our, in our formula for beta, there should be only mention of Y I. And that's what we see here, right. In our formula for gamma, there should only be mention of Y I. So because the path leads only through Y I. Oh, no, I'm sorry. Actually, what I mean is, of the derivative with respect to Y I, of course, we also have to pay attention to the fact that this is multiplied here by this X hat I, where, of course, that's not the case when we just add something, because the derivative of an addition, like X plus B, with respect to B, disregards X, whereas if it's X times B, it doesn't disregard X. Alright. So if we, yeah, so you can, you can go back. So the interesting bit basically comes when we, when we find out, okay, how, because here is, here is another layer, right, down here somewhere, there is another layer. And we basically want to know this input here to the next layer. How do we compute it in the face of this mess here? Because it's not, it's not so easy, right. So you have to see we have three paths here. We go back through X, and let me get rid of these blue ones.
We go, we go back through X hat directly to X. We go one path is through here. And one path is through this, this mu. So basically have to compute derivatives with respect to sigma squared and mu. And for that, we need to derivative with, with respect to X hat. So basically the way back prop works is you just find all paths from where you are, to where you want to go. And then you, you kind of intuitively compute this. So this one here is the easiest, the easiest, as you see here, they did it on top. Well, first they did this one, which is simply going from Y to X hat I start. Then they go from X hat I to sigma squared, which simply involves kind of the reverse operations of how you got it. This is simply a derivative formula here of the, of the division by square root. Um, then you can use this, you can use this quantity here to compute that. So basically you just go in reverse of how you computed the operations in the first place. We said we needed mu B to compute sigma squared B. Now we need the derivative with respect to sigma squared B in order to compute the derivative to mu B. Um, and once you have that, and you see the, the addition here, the ad here is the fact that whoops, is the fact that two things contribute to mu B. So two paths lead to lead to mu B. One path is from, from here, and one path is through here. Right. So here, this should be a green. Um, since two paths, you have two components to your derivative and you add each of them. Uh, so that's how that's going to be. And then this here, with respect to this X here, we have three paths, right, because we have three arrows going out of XI one here, one here, and one here. So you have to take into account all of them, right. So this one is pretty easy. That's the first one. Then the second one, um, sorry, this, the second one, uh, goes through this mu B, which we've already computed. And the third one goes through the, the sigma, which we've also already computed, right. And these are added, um, because all the paths, you have to, I'll add all the paths in the backprop algorithm. Maybe we'll do a, actually, video on backprop, uh, later to, to get to really dive into how this works. Um, and finally, they, uh, they compute these, these we've already discussed. So in essence, the whole thing is differentiable. Um, you just have to kind of pay attention how, how to do it. Um, but the whole thing is differentiable. And thereby, you can basically backprop through a network that has these batch normal layers in a built in. So that's pretty cool. Um, I just want to quickly jump over to the results. Um, and yeah, keep in mind, this paper is from 2015. So networks weren't that big, uh, back then, um, we didn't know that much about training yet. But the interesting thing is they basically discovered, look, we can, we can have drastically fewer steps in order to reach the same accuracies. And these are kind of the activations of the network over the course of training. So without patch norm, you see, especially at the beginning, there's large fluctuations in the activations. And, um, because, because they use batch norm, now there's no such thing. So basically, the reason for that is pretty, is pretty simple. Right. While you learn, and you learn your layered representation here, let's say there's X and X is fed through layers. And there is hidden representations each in between. Right. So you're trying to learn all these parameters. 
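Since every operation in that graph is differentiable, the backward pass can be written out by hand. Below is a NumPy sketch that sums the three paths into X (directly through X hat, through the batch variance, and through the batch mean); the formulas follow the standard chain-rule derivation rather than being copied verbatim from the paper, and the names are my own.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mu, var = x.mean(axis=0), x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta, (x, x_hat, mu, var)

def batchnorm_backward(dy, cache, gamma, eps=1e-5):
    """Gradients of the loss w.r.t. x, gamma and beta, given dL/dy for the mini-batch."""
    x, x_hat, mu, var = cache
    n = x.shape[0]
    std_inv = 1.0 / np.sqrt(var + eps)

    dbeta = dy.sum(axis=0)                     # beta is only added, so just sum the incoming gradient
    dgamma = (dy * x_hat).sum(axis=0)          # gamma multiplies x_hat

    dx_hat = dy * gamma                        # path back through x_hat
    dvar = np.sum(dx_hat * (x - mu) * -0.5 * std_inv**3, axis=0)
    dmu = np.sum(-dx_hat * std_inv, axis=0) + dvar * np.mean(-2.0 * (x - mu), axis=0)
    dx = dx_hat * std_inv + dvar * 2.0 * (x - mu) / n + dmu / n   # all three paths added up
    return dx, dgamma, dbeta

# Quick numerical check on one input entry, using the simple loss L = sum(y).
x = np.random.randn(8, 3)
gamma, beta = np.random.randn(3), np.random.randn(3)
y, cache = batchnorm_forward(x, gamma, beta)
dx, _, _ = batchnorm_backward(np.ones_like(y), cache, gamma)

h = 1e-5
x_pert = x.copy()
x_pert[0, 0] += h
y_pert, _ = batchnorm_forward(x_pert, gamma, beta)
print(dx[0, 0], (y_pert.sum() - y.sum()) / h)   # the two numbers should roughly agree
```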
Let's say this one here, W three, but at the beginning of training, everything is kind of prone to shifting around a lot. So when you change W one, that kind of changes the entire distribution of your hidden representations after the fact. So basically, whatever you learn for W three is now already almost obsolete because you've changed W one basically. And W three was kind of assuming that its inputs would would remain the same because that's what you assume in machine learning. Your input distribution is kind of the same. So, um, that's why at the beginning of training, you see these kind of large variance and with batch norm, this tends to go away. So that's pretty cool. Um, they also kind of show, they mainly show that they can reach the same accuracies as other, as other training methods, but with much, much fewer steps and they can go much higher learning rates than others. So, um, because, because of that. So that's pretty cool. Um, I encourage you to, you to check out the rest of the paper use batch norm in your networks. Sometimes works, sometimes doesn't work strangely enough. Um, but I know, I guess that's just a matter of experimentation. Alright, that was it for me. Bye bye. | [{"start": 0.0, "end": 5.38, "text": " Hi, today we're looking at batch normalization, accelerating deep network"}, {"start": 5.38, "end": 13.68, "text": " training by reducing internal covariate shift by Sergei Iof and Christian Skittes"}, {"start": 13.68, "end": 22.92, "text": " Sizzes, Skittety, yeah, not my not the best pernounser."}, {"start": 22.92, "end": 34.44, "text": " Sergei, close enough. Alright, so this is a bit of an older paper and I think it's still"}, {"start": 34.44, "end": 41.28, "text": " good to look at it, it's irrelevant and people just kind of throw batch normalization"}, {"start": 41.28, "end": 47.400000000000006, "text": " into networks and maybe don't really know what it's doing. So let's look at it."}, {"start": 47.4, "end": 54.6, "text": " So what these people argue is that in a network usually you have structures like this."}, {"start": 54.6, "end": 62.08, "text": " So if something like that, it means that your your loss kind of this is a two layer network,"}, {"start": 62.08, "end": 67.92, "text": " your loss is a composition of the first layer on the inputs you with parameters theta one"}, {"start": 67.92, "end": 73.44, "text": " and the second layer with parameters theta two. So conceptually that would look something"}, {"start": 73.44, "end": 79.96, "text": " like this, you have your input, maybe it's an image, right, and you put it through the network"}, {"start": 79.96, "end": 89.84, "text": " and it becomes some intermediate representation, right, that's x zero, that's x one and or maybe"}, {"start": 89.84, "end": 97.0, "text": " we'll call it even h one hidden representation, right, then that becomes then through the"}, {"start": 97.0, "end": 104.64, "text": " layer becomes h two and so on, right, so this this stuff here, this would be weight matrices"}, {"start": 104.64, "end": 113.6, "text": " w one, w two that transform the image into a new image or whatever. 
So what they're arguing"}, {"start": 113.6, "end": 122.24000000000001, "text": " is that well, if you only consider a single layer like the first layer here, it's kind"}, {"start": 122.24, "end": 128.32, "text": " of the same if you only consider the second layer with the h one now as the input, right,"}, {"start": 128.32, "end": 133.64, "text": " it's pretty natural to see each layer of the neural network is kind of like its own transformation"}, {"start": 133.64, "end": 141.76, "text": " taking inputs and producing some outputs. So what people usually do with the very first"}, {"start": 141.76, "end": 148.64, "text": " input here with your data in machine learning generally is so called whitening the data,"}, {"start": 148.64, "end": 161.04, "text": " which means that they have this over here. Usually data is whitened, I can't find it, but"}, {"start": 161.04, "end": 169.56, "text": " what it means is you basically want to, if you have data, let's say here is a coordinated"}, {"start": 169.56, "end": 175.88, "text": " axis, you have 2d data and you want to you might want to do a kind of a linear regression"}, {"start": 175.88, "end": 183.51999999999998, "text": " on it and you have data that's kind of like that, right, it suits you to transform this"}, {"start": 183.51999999999998, "end": 191.6, "text": " data into by, first of all, looking where it's mean is, mean is about here, and subtracting"}, {"start": 191.6, "end": 201.0, "text": " that. So here, here, and then kind of dividing by its standard deviation in each direction."}, {"start": 201.0, "end": 205.96, "text": " So there's a standard deviation here and there is a standard deviation here. So you would"}, {"start": 205.96, "end": 215.2, "text": " transform this data into something like maybe something like this. So you see that the"}, {"start": 215.2, "end": 227.36, "text": " mean, the mean is now in the middle and the, it's not so elongated anymore. So you have"}, {"start": 227.36, "end": 232.72000000000003, "text": " a much easier time to kind of learn, learn something on this data than on this data over"}, {"start": 232.72000000000003, "end": 237.96, "text": " here simply because our classifier is usually tend to rely on like inner products and if"}, {"start": 237.96, "end": 243.36, "text": " you, if you do an inner product here, you have one of these vectors here and you do some"}, {"start": 243.36, "end": 249.28000000000003, "text": " inner product, it's always going to be far away from, from the mean and thereby the inner"}, {"start": 249.28000000000003, "end": 255.24, "text": " products are going to be large, no matter what, right, whereas here, if you take a random"}, {"start": 255.24, "end": 261.2, "text": " one and then another one, so if you take two random points here, there are two vectors"}, {"start": 261.2, "end": 266.24, "text": " from the mean are almost the same, whereas if you take two random points here, they tend"}, {"start": 266.24, "end": 272.04, "text": " to look uniformly in the direction. So it's kind of the sense we know that machine learning"}, {"start": 272.04, "end": 277.24, "text": " methods work better if you widen the data first. So these people ask, hey, why do, why"}, {"start": 277.24, "end": 283.76, "text": " do we only do this at the very beginning, right? 
Why don't we, why if each layer is basically"}, {"start": 283.76, "end": 289.4, "text": " takes its input and learn something, each layer is basically a machine learning method,"}, {"start": 289.4, "end": 296.92, "text": " why don't we just widen the data to every single layer or, you know, every single subcomponent"}, {"start": 296.92, "end": 302.03999999999996, "text": " of a deep network. And that's the kind of basic step here. So they argue how this has"}, {"start": 302.03999999999996, "end": 307.44, "text": " been kind of tried before or what kind of methods you would usually get and why these"}, {"start": 307.44, "end": 316.08, "text": " aren't so good, mainly because you kind of need to intermingle this widening with training"}, {"start": 316.08, "end": 321.88, "text": " the network. And thereby, if you just go about this naively, then you would not, you would"}, {"start": 321.88, "end": 328.44, "text": " not, you would kind of produce artifacts from training. So that's, that's this section,"}, {"start": 328.44, "end": 334.68, "text": " this section here, where they argue that it's not really, you can't really go about this"}, {"start": 334.68, "end": 340.48, "text": " super naively, but what they do isn't super complicated, but they did just do it in a smart"}, {"start": 340.48, "end": 351.12, "text": " way. So we'll jump directly to that. What they say is, okay, let's look at what they"}, {"start": 351.12, "end": 357.24, "text": " call normalization of your mini batch statistics. All right, let's say we have some, some"}, {"start": 357.24, "end": 365.36, "text": " D dimensional input X, right? And we're just going to look at per dimension. So we only"}, {"start": 365.36, "end": 374.6, "text": " care about per individual dimension normalization. All right. So what do we get, what do we need"}, {"start": 374.6, "end": 381.72, "text": " to do? We're going to take the K of dimension, we're going to subtract from it the mean of"}, {"start": 381.72, "end": 388.8, "text": " the K of dimension within a mini batch, right, within a mini batch of data. So mini batch"}, {"start": 388.8, "end": 394.96000000000004, "text": " may be something like 32 examples or 100 examples or something like this. And then we'll"}, {"start": 394.96000000000004, "end": 405.88000000000005, "text": " divide by the variance of that mini batch. So this is, this is done over here in, in basic."}, {"start": 405.88, "end": 413.84, "text": " So you compute mu, B, mu of the mini batch, which is simply the empirical mean of that"}, {"start": 413.84, "end": 421.0, "text": " of the data at that particular layer. And then you compute sigma squared B, which is"}, {"start": 421.0, "end": 429.68, "text": " simply the, the empirical estimate of the variance of that of computed on that particular"}, {"start": 429.68, "end": 437.64, "text": " mini batch. And then you transform your data by subtracting that and by dividing it by"}, {"start": 437.64, "end": 444.52, "text": " this. And this, this, this constant here is simply to prevent from dividing to by, you"}, {"start": 444.52, "end": 454.0, "text": " know, two, two small values. So you get like in numerical problems. So what does it do?"}, {"start": 454.0, "end": 461.8, "text": " It does basically what we, what we did above. 
But now what they say is, okay, we want to"}, {"start": 461.8, "end": 469.16, "text": " make sure that this transformation can potentially, you know, represent the identity because"}, {"start": 469.16, "end": 476.0, "text": " sometimes or like a natural, natural, if you had to do something with your input when"}, {"start": 476.0, "end": 481.12, "text": " giving it to the next layer, like the very baseline is to do nothing to it, right, to"}, {"start": 481.12, "end": 489.56, "text": " do the identity transform. But if you do, if you do this, you probably won't end up with"}, {"start": 489.56, "end": 494.16, "text": " the identity transform except if the mean is exactly zero and the variance is exactly"}, {"start": 494.16, "end": 503.36, "text": " one, right. So what they say is, okay, we'll also introduce two new parameters to this,"}, {"start": 503.36, "end": 512.36, "text": " here, this, this, uh, gamma and this beta here. And these are learned like other parameters"}, {"start": 512.36, "end": 519.2, "text": " in the network. We learn the parameter gamma and beta and gamma and beta are simply, gamma"}, {"start": 519.2, "end": 527.08, "text": " is simply a scalar that this transformed X is multiplied by and beta is simply a scalar"}, {"start": 527.08, "end": 533.28, "text": " that is then added to it. So in each dimension of your hidden representation, you basically"}, {"start": 533.28, "end": 541.72, "text": " learn how to scale it and how to shift it, scale and shift after you've done the normalization."}, {"start": 541.72, "end": 550.12, "text": " So first, first you do the normalization, what is it? Right. First you go from this type"}, {"start": 550.12, "end": 556.1999999999999, "text": " of data to this type of data and then you say, well, but maybe it's actually more beneficial"}, {"start": 556.2, "end": 563.5200000000001, "text": " to, you know, have it not centered or whatever. So so that the network can actually learn"}, {"start": 563.5200000000001, "end": 568.6400000000001, "text": " to transform this somewhere. This might seem, this might seem redundant, but it's really"}, {"start": 568.6400000000001, "end": 578.0400000000001, "text": " powerful because what you're basically saying is that, okay, this probably isn't the best,"}, {"start": 578.0400000000001, "end": 583.72, "text": " you know, distribution. This probably is better, but if the network kind of, if the back"}, {"start": 583.72, "end": 588.72, "text": " propagation algorithm or the training algorithm decides that this first representation was"}, {"start": 588.72, "end": 594.5600000000001, "text": " actually useful, it has the option of going back, but it also has the option of going to"}, {"start": 594.5600000000001, "end": 603.08, "text": " any other kind of form of distribution. So so it's it's pretty powerful in terms of what"}, {"start": 603.08, "end": 607.52, "text": " it does. Okay, it's not really correct here that it has the power to go to any distribution"}, {"start": 607.52, "end": 615.8, "text": " because it's only kind of per dimension scalar that it learns, but still it the potential"}, {"start": 615.8, "end": 627.1999999999999, "text": " to transform the distribution by these learned scalars is pretty big. Alright. So basically"}, {"start": 627.1999999999999, "end": 633.8, "text": " that's it. That's that's that's the whole that's the whole shebang. 
You normalize your"}, {"start": 633.8, "end": 641.4399999999999, "text": " inputs to each layer by this formula and then you introduce new parameters that you learn"}, {"start": 641.4399999999999, "end": 652.4, "text": " along with your network parameters. So this kind of has some implications. First of all,"}, {"start": 652.4, "end": 661.92, "text": " one implication is this here. If you build a batch norm into your network, it kind of"}, {"start": 661.92, "end": 668.8399999999999, "text": " learns this this plus beta, which is basically a bias parameter. If you think of a traditional"}, {"start": 668.8399999999999, "end": 672.4399999999999, "text": " kind of fully connected layer, this isn't a fully connected layer because this scalar"}, {"start": 672.4399999999999, "end": 678.0, "text": " here is only per dimension, but the bias in a fully connected layer is also just per dimension."}, {"start": 678.0, "end": 685.0, "text": " So the beta is equal to a bias in a fully connected layer. So if you have a batch normalization"}, {"start": 685.0, "end": 695.52, "text": " after or after a after a fully connected or convolutional layer or anything that can or"}, {"start": 695.52, "end": 701.84, "text": " sometimes has a bias parameter, it's almost not worth it to kind of learn both. So you"}, {"start": 701.84, "end": 707.68, "text": " would rather just only have the one from the batch normalization and leave and use the"}, {"start": 707.68, "end": 713.0, "text": " convolution or fully connected layer without a bias. So that's kind of one implication."}, {"start": 713.0, "end": 721.12, "text": " Another implication is we have just lost the kind of the ability to have deterministic test"}, {"start": 721.12, "end": 729.6, "text": " time inference. So much like dropout, which is kind of random dropping out of nodes here,"}, {"start": 729.6, "end": 735.84, "text": " we have quantities that depend on the mini batch. So not only the individuals sample,"}, {"start": 735.84, "end": 741.4, "text": " but they actually depend on what other samples are randomly selected to be trained with"}, {"start": 741.4, "end": 748.92, "text": " that particular sample. So that's kind of awkward if you kind of want to have some deterministic"}, {"start": 748.92, "end": 760.92, "text": " reproducible thing at test time. So what people do is, and here, this is discussed, what"}, {"start": 760.92, "end": 775.36, "text": " people do is while training, they use these quantities, the quantities we just discussed,"}, {"start": 775.36, "end": 780.92, "text": " but they keep kind of a running average over them. So what I would do is in each mini batch,"}, {"start": 780.92, "end": 787.5999999999999, "text": " I would compute this mini batch mean and this mini batch variance. And I would keep quantities,"}, {"start": 787.6, "end": 798.2, "text": " I would keep running averages of them. And at test time, I'm going to plug in these running"}, {"start": 798.2, "end": 804.6, "text": " averages. So there's nothing dependent on the mini batch anymore. So that's pretty"}, {"start": 804.6, "end": 813.2, "text": " neat trick, I think. And you can even imagine at the end of your network training, simply"}, {"start": 813.2, "end": 822.0400000000001, "text": " using these here to kind of fine tune the weights to these exact parameters. So that's one"}, {"start": 822.0400000000001, "end": 828.84, "text": " thing that's that's kind of you have to pay attention to. 
So usually in neural network"}, {"start": 828.84, "end": 834.0400000000001, "text": " libraries, there are there are parameters you can set whether or not this network is in"}, {"start": 834.04, "end": 843.12, "text": " train mode or in test mode. And depending on that, the batch norm layer will use the mini batch"}, {"start": 843.12, "end": 851.24, "text": " statistics or will use the kind of all over data sets statistics. All right, the second"}, {"start": 851.24, "end": 858.0799999999999, "text": " thing is training. So how do you actually train this thing? Because now you can't just,"}, {"start": 858.08, "end": 868.2800000000001, "text": " right, we we started with our with our multi layer network up here. Right. F2, F1, right."}, {"start": 868.2800000000001, "end": 873.5600000000001, "text": " First, I'm going to put my things through F1 and then I'll put my things through F2, right."}, {"start": 873.5600000000001, "end": 881.08, "text": " And the the back propagation here is is quite easy. So let me get rid of this. The back"}, {"start": 881.08, "end": 889.5600000000001, "text": " prop here is quite easy. You go to L and maybe you want to drive it by theta one. Right."}, {"start": 889.5600000000001, "end": 897.76, "text": " So you first going to drive it by the hidden representation one and then the hidden representation"}, {"start": 897.76, "end": 904.32, "text": " one with respect to theta one. So the hidden representation would be whatever comes out"}, {"start": 904.32, "end": 912.12, "text": " of here or H1 sorry, not I. And so on. So you kind of chain rule your way through here."}, {"start": 912.12, "end": 920.6, "text": " But now in between these layers here, you have these batch norm things. And so the the"}, {"start": 920.6, "end": 930.44, "text": " authors discuss how we now do back propagation in the face of these things. So here is basically"}, {"start": 930.44, "end": 939.8000000000001, "text": " what they discuss. It actually pays to to have a graph of what's going on. So here is X."}, {"start": 939.8000000000001, "end": 948.32, "text": " This is the input to our layer. Right. So what do we compute from X? We compute mu. Let's"}, {"start": 948.32, "end": 954.5200000000001, "text": " just call it me or maybe it's called here. Right. This is the mean of X of all the X's."}, {"start": 954.52, "end": 967.4399999999999, "text": " So this is X XI until X. Well X1 until XN. This is the mini batch. We compute the mean."}, {"start": 967.4399999999999, "end": 974.88, "text": " And then from this and from this, we can compute this estimate of the variance. Right. We"}, {"start": 974.88, "end": 983.16, "text": " need both. Right. So we now have the mean and the variance over the mini batch. So we're"}, {"start": 983.16, "end": 991.88, "text": " going to take one of these X's just the if one, right. And we're going to use this and"}, {"start": 991.88, "end": 1006.92, "text": " this to compute X. What? Compute X. Is it called hat? Yeah, probably it's called X hat."}, {"start": 1006.92, "end": 1017.68, "text": " Right. Yeah, we saw about X hat. So X hat is X X hat I is X I minus mu B divided by"}, {"start": 1017.68, "end": 1024.6, "text": " sigma squared B. The square root of it plus this kind of little constant here. We're going"}, {"start": 1024.6, "end": 1029.6, "text": " to leave away the little little constant for clarity sake. Actually, it's in the calculations"}, {"start": 1029.6, "end": 1038.6, "text": " here. But right. So then we have a new parameter, gamma. Right. 
And we're going to use it and"}, {"start": 1038.6, "end": 1053.12, "text": " our X hat to compute. And also this beta here to compute Y hat Y or Y just Y. And of course,"}, {"start": 1053.12, "end": 1061.8799999999999, "text": " this is I. This is I. Right. So and this here is our final output of the layer. So you can"}, {"start": 1061.8799999999999, "end": 1066.9599999999998, "text": " see now the back propagation paths if you go through here. So the back propagation path,"}, {"start": 1066.9599999999998, "end": 1076.36, "text": " if we have some loss coming in here, we backprop through Y I. Right. So here is del L, the"}, {"start": 1076.36, "end": 1086.24, "text": " loss to Y I that's here. Right. So if we want the, for example, the backprop with respect"}, {"start": 1086.24, "end": 1093.28, "text": " to beta, what we do is we simply, and this is, this is over the mini batch, of course,"}, {"start": 1093.28, "end": 1098.8, "text": " we simply backprop here through this path. So in our, in our formula for beta, there should"}, {"start": 1098.8, "end": 1106.8799999999999, "text": " be only mention Y I. And that's what we see here, right. In our formula for gamma, there"}, {"start": 1106.8799999999999, "end": 1115.84, "text": " should only be mention of Y I. So because the path leads only through Y I. Oh, no, I'm"}, {"start": 1115.84, "end": 1122.28, "text": " sorry. Actually, because the of the, what I mean is of the derivative with respect to"}, {"start": 1122.28, "end": 1128.76, "text": " Y I, of course, the, we also have to pay in taking to attention that this is multiplied here"}, {"start": 1128.76, "end": 1133.6, "text": " by this X hat I where, of course, that's not the case when we just add something because"}, {"start": 1133.6, "end": 1145.24, "text": " the derivative of two of, of an addition, like X plus B with respect to be, this regards"}, {"start": 1145.24, "end": 1155.04, "text": " X, whereas if it's X times B, it doesn't this risk or this regard X. Alright. So if we,"}, {"start": 1155.04, "end": 1161.68, "text": " yeah, so you can, you can go back. So the interesting bit basically comes when we, when I find"}, {"start": 1161.68, "end": 1168.88, "text": " out, okay, how, because here is, here is another layer right down here somewhere, there is"}, {"start": 1168.88, "end": 1175.64, "text": " another layer. And we basically want to know this input here to the next layer. How do"}, {"start": 1175.64, "end": 1181.88, "text": " we compute it in the face of this mess here? Because we, it's not, it's not so easy, right."}, {"start": 1181.88, "end": 1187.3600000000001, "text": " So you have to see we have three paths here. We go back through X and let me get rid of"}, {"start": 1187.3600000000001, "end": 1197.8000000000002, "text": " these blue ones. We go, we go back through X hat directly to X. We go one path is through"}, {"start": 1197.8, "end": 1206.0, "text": " here. And one path is through this, this mu. So basically have to compute derivatives with"}, {"start": 1206.0, "end": 1211.48, "text": " respect to sigma squared and mu. And for that, we need to derivative with, with respect"}, {"start": 1211.48, "end": 1218.2, "text": " to X hat. So basically the way back prop works is you just find all paths from where you"}, {"start": 1218.2, "end": 1225.96, "text": " are, to where you want to go. And then you, you kind of intuitively compute this. 
So this"}, {"start": 1225.96, "end": 1231.8400000000001, "text": " one here is the easiest, the easiest, as you see here, they did it on top. Well, first"}, {"start": 1231.8400000000001, "end": 1243.44, "text": " they did this one, which is simply going from Y to X hat I start. Then they go from X hat"}, {"start": 1243.44, "end": 1251.8, "text": " I to sigma squared, which simply involves kind of the reverse operations of how you got"}, {"start": 1251.8, "end": 1259.04, "text": " it. This is simply a derivative formula here of the, of the division by square root."}, {"start": 1259.04, "end": 1267.8, "text": " Um, then you can use this, you can use this quantity here to compute that. So basically"}, {"start": 1267.8, "end": 1271.8, "text": " you just go in reverse of how you computed the operations in the first place. We said"}, {"start": 1271.8, "end": 1277.9199999999998, "text": " we needed mu B to compute sigma squared B. Now we need the derivative with respect to sigma"}, {"start": 1277.92, "end": 1284.24, "text": " squared B in order to compute the derivative to mu B. Um, and once you have that, and you"}, {"start": 1284.24, "end": 1295.5600000000002, "text": " see the, the addition here, the ad here is the fact that whoops, is the fact that two"}, {"start": 1295.5600000000002, "end": 1307.72, "text": " things contribute to mu B. So two paths lead to lead to mu B. One path is from, from here,"}, {"start": 1307.72, "end": 1316.44, "text": " and one path is through here. Right. So here, this should be a green. Um, since two paths,"}, {"start": 1316.44, "end": 1323.0, "text": " you have two components to your derivative and you add each of them. Uh, so that's how"}, {"start": 1323.0, "end": 1331.76, "text": " that's going to be. And then this here, with respect to this X here, we have three paths,"}, {"start": 1331.76, "end": 1338.52, "text": " right, because we have three arrows going out of XI one here, one here, and one here."}, {"start": 1338.52, "end": 1344.48, "text": " So you have to take into account all of them, right. So this one is pretty easy. That's"}, {"start": 1344.48, "end": 1352.6, "text": " the first one. Then the second one, um, sorry, this, the second one, uh, goes through this"}, {"start": 1352.6, "end": 1357.44, "text": " mu B, which we've already computed. And the third one goes through the, the sigma, which"}, {"start": 1357.44, "end": 1365.72, "text": " we've also already computed, right. And these are added, um, because all the paths, you"}, {"start": 1365.72, "end": 1369.88, "text": " have to, I'll add all the paths in the backprop algorithm. Maybe we'll do a, actually, video"}, {"start": 1369.88, "end": 1377.2, "text": " on backprop, uh, later to, to get to really dive into how this works. Um, and finally,"}, {"start": 1377.2, "end": 1382.44, "text": " they, uh, they compute these, these we've already discussed. So in essence, the whole thing"}, {"start": 1382.44, "end": 1388.68, "text": " is differentiable. Um, you just have to kind of pay attention how, how to do it. Um, but"}, {"start": 1388.68, "end": 1395.3600000000001, "text": " the whole thing is differentiable. And thereby, you can basically backprop through a network"}, {"start": 1395.3600000000001, "end": 1404.64, "text": " that has these batch normal layers in a built in. So that's pretty cool. Um, I just want"}, {"start": 1404.64, "end": 1410.56, "text": " to quickly jump over to the results. 
Um, and yeah, keep in mind, this paper is from 2015."}, {"start": 1410.56, "end": 1417.1599999999999, "text": " So networks weren't that big, uh, back then, um, we didn't know that much about training"}, {"start": 1417.1599999999999, "end": 1423.36, "text": " yet. But the interesting thing is they basically discovered, look, we can, we can have drastically"}, {"start": 1423.36, "end": 1429.48, "text": " fewer steps in order to reach the same accuracies. And these are kind of the activations of the"}, {"start": 1429.48, "end": 1434.52, "text": " network over the course of training. So without patch norm, you see, especially at the beginning,"}, {"start": 1434.52, "end": 1441.6399999999999, "text": " there's large fluctuations in the activations. And, um, because, because they use batch"}, {"start": 1441.6399999999999, "end": 1448.96, "text": " norm, now there's no such thing. So basically, the reason for that is pretty, is pretty simple."}, {"start": 1448.96, "end": 1453.56, "text": " Right. While you learn, and you learn your layered representation here, let's say there's"}, {"start": 1453.56, "end": 1460.6, "text": " X and X is fed through layers. And there is hidden representations each in between. Right."}, {"start": 1460.6, "end": 1466.76, "text": " So you're trying to learn all these parameters. Let's say this one here, W three, but at the"}, {"start": 1466.76, "end": 1471.32, "text": " beginning of training, everything is kind of prone to shifting around a lot. So when you"}, {"start": 1471.32, "end": 1478.7199999999998, "text": " change W one, that kind of changes the entire distribution of your hidden representations"}, {"start": 1478.7199999999998, "end": 1485.6, "text": " after the fact. So basically, whatever you learn for W three is now already almost obsolete"}, {"start": 1485.6, "end": 1491.9599999999998, "text": " because you've changed W one basically. And W three was kind of assuming that its inputs"}, {"start": 1491.9599999999998, "end": 1495.76, "text": " would would remain the same because that's what you assume in machine learning. Your input"}, {"start": 1495.76, "end": 1501.76, "text": " distribution is kind of the same. So, um, that's why at the beginning of training, you see"}, {"start": 1501.76, "end": 1507.6399999999999, "text": " these kind of large variance and with batch norm, this tends to go away. So that's pretty"}, {"start": 1507.6399999999999, "end": 1514.56, "text": " cool. Um, they also kind of show, they mainly show that they can reach the same accuracies"}, {"start": 1514.56, "end": 1519.96, "text": " as other, as other training methods, but with much, much fewer steps and they can go much"}, {"start": 1519.96, "end": 1526.8799999999999, "text": " higher learning rates than others. So, um, because, because of that. So that's pretty"}, {"start": 1526.8799999999999, "end": 1531.8, "text": " cool. Um, I encourage you to, you to check out the rest of the paper use batch norm in your"}, {"start": 1531.8, "end": 1539.0, "text": " networks. Sometimes works, sometimes doesn't work strangely enough. Um, but I know, I guess"}, {"start": 1539.0, "end": 1543.72, "text": " that's just a matter of experimentation. Alright, that was it for me. Bye bye."}] |
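A minimal NumPy sketch of the batch norm forward and backward pass walked through above (this is not the paper's reference code; the (N, D) batch layout, the eps constant, and the variable names are assumptions). It shows the chain-rule paths described in the transcript: beta's gradient is a plain sum because beta only enters additively, gamma's gradient carries x_hat because gamma enters multiplicatively, and the gradient with respect to x adds up the contributions from all three paths, directly through x_hat, through sigma^2, and through mu.

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (N, D) mini-batch; gamma, beta: (D,) learned scale and shift
    mu = x.mean(axis=0)                      # mini-batch mean mu_B
    var = x.var(axis=0)                      # mini-batch variance sigma^2_B
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalized activations
    y = gamma * x_hat + beta                 # scale and shift, the layer output y_i
    return y, (x, x_hat, mu, var, gamma, eps)

def batchnorm_backward(dy, cache):
    # dy: dL/dy, shape (N, D)
    x, x_hat, mu, var, gamma, eps = cache
    N = x.shape[0]
    inv_std = 1.0 / np.sqrt(var + eps)

    dbeta = dy.sum(axis=0)                   # beta is only added, so x_hat does not appear
    dgamma = (dy * x_hat).sum(axis=0)        # gamma multiplies x_hat, so x_hat does appear

    dx_hat = dy * gamma                                                   # path through x_hat
    dvar = (dx_hat * (x - mu) * -0.5 * inv_std**3).sum(axis=0)            # path through sigma^2
    dmu = (dx_hat * -inv_std).sum(axis=0) + dvar * (-2.0 * (x - mu)).mean(axis=0)  # two paths reach mu
    dx = dx_hat * inv_std + dvar * 2.0 * (x - mu) / N + dmu / N           # all three paths into x, added
    return dx, dgamma, dbeta
```

Since every step here is differentiable, gradients flow through the normalization statistics themselves, which is what lets a network with built-in batch norm layers be trained end to end.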
Yannic Kilcher | https://www.youtube.com/watch?v=-9evrZnBorM | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | https://arxiv.org/abs/1810.04805
Abstract:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.
BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7% (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%.
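To make the abstract's "fine-tuned with just one additional output layer" concrete, here is a hedged NumPy sketch of such a head: a single linear map from the final hidden state of the classification token to task labels. The hidden size (768 for BERT-base), the three-label example, and the softmax readout are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

hidden_size, num_labels = 768, 3          # BERT-base hidden size; e.g. a 3-way inference task
rng = np.random.default_rng(0)

# The only task-specific parameters learned from scratch during fine-tuning:
W = rng.normal(scale=0.02, size=(hidden_size, num_labels))
b = np.zeros(num_labels)

def classify(cls_vector):
    # cls_vector: final-layer hidden state of the [CLS] token, shape (hidden_size,)
    logits = cls_vector @ W + b
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()            # distribution over the task's labels

print(classify(rng.normal(size=hidden_size)))
```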
Authors:
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova | Hello everyone. Today we're looking at Bert pre-training of deep bi-directional transformers for language understanding by Jacob Devlin and Min Wai Chang, Kenton Lee, Christina Tatsunova. These are people from Google AI language so you're about to see the most hyped model currently. So basically Bert is a model that takes as an input language, so token sequences and outputs various things. So it can be made to do various things, almost any NLP task with basically little trading because the Bert model comes pre-trained on a very large corpus and we're going to see how that's done. Alright so the paper introduces basically the kind of current state of the art of of language models and they say okay what they want to do new is they want to do bi-directional training. I'm going to go down here and see their comparison. Right so here they compare three models and these are representative of three types of models. So first here is for example the open AI transformer. So this is a this is the classic or one of the classic transformer models. We've talked about transformers before and the attention is all you need video. So what a transformer does is it uses attention and for those who forgot what attention is if you have like a token sequence A, B, C, D, E then a classic model to use that would be an LSTM. So the LSTM would go here it would like have a vector representation a hidden state and then it would take this A it would take this hidden state and compute a new hidden state and then it would go on and take the B and incorporate this into the hidden state. The hidden state kind of always stays the same size but the the recurrent model will update the hidden state as it goes over the input sequence. So this is one way of dealing with language but people have kind of done another way and that's the attention based mechanism is where basically for each of these you compute a compute a vector independently of each other. So each one has a vector representation and then you have a vector representation of what you want. It's called an attention head and you can have multiple of these but in the simplest case let's just say we are looking for the subject in this sentence. So A, B, C, D, E is a sentence and one of the words is the subject of the sentence. Then we could have a vector here that's called a query vector. So these are called these are called values V and this is called a query Q and then these vectors are the same size. I don't very poor at this. You're going to compute the inner product with each of these. So the inner product you want to do, okay I already screwed this up, you're actually computing two vectors for each token. So one is the but this is this is not too important for this step. One is the key and one is the value. All right, value and this is called the key and you have your query Q and you compute the inner products actually with the key. Sorry. Values aren't too important for what I want to demonstrate but you compute key with query. All right and that gives you basically for each key it's going to give you an output and so you're going to have you're going to have a for this A, B, C, D, E you're going to have like a this much inner product, this much inner product, this much this much this much inner product. 
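As a minimal sketch of the key and query scoring just described, assuming a single attention query and plain (unscaled) dot products, the per-token scores, the softmax over them, and the weighted average of the values can be written as:

```python
import numpy as np

def attend(query, keys, values):
    # query: (d,); keys, values: (seq_len, d), one key and one value vector per token
    scores = keys @ query                     # inner product of the query with each token's key
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax: a distribution over the tokens
    return weights @ values, weights          # weighted average of the values

rng = np.random.default_rng(0)
d, seq_len = 4, 5                             # e.g. the five tokens A, B, C, D, E
context, w = attend(rng.normal(size=d),
                    rng.normal(size=(seq_len, d)),
                    rng.normal(size=(seq_len, d)))
print(w)                                      # how strongly the query aligns with each token
```

In a full transformer this is repeated with one query per token (and per head and layer); the sketch only shows the single "which one is the subject" query from the example.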
So after maybe a softmax you have like a nice distribution and then you can say aha here this is the biggest the biggest alignment of the particular key with my query and my query is which one is the subject. You're of course you're going to train all these queries and keys producing procedures. So this is this is attention mechanism and if you then want that that's where the value comes in you can if your query is not only which one's the subject but it's actually a generic query that okay I'm going to extract some information from some token that I'm going to use later then you would actually take B and say aha B is the best one okay I'm going to take the value of B or basically going to take a weighted average of the values according to these values here. So this is very shortly what attention is if you if you want to lengthen the explanation go to the attention is all you need video. So open AI GPT uses attention here and it's a it's a left to right transformer that's what it says here and what that means is it goes also step by step but in each step it uses attention so here is the input tokens and as you can see it goes in this direction so each one of the and these are multiple layers of attention so you can also layer these of course so each one of the attention intermediate steps can only attend to whatever is onto the left of it right you can see this here so it goes step by step and it goes left to right basically so it can it can kind of take the sequence in as a left to right input basically what that means is whenever you interpret a particular token you can your context is only to the left of that token you don't know what's coming yet it's like when you when you read a sentence from left to right but then as humans unconsciously we probably go and at the end of the sentence kind of makes sense of the thing as a whole but here the model is forced to make sense of what the thing only from whatever is to the left of it so that's a basic limitation of these left to right models then there's another approach which is called Elmo which has been popular recently as a substitute for word vectors so if you know word vectors word vectors are basically the kind of first stage in most language processing task where for each word say the cat sat on something for each word you have you have a big giant table and for each word you associate a vector of fixed size dimension right so you place every word in a vector space and these vectors you pre compute for something like word to veck or glove and to give you a nice way to basically deal with these words in a canonical way and you can pre-trained the word vectors that's all really nice but people have realized okay words kind of multiple meanings and words can kind of slightly change meaning depending on words around them and so on so what Elmo does is Elmo uses two LSTM's one LSTM goes into this direction one LSTM goes into this direction and basically a single LSTM as we saw before it takes in the input sequence one by one so here E1 and E2 E2 E2 and it produces hidden states at each step produces a hidden state that is a result of previous hidden state and the current token and and then what it says is okay now these hidden states here basically these are now the embeddings of the token E1 E1 and so on right these are the embeddings so the word vectors as to say aren't are no longer just one vector per word so they're not an isolation anymore but basically you need the entire sequence to compute the word vectors as a result of this of 
this LSTM this is more powerful because it can give individual words multiple or each it basically each word has kind of a unique embedding depending on the surrounding words you would still hope that given word would have similar similar embedding or similar word vector all across the language but you can kind of fine tune it to the particular sentence it is in and also you can completely change its meaning if it's if it's kind of a word that has a completely new meaning in that sentence so basically uses two LSTM's one as as I said here forward one backward these are also have multiple layers and so and each of these produce one such hidden vector per token and you simply concatenate the two from the from so from here these this LSTM on the left produces one this LSTM on the right produces maybe here another one and you simply concatenate the two to get the the final embedding the final word vector for each token so the fundamental limitation here is that this is kind of you have information from the left and you have information from the right so other than here the original transformer you actually have you actually can condition on the left context and the right context but it's very it's very shallow because it's simply a concatenation of the left facing LSTM and the concatenation of the right facing LSTM and and these ultimately intrinsically they have nothing to do with each other they so you simply concatenate the two things that the left facing LSTM still can only see to the left and the right facing LSTM still can only see to the right so you basically have two half blind models and then you kind of concatenate so the it's still suboptimal because what you want is you want a single model to output your word vectors or to interpret the language that can look at both the left and the right at the same time and then incorporate information from both of them simultaneously and not just at the end by concatenation this is what BERT does so BERT here and this is kind of what they claim is the new contribution BERT at each in each layer here of the model the let's look at this and for a particular token they look at all of the context so every every other token in the input they look at that and so the basically it seems kind of it seems kind of obvious but it's it's actually there's there's reasons why these other models don't do this but so this is the entire point of BERT is at each layer in this in this transformer architecture is still an attention mechanism by the way so that there's there's the mechanism of attention here and here is exactly the same or almost the same they actually keep it close on purpose in order to compare but now we have attention not only to the left but also to the right to everything right so why do these other model why do for example the open AI transformer only look to the left that's because somehow you need to task to train on right and most of the time if you especially if you want unsupervised training you going to do something like language modeling and language modeling what you have is a sentence A B C D and you're asking what comes next here all right so by by the definition of the task you can only look to the left that's that's just how how these like how the task works so it makes sense that these other models kind of do this because they they pre-training this BERT has a different pre-training because they can they can only they have to look to the left and the right the other thing is what you want to use the model for so the good thing 
if you go left to right is you can use the model now for generating language in the same plane if if you have A B C D and you ask and the model is trying to produce the next character only looking to the left right then you can you can say what's the next character the model says E and then you can feed the same thing into the model and say okay what's now the next character what's now the next character G so there's pretty useful if you only look to the left you can actually use the model then for generating language which is something you can't do with BERT or it's not it's not really obvious now how to do it with BERT people are I know people are investigating into language producing producing entire sequences with BERT but as you guys it's not super clear how to do this with this model that being said the model is pretty good at pretty much everything else so let's jump in to how they train they train let's see where we are here they train using masked basically masked language modeling so I want to actually go into that first mask language modeling what they do is they basically replace some words by the mask token and they don't have a good they don't have a nice all right they have one here all right here if you just look at kind of the the top sentence here the man went to mask store right don't don't worry about the set and so on just this the man went to mask store and the model simply asked to predict what's here which word is there so it needs to incorporate information from the right and from the left to do this so that's basically how you train it they simply drop out some of the words some of the time and they they have different techniques so you can clearly tell a lot of work has gone into kind of fine tuning everything in this model like how to train it and so on so let's say we don't always do this sometimes we do this other thing and sometimes we do that and there's several ways of bison this model but basically you do this mask language modeling and then because they also want to evaluate on let's say entire sequence tasks or tasks that span multiple sentences what they do is the second free training task at the same time as you can see here where they feed two sentences so that's the first sentence that's the second sentence they feed these two sentences as an input so at first they have this token and these separate the sentences and then they ask the model to predict a label is next and is next is next is true if the second sentence follows the first sentence so it's if it's like a logical continuation and the way you do this on supervised is really easy you take a big giant corpus and you take a sentence for the first sentence and then 50% of the time you take the next sentence in the corpus and the label is true and 50% of the time you take some random sentence here you say for example the man the man mask to the store and the next sentence is penguin mask or flightless birds and that's kind of a random sentence so the model is asked to predict well that's probably not the next sentence following this first sentence so you do these two tasks you pre-train and you can do this on supervised you don't need supervised data for that you just need a corpus and they do this for a long time with a lot of data and the model itself is giant has 24 I think of these transformers layers so it's giant and then you kind of pre-train this model here's a here's an illustration of some extra things so what they do is they first this is the input up here so the first token is this 
CLS token which is kind of the start token and then this is the first sentence then the set is the separator of two sentences then this is the second sentence and then again set we'll get to these hashtags in a second but first they say okay first we have the token embedding so they kind of start with with the original concept of word vectors at the very basis because you need to start with actually going to a vector space to use these models but they don't they then they then kind of transform these through the transform layers they also use segment embeddings and segment embeddings as you can see here is simply kind of a binary label EA being the label for the first sentence and eb that being the label for the second sentence so just the model can differentiate which one's the first and which one's the second because it's kind of hard to learn for a transformer architecture that the the set tokens kind of separate the sentences so you kind of want to help it and the last thing is positional embeddings and we've already talked about these in attention as all you need this where you can kind of the model since it's a transformer it doesn't go step by step it doesn't go one done done done done so it's kind of hard for the model to make out how far things are apart from each other how far to tokens if they're neighbors or if they're really far apart and these positional embeddings kind of help the model decide if two tokens are close to each other in input if they're right if they're just neighbors or if they are actually really far apart all right so this is this is how the kind of first input is constructed out of these embeddings and then it's fed through these transformer layers as we saw with the mask that I'm task and the is next task I want to quickly get to these hashtags what what they mean these so the input here is separated into word pieces so-called word pieces and what that is is so in language processing task you have kind of a choice you have you have a choice of how to tokenize your input so what let's look at a sentence here subscribe to PewDiePie so this is a sentence and the sentence is rather let's say wordwise complicated so why why my language model have problem with this so first you need to tokenize this sentence all right so what most people do is they say okay here are the word boundaries we're not tokenize this into three segments first is subscribe to PewDiePie okay so three things and each of these now needs a word vector associated with it now two the thing is the word vectors let's assume you have them pre-trained or something in any case you need a big table a big big table and this goes down here where for each word a the two I you you have a vector associated with it right so you need to to keep this in your model and as you can as you know English has a lot of words here so this table is gonna be really big and the problem is how do you make this table right okay you could make it kind of dynamically and so on but in general you're gonna create this table with all the words you know and that's going to be too big because English has so many words and then you can say all right we'll only take the top whatever is used in 90% of the language which turns out to be it's kind of perito distributed so it turns out to be like 5% of the words are used in 90 percent of the language so you just take these but then you you're gonna have the problem okay here two two is not a problem why not two is used super often we're gonna have it at the very top somewhere and 
we're gonna have a vector subscribe is it's already it's not so common right so maybe you have a word for it somewhere down but then PewDiePie is a name so there is no there's not even a word like that's not even a word it's just so what you what you usually do what people usually do is they have this out of vocabulary token and then they have a vector associated somewhere here with the out of a vocabulary token is it whatever and I don't know what it is I just know that I don't have it in my vocabulary and the model kind of deals that that's kind of it's not it's not really ideal especially if you don't want to generate language also your model tends to generate out of vocabulary tokens if you allow that if you don't allow that you have a problem during training so it's all kind of messy what's the alternative alternative is to go character level so let's look at character level in character level you say all right my words are obviously made of characters and characters I'm just gonna split at each character right here the white space can be a character do so I'm gonna split at each character and then I'm simply going to have one vector for each character and there's only like 20 something six of those and so I can keep 26 vectors but this tends to be rather problematic because a character by itself having a meaning that that can be encapsulated by a vector is kind of it's kind of shady because the character character by itself usually doesn't mean and it doesn't have a meaning so what's the solution here the solution is to go in between the solution is to say well let's actually go for word pieces and you can kind of think of them as syllables but you can you can split you can make them in a way that you have a fixed size vocabulary say okay I have 4,000 entry places in my big table it's I can afford 4,000 size table so first of all I'm going to have for each character a b c d and so on I'm gonna have a vector but then I only have 26 have 3000 some left I'm going to have also the most common words now a is already here but maybe I can have two and from and so the most common words they also get there and then for the other things I'm going to split the words maybe in sub scribe right so these are two syllables and sub can be pre kind of a prefix to many things and I only need then one one so I've sub here sub I only need one vector for that and then the rest if scribe scribe is by the way also word so I can have that but if scribe weren't in my vocabulary I can divide scribe then up into into characters and then describe them with the character level so basically I can mix and match here I can sub that's that I have that and then scribe I don't have it I don't have any of the pieces but so I can just use the character the character so this would would be sub and then s c r i b e so these these would be the tokens that I work with now as as my input and this this tags here so this is what would happen to PewDiePie you could simply split along each character so you basically this kind of an interpolation between the token model and the character model and it's really neat and it usually works quite well the as I said the the hashtag sign here simply means that these two have originally been one word and now this this in here is just the word piece token is a really good example where word work piece come in because play by itself is a word and I can make playing instead of having an own vector for that I can divide it into play which already has a meaning and presumably playing and play 
would have similar meaning so it makes sense to have play as a to that's the token singled out here and then in as is as a suffix also makes sense to have a token for that in my table and then I simply have these two tokens here that probably already gives me more information than simply having the word playing right by the way you should subscribe to PewDiePie just FYI all right let's go on so we do we do work piece tokenization we do the mask language model we do the next sentence prediction pre-training what do we have now we have a model that can really really well predict some masked words now how do we use it now they evaluate on these I believe it's 11 tasks 11 different tasks of or is it I don't know how many it is it is a lot with the same model so this pre-trained model then how claim can be fine tuned to do all of these tasks and it gets up it can select state of the art on everyone it's crazy so how do they fine tune it so the easiest tasks are the one are the so-called sequence level task where you basically have the sequence and you're you're about to predict one class label for the entire sequence so here with the sentence pair classification tasks for example the task we saw before that is next task but there is more sophisticated tasks that you need kind of supervised data for and so with the supervised data you'd have a class label that you train on so what you do is let's look at one of them M and M and L I they had it up here nope here multi-genre natural language inference crowdsourced entailment classification task so given a pair of sentences the goes to predict whether the second sentence is an entailment contradiction or neutral with respect to the first one right two sentences and you're about to predict what which one of these three labels it is so you put the two sentences here but can already take two sentences as an input as we saw right the embeddings are the the the a and b embeddings and the position of embeddings are left out of the picture here but they would be added to it and these these would be the embeddings for it and then you pass this through the birth model and this is the final layer and what they do is they simply take now the the embedding the final embedding for this first one corresponding to this start token and they simply put a single layer of classification so basically a logistic regression on it and that's how they then get a class label so if this is whatever let's say this is this this gives you here a hidden vector of 512 dimensions 512 and you have three labels to output here one two three you simply need a a matrix that's 512 by 3 of size and these are the these are the weights that you would then have to train in addition to birth so birth is pre-trained and you have to simply only now learn these weights of course they also kind of fine tune the entire birth model but that's really fine tuning the only thing you have to learn from scratch is is this these these weights here that's pretty first of all it's pretty neat because you can be very quick at learning new tasks because you simply start from the pre-trained birth and then you go and learn a single class for a layer on top and astonishingly this works extremely well for these tasks a bit of a bit of a more challenging task is this here squat is a question answering task and we're gonna jump down here where they explain the task so you have an input question oops if an input question and the input question is where do water droplets collide with ice crystals to form 
precipitation and you have an input paragraph which is kind of a paragraph from Wikipedia page and you know that the answer is somewhere in this paragraph right the dataset is constructed such that the answer is in the paragraph so the input paragraph reads precipitation forms as small as smaller droplets call ask via collision with other rain drops or ice crystals within a cloud so you the question is where do water droplets collide to form precipitation the answer here is within a cloud so that's this this thing here so usually what squat models do is they they predict a span they predict where's the start of the answer and where's the end of the answer that's also what kind of births trained to do so in order to do this what you do is again you already have the ability to input two sequences so we've trained with two sentences but here this input you say oh well the first sequence is gonna be the question our second sequence is gonna be the entire paragraph from Wikipedia and then for each output for each output for the output of each token remember there's as many outputs as there's inputs because the transformer will always transform to the same length of sequence for each token in the output we classify it is this token the start token and or is this token the end token or is this token none of all now what they do effectively is that here each each one outputs each one is a vector and they as we said at the beginning of finding out which ones the subject now here we have two queries namely query one which is is this the start let's call it query s and query e is is this the end token so these are two queries and I'm gonna just produce compute the inner product of each query with each of these outputs right and over my sequence here this is gonna give me a distribution so start for start maybe this token is not much then this token is a lot and so on there's five tokens and for the end not so much not so probable not so probable very probable not so probable so what you get gonna get is from these inner products is a distribution over which ones to start and which ones to end right then you can say okay this one's probably to start and this one's probably the end so that's how you predict this span and again what you have to ultimately learn is these these queries here and so not that much and this is name then to t recognition and name then to recognition you have a sentence and you're supposed to recognize name then to tease like up here we saw subscribe to PewDiePie and the named entity would be PewDiePie right this is a name and you're supposed to recognize that this is a name and they do it the same same way that they do the squat basically or a similar way sorry they basically for each of the outputs here they simply classify whether or not is it's part of an entity or not so what they have to do is they have to simply train if they also have different labels for which kind of entity is this this is like a person and this is this is no entity so if you have ten of the labels then each for each thing you'd classify it into one of ten classes you need a classifier of input size versus number of classes that's all you have to train in addition to pre-to fine tuning bird itself all right so they kind of evaluate on all of these tasks they get super duper numbers on all of them here birthed large wins on pretty much everything and this model is big just saying and they trained it on TPUs which is available in kind of Google Cloud infrastructure so far and it trained it on a lot of 
data so to a way it's it's kind of expected that you would outperform but it's very surprising that you outperform everyone else by this much and they've done a lot of kind of Appalachian studies where they show that it's really due to the fact that they do this left-end right context they take into account the left-end right context of a given token when doing the the attention that it's that that's why it's better so here for example they compare the the bird base model and they say okay what if we don't do the NSP the next sentence prediction task then you can see the numbers they already kind of they drop on these tasks and what if we then additionally do only left to right training and the numbers they drop pretty seriously again you see sometimes here for example you see a pretty serious drop in the number also here so there really seems to be a real value in doing this kind of left-end right context attention so it's not just about the model size and the amount of data that's basically what they show here and this is really cool that the paper actually shows this because usually people have an idea and they throw a lot more resources at it and they're better you'd never know why and this is pretty cool that they actually show all right so this is all I have to say about this paper but check it out that models are here pre-trained you can actually download them you can fine tune it for yourself for your own task and they're pretty pretty powerful there are smaller models for if you don't have a TVU that are also pre-trained so check these out as well and thanks a lot for watching you guys soon sun fireworks | [{"start": 0.0, "end": 5.12, "text": " Hello everyone. Today we're looking at Bert pre-training of deep bi-directional"}, {"start": 5.12, "end": 10.96, "text": " transformers for language understanding by Jacob Devlin and Min Wai Chang,"}, {"start": 10.96, "end": 17.88, "text": " Kenton Lee, Christina Tatsunova. These are people from Google AI language so"}, {"start": 17.88, "end": 27.2, "text": " you're about to see the most hyped model currently. So basically Bert is a"}, {"start": 27.2, "end": 34.56, "text": " model that takes as an input language, so token sequences and outputs various"}, {"start": 34.56, "end": 41.16, "text": " things. So it can be made to do various things, almost any NLP task with"}, {"start": 41.16, "end": 46.76, "text": " basically little trading because the Bert model comes pre-trained on a very"}, {"start": 46.76, "end": 52.28, "text": " large corpus and we're going to see how that's done. Alright so the paper"}, {"start": 52.28, "end": 60.28, "text": " introduces basically the kind of current state of the art of of language"}, {"start": 60.28, "end": 67.24000000000001, "text": " models and they say okay what they want to do new is they want to do bi-directional"}, {"start": 67.24000000000001, "end": 76.68, "text": " training. I'm going to go down here and see their comparison. Right so here they"}, {"start": 76.68, "end": 82.68, "text": " compare three models and these are representative of three types of models. So"}, {"start": 82.68, "end": 90.64000000000001, "text": " first here is for example the open AI transformer. So this is a this is the"}, {"start": 90.64000000000001, "end": 95.64000000000001, "text": " classic or one of the classic transformer models. We've talked about"}, {"start": 95.64000000000001, "end": 101.92000000000002, "text": " transformers before and the attention is all you need video. 
So what a"}, {"start": 101.92, "end": 108.04, "text": " transformer does is it uses attention and for those who forgot what attention is"}, {"start": 108.04, "end": 116.6, "text": " if you have like a token sequence A, B, C, D, E then a classic model to use"}, {"start": 116.6, "end": 122.16, "text": " that would be an LSTM. So the LSTM would go here it would like have a vector"}, {"start": 122.16, "end": 127.68, "text": " representation a hidden state and then it would take this A it would take this"}, {"start": 127.68, "end": 132.52, "text": " hidden state and compute a new hidden state and then it would go on and take the"}, {"start": 132.52, "end": 137.20000000000002, "text": " B and incorporate this into the hidden state. The hidden state kind of always"}, {"start": 137.20000000000002, "end": 143.4, "text": " stays the same size but the the recurrent model will update the hidden state as"}, {"start": 143.4, "end": 150.52, "text": " it goes over the input sequence. So this is one way of dealing with language"}, {"start": 150.52, "end": 155.60000000000002, "text": " but people have kind of done another way and that's the attention based"}, {"start": 155.6, "end": 163.88, "text": " mechanism is where basically for each of these you compute a compute a vector"}, {"start": 163.88, "end": 172.28, "text": " independently of each other. So each one has a vector representation and then you"}, {"start": 172.28, "end": 180.28, "text": " have a vector representation of what you want. It's called an attention head"}, {"start": 180.28, "end": 185.16, "text": " and you can have multiple of these but in the simplest case let's just say we"}, {"start": 185.16, "end": 190.76, "text": " are looking for the subject in this sentence. So A, B, C, D, E is a sentence and"}, {"start": 190.76, "end": 196.48, "text": " one of the words is the subject of the sentence. Then we could have a vector"}, {"start": 196.48, "end": 203.44, "text": " here that's called a query vector. So these are called these are called values V"}, {"start": 203.44, "end": 208.96, "text": " and this is called a query Q and then these vectors are the same size. I don't"}, {"start": 208.96, "end": 213.96, "text": " very poor at this. You're going to compute the inner product with each of these."}, {"start": 213.96, "end": 224.08, "text": " So the inner product you want to do, okay I already screwed this up, you're"}, {"start": 224.08, "end": 231.16, "text": " actually computing two vectors for each token. So one is the but this is this is"}, {"start": 231.16, "end": 237.60000000000002, "text": " not too important for this step. One is the key and one is the value. All right,"}, {"start": 237.60000000000002, "end": 243.64000000000001, "text": " value and this is called the key and you have your query Q and you compute the"}, {"start": 243.64, "end": 248.48, "text": " inner products actually with the key. Sorry. Values aren't too important for what"}, {"start": 248.48, "end": 254.76, "text": " I want to demonstrate but you compute key with query. All right and that gives"}, {"start": 254.76, "end": 260.68, "text": " you basically for each key it's going to give you an output and so you're going"}, {"start": 260.68, "end": 268.56, "text": " to have you're going to have a for this A, B, C, D, E you're going to have like a"}, {"start": 268.56, "end": 274.2, "text": " this much inner product, this much inner product, this much this much this much"}, {"start": 274.2, "end": 278.76, "text": " inner product. 
So after maybe a softmax you have like a nice distribution and"}, {"start": 278.76, "end": 284.56, "text": " then you can say aha here this is the biggest the biggest alignment of the"}, {"start": 284.56, "end": 290.04, "text": " particular key with my query and my query is which one is the subject. You're"}, {"start": 290.04, "end": 294.88, "text": " of course you're going to train all these queries and keys producing procedures."}, {"start": 294.88, "end": 299.68, "text": " So this is this is attention mechanism and if you then want that that's where the"}, {"start": 299.68, "end": 305.24, "text": " value comes in you can if your query is not only which one's the subject but it's"}, {"start": 305.24, "end": 310.76, "text": " actually a generic query that okay I'm going to extract some information from"}, {"start": 310.76, "end": 316.12, "text": " some token that I'm going to use later then you would actually take B and say"}, {"start": 316.12, "end": 320.12, "text": " aha B is the best one okay I'm going to take the value of B or basically"}, {"start": 320.12, "end": 324.76, "text": " going to take a weighted average of the values according to these values here."}, {"start": 324.76, "end": 330.24, "text": " So this is very shortly what attention is if you if you want to lengthen the"}, {"start": 330.24, "end": 337.28, "text": " explanation go to the attention is all you need video. So open AI GPT uses"}, {"start": 337.28, "end": 345.84, "text": " attention here and it's a it's a left to right transformer that's what it says"}, {"start": 345.84, "end": 350.4, "text": " here and what that means is it goes also step by step but in each step it uses"}, {"start": 350.4, "end": 355.56, "text": " attention so here is the input tokens and as you can see it goes in this"}, {"start": 355.56, "end": 360.56, "text": " direction so each one of the and these are multiple layers of attention so you"}, {"start": 360.56, "end": 369.2, "text": " can also layer these of course so each one of the attention intermediate steps"}, {"start": 369.2, "end": 376.23999999999995, "text": " can only attend to whatever is onto the left of it right you can see this"}, {"start": 376.24, "end": 381.36, "text": " here so it goes step by step and it goes left to right basically so it can it"}, {"start": 381.36, "end": 387.12, "text": " can kind of take the sequence in as a left to right input basically what that"}, {"start": 387.12, "end": 392.96000000000004, "text": " means is whenever you interpret a particular token you can your context is"}, {"start": 392.96000000000004, "end": 397.0, "text": " only to the left of that token you don't know what's coming yet it's like when"}, {"start": 397.0, "end": 403.28000000000003, "text": " you when you read a sentence from left to right but then as humans unconsciously"}, {"start": 403.28, "end": 407.88, "text": " we probably go and at the end of the sentence kind of makes sense of the thing as"}, {"start": 407.88, "end": 413.91999999999996, "text": " a whole but here the model is forced to make sense of what the thing only from"}, {"start": 413.91999999999996, "end": 419.84, "text": " whatever is to the left of it so that's a basic limitation of these left to"}, {"start": 419.84, "end": 426.28, "text": " right models then there's another approach which is called Elmo which has"}, {"start": 426.28, "end": 432.15999999999997, "text": " been popular recently as a substitute for word vectors so if you know word"}, {"start": 432.16, "end": 439.28000000000003, "text": " vectors word 
vectors are basically the kind of first stage in most language"}, {"start": 439.28000000000003, "end": 449.24, "text": " processing task where for each word say the cat sat on something for each word"}, {"start": 449.24, "end": 455.64000000000004, "text": " you have you have a big giant table and for each word you associate a vector of"}, {"start": 455.64000000000004, "end": 461.12, "text": " fixed size dimension right so you place every word in a vector space and these"}, {"start": 461.12, "end": 467.04, "text": " vectors you pre compute for something like word to veck or glove and to give"}, {"start": 467.04, "end": 472.52, "text": " you a nice way to basically deal with these words in a canonical way and you"}, {"start": 472.52, "end": 476.12, "text": " can pre-trained the word vectors that's all really nice but people have"}, {"start": 476.12, "end": 481.28000000000003, "text": " realized okay words kind of multiple meanings and words can kind of slightly"}, {"start": 481.28000000000003, "end": 487.0, "text": " change meaning depending on words around them and so on so what Elmo does is"}, {"start": 487.0, "end": 493.2, "text": " Elmo uses two LSTM's one LSTM goes into this direction one LSTM goes into"}, {"start": 493.2, "end": 499.64, "text": " this direction and basically a single LSTM as we saw before it takes in the"}, {"start": 499.64, "end": 506.72, "text": " input sequence one by one so here E1 and E2 E2 E2 and it produces hidden states at"}, {"start": 506.72, "end": 512.64, "text": " each step produces a hidden state that is a result of previous hidden state and"}, {"start": 512.64, "end": 519.96, "text": " the current token and and then what it says is okay now these hidden states"}, {"start": 519.96, "end": 528.72, "text": " here basically these are now the embeddings of the token E1 E1 and so on right"}, {"start": 528.72, "end": 538.0, "text": " these are the embeddings so the word vectors as to say aren't are no longer just"}, {"start": 538.0, "end": 542.88, "text": " one vector per word so they're not an isolation anymore but basically you need"}, {"start": 542.88, "end": 548.2, "text": " the entire sequence to compute the word vectors as a result of this of this LSTM"}, {"start": 548.2, "end": 554.96, "text": " this is more powerful because it can give individual words multiple or each"}, {"start": 554.96, "end": 559.88, "text": " it basically each word has kind of a unique embedding depending on the"}, {"start": 559.88, "end": 564.2, "text": " surrounding words you would still hope that given word would have similar"}, {"start": 564.2, "end": 570.5600000000001, "text": " similar embedding or similar word vector all across the language but you can"}, {"start": 570.5600000000001, "end": 574.5600000000001, "text": " kind of fine tune it to the particular sentence it is in and also you can"}, {"start": 574.5600000000001, "end": 578.1600000000001, "text": " completely change its meaning if it's if it's kind of a word that has a"}, {"start": 578.1600000000001, "end": 583.5600000000001, "text": " completely new meaning in that sentence so basically uses two LSTM's one as"}, {"start": 583.5600000000001, "end": 589.0400000000001, "text": " as I said here forward one backward these are also have multiple layers and so"}, {"start": 589.04, "end": 595.0, "text": " and each of these produce one such hidden vector per token and you simply"}, {"start": 595.0, "end": 601.28, "text": " concatenate the two from the from so from here these this LSTM on the left"}, {"start": 601.28, "end": 
605.56, "text": " produces one this LSTM on the right produces maybe here another one and you"}, {"start": 605.56, "end": 611.5999999999999, "text": " simply concatenate the two to get the the final embedding the final word"}, {"start": 611.6, "end": 621.16, "text": " vector for each token so the fundamental limitation here is that this is"}, {"start": 621.16, "end": 626.6, "text": " kind of you have information from the left and you have information from the"}, {"start": 626.6, "end": 631.2, "text": " right so other than here the original transformer you actually have you"}, {"start": 631.2, "end": 635.5600000000001, "text": " actually can condition on the left context and the right context but it's"}, {"start": 635.5600000000001, "end": 640.96, "text": " very it's very shallow because it's simply a concatenation of the left"}, {"start": 640.96, "end": 645.48, "text": " facing LSTM and the concatenation of the right facing LSTM and and these"}, {"start": 645.48, "end": 651.48, "text": " ultimately intrinsically they have nothing to do with each other they so you"}, {"start": 651.48, "end": 656.72, "text": " simply concatenate the two things that the left facing LSTM still can only see"}, {"start": 656.72, "end": 661.72, "text": " to the left and the right facing LSTM still can only see to the right so you"}, {"start": 661.72, "end": 668.48, "text": " basically have two half blind models and then you kind of concatenate so the"}, {"start": 668.48, "end": 674.48, "text": " it's still suboptimal because what you want is you want a single model to"}, {"start": 674.48, "end": 679.36, "text": " output your word vectors or to interpret the language that can look at both the"}, {"start": 679.36, "end": 683.48, "text": " left and the right at the same time and then incorporate information from both"}, {"start": 683.48, "end": 689.72, "text": " of them simultaneously and not just at the end by concatenation this is what"}, {"start": 689.72, "end": 695.64, "text": " BERT does so BERT here and this is kind of what they claim is the new"}, {"start": 695.64, "end": 703.36, "text": " contribution BERT at each in each layer here of the model the let's look at this"}, {"start": 703.36, "end": 711.88, "text": " and for a particular token they look at all of the context so every every"}, {"start": 711.88, "end": 721.92, "text": " other token in the input they look at that and so the basically it seems"}, {"start": 721.92, "end": 726.12, "text": " kind of it seems kind of obvious but it's it's actually there's there's"}, {"start": 726.12, "end": 733.5999999999999, "text": " reasons why these other models don't do this but so this is the entire point of"}, {"start": 733.5999999999999, "end": 738.4399999999999, "text": " BERT is at each layer in this in this transformer architecture is still an"}, {"start": 738.4399999999999, "end": 742.8399999999999, "text": " attention mechanism by the way so that there's there's the mechanism of"}, {"start": 742.8399999999999, "end": 749.24, "text": " attention here and here is exactly the same or almost the same they actually"}, {"start": 749.24, "end": 756.16, "text": " keep it close on purpose in order to compare but now we have attention not only"}, {"start": 756.16, "end": 763.4, "text": " to the left but also to the right to everything right so why do these other"}, {"start": 763.4, "end": 769.48, "text": " model why do for example the open AI transformer only look to the left that's"}, {"start": 769.48, "end": 775.36, "text": " because somehow you need to task to 
train on right and most of the time if you"}, {"start": 775.36, "end": 780.2, "text": " especially if you want unsupervised training you going to do something like"}, {"start": 780.2, "end": 786.48, "text": " language modeling and language modeling what you have is a sentence A B C D and"}, {"start": 786.48, "end": 794.48, "text": " you're asking what comes next here all right so by by the definition of the"}, {"start": 794.48, "end": 801.5600000000001, "text": " task you can only look to the left that's that's just how how these like how"}, {"start": 801.56, "end": 807.5999999999999, "text": " the task works so it makes sense that these other models kind of do this"}, {"start": 807.5999999999999, "end": 812.1199999999999, "text": " because they they pre-training this BERT has a different pre-training because"}, {"start": 812.1199999999999, "end": 818.88, "text": " they can they can only they have to look to the left and the right the other"}, {"start": 818.88, "end": 825.0, "text": " thing is what you want to use the model for so the good thing if you go left to"}, {"start": 825.0, "end": 829.8, "text": " right is you can use the model now for generating language in the same"}, {"start": 829.8, "end": 836.16, "text": " plane if if you have A B C D and you ask and the model is trying to produce the"}, {"start": 836.16, "end": 841.24, "text": " next character only looking to the left right then you can you can say what's"}, {"start": 841.24, "end": 845.5999999999999, "text": " the next character the model says E and then you can feed the same thing into the"}, {"start": 845.5999999999999, "end": 852.3599999999999, "text": " model and say okay what's now the next character what's now the next character"}, {"start": 852.3599999999999, "end": 857.16, "text": " G so there's pretty useful if you only look to the left you can actually use the"}, {"start": 857.16, "end": 862.7199999999999, "text": " model then for generating language which is something you can't do with BERT"}, {"start": 862.7199999999999, "end": 866.76, "text": " or it's not it's not really obvious now how to do it with BERT people are I"}, {"start": 866.76, "end": 872.92, "text": " know people are investigating into language producing producing entire"}, {"start": 872.92, "end": 880.0, "text": " sequences with BERT but as you guys it's not super clear how to do this with"}, {"start": 880.0, "end": 884.6, "text": " this model that being said the model is pretty good at pretty much everything"}, {"start": 884.6, "end": 891.6800000000001, "text": " else so let's jump in to how they train they train let's see where we are here"}, {"start": 891.6800000000001, "end": 902.36, "text": " they train using masked basically masked language modeling so I want to actually"}, {"start": 902.36, "end": 909.32, "text": " go into that first mask language modeling what they do is they basically"}, {"start": 909.32, "end": 915.6800000000001, "text": " replace some words by the mask token and they don't have a good they don't have"}, {"start": 915.6800000000001, "end": 923.6, "text": " a nice all right they have one here all right here if you just look at kind of"}, {"start": 923.6, "end": 931.9200000000001, "text": " the the top sentence here the man went to mask store right don't don't worry"}, {"start": 931.92, "end": 939.4799999999999, "text": " about the set and so on just this the man went to mask store and the model"}, {"start": 939.4799999999999, "end": 943.7199999999999, "text": " simply asked to predict what's here which word is there so it 
needs to"}, {"start": 943.7199999999999, "end": 949.04, "text": " incorporate information from the right and from the left to do this so that's"}, {"start": 949.04, "end": 953.8399999999999, "text": " basically how you train it they simply drop out some of the words some of the"}, {"start": 953.8399999999999, "end": 959.8, "text": " time and they they have different techniques so you can clearly tell a lot of"}, {"start": 959.8, "end": 964.9599999999999, "text": " work has gone into kind of fine tuning everything in this model like how to"}, {"start": 964.9599999999999, "end": 969.3199999999999, "text": " train it and so on so let's say we don't always do this sometimes we do this"}, {"start": 969.3199999999999, "end": 973.3199999999999, "text": " other thing and sometimes we do that and there's several ways of bison this"}, {"start": 973.3199999999999, "end": 979.3599999999999, "text": " model but basically you do this mask language modeling and then because they also"}, {"start": 979.3599999999999, "end": 984.5999999999999, "text": " want to evaluate on let's say entire sequence tasks or tasks that span"}, {"start": 984.6, "end": 990.16, "text": " multiple sentences what they do is the second free training task at the same time"}, {"start": 990.16, "end": 996.5600000000001, "text": " as you can see here where they feed two sentences so that's the first sentence"}, {"start": 996.5600000000001, "end": 1001.9200000000001, "text": " that's the second sentence they feed these two sentences as an input so at first"}, {"start": 1001.9200000000001, "end": 1007.72, "text": " they have this token and these separate the sentences and then they ask the"}, {"start": 1007.72, "end": 1016.5600000000001, "text": " model to predict a label is next and is next is next is true if the second"}, {"start": 1016.5600000000001, "end": 1020.88, "text": " sentence follows the first sentence so it's if it's like a logical continuation"}, {"start": 1020.88, "end": 1024.3600000000001, "text": " and the way you do this on supervised is really easy you take a big giant"}, {"start": 1024.3600000000001, "end": 1031.04, "text": " corpus and you take a sentence for the first sentence and then 50% of the"}, {"start": 1031.04, "end": 1036.68, "text": " time you take the next sentence in the corpus and the label is true and 50% of"}, {"start": 1036.68, "end": 1046.1200000000001, "text": " the time you take some random sentence here you say for example the man the man"}, {"start": 1046.1200000000001, "end": 1056.8400000000001, "text": " mask to the store and the next sentence is penguin mask or flightless birds and"}, {"start": 1056.8400000000001, "end": 1061.3200000000002, "text": " that's kind of a random sentence so the model is asked to predict well that's"}, {"start": 1061.32, "end": 1067.0, "text": " probably not the next sentence following this first sentence so you do these"}, {"start": 1067.0, "end": 1071.48, "text": " two tasks you pre-train and you can do this on supervised you don't need"}, {"start": 1071.48, "end": 1077.48, "text": " supervised data for that you just need a corpus and they do this for a long"}, {"start": 1077.48, "end": 1084.24, "text": " time with a lot of data and the model itself is giant has 24 I think of these"}, {"start": 1084.24, "end": 1091.64, "text": " transformers layers so it's giant and then you kind of pre-train this model"}, {"start": 1091.64, "end": 1102.6, "text": " here's a here's an illustration of some extra things so what they do is they"}, {"start": 1102.6, "end": 1108.44, 
"text": " first this is the input up here so the first token is this CLS token which is"}, {"start": 1108.44, "end": 1115.8400000000001, "text": " kind of the start token and then this is the first sentence then the set is the"}, {"start": 1115.8400000000001, "end": 1120.96, "text": " separator of two sentences then this is the second sentence and then again"}, {"start": 1120.96, "end": 1127.68, "text": " set we'll get to these hashtags in a second but first they say okay first we"}, {"start": 1127.68, "end": 1133.76, "text": " have the token embedding so they kind of start with with the original concept of"}, {"start": 1133.76, "end": 1139.76, "text": " word vectors at the very basis because you need to start with actually going to"}, {"start": 1139.76, "end": 1146.68, "text": " a vector space to use these models but they don't they then they then kind of"}, {"start": 1146.68, "end": 1150.48, "text": " transform these through the transform layers they also use segment embeddings"}, {"start": 1150.48, "end": 1155.48, "text": " and segment embeddings as you can see here is simply kind of a binary label"}, {"start": 1155.48, "end": 1162.8799999999999, "text": " EA being the label for the first sentence and eb that being the label for the"}, {"start": 1162.88, "end": 1167.0400000000002, "text": " second sentence so just the model can differentiate which one's the first and"}, {"start": 1167.0400000000002, "end": 1170.7600000000002, "text": " which one's the second because it's kind of hard to learn for a transformer"}, {"start": 1170.7600000000002, "end": 1176.7600000000002, "text": " architecture that the the set tokens kind of separate the sentences so you"}, {"start": 1176.7600000000002, "end": 1181.48, "text": " kind of want to help it and the last thing is positional embeddings and we've"}, {"start": 1181.48, "end": 1186.5600000000002, "text": " already talked about these in attention as all you need this where you can kind"}, {"start": 1186.56, "end": 1193.3999999999999, "text": " of the model since it's a transformer it doesn't go step by step it doesn't go"}, {"start": 1193.3999999999999, "end": 1198.1599999999999, "text": " one done done done done so it's kind of hard for the model to make out how far"}, {"start": 1198.1599999999999, "end": 1203.72, "text": " things are apart from each other how far to tokens if they're neighbors or if"}, {"start": 1203.72, "end": 1207.56, "text": " they're really far apart and these positional embeddings kind of help the model"}, {"start": 1207.56, "end": 1213.36, "text": " decide if two tokens are close to each other in input if they're right"}, {"start": 1213.36, "end": 1219.9599999999998, "text": " if they're just neighbors or if they are actually really far apart all right so"}, {"start": 1219.9599999999998, "end": 1225.8, "text": " this is this is how the kind of first input is constructed out of these"}, {"start": 1225.8, "end": 1230.8, "text": " embeddings and then it's fed through these transformer layers as we saw with"}, {"start": 1230.8, "end": 1235.76, "text": " the mask that I'm task and the is next task I want to quickly get to these"}, {"start": 1235.76, "end": 1244.12, "text": " hashtags what what they mean these so the input here is separated into word"}, {"start": 1244.12, "end": 1250.16, "text": " pieces so-called word pieces and what that is is so in language processing"}, {"start": 1250.16, "end": 1257.36, "text": " task you have kind of a choice you have you have a choice of how to tokenize"}, {"start": 1257.36, "end": 1272.32, 
"text": " your input so what let's look at a sentence here subscribe to PewDiePie"}, {"start": 1274.6, "end": 1280.24, "text": " so this is a sentence and the sentence is rather let's say wordwise"}, {"start": 1280.24, "end": 1285.52, "text": " complicated so why why my language model have problem with this so first you"}, {"start": 1285.52, "end": 1292.16, "text": " need to tokenize this sentence all right so what most people do is they say okay"}, {"start": 1292.16, "end": 1296.48, "text": " here are the word boundaries we're not tokenize this into three segments first"}, {"start": 1296.48, "end": 1302.68, "text": " is subscribe to PewDiePie okay so three things and each of these now needs a"}, {"start": 1302.68, "end": 1309.04, "text": " word vector associated with it now two the thing is the word vectors let's"}, {"start": 1309.04, "end": 1315.36, "text": " assume you have them pre-trained or something in any case you need a big table"}, {"start": 1315.36, "end": 1327.4799999999998, "text": " a big big table and this goes down here where for each word a the two I you you"}, {"start": 1327.4799999999998, "end": 1333.28, "text": " have a vector associated with it right so you need to to keep this in your"}, {"start": 1333.28, "end": 1340.52, "text": " model and as you can as you know English has a lot of words here so this table is"}, {"start": 1340.52, "end": 1350.8799999999999, "text": " gonna be really big and the problem is how do you make this table right okay you"}, {"start": 1350.8799999999999, "end": 1355.52, "text": " could make it kind of dynamically and so on but in general you're gonna create"}, {"start": 1355.52, "end": 1359.84, "text": " this table with all the words you know and that's going to be too big because"}, {"start": 1359.84, "end": 1365.12, "text": " English has so many words and then you can say all right we'll only take the"}, {"start": 1365.12, "end": 1372.08, "text": " top whatever is used in 90% of the language which turns out to be it's kind of"}, {"start": 1372.08, "end": 1377.4799999999998, "text": " perito distributed so it turns out to be like 5% of the words are used in 90"}, {"start": 1377.4799999999998, "end": 1381.9199999999998, "text": " percent of the language so you just take these but then you you're gonna have"}, {"start": 1381.9199999999998, "end": 1388.0, "text": " the problem okay here two two is not a problem why not two is used super often"}, {"start": 1388.0, "end": 1391.9199999999998, "text": " we're gonna have it at the very top somewhere and we're gonna have a vector"}, {"start": 1391.92, "end": 1400.92, "text": " subscribe is it's already it's not so common right so maybe you have a word for"}, {"start": 1400.92, "end": 1407.88, "text": " it somewhere down but then PewDiePie is a name so there is no there's not even a"}, {"start": 1407.88, "end": 1415.1200000000001, "text": " word like that's not even a word it's just so what you what you usually do"}, {"start": 1415.1200000000001, "end": 1421.68, "text": " what people usually do is they have this out of vocabulary token and then they"}, {"start": 1421.68, "end": 1425.68, "text": " have a vector associated somewhere here with the out of a vocabulary token is"}, {"start": 1425.68, "end": 1430.04, "text": " it whatever and I don't know what it is I just know that I don't have it in my"}, {"start": 1430.04, "end": 1435.3200000000002, "text": " vocabulary and the model kind of deals that that's kind of it's not it's not"}, {"start": 1435.3200000000002, "end": 1439.92, "text": " really 
ideal especially if you don't want to generate language also your model"}, {"start": 1439.92, "end": 1443.5600000000002, "text": " tends to generate out of vocabulary tokens if you allow that if you don't"}, {"start": 1443.5600000000002, "end": 1448.96, "text": " allow that you have a problem during training so it's all kind of messy what's"}, {"start": 1448.96, "end": 1454.08, "text": " the alternative alternative is to go character level so let's look at"}, {"start": 1454.08, "end": 1460.96, "text": " character level in character level you say all right my words are obviously made"}, {"start": 1460.96, "end": 1467.96, "text": " of characters and characters I'm just gonna split at each character right here"}, {"start": 1467.96, "end": 1473.6000000000001, "text": " the white space can be a character do so I'm gonna split at each character and"}, {"start": 1473.6000000000001, "end": 1478.52, "text": " then I'm simply going to have one vector for each character and there's"}, {"start": 1478.52, "end": 1487.4, "text": " only like 20 something six of those and so I can keep 26 vectors but this"}, {"start": 1487.4, "end": 1492.08, "text": " tends to be rather problematic because a character by itself having a meaning"}, {"start": 1492.08, "end": 1499.44, "text": " that that can be encapsulated by a vector is kind of it's kind of shady because"}, {"start": 1499.44, "end": 1502.8799999999999, "text": " the character character by itself usually doesn't mean and it doesn't have a"}, {"start": 1502.88, "end": 1508.64, "text": " meaning so what's the solution here the solution is to go in between the"}, {"start": 1508.64, "end": 1514.6000000000001, "text": " solution is to say well let's actually go for word pieces and you can kind of"}, {"start": 1514.6000000000001, "end": 1522.0, "text": " think of them as syllables but you can you can split you can make them in a way"}, {"start": 1522.0, "end": 1529.64, "text": " that you have a fixed size vocabulary say okay I have 4,000 entry places in my"}, {"start": 1529.64, "end": 1537.3600000000001, "text": " big table it's I can afford 4,000 size table so first of all I'm going to have"}, {"start": 1537.3600000000001, "end": 1543.3200000000002, "text": " for each character a b c d and so on I'm gonna have a vector but then I only have"}, {"start": 1543.3200000000002, "end": 1549.92, "text": " 26 have 3000 some left I'm going to have also the most common words now a is"}, {"start": 1549.92, "end": 1556.88, "text": " already here but maybe I can have two and from and so the most common words"}, {"start": 1556.88, "end": 1562.3200000000002, "text": " they also get there and then for the other things I'm going to split the words"}, {"start": 1562.3200000000002, "end": 1569.8000000000002, "text": " maybe in sub scribe right so these are two syllables and sub can be pre kind of"}, {"start": 1569.8000000000002, "end": 1578.0800000000002, "text": " a prefix to many things and I only need then one one so I've sub here sub I"}, {"start": 1578.0800000000002, "end": 1584.2, "text": " only need one vector for that and then the rest if scribe scribe is by the way"}, {"start": 1584.2, "end": 1589.0800000000002, "text": " also word so I can have that but if scribe weren't in my vocabulary I can divide"}, {"start": 1589.0800000000002, "end": 1594.92, "text": " scribe then up into into characters and then describe them with the character"}, {"start": 1594.92, "end": 1599.8400000000001, "text": " level so basically I can mix and match here I can sub that's that I have that"}, 
{"start": 1599.8400000000001, "end": 1604.48, "text": " and then scribe I don't have it I don't have any of the pieces but so I can"}, {"start": 1604.48, "end": 1611.44, "text": " just use the character the character so this would would be sub and then s c"}, {"start": 1611.44, "end": 1621.8, "text": " r i b e so these these would be the tokens that I work with now as as my input"}, {"start": 1621.8, "end": 1628.48, "text": " and this this tags here so this is what would happen to PewDiePie you could"}, {"start": 1628.48, "end": 1634.0, "text": " simply split along each character so you basically this kind of an interpolation"}, {"start": 1634.0, "end": 1642.84, "text": " between the token model and the character model and it's really neat and"}, {"start": 1642.84, "end": 1650.52, "text": " it usually works quite well the as I said the the hashtag sign here simply means"}, {"start": 1650.52, "end": 1657.24, "text": " that these two have originally been one word and now this this in here is just"}, {"start": 1657.24, "end": 1664.72, "text": " the word piece token is a really good example where word work piece come in because play by itself is a"}, {"start": 1664.72, "end": 1669.2, "text": " word and I can make playing instead of having an own vector for that I can"}, {"start": 1669.2, "end": 1674.52, "text": " divide it into play which already has a meaning and presumably playing and play"}, {"start": 1674.52, "end": 1679.8, "text": " would have similar meaning so it makes sense to have play as a to that's the token"}, {"start": 1679.8, "end": 1686.24, "text": " singled out here and then in as is as a suffix also makes sense to have a token"}, {"start": 1686.24, "end": 1691.48, "text": " for that in my table and then I simply have these two tokens here that probably"}, {"start": 1691.48, "end": 1698.1200000000001, "text": " already gives me more information than simply having the word playing right by"}, {"start": 1698.1200000000001, "end": 1711.08, "text": " the way you should subscribe to PewDiePie just FYI all right let's go on so we"}, {"start": 1711.08, "end": 1716.9199999999998, "text": " do we do work piece tokenization we do the mask language model we do the next"}, {"start": 1716.9199999999998, "end": 1721.96, "text": " sentence prediction pre-training what do we have now we have a model that can"}, {"start": 1721.96, "end": 1728.6, "text": " really really well predict some masked words now how do we use it now they"}, {"start": 1728.6, "end": 1740.3999999999999, "text": " evaluate on these I believe it's 11 tasks 11 different tasks of or is it"}, {"start": 1740.4, "end": 1745.64, "text": " I don't know how many it is it is a lot with the same model so this pre-trained"}, {"start": 1745.64, "end": 1751.76, "text": " model then how claim can be fine tuned to do all of these tasks and it gets"}, {"start": 1751.76, "end": 1757.8400000000001, "text": " up it can select state of the art on everyone it's crazy so how do they"}, {"start": 1757.8400000000001, "end": 1765.76, "text": " fine tune it so the easiest tasks are the one are the so-called sequence level"}, {"start": 1765.76, "end": 1771.44, "text": " task where you basically have the sequence and you're you're about to predict"}, {"start": 1771.44, "end": 1775.8, "text": " one class label for the entire sequence so here with the sentence pair"}, {"start": 1775.8, "end": 1782.0, "text": " classification tasks for example the task we saw before that is next task but"}, {"start": 1782.0, "end": 1788.68, "text": " there is more 
sophisticated tasks that you need kind of supervised data for and"}, {"start": 1788.68, "end": 1793.36, "text": " so with the supervised data you'd have a class label that you train on so what"}, {"start": 1793.36, "end": 1803.52, "text": " you do is let's look at one of them M and M and L I they had it up here"}, {"start": 1803.7199999999998, "end": 1812.1999999999998, "text": " nope here multi-genre natural language inference crowdsourced entailment"}, {"start": 1812.1999999999998, "end": 1817.04, "text": " classification task so given a pair of sentences the goes to predict whether"}, {"start": 1817.04, "end": 1820.9199999999998, "text": " the second sentence is an entailment contradiction or neutral with respect to"}, {"start": 1820.92, "end": 1826.44, "text": " the first one right two sentences and you're about to predict what which one of"}, {"start": 1826.44, "end": 1831.92, "text": " these three labels it is so you put the two sentences here but can already take"}, {"start": 1831.92, "end": 1841.48, "text": " two sentences as an input as we saw right the embeddings are the the the a and"}, {"start": 1841.48, "end": 1844.52, "text": " b embeddings and the position of embeddings are left out of the picture here"}, {"start": 1844.52, "end": 1850.16, "text": " but they would be added to it and these these would be the embeddings for it and"}, {"start": 1850.16, "end": 1855.92, "text": " then you pass this through the birth model and this is the final layer and what they"}, {"start": 1855.92, "end": 1860.8400000000001, "text": " do is they simply take now the the embedding the final embedding for this"}, {"start": 1860.8400000000001, "end": 1870.24, "text": " first one corresponding to this start token and they simply put a single layer"}, {"start": 1870.24, "end": 1875.92, "text": " of classification so basically a logistic regression on it and that's how they"}, {"start": 1875.92, "end": 1881.0, "text": " then get a class label so if this is whatever let's say this is this this gives"}, {"start": 1881.0, "end": 1887.8400000000001, "text": " you here a hidden vector of 512 dimensions 512 and you have three labels to"}, {"start": 1887.8400000000001, "end": 1900.3600000000001, "text": " output here one two three you simply need a a matrix that's 512 by 3 of size"}, {"start": 1900.36, "end": 1906.3999999999999, "text": " and these are the these are the weights that you would then have to train in"}, {"start": 1906.3999999999999, "end": 1912.7199999999998, "text": " addition to birth so birth is pre-trained and you have to simply only now"}, {"start": 1912.7199999999998, "end": 1918.04, "text": " learn these weights of course they also kind of fine tune the entire birth"}, {"start": 1918.04, "end": 1922.08, "text": " model but that's really fine tuning the only thing you have to learn from"}, {"start": 1922.08, "end": 1927.52, "text": " scratch is is this these these weights here that's pretty first of all it's"}, {"start": 1927.52, "end": 1931.92, "text": " pretty neat because you can be very quick at learning new tasks because you"}, {"start": 1931.92, "end": 1938.12, "text": " simply start from the pre-trained birth and then you go and learn a single"}, {"start": 1938.12, "end": 1943.0, "text": " class for a layer on top and astonishingly this works extremely well for"}, {"start": 1943.0, "end": 1952.56, "text": " these tasks a bit of a bit of a more challenging task is this here squat is a"}, {"start": 1952.56, "end": 1958.76, "text": " question answering task and we're gonna jump down 
here where they explain the"}, {"start": 1958.76, "end": 1966.9199999999998, "text": " task so you have an input question oops if an input question and the"}, {"start": 1966.9199999999998, "end": 1971.3999999999999, "text": " input question is where do water droplets collide with ice crystals to form"}, {"start": 1971.3999999999999, "end": 1977.48, "text": " precipitation and you have an input paragraph which is kind of a paragraph from"}, {"start": 1977.48, "end": 1984.1200000000001, "text": " Wikipedia page and you know that the answer is somewhere in this paragraph right"}, {"start": 1984.1200000000001, "end": 1988.32, "text": " the dataset is constructed such that the answer is in the paragraph so the"}, {"start": 1988.32, "end": 1992.24, "text": " input paragraph reads precipitation forms as small as smaller droplets"}, {"start": 1992.24, "end": 1999.4, "text": " call ask via collision with other rain drops or ice crystals within a cloud"}, {"start": 1999.4, "end": 2008.76, "text": " so you the question is where do water droplets collide to form precipitation the"}, {"start": 2008.76, "end": 2014.6000000000001, "text": " answer here is within a cloud so that's this this thing here so usually what"}, {"start": 2014.6000000000001, "end": 2020.0, "text": " squat models do is they they predict a span they predict where's the start of"}, {"start": 2020.0, "end": 2024.92, "text": " the answer and where's the end of the answer that's also what kind of births"}, {"start": 2024.92, "end": 2032.48, "text": " trained to do so in order to do this what you do is again you already have the"}, {"start": 2032.48, "end": 2039.48, "text": " ability to input two sequences so we've trained with two sentences but here"}, {"start": 2039.48, "end": 2043.4, "text": " this input you say oh well the first sequence is gonna be the question our"}, {"start": 2043.4, "end": 2049.16, "text": " second sequence is gonna be the entire paragraph from Wikipedia and then for"}, {"start": 2049.16, "end": 2057.16, "text": " each output for each output for the output of each token remember there's as"}, {"start": 2057.16, "end": 2061.8399999999997, "text": " many outputs as there's inputs because the transformer will always transform to"}, {"start": 2061.8399999999997, "end": 2070.8799999999997, "text": " the same length of sequence for each token in the output we classify it is this"}, {"start": 2070.8799999999997, "end": 2078.64, "text": " token the start token and or is this token the end token or is this token"}, {"start": 2078.64, "end": 2085.72, "text": " none of all now what they do effectively is that here each each one outputs each"}, {"start": 2085.72, "end": 2091.6, "text": " one is a vector and they as we said at the beginning of finding out which"}, {"start": 2091.6, "end": 2097.72, "text": " ones the subject now here we have two queries namely query one which is is this"}, {"start": 2097.72, "end": 2103.96, "text": " the start let's call it query s and query e is is this the end token so these"}, {"start": 2103.96, "end": 2109.2, "text": " are two queries and I'm gonna just produce compute the inner product of each"}, {"start": 2109.2, "end": 2115.7200000000003, "text": " query with each of these outputs right and over my sequence here this is gonna"}, {"start": 2115.7200000000003, "end": 2123.8, "text": " give me a distribution so start for start maybe this token is not much then"}, {"start": 2123.8, "end": 2133.32, "text": " this token is a lot and so on there's five tokens and for the end not so much"}, 
{"start": 2133.32, "end": 2139.36, "text": " not so probable not so probable very probable not so probable so what you get"}, {"start": 2139.36, "end": 2144.96, "text": " gonna get is from these inner products is a distribution over which ones to"}, {"start": 2144.96, "end": 2149.44, "text": " start and which ones to end right then you can say okay this one's probably"}, {"start": 2149.44, "end": 2153.04, "text": " to start and this one's probably the end so that's how you predict this"}, {"start": 2153.04, "end": 2164.2799999999997, "text": " span and again what you have to ultimately learn is these these queries here and so"}, {"start": 2164.2799999999997, "end": 2171.52, "text": " not that much and this is name then to t recognition and name then to"}, {"start": 2171.52, "end": 2177.04, "text": " recognition you have a sentence and you're supposed to recognize name then to"}, {"start": 2177.04, "end": 2186.2, "text": " tease like up here we saw subscribe to PewDiePie and the named entity would"}, {"start": 2186.2, "end": 2191.96, "text": " be PewDiePie right this is a name and you're supposed to recognize that this"}, {"start": 2191.96, "end": 2200.2, "text": " is a name and they do it the same same way that they do the squat basically or"}, {"start": 2200.2, "end": 2207.8799999999997, "text": " a similar way sorry they basically for each of the outputs here they simply"}, {"start": 2207.8799999999997, "end": 2216.6, "text": " classify whether or not is it's part of an entity or not so what they have to do"}, {"start": 2216.6, "end": 2222.68, "text": " is they have to simply train if they also have different labels for which kind"}, {"start": 2222.68, "end": 2228.9199999999996, "text": " of entity is this this is like a person and this is this is no entity so if you"}, {"start": 2228.92, "end": 2236.12, "text": " have ten of the labels then each for each thing you'd classify it into one"}, {"start": 2236.12, "end": 2243.52, "text": " of ten classes you need a classifier of input size versus number of classes"}, {"start": 2243.52, "end": 2249.64, "text": " that's all you have to train in addition to pre-to fine tuning bird itself"}, {"start": 2249.64, "end": 2256.6, "text": " all right so they kind of evaluate on all of these tasks they get super duper"}, {"start": 2256.6, "end": 2265.0, "text": " numbers on all of them here birthed large wins on pretty much everything and"}, {"start": 2265.0, "end": 2277.2799999999997, "text": " this model is big just saying and they trained it on TPUs which is available in"}, {"start": 2277.2799999999997, "end": 2284.88, "text": " kind of Google Cloud infrastructure so far and it trained it on a lot of data"}, {"start": 2284.88, "end": 2292.96, "text": " so to a way it's it's kind of expected that you would outperform but it's"}, {"start": 2292.96, "end": 2298.1600000000003, "text": " very surprising that you outperform everyone else by this much and they've done a"}, {"start": 2298.1600000000003, "end": 2304.6, "text": " lot of kind of Appalachian studies where they show that it's really due to the"}, {"start": 2304.6, "end": 2312.44, "text": " fact that they do this left-end right context they take into account the"}, {"start": 2312.44, "end": 2319.16, "text": " left-end right context of a given token when doing the the attention that it's"}, {"start": 2319.16, "end": 2325.4, "text": " that that's why it's better so here for example they compare the the bird"}, {"start": 2325.4, "end": 2331.84, "text": " base model and they say okay what if we don't 
do the NSP the next sentence"}, {"start": 2331.84, "end": 2337.08, "text": " prediction task then you can see the numbers they already kind of they drop on"}, {"start": 2337.08, "end": 2343.7599999999998, "text": " these tasks and what if we then additionally do only left to right training and"}, {"start": 2343.7599999999998, "end": 2349.12, "text": " the numbers they drop pretty seriously again you see sometimes here for"}, {"start": 2349.12, "end": 2355.7599999999998, "text": " example you see a pretty serious drop in the number also here so there really"}, {"start": 2355.7599999999998, "end": 2363.36, "text": " seems to be a real value in doing this kind of left-end right context"}, {"start": 2363.36, "end": 2369.4, "text": " attention so it's not just about the model size and the amount of data that's"}, {"start": 2369.4, "end": 2372.92, "text": " basically what they show here and this is really cool that the paper actually"}, {"start": 2372.92, "end": 2377.04, "text": " shows this because usually people have an idea and they throw a lot more"}, {"start": 2377.04, "end": 2381.44, "text": " resources at it and they're better you'd never know why and this is pretty"}, {"start": 2381.44, "end": 2387.1600000000003, "text": " cool that they actually show all right so this is all I have to say about this"}, {"start": 2387.16, "end": 2393.6, "text": " paper but check it out that models are here pre-trained you can actually"}, {"start": 2393.6, "end": 2398.72, "text": " download them you can fine tune it for yourself for your own task and they're"}, {"start": 2398.72, "end": 2404.72, "text": " pretty pretty powerful there are smaller models for if you don't have a"}, {"start": 2404.72, "end": 2412.16, "text": " TVU that are also pre-trained so check these out as well and thanks a lot for"}, {"start": 2412.16, "end": 2428.92, "text": " watching you guys soon sun fireworks"}] |
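The segment data above walks through word-piece tokenization: keep a fixed-size vocabulary of single characters, common whole words, and frequent sub-word pieces, then split a rare word greedily into the longest pieces available, falling back to characters so nothing is ever out of vocabulary (e.g. "playing" becomes "play" + "##ing", "subscribe" becomes "sub" + "##scribe"). Below is a minimal sketch of that greedy longest-match splitting, assuming a toy vocabulary; the vocabulary and the function name are illustrative assumptions, not the actual BERT WordPiece implementation.

```python
# Minimal sketch of greedy longest-match word-piece splitting, as described in
# the transcript segments above. Toy vocabulary and function name are assumed
# for illustration, not taken from the actual BERT WordPiece code.

TOY_VOCAB = {
    "to", "play", "sub", "scribe",            # common whole words
    "##ing", "##scribe",                      # continuation pieces ("##" = glued to the previous piece)
    *"abcdefghijklmnopqrstuvwxyz",            # single characters as the last-resort fallback
    *("##" + c for c in "abcdefghijklmnopqrstuvwxyz"),
}

def wordpiece_split(word: str, vocab=TOY_VOCAB) -> list:
    """Split one lowercase word into pieces by greedy longest-match."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        # shrink the candidate substring from the right until it is in the vocabulary
        while end > start:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate
            if candidate in vocab:
                pieces.append(candidate)
                break
            end -= 1
        if end == start:          # nothing matched, not even a single character
            return ["[UNK]"]
        start = end
    return pieces

if __name__ == "__main__":
    for w in ["playing", "subscribe", "pewdiepie"]:
        print(w, "->", wordpiece_split(w))
    # playing   -> ['play', '##ing']
    # subscribe -> ['sub', '##scribe']
    # pewdiepie -> ['p', '##e', '##w', ...]   (falls back to characters)
```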
Yannic Kilcher | https://www.youtube.com/watch?v=nPB0ppcnzZA | What’s in a name? The need to nip NIPS | http://tensorlab.cms.caltech.edu/users/anima/pubs/NIPS_Name_Debate.pdf
Abstract:
There has been substantial recent controversy surrounding the use of the acronym "NIPS" for the Neural Information Processing Systems conference, stemming from the fact that the word "nips" is common slang for nipples, and has historically been used as a racial slur targeting people of Japanese origin. Here, we outline the ways in which this acronym has contributed to a hostile environment towards women in machine learning. We argue that an October 2018 decision by the Neural Information Processing Systems board not to change the name of the conference was based on a misunderstanding of the issues that women face in STEM fields, a poorly-designed survey, and a faulty statistical analysis. We applaud the board for a more recent announcement of the new abbreviation "NeurIPS", and emphasize that this name change is an important first step towards the creation of a more inclusive environment in machine learning.
Authors:
Daniela M. Witten, Elana J. Fertig, Animashree Anandkumar, Jeff Dean
References:
https://medium.com/@kristianlum/statistics-we-have-a-problem-304638dc5de5
https://nips.cc/Conferences/2018/News
https://twitter.com/AnimaAnandkumar/status/1055278000588021762
https://www.change.org/p/members-of-nips-board-protestnips-nips-acronym-encourages-sexism-and-is-a-slur-change-the-name
https://twitter.com/AnimaAnandkumar/status/1056971852248018944 | Hello and welcome. Today we're going to look at "What's in a name? The need to nip NIPS" by Daniela Witten, Elana Fertig, Animashree Anandkumar and Jeff Dean. This is a bit of a special paper, as it's not on an academic topic. The paper is in fact about the change of name, or rather the change of acronym, of the conference Neural Information Processing Systems, previously abbreviated NIPS, which this year has for the first time been hosted under the acronym NeurIPS. The people on the paper are not the organizers of the conference; they are advocates for the name change, and the paper basically outlines their arguments and gives a bit of a description of what happened. They're also pretty big names in the community, so it should be interesting to see what they have to say. The paper is pretty short, it's three parts, three pages, and we're going to go through it, so let's jump into it. I have it over here. All right. The first part of the paper, called "What's all the fuss about?", basically describes why a name change was necessary from their perspective. They say that machine learning, like the rest of STEM, suffers from severe gender imbalance, low retention rates for women and so on. They also describe how the MeToo movement increased awareness of the sexual harassment faced by many female researchers and of the pervasiveness of sexual harassment at computational conferences, and they reference an article here. I want to show you this article. If you haven't seen it yet, I encourage you to read it. It's pretty horrifying to read, but it gives you an idea of what people talk about when they say sexual harassment is a pervasive problem at conferences. I don't want to go into it specifically, just go ahead, read it, and see what people are talking about. I think it's important context to understand where people are coming from. They go on to say that more subtle acts of gender harassment, as defined in this report, which include sexist hostility, crude behavior and so on, have gotten less public attention. Nonetheless, gender harassment is extremely pervasive and is a direct contributor to the challenges faced by women in STEM fields. In this article, they argue that NIPS, the former acronym of the Neural Information Processing Systems conference, constituted gender harassment towards women. So that's what their argument is basically about: the acronym had its part in gender harassment towards women and led to an environment where women could not feel comfortable at this conference. Here's their description: in popular slang, the word "nips" is an abbreviation for nipples. Furthermore, it has historically been used as a racial slur targeting people of Japanese origin, but we won't go into this deeper because that's the historic use of the word; the current use of the word is the slang for nipples, so we'll focus on that. They say that at first glance, the fact that a major machine learning conference shared its name with this slang is an unfortunate but unimportant coincidence. And it really is a coincidence: the conference name has been around for longer than this slang word has been popular. Many other conferences have the same kind of coincidence, like COLT, for example. Maybe that one is actually even less of a coincidence than here.
They say: in fact, one might hope that members of the machine learning community are sufficiently mature that the conference's name is unimportant. That's basically what everyone would hope. Maybe people don't even notice, and if they notice, maybe they have a two-second "oh, that's the other word" moment, but then we just go on with our lives and no one cares too much. That's kind of the ideal scenario, and they acknowledge that here, which is really important. They say: unfortunately, this appears not to be the case, and they detail a few examples. At the 2017 conference, Elon Musk made inappropriate jokes about the acronym, and participants wore lewd t-shirts; one, I think, said "my NIPS are NP-hard", which is kind of a double computer science joke, I guess. There was also a pre-conference event with a name that I probably can't say out loud without getting some sort of strike. So you can clearly see that even though the original name is coincidental, and one would hope that people would just shrug it off and be adults about it, there have been jokes and there have been t-shirts made. You can say the name collision itself is unintended, but this wording here is very much intended. So I think one of the main arguments here is really, first of all, that it creates an environment where certain people don't feel comfortable, a kind of sexualized environment, and second of all, in a broader sense, that it's just unprofessional as a community, especially since the community is booming and we want to represent machine learning to the wider world. One can say, okay, it's just unprofessional that we intertwine these things; it doesn't make a good impression. They say, further, reminders of the unfortunate acronym are everywhere: an online search for the acronym turns up, let's say, not-safe-for-work content, the hashtag #nips is devoted to pornography, and if you misspell the conference website you get to an adult site. I think this further supports the argument that it's just an unprofessional appearance towards the outside. It's unfortunate, the conference has been here longer, but still there's a need to do something about it. I largely agree with these arguments; these are good arguments to make for a change of name. The paragraph down here is a bit of a, well, we'll go into that later. It's not very connected to the arguments made here, it's more connected to what's been happening, so we'll go into that later. People have been circulating these arguments and calling for a name change for a while, and then the board of the conference, the NIPS board, made a survey, asking the attendees of the last five years' conferences whether or not the conference should change its name. The next section is dedicated to how the survey turned out and what the response of the board was. Actually, let's first go to the decision by the board. Here is the press release; this is the press release after the survey results had been collected. They said: our survey was returned by about 2,200 people who have attended NIPS in the last five years. Of the male respondents, about 28% are in favor of the conference name change. Of the female respondents, about 44% are in favor of a name change, 40% prefer the existing name, and 16% expressed no preference. In fact, let's go look at the detailed results, which they have down here.
You can see that, overall, there is a big slant towards "not agree": negative 2 is "strongly disagree" with a name change and positive 2 is "strongly agree", and there is a big slant towards the disagree side. If you split this by gender of respondents, you can see that the male distribution shows that slant, while the female distribution is a bit different, as you can see here. First, it is more towards the extremes: there are more people strongly saying something than mildly saying something, on either side. Second, it seems very divided, and very evenly divided. In fact, if you look at the numbers and count the disagrees and the agrees, you'll find a slight majority in the agrees, and a slight majority in the disagrees if you only consider the strong ones; but ultimately these numbers are pretty close, so there are people on either side feeling strongly, and in the survey there are about as many on either side. So that's basically the outcome of this. Here, I find some quotes from respondents very interesting. Respondents had the opportunity to leave a comment, and these are quoted from those comments. One says, for example: thanks for considering a name change. I'm not personally bothered by the current name, but I think the gesture will send a much-needed inclusive vibe in the right direction. One person says: if it were up to me, I'd call off this nice but symbolic gesture and use whatever time, money and energy to make actual changes. Then someone says: please, please, please change the name, it is a sexist and racist slur, I'm embarrassed every time I have to say the name of the conference. This feeds into the unprofessionalism argument. The next one I find very interesting. It says: as a woman, I find it offensive that the board is seriously considering changing the name of the meeting because of an adolescent reference to a woman's body. From my point of view, it shows that the board does not see me as an equal member of the community, but as a woman first and a scientist second. This is extremely interesting. This is one of the female respondents who said they strongly disagree, or disagree, with the name change; I mean, I can only guess. So far we've only heard that the name, or the acronym, is offensive to women, but here we have a woman saying that the consideration to change the acronym is actually offensive to her. That's quite remarkable, but understandable; I can understand why that happens, I can understand the argument made here. This woman feels like, okay, it shows me that basically my gender is what matters and not really my being a scientist. It's an argument. The next one goes in the same direction: I am a woman, I've experienced being harassed by male academics, and I would like this problem to be discussed and addressed, but not in this, frankly, almost offensive way. So another person is basically saying that changing the name is itself almost offensive and not the right way to go about achieving these results. There's another one saying: I'm in favor of the name change, but this is cosmetic. So you basically have people coming from all angles, giving their opinions, and you can clearly see why there is a divide, especially in the female respondent group. And so the board, overall, said the following.
Here, after extensive discussions, the NIPS board has decided not to change the name of the conference for now. The poll itself did not yield a clear consensus on the name change, or well regarded alternative name. Further, they state, instead, we ask the community supporting implementing complete steps to improve the inclusiveness of the conference. So these are described down here. They have a number of changes to make the conference basically more inclusive. So they basically said, okay, so the name change, the name change survey was inconclusive. And they clearly say, whatever we do here, regardless of which decision we take, we're failing to accommodate the opinions about half the women in the community, which is true. This is clearly what you can see from the results, from the quotes. So basically what they say is, we'll not change the conference name for now. We'll implement these steps because what they, I can guess, what they felt was, okay, even the people against the name change were in support of making the conference more inclusive. So they basically say, okay, we do these things. We've strengthened their code of conduct. We have two inclusion diversity chairs. We have an inclusion town hall. We have childcare support, gender inclusive restrooms, and so on, and so on, mentoring breakfasts for women and other minorities. So they take these steps concretely. They say, this is what we do. And even further, if you look at their page on diversity and inclusion, which I have here, they say here on the top, in addition to hosting diversity related event, the conference is also making consider instruction changes, include a new code of conduct, we've already seen, and in depth discussion of the potential of changing the name of the conference. So in total, what they're saying is, we've done this poll. It came back inconclusive, which I think has been clearly demonstrated, will not change the name of the conference for now, and will do all of these other, all of these other things, right, down there. And at the conference, we'll hold a meeting and discuss the name change. So we could maybe potentially change it in upcoming years, right. I think this is a really sensible decision by the board. I mean, given the state, given all of that, this is probably the most sensible decision, like let's take concrete steps, the name change seems to be, you know, debatable. So let's actually debate it at the conference with the actual community. So that was the basically result of the poll. Let's now go back to what the paper has to say about this. So here's the paper again, and they say in order to collect data about the machine learning community's feelings about the conference name, the conference boards end out a survey to people who have attended the conference during the past five years. However, serving only conference at these results in a very biased sample of a much larger community of potential machine learning researchers, bias arises due to the fact that some people who are made uncomfortable by the name or by other aspects of the machine learning culture may have decided not to enter or to remain in the field, have chosen not to attend the conference. So basically, you're saying, well, if you only ask this one group of people, right, then this other group of people, you know, doesn't have a chance to make their voice heard. 
And there is basically bias because in this other group of people, the people who have not attended the conference, they would have a severely different opinion from the people who have not attended the conference. So first of all, I think this can be a valid point here. Of course, always if you ask one group of people and exclude another one, there's if the group you ask and the target group, which here is really unclear what it is. I guess it's the machine learning community considering going to the conference. If those don't overlap, then you will introduce some sort of bias. And they say, okay, bias could come from the fact, you know, some people who actually are affected by these problems of which this name is one, they may have, you know, not attended the conference because they may have left the field because the gender harassment is so pervasive and they just didn't didn't stay. And so I think this can be a good point. But the problem I have with it here is that it's simply stated without anything. It's simply said, okay, there is bias, bias arises. And my question would be, how much is that bias? If any data, like any data on this, you can't just criticize these, the survey for being biased and then not provide actual data, like how many people are there who are made uncomfortable by the name or have left the field in, who have left the field because of these things. And is it really viable to count them in, I guess, okay, we can argue it is. But how would they have responded to this? We've clearly seen that a lot of affected people that even have experienced harassment are not in favor of the name change. So in this case, I would really like to see some data on how much this bias is, right? And I cannot also say it's not that bad of a decision to what the board did to send the survey to the last five years at attendees. I think it's a very sensible choice if you want to gather the communities feelings towards these kinds of things. I mean, you can't just ask the entire world because the entire world is not the machine learning community. So I think the, this is a very sensible decision to ask last five years attendees. And if you have real evidence that this causes a notifiable, like a significant bias, then we could potentially correct for that bias. But without any data on that, I think the, the asking last five years participants was completely reasonable. And one of, I don't really see how you can do a much better job without much, much more manual work. And I want to make this point a bit clearer on how hard it actually is to do that by pointing to the response to this. So here's a tweet thread by one of the authors of this paper. After the conference decision came out, she basically tweeted out this protest nips, I am starting this new hashtag, please retweet if you're in support of the nips conference changing its name. So basically kind of launching a, a Twitter campaign, a Twitter hashtag under this to come, you know, get into a conversation with people about this. People could express their support. She also, that was a misclick. She also here made a change.org petition to change the name. So a petition, basically, the petition is here. The text of the petition basically says something similar to the, to the what we've already seen, including there is the criticism of the survey. And as you can see here, about 2000 people have signed it. So I mean, a Twitter hashtag is all good, you know, you can do that. A petition is all good. You can do that. 
But it's a bit ironic because a change to the work petition literally anyone can sign this. And in addition to that, there's only one option. You can only say yes, you can't even say no. Right. So and even more, who's going to see the change that or petition is going to be the social media followers of these people, right? So basically, you have now a, you have it now, what's basically a survey of the social media network of people in favor of changing the name, where there's only one option to respond. I find it. And so I've gone through here, the people who actually publicly associate their name, give a reason for signing a lot of these, they, you know, they give them argument why they've signed the petition. But I've tried searching these people for any sort of academic track record. And in my sample, I've come up with between 10 and 20% of people who somehow have an academic track record. So this is, I mean, certainly a valid thing to make your voice heard and to show your numbers. And but I mean, look at this, it's a bot, signing twice. Hello, Jack Nelson and Richard Chi. Very nice. But so basically, I'm not here to criticize petitions, but what I want to say is you can't like criticize this, this poll so hard for being biased. And then launching basically an own poll that's even more biased and even more non-representative of the community. To me, that's, that's kind of ironic. And just goes to show how hard this is and my argument would be it's actually not that unsensible of a decision of the board the way they did it. And if you have, again, if you have data, to actually quantify the bias here, then it's viable to go and correct for that. All right. So to the, they go on to analyze the survey results conference board, simply noted that of the 294 women surveyed, the number who strongly support or support the name change is comparable to the number of women who are strongly opposed to opposed. However, this analysis implicitly assumes that one person's feeling of discomfort or marginalization as a result of the name should be given the same weight as another's persons preference for the status quo. This amounts to giving the same way to false positives and false negatives. Of course, we learn in an introductory statistics course that false positives and false negatives should be assigned weights dependent on context. In this context, we feel that a much greater weight should be given to the views of a person who feels marginalized as a result of the name. So up here, I find this a bit strange. They say, this amounts to giving the same way to false positives and false negatives. To me, the false is here a bit confusing because it seems to me it's simply giving the same way to negatives and positives. I don't think there's a need to dress this up in statistical lingo here. It's simply we give the same way to people who responded positively and to people who responded negatively. I think that's it. There's no false. Of course, we learn in an introductory statistics class that false positives and false negatives should be assigned weights dependent on context. In this context, we feel that a much greater weight should be given to the views of a person who feels marginalized as a result of the name. I would say to this, it's the problem for me. This is one of the things that where you at really first and you say like, oh yeah, this makes sense. But first of all, it's framed extremely one-sided. 
It's framed as all the people who are for the name change like they feel discomforted, they feel marginalized and the people who are against the name change, they simply, and here specifically, they talk about the women group. So in argument, they're all affected, the people against it simply prefer the status quo. But we've clearly seen in the press release and we'll go over to that now. These quotes here, we've clearly seen that the offense and the marginalization happens on both sides. So here, as a woman, I find it offensive that the board is considering changing the name. It shows that the board does not see me as an equal member of the community, but as a woman first and the scientist second. I mean, this is almost a textbook definition of marginalization. And this is clearly happening on the other side as well. So I think the framing here is extremely dishonest in one-sided and there is given basically the side that we just seen in this quote is given absolutely no, not even a mention that it exists. It's simply framed as this side is marginalized and oppressed and discomforted and the other side simply prefers the status quo. But we've clearly seen that, yeah, this almost fits exactly this definition. It's just one person's feeling or discomfort or marginalization as a result of the name. It's just as a result of the name change. Second of all, I think the bigger problem and this goes into the statement down here to state this last point more explicitly. An issue adversely affecting the minority of participants should not be decided by a majority vote. Again, something at first you say, oh yeah, that makes sense. But if you think about it, this is a really, really outrageous statement. And the reason is it's outrageous is if it's not majority vote, if it's not one person, one vote, then someone has to decide who gets to vote and who doesn't. And more so specifically here, someone basically needs to decide who should be given what weight in a vote. You need someone to decide this. And here you can say, well, it's easy. It's just the women because they're affected. I did, but they go further. They say, well, it's the women who feel discomforted and marginalized, who should be given more weight than the ones who simply prefer the status quo. But then you have to have someone assessing whether someone is really marginalized and discomforted or simply prefers the status quo. And it's not like an environment where there is kind of a sexist undertone isn't also discomforting or can't also be discomforting to men, to men of any sort or people of any sort of gender. It's just not clear. The fact that people should be given different weight in crafting an opinion. I mean, this can be true if you have like some clear area of expertise. But in this case, it's really unclear. And the fact is if it's not majority vote, you need someone deciding the weight. And the someone deciding the weight automatically decides on the outcome of the vote. And then why do you need a vote in the first place? Basically, up here they say, yeah, we feel the great weight should be aligned like this. And but down here, there is no more we feel. It's be an issue adverse affecting the minority presumed to should not be decided by majority vote. They're basically calling for a dictatorship. In this case, and I'm going to guess like everyone has the opinion that dictatorship would be an awesome idea if the dictator were me. Right? That's that's what everyone thinks, of course. And that's basically the argument made here. 
But it's not it's not true. And there's some really, really disturbing implicit things in here. And maybe I want to quickly go over how I think a democratic decision works. So imagine you have a person. And the person has a decision to make four or against. In this case, the name change, right? And the person must decide on one of these two things on a let's say on a continuous scale. But it doesn't matter. What this, what this stuff up here basically implicitly assumes is that the person looks at themselves and they think, well, am I personally discomforted or marginalized by the name or the climate it creates? No, and I'm obviously against the name change because it doesn't help me. Or another person go and I personally affected. Yes. Well, I feel discomfort or marginalized. Well, then I'm obviously for a name change. So the basic assumption here is that people simply vote purely their own egotistical interests. And that's that's it. So basically, if you're in one of these minorities, then you'll vote for the name change because it affects you, which we've already seen is not, it's not a given that people vote that way. And if you're not in this, then you know, you you'd vote against, but you're not affected. So your vote shouldn't count. It's completely untrue. What people do, especially smart people, and I believe the machine learning community consists largely of these. What they do is they'll make a list of arguments, argument one, argument two, argument three, argument four. Everyone has the same arguments. Everyone's heard the same arguments. If not, then maybe there's some work to do in actually getting arguments to people. But that's not the same as weighing the people differently. You get the arguments to the people and then you weigh each of them equally. Why? Because what every person does is they say, okay, argument one is maybe it's unprofessional, right? Name is unprofessional. All right, how important is that to me? Give it a weight, weight one. Cool. That's really important to me. I'll give you the big weight. Argument two. Some people feel really discomforted, if you're marginalized, by the name creates a bad environment for them. How much weight am I going to give to that? So people can actually consider other people's feelings and other people's problems and decide on what's the best also for them in their own mind. So they give it a weight two. Then there's maybe two arguments against them giving these weight three, weight four. At the end, what you have is you have argument i. You will sum it up by the weight w i j. You will sum it up over all people. So basically now and this will give you like a final number a, which is either positive or negative. If it's positive, you do the name change. If it's negative, you don't do the name change. If you do this over all people, what you've basically done is you have just determined these weightings here by a democratic process. You've crowdsourced the weighting. This is exactly what these people say up here. We feel that here, not false, false, positive, false. We feel that positives and negatives should be assigned weights dependent on context. So the positive and negative arguments in this case are assigned weights dependent on context, but the weights are crowdsourced to the community. Right? And each person, this who participates in that, each person who participates is one more brain power in a complicated decision that no one basically, no one has the authority to just decide for themselves. 
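As a rough formalization of the tally just described (the notation here is an assumption of mine, not from the paper or the video): let each argument i have a direction s_i in {+1, -1}, +1 if it speaks for the name change and -1 if against it, and let w_ij >= 0 be the weight that voter j privately gives argument i. A one-person-one-vote aggregate then looks like:

```latex
% Assumed notation, sketching the crowd-sourced weighting described above.
A \;=\; \sum_{j \in \text{voters}} \; \underbrace{\sum_{i \in \text{arguments}} s_i \, w_{ij}}_{\text{voter } j\text{'s overall leaning}} ,
\qquad \text{change the name if } A > 0 .
```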
So these people are calling for a different weighting. This is the way to do it. The democratic majority vote is the exact way to determine these weights. What these people basically are, is no, no, no, no, no, no, we should determine the weights. We who know, I'm a bit corny here, but this is basically it's still it's two alternatives. Either you do democratic process, one person, one brain, one vote, and that will give you a crowdsourced crowdsourced true weighting of the arguments, what the community feels, or someone needs to decide. Some one needs to decide by force basically, and that's a dictatorship. So these are the choices you have. And clearly, now you can maybe understand why I say this is an outrageous statement because to me, the dictatorship option is not an option. Note that I'm not saying that democracy can never be wrong or the majority can never be wrong, but in fact, it's the best system there is. It can be wrong, but anything else will undoubtedly go more wrong. So that's my point here. All right, so that was maybe a bit ranty, but let's go on. A false choice and a minimization of a real issue. So they go on to say what they think of the decision that the board made in response to this. So up was how they analyze the poll, and now it's the decision. In announcing their decision, not to change the conference name, conference board expressed commitment to implement concrete steps to improve the inclusiveness of the conference. And they list them here and they say we sincerely applaud conference board for these efforts. Okay, I think the community feels like that as well. However, the wording of the decision implied the need to choose between changing the name of the conference and taking concrete steps to improve its inclusiveness. I don't see that at all. Say this was a false choice. There's no reason that the board could not do both. Yes, there's no reason that they couldn't do both. And I believe we've read this together before. I don't think the board ever said that there was a choice between one or the other. I think they've said very much the opposite. Let's go back. I think what they mean here is the word instead. So here they say we won't change the name. And then here's they say instead we ask for the community support and implementing concrete steps. I think this this must be it because I don't really see any other way you would ever think that. And the reason is this here. They say well not change the name of the comments for now. On another page they say we'll discuss the name change at the conference. And then here instead I think what is meant is instead what we will do right now is these things. We'll discuss about the name change but what we will do right now which was basically not the real problem in the first place. The real issue raised was the name. So instead of that issue we'll do these other things which we feel the community wants. I think that's the I think there's no I think everyone reading this comes to the same conclusion after after reading that but I so I really don't see how you you can say that this is kind of presented as an either or by the board. I don't think that at all. And you decide for yourself. I believe the real real the real crux here is the for now and the promise to discuss at the conference which if you can see here in the paper is never ever ever touched right. They make it basically seem that the board has decided to not change the name and that's it which is completely wrong. 
They've clearly stated their openness to a name change, and they want to discuss it. The poll was just inconclusive, so they don't want to do anything rash when about half the community is against it anyway; they want to discuss it. So to say that the wording implied a need to choose, I don't see that. But you know, you decide for yourselves. The board suggested a name change would only be symbolic, would have no real consequences, and so on; these are some of the arguments made in the quotes as well. But the fact that the name change would only be symbolic and so on, these are all things you could actually discuss at the conference meeting. You could even correct for your poll, right? You could invite people who have left the community to represent those voices, you could invite potential new researchers, you could give everyone their voice and then actually listen to all of them. I think that's a very sensible decision by the board, and I think it is misrepresented here. Lastly they say: another argument, though not explicitly mentioned, is that a number of machine learning researchers told us that changing the name of the conference would lead to too much confusion in the community; while we understand this concern, we respectfully do not share it. I mean, this is basically an argument against the name change, and I think it's also a point worthy of discussion. They say they respectfully do not share this point. Okay, they don't share it; other people do, it's a point of discussion. You could actually discuss it at the conference, but I actually agree with the authors here: I think changing the name will not have a big impact on the recognizability of the conference, especially now. Down here we'll actually get into what actually happened. In November, in response to extensive public backlash, the conference board announced a change to the official conference acronym to NeurIPS. The authors say they are pleased, that this provides a reasonable compromise. So in my opinion, as far as solutions go, this is a good solution. The NeurIPS acronym, I think it's cool. You don't have to change the name of the conference itself, you simply change the acronym, which was the reported problem in the first place. I think, between old and new papers, people will still recognize the old NIPS acronym and the new conference; it will be clear that it's the same thing, and I think this is a very good new name that people will get used to pretty quickly. Also, NeurIPS rolls off the tongue easily. So as far as solutions go, I like it. Further they say: however, the work for the conference board is far from done, we encourage the board to continue its efforts, and so on. So they say, okay, you have to do more than just change the name. They say "together these steps will help ensure that the NeurIPS conference retains its place at the forefront of machine learning research while also creating a welcoming environment for women and members of other underrepresented groups." We all hope that. To me, the problem is a bit how this went down, and if we go back and look at the actual press release of the name change, they say here: "Dear members of the Neural Information Processing Systems community. Something remarkable has happened in our community. The name NeurIPS has sprung up organically as an alternative acronym, and we're delighted to see it being adopted."
"Indeed, one forward-thinking member of the community purchased neurips.com, describing its purpose as hosting conference content under a different acronym until the board catches up. We've caught up. We were considering alternative acronyms when the community support for NeurIPS became apparent. We ask all attendees to respect this solution from the community and use the new acronym." So basically they rebranded the entire conference about a month before the actual meeting, and asked all sponsors, all invited companies, all invited papers to rebrand to the new acronym. To me the wording here is a bit funny. It's like: something remarkable has happened in our community, it has sprung up organically, and now we'll just adopt it. It seems much less like the fairy tale that is described here and much more like: there's a mob with pitchforks around your house and this is the first straw that you can grab to make them calm down. I also know that some companies had begun pulling funding for the conference. So I think this was really much more backed by force, and by what they call in the paper "extensive public backlash", so loud screaming basically, than by this story of a name that has sprung up organically and been adopted; it seems a bit more forceful than that. To me it would still have been a viable path, the most valuable path, to actually wait for the conference and have that discussion, and then, if indeed this name NeurIPS were presented as a good alternative and people were fine with it, you could still make the name change for next year. I think that would have been a good alternative. My fear now is that this has been extremely rash and extremely forceful, as I've said, also accompanied by withdrawal of funding, and I believe these things usually provoke a backlash, which is really something I wouldn't look forward to. So I hope that this paragraph down here is true, that we will actually see a more welcoming environment for everyone, but I believe things like this sometimes tend, in society, to have the very opposite effect of what's intended, and so I hope this does not produce a backlash. I think having the actual discussion and doing things non-rashly would have done much more in the direction of preventing such a backlash. So this is the end of the paper. To recap: they say the acronym was inappropriate, which I agree with. They say the survey was bad, which I could believe if there were data. They say that an issue adversely affecting a minority of participants should not be decided by majority vote, which I absolutely disagree with. And they say the board has basically stated this as an either-or decision, which I believe is not true and misrepresenting, or maybe I've missed something, that's always possible. Lastly, I want to get to this paragraph: "In recent months, a number of women, including some of the authors of this article, who publicly expressed support for a change of the conference name have been relentlessly trolled, harassed, verbally abused and even physically threatened on Twitter, Reddit and other online forums. Much of this harassment", they say, "has been anonymous and typically has had an extremely gendered tone. Furthermore, some students have reached out to us", the authors, "lamenting the fact that they felt unable to openly express their support for renaming the conference due to fear of bullying or retaliation by faculty advisors or others in positions of power."
This, I believe, is really bad. The fact that people can't speak out about something like this without being bullied or harassed, or having to fear for their careers, is bad, and I would really discourage everyone from engaging in such behavior: verbal abuse, physical threats. To one point you can say, all right, if you've been on the internet for longer than a week, then this has probably happened to you if you've had any sort of serious discussion on the internet, but you can also say that doesn't make it right. So I believe it's really important to separate what is harassment from actual disagreement and criticism, and please engage in the latter, do not engage in the former. My problem with this paragraph is, again, that it's very one-sided. It states that some students have reached out to the authors lamenting the fact that they felt unable to openly express their support for renaming the conference due to fear of bullying or retaliation by faculty, advisors or others in positions of power. To that I'm going to say: this probably happens on both sides. One could argue where it happens more, but it very much happens on both sides of this issue, and it's a real shame for both sides; I think anyone should be able to express their opinion. To demonstrate that, I'm going to show another Twitter thread by one of the authors of this paper, a thread where she posts screenshots of conversations, basically people reaching out to her saying exactly that: I have trouble sharing my opinion, I get mocked for my opinion, I can't do so publicly because I fear, you know, my faculty and so on. But then there's also this one here, where a person wrote an email to the author basically saying they disagree with her. I've read this email; I don't agree with the arguments made in it, but I can say that it is not verbal abuse, it's not a personal attack, it's not physically threatening, it's actually quite respectful disagreement. The person goes to great lengths to say how respectful they are, how much this is meant as a disagreement on factual terms. And further, they say that they want to be anonymous; you can maybe see it at the very bottom, for example: I haven't done too much to anonymize myself, but I ask you to respect my wish of remaining anonymous, don't try to figure out who I am. Further up they state that they want to remain anonymous because they fear for their later career, they fear a backlash: they wish to remain anonymous as they're early in their career and someday the two may work together. So basically they say: I disagree, here's why I disagree, and they wish to remain anonymous because they fear for their career. So this is very much the case of someone feeling unable to openly express their, in this case, opposition to renaming the conference, due to fear of bullying or retaliation by faculty, advisors or others in positions of power. This author here is obviously a real person in a position of power, a very famous senior researcher, and this person basically says, I'm afraid, and that's why I'm anonymous. And the way the author responded here, as you can read, is: what an anonymous coward, of course I will do everything to guess you. And it's difficult to kind of put this off as, I mean, I don't know how it's meant, right, but "I will do everything to guess you", at the least it
means she will try to figure out who that is right and she doesn't go as far as saying that she will then basically either you know remember that name in case of any future thing or share it or whatnot but it's certainly you can't argue that this is a real deterrent for other people to even anonymously voice their opinion to if this person announces I will do everything to guess you to me that that shows that this fear that we discuss here is very much present on both sides and it's absolutely not okay if if either side reacts by basically my basically retaliation or even even the the possibility of retaliation and I believe everyone should be able to say their opinion I respect really everyone even like these these authors here clearly took a lot of effort and a lot of a lot of beating basically they say they've been relentlessly trolled harassed verbally abused even physically threatened this is just really bad and have lots of respect for them saying their opinions stating their opinions anyway I think everyone should be able to do that without these things happening so to everyone watching I encourage you to not engage in these things and that alone will probably make the environment much much more inclusive and nice for everybody irregardless of of affiliation so that was it for me for this paper it's a bit longer it's a bit ranty if you agree disagree let me know in the comments yes and other than that have a nice week weekend whatever you do bye | [{"start": 0.0, "end": 6.12, "text": " Hello and welcome. Today we're going to look at what's in a name, the Neetnip Nips by Daniela"}, {"start": 6.12, "end": 13.08, "text": " Whitten, Alino Ferdig, Anemasheri, Anankomar and Jeff Dean. This is a bit of a special paper"}, {"start": 13.08, "end": 20.16, "text": " as it's not an academic topic. The paper in fact is about the change of name or other"}, {"start": 20.16, "end": 25.96, "text": " change in acronym for the conference neural information processing systems. Previously"}, {"start": 25.96, "end": 31.36, "text": " abbreviated Nips, but now for the first year this conference has been hosted under the"}, {"start": 31.36, "end": 38.88, "text": " acronym NURRIPS. The people here on the paper are not the organizers of the conference."}, {"start": 38.88, "end": 45.400000000000006, "text": " They are advocates for the name change and the paper basically outlines their arguments"}, {"start": 45.400000000000006, "end": 52.96, "text": " and a bit of description of what happened. So they're also pretty big names in the community,"}, {"start": 52.96, "end": 58.64, "text": " so it should be interesting to see what they have to say. The paper is pretty short, it's"}, {"start": 58.64, "end": 65.88, "text": " three parts, three pages and we're going to go through it and yeah, let's jump into"}, {"start": 65.88, "end": 74.44, "text": " it. So I have it over here. All right. So the first part of the paper basically describes,"}, {"start": 74.44, "end": 79.48, "text": " it's called what's all the flaws about. It basically describes why an name change was"}, {"start": 79.48, "end": 86.64, "text": " necessary in their perspective. So they say in machine learning, like the rest of Samson"}, {"start": 86.64, "end": 96.48, "text": " suffers from severe gender imbalance, low retention rates for women and so on. 
They also"}, {"start": 96.48, "end": 102.60000000000001, "text": " describe the Me Too movement, increased awareness of sexual harassment faced by many female"}, {"start": 102.60000000000001, "end": 109.2, "text": " researchers, pervasiveness of sexual harassment at computational conferences and their reference"}, {"start": 109.2, "end": 119.60000000000001, "text": " in article here. I want to kind of show you this article. It's this article here. So if"}, {"start": 119.60000000000001, "end": 125.96000000000001, "text": " you haven't seen this yet, I encourage you to read it. It's pretty horrifying to read,"}, {"start": 125.96000000000001, "end": 131.48000000000002, "text": " but it gives you an idea of what people talk about when they say sexual harassment is a problem"}, {"start": 131.48000000000002, "end": 138.48000000000002, "text": " is pervasive at conferences and so on. So yeah, just I don't want to go into this specifically,"}, {"start": 138.48, "end": 147.16, "text": " just go ahead read it and see what people are talking about. So I think it's important context"}, {"start": 147.16, "end": 154.92, "text": " to understand where people are coming from. So they go on to say, have more subtle acts"}, {"start": 154.92, "end": 164.79999999999998, "text": " of gender harassment. They defined in this report, this includes like sexist hostility,"}, {"start": 164.8, "end": 170.08, "text": " crude behavior and so on, have gotten less public attention. Nonetheless, gender harassment"}, {"start": 170.08, "end": 174.88000000000002, "text": " is extremely pervasive. It's a direct contributor to the challenges faced by women in this"}, {"start": 174.88000000000002, "end": 179.68, "text": " STEM field. In this article, we argue that NIPs, the former acronym of the NERL Information"}, {"start": 179.68, "end": 184.36, "text": " Processing Systems Conference, constituted gender harassment towards women. So that's"}, {"start": 184.36, "end": 192.4, "text": " what their arguments basically about. So the acronym led to basically had its part in"}, {"start": 192.4, "end": 199.4, "text": " gender harassment towards women basically led to an environment where women could not"}, {"start": 199.4, "end": 210.68, "text": " feel comfortable at this conference. So here's their description. In popular slang, the word"}, {"start": 210.68, "end": 217.92000000000002, "text": " NIPs is an abbreviation for NIPs. Furthermore, it has a story that has been used as a racial"}, {"start": 217.92, "end": 223.32, "text": " slur targeting people of Japanese origin, but will not go into this deeper because that's"}, {"start": 223.32, "end": 229.79999999999998, "text": " kind of a historic use of the word. The current use of the word in fact is the slang for NIPs"}, {"start": 229.79999999999998, "end": 237.51999999999998, "text": " and so we'll focus on that. They say at first glance the fact that a major machine learning"}, {"start": 237.51999999999998, "end": 245.32, "text": " conference shared its name with this slang is an unfortunate but unimportant coincidence."}, {"start": 245.32, "end": 250.12, "text": " When it really is a coincidence, I think the conference name has been around for a longer"}, {"start": 250.12, "end": 256.56, "text": " than this slang has been kind of popular. This slang word has been popular. So it really"}, {"start": 256.56, "end": 264.71999999999997, "text": " is a coincidence. 
Many other conferences have same coincidences like cult, for example."}, {"start": 264.71999999999997, "end": 271.32, "text": " Maybe actually that's even less a coincidence than here. They say in fact one might hope"}, {"start": 271.32, "end": 276.0, "text": " that members of the machine learning community are sufficiently mature that the conference"}, {"start": 276.0, "end": 282.24, "text": " is name is unimportant. That's basically what everyone would hope. Maybe people don't"}, {"start": 282.24, "end": 287.56, "text": " even notice and if they notice, maybe they'll have like a two second, oh that's the other"}, {"start": 287.56, "end": 295.32, "text": " word. But then we basically just go on with our lives and no one cares too much. So that's"}, {"start": 295.32, "end": 302.32, "text": " kind of the ideal scenario and they acknowledge that here. It's really important that they"}, {"start": 302.32, "end": 310.64, "text": " do. They say, right? Unfortunately this appears not to be the case and they detail a few examples"}, {"start": 310.64, "end": 315.24, "text": " here at the 2017 conference Elon Musk made inappropriate jokes about the acronym,"}, {"start": 315.24, "end": 321.56, "text": " Participants War Loot T-shirts. I think once said my NIPs are NP hard, which is kind of"}, {"start": 321.56, "end": 331.84, "text": " a double computer science joke, I guess. There was a pre-conference event named where"}, {"start": 331.84, "end": 339.48, "text": " I can't probably say out loud without getting some sort of strike. You can clearly see that"}, {"start": 339.48, "end": 345.68, "text": " even though the kind of original name is going sedental and one would hope that people"}, {"start": 345.68, "end": 353.76, "text": " are like, just putting it off, be adults about it. There have been jokes, there have been"}, {"start": 353.76, "end": 361.88, "text": " t-shirts made and you can say the name collision is not like is unintended but I think this"}, {"start": 361.88, "end": 371.56, "text": " word here is very intended. So I think the main argument here, one of the main arguments"}, {"start": 371.56, "end": 378.4, "text": " is this really, first of all, it creates an environment where certain people don't feel"}, {"start": 378.4, "end": 384.36, "text": " comfortable. It creates kind of a sexualized environment. Second of all, in a more broader"}, {"start": 384.36, "end": 391.28, "text": " sense, it's just unprofessional as a community, especially since the kind of community is"}, {"start": 391.28, "end": 398.32, "text": " booming. We want to represent machine learning to the wider world, one can say, okay, it's"}, {"start": 398.32, "end": 407.28, "text": " just unprofessional that we kind of bring, intertwine these things, it doesn't make a good"}, {"start": 407.28, "end": 412.24, "text": " impression. They say further, more reminders of the unfortunate acronym are everywhere."}, {"start": 412.24, "end": 417.64, "text": " Online search for the acronym, let's not say for content, the hashtag nipsisdevoted to"}, {"start": 417.64, "end": 425.76, "text": " pornography. If you misspell the conference website, you get to an adult site and I think"}, {"start": 425.76, "end": 430.84, "text": " this further goes into the argument that it's just an unprofessional appearance towards"}, {"start": 430.84, "end": 436.84, "text": " the outside. It's unfortunate, the conference has been here longer, but still there's a need"}, {"start": 436.84, "end": 442.36, "text": " to do something about it. 
I largely agree with these arguments, these are good arguments"}, {"start": 442.36, "end": 452.32, "text": " to make for a change of name. This paragraph down here, it's a bit of a, we'll go into that"}, {"start": 452.32, "end": 459.88, "text": " later. It's not very connected to the arguments made here, so it's more connected to what's"}, {"start": 459.88, "end": 465.36, "text": " been happening, so we'll go into that later. People have been circulating these arguments"}, {"start": 465.36, "end": 471.15999999999997, "text": " and calling for a name change for a while, and then the board of the conference, the nips"}, {"start": 471.15999999999997, "end": 478.4, "text": " board, made a survey, surveying the attendance of the last five years conferences, whether"}, {"start": 478.4, "end": 487.44, "text": " or not the conference should change its name. The next section is dedicated to how the survey"}, {"start": 487.44, "end": 499.88, "text": " turned out and what the response of the board was. Actually, let's first go to the decision"}, {"start": 499.88, "end": 508.28, "text": " by the board. Here is the press release. This is a press release after the survey results"}, {"start": 508.28, "end": 521.0, "text": " had been collected. They said our survey was returned by about 2,200 people here. As I"}, {"start": 521.0, "end": 527.28, "text": " said, I have attended nips in the last five years. Of the mail respondents, about 28% are"}, {"start": 527.28, "end": 532.4399999999999, "text": " in favor of the conference name change. Of the female respondents, about 44% are in favor"}, {"start": 532.44, "end": 539.4000000000001, "text": " of a name change. 40% prefer existing names, 16% express no preferences. In fact, let's"}, {"start": 539.4000000000001, "end": 545.0, "text": " go look at the detailed results, which they have down here. You can see, overall, there"}, {"start": 545.0, "end": 552.7600000000001, "text": " is a big slant towards not agree, so negative 2 is strongly disagree with a name change,"}, {"start": 552.7600000000001, "end": 560.48, "text": " well, positive 2 is strongly agree. You can see there is a big slant towards the not agree."}, {"start": 560.48, "end": 569.2, "text": " If you split this by gender of respondents, then you can see the mail distribution is"}, {"start": 569.2, "end": 575.24, "text": " that slant, while the female distribution is a bit different, as you can see here. The"}, {"start": 575.24, "end": 584.44, "text": " first thing is mostly towards the extremes. So there are more people strongly saying something,"}, {"start": 584.44, "end": 591.6800000000001, "text": " non-strongly saying something, to either side. The second of all, it seems very divided"}, {"start": 591.6800000000001, "end": 596.8800000000001, "text": " and very evenly divided. So, in fact, if you look at the numbers, if you count the"}, {"start": 596.8800000000001, "end": 603.6400000000001, "text": " disagrees and the agrees, you'll find there is a slight majority in the agrees. There"}, {"start": 603.6400000000001, "end": 609.8800000000001, "text": " is a slight majority in the disagrees if you only consider the strong, but ultimately"}, {"start": 609.88, "end": 615.76, "text": " these numbers are pretty close, so that there's people on either side feeling strongly, and"}, {"start": 615.76, "end": 626.32, "text": " there's about in the survey, about as many on either side. So that's basically the outcome"}, {"start": 626.32, "end": 631.4, "text": " of this. 
Here, I find very interesting some quotes from respondents. So, you had the"}, {"start": 631.4, "end": 637.92, "text": " opportunity to put quotes, to put a comment, and these are quoted from these comments. So,"}, {"start": 637.92, "end": 643.0799999999999, "text": " they say, for example, this, thanks for considering a name change. I'm not personally bothered by"}, {"start": 643.0799999999999, "end": 648.92, "text": " the current name, but I think the gesture will send a much needed, inclusive, five in the"}, {"start": 648.92, "end": 657.8, "text": " right direction. One person says, if you were up to me at call of this nice, but symbolic"}, {"start": 657.8, "end": 665.64, "text": " gesture, use whatever time, money, and energy to make actual changes, then someone says,"}, {"start": 665.64, "end": 669.64, "text": " please, please, please change the name, it is sexist and racist, Schler. Schler, I'm"}, {"start": 669.64, "end": 674.52, "text": " embarrassed every time I have to say the name of the conference, his feeds into the, on"}, {"start": 674.52, "end": 680.12, "text": " professionalism argument. The next one I find very interesting, it says, as a woman, I"}, {"start": 680.12, "end": 684.84, "text": " find it offensive, that the board is seriously considering changing the name of the meeting"}, {"start": 684.84, "end": 689.52, "text": " because of an adolescent reference to a woman's body. From my point of view, it shows that"}, {"start": 689.52, "end": 694.36, "text": " the board does not see me as an equal member of the community, but as a woman first and"}, {"start": 694.36, "end": 703.36, "text": " second, this is extremely interesting. This is one of the people who was a female respondent"}, {"start": 703.36, "end": 709.5600000000001, "text": " and said strongly disagree with the name change, or disagree with the name change. I mean,"}, {"start": 709.5600000000001, "end": 720.04, "text": " I can guess. We've only heard so far that the name or the acronym is offensive to women,"}, {"start": 720.04, "end": 727.9599999999999, "text": " but here we have a woman saying that the consideration to change the acronym is actually offensive"}, {"start": 727.9599999999999, "end": 738.68, "text": " to her. That's very special, understandable. I can understand why that happens. I can"}, {"start": 738.68, "end": 746.88, "text": " understand the argument made here. This woman feels like, okay, it shows me that basically"}, {"start": 746.88, "end": 756.16, "text": " my gender is important and not really my being scientist. It's an argument. The next one"}, {"start": 756.16, "end": 760.56, "text": " goes into the same direction, says, I am a woman, I've experienced being harassed by male"}, {"start": 760.56, "end": 765.2, "text": " academics, and I would like this problem to be discussed and addressed, but not in this"}, {"start": 765.2, "end": 771.6, "text": " frankly, almost offensive way. So another person saying, basically, that's changing the"}, {"start": 771.6, "end": 779.88, "text": " name is kind of the almost offensive, and it's not the right way to go to achieve these"}, {"start": 779.88, "end": 786.0, "text": " results. 
There's another one saying, I'm in favor of the name change, but this is cosmetic."}, {"start": 786.0, "end": 793.0400000000001, "text": " So you have basically people coming from all angles, giving their opinions, and you"}, {"start": 793.0400000000001, "end": 799.5600000000001, "text": " can clearly see why there is, especially in the female respondent group, why there is"}, {"start": 799.56, "end": 811.3599999999999, "text": " a divide. And so the board, overall, said the following. Wow, this is never mind. The"}, {"start": 811.3599999999999, "end": 819.4399999999999, "text": " board overall said the following. Here, after extensive discussions, the NIPS board has"}, {"start": 819.4399999999999, "end": 825.68, "text": " decided not to change the name of the conference for now. The poll itself did not yield a clear"}, {"start": 825.68, "end": 833.28, "text": " consensus on the name change, or well regarded alternative name. Further, they state, instead,"}, {"start": 833.28, "end": 837.64, "text": " we ask the community supporting implementing complete steps to improve the inclusiveness"}, {"start": 837.64, "end": 843.64, "text": " of the conference. So these are described down here. They have a number of changes to"}, {"start": 843.64, "end": 852.5999999999999, "text": " make the conference basically more inclusive. So they basically said, okay, so the name"}, {"start": 852.6, "end": 862.28, "text": " change, the name change survey was inconclusive. And they clearly say, whatever we do here,"}, {"start": 862.28, "end": 865.8000000000001, "text": " regardless of which decision we take, we're failing to accommodate the opinions about"}, {"start": 865.8000000000001, "end": 870.44, "text": " half the women in the community, which is true. This is clearly what you can see from"}, {"start": 870.44, "end": 875.5600000000001, "text": " the results, from the quotes. So basically what they say is, we'll not change the conference"}, {"start": 875.5600000000001, "end": 882.12, "text": " name for now. We'll implement these steps because what they, I can guess, what they felt"}, {"start": 882.12, "end": 887.64, "text": " was, okay, even the people against the name change were in support of making the conference"}, {"start": 887.64, "end": 891.64, "text": " more inclusive. So they basically say, okay, we do these things. We've strengthened their"}, {"start": 891.64, "end": 898.6, "text": " code of conduct. We have two inclusion diversity chairs. We have an inclusion town hall. We"}, {"start": 898.6, "end": 904.8, "text": " have childcare support, gender inclusive restrooms, and so on, and so on, mentoring breakfasts"}, {"start": 904.8, "end": 911.52, "text": " for women and other minorities. So they take these steps concretely. They say, this is"}, {"start": 911.52, "end": 918.96, "text": " what we do. And even further, if you look at their page on diversity and inclusion, which"}, {"start": 918.96, "end": 926.84, "text": " I have here, they say here on the top, in addition to hosting diversity related event, the"}, {"start": 926.84, "end": 930.88, "text": " conference is also making consider instruction changes, include a new code of conduct,"}, {"start": 930.88, "end": 936.0799999999999, "text": " we've already seen, and in depth discussion of the potential of changing the name of the"}, {"start": 936.08, "end": 946.88, "text": " conference. So in total, what they're saying is, we've done this poll. 
It came back"}, {"start": 946.88, "end": 953.8000000000001, "text": " inconclusive, which I think has been clearly demonstrated, will not change the name of"}, {"start": 953.8000000000001, "end": 960.6400000000001, "text": " the conference for now, and will do all of these other, all of these other things, right,"}, {"start": 960.64, "end": 966.76, "text": " down there. And at the conference, we'll hold a meeting and discuss the name change. So"}, {"start": 966.76, "end": 972.4399999999999, "text": " we could maybe potentially change it in upcoming years, right. I think this is a really sensible"}, {"start": 972.4399999999999, "end": 978.56, "text": " decision by the board. I mean, given the state, given all of that, this is probably the"}, {"start": 978.56, "end": 983.8, "text": " most sensible decision, like let's take concrete steps, the name change seems to be, you"}, {"start": 983.8, "end": 991.7199999999999, "text": " know, debatable. So let's actually debate it at the conference with the actual community."}, {"start": 991.7199999999999, "end": 997.92, "text": " So that was the basically result of the poll. Let's now go back to what the paper has"}, {"start": 997.92, "end": 1005.56, "text": " to say about this. So here's the paper again, and they say in order to collect data about"}, {"start": 1005.56, "end": 1009.4, "text": " the machine learning community's feelings about the conference name, the conference"}, {"start": 1009.4, "end": 1014.84, "text": " boards end out a survey to people who have attended the conference during the past five years."}, {"start": 1014.84, "end": 1022.76, "text": " However, serving only conference at these results in a very biased sample of a much larger"}, {"start": 1022.76, "end": 1027.8799999999999, "text": " community of potential machine learning researchers, bias arises due to the fact that some"}, {"start": 1027.8799999999999, "end": 1033.04, "text": " people who are made uncomfortable by the name or by other aspects of the machine learning"}, {"start": 1033.04, "end": 1040.2, "text": " culture may have decided not to enter or to remain in the field, have chosen not to"}, {"start": 1040.2, "end": 1045.76, "text": " attend the conference. So basically, you're saying, well, if you only ask this one group"}, {"start": 1045.76, "end": 1050.92, "text": " of people, right, then this other group of people, you know, doesn't have a chance to make"}, {"start": 1050.92, "end": 1057.12, "text": " their voice heard. And there is basically bias because in this other group of people,"}, {"start": 1057.12, "end": 1063.0, "text": " the people who have not attended the conference, they would have a severely different opinion"}, {"start": 1063.0, "end": 1069.0, "text": " from the people who have not attended the conference. So first of all, I think this can be a valid"}, {"start": 1069.0, "end": 1075.44, "text": " point here. Of course, always if you ask one group of people and exclude another one,"}, {"start": 1075.44, "end": 1084.56, "text": " there's if the group you ask and the target group, which here is really unclear what it"}, {"start": 1084.56, "end": 1091.36, "text": " is. I guess it's the machine learning community considering going to the conference. If"}, {"start": 1091.36, "end": 1099.32, "text": " those don't overlap, then you will introduce some sort of bias. 
And they say, okay, bias"}, {"start": 1099.32, "end": 1104.7199999999998, "text": " could come from the fact, you know, some people who actually are affected by these problems"}, {"start": 1104.7199999999998, "end": 1109.6799999999998, "text": " of which this name is one, they may have, you know, not attended the conference because"}, {"start": 1109.6799999999998, "end": 1114.32, "text": " they may have left the field because the gender harassment is so pervasive and they just"}, {"start": 1114.32, "end": 1120.1599999999999, "text": " didn't didn't stay. And so I think this can be a good point. But the problem I have with"}, {"start": 1120.16, "end": 1128.0800000000002, "text": " it here is that it's simply stated without anything. It's simply said, okay, there is bias,"}, {"start": 1128.0800000000002, "end": 1136.28, "text": " bias arises. And my question would be, how much is that bias? If any data, like any data"}, {"start": 1136.28, "end": 1145.5600000000002, "text": " on this, you can't just criticize these, the survey for being biased and then not provide"}, {"start": 1145.56, "end": 1150.12, "text": " actual data, like how many people are there who are made uncomfortable by the name or have"}, {"start": 1150.12, "end": 1158.28, "text": " left the field in, who have left the field because of these things. And is it really viable"}, {"start": 1158.28, "end": 1164.0, "text": " to count them in, I guess, okay, we can argue it is. But how would they have responded to"}, {"start": 1164.0, "end": 1172.6799999999998, "text": " this? We've clearly seen that a lot of affected people that even have experienced harassment"}, {"start": 1172.68, "end": 1178.76, "text": " are not in favor of the name change. So in this case, I would really like to see some"}, {"start": 1178.76, "end": 1193.1200000000001, "text": " data on how much this bias is, right? And I cannot also say it's not that bad of a decision"}, {"start": 1193.1200000000001, "end": 1199.52, "text": " to what the board did to send the survey to the last five years at attendees. I think"}, {"start": 1199.52, "end": 1205.76, "text": " it's a very sensible choice if you want to gather the communities feelings towards these"}, {"start": 1205.76, "end": 1211.84, "text": " kinds of things. I mean, you can't just ask the entire world because the entire world"}, {"start": 1211.84, "end": 1218.52, "text": " is not the machine learning community. So I think the, this is a very sensible decision"}, {"start": 1218.52, "end": 1225.96, "text": " to ask last five years attendees. And if you have real evidence that this causes a"}, {"start": 1225.96, "end": 1233.44, "text": " notifiable, like a significant bias, then we could potentially correct for that bias."}, {"start": 1233.44, "end": 1239.8, "text": " But without any data on that, I think the, the asking last five years participants was"}, {"start": 1239.8, "end": 1248.52, "text": " completely reasonable. And one of, I don't really see how you can do a much better job"}, {"start": 1248.52, "end": 1256.12, "text": " without much, much more manual work. And I want to make this point a bit clearer on how"}, {"start": 1256.12, "end": 1265.12, "text": " hard it actually is to do that by pointing to the response to this. So here's a tweet"}, {"start": 1265.12, "end": 1271.16, "text": " thread by one of the authors of this paper. 
After the conference decision came out, she"}, {"start": 1271.16, "end": 1277.92, "text": " basically tweeted out this protest nips, I am starting this new hashtag, please retweet"}, {"start": 1277.92, "end": 1283.5600000000002, "text": " if you're in support of the nips conference changing its name. So basically kind of launching"}, {"start": 1283.5600000000002, "end": 1290.04, "text": " a, a Twitter campaign, a Twitter hashtag under this to come, you know, get into a conversation"}, {"start": 1290.04, "end": 1298.8400000000001, "text": " with people about this. People could express their support. She also, that was a misclick."}, {"start": 1298.8400000000001, "end": 1306.48, "text": " She also here made a change.org petition to change the name. So a petition, basically,"}, {"start": 1306.48, "end": 1315.88, "text": " the petition is here. The text of the petition basically says something similar to the,"}, {"start": 1315.88, "end": 1326.6, "text": " to the what we've already seen, including there is the criticism of the survey. And as"}, {"start": 1326.6, "end": 1337.9199999999998, "text": " you can see here, about 2000 people have signed it. So I mean, a Twitter hashtag is all good,"}, {"start": 1337.9199999999998, "end": 1342.76, "text": " you know, you can do that. A petition is all good. You can do that. But it's a bit ironic"}, {"start": 1342.76, "end": 1349.6799999999998, "text": " because a change to the work petition literally anyone can sign this. And in addition to that,"}, {"start": 1349.68, "end": 1358.4, "text": " there's only one option. You can only say yes, you can't even say no. Right. So and even"}, {"start": 1358.4, "end": 1363.6000000000001, "text": " more, who's going to see the change that or petition is going to be the social media followers"}, {"start": 1363.6000000000001, "end": 1369.44, "text": " of these people, right? So basically, you have now a, you have it now, what's basically"}, {"start": 1369.44, "end": 1376.16, "text": " a survey of the social media network of people in favor of changing the name, where there's"}, {"start": 1376.16, "end": 1384.64, "text": " only one option to respond. I find it. And so I've gone through here, the people who actually"}, {"start": 1384.64, "end": 1390.64, "text": " publicly associate their name, give a reason for signing a lot of these, they, you know,"}, {"start": 1390.64, "end": 1396.48, "text": " they give them argument why they've signed the petition. But I've tried searching these"}, {"start": 1396.48, "end": 1402.8000000000002, "text": " people for any sort of academic track record. And in my sample, I've come up with between"}, {"start": 1402.8, "end": 1415.44, "text": " 10 and 20% of people who somehow have an academic track record. So this is, I mean, certainly"}, {"start": 1416.48, "end": 1423.84, "text": " a valid thing to make your voice heard and to show your numbers. And but I mean, look at this,"}, {"start": 1423.84, "end": 1435.76, "text": " it's a bot, signing twice. Hello, Jack Nelson and Richard Chi. Very nice. But so basically,"}, {"start": 1436.32, "end": 1443.6, "text": " I'm not here to criticize petitions, but what I want to say is you can't like criticize this,"}, {"start": 1443.6, "end": 1452.9599999999998, "text": " this poll so hard for being biased. And then launching basically an own poll that's even more"}, {"start": 1452.96, "end": 1460.56, "text": " biased and even more non-representative of the community. 
To me, that's, that's kind of ironic."}, {"start": 1461.8400000000001, "end": 1468.32, "text": " And just goes to show how hard this is and my argument would be it's actually not that unsensible"}, {"start": 1468.32, "end": 1473.8400000000001, "text": " of a decision of the board the way they did it. And if you have, again, if you have data,"}, {"start": 1473.8400000000001, "end": 1479.76, "text": " to actually quantify the bias here, then it's viable to go and correct for that."}, {"start": 1479.76, "end": 1487.68, "text": " All right. So to the, they go on to analyze the survey results conference board,"}, {"start": 1487.68, "end": 1494.48, "text": " simply noted that of the 294 women surveyed, the number who strongly support or support the name"}, {"start": 1494.48, "end": 1500.0, "text": " change is comparable to the number of women who are strongly opposed to opposed."}, {"start": 1501.44, "end": 1506.96, "text": " However, this analysis implicitly assumes that one person's feeling of discomfort or marginalization"}, {"start": 1506.96, "end": 1512.8, "text": " as a result of the name should be given the same weight as another's persons preference"}, {"start": 1512.8, "end": 1520.56, "text": " for the status quo. This amounts to giving the same way to false positives and false negatives."}, {"start": 1521.1200000000001, "end": 1526.48, "text": " Of course, we learn in an introductory statistics course that false positives and false negatives"}, {"start": 1526.48, "end": 1533.44, "text": " should be assigned weights dependent on context. In this context, we feel that a much greater weight"}, {"start": 1533.44, "end": 1537.92, "text": " should be given to the views of a person who feels marginalized as a result of the name."}, {"start": 1540.24, "end": 1546.72, "text": " So up here, I find this a bit strange. They say, this amounts to giving the same"}, {"start": 1546.72, "end": 1555.3600000000001, "text": " way to false positives and false negatives. To me, the false is here a bit confusing because it"}, {"start": 1555.3600000000001, "end": 1562.8, "text": " seems to me it's simply giving the same way to negatives and positives. I don't think there's a"}, {"start": 1562.8, "end": 1569.84, "text": " need to dress this up in statistical lingo here. It's simply we give the same way to people who"}, {"start": 1569.84, "end": 1577.04, "text": " responded positively and to people who responded negatively. I think that's it. There's no false."}, {"start": 1579.36, "end": 1584.8799999999999, "text": " Of course, we learn in an introductory statistics class that false positives and false negatives"}, {"start": 1584.8799999999999, "end": 1589.6, "text": " should be assigned weights dependent on context. In this context, we feel that a much greater weight"}, {"start": 1589.6, "end": 1593.04, "text": " should be given to the views of a person who feels marginalized as a result of the name."}, {"start": 1595.1999999999998, "end": 1601.76, "text": " I would say to this, it's the problem for me. This is one of the things that where you"}, {"start": 1601.76, "end": 1607.4399999999998, "text": " at really first and you say like, oh yeah, this makes sense. But first of all, it's framed"}, {"start": 1607.4399999999998, "end": 1614.0, "text": " extremely one-sided. 
It's framed as all the people who are for the name change like they"}, {"start": 1614.0, "end": 1620.64, "text": " feel discomforted, they feel marginalized and the people who are against the name change, they"}, {"start": 1620.64, "end": 1629.44, "text": " simply, and here specifically, they talk about the women group. So in argument, they're all"}, {"start": 1629.44, "end": 1638.56, "text": " affected, the people against it simply prefer the status quo. But we've clearly seen in the press"}, {"start": 1638.56, "end": 1649.28, "text": " release and we'll go over to that now. These quotes here, we've clearly seen that the offense"}, {"start": 1649.28, "end": 1656.0, "text": " and the marginalization happens on both sides. So here, as a woman, I find it offensive that the"}, {"start": 1656.0, "end": 1662.56, "text": " board is considering changing the name. It shows that the board does not see me as an equal member"}, {"start": 1662.56, "end": 1668.0, "text": " of the community, but as a woman first and the scientist second. I mean, this is almost a textbook"}, {"start": 1668.0, "end": 1675.6, "text": " definition of marginalization. And this is clearly happening on the other side as well. So I think the"}, {"start": 1675.6, "end": 1685.92, "text": " framing here is extremely dishonest in one-sided and there is given basically the side that we"}, {"start": 1685.92, "end": 1692.24, "text": " just seen in this quote is given absolutely no, not even a mention that it exists. It's simply"}, {"start": 1692.24, "end": 1698.88, "text": " framed as this side is marginalized and oppressed and discomforted and the other side simply prefers"}, {"start": 1698.88, "end": 1706.96, "text": " the status quo. But we've clearly seen that, yeah, this almost fits exactly this definition. It's just"}, {"start": 1709.1200000000001, "end": 1715.92, "text": " one person's feeling or discomfort or marginalization as a result of the name. It's just as a result"}, {"start": 1715.92, "end": 1725.3600000000001, "text": " of the name change. Second of all, I think the bigger problem and this goes into the statement down"}, {"start": 1725.3600000000001, "end": 1731.92, "text": " here to state this last point more explicitly. An issue adversely affecting the minority of"}, {"start": 1731.92, "end": 1739.52, "text": " participants should not be decided by a majority vote. Again, something at first you say, oh yeah,"}, {"start": 1739.52, "end": 1748.16, "text": " that makes sense. But if you think about it, this is a really, really outrageous statement. And the"}, {"start": 1748.16, "end": 1760.56, "text": " reason is it's outrageous is if it's not majority vote, if it's not one person, one vote, then someone"}, {"start": 1760.56, "end": 1768.16, "text": " has to decide who gets to vote and who doesn't. And more so specifically here, someone basically"}, {"start": 1768.16, "end": 1776.88, "text": " needs to decide who should be given what weight in a vote. You need someone to decide this. And"}, {"start": 1776.88, "end": 1783.8400000000001, "text": " here you can say, well, it's easy. It's just the women because they're affected. I did, but they"}, {"start": 1783.8400000000001, "end": 1789.3600000000001, "text": " go further. They say, well, it's the women who feel discomforted and marginalized, who should"}, {"start": 1789.3600000000001, "end": 1794.0, "text": " be given more weight than the ones who simply prefer the status quo. 
But then you have to have"}, {"start": 1794.0, "end": 1799.12, "text": " someone assessing whether someone is really marginalized and discomforted or simply"}, {"start": 1799.12, "end": 1807.28, "text": " prefers the status quo. And it's not like an environment where there is kind of a sexist"}, {"start": 1807.28, "end": 1816.56, "text": " undertone isn't also discomforting or can't also be discomforting to men, to men of any sort"}, {"start": 1816.56, "end": 1828.56, "text": " or people of any sort of gender. It's just not clear. The fact that people should be given"}, {"start": 1828.56, "end": 1834.3999999999999, "text": " different weight in crafting an opinion. I mean, this can be true if you have like some"}, {"start": 1834.3999999999999, "end": 1842.56, "text": " clear area of expertise. But in this case, it's really unclear. And the fact is if it's not"}, {"start": 1842.56, "end": 1851.6, "text": " majority vote, you need someone deciding the weight. And the someone deciding the weight automatically"}, {"start": 1851.6, "end": 1857.52, "text": " decides on the outcome of the vote. And then why do you need a vote in the first place?"}, {"start": 1858.3999999999999, "end": 1865.28, "text": " Basically, up here they say, yeah, we feel the great weight should be aligned like this. And"}, {"start": 1865.28, "end": 1870.8, "text": " but down here, there is no more we feel. It's be an issue adverse affecting the minority"}, {"start": 1870.8, "end": 1876.3999999999999, "text": " presumed to should not be decided by majority vote. They're basically calling for a dictatorship."}, {"start": 1877.28, "end": 1883.04, "text": " In this case, and I'm going to guess like everyone has the opinion that dictatorship"}, {"start": 1883.04, "end": 1890.08, "text": " would be an awesome idea if the dictator were me. Right? That's that's what everyone thinks,"}, {"start": 1890.08, "end": 1896.8799999999999, "text": " of course. And that's basically the argument made here. But it's not it's not true. And there's"}, {"start": 1896.88, "end": 1906.48, "text": " some really, really disturbing implicit things in here. And maybe I want to quickly go over how I"}, {"start": 1906.48, "end": 1915.92, "text": " think a democratic decision works. So imagine you have a person. And the person has a decision to make"}, {"start": 1915.92, "end": 1924.64, "text": " four or against. In this case, the name change, right? And the person must decide on one of these"}, {"start": 1924.64, "end": 1932.0, "text": " two things on a let's say on a continuous scale. But it doesn't matter. What this, what this"}, {"start": 1932.0, "end": 1939.2800000000002, "text": " stuff up here basically implicitly assumes is that the person looks at themselves and they think,"}, {"start": 1939.2800000000002, "end": 1947.5200000000002, "text": " well, am I personally discomforted or marginalized by the name or the climate it creates? No,"}, {"start": 1947.5200000000002, "end": 1952.72, "text": " and I'm obviously against the name change because it doesn't help me. Or another person"}, {"start": 1952.72, "end": 1959.6000000000001, "text": " go and I personally affected. Yes. Well, I feel discomfort or marginalized. Well, then I'm obviously"}, {"start": 1959.6000000000001, "end": 1968.56, "text": " for a name change. So the basic assumption here is that people simply vote purely their own"}, {"start": 1968.56, "end": 1975.76, "text": " egotistical interests. And that's that's it. 
So basically, if you're in one of these minorities,"}, {"start": 1975.76, "end": 1980.8, "text": " then you'll vote for the name change because it affects you, which we've already seen is not,"}, {"start": 1980.8, "end": 1987.84, "text": " it's not a given that people vote that way. And if you're not in this, then you know, you"}, {"start": 1987.84, "end": 1992.8, "text": " you'd vote against, but you're not affected. So your vote shouldn't count. It's completely untrue."}, {"start": 1992.8, "end": 1998.8, "text": " What people do, especially smart people, and I believe the machine learning community consists"}, {"start": 1998.8, "end": 2007.12, "text": " largely of these. What they do is they'll make a list of arguments, argument one, argument two,"}, {"start": 2007.12, "end": 2012.9599999999998, "text": " argument three, argument four. Everyone has the same arguments. Everyone's heard the same"}, {"start": 2012.9599999999998, "end": 2017.6799999999998, "text": " arguments. If not, then maybe there's some work to do in actually getting arguments to people."}, {"start": 2019.4399999999998, "end": 2025.1999999999998, "text": " But that's not the same as weighing the people differently. You get the arguments to the people"}, {"start": 2025.1999999999998, "end": 2031.52, "text": " and then you weigh each of them equally. Why? Because what every person does is they say, okay,"}, {"start": 2031.52, "end": 2037.68, "text": " argument one is maybe it's unprofessional, right? Name is unprofessional. All right, how important"}, {"start": 2037.68, "end": 2042.48, "text": " is that to me? Give it a weight, weight one. Cool. That's really important to me. I'll give you"}, {"start": 2042.48, "end": 2052.24, "text": " the big weight. Argument two. Some people feel really discomforted, if you're marginalized, by the name"}, {"start": 2052.24, "end": 2057.44, "text": " creates a bad environment for them. How much weight am I going to give to that? So people can"}, {"start": 2057.44, "end": 2063.44, "text": " actually consider other people's feelings and other people's problems and decide on what's the"}, {"start": 2063.44, "end": 2071.92, "text": " best also for them in their own mind. So they give it a weight two. Then there's maybe two"}, {"start": 2071.92, "end": 2078.08, "text": " arguments against them giving these weight three, weight four. At the end, what you have is you have"}, {"start": 2078.08, "end": 2090.72, "text": " argument i. You will sum it up by the weight w i j. You will sum it up over all people. So basically"}, {"start": 2091.36, "end": 2097.36, "text": " now and this will give you like a final number a, which is either positive or negative. If it's"}, {"start": 2097.36, "end": 2101.2799999999997, "text": " positive, you do the name change. If it's negative, you don't do the name change. If you do this"}, {"start": 2101.28, "end": 2110.32, "text": " over all people, what you've basically done is you have just determined these weightings here"}, {"start": 2111.2000000000003, "end": 2116.96, "text": " by a democratic process. You've crowdsourced the weighting. This is exactly what these people"}, {"start": 2116.96, "end": 2127.52, "text": " say up here. We feel that here, not false, false, positive, false. We feel that positives and negatives"}, {"start": 2127.52, "end": 2134.72, "text": " should be assigned weights dependent on context. 
So the positive and negative arguments in this case"}, {"start": 2135.36, "end": 2141.2, "text": " are assigned weights dependent on context, but the weights are crowdsourced to the community."}, {"start": 2141.2, "end": 2148.48, "text": " Right? And each person, this who participates in that, each person who participates is one more"}, {"start": 2148.48, "end": 2157.2, "text": " brain power in a complicated decision that no one basically, no one has the authority to just"}, {"start": 2157.2, "end": 2161.68, "text": " decide for themselves. So these people are calling for a different weighting. This is the way to do"}, {"start": 2161.68, "end": 2168.3999999999996, "text": " it. The democratic majority vote is the exact way to determine these weights. What these people"}, {"start": 2168.3999999999996, "end": 2173.7599999999998, "text": " basically are, is no, no, no, no, no, no, we should determine the weights. We who know,"}, {"start": 2176.48, "end": 2182.24, "text": " I'm a bit corny here, but this is basically it's still it's two alternatives. Either you do"}, {"start": 2182.24, "end": 2190.0, "text": " democratic process, one person, one brain, one vote, and that will give you a crowdsourced"}, {"start": 2190.8799999999997, "end": 2198.0, "text": " crowdsourced true weighting of the arguments, what the community feels, or someone needs to decide."}, {"start": 2198.0, "end": 2207.2, "text": " Some one needs to decide by force basically, and that's a dictatorship. So these are the"}, {"start": 2207.2, "end": 2213.4399999999996, "text": " choices you have. And clearly, now you can maybe understand why I say this is an outrageous"}, {"start": 2213.4399999999996, "end": 2221.9199999999996, "text": " statement because to me, the dictatorship option is not an option. Note that I'm not saying that"}, {"start": 2222.72, "end": 2229.68, "text": " democracy can never be wrong or the majority can never be wrong, but in fact, it's the best"}, {"start": 2229.68, "end": 2240.08, "text": " system there is. It can be wrong, but anything else will undoubtedly go more wrong. So that's my point here."}, {"start": 2241.68, "end": 2246.08, "text": " All right, so that was maybe a bit ranty, but let's go on."}, {"start": 2249.8399999999997, "end": 2257.04, "text": " A false choice and a minimization of a real issue. So they go on to say what they think of the"}, {"start": 2257.04, "end": 2262.96, "text": " decision that the board made in response to this. So up was how they analyze the poll, and now it's"}, {"start": 2262.96, "end": 2268.32, "text": " the decision. In announcing their decision, not to change the conference name, conference board"}, {"start": 2268.32, "end": 2272.96, "text": " expressed commitment to implement concrete steps to improve the inclusiveness of the conference."}, {"start": 2272.96, "end": 2277.7599999999998, "text": " And they list them here and they say we sincerely applaud conference board for these efforts."}, {"start": 2277.76, "end": 2288.1600000000003, "text": " Okay, I think the community feels like that as well. However, the wording of the decision"}, {"start": 2288.1600000000003, "end": 2295.28, "text": " implied the need to choose between changing the name of the conference and taking concrete steps"}, {"start": 2295.28, "end": 2305.84, "text": " to improve its inclusiveness. I don't see that at all. Say this was a false choice. There's no reason"}, {"start": 2305.84, "end": 2310.56, "text": " that the board could not do both. 
Yes, there's no reason that they couldn't do both. And I believe"}, {"start": 2310.56, "end": 2316.1600000000003, "text": " we've read this together before. I don't think the board ever said that there was a choice between"}, {"start": 2316.1600000000003, "end": 2324.0, "text": " one or the other. I think they've said very much the opposite. Let's go back. I think what they"}, {"start": 2324.0, "end": 2335.1200000000003, "text": " mean here is the word instead. So here they say we won't change the name. And then here's they say"}, {"start": 2335.12, "end": 2340.7999999999997, "text": " instead we ask for the community support and implementing concrete steps. I think this this"}, {"start": 2340.7999999999997, "end": 2347.2799999999997, "text": " must be it because I don't really see any other way you would ever think that. And the reason is"}, {"start": 2348.16, "end": 2354.0, "text": " this here. They say well not change the name of the comments for now. On another page they say"}, {"start": 2354.0, "end": 2359.68, "text": " we'll discuss the name change at the conference. And then here instead I think what is meant is"}, {"start": 2359.68, "end": 2366.96, "text": " instead what we will do right now is these things. We'll discuss about the name change but what we"}, {"start": 2366.96, "end": 2374.08, "text": " will do right now which was basically not the real problem in the first place. The real issue raised"}, {"start": 2374.08, "end": 2379.68, "text": " was the name. So instead of that issue we'll do these other things which we feel the community wants."}, {"start": 2380.48, "end": 2386.8799999999997, "text": " I think that's the I think there's no I think everyone reading this comes to the same conclusion"}, {"start": 2386.88, "end": 2393.6800000000003, "text": " after after reading that but I so I really don't see how you you can say that this is kind of"}, {"start": 2393.6800000000003, "end": 2400.7200000000003, "text": " presented as an either or by the board. I don't think that at all. And you decide for yourself."}, {"start": 2400.7200000000003, "end": 2409.36, "text": " I believe the real real the real crux here is the for now and the promise to discuss at the conference"}, {"start": 2409.36, "end": 2417.92, "text": " which if you can see here in the paper is never ever ever touched right. They make it basically"}, {"start": 2417.92, "end": 2424.08, "text": " seem that the board has decided to not change the name and that's it which is completely wrong."}, {"start": 2424.08, "end": 2430.1600000000003, "text": " They've clearly stated their openness to a name change they want to discuss it. It was just"}, {"start": 2430.1600000000003, "end": 2435.6, "text": " inconclusive so they want to basically not do anything rash and then have the community is"}, {"start": 2435.6, "end": 2441.92, "text": " against it anyway so they want to discuss it. I to say that this is the basically"}, {"start": 2444.24, "end": 2454.88, "text": " that that the wording imply the need to choose I don't see that. But you know you decide for"}, {"start": 2454.88, "end": 2461.7599999999998, "text": " yourselves. 
The board suggested an name change would only be symbolic and so on would have no real"}, {"start": 2461.76, "end": 2467.6800000000003, "text": " consequences so that these these are some of the arguments basically made in the quotes as well."}, {"start": 2470.0800000000004, "end": 2475.92, "text": " But you know the fact that the name change would only be symbolic and so on these are all things"}, {"start": 2475.92, "end": 2482.2400000000002, "text": " you could actually discuss at the conference meeting. You could even correct for your"}, {"start": 2482.8, "end": 2488.5600000000004, "text": " for your poll right. You could invite people who have left the community to represent those."}, {"start": 2488.56, "end": 2494.32, "text": " You could invite new potentially researchers. You could give everyone their voice and then actually"}, {"start": 2494.32, "end": 2499.7599999999998, "text": " listen to all of them. I think that's a very sensible decision by the board and I think this is"}, {"start": 2499.7599999999998, "end": 2507.68, "text": " misrepresented here. Lastly to say another argument though not explicitly mentioned a number of"}, {"start": 2507.68, "end": 2510.7999999999997, "text": " machine learning researchers told us that changing the name of the conference would lead to too much"}, {"start": 2510.7999999999997, "end": 2516.7999999999997, "text": " confusion in the community while we understand we respectfully do not share it. I mean this is"}, {"start": 2516.8, "end": 2522.32, "text": " is basically an argument against the name change. I think it's also a point worthy of discussion."}, {"start": 2524.88, "end": 2530.96, "text": " They say we respectfully do not share this point. Yeah okay they don't share it. Other people do"}, {"start": 2530.96, "end": 2536.8, "text": " it's a point of discussion. You know you could actually discuss it at the conference but I actually"}, {"start": 2536.8, "end": 2543.76, "text": " agree with the authors here. I think changing the name will not have a big impact on the kind of"}, {"start": 2543.76, "end": 2550.48, "text": " recognizableity of the conference especially now down here we'll actually get into what actually"}, {"start": 2550.48, "end": 2558.32, "text": " happened. In November the in response to extensive public backlash the conference board announced"}, {"start": 2558.32, "end": 2562.88, "text": " a change to the official conference acronym to NIRPS. They say we are pleased."}, {"start": 2564.7200000000003, "end": 2573.36, "text": " Provides this provides a reasonable compromise. So in my opinion this is it as far as solutions go"}, {"start": 2573.36, "end": 2580.6400000000003, "text": " this is a good solution. The NIRPS acronym I think it's cool. You don't have to change the name"}, {"start": 2580.6400000000003, "end": 2587.1200000000003, "text": " of the conference itself. You simply changed the acronym which was the reported problem"}, {"start": 2587.6800000000003, "end": 2594.1600000000003, "text": " in the first place. I think the oldenew papers will like people will still recognize the old"}, {"start": 2594.1600000000003, "end": 2602.2400000000002, "text": " NIRPS acronym or the new conference. It will be clear that it's the same thing and I think this is"}, {"start": 2602.24, "end": 2609.6, "text": " a very good a very good new name and I think people will get used to it pretty quickly. Also you know"}, {"start": 2610.24, "end": 2620.16, "text": " to say NIRPS it's also rolls off the tongue easily. 
So it's as far as solutions go I like it."}, {"start": 2621.8399999999997, "end": 2629.7599999999998, "text": " Further they say however the work for the conference board is far from done. We encourage"}, {"start": 2629.76, "end": 2636.8, "text": " the board to continue its efforts blah blah blah. So they say okay you have to do more than"}, {"start": 2636.8, "end": 2643.1200000000003, "text": " just change the name and so on. They say together these steps will help ensure that the NIRPS"}, {"start": 2643.1200000000003, "end": 2646.6400000000003, "text": " conference retains its place in the forefront of machine learning research while also creating"}, {"start": 2646.6400000000003, "end": 2652.4, "text": " a welcoming environment for women and members of other represent groups on other underrepresented"}, {"start": 2652.4, "end": 2663.44, "text": " groups. We all hope that. To me the problem is a bit how this how this went down and if we go"}, {"start": 2663.44, "end": 2670.08, "text": " back and look at the actual press release of the name change they say here do your members"}, {"start": 2670.08, "end": 2677.2000000000003, "text": " of the neural information processing systems community. Something remarkable has happened in our"}, {"start": 2677.2, "end": 2682.64, "text": " community. The name NIRPS has sprung up organically as an alternative acronym we're delighted to see"}, {"start": 2682.64, "end": 2688.7999999999997, "text": " it being adopted. Indeed one forward thinking member of the community purchased NIRPS.com"}, {"start": 2688.7999999999997, "end": 2693.3599999999997, "text": " described size purposes hosting conference content under different acronym until the board catches"}, {"start": 2693.3599999999997, "end": 2701.12, "text": " up. We've caught up. We're considering alternative acronyms when the community support for NIRPS became"}, {"start": 2701.12, "end": 2707.68, "text": " apparent. We ask all attendees to respect the solution from the community use a new acronym. So"}, {"start": 2707.68, "end": 2714.4, "text": " basically they've rebranded the entire conference about a month before the actual meeting asked all"}, {"start": 2715.04, "end": 2721.44, "text": " sponsors all invited companies asked all invited papers to rebrand the acronym."}, {"start": 2723.3599999999997, "end": 2729.44, "text": " To me this the wording here is a bit is a bit funny. It's like something remarkable has happened"}, {"start": 2729.44, "end": 2737.2000000000003, "text": " in our community has sprung up organically and now we'll just adopt it. It seems like much less"}, {"start": 2737.2000000000003, "end": 2743.12, "text": " of the fairy tale that is described here but the actual like there's a mob with pitch forks"}, {"start": 2743.12, "end": 2753.68, "text": " around your house and this is like the first kind of straw that you can grab to make them calm"}, {"start": 2753.68, "end": 2760.8799999999997, "text": " down. I also know that some companies have begun pulling out funding for the conference. 
So I think"}, {"start": 2760.8799999999997, "end": 2771.7599999999998, "text": " this is really this was really you know much more backed by force and and back yeah what they say"}, {"start": 2771.7599999999998, "end": 2780.3199999999997, "text": " in the paper extensive public backlash so loud screaming basically then this this kind of the name"}, {"start": 2780.32, "end": 2787.2000000000003, "text": " has sprung up organically and has been adopted and it seems much more bit forceful."}, {"start": 2789.04, "end": 2795.6800000000003, "text": " To me it would have still been a viable path the most valuable path to actually wait for the"}, {"start": 2795.6800000000003, "end": 2803.6800000000003, "text": " conference and then have that discussion and then if indeed this name NIRPS would be presented"}, {"start": 2803.6800000000003, "end": 2809.52, "text": " as a good alternative and you know people would be fine with that then you could still make the name"}, {"start": 2809.52, "end": 2817.36, "text": " change for last for next year. I think this this would have been a good alternative. My fear now is"}, {"start": 2817.36, "end": 2825.28, "text": " this has been extremely rash extremely forceful as as I've said also accompanied by with like by"}, {"start": 2825.28, "end": 2835.04, "text": " withdrawal of funding that I believe these things usually provoke a backlash and that's really"}, {"start": 2835.04, "end": 2840.8, "text": " something that I wouldn't look forward to. So I hope that this con that this paragraph down here"}, {"start": 2840.8, "end": 2846.8, "text": " is true that actually we will see a more welcoming environment for everyone but I believe things"}, {"start": 2846.8, "end": 2856.88, "text": " like this tend in society to have the sometimes very opposite effects of what's intended and so"}, {"start": 2856.88, "end": 2864.16, "text": " I hope this does not produce a backlash. I think having had the actual discussion doing"}, {"start": 2864.16, "end": 2869.92, "text": " things non-rashely would have done much more in the direction of preventing such a backlash."}, {"start": 2871.92, "end": 2882.7999999999997, "text": " So this is the end of the paper. So to recap they basically say the acronym was was inappropriate"}, {"start": 2882.7999999999997, "end": 2892.8799999999997, "text": " which I agree with. They say the survey was bad which I could believe if there was data they say"}, {"start": 2892.88, "end": 2898.88, "text": " that an issue adversely affecting the minority of participants should not be cited by majority vote"}, {"start": 2898.88, "end": 2908.56, "text": " which I absolutely disagree with and then they say the board has basically stated this as an"}, {"start": 2908.56, "end": 2917.12, "text": " either order decision which is I believe not true and misrepresenting or maybe I've missed"}, {"start": 2917.12, "end": 2923.7599999999998, "text": " something it's always possible. Lastly I want to get to this paragraph. In recent months a number"}, {"start": 2923.7599999999998, "end": 2928.88, "text": " of women including some of the authors of this article who publicly expressed support for a change"}, {"start": 2928.88, "end": 2933.92, "text": " of the conference name have been relentlessly trolled, harassed, probably abused and even"}, {"start": 2933.92, "end": 2939.52, "text": " physically threatened on Twitter Reddit other online forums. 
Much of this harassment"}, {"start": 2939.52, "end": 2947.44, "text": " they say has been anonymous and typically has had an extremely gendered tone. Furthermore some"}, {"start": 2947.44, "end": 2953.84, "text": " students have reached out to us the authors lamenting the fact that they felt unable to open"}, {"start": 2953.84, "end": 2959.52, "text": " expressed their support for renaming the conference due to fear of bullying or retaliation by"}, {"start": 2959.52, "end": 2966.48, "text": " faculty advisors or others in position of power. This I believe is really bad. The fact that"}, {"start": 2966.48, "end": 2972.72, "text": " people can't speak out about something like this without being bullied or harassed or having to"}, {"start": 2972.72, "end": 2981.28, "text": " fear for their careers basically is bad and I would really discourage everyone from engaging in"}, {"start": 2981.28, "end": 2990.32, "text": " such behavior. Verbal abuse physically threatened. To one point you can say all right if you've been"}, {"start": 2990.32, "end": 2995.68, "text": " on the internet for a longer than a week then this probably has happened to you if you have had"}, {"start": 2995.68, "end": 3001.68, "text": " any sort of serious discussion on the internet but you can also say that doesn't make it right."}, {"start": 3003.12, "end": 3011.04, "text": " So I believe it's really important to separate what is harassment basically from actual"}, {"start": 3011.04, "end": 3018.16, "text": " disagreement and criticism and please engage in the latter do not engage in the former."}, {"start": 3018.16, "end": 3027.7599999999998, "text": " My problem with this paragraph it's again it's very one-sided it's basically stated here"}, {"start": 3028.64, "end": 3033.44, "text": " some students have reached out to us lamenting fact that it felt unable to openly express their"}, {"start": 3033.44, "end": 3040.7999999999997, "text": " support for renaming the conference due to fear of bullying retaliation by faculty or advisors"}, {"start": 3040.8, "end": 3053.04, "text": " of others in other position of power. 
To me I'm gonna say this probably happens on both sides"}, {"start": 3055.04, "end": 3060.7200000000003, "text": " what one could argue where it happens more but this very much happens on both sides of this issue"}, {"start": 3060.7200000000003, "end": 3066.2400000000002, "text": " and it's real shame for both sides basically I think anyone should be able to express"}, {"start": 3066.24, "end": 3073.68, "text": " your opinion to demonstrate that here I'm gonna show another Twitter thread by one of the authors"}, {"start": 3073.68, "end": 3080.0, "text": " of this paper where basically this is a thread where she posts screenshots of conversations"}, {"start": 3080.8799999999997, "end": 3084.24, "text": " basically people reaching out to her saying exactly that like I can't share"}, {"start": 3084.9599999999996, "end": 3093.2799999999997, "text": " my I have trouble sharing my opinion I get mocked for my opinion I can't do so publicly because I fear"}, {"start": 3093.28, "end": 3101.52, "text": " you know from my from my faculty and so on but then there's also this one here where a person"}, {"start": 3102.2400000000002, "end": 3111.44, "text": " wrote an email to the author basically saying they disagree with her and I've read this email I"}, {"start": 3111.44, "end": 3119.92, "text": " don't you know I don't agree with the arguments here made but I can say that the this is not"}, {"start": 3119.92, "end": 3128.08, "text": " verbal abuse it's not personal attack it's not physically threatening it's actually quite"}, {"start": 3128.96, "end": 3134.88, "text": " respectful disagreement the person actually goes through length to say how respectful they"}, {"start": 3134.88, "end": 3143.2000000000003, "text": " are how much you know how much this is meant as a as a disagreement on factual terms and"}, {"start": 3143.2, "end": 3153.04, "text": " further what they say is that they want to be anonymous maybe you see it on the very bottom for"}, {"start": 3153.04, "end": 3157.2, "text": " example I haven't done too much to anonymize myself but I ask you to respect my wishes of"}, {"start": 3157.2, "end": 3163.6, "text": " remaining anonymous don't try to figure out who I am further up they state basically they want"}, {"start": 3163.6, "end": 3169.2799999999997, "text": " to remain anonymous because they fear for their latter for their later career right they fear of"}, {"start": 3169.28, "end": 3176.48, "text": " a backlash up here wish to remain anonymous as I'm an early in my career someday we may work together"}, {"start": 3179.6800000000003, "end": 3188.32, "text": " so basically they say here I disagree here's why I disagree and they wish to remain anonymous"}, {"start": 3188.32, "end": 3193.52, "text": " because they fear for their career right so this is almost like this is this is very much"}, {"start": 3193.52, "end": 3203.28, "text": " here feeling unable and will will go feeling unable to openly express their in the case to support"}, {"start": 3204.08, "end": 3211.68, "text": " against renaming the conference due to fear of bullying or retaliation by faculty advisor others"}, {"start": 3211.68, "end": 3217.36, "text": " in position of power so this author here is obviously a real person in position of power and in"}, {"start": 3217.36, "end": 3224.7200000000003, "text": " very famous senior researcher and this person basically says I'm afraid and I can you know that"}, {"start": 3224.7200000000003, "end": 3231.2000000000003, "text": " that's why I'm anonymous and the way the author 
responded here as you can read read is what an"}, {"start": 3231.2000000000003, "end": 3238.48, "text": " anonymous coward of course I will do everything to guess you and it's it's difficult to"}, {"start": 3238.48, "end": 3247.04, "text": " to kind of put this off as I mean even if it's I don't know how it's meant right I will do"}, {"start": 3247.04, "end": 3253.52, "text": " everything to guess you and the least it means she will try to figure out who that is right and"}, {"start": 3254.2400000000002, "end": 3260.48, "text": " she doesn't go as far as saying that she will then basically either you know remember that name"}, {"start": 3260.48, "end": 3267.92, "text": " in case of any future thing or share it or whatnot but it's certainly you can't argue that this"}, {"start": 3267.92, "end": 3277.28, "text": " is a real deterrent for other people to even anonymously voice their opinion to if this person"}, {"start": 3277.28, "end": 3285.84, "text": " announces I will do everything to guess you to me that that shows that this fear that we discuss"}, {"start": 3285.84, "end": 3294.56, "text": " here is very much present on both sides and it's absolutely not okay if if either side reacts by"}, {"start": 3294.56, "end": 3304.88, "text": " basically my basically retaliation or even even the the possibility of retaliation and I believe"}, {"start": 3304.88, "end": 3310.08, "text": " everyone should be able to say their opinion I respect really everyone even like these these"}, {"start": 3310.08, "end": 3316.72, "text": " authors here clearly took a lot of effort and a lot of a lot of beating basically they say they've"}, {"start": 3316.72, "end": 3322.7999999999997, "text": " been relentlessly trolled harassed verbally abused even physically threatened this is just"}, {"start": 3322.8, "end": 3328.96, "text": " really bad and have lots of respect for them saying their opinions stating their opinions anyway I"}, {"start": 3328.96, "end": 3335.2000000000003, "text": " think everyone should be able to do that without these things happening so to everyone watching I"}, {"start": 3335.2000000000003, "end": 3341.92, "text": " encourage you to not engage in these things and that alone will probably make the environment"}, {"start": 3342.48, "end": 3351.6800000000003, "text": " much much more inclusive and nice for everybody irregardless of of affiliation so that was it for"}, {"start": 3351.68, "end": 3360.16, "text": " me for this paper it's a bit longer it's a bit ranty if you agree disagree let me know in the"}, {"start": 3360.16, "end": 3390.08, "text": " comments yes and other than that have a nice week weekend whatever you do bye"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=_PyusGsbBPY | Stochastic RNNs without Teacher-Forcing | We present a stochastic non-autoregressive RNN that does not require teacher-forcing for training. The content is based on our 2018 NeurIPS paper:
Deep State Space Models for Unconditional Word Generation
https://arxiv.org/abs/1806.04550 | Hi everybody, my name is Florian and Yannic was nice enough to host me here as a guest to talk about stochastic RNNs without teacher forcing. This is based on our recent work "Deep State Space Models for Unconditional Word Generation", which we presented at this year's NeurIPS, and if you'd like any more details, please check out the paper. We focus on a de facto standard training hack for any RNNs that generate text. It's called teacher forcing, and it's used in any model, whether unconditional or conditional, such as in a sentence autoencoder or in a translation model. To understand where teacher forcing comes from, we first need to understand where text generation comes from, for the good or the bad, and here we'll focus on the bad. Text generation has its roots in language modeling. Language modeling is the problem of predicting the next word given all the previous words. People used to use n-gram models for this, but today people use recurrent neural networks to do that. Such recurrent neural networks, or RNNs, factorize the joint observation probability of a sequence, which I depict here as w, into independent softmax distributions over individual tokens. So for every time step there's a softmax function, the softmax is conditioned on a hidden state, and all the magic of the RNN goes into the function that gives you the new state given the old hidden state. Usually this is called a transition function f, and as input it gets the last state and the last word, so f could be a GRU function or an LSTM function. Just like any other language model, you can turn this into a generative model of text. Let's look at the dependencies that you would have at test time. There's an initial hidden state h1; we sample a new word, we use our transition function f and it gives us the new state h2. Then we can sample a new word w2, feed it back, get a new state, sample a new word, feed it back. It's important to note that all the stochasticity in the output is solely due to the stochasticity in the sampling process, because the transition function is deterministic. So far there's nothing to complain about, but so far I've only talked about test time.
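To make that test-time sampling loop concrete, here is a minimal PyTorch-style sketch. It is my own illustration, not code from the talk or the paper, and all names are hypothetical; the only stochastic step is the multinomial sampling of the word, while the GRU transition itself is deterministic. At training time, as discussed next, the sampled word would be replaced by the ground-truth token.

```python
# Illustrative sketch (assumed, not the authors' code) of an autoregressive RNN
# language model generating text by feeding its own samples back in at test time.
import torch
import torch.nn as nn

class AutoregressiveLM(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cell = nn.GRUCell(hidden_size, hidden_size)   # deterministic transition f
        self.readout = nn.Linear(hidden_size, vocab_size)  # per-step softmax parameters

    @torch.no_grad()
    def generate(self, steps):
        h = torch.zeros(1, self.cell.hidden_size)          # initial hidden state h1
        out = []
        for _ in range(steps):
            probs = self.readout(h).softmax(-1)            # softmax conditioned on h_t
            w = torch.multinomial(probs, 1)[:, 0]          # sample w_t: the only stochastic step
            out.append(w.item())
            h = self.cell(self.embed(w), h)                # h_{t+1} = f(h_t, w_t), deterministic
        return out

sample = AutoregressiveLM(vocab_size=30, hidden_size=16).generate(steps=10)
```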
At training time there's a catch. This is where teacher forcing kicks in: it turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions. You have to use teacher forcing, and that means you substitute your own prediction by the ground truth. So at training time there's no sampling loop; you just take the ground-truth token and feed it into your state transition function. That feels unintuitive, because at test time we do something else than we do at training time. It has also been known in the literature for a few years to cause biases. So why is that problematic? Remember, we come from language modeling. In language modeling we could argue that if our only goal is to predict one word given the previous words, then of course we can use the ground-truth context, the ground-truth previous words. But if we're interested in generating longer sequences, then we need to learn what to memorize, and in particular we need to become robust against our own predictions, because we might make mistakes at test time and there's no ground truth at test time. Just to get this confirmed by somebody who has been working in the field for years: at the NeurIPS representation learning workshop, Alex Graves mentioned teacher forcing as one of the big three problems for autoregressive models, and in his own words, teacher forcing might lead to "predict one step ahead, not many" and potentially "brittle generation and myopic representations". How have people addressed teacher forcing so far? There are approaches that try to mitigate the problem, for example by blending together these two views, training time and test time, so that sometimes you use your own prediction during training, but sometimes you use the ground truth. We believe that for a rigorous model of text generation we need a rigorous model of uncertainty. This should be an integral part of any generative model, and therefore it should be the same model both at training time and at test time, without any hacks. We take a fundamentally different approach by proposing a new transition function. The new transition function is non-autoregressive: it depends on the last state h_{t-1}, but it doesn't depend on the last word. That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore. Instead, the transition function accepts a white-noise vector as the second input. Now you might wonder why we need noise at all as an input to the transition function. Well, for a given prefix there might be different continuations, so we need some source of entropy to model the entropy in the different continuations. The rest of the paper pretty much focuses on the following two questions: (a) which function f is powerful enough to turn the most simple noise source, just a standard Gaussian vector, into something that is powerful enough to replace the autoregressive feedback mechanism of a standard RNN, and (b) of course, how do we train this, what framework do we train this in?
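The changed interface of the transition function can be sketched as follows. This is an assumption-laden illustration of my own: the class name and layer sizes are made up, and the paper actually constructs this transition as an invertible flow in the noise argument rather than an arbitrary MLP; the sketch only shows the signature, namely that the previous state and a fresh Gaussian noise vector go in, a new state comes out, and the previous word never enters.

```python
# Sketch (assumed, not from the paper): a non-autoregressive transition
# h_t = f_G(h_{t-1}, xi_t) driven by white noise instead of the previous word.
import torch
import torch.nn as nn

class NoiseDrivenTransition(nn.Module):
    def __init__(self, state_size, noise_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_size + noise_size, 64), nn.Tanh(),
            nn.Linear(64, state_size),
        )

    def forward(self, h_prev, xi):
        # xi ~ N(0, I) supplies the entropy over possible continuations;
        # no sampled word is fed back, so teacher forcing cannot apply by design.
        return self.net(torch.cat([h_prev, xi], dim=-1))

transition = NoiseDrivenTransition(state_size=8, noise_size=8)
h = torch.zeros(1, 8)
for t in range(5):
    xi = torch.randn(1, 8)      # standard Gaussian noise at every step
    h = transition(h, xi)       # new state without any word feedback
```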
It will turn out that variational flows are suitable functions f, and variational inference is the right framework to train them. So here's the roadmap to complete the model. First we need to cast the generative model as a probabilistic model, because so far I've only sketched a procedure that involves sampling some noise, then applying some function, and then predicting observations. Then we need to propose a variational inference model so that we can do maximum likelihood training. We will derive an ELBO, which is our objective; in the paper we also describe how the tightness of the ELBO can be improved, and here I will finish by talking a bit about the evaluation and what we do to inspect the model. Since this work is based a lot on variational flows, let me give you a quick summary of variational flows. A variational flow is a diffeomorphism f which maps from what I will call a simple noise space xi to a complex noise space h, and here I'm already using the notation for a sequence model. Simply by the change-of-variables formula, we know that the probability of an event h in the complex space is the probability of the event in the simple space xi, as given by the inverse of f, times a Jacobian term with respect to f evaluated at xi. How can we use this in our sequential setting? First let me fix some notation, because sequential models are pretty prone to overloaded notation. I'll write time as t running from one to capital T, and whenever I talk about a sequence of variables like w, I don't index them, I just write w without an index, and only when I need a specific element I'll write it as w_t. Let's formalize the generative model. We start out with the probability of observing a sequence w, and since we use a latent variable model, we marginalize over our latent variables h. Then we assume that the overall dependencies between hidden states h and observations w follow a hidden Markov model type of dependency: the new state only depends on the last state, and the current observation only depends on the current state. And our question is, how do we model these transitions? I've so far pitched the idea of sampling noise and then using some transition function f, and we have seen flows already; now we are ready to combine the two. We propose a transition function f_g, which has the signature I mentioned before: it gets a hidden state and a noise vector as input and it gives you a new state as output. This can be seen as a conditional flow, because any h_{t-1}, any last state, injected as the first argument into f_g induces a flow which maps from the simple noise distribution to the space of new hidden states. And as I've said before, for the prior distribution in the simple noise space we simply assume a standard Gaussian. Let's look at this graphically, because in the end this is a graphical model. I copied over the formulas from the last slide, and at the bottom you see the graphical model. First we have a sequence of stochastic variables xi; those deterministically induce, via the transition function f, via the flow, a sequence of hidden states, and those independently predict the observations. All the magic is in the transition, so let me sketch this process here in the big circle: how do we get from the last state h2 to the new state h3?
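Before walking through that example, here are the two formulas being described, written out since they are hard to parse when spoken. This is my own reconstruction in the talk's notation, so the exact symbols may differ from the slides.

```latex
% Change of variables for a flow  h = f(\xi),  \xi \sim p_\Xi :
\[
p_H(h) \;=\; p_\Xi\!\bigl(f^{-1}(h)\bigr)\,
             \Bigl|\det \tfrac{\partial f^{-1}(h)}{\partial h}\Bigr|
\]
% Markov (HMM-style) factorization of the generative model, with the transition
% realized by the conditional flow driven by standard Gaussian noise:
\[
p(w) \;=\; \int \prod_{t=1}^{T} p(h_t \mid h_{t-1})\, p(w_t \mid h_t)\,\mathrm{d}h,
\qquad
h_t = f_g(h_{t-1}, \xi_t),\quad \xi_t \sim \mathcal{N}(0, I).
\]
```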
Let's say h2 encodes a prefix and there are two possible continuations. They're equally likely in the corpus, so there are two potential new states, the blue state h3 and the yellow state h3. I've sketched a standard Gaussian noise distribution at the top; there are yellow samples and there are blue samples. The flow realizes a mapping that takes any yellow sample and maps it to the yellow hidden state, and maps any blue sample to the blue hidden state. So with probability one half, in this situation, we either get a blue or a yellow sample from the simple noise distribution and thereby induce one of the new states, the blue h3 or the yellow h3. So far we have proposed the generative model; now the question is, how do we train it if we don't know the hidden states? The answer is variational inference, and in particular amortized variational inference. The key idea of variational inference is to introduce a parameterized approximate inference model. How do we propose such a model? Well, a good recipe is to first look at the true posterior, the probability of a state sequence given an observation sequence. The true posterior turns out to factorize into individual components, which gives us the probability of a state given the last state and the future observations. It turns out that we can formulate this inference model using two ingredients that should be familiar. First we use a transition function f_q which induces a flow; it has the same signature as f_g for the generative model. And we use a noise source q, but now the noise source isn't uninformative anymore: in variational inference the inference network is informed about the data, so there's a base distribution q of xi_t which is allowed to look at the data w_t. Now compare this to teacher forcing: in teacher forcing we substitute our own predictions by inserting ground-truth information into the generative model. In variational inference it's very clear how to use the data: the data enters through the inference model, and it enters in the form of future observations, because the past observations we want to store in the hidden state. It remains to derive an ELBO, which is the usual evidence lower bound objective used for variational inference. Any ELBO, whether it's in the sequential setting or not, factorizes into two parts, a reconstruction loss and a model mismatch term. Here the reconstruction loss means the probability of an observation given a state, and the model mismatch is between the generative model p and the inference model q; this is what is usually written as a KL divergence. To derive our ELBO we follow the literature on flows. In a first step we introduce the flow of the inference model, f_q: we turn the expectation with respect to the complex state space h into an expectation with respect to the simple noise distribution, and then of course at the same time the flow appears inside the expectation and we get the log-determinant terms that I've mentioned before. In a second step we introduce the generative flow f_g. Using the same change-of-variables technique, it's possible to write out the ELBO in a way such that there's only one Jacobian term for both flows, and such that the generative flow always appears as its inverse concatenated with the inference flow.
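As a rough sketch of how one time step of this objective could be computed, consider the following simplified code. This is my own hedged reconstruction, not the authors' implementation: the names are hypothetical, f_q and f_g are taken to be conditional affine (shift-scale) flows, q_dist and prior are assumed to be diagonal Gaussians such as torch.distributions.Normal, and unlike the paper I keep the two Jacobian terms separate instead of merging them into a single Jacobian of the composed map.

```python
# Hedged sketch (assumed, simplified) of one time step of the ELBO with
# conditional affine flows; the generative flow only appears through its inverse.
import torch
import torch.nn as nn

class CondAffineFlow(nn.Module):
    """h = mu(h_prev) + exp(log_sigma(h_prev)) * xi  -- invertible in xi for fixed h_prev."""
    def __init__(self, size):
        super().__init__()
        self.mu = nn.Linear(size, size)
        self.log_sigma = nn.Linear(size, size)

    def forward(self, h_prev, xi):
        log_s = self.log_sigma(h_prev)
        return self.mu(h_prev) + log_s.exp() * xi, log_s.sum(-1)       # state, log|det J|

    def inverse(self, h_prev, h):
        log_s = self.log_sigma(h_prev)
        return (h - self.mu(h_prev)) * (-log_s).exp(), -log_s.sum(-1)  # noise, log|det J^-1|

def step_elbo(f_q, f_g, decoder_logprob, q_dist, prior, h_prev, w_t):
    xi_q = q_dist.rsample()                                  # informed proposal q(xi_t | future)
    h_t, logdet_q = f_q(h_prev, xi_q)                        # inference flow -> hidden state
    xi_g, logdet_ginv = f_g.inverse(h_prev, h_t)             # pull the state back through f_g
    recon = decoder_logprob(h_t, w_t)                        # log p(w_t | h_t): reconstruction
    mismatch = (prior.log_prob(xi_g).sum(-1) + logdet_ginv
                - q_dist.log_prob(xi_q).sum(-1) + logdet_q)  # model-mismatch terms
    return recon + mismatch, h_t
```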
In a second I'll show you what the interpretation of that is. Let's quickly recap what we've seen so far. There's a generative model; it consists of a generative flow f_g and an uninformed noise source. There's an inference model, which contains an inference flow f_q and a simple base distribution across the noise variables, q of xi. In the ELBO the two flows appear concatenated, and we can interpret this in the following way. The inference model q proposes a noise vector xi_t that is informed about the future. The inference flow maps this to a hidden state. At the hidden state the reconstruction loss lives; this is where we pay a price for making a bad prediction. However, the inference model cannot encode all the possible information about the future into the hidden state h_t, because the mapping continues to the simple noise space of the generative model, and the inference model must make sure that the proposal also covers significant probability mass under the uninformed prior. This trade-off between reconstruction and model mismatch is common to all ELBOs, but here we highlight the special situation where we have two flows, one for the inference model and one for the generative model. In our paper we also show how we can use the recently proposed importance weighted autoencoder to improve the tightness of our bound, but I'll skip those steps here. Instead let's quickly talk about evaluation. We apply our model to unconditional generation, so why in hell would somebody look into unconditional generation? Well, it actually turns out it's harder than conditional generation: if you know what the French sentence looks like, it's much easier to continue a partial English translation. But it's not only harder, it's also more interesting to inspect which information a sequence model needs to store and which information it can forget. We use two metrics to evaluate our model. First we look at sequence cross-entropy, so we compare the model's sequence distribution to the data sequence distribution. Usually estimating the data distribution is impossible; you don't want to say that the probability of a sentence is how many times the sentence appeared in the training data. However, for words we can use unigram frequencies of words in a corpus as a pretty reliable estimate. Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling: we take the marginal likelihood, sample K trajectories, and assess the probability that the trajectories assign to the given sequence. Since our model is not autoregressive, a state sequence isn't tied to an observation, so we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary. Since we've pitched our noise model as the key contribution to our generative model, we want to empirically verify that the noise model is being used. Working with a clean probabilistic model allows us to use tools from probability theory to assess that: we use the mutual information between the noise vector at time t and the observation at time t, so this measures how much information in the output is actually due to the noise model. Before showing you the numbers, let's quickly go over the parameterization of our model. For the flows we look at shift-scaling transformations, and if the scaling G is lower triangular we can compute the Jacobian determinant efficiently. We also look at Real NVP, and we compose flows by concatenation. The base distribution of our inference model depends on the future observations, which we summarize using a GRU RNN. The base
distribution itself is the diagonal Gaussian We use a state size of 8 and also run some experiments for 16 and 32 All the numbers are in the paper, so here are just to take home messages We on par are better than a deterministic RNN with teacher forcing trainer at the same state size Also we observe that a powerful generative flow is essential to achieve good performance Furthermore, we can confirm that important vectors elbow improve the results This is the first model applying generative flows to sequence modeling So naturally we are interested in comparing the expressiveness of fg and fq our paper has a table that compares Four choices for both flows our findings are that the generative flow should be powerful and the infants flow should be slightly less powerful To understand our noise model we look at the mutual information at every time step and show a box spot for all of them Initially the mutual information is highest which means the initial character is most important to remember The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences A non-order regressive model needs to have lower entropy in the observation model Because any under entropy under the observation model is being forgotten because there's no feedback The purple line shows you the observation model entropy during training The dashed red line shows you the entropy on the observation model of a baseline So indeed we have lower entropy in the observation model and at the same time in green you see the mutual information increasing Let's summarize our findings Using rational flows non-order aggressive modeling of sequences is possible and teacher forcing is not necessary At the same time we get a noise model that is the driving factor of the sequence model and is easy to interpret For any details please check out the paper and find any question should mean email | [{"start": 0.0, "end": 6.12, "text": " Hi everybody, my name is Florion and Janik was nice enough to host me here as a guest to talk about"}, {"start": 6.92, "end": 8.92, "text": " Storastic Arnans without teacher forcing"}, {"start": 9.48, "end": 15.08, "text": " This is based on recent work deep state space models for unconditional word generation"}, {"start": 15.4, "end": 18.32, "text": " which we presented at this year's new ribs and"}, {"start": 19.16, "end": 22.56, "text": " If you feel like any more details, please check out the paper"}, {"start": 23.64, "end": 25.64, "text": " We focus on a"}, {"start": 25.64, "end": 30.200000000000003, "text": " De facto standard training hack for any Arnans that generate text"}, {"start": 30.8, "end": 34.84, "text": " It's called teacher forcing and it's used in any model whether"}, {"start": 35.64, "end": 41.4, "text": " unconditional or conditional such as in a sentence auto encoder or in a translation model"}, {"start": 42.400000000000006, "end": 48.24, "text": " To understand where teacher forcing comes from we first need to understand where text generation comes from"}, {"start": 48.8, "end": 52.32, "text": " for the good or the bad and here we'll focus on the bad"}, {"start": 52.32, "end": 57.96, "text": " Text generation has its roots in language modeling. 
So language modeling is the problem of"}, {"start": 58.6, "end": 61.4, "text": " predicting the next word given all the previous words"}, {"start": 62.92, "end": 68.76, "text": " People used to use angram models for this but to tape people use recurrent neural networks to do that"}, {"start": 69.12, "end": 71.12, "text": " such recurrent neural networks or"}, {"start": 71.12, "end": 73.84, "text": " Arnans factorize the joint observation"}, {"start": 74.32, "end": 77.96000000000001, "text": " Probability of a sequence that I here depict as w into"}, {"start": 77.96, "end": 84.0, "text": " Into independent softmax distributions over individual tokens. So for every time step"}, {"start": 84.16, "end": 90.03999999999999, "text": " There's a softmax function and the softmax is conditioned on a hidden state and all the magic of the Arn"}, {"start": 90.28, "end": 94.63999999999999, "text": " goes into the function that gives you the new state given the old hidden state"}, {"start": 95.11999999999999, "end": 100.96, "text": " Usually this is called a transition function f and as an input it gets the last state and the last word"}, {"start": 101.36, "end": 105.44, "text": " So f could be a G or U function on LSTM function"}, {"start": 105.44, "end": 110.67999999999999, "text": " Just like any other language model you can turn this into a generative model of text"}, {"start": 110.96, "end": 113.64, "text": " Let's look at the dependencies that you would have a test time"}, {"start": 114.28, "end": 116.28, "text": " There's an initial hidden state h1"}, {"start": 116.75999999999999, "end": 122.36, "text": " We sample a new word we use our transition function f and gives us the new state h2"}, {"start": 123.16, "end": 128.6, "text": " Then we can sample a new word w2 feed it back get a new state sample a new word feed it back"}, {"start": 129.6, "end": 133.12, "text": " It's important to know that all the store elasticity in the output is"}, {"start": 133.12, "end": 139.24, "text": " It's solely due to the store elasticity in the sampling process because the transition function is deterministic"}, {"start": 139.8, "end": 141.8, "text": " So far there's nothing to complain about"}, {"start": 142.08, "end": 146.64000000000001, "text": " But so far I've only talked about test time a training time. 
There's a catch"}, {"start": 147.12, "end": 155.6, "text": " This is where teacher forcing kicks in it turns out that you can't learn this model by basing the evolution of the hidden states on your own predictions"}, {"start": 155.92000000000002, "end": 160.64000000000001, "text": " You have to use teacher forcing and that means you substitute your own prediction by the ground truth"}, {"start": 160.64, "end": 168.27999999999997, "text": " So a training time there's no sampling loop you just take the ground truth token and feed it into your state transition function"}, {"start": 168.88, "end": 173.35999999999999, "text": " So that feels unintuitive because that test time we do something else then we do a training time"}, {"start": 173.6, "end": 177.0, "text": " It's also known in the literature for a few years to cause biases"}, {"start": 177.23999999999998, "end": 180.95999999999998, "text": " So why is that problematic remember we come from language modeling"}, {"start": 181.56, "end": 187.51999999999998, "text": " In language modeling we could argue that if our only goal is to predict one word given the previous words"}, {"start": 187.52, "end": 191.68, "text": " Then of course we can use the ground truth context the ground truth previous words"}, {"start": 192.08, "end": 195.16000000000003, "text": " But if we're interested in generating like longer sequences"}, {"start": 195.44, "end": 201.56, "text": " Then we need to learn what to memorize and in particular we need to become robust against our own predictions"}, {"start": 201.8, "end": 206.08, "text": " Because we might make mistakes at test time and there's no ground truth at test time"}, {"start": 206.68, "end": 213.24, "text": " Just to get this confirmed by somebody who is working the field for years at the new ribs representation learning workshop"}, {"start": 213.24, "end": 220.88, "text": " Alex Grave mentioned teacher forcing as one of the big three problems for other aggressive models and in his own words"}, {"start": 221.84, "end": 228.84, "text": " Teacher forcing might lead to predict one step ahead not many and potentially brittle generation and myopic representations"}, {"start": 229.72, "end": 232.28, "text": " Half people address teacher forcing so far"}, {"start": 232.76000000000002, "end": 239.12, "text": " They're approaches that try to mitigate the problem for example by blending together these two views training time and test time"}, {"start": 239.12, "end": 243.64000000000001, "text": " So that sometimes you use your own prediction during training, but sometimes you use the ground truth"}, {"start": 244.36, "end": 249.4, "text": " We believe for rigorous model of text generation we need a rigorous model of uncertainty"}, {"start": 249.88, "end": 257.56, "text": " This should be an integral part of any generative model and therefore it should be the same model both a training time and test time without any hacks"}, {"start": 258.6, "end": 265.24, "text": " We propose a fundamentally different approach by proposing a new transition function the new transition function is"}, {"start": 265.24, "end": 268.92, "text": " Non-order-regressive that means it depends on the last stage"}, {"start": 269.48, "end": 272.76, "text": " Ht minus one, but it doesn't depend on the last word"}, {"start": 273.24, "end": 278.76, "text": " That means teacher forcing is not an option anymore, but it also means teacher forcing is not a problem anymore"}, {"start": 279.68, "end": 284.16, "text": " Instead the transition function accepts 
a white noise vector as the second input"}, {"start": 284.92, "end": 289.32, "text": " Now you might wonder why do we need noise at all as an input to the transition function?"}, {"start": 289.32, "end": 296.71999999999997, "text": " Well, for given prefix there might be different continuations, so we need some source of entropy to model the entropy in different"}, {"start": 297.04, "end": 302.2, "text": " Continuations the rest of the paper pretty much focuses on the following two questions a"}, {"start": 302.92, "end": 308.2, "text": " Which function f is powerful enough to turn the most simple noise source?"}, {"start": 308.52, "end": 314.6, "text": " Just a standard Gaussian vector into something that is powerful enough to replace the order-regressive feedback"}, {"start": 314.6, "end": 321.56, "text": " My hemispom of a standard RNN and the second question is of course, how do we train this what framework do we train this in?"}, {"start": 322.28000000000003, "end": 326.92, "text": " And it will turn out that variational flows are suitable functions f and"}, {"start": 327.48, "end": 330.44, "text": " Variational inference is the right way framework to train them"}, {"start": 331.32000000000005, "end": 333.8, "text": " So here's the roadmap to complete the model"}, {"start": 334.24, "end": 338.48, "text": " First we need to cast the generative model as a probabilistic method because so far"}, {"start": 338.48, "end": 344.20000000000005, "text": " I've only sketched a procedure that involves sampling some noise and then applying some function"}, {"start": 344.2, "end": 346.2, "text": " And then predicting observations"}, {"start": 346.71999999999997, "end": 351.24, "text": " Then we need to propose a variational inference model so that we can do maximum likelihood training"}, {"start": 351.64, "end": 358.71999999999997, "text": " We will derive an elbow which is our objective then the paper we also describe how the tightness of the elbow can be improved"}, {"start": 359.2, "end": 364.88, "text": " and here I will finish by talking a bit about the evaluation and what we do to inspect the model"}, {"start": 365.8, "end": 371.84, "text": " Since this work is based a lot on variational flows, let me give you a quick summary of variational flows"}, {"start": 371.84, "end": 374.56, "text": " A variational flow is a Diffian morphism f"}, {"start": 375.28, "end": 381.56, "text": " Which maps from what I will call a simple noise space xi to a complex noise space h"}, {"start": 381.84, "end": 385.35999999999996, "text": " And here I'm already using the notation for a sequence model"}, {"start": 386.12, "end": 394.64, "text": " Simply by the change of variable formula we know that the probability of an event h in the complex space is simply the probability of the event"}, {"start": 394.88, "end": 400.4, "text": " In the simplest space xi as given by the inverse of f times a Jacobian term"}, {"start": 400.4, "end": 403.08, "text": " It's respect to f evaluated at xi"}, {"start": 403.76, "end": 406.4, "text": " How can we use this in our sequential setting?"}, {"start": 406.84, "end": 412.0, "text": " First let me fix some notation because sequential models are pretty prone to overloaded notation"}, {"start": 412.88, "end": 417.59999999999997, "text": " I'll write time as t running from one to capital T and"}, {"start": 418.15999999999997, "end": 425.23999999999995, "text": " Whenever I talk about a sequence of variables like w. I don't index them. 
I just write w without an index"}, {"start": 425.24, "end": 431.36, "text": " And only when I need a specific element I'll write it as w t. Let's"}, {"start": 431.36, "end": 433.36, "text": " Formalize the generative model"}, {"start": 433.64, "end": 440.2, "text": " We start out with the probability of observing a sequence w and since we use the latent variable model"}, {"start": 440.2, "end": 448.72, "text": " We marginalize our latent variables h and then we will assume that the overall dependencies between hidden states h and observation"}, {"start": 448.72, "end": 458.20000000000005, "text": " Stubw follow ligand h mm type of dependency that means the new state only depends on the last state and the current observation only depends on the current state"}, {"start": 458.8, "end": 461.44000000000005, "text": " And our question is how do we model these transitions?"}, {"start": 461.8, "end": 466.68, "text": " I've so far pitched the ideas of sampling noise and then using some transition function f"}, {"start": 467.28000000000003, "end": 471.16, "text": " And we have seen flows already now we are ready to combine the two"}, {"start": 472.08000000000004, "end": 474.84000000000003, "text": " We propose a transition function f g"}, {"start": 474.84, "end": 483.59999999999997, "text": " Which has the signature as I mentioned before it gets a hidden state and a noise vector as an input and it gives you a new state as an output"}, {"start": 484.44, "end": 486.52, "text": " This can be seen as a conditional flow"}, {"start": 487.28, "end": 491.03999999999996, "text": " because any ht minus one any last state"}, {"start": 491.76, "end": 498.47999999999996, "text": " inspected as the first argument into f g and uses a flow which maps from the simple noise distribution"}, {"start": 498.76, "end": 501.76, "text": " to the space of new hidden states and"}, {"start": 501.76, "end": 509.56, "text": " And as I've said before for the prior distribution in the simple noise base we simply assume a standard Gaussian"}, {"start": 509.88, "end": 513.48, "text": " Let's look at this graphically because in the end this is a graphical model"}, {"start": 514.08, "end": 519.12, "text": " I copied over the formulas from the last slide and at the bottom you see the graphical model"}, {"start": 519.4, "end": 522.48, "text": " First we have a sequence of stochastic variables xi"}, {"start": 523.24, "end": 525.28, "text": " Those deterministically"}, {"start": 525.28, "end": 532.56, "text": " Induced via the transition function f via the flow a sequence of hidden states and those independently predict the observations"}, {"start": 533.4, "end": 538.9599999999999, "text": " All the magic is in a transition so let me sketch this process here in the big circle"}, {"start": 540.04, "end": 543.9599999999999, "text": " How do we get from the last state h2 to the new state h3?"}, {"start": 545.0, "end": 549.12, "text": " Let's say h2 encodes a prefix and they're two possible combinations"}, {"start": 549.12, "end": 557.84, "text": " They're equally likely in the corpus so there are two potential new states the blue state h3 and the yellow stage h3"}, {"start": 558.12, "end": 561.8, "text": " I've sketched a standard Gaussian noise distribution at the top"}, {"start": 562.36, "end": 570.0, "text": " They're yellow samples and they're blue samples the flow realizes and mapping that takes any yellow sample and maps it to the yellow hidden state"}, {"start": 570.0, "end": 573.8, "text": " And it's maps any blue sample to the blue 
hidden state"}, {"start": 573.8, "end": 579.8399999999999, "text": " So with probability one half in this situation we either get a blue or a yellow sample from the simple noise"}, {"start": 580.0799999999999, "end": 584.68, "text": " distribution and the ability to induce new states blue h3 or the yellow h3"}, {"start": 586.28, "end": 592.3599999999999, "text": " So far we have proposed the generator for more now the question is how do we train it if we don't know the hidden states?"}, {"start": 592.8, "end": 597.76, "text": " The answer is rational inference and in particular amortized variational inference"}, {"start": 597.76, "end": 603.76, "text": " The key idea of variation inference is to introduce a parameterized approximate inference model"}, {"start": 604.2, "end": 606.2, "text": " How do we propose such a model?"}, {"start": 606.48, "end": 614.12, "text": " Well, the good recipe is to first look at the true posterior the probability of a state sequence given an observation sequence"}, {"start": 614.3199999999999, "end": 618.52, "text": " The true posterior turns out to factorize into individual components"}, {"start": 618.84, "end": 624.52, "text": " Which gives us the probability of a state given the last state and the future observations"}, {"start": 624.52, "end": 630.96, "text": " It turns out that we can formulate this inference model using two ingredients that should be familiar"}, {"start": 631.6, "end": 635.84, "text": " First we use a transition function fq which induces a flow"}, {"start": 635.84, "end": 641.52, "text": " It has the same signature as fg for the gerentive model and we use a noise source q"}, {"start": 642.12, "end": 645.3199999999999, "text": " But now the noise source isn't uninformative anymore"}, {"start": 645.88, "end": 649.4, "text": " In variation inference the inference network is a form to bother data"}, {"start": 649.4, "end": 655.72, "text": " So there's a base distribution q of psi t which is allowed to look at data Wt"}, {"start": 656.36, "end": 663.24, "text": " Now compare this to teacher forcing and teacher forcing we substitute our own predictions by inserting ground-truths"}, {"start": 663.4399999999999, "end": 665.4399999999999, "text": " information into the generative model"}, {"start": 666.0799999999999, "end": 671.68, "text": " In variational inference it's very clear how to use the data the data enters through the inference model"}, {"start": 672.0, "end": 678.36, "text": " And it enters in the form of future observation because the past observation we want to store in the hidden state"}, {"start": 678.36, "end": 685.48, "text": " It remains to derive an elbow which is the usual evidence law about objective used for variational inference"}, {"start": 686.0, "end": 688.92, "text": " Any elbow whether it's in the sequential setting or not"}, {"start": 689.5600000000001, "end": 693.88, "text": " factorizes into two parts a reconstruction loss and a model mismatch term"}, {"start": 694.48, "end": 698.8000000000001, "text": " Here reconstruction loss means probability of a observation given a state and"}, {"start": 699.36, "end": 704.24, "text": " Modern mismatch is between the gerentive model p and the inference model q"}, {"start": 704.24, "end": 712.04, "text": " This is what is usually written as a KL divergence to derive our elbow. 
We follow the literature on flows"}, {"start": 712.84, "end": 717.4, "text": " In a first step we introduced the flow on the inference model fq"}, {"start": 718.24, "end": 722.52, "text": " We turn the expectation with respect to the complex state space"}, {"start": 723.04, "end": 726.8, "text": " Age into an expectation with respect to the simple noise distribution"}, {"start": 726.8, "end": 734.92, "text": " And then of course at the same time the flow appears inside the expectation and we got the log determined in terms that I've mentioned before"}, {"start": 735.68, "end": 739.4799999999999, "text": " In a second step we introduced the generative flow fg"}, {"start": 739.76, "end": 742.0, "text": " Using the same change of variable technique"}, {"start": 742.64, "end": 747.76, "text": " It's possible to write out the elbow in a way so that there's only one Jacobian term for both flows"}, {"start": 747.76, "end": 751.76, "text": " And so that the generative model always appears as the inverse"}, {"start": 751.76, "end": 756.56, "text": " Concaitinated with the inference flow in a second. I'll show you what the interpretation of that is"}, {"start": 757.12, "end": 759.92, "text": " Let's quickly recap what we've seen so far"}, {"start": 760.3199999999999, "end": 766.28, "text": " There's a generative model it consists of a generative flow fg and an uninformed noise source"}, {"start": 766.76, "end": 773.36, "text": " There's an inference model which contains an inference flow fq and a simple base distribution"}, {"start": 773.88, "end": 776.16, "text": " Across the noise variables q of xi"}, {"start": 776.16, "end": 782.04, "text": " And the elbow the two flows appear concatenated and we can interpret this in the following way"}, {"start": 782.8, "end": 784.3199999999999, "text": " The inference model q"}, {"start": 784.92, "end": 788.28, "text": " Proposes a noise vector xi t that is formed about the future"}, {"start": 789.36, "end": 791.8399999999999, "text": " The inference flow maps this to a hidden state"}, {"start": 792.48, "end": 795.3199999999999, "text": " At the hidden state the reconstruction loss lives"}, {"start": 795.8399999999999, "end": 798.8399999999999, "text": " This is where we pay a price for making a bad prediction"}, {"start": 799.56, "end": 804.16, "text": " However, the inference model cannot encode all the possible information about the future"}, {"start": 804.16, "end": 806.16, "text": " Into the hidden state ht"}, {"start": 806.8, "end": 815.3199999999999, "text": " Because the mapping continues to the simple noise base of the generative model and the inference model must make sure that the proposal also covers significant"}, {"start": 815.3199999999999, "end": 822.0799999999999, "text": " Providity mass under the uninformed prior this trade-off between reconstruction and model mismatch is common to all elbows"}, {"start": 822.68, "end": 828.9599999999999, "text": " But here we highlight the special situation where we have two flows one for the inference model and one fine generative model"}, {"start": 828.96, "end": 837.08, "text": " In our paper we also show how we can use the recently proposed important weighted auto encoder to improve the tightness of our bond"}, {"start": 837.08, "end": 839.08, "text": " But I'll skip those steps here"}, {"start": 839.52, "end": 841.52, "text": " Instead let's quickly talk about evaluation"}, {"start": 843.2, "end": 849.2, "text": " We apply our model to unconditional generation so why in hell would somebody look into 
unconditional generation?"}, {"start": 849.5600000000001, "end": 852.5600000000001, "text": " Well, actually turns out it's harder than conditional generation"}, {"start": 852.56, "end": 859.0, "text": " If you know what the French sentence looks like it's much easier to continue a partial English translation"}, {"start": 859.8, "end": 868.28, "text": " But it's not only harder. It's also more interesting to inspect which information doesn't sequence model need to store and which information can it forget"}, {"start": 869.16, "end": 873.56, "text": " We use two metrics to evaluate our model first we look at sequence cross entropy"}, {"start": 873.7199999999999, "end": 878.5999999999999, "text": " So we compare the models sequence distribution to the data sequence distribution"}, {"start": 878.6, "end": 882.5600000000001, "text": " Usually estimating the data distribution is impossible"}, {"start": 882.9200000000001, "end": 888.28, "text": " You don't want to say that the probability of a sentence is how many times the sentence has appeared in the training data"}, {"start": 889.0, "end": 891.16, "text": " However for words we can use"}, {"start": 891.76, "end": 894.88, "text": " Unigram frequencies of words and a corpus as a pretty reliable estimate"}, {"start": 895.6, "end": 901.64, "text": " Also, we can get an estimate of our model's probability assigned to a sequence by using MC sampling"}, {"start": 901.64, "end": 909.3199999999999, "text": " We take the marginal likelihood sample K trajectories and assess the probability that the trajectories assigned to the given sequence"}, {"start": 909.76, "end": 914.08, "text": " Since our model is not order aggressive a sequence isn't tied to an observation"}, {"start": 914.08, "end": 920.52, "text": " So we can actually use the same sequences of hidden states to evaluate probabilities for all the words in the vocabulary"}, {"start": 921.3199999999999, "end": 925.6, "text": " Since we've pitched our noise model as the key to contribution to our generative model"}, {"start": 925.84, "end": 929.24, "text": " We want to empirically verify that the model is being used"}, {"start": 929.24, "end": 935.36, "text": " Working with a clean probabilistic model allows us to use tools from probability theory to assess that"}, {"start": 935.6800000000001, "end": 941.72, "text": " We use the mutual information between a noise vector at time T and the observation of time T"}, {"start": 941.72, "end": 946.48, "text": " So this measures how much information in the output is actually due to the noise model"}, {"start": 947.44, "end": 951.96, "text": " Before showing you the numbers, let's quickly go across the parameterization of our model"}, {"start": 952.52, "end": 956.2, "text": " For the flows we look at shift scaling transformations"}, {"start": 956.2, "end": 962.4000000000001, "text": " And if the scaling g is lower triangular we can compute efficiently the Jacobian determinant"}, {"start": 962.6800000000001, "end": 966.72, "text": " We also look at real MVP and we compose flows by concordination"}, {"start": 967.6, "end": 974.1600000000001, "text": " The best distribution of our inference model depends on the future observations which we summarize using a GRURN"}, {"start": 974.32, "end": 977.4000000000001, "text": " The best distribution itself is the diagonal Gaussian"}, {"start": 977.8000000000001, "end": 982.36, "text": " We use a state size of 8 and also run some experiments for 16 and 32"}, {"start": 982.36, "end": 986.5600000000001, "text": " All the 
numbers are in the paper, so here are just to take home messages"}, {"start": 987.04, "end": 992.12, "text": " We on par are better than a deterministic RNN with teacher forcing trainer at the same state size"}, {"start": 992.76, "end": 997.04, "text": " Also we observe that a powerful generative flow is essential to achieve good performance"}, {"start": 997.76, "end": 1001.64, "text": " Furthermore, we can confirm that important vectors elbow improve the results"}, {"start": 1003.8000000000001, "end": 1007.24, "text": " This is the first model applying generative flows to sequence modeling"}, {"start": 1007.24, "end": 1014.72, "text": " So naturally we are interested in comparing the expressiveness of fg and fq our paper has a table that compares"}, {"start": 1014.72, "end": 1022.8, "text": " Four choices for both flows our findings are that the generative flow should be powerful and the infants flow should be slightly less powerful"}, {"start": 1024.2, "end": 1030.6, "text": " To understand our noise model we look at the mutual information at every time step and show a box spot for all of them"}, {"start": 1030.6, "end": 1037.1599999999999, "text": " Initially the mutual information is highest which means the initial character is most important to remember"}, {"start": 1037.4399999999998, "end": 1045.6799999999998, "text": " The noise model is never being ignored and we see increased variance in the remaining time steps because we are averaging here across different sequences"}, {"start": 1046.32, "end": 1050.9199999999998, "text": " A non-order regressive model needs to have lower entropy in the observation model"}, {"start": 1051.36, "end": 1056.28, "text": " Because any under entropy under the observation model is being forgotten because there's no feedback"}, {"start": 1056.28, "end": 1061.28, "text": " The purple line shows you the observation model entropy during training"}, {"start": 1061.6, "end": 1066.08, "text": " The dashed red line shows you the entropy on the observation model of a baseline"}, {"start": 1066.56, "end": 1074.0, "text": " So indeed we have lower entropy in the observation model and at the same time in green you see the mutual information increasing"}, {"start": 1075.48, "end": 1077.48, "text": " Let's summarize our findings"}, {"start": 1078.0, "end": 1084.72, "text": " Using rational flows non-order aggressive modeling of sequences is possible and teacher forcing is not necessary"}, {"start": 1084.72, "end": 1091.44, "text": " At the same time we get a noise model that is the driving factor of the sequence model and is easy to interpret"}, {"start": 1091.44, "end": 1116.44, "text": " For any details please check out the paper and find any question should mean email"}] |
Yannic Kilcher | https://www.youtube.com/watch?v=WYrvh50yu6s | Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations | https://arxiv.org/abs/1811.12359
Abstract:
In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learning algorithms. A large number of unsupervised learning approaches based on auto-encoding and quantitative evaluation metrics of disentanglement have been proposed; yet, the efficacy of the proposed approaches and utility of proposed notions of disentanglement has not been challenged in prior work. In this paper, we provide a sober look on recent progress in the field and challenge some common assumptions.
We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data. Then, we train more than 12000 models covering the six most prominent methods, and evaluate them across six disentanglement metrics in a reproducible large-scale experimental study on seven different data sets. On the positive side, we observe that different methods successfully enforce properties "encouraged" by the corresponding losses. On the negative side, we observe in our study that well-disentangled models seemingly cannot be identified without access to ground-truth labels even if we are allowed to transfer hyperparameters across data sets. Furthermore, increased disentanglement does not seem to lead to a decreased sample complexity of learning for downstream tasks.
These results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision, investigate concrete benefits of enforcing disentanglement of the learned representations, and consider a reproducible experimental setup covering several data sets.
Authors:
Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem | All right, hello everyone. Today we're going to look at this paper, Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations, by Francesco Locatello and a bunch of other people at Google AI, ETH Zurich and MPI. Full disclaimer, I know these people and I've talked to them about this work, so you know where I'm coming from. It's a good paper and it's fairly short to explain, so let's go over it. The main thing here is what's called disentanglement. Disentanglement is a property, not so much of the data, but of your model, that you would like to have in unsupervised learning, here especially in generative models. What they focus on here is autoencoding. And what that means is: I have some data point, which could be an image. Let's draw an image here. And I compress this, usually into a vector, and the vector has a couple of dimensions. This is a representation of the data. And from this representation, what I can do is I can produce an image again. And if I train an autoencoder, I will enforce that my model, so both of these are my model, this is called an encoder and this is called a decoder, that what they do is that the final image then looks like the original image. This is an autoencoder, basically a compression algorithm that tries to find representations such that it can reconstruct the original image again. Here, we go a little further in that we use what's called variational autoencoders. So all of these experiments here use variants of the variational autoencoder. And what is a variational autoencoder? Let's skip ahead here. A variational autoencoder is the same thing as an autoencoder, except it's a probabilistic framework. So what you do is, here on the bottom you can see an equation that basically is the objective for a VAE. And what it does is it says, OK, I have an image, let's say this is my image, and I use an encoder like in an autoencoder, and that gives me a representation. But now I don't use this representation directly to decode; this representation is simply the parameters of a bunch of distributions. So here, let's say I want four latent factors. And the latent factors are basically the latent variables that describe this image. So the images could be images of, let's say, cats. And four latent factors could be the color of the fur of the cat, the size of the cat, the position in the image, and the general lighting, how bright the image is. So these could be four latent factors that would best explain the image, and from which the image could be best reconstructed, let's say. These four latent factors we consider as probability distributions. So our encoder needs to produce eight numbers in this case. Eight numbers, why? Because for each of these four distributions, we want a mean and a standard deviation. So for these eight numbers here, in each pair of numbers, one of them is going to be the mean and the other one is going to be the standard deviation of a distribution. And then from these, we're going to construct a distribution. So like, OK, here's the mean, here's the standard deviation, so the distribution somehow looks like this. And then we're going to sample from this distribution. So one sample could be here, one sample could be here, one sample could be here, here.
So of course, in the middle here, we're going to have more samples. But so, whereas the autoencoder directly uses the encoding to reproduce the image, in the variational autoencoder what the encoder produces here is simply a parameterization for a distribution. And that distribution is then sampled. So we're going to take one sample here, from each of these. So there are going to be multiple of those distributions: because we have eight numbers, we are going to produce four distributions in particular. So we're going to sample four different numbers. So we're going to sample a new vector with one, two, three, four entries. Well, I didn't have eight at the beginning, but never mind. So this gives us four numbers, but these are samples, so these are going to be different every time, even if we feed the same image. And from this, the decoder is going to try to reproduce the image. And then, again, the end image and the beginning image are going to be forced to be close to each other. But now, since this is a probabilistic framework, we also need a different loss function. For the autoencoder, you can simply penalize how far apart the images are in, let's say, L2 norm. But here, we have two distinct parts to the loss term, and everything is probabilistic. So let's walk through this. The first part of the loss term: here, in particular, Q, you can see, is the distribution of z conditioned on x. And z will always be our latent representation of the data, and x will be the data itself, the data point. So Q will take the data point and produce z. And z, specifically here, what's meant is this thing here: this is z, whereas this is x. And this here is x tilde, or something like that, whatever is produced by the decoder. So basically, what we're going to do is we're going to penalize the KL distance, which is a probabilistic distance measure. We're going to measure the distance between the distribution of z under x and the prior over z. So P of z here, this here, is the prior distribution over z. And the prior distribution in VAEs is often taken to be a Gaussian. So we'll say, all right, our kind of default assumption on the z variables is that they're Gaussians. And we're going to force, basically, we're going to force the encoder to come up with encodings, generally over the data set, that are Gaussian, that conform to our prior. So here, we say a specific prior P of z. I didn't mean to cross that out. So this second term enforces the encoder to produce things that are Gaussian. Specifically, if our prior is, let's say, a zero-mean, unit-variance Gaussian, it's going to enforce that. The first term here is different. The first term makes the image that has been input to the variational autoencoder and the image that has been output close together. Again, this is a probabilistic loss. So what we're going to do here is take expectations. The KL distance is also an expectation, by the way. We're going to take expectations over P of x, which is the distribution of the data, and also over Q, and Q is, again, our encoding mechanism. And we're simply going to maximize the log probability, which is equivalent to minimizing the negative log likelihood, which you might be familiar with, of the data given the z variables. So this is an expectation over Q given x.
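To make these two loss terms a bit more concrete, here is a minimal sketch of a VAE objective in PyTorch. The tiny linear encoder and decoder, the latent size of four, the Bernoulli reconstruction likelihood, and the use of log-variances rather than raw standard deviations are all assumptions made purely for illustration; this is not the exact architecture or training setup studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE: the encoder outputs a mean and a log-variance per latent factor."""
    def __init__(self, x_dim=784, z_dim=4):
        super().__init__()
        self.enc = nn.Linear(x_dim, 2 * z_dim)   # 2*z_dim numbers: means and log-variances
        self.dec = nn.Linear(z_dim, x_dim)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)      # one sample from q(z|x) (reparameterization)
        x_logits = self.dec(z)                    # parameters of p(x|z)
        return x_logits, mu, logvar

def vae_loss(x, x_logits, mu, logvar):
    # Reconstruction term: -E_q[log p(x|z)], here with a Bernoulli likelihood per pixel.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    # KL(q(z|x) || p(z)) against a standard-normal prior, in closed form for diagonal Gaussians.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl                             # negative ELBO

# usage: x is a batch of flattened images with values in [0, 1]
model = TinyVAE()
x = torch.rand(8, 784)
loss = vae_loss(x, *model(x))
loss.backward()
```

Here `kl` plays the role of the second term described above (pushing the encoder's distribution towards the prior), and `recon` plays the role of the first term (the expected log-likelihood of the input under the decoder).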
So what that means is, basically, we want the probability of this original data point. Here we output x tilde, and we want this to be close to x here. So what we can say is, we want the probability that our model outputs x, which was the original input, given this particular z that it produced, to be high, as an expectation over Q of z given x. So it's a bit cryptic, but it means: I input x into Q, I get out z, and when I have the z, here is what I produce. The likelihood that x, the original image, is produced, that these are the same, should be high. So that's a variational autoencoder. I simply encourage the latent representations to be close to my prior, which is often a Gaussian, and I encourage the output to be similar to the input, which I do by encouraging the likelihood that the output is the input. All right, so cool. So what does that have to do with disentanglement? Disentanglement is a property that I would now like to have in my model, which is that these things here, or we can also focus on these things here, or these things here, these latent things that my encoder outputs, somehow give me information about the data in a way that's disentangled. What that means, I've already made an example. Disentangled would be, as I said, let's say we have images of cats. And the fur color is going to be one variable, the color of the eyes of the cat is going to be another one, and the position in the image is going to be another one. These are all fairly independent. So if I change some latent factor, I can change it pretty much independently of the others. So here, this could be the fur color: I can change it pretty much independently, and the cat will just have a different fur and so on. What would be non-disentangled representations would be, let's say, one encodes the fur of the cat and the other one encodes the species of the cat, because these are highly entangled: the fur color is highly dependent on what species the cat is, so they can't really be changed independently. You can imagine it as these things being correlated, though it's slightly different. And there's not really an agreement on what disentanglement means. We just kind of imagine data is somehow entangled, and we want to pull out these disentangled factors. So what they focus on here, the easiest measure, is the following. I might want to have some space here. All right, so the measure of disentanglement that is come up with here is the following. It's an assumption. The assumption is, let's say there's data x, right? We'll call it a random variable. And we assume that this data is generated by a bunch of latent variables, z1, z2, z3, which are independent. Which means, and this is the technical part, that the p of z, which is the distribution over all of them, can be factorized into the p of zi. So they are independent, and they kind of determine, independently, the data x. Now, what does it mean when my model has produced a disentangled representation? It means I now have a model, some model m, which is going to give me a representation of x. And the representation, as we saw before, could be these things here. That's the representation. Specifically, what these people do is they say, OK, the mean of the distribution that my encoder gives me, that's the representation of x. All right, so this gives you a representation of x, from which you then might want to reconstruct x over here.
But so the important thing is, when is the representation disentangled? The representation is disentangled, in the easiest sense, if the following holds. When I change zi, so I introduce a delta to zi, to any of these three, that means that in the representation of x, and we're just going to say, if there are three dimensions of z, we assume we know that, and we also make the representation three-dimensional, then exactly one factor in this is going to change. So if I change one factor of the true underlying distribution, in which all the latent factors are independent, then only one factor in my representation changes. If that's the case, then I can be fairly sure that I've captured the true latent structure of the data. If I change one of the z here, let's say I change z3, let's say I have access to the true underlying distribution, I ask the world to give me a picture of a cat where the fur color is different, then I get a data point, and then I put it through my model and I get a representation. And if, compared to the cat that I had before, only one of the factors of my representation changes, then I call it disentangled. Then I can be fairly sure, OK, this dimension of my representation captures the fur color independently of the other factors. All right, so that's disentanglement. And you notice it actually requires access to the true distribution of how the data is generated by the world. This is something you generally don't have, but it's a technical notion, so you can certainly postulate it, and it's a nice framework. And this paper basically proves that, in general, learning disentangled representations in that way is impossible if you don't make some assumptions, some prior assumptions, on your data and your model. So this is a theorem here. And we see here: P is any generative model which admits this factorization. That's what we talked about: the true underlying generative process has independent constituents, meaning there's a bunch of latent variables, and they, independently from each other, produce a data point. x is the data, the observations. Then there exists an infinite family of bijective functions such that this, and this, and this hold. OK, well, what does that mean? This thing here basically just means that the distributions agree. So the overall distributions, let's say, that's not exactly it, but the posterior distributions, let's say, the data looks the same. What comes out of the process looks the same. So there are functions that transform the latent distribution into some other distribution, but cumulatively they look the same. All right. And then this part here, you'll see the derivative of fi of u with respect to some uj, and you'll notice i and j are different. This means that, basically, the dimensions are entangled. It means that if I take the derivative of one entry of the function output with respect to another entry, then I get a non-zero derivative, which means that this uj influences fi. Which basically means that I can take the z, which is independent, so the ith dimension has no influence on the jth dimension of the output, and I can transform it into something where that's no longer the case, where the ith and the jth dimension very much are entangled, or covary.
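As a toy illustration of this intervention-based notion of disentanglement, here is a small numpy sketch. The linear "world" A, the stand-in linear "encoders" W, and the change-counting threshold are hypothetical choices for illustration only; the paper itself evaluates learned encoders with proper disentanglement metrics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": observations are a fixed linear mixing of three independent factors.
A = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.3],
              [0.5, 0.0, 1.0]])

def generate(z):
    return A @ z

def num_changed_dims(W, i, delta=1.0, tol=1e-8):
    """Intervene on ground-truth factor i and count how many coordinates of the
    representation r(x) = W x change. Disentangled means exactly one changes."""
    z = rng.normal(size=3)
    z_new = z.copy()
    z_new[i] += delta
    r, r_new = W @ generate(z), W @ generate(z_new)
    return int(np.sum(np.abs(r_new - r) > tol))

W_good = np.linalg.inv(A)   # undoes the mixing, so it recovers the factors exactly
W_bad = np.eye(3)           # just reads off the mixed observations
for name, W in [("disentangled", W_good), ("entangled", W_bad)]:
    print(name, [num_changed_dims(W, i) for i in range(3)])
# disentangled [1, 1, 1]
# entangled    [2, 2, 2]
```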
So this means I can take the z where everything is independent and transform it into something where everything is dependent. And they give a nice example here. They say, let's say we have Gaussians in two dimensions. So we have one Gaussian here, and, let me see if I can draw this, one Gaussian here, right? In two dimensions, they're completely independent. What you'll find is that the overall distribution has iso-lines like this, right? It gives you kind of a hump in the middle, in two dimensions, where you can maybe imagine a bit of a mountain in the middle. All right, so this is the kind of output distribution. If you don't know about the underlying factors, you simply see the cumulative distribution, which would be the big P here. All right, now we transform this with f, and f is simply a rotation by 45 degrees. So two new axes, this and that. And again, our two Gaussians are going to be transformed like this. So these are not disentangled anymore, well, that's not quite the right way to say it, but it's the easiest way to say it. So now it's rotated with respect to the original coordinate system, which would go like this. These very much depend on each other, right? The ith dimension and the jth dimension depend on each other, because if I sample from one of the Gaussians, I now basically need two coordinates to describe where it is; one alone isn't enough. So if I sample from one Gaussian and I change it, I need both coordinates. But the cumulative distribution is still going to look exactly the same, because it's basically isotropic, the same in every direction: if I rotate it, it looks exactly the same. This is the P here. But now the ith dimension and the jth dimension very much influence each other. And, interestingly, if you now look at this entanglement: if I produce data x1 here, and here I produce data x2, and both go through my model and give me a representation of x1 and a representation of x2, then without seeing the underlying structure, I have no idea which one of those two it comes from. And thereby, I have zero chance, it's basically a lucky guess, which one it comes from. And there's an infinite family, so I will never find the true underlying distribution here, and thereby I will never be able to satisfy this property that if one of the z changes, then only one of the factors of my representation will change. Because if I say, oh, well, obviously this is the case, then I'm going to make one model, and if I say, well, this is the case, I'm going to make a different model. I don't know which one it is, so I have to choose one, and it could be the other one. So I'm bound to be wrong, in this case, 50% of the time. But if it's an infinite family, I'm bound to be wrong basically every time. So that's what the theorem basically says: I can't decide on the true underlying distribution. There's an infinite family that transforms every such distribution into some other distribution that has basically the complete opposite entanglement properties, and I need to choose one, and I will never choose the right one, because I'm not that lucky. And thereby, I can't do representation learning that's disentangled. All right, so that's the main claim of the paper. And there are a lot of experiments here. So what the paper also does is they take a bunch of data sets and they test a lot of different architectures on them.
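Before the experiments, here is a small numeric version of that rotated-Gaussians argument, assuming numpy. It checks that an isotropic two-dimensional Gaussian has the same mean and covariance after a 45-degree rotation, even though each rotated coordinate now mixes both of the original independent factors, so from samples alone you cannot tell which coordinate system holds the "true" factors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent standard-normal latent factors: the "disentangled" ground truth.
z = rng.normal(size=(100_000, 2))

# A bijection f: rotation by 45 degrees. It entangles the two coordinates.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_rot = z @ R.T

# The aggregate distribution is unchanged: both covariances are close to the identity.
print(np.round(np.cov(z, rowvar=False), 2))
print(np.round(np.cov(z_rot, rowvar=False), 2))

# ...but each rotated coordinate is a mixture of BOTH original factors:
# correlation with the first original factor is about 0.71, not 1.0.
print(np.round(np.corrcoef(z[:, 0], z_rot[:, 0])[0, 1], 2))
```

An observer who only sees samples from the aggregate distribution therefore cannot distinguish the original factors from the rotated ones, which is exactly the non-identifiability the theorem formalizes.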
Basically, they say just because it's theoretically impossible, it's not impractical, because we can actually make these underlying assumptions. Like we can make some assumptions on the data and then we can attempt to do disentanglement learning. So they do these data sets and they test different VIEs architectures on it. And they basically establish where more work should go. So that's kind of the rest of the paper. I encourage you to look at the rest of the paper. I just wanted to give a quick introduction to VIEs and to disentanglement, disentangled representation learning. I wasn't technically correct in every detail, but I hope that it's enough. And that's fun. | [{"start": 0.0, "end": 2.0, "text": " All right, hello everyone."}, {"start": 2.0, "end": 5.2, "text": " Today we're going to look at this paper,"}, {"start": 5.2, "end": 8.34, "text": " challenging common assumptions in the unsupervised learning"}, {"start": 8.34, "end": 12.4, "text": " of disentanglement representations by Francesco Luc Tello"}, {"start": 12.4, "end": 15.48, "text": " and a bunch of other people at Google AI,"}, {"start": 15.48, "end": 18.72, "text": " ETAG Zurich and MPI."}, {"start": 18.72, "end": 21.36, "text": " Full disclaimer, I know these people,"}, {"start": 21.36, "end": 25.32, "text": " and I've talked to them about this work,"}, {"start": 25.32, "end": 28.04, "text": " so just say no where I'm coming from."}, {"start": 28.04, "end": 32.28, "text": " It's a good paper and it's fairly short to explain."}, {"start": 32.28, "end": 33.6, "text": " So let's go over it."}, {"start": 35.24, "end": 38.8, "text": " The main thing here is what's called disentanglement."}, {"start": 38.8, "end": 42.28, "text": " So disentanglement is kind of a property of data"}, {"start": 42.28, "end": 46.04, "text": " in unsupervised learning or not data"}, {"start": 46.04, "end": 49.72, "text": " of your model that you would like to have"}, {"start": 49.72, "end": 54.76, "text": " in unsupervised learning, here especially in generative models."}, {"start": 54.76, "end": 63.599999999999994, "text": " So what they focus on is auto encoding here."}, {"start": 63.599999999999994, "end": 67.0, "text": " And what that means is I have some data point,"}, {"start": 67.0, "end": 68.67999999999999, "text": " which could be an image."}, {"start": 68.67999999999999, "end": 71.36, "text": " Let's draw an image here."}, {"start": 71.36, "end": 79.64, "text": " And I compress this usually into a vector,"}, {"start": 79.64, "end": 84.0, "text": " and the vector has a couple of dimensions."}, {"start": 84.0, "end": 89.72, "text": " This is a representation of the data."}, {"start": 89.72, "end": 94.32, "text": " And from this representation, what I can do is I can produce"}, {"start": 94.32, "end": 96.52, "text": " an image again."}, {"start": 96.52, "end": 101.16, "text": " And if I train an auto encoder, I will enforce that my model."}, {"start": 101.16, "end": 102.96000000000001, "text": " So both of these are my model."}, {"start": 102.96000000000001, "end": 109.2, "text": " This is called an encoder, and this is called a decoder."}, {"start": 109.2, "end": 115.04, "text": " That what they do is that the final image then looks"}, {"start": 115.04, "end": 117.96000000000001, "text": " like the original image."}, {"start": 117.96000000000001, "end": 119.32000000000001, "text": " This is an auto encoder."}, {"start": 119.32000000000001, "end": 121.2, "text": " Basically, a compression algorithm"}, {"start": 121.2, "end": 124.56, "text": 
" that tries to find representations"}, {"start": 124.56, "end": 128.48000000000002, "text": " such that it can reconstruct the original image again."}, {"start": 128.48000000000002, "end": 132.24, "text": " Here, we go a little further in that we use what's"}, {"start": 132.24, "end": 134.32, "text": " called variational auto encoders."}, {"start": 134.32, "end": 142.88, "text": " So all of these experiments here use variance of the variational auto encoder."}, {"start": 142.88, "end": 147.68, "text": " And what a variational auto encoder."}, {"start": 147.68, "end": 151.04, "text": " Let's skip some here."}, {"start": 151.04, "end": 158.76, "text": " A variational auto encoder is the same thing as an auto encoder, except"}, {"start": 158.76, "end": 160.72, "text": " it's a probabilistic framework."}, {"start": 160.72, "end": 166.48, "text": " So what you do is here on the bottom,"}, {"start": 166.48, "end": 169.04, "text": " you can see an equation that basically"}, {"start": 169.04, "end": 173.0, "text": " is the objective for a VAE."}, {"start": 173.0, "end": 177.64, "text": " And what it does is it says, OK, I have an image."}, {"start": 177.64, "end": 180.0, "text": " Let's say this is my image."}, {"start": 180.0, "end": 183.68, "text": " And I use an encoder like in an auto encoder."}, {"start": 186.52, "end": 189.12, "text": " And that gives me a representation."}, {"start": 189.12, "end": 195.84, "text": " OK, but now I don't use this representation directly to decode,"}, {"start": 195.84, "end": 201.52, "text": " but this representation is simply the parameters"}, {"start": 201.52, "end": 205.6, "text": " from a bunch of distributions."}, {"start": 205.6, "end": 213.52, "text": " So here, let's say I have four, I want four latent factors."}, {"start": 213.52, "end": 216.16, "text": " And the latent factors are basically the latent variables"}, {"start": 216.16, "end": 218.36, "text": " that describe this image."}, {"start": 218.36, "end": 221.44000000000003, "text": " So the images could be images of, let's say, cats."}, {"start": 221.44000000000003, "end": 226.48000000000002, "text": " And four latent factors could be the color of the fur of the cat,"}, {"start": 226.48000000000002, "end": 230.28, "text": " the size of the cat, the position in the image,"}, {"start": 230.28, "end": 237.16000000000003, "text": " and the general lighting of how bright the image is."}, {"start": 237.16000000000003, "end": 240.52, "text": " So these could be four latent factors that"}, {"start": 240.52, "end": 245.04000000000002, "text": " would explain best the image."}, {"start": 245.04000000000002, "end": 248.08, "text": " And from that, the image could be best reconstructed."}, {"start": 248.08, "end": 249.08, "text": " Let's say."}, {"start": 249.08, "end": 251.32000000000002, "text": " So the four latent factors, we consider"}, {"start": 251.32000000000002, "end": 253.68, "text": " as probability distributions."}, {"start": 253.68, "end": 257.24, "text": " So our encoder needs to be our encoder"}, {"start": 257.24, "end": 260.8, "text": " needs to produce eight numbers in this case."}, {"start": 260.8, "end": 261.92, "text": " Eight numbers, why?"}, {"start": 261.92, "end": 266.04, "text": " Because for each of these four distributions,"}, {"start": 266.04, "end": 273.48, "text": " we want a mean and a standard deviation."}, {"start": 273.48, "end": 279.6, "text": " So these eight numbers here, each one or each pair of numbers,"}, {"start": 279.6, "end": 282.6, "text": " one of them is going 
to be the mean."}, {"start": 282.6, "end": 285.52000000000004, "text": " And the other one is going to be the standard deviation"}, {"start": 285.52000000000004, "end": 287.96000000000004, "text": " of a distribution."}, {"start": 287.96000000000004, "end": 294.84000000000003, "text": " And then from these, we're going to construct a distribution."}, {"start": 294.84000000000003, "end": 299.76, "text": " So like, OK, here's the mean, here's the standard deviation."}, {"start": 299.76, "end": 302.84000000000003, "text": " So the distribution somehow looks like this."}, {"start": 302.84, "end": 306.35999999999996, "text": " And then we're going to sample from this distribution."}, {"start": 306.35999999999996, "end": 309.96, "text": " So one sample could be here."}, {"start": 309.96, "end": 311.23999999999995, "text": " One sample could be here."}, {"start": 311.23999999999995, "end": 312.96, "text": " One sample could be here, here."}, {"start": 312.96, "end": 315.84, "text": " So of course, in the middle here, we're going to have more samples."}, {"start": 315.84, "end": 318.47999999999996, "text": " But so the, whereas the autoencoder directly"}, {"start": 318.47999999999996, "end": 321.23999999999995, "text": " uses the encoding to reproduce the image,"}, {"start": 321.23999999999995, "end": 327.35999999999996, "text": " the variational autoencoder, the, the, what the output,"}, {"start": 327.35999999999996, "end": 329.88, "text": " what the encoder produces here is simply"}, {"start": 329.88, "end": 335.92, "text": " parameterization for a, for a distribution."}, {"start": 335.92, "end": 342.0, "text": " And that distribution then is sampled."}, {"start": 342.0, "end": 348.2, "text": " So we're going to take one sample here."}, {"start": 348.2, "end": 349.76, "text": " So from, from each of these."}, {"start": 349.76, "end": 352.96, "text": " So there's going to be multiple of those distributions."}, {"start": 352.96, "end": 355.64, "text": " Because we have eight numbers."}, {"start": 355.64, "end": 361.15999999999997, "text": " We are going to produce four distributions, in particular."}, {"start": 361.15999999999997, "end": 365.59999999999997, "text": " So we're going to sample four different numbers."}, {"start": 365.59999999999997, "end": 371.24, "text": " So we're going to sample a new vector with four, two, three, four."}, {"start": 371.24, "end": 373.59999999999997, "text": " Well, I didn't have eight at the beginning, but nevermind."}, {"start": 373.59999999999997, "end": 376.59999999999997, "text": " So here, this gives us four numbers."}, {"start": 376.59999999999997, "end": 377.64, "text": " But these are samples."}, {"start": 377.64, "end": 379.84, "text": " So these are going to be different every time,"}, {"start": 379.84, "end": 382.28, "text": " even if we feed the same image."}, {"start": 382.28, "end": 389.11999999999995, "text": " And from this, the decoder is going to try to reproduce the image."}, {"start": 389.11999999999995, "end": 396.67999999999995, "text": " And then, again, the images, the end image, and the beginning image"}, {"start": 396.67999999999995, "end": 401.88, "text": " are going to be forced to be close to each other."}, {"start": 401.88, "end": 405.76, "text": " But also, now, since this is a probabilistic framework,"}, {"start": 405.76, "end": 409.15999999999997, "text": " we also kind of need, we need a different loss function."}, {"start": 409.15999999999997, "end": 411.84, "text": " For the autoencoder, you can simply penalize how"}, 
{"start": 411.84, "end": 414.71999999999997, "text": " far the images are in, let's say, L2 norm."}, {"start": 414.71999999999997, "end": 419.64, "text": " But here, we have two distinct parts to the loss term."}, {"start": 419.64, "end": 423.0, "text": " So everything is probabilistic."}, {"start": 423.0, "end": 425.44, "text": " So let's walk through this here."}, {"start": 425.44, "end": 431.4, "text": " The first part of the, so we have two parts of the loss term."}, {"start": 431.4, "end": 439.52, "text": " And here, in particular, Q is, you can see here,"}, {"start": 439.52, "end": 443.68, "text": " it takes the distribution of z condition on x."}, {"start": 443.68, "end": 449.24, "text": " And z will always be our latent representation of the data."}, {"start": 449.24, "end": 454.52, "text": " And x will be the data, itself, the data point."}, {"start": 454.52, "end": 459.96, "text": " So Q will take the data point and produce z."}, {"start": 459.96, "end": 467.59999999999997, "text": " And z, specifically here, what's meant is this thing here."}, {"start": 467.6, "end": 474.04, "text": " This is z, whereas this is x."}, {"start": 474.04, "end": 479.84000000000003, "text": " And this is also, well, this is x till there or something,"}, {"start": 479.84000000000003, "end": 481.48, "text": " whatever is produced by the decoder."}, {"start": 486.0, "end": 492.04, "text": " So basically, what we're going to do is we're"}, {"start": 492.04, "end": 495.64000000000004, "text": " going to punish the KL distance, which is a probabilistic"}, {"start": 495.64000000000004, "end": 497.32000000000005, "text": " distance measure."}, {"start": 497.32, "end": 504.92, "text": " We're going to measure the distance between the distribution"}, {"start": 504.92, "end": 511.84, "text": " of z under x with the prior over z."}, {"start": 511.84, "end": 519.3199999999999, "text": " So P of z here, this here, is the prior distribution over z."}, {"start": 519.3199999999999, "end": 521.6, "text": " And the prior distribution in VIE is often"}, {"start": 521.6, "end": 524.04, "text": " to be taken as a Gaussian."}, {"start": 524.04, "end": 530.8399999999999, "text": " So we'll say, all right, so our kind of default assumption"}, {"start": 530.8399999999999, "end": 536.16, "text": " on the z variables is that they're Gaussian's here."}, {"start": 536.16, "end": 540.24, "text": " And we're going to force, basically,"}, {"start": 540.24, "end": 543.4, "text": " we're going to force the encoder to come up"}, {"start": 543.4, "end": 549.68, "text": " with encodings generally over the data set that are Gaussian's,"}, {"start": 549.68, "end": 554.56, "text": " that are conformal to our prior."}, {"start": 554.56, "end": 558.16, "text": " So here, we say specific prior PZ."}, {"start": 558.16, "end": 562.0799999999999, "text": " I didn't mean to cross that out."}, {"start": 562.0799999999999, "end": 566.4399999999999, "text": " So this second term enforces the encoder"}, {"start": 566.4399999999999, "end": 570.7199999999999, "text": " to produce things that are Gaussian."}, {"start": 570.7199999999999, "end": 578.4799999999999, "text": " It's specifically with our if our prior is, let's say, 0,"}, {"start": 578.48, "end": 583.28, "text": " 0 mean unit variance, Gaussian's, it's going to enforce that."}, {"start": 583.28, "end": 587.24, "text": " The first term here is different."}, {"start": 587.24, "end": 589.9200000000001, "text": " The first term makes the image that"}, {"start": 589.9200000000001, "end": 591.96, 
"text": " has been input to the variational encoder"}, {"start": 591.96, "end": 594.76, "text": " and the image that has been output close together."}, {"start": 594.76, "end": 599.28, "text": " Again, this is a probabilistic loss."}, {"start": 599.28, "end": 602.0, "text": " So what we're going to do here is we're"}, {"start": 602.0, "end": 603.48, "text": " going to take expectations."}, {"start": 603.48, "end": 608.88, "text": " So the care of distance is also an expectation, by the way."}, {"start": 608.88, "end": 613.12, "text": " We're going to take expectations over Px,"}, {"start": 613.12, "end": 616.32, "text": " which is the distribution of the data,"}, {"start": 616.32, "end": 625.08, "text": " and also over Q. And Q is, again, our encoding mechanism."}, {"start": 625.08, "end": 629.12, "text": " And we're simply going to punish, or we're"}, {"start": 629.12, "end": 633.64, "text": " going to here maximize the delog probability, which"}, {"start": 633.64, "end": 636.44, "text": " is equivalent to minimizing the negative log likelihood,"}, {"start": 636.44, "end": 641.44, "text": " which you might be familiar with, of the data given the Z"}, {"start": 641.44, "end": 642.72, "text": " variables."}, {"start": 642.72, "end": 650.76, "text": " So this is an expectation over Q given x."}, {"start": 650.76, "end": 652.48, "text": " So what that means is, basically, we"}, {"start": 652.48, "end": 661.8000000000001, "text": " want the probability of this original data point."}, {"start": 661.8000000000001, "end": 668.08, "text": " We want, here we output x tilde."}, {"start": 668.08, "end": 671.08, "text": " We want this to be close to x here."}, {"start": 671.08, "end": 674.8000000000001, "text": " So what we can say is, we want the probability"}, {"start": 674.8000000000001, "end": 680.36, "text": " that our model outputs x, which"}, {"start": 680.36, "end": 685.08, "text": " has been the original input given this particular Z"}, {"start": 685.08, "end": 698.6800000000001, "text": " that it produced to be high as an expectation of Q of z"}, {"start": 698.6800000000001, "end": 702.08, "text": " given x."}, {"start": 702.08, "end": 706.9200000000001, "text": " So it's a bit cryptic, but it means here, I input x into Q."}, {"start": 706.92, "end": 713.1999999999999, "text": " I get out Z, and when I have the Z, what I produce, here"}, {"start": 713.1999999999999, "end": 715.64, "text": " is what I produce."}, {"start": 715.64, "end": 721.28, "text": " The likelihood that x, the original image, these are the same,"}, {"start": 721.28, "end": 724.7199999999999, "text": " is produced should be high."}, {"start": 724.7199999999999, "end": 727.28, "text": " So that's a variation order encoder."}, {"start": 727.28, "end": 729.7199999999999, "text": " I simply encourage the latent representations"}, {"start": 729.7199999999999, "end": 733.16, "text": " to be close to my prior, which is often Gaussian."}, {"start": 733.16, "end": 738.76, "text": " And I encourage the output to be similar to the input, which"}, {"start": 738.76, "end": 743.68, "text": " I do by encouraging the likelihood that the output is the input."}, {"start": 743.68, "end": 744.6, "text": " All right."}, {"start": 744.6, "end": 745.4, "text": " So cool."}, {"start": 745.4, "end": 747.8399999999999, "text": " So what's that have to do with disentanglement?"}, {"start": 747.8399999999999, "end": 750.48, "text": " Dessentanglement is property that now I"}, {"start": 750.48, "end": 754.36, "text": " would like to have in my model, 
which"}, {"start": 754.36, "end": 762.56, "text": " is that these things here, or we can also focus on these things"}, {"start": 762.56, "end": 766.28, "text": " here, everyone, view it, or these things here."}, {"start": 766.28, "end": 771.1199999999999, "text": " These latent things that my encoder outputs somehow"}, {"start": 771.1199999999999, "end": 776.1199999999999, "text": " give me information about the data in a way that's disentangled."}, {"start": 776.1199999999999, "end": 778.88, "text": " What that means is I've already made an example."}, {"start": 778.88, "end": 782.5999999999999, "text": " This already disentangled, whereas I said, let's say"}, {"start": 782.5999999999999, "end": 786.0, "text": " we have images of a cat of cats."}, {"start": 786.0, "end": 790.92, "text": " And the fur color is going to be one variable,"}, {"start": 790.92, "end": 795.4799999999999, "text": " and the color of the eyes of the cat is going to be another one."}, {"start": 795.4799999999999, "end": 797.8399999999999, "text": " And the position in the image is going"}, {"start": 797.8399999999999, "end": 798.5999999999999, "text": " to be another one."}, {"start": 798.5999999999999, "end": 802.12, "text": " So these are all fairly independent."}, {"start": 802.12, "end": 808.0, "text": " And so if I change some latent factor,"}, {"start": 808.0, "end": 810.36, "text": " I can change some pretty much independently."}, {"start": 810.36, "end": 813.24, "text": " So here this could be the fur color."}, {"start": 813.24, "end": 815.24, "text": " I can change it pretty much independently."}, {"start": 815.24, "end": 818.0799999999999, "text": " And the cat will just have a different fur and so on."}, {"start": 818.08, "end": 821.6800000000001, "text": " What would be non-distantangled representations"}, {"start": 821.6800000000001, "end": 828.24, "text": " would be, let's say, one encodes the fur of the cat,"}, {"start": 828.24, "end": 833.9200000000001, "text": " and the other one encodes the species of cat."}, {"start": 833.9200000000001, "end": 836.72, "text": " Because these are highly entangled."}, {"start": 836.72, "end": 840.96, "text": " So the fur color is highly dependent on what species the cat is."}, {"start": 840.96, "end": 845.1600000000001, "text": " And it's not really so they can't."}, {"start": 845.1600000000001, "end": 847.96, "text": " You can imagine it as these things being correlated."}, {"start": 847.96, "end": 851.24, "text": " But it's slightly different."}, {"start": 851.24, "end": 852.9200000000001, "text": " And there are, there's not an agreement"}, {"start": 852.9200000000001, "end": 855.2, "text": " on what disentanglement means, really."}, {"start": 855.2, "end": 858.48, "text": " We just kind of imagine data is somehow entangled."}, {"start": 858.48, "end": 861.8000000000001, "text": " And we want to kind of pull out these disentangled factors."}, {"start": 861.8000000000001, "end": 865.88, "text": " So what they focus on here and the easiest, the easiest"}, {"start": 865.88, "end": 870.08, "text": " measure here is the following."}, {"start": 870.08, "end": 874.44, "text": " I might want to have some space."}, {"start": 874.44, "end": 878.36, "text": " All right, so these is the measure of disentanglement"}, {"start": 878.36, "end": 882.44, "text": " that is come up with here is the following."}, {"start": 882.44, "end": 883.5600000000001, "text": " It's an assumption."}, {"start": 883.5600000000001, "end": 889.7600000000001, "text": " The assumption is, let's say 
there's data x, right?"}, {"start": 889.7600000000001, "end": 891.44, "text": " We'll call it random error."}, {"start": 891.44, "end": 898.72, "text": " And we know, we assume that this data is generated"}, {"start": 898.72, "end": 908.1600000000001, "text": " by a bunch of latent variables, z1, z2, z3, which are independent,"}, {"start": 908.1600000000001, "end": 912.4, "text": " which means that, and the technical infertuses,"}, {"start": 912.4, "end": 917.24, "text": " that the p of z, which is all of them,"}, {"start": 917.24, "end": 923.72, "text": " can be factorized into p of zi."}, {"start": 923.72, "end": 928.36, "text": " So they are independent."}, {"start": 928.36, "end": 936.16, "text": " And these kind of determine independently the data x."}, {"start": 936.16, "end": 939.52, "text": " Now, what does that disentanglement,"}, {"start": 939.52, "end": 943.12, "text": " when my model has produced a disentanglement representation,"}, {"start": 943.12, "end": 948.32, "text": " means I now have a model, some model m,"}, {"start": 948.32, "end": 954.12, "text": " which is going to give me a representation of x."}, {"start": 954.12, "end": 961.36, "text": " And the representation, as we saw before,"}, {"start": 961.36, "end": 964.8, "text": " could be these things here."}, {"start": 964.8, "end": 967.32, "text": " That's the representation."}, {"start": 967.32, "end": 969.72, "text": " Specifically, what these people do is they say,"}, {"start": 969.72, "end": 974.04, "text": " OK, the mean of the distribution that my encoder gives me,"}, {"start": 974.04, "end": 975.72, "text": " that's the representation of x."}, {"start": 975.72, "end": 985.5600000000001, "text": " All right, so this gives you a representation of x,"}, {"start": 985.5600000000001, "end": 993.88, "text": " from which you then might want to reconstruct x over here, x."}, {"start": 993.88, "end": 997.88, "text": " But so the important thing is, when is the representation"}, {"start": 997.88, "end": 998.64, "text": " disentanglement?"}, {"start": 998.64, "end": 1002.28, "text": " The representation is disentangled in the easiest sense"}, {"start": 1002.28, "end": 1003.72, "text": " if the following holds."}, {"start": 1003.72, "end": 1015.1600000000001, "text": " When I change zi, so I introduce a delta to zi"}, {"start": 1015.1600000000001, "end": 1022.52, "text": " to any of these three, that means that in the representation"}, {"start": 1022.52, "end": 1028.08, "text": " of x, which we're just going to say,"}, {"start": 1028.08, "end": 1030.4, "text": " so if there's three dimensions of z,"}, {"start": 1030.4, "end": 1032.2, "text": " we just assume we know that."}, {"start": 1032.2, "end": 1035.68, "text": " And we also make the representation three dimensional."}, {"start": 1035.68, "end": 1042.64, "text": " Then exactly one factor in this is going to change."}, {"start": 1042.64, "end": 1049.96, "text": " So if I change one factor of the true underlying distribution,"}, {"start": 1049.96, "end": 1053.0800000000002, "text": " which is independently, which all the latent factors"}, {"start": 1053.0800000000002, "end": 1057.04, "text": " are independent, then only one factor in my representation"}, {"start": 1057.04, "end": 1057.68, "text": " changes."}, {"start": 1057.68, "end": 1061.0, "text": " So if that's the case, then kind of I"}, {"start": 1061.0, "end": 1064.16, "text": " can be fairly sure that I've captured the true latent"}, {"start": 1064.16, "end": 1066.4, "text": " structure of the data."}, 
{"start": 1066.4, "end": 1073.6, "text": " If one of the, if I change one of the z here,"}, {"start": 1073.6, "end": 1075.36, "text": " I'll say I change z3."}, {"start": 1075.36, "end": 1081.24, "text": " And only then are three."}, {"start": 1081.24, "end": 1082.52, "text": " So I change z3."}, {"start": 1082.52, "end": 1085.56, "text": " Let's say I've access to the true underlying distribution."}, {"start": 1085.56, "end": 1089.84, "text": " I ask the world to give me a picture of a cat"}, {"start": 1089.84, "end": 1092.9199999999998, "text": " that where the fur color is different."}, {"start": 1092.9199999999998, "end": 1097.56, "text": " And then I put it, I get a data point."}, {"start": 1097.56, "end": 1098.9599999999998, "text": " And then I put it through my model."}, {"start": 1098.9599999999998, "end": 1100.6799999999998, "text": " I get a representation."}, {"start": 1100.6799999999998, "end": 1104.84, "text": " And only from the cat that I had before,"}, {"start": 1104.84, "end": 1108.6, "text": " only one of the factors of my representation changes,"}, {"start": 1108.6, "end": 1109.76, "text": " then I call it disentangled."}, {"start": 1109.76, "end": 1112.32, "text": " Then I can be fairly sure, OK, my representation,"}, {"start": 1112.32, "end": 1114.04, "text": " this dimension of my representation,"}, {"start": 1114.04, "end": 1120.48, "text": " captures the fur color independently of the other factors."}, {"start": 1120.48, "end": 1122.48, "text": " All right, so that's disentanglement."}, {"start": 1122.48, "end": 1126.04, "text": " And you notice it requires actually access"}, {"start": 1126.04, "end": 1130.8, "text": " here to the true distribution of how"}, {"start": 1130.8, "end": 1133.24, "text": " the data is generated by the world."}, {"start": 1133.24, "end": 1136.04, "text": " So this is something you generally don't have,"}, {"start": 1136.04, "end": 1138.36, "text": " but it's a technical notion."}, {"start": 1138.36, "end": 1141.0, "text": " So you can certainly postulated."}, {"start": 1141.0, "end": 1145.6, "text": " And it's a nice framework."}, {"start": 1145.6, "end": 1151.2, "text": " And this paper basically proves that generally learning"}, {"start": 1151.2, "end": 1155.56, "text": " disentangled representation in that way is impossible."}, {"start": 1155.56, "end": 1157.52, "text": " If you don't have some, if you don't"}, {"start": 1157.52, "end": 1160.92, "text": " make some assumptions, some pre-oriented assumptions"}, {"start": 1160.92, "end": 1163.84, "text": " on your data and your model."}, {"start": 1163.84, "end": 1167.2, "text": " So this is a theorem here."}, {"start": 1167.2, "end": 1172.56, "text": " And we see here P is any generative model, which"}, {"start": 1172.56, "end": 1175.48, "text": " admits this factorization."}, {"start": 1175.48, "end": 1177.1200000000001, "text": " That's what we talked about."}, {"start": 1177.1200000000001, "end": 1182.52, "text": " The true underlying generative process"}, {"start": 1182.52, "end": 1188.92, "text": " has independent in its constituents."}, {"start": 1188.92, "end": 1190.72, "text": " That means there's a bunch of latent variables."}, {"start": 1190.72, "end": 1196.16, "text": " They independently from each other produce a data point."}, {"start": 1196.16, "end": 1198.6000000000001, "text": " Access the data observations."}, {"start": 1198.6000000000001, "end": 1204.48, "text": " Then there exists an infinite family of bijective functions,"}, {"start": 1204.48, "end": 1212.64, 
"text": " such that, and this, and this, and this."}, {"start": 1212.64, "end": 1215.96, "text": " OK, well, that means."}, {"start": 1215.96, "end": 1219.1200000000001, "text": " So this thing here basically just"}, {"start": 1219.1200000000001, "end": 1223.3200000000002, "text": " means that the distributions agree."}, {"start": 1223.32, "end": 1230.6, "text": " So the overall distributions, let's say, that's not exactly that,"}, {"start": 1230.6, "end": 1235.84, "text": " but the posterior distributions, let's say,"}, {"start": 1235.84, "end": 1239.72, "text": " the data looks the same."}, {"start": 1239.72, "end": 1244.36, "text": " What comes out of the process looks the same."}, {"start": 1244.36, "end": 1248.84, "text": " So there is functions that transform"}, {"start": 1248.84, "end": 1253.04, "text": " the latent distribution into some other distribution,"}, {"start": 1253.04, "end": 1260.96, "text": " but they look the same in cumulatively."}, {"start": 1260.96, "end": 1261.96, "text": " All right."}, {"start": 1261.96, "end": 1265.28, "text": " And then this part here means you'll"}, {"start": 1265.28, "end": 1272.8799999999999, "text": " see the derivative of fi of u with respect to some uj,"}, {"start": 1272.8799999999999, "end": 1276.44, "text": " which you'll notice i and j are different."}, {"start": 1276.44, "end": 1285.0, "text": " This means that basically the dimensions are entangled."}, {"start": 1285.0, "end": 1291.88, "text": " It means that if I take the derivative of one entry"}, {"start": 1291.88, "end": 1298.44, "text": " in the f in the function output, and I drive it by another entry,"}, {"start": 1298.44, "end": 1300.96, "text": " then I get a non-zero derivative, which"}, {"start": 1300.96, "end": 1308.64, "text": " means that this uj influences fi, which basically means"}, {"start": 1308.64, "end": 1312.96, "text": " that I can produce, I can take the z,"}, {"start": 1312.96, "end": 1314.68, "text": " I can transform it in."}, {"start": 1314.68, "end": 1316.0, "text": " So z is independent."}, {"start": 1316.0, "end": 1321.16, "text": " So it means the i of dimension has no influence on the jth dimension"}, {"start": 1321.16, "end": 1325.04, "text": " of the output, and I can transform it into something"}, {"start": 1325.04, "end": 1328.88, "text": " where that's no longer the case, where the i of and the jth dimension"}, {"start": 1328.88, "end": 1334.96, "text": " very much are entangled or covariate."}, {"start": 1334.96, "end": 1339.72, "text": " So this means I can take the z that's"}, {"start": 1339.72, "end": 1341.88, "text": " kind of everything is independent."}, {"start": 1341.88, "end": 1344.68, "text": " I can transform it into something where everything is dependent."}, {"start": 1344.68, "end": 1346.16, "text": " And they give a nice example here."}, {"start": 1346.16, "end": 1351.44, "text": " So they say, let's say we have Gaussian's in two dimensions."}, {"start": 1351.44, "end": 1354.0800000000002, "text": " So we have one Gaussian here."}, {"start": 1354.0800000000002, "end": 1357.68, "text": " And let me see if I can draw this, one Gaussian here."}, {"start": 1357.68, "end": 1358.68, "text": " Right?"}, {"start": 1358.68, "end": 1361.0, "text": " In two dimensions, they're completely independent."}, {"start": 1361.0, "end": 1365.3200000000002, "text": " What you'll find is that the kind of distribution"}, {"start": 1365.3200000000002, "end": 1369.44, "text": " overall has iso lines like this, right?"}, {"start": 1369.44, "end": 1372.48, 
"text": " It gives you kind of a hump in the middle, two dimension,"}, {"start": 1372.48, "end": 1377.28, "text": " where you can maybe imagine, like a bit of a mountain in the middle."}, {"start": 1377.28, "end": 1380.76, "text": " All right, so this is the kind of output distribution."}, {"start": 1380.76, "end": 1382.76, "text": " If you don't know about the underlying factors,"}, {"start": 1382.76, "end": 1385.88, "text": " you simply see the cumulative distribution, which"}, {"start": 1385.88, "end": 1389.5200000000002, "text": " would be the big p here."}, {"start": 1389.5200000000002, "end": 1393.5200000000002, "text": " All right, now we transform this into with f."}, {"start": 1393.5200000000002, "end": 1397.96, "text": " And f is simply a rotation by 45 degrees."}, {"start": 1397.96, "end": 1402.0800000000002, "text": " So two new axes, this and that."}, {"start": 1402.0800000000002, "end": 1404.96, "text": " And again, our two Gaussian's are going"}, {"start": 1404.96, "end": 1408.0800000000002, "text": " to be transformed, like these."}, {"start": 1408.0800000000002, "end": 1412.1200000000001, "text": " So these are not disentangled anymore,"}, {"start": 1412.1200000000001, "end": 1414.92, "text": " well, in the notion."}, {"start": 1414.92, "end": 1417.44, "text": " I can't say like this, but this is easiest to say."}, {"start": 1417.44, "end": 1420.96, "text": " So these are kind of not that it's rotated"}, {"start": 1420.96, "end": 1423.28, "text": " in terms of the original coordinate system, which"}, {"start": 1423.28, "end": 1426.44, "text": " would go like this."}, {"start": 1426.44, "end": 1428.92, "text": " These very much depend on each other, right?"}, {"start": 1428.92, "end": 1429.2, "text": " Did it?"}, {"start": 1429.2, "end": 1431.76, "text": " JF dimension, the IF dimension, depend on each other."}, {"start": 1431.76, "end": 1433.8000000000002, "text": " Because if I sample from one of the Gaussian's,"}, {"start": 1433.8000000000002, "end": 1436.92, "text": " I need now basically two coordinates"}, {"start": 1436.92, "end": 1439.0800000000002, "text": " to describe where it is."}, {"start": 1439.0800000000002, "end": 1443.68, "text": " Or one isn't just."}, {"start": 1443.68, "end": 1445.64, "text": " So if I sample from one Gaussian, I"}, {"start": 1445.64, "end": 1448.0, "text": " change and need both the coordinates."}, {"start": 1448.0, "end": 1453.4, "text": " But the cumulative distribution, or the that is still"}, {"start": 1453.4, "end": 1457.0800000000002, "text": " going to look exactly the same."}, {"start": 1457.0800000000002, "end": 1463.1200000000001, "text": " So it's basically an isometric company in every direction."}, {"start": 1463.1200000000001, "end": 1467.72, "text": " If I rotate that, it looks exactly the same."}, {"start": 1467.72, "end": 1470.16, "text": " This is the p here."}, {"start": 1470.16, "end": 1473.48, "text": " But now the IF dimension, and JF dimension"}, {"start": 1473.48, "end": 1476.2, "text": " very much influence each other."}, {"start": 1476.2, "end": 1483.6, "text": " And yeah, interestingly, if you now look at this entanglement,"}, {"start": 1483.6, "end": 1492.72, "text": " if I just have, if I now produce data x here, x1."}, {"start": 1492.72, "end": 1497.3600000000001, "text": " And here I produce data x2."}, {"start": 1497.3600000000001, "end": 1502.4, "text": " And both go through my model and give me"}, {"start": 1502.4, "end": 1509.44, "text": " our representation of x1 and the representation of x2."}, 
{"start": 1509.44, "end": 1512.88, "text": " I have, without seeing the underlying structure,"}, {"start": 1512.88, "end": 1517.24, "text": " I have no idea which one of those two it comes from."}, {"start": 1517.24, "end": 1524.3200000000002, "text": " And thereby, I have zero chance, basically it's a lucky guess."}, {"start": 1524.3200000000002, "end": 1525.3600000000001, "text": " Which one it comes from?"}, {"start": 1525.3600000000001, "end": 1526.72, "text": " And there's an infinite family."}, {"start": 1526.72, "end": 1531.64, "text": " So I will never find the true underlying distribution"}, {"start": 1531.64, "end": 1532.44, "text": " here."}, {"start": 1532.44, "end": 1538.5200000000002, "text": " And thereby, I will never be able to satisfy this property"}, {"start": 1538.5200000000002, "end": 1542.44, "text": " that if one of the z changes, then only one of the factors"}, {"start": 1542.44, "end": 1544.1200000000001, "text": " of my representation will change."}, {"start": 1544.1200000000001, "end": 1548.96, "text": " Because if I say, oh, well, obviously, this is the case,"}, {"start": 1548.96, "end": 1551.0800000000002, "text": " then I'm going to make a different model."}, {"start": 1551.0800000000002, "end": 1553.4, "text": " And if I say, well, this is the case,"}, {"start": 1553.4, "end": 1555.2800000000002, "text": " I'm going to make a different model."}, {"start": 1555.2800000000002, "end": 1556.3600000000001, "text": " I don't know which one it is."}, {"start": 1556.3600000000001, "end": 1557.68, "text": " So I have to choose one."}, {"start": 1557.68, "end": 1558.8000000000002, "text": " And it could be the other one."}, {"start": 1558.8, "end": 1561.8799999999999, "text": " So I'm bound to be wrong, in this case, 50% of the time."}, {"start": 1561.8799999999999, "end": 1564.76, "text": " But if it's an infinite family, I'm bound to be wrong."}, {"start": 1564.76, "end": 1567.08, "text": " Every time, basically."}, {"start": 1567.08, "end": 1570.32, "text": " So that's what the theorem basically says."}, {"start": 1570.32, "end": 1574.6, "text": " I can't cite on the true underlying distribution."}, {"start": 1574.6, "end": 1577.3999999999999, "text": " There's an infinite family that transforms it"}, {"start": 1577.3999999999999, "end": 1580.1599999999999, "text": " into it, it transforms every distribution"}, {"start": 1580.1599999999999, "end": 1583.48, "text": " into some other distribution that has basically"}, {"start": 1583.48, "end": 1586.1599999999999, "text": " a complete opposite properties of entanglement,"}, {"start": 1586.16, "end": 1590.24, "text": " and I need to choose one, and I will never choose the right one,"}, {"start": 1590.24, "end": 1592.4, "text": " because I'm not that lucky."}, {"start": 1592.4, "end": 1595.52, "text": " And thereby, I can't do representation learning."}, {"start": 1595.52, "end": 1597.92, "text": " That's disentangled."}, {"start": 1597.92, "end": 1598.52, "text": " All right."}, {"start": 1598.52, "end": 1601.6000000000001, "text": " So that's the main claim of the paper."}, {"start": 1601.6000000000001, "end": 1606.4, "text": " And there is a lot of experiments here."}, {"start": 1606.4, "end": 1611.2, "text": " So what the paper also does is they produce something data sets"}, {"start": 1611.2, "end": 1614.0800000000002, "text": " and they test a lot of hard architectures."}, {"start": 1614.08, "end": 1616.9199999999998, "text": " Basically, they say just because it's theoretically impossible,"}, {"start": 
1616.9199999999998, "end": 1619.6399999999999, "text": " it's not impractical, because we can actually"}, {"start": 1619.6399999999999, "end": 1622.24, "text": " make these underlying assumptions."}, {"start": 1622.24, "end": 1626.3999999999999, "text": " Like we can make some assumptions on the data and then we"}, {"start": 1626.3999999999999, "end": 1630.56, "text": " can attempt to do disentanglement learning."}, {"start": 1630.56, "end": 1635.36, "text": " So they do these data sets and they test different VIEs"}, {"start": 1635.36, "end": 1636.6399999999999, "text": " architectures on it."}, {"start": 1636.6399999999999, "end": 1642.4399999999998, "text": " And they basically establish where more work should go."}, {"start": 1642.44, "end": 1644.56, "text": " So that's kind of the rest of the paper."}, {"start": 1644.56, "end": 1647.56, "text": " I encourage you to look at the rest of the paper."}, {"start": 1647.56, "end": 1650.64, "text": " I just wanted to give a quick introduction to VIEs"}, {"start": 1650.64, "end": 1653.6000000000001, "text": " and to disentanglement, disentangled representation"}, {"start": 1653.6000000000001, "end": 1654.68, "text": " learning."}, {"start": 1654.68, "end": 1659.68, "text": " I wasn't technically correct in every detail,"}, {"start": 1659.68, "end": 1661.76, "text": " but I hope that it's enough."}, {"start": 1661.76, "end": 1681.72, "text": " And that's fun."}] |
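The two-Gaussian rotation example discussed in the segments above can be checked numerically: rotating an isotropic 2D Gaussian leaves the observed distribution unchanged while entangling the coordinates, which is exactly the unidentifiability argument. Below is a small illustrative sketch (my own, not code from the paper or the video).

```python
# Numerical check of the rotation example: an isotropic Gaussian looks identical
# after a 45-degree rotation, even though each rotated coordinate mixes both factors.
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal((100_000, 2))           # independent latent factors

theta = np.pi / 4                                # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
u = z @ R.T                                      # "entangled" representation f(z)

# The observed distributions are indistinguishable (both covariances are ~ identity) ...
print(np.round(np.cov(z.T), 2))
print(np.round(np.cov(u.T), 2))
# ... but du_0/dz_1 = -sin(theta) != 0, so changing one latent factor changes
# both output dimensions: the representation is no longer disentangled.
print(R)
```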
Yannic Kilcher | https://www.youtube.com/watch?v=dPsXxLyqpfs | World Models | Authors: David Ha, Jürgen Schmidhuber
Abstract:
We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted from the world model as inputs to an agent, we can train a very compact and simple policy that can solve the required task. We can even train our agent entirely inside of its own hallucinated dream generated by its world model, and transfer this policy back into the actual environment.
https://arxiv.org/abs/1803.10122 | Hi, today we're looking at World Models by David Ha and Jürgen Schmidhuber. This is a paper concerned with reinforcement learning, and especially with the problem where you have an environment that you interact with and need to learn to act in, but it is, for example, very expensive to always query the environment. Say you have a robot that needs to do something in the world: having the robot execute something and then observing the outcome is quite expensive, it costs electricity and so on. So you would like to minimize how many times this happens. Let me search for a good picture. We're concerned with problems like this one. This is a race car simulator; there's an OpenAI Gym environment for that. The other one they use is the so-called Doom experiment where, as you look at this, there are a couple of monsters shooting fireballs at you, and the task is simply to avoid the fireballs. The entire point of the paper is that I don't actually need to interact with the environment in order to learn to act in it. I can learn a model of the environment and then learn using that model. So basically I can learn how the environment works and then use my imagination of the environment, my model, in order to learn from that. I don't have to interact with the real environment anymore. So how do they do this? They do it in multiple stages. The first thing they do is collect a bunch of samples from the environment: they go to the environment, run a random policy, and collect a bunch of samples. The process is outlined somewhere here, we saw it before: collect 10,000 rollouts from a random policy. Next, they train a VAE to learn to compress the environment. That's where that comes in. This is all done in stages, not end to end. The VAE is simply a model that takes, in this case, a video frame and sends it through an encoder neural network to obtain what's called a latent representation, which is a much smaller dimensional representation. So if the image is 64 by 64 pixels, the latent code could be as little as a hundred or even ten dimensional. You see that there's quite a bit of compression going on. This is a variational autoencoder. It's not really important here that it's variational; the difference is that the variational autoencoder is a stochastic process, whereas the regular autoencoder isn't, and they introduce the stochasticity again later. So it's not particularly important, but it is a variational autoencoder, which means they obtain a latent representation that defines a distribution. They sample from this latent distribution and then feed the sample to the decoder, and the decoder gives back what it thinks the encoder encoded. So the decoder tries to reconstruct, as closely as possible, the original frame that was given to the encoder. Of course it can't do so exactly, because we've compressed it so much into this lower dimensional representation, so it does its best effort. What you hope to achieve with this is that the decoder learns, for example, that the ceiling right here is always gray. You shouldn't actually need to encode this in your z if it's always gray; the decoder should learn this by itself.
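To make the compression step concrete, here is a minimal sketch of a convolutional VAE for 64x64 frames with a small latent vector, written in PyTorch. The layer sizes, the 32-dimensional latent, and the loss weighting are illustrative assumptions for this sketch, not necessarily the exact architecture or training setup used by the authors.

```python
# Minimal sketch of a frame VAE (assumed sizes; not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameVAE(nn.Module):
    def __init__(self, z_dim=32):
        super().__init__()
        # Encoder: 3x64x64 frame -> flattened feature vector
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.mu = nn.Linear(256 * 2 * 2, z_dim)
        self.logvar = nn.Linear(256 * 2 * 2, z_dim)
        # Decoder: latent z -> reconstructed 3x64x64 frame
        self.dec_in = nn.Linear(z_dim, 1024)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(1024, 128, 5, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 5, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 6, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 6, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z from the predicted Gaussian.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        x_rec = self.dec(self.dec_in(z).view(-1, 1024, 1, 1))
        return x_rec, mu, logvar

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = F.mse_loss(x_rec, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```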
So your hope is that these z's, the latent representations, will simply end up containing just the information that differs between the individual frames, which here, I guess, would be the fireballs coming in and your position relative to them. That's what's changing if you think about this environment. So your hope is that the latent representation captures only that, whereas all the static parts that are irrelevant or never change are captured by the encoder and decoder architecture itself. It's important to note that the encoder and decoder are obviously always the same for all frames, whereas there is one z representation per frame. Each frame will give you a different z, and you can imagine how that's going to be useful. So they train this on a randomly collected sample of the environment until they're confident they have a good compression model of the environment. What they do next is use this in order to train an RNN. So again, they have their compression model of the environment. Now they take these z states, you see here, here, here, here, that they get from it, and they train how these latent representations evolve over time, with an RNN that goes over time. The RNN will always predict what the next state of the environment is going to be, but importantly, compared to environment models that we've discussed before, for example in the imagination-augmented agents paper, where we always try to directly predict the pixels of the future frame, here the environment model is over the latent representation. Of course this means it operates in a much smaller space, so if your compression model is good, this should be much easier to learn than a full end-to-end environment model. So this model learns how your latent states evolve over time, given your actions. You can imagine the z being an abstract representation of your state; this, together with your action, goes into the RNN, and the RNN predicts the next latent representation. And there is what's called a temperature parameter to control the stochasticity. I've already told you there is stochasticity built into this. The RNN will simply output some vector of what it thinks the next thing is going to be, but they don't use this directly as the next state. Instead they parameterize a mixture of Gaussian distributions, coupled with a decoder here, in order to give a distribution over the next latent state, and they control the amount of randomness with the temperature parameter. They argue that this comes in handy later. So, all right, what do we have? We have a system that can compress the environment into what we would call its essential part: from every frame we extract what's important in that frame. Next we have a model that can predict, given a state and an action, what the next latent state is going to be. So technically we now have an environment model: given a state and a policy, we can simply use this model to roll forward. The last component is the actual policy. And the actual policy here, as you can see, is in their case simply a linear model.
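As a rough sketch of that mixture-density output, here is how an RNN hidden state could parameterize a mixture of Gaussians over the next latent vector, and one common way a temperature could be applied when sampling. The layer sizes, the number of mixture components, and the exact temperature scaling are assumptions for illustration, not necessarily what the authors implemented.

```python
# Sketch of a mixture-density head over the next latent z (illustrative sizes).
import torch
import torch.nn as nn

class MDNHead(nn.Module):
    def __init__(self, hidden_dim=256, z_dim=32, n_mix=5):
        super().__init__()
        self.z_dim, self.n_mix = z_dim, n_mix
        # For each latent dimension: mixture logits, means, and log-stds.
        self.out = nn.Linear(hidden_dim, 3 * n_mix * z_dim)

    def sample_next_z(self, h, temperature=1.0):
        # Split the linear output into the three mixture parameter groups.
        logits, mu, log_sigma = self.out(h).view(-1, 3, self.n_mix, self.z_dim).unbind(dim=1)
        # One way to apply a temperature: flatten the mixture weights and widen the Gaussians.
        logits = logits / temperature
        sigma = torch.exp(log_sigma) * (temperature ** 0.5)
        # Pick one mixture component per latent dimension, then sample from it.
        comp = torch.distributions.Categorical(logits=logits.transpose(1, 2)).sample()  # (batch, z_dim)
        comp = comp.unsqueeze(1)                                                        # (batch, 1, z_dim)
        mu_sel = torch.gather(mu, 1, comp).squeeze(1)
        sigma_sel = torch.gather(sigma, 1, comp).squeeze(1)
        return mu_sel + sigma_sel * torch.randn_like(mu_sel)
```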
Coming back to the controller: the linear model takes the z, which is the latent representation of the current state, and the h, which is the current hidden state of the RNN that models the environment over time, and a simple linear function of the two gives you the action probabilities, or I guess the logits of the actions. So it's a really, really simple controller over these things. They do this to show that the main part of the work is being done by the environment model: given the environment model, you only need very few parameters to then learn a policy. Here is what I just said as a diagram. The observation goes into the compression of the VAE, the latent representation of that goes into the RNN together with the hidden state from the last step, and this outputs a new hidden state, which goes into the controller. We also take this z directly into the controller, and from these two we compute an action. Now we have a choice: the action can go to the environment, which gives you the next observation, and at the same time, since you need to update your RNN, it also goes back into the RNN, because the RNN needs to predict the next hidden state. The thing is, we can also leave away the path through the environment, which means we simply take our RNN, imagine the next latent representation, put it through the decoder part of the VAE, and use that as the observation. I hope this makes sense; it's rather intuitive: you have a model of the environment, so you can simply use it instead of the real environment. There's a bit of pseudocode here, and they do a bunch of experiments. They show here how their compression works: this is the real frame, and this is the reconstructed frame, which captures the essence of what's going on. I actually want to go down here, to the VizDoom experiment. But first, in the car racing experiment, what they do is learn this entire thing and then learn a policy in the real world, so in the environment, using this model up here, this procedure where they always go to the environment. Here is the exact experimental setup: first they collect, again, rollouts from a random policy, they train the VAE, they train the RNN, and then they learn the controller using the entire model, but in the real world. So they always interact with the environment, but because they use their latent representation of the observation, and not directly the observation, they get a higher score. And also, the policy that they learn in the real environment transfers to the environment model: if they use the imagined model as an environment, it also performs well. In the next experiment, they try to do this the other way around. They try to learn only using their model of the environment, and then see whether or not the policy transfers to the true environment. That's what they do here: they collect, again, a sample from the environment, they train the VAE, they train the RNN, and then they simply use this virtual environment, as they call it, in order to learn a policy. At the end, they use the learned policy on the actual environment. And given the results, you see... there we go.
You see, the best it does, I would say, is about here, where the actual score, as you can see in this and also in this setting, is higher than the previous best algorithm in the OpenAI Gym when you go from virtual to actual. So what this means is: yes, you can train using this imagined model and it will actually transfer, but there's a crucial thing, and that is this temperature parameter here. You can see that a lot of the time they don't manage to reach a good score if this parameter is wrong. What does this parameter do? It controls, as we discussed, the stochasticity of the model. The environment model doesn't directly imagine a single future state, it imagines a distribution over future states, and the higher this parameter, the more stochastic this distribution is, basically the more entropy you have over these future states. We've seen this temperature parameter before. This is important, and they go to some length explaining why on this entire page here that we skipped, under "Cheating the world model". Basically they say: if you have a model of the environment that is wrong, and you train a policy on it, it's probably going to find a policy that exploits the wrongness of this model. So you might be able to walk through walls or fly or ignore the fireballs, or find that if you stand next to a wall in your imagination, you never get hit. Something like this, which isn't true in the real world, and the policy will exploit that. To counter this, they simply turn up this temperature parameter, giving them a more stochastic procedure, meaning they imagine a lot of different futures and train their policy on all of them, or in expectation over a sample of them. This means that if the environment model is wrong, it doesn't exactly correct for it, but you still sample different futures, so if there is one wrong future, you still have the other ones to punish the policy if it tries to exploit this one mistake. At least that's the reasoning behind it. So that's how they do this. You can interact with their trained environment models online somehow. They also give an outlook on what they would like to have: instead of collecting the data for the environment model from random rollouts, they would train the model, use the resulting policy to collect more data, train a better environment model with that data, use it to train the policy further, and so on, in a stepwise fashion. But they don't actually do this, they simply describe it. The rest of the paper is a bit of related work and discussion. It's written very prosaically, kind of different from what you're used to if you read a lot of these papers. But yeah, I hope you now know what's going on, and see you next time.
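To make the "learn inside the imagined model" idea concrete, here is a rough sketch of a rollout where the learned model replaces the real environment and the controller is the simple linear policy over [z, h] described above. The helper names, the tanh squashing, and the default temperature value are hypothetical, chosen only for illustration.

```python
# Sketch of a "dream" rollout: the learned model stands in for the real environment.
import numpy as np

def linear_controller(params, z, h):
    """Action = tanh(W [z; h] + b): the tiny linear policy over latent and hidden state."""
    W, b = params
    return np.tanh(W @ np.concatenate([z, h]) + b)

def dream_rollout(params, world_model_step, z0, h0, steps=100, temperature=1.15):
    """Roll forward using only the learned model; returns the imagined latent trajectory.

    world_model_step is assumed to sample (z_next, h_next) from the MDN-RNN at the
    given temperature; a reward / done prediction could be returned the same way.
    """
    z, h, trajectory = z0, h0, []
    for _ in range(steps):
        a = linear_controller(params, z, h)
        z, h = world_model_step(z, a, h, temperature)
        trajectory.append((z, a))
    return trajectory
```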
| [{"start": 0.0, "end": 7.12, "text": " Hi, today we're looking at world models by David Ha and J\u00fcrgen Schmiduber."}, {"start": 7.12, "end": 12.76, "text": " This is a paper that's concerned with reinforcement learning and especially with the problem"}, {"start": 12.76, "end": 20.080000000000002, "text": " of, say, you have an environment that you interact with and you can't need to learn to act"}, {"start": 20.080000000000002, "end": 26.68, "text": " in it, but it could be, for example, very expensive to always query the environment."}, {"start": 26.68, "end": 35.72, "text": " So let's say you have like a robot and it needs to do something in the world and you kind"}, {"start": 35.72, "end": 43.28, "text": " of have a robot execute something and then observe it is quite expensive, called selectricity"}, {"start": 43.28, "end": 44.28, "text": " and so on."}, {"start": 44.28, "end": 50.84, "text": " So you would like to sort of minimize how many times this happens."}, {"start": 50.84, "end": 55.92, "text": " So here, searching for a good picture."}, {"start": 55.92, "end": 59.800000000000004, "text": " We're concerned with problems, for example, like this."}, {"start": 59.800000000000004, "end": 62.88, "text": " This is a race car simulator simulator."}, {"start": 62.88, "end": 66.88, "text": " There's an open AI gym environment for that."}, {"start": 66.88, "end": 76.64, "text": " The other one that they use is so called a doom experiment where as you look at this,"}, {"start": 76.64, "end": 80.72, "text": " there's a couple of monsters and they're shooting fireballs at you and the task is just"}, {"start": 80.72, "end": 83.52000000000001, "text": " to kind of avoid the fireballs."}, {"start": 83.52, "end": 89.84, "text": " So the entire point of the paper is that I don't actually need to interact with the environment"}, {"start": 89.84, "end": 91.03999999999999, "text": " or to learn it."}, {"start": 91.03999999999999, "end": 98.96, "text": " I can simply kind of learn a model of the environment and then learn using that model."}, {"start": 98.96, "end": 104.8, "text": " So basically I can learn how the environment works and then simply use my imagination of"}, {"start": 104.8, "end": 109.19999999999999, "text": " the environment, my model, in order to learn from that."}, {"start": 109.2, "end": 114.8, "text": " So I don't have to interact with the real environment anymore."}, {"start": 114.8, "end": 117.28, "text": " So how do they do this?"}, {"start": 117.28, "end": 120.0, "text": " They do it in multiple stages."}, {"start": 120.0, "end": 128.96, "text": " Here, first thing they do is they collect a bunch of samples from the environment."}, {"start": 128.96, "end": 135.72, "text": " So they go to the environment, they simply do a random policy and then they collect a"}, {"start": 135.72, "end": 136.72, "text": " bunch of samples."}, {"start": 136.72, "end": 142.56, "text": " I think the process is outlined not here somewhere."}, {"start": 142.56, "end": 143.6, "text": " We saw it before."}, {"start": 143.6, "end": 147.76, "text": " Here, collect 10,000 rollouts from a random policy."}, {"start": 147.76, "end": 155.84, "text": " Next, they train a VAE here to kind of learn the environment."}, {"start": 155.84, "end": 158.12, "text": " So that's where that comes in."}, {"start": 158.12, "end": 161.64, "text": " This is all done in stages, not end to end."}, {"start": 161.64, "end": 170.16, "text": " The VAE is simply a model that takes a, takes in this case a video frame here and 
it sends"}, {"start": 170.16, "end": 174.83999999999997, "text": " it through an encoder neural network to obtain a, what's called a latent representation,"}, {"start": 174.83999999999997, "end": 178.04, "text": " which is a much smaller dimensional representation."}, {"start": 178.04, "end": 186.88, "text": " So if the image is like 64 by 64 pixels, then the latent code could be as little as like"}, {"start": 186.88, "end": 189.88, "text": " a hundred or even ten dimensional."}, {"start": 189.88, "end": 193.35999999999999, "text": " So you see that there's quite a bit of compression going on."}, {"start": 193.35999999999999, "end": 195.79999999999998, "text": " This is a variational auto encoder."}, {"start": 195.79999999999998, "end": 203.35999999999999, "text": " It's not really important here that it's variational since the difference is the variational auto"}, {"start": 203.35999999999999, "end": 209.88, "text": " encoder is kind of a stochastic process, whereas the regular auto encoder isn't."}, {"start": 209.88, "end": 213.07999999999998, "text": " But they introduce the castacity later again."}, {"start": 213.07999999999998, "end": 219.76, "text": " So it's not particularly important, but so it's a variational auto encoder."}, {"start": 219.76, "end": 226.28, "text": " Which means they obtain a latent representation that defines distribution over outputs."}, {"start": 226.28, "end": 233.6, "text": " So they send this, they sample from this latent distribution that they obtain and then they"}, {"start": 233.6, "end": 242.84, "text": " feed this to the decoder and the decoder kind of gives back what it thinks, the encoder"}, {"start": 242.84, "end": 244.23999999999998, "text": " encoded."}, {"start": 244.24, "end": 251.20000000000002, "text": " So the decoder tries to reconstruct as close as possible this original frame that was given"}, {"start": 251.20000000000002, "end": 253.36, "text": " to the encoder."}, {"start": 253.36, "end": 257.74, "text": " But of course it can't because we've compressed it so much to this lower dimensional"}, {"start": 257.74, "end": 259.08, "text": " representation here."}, {"start": 259.08, "end": 261.8, "text": " So it kind of does its best effort."}, {"start": 261.8, "end": 267.08, "text": " So what you hope to achieve with this is that kind of the decoder learns, for example,"}, {"start": 267.08, "end": 270.16, "text": " there's always here, this is the ceiling right here."}, {"start": 270.16, "end": 272.24, "text": " It's always gray."}, {"start": 272.24, "end": 279.2, "text": " So basically you shouldn't actually need to encode this in your, in your Z if it's always"}, {"start": 279.2, "end": 280.2, "text": " gray."}, {"start": 280.2, "end": 283.32, "text": " The decoder should learn this by itself."}, {"start": 283.32, "end": 289.68, "text": " So your hope is that the disease, the latent representation will simply end up containing"}, {"start": 289.68, "end": 296.56, "text": " just the information that's kind of different or between the, between the individual frames."}, {"start": 296.56, "end": 305.48, "text": " Which here I guess would be kind of the fireballs coming in your position relative to them."}, {"start": 305.48, "end": 307.92, "text": " That's what's changing if you think about this environment."}, {"start": 307.92, "end": 311.92, "text": " So your hope is that the latent representation captures only that."}, {"start": 311.92, "end": 318.84000000000003, "text": " Whereas all the static parts that are irrelevant or never change are 
kind of captured by the"}, {"start": 318.84000000000003, "end": 322.76, "text": " encoder and the decoder architecture by itself."}, {"start": 322.76, "end": 328.59999999999997, "text": " So yeah, it's important to note the encoder and decoder are obviously always the same for"}, {"start": 328.59999999999997, "end": 329.59999999999997, "text": " all frames."}, {"start": 329.59999999999997, "end": 334.44, "text": " Whereas the Z representation, of course, is there is one per frame."}, {"start": 334.44, "end": 339.88, "text": " So each frame will give you a different Z and that's, yeah, so you can imagine how that"}, {"start": 339.88, "end": 344.36, "text": " works or how that's going to be useful."}, {"start": 344.36, "end": 352.44, "text": " So they train this on like a random randomly collected sample of the environment until"}, {"start": 352.44, "end": 355.96, "text": " they're confident they now have a good model of the environment."}, {"start": 355.96, "end": 363.92, "text": " And then what they do next is they use this in order to train an RNN."}, {"start": 363.92, "end": 373.24, "text": " So again, they kind of have their compression model of the environment."}, {"start": 373.24, "end": 376.12, "text": " What they do now is they use these Z states."}, {"start": 376.12, "end": 384.44, "text": " You see here, here, here, here, that they get from that and they train how these latent"}, {"start": 384.44, "end": 386.76, "text": " representations evolve over time."}, {"start": 386.76, "end": 390.92, "text": " So with an RNN here, it goes over time."}, {"start": 390.92, "end": 400.56, "text": " So the RNN will always kind of predict what's the next state of the environment going to"}, {"start": 400.56, "end": 407.24, "text": " be, but importantly, maybe compared to environment models that we've discussed before in the,"}, {"start": 407.24, "end": 413.12, "text": " for example, imagination augmented agent paper."}, {"start": 413.12, "end": 419.8, "text": " There we always try to directly predict the future pixels, so to say, of the future frame."}, {"start": 419.8, "end": 425.08, "text": " Here, the environment model is over the latent representation."}, {"start": 425.08, "end": 429.64, "text": " Of course, this means that the, this is a much smaller space."}, {"start": 429.64, "end": 438.0, "text": " So if your compression model is good, then this should be much easier to learn than say,"}, {"start": 438.0, "end": 440.96, "text": " like a full end to end environment model."}, {"start": 440.96, "end": 449.64, "text": " So this model learns how your latent states evolve over time, given your actions."}, {"start": 449.64, "end": 455.0, "text": " So you can imagine the Z being an abstract representation of your state and then your"}, {"start": 455.0, "end": 460.0, "text": " action and then this goes into the RNN and the RNN will predict what's the next latent"}, {"start": 460.0, "end": 463.16, "text": " representation."}, {"start": 463.16, "end": 469.2, "text": " And there is a, what's called a temperature parameter to control the stochasticity."}, {"start": 469.2, "end": 476.72, "text": " I've already told you this, this, there is a stochasticity built into this."}, {"start": 476.72, "end": 480.88, "text": " So the RNN will simply output like some, some vector."}, {"start": 480.88, "end": 484.56, "text": " What it thinks is the next thing going to be."}, {"start": 484.56, "end": 490.32, "text": " And they don't use this directly as an X-State, but they parameterize a kind of a mixture"}, 
{"start": 490.32, "end": 496.96, "text": " of Gaussian distributions coupled with a decoder here in order to give a random distribution"}, {"start": 496.96, "end": 499.28000000000003, "text": " over the next state."}, {"start": 499.28000000000003, "end": 503.92, "text": " And they control the amount of randomness with the temperature parameter."}, {"start": 503.92, "end": 507.0, "text": " They argue that this comes in handy later."}, {"start": 507.0, "end": 508.32, "text": " So all right, so what do we have?"}, {"start": 508.32, "end": 517.28, "text": " We have a system that can compress the environment into what we would call an essential part, right?"}, {"start": 517.28, "end": 521.76, "text": " Every frame we extract the, what's important in that frame."}, {"start": 521.76, "end": 529.92, "text": " Then next we have a model that can predict given a state in the action, what's the next"}, {"start": 529.92, "end": 535.92, "text": " state going to be, the next latent state."}, {"start": 535.92, "end": 539.04, "text": " So technically we now have an environment model, right?"}, {"start": 539.04, "end": 542.52, "text": " Given a state, we can simply, we're going to state in a policy."}, {"start": 542.52, "end": 548.4399999999999, "text": " We can simply use this model to roll forward."}, {"start": 548.4399999999999, "end": 552.5999999999999, "text": " So the last component is the actual policy."}, {"start": 552.5999999999999, "end": 560.16, "text": " And the actual policy here, as you can see, is in their case, simply a linear model."}, {"start": 560.16, "end": 568.8, "text": " The linear model will take the Z, which is the latent representation of the current state"}, {"start": 568.8, "end": 578.28, "text": " and the H, which is the current state of the RNN that models the environment over time."}, {"start": 578.28, "end": 587.1999999999999, "text": " And it will, it simply is a linear, a linear function of the two gives you the action probabilities"}, {"start": 587.1999999999999, "end": 590.04, "text": " or I guess the logits of the actions."}, {"start": 590.04, "end": 593.92, "text": " So it's a really, really simple controller over these things."}, {"start": 593.92, "end": 599.48, "text": " And they do this in order kind of to show that the main part of the work is being done"}, {"start": 599.48, "end": 600.9599999999999, "text": " by this environment model."}, {"start": 600.9599999999999, "end": 606.9599999999999, "text": " And given the environment model, you only need very few parameters basically to then learn"}, {"start": 606.9599999999999, "end": 608.8, "text": " a policy."}, {"start": 608.8, "end": 613.28, "text": " Here is the kind of what I said in a diagram."}, {"start": 613.28, "end": 619.4399999999999, "text": " So the observation goes into the, the compression of the VAE, the latent representation of that"}, {"start": 619.44, "end": 625.1600000000001, "text": " goes into the RNN together with the hidden state from the last step."}, {"start": 625.1600000000001, "end": 631.84, "text": " And this will output a new hidden state, which goes here into the controller."}, {"start": 631.84, "end": 636.5600000000001, "text": " And we also directly take this Z into the controller."}, {"start": 636.5600000000001, "end": 643.36, "text": " And then from these two, we perform an action, which now we have a choice."}, {"start": 643.36, "end": 647.6, "text": " It could go to the environment, right, give you the next observation."}, {"start": 647.6, "end": 656.76, "text": " But also, or at 
the same time, since you kind of need to update your RNN, it can go here."}, {"start": 656.76, "end": 663.88, "text": " And update your RNN because it will need to predict the next hidden state."}, {"start": 663.88, "end": 669.48, "text": " The thing is, we can also now leave away this path, which means we can simply take our"}, {"start": 669.48, "end": 681.88, "text": " RNN and kind of imagine the next latent representation, put it through the decoder part of the VAE,"}, {"start": 681.88, "end": 686.52, "text": " and use that as an observation."}, {"start": 686.52, "end": 688.52, "text": " I hope this makes sense."}, {"start": 688.52, "end": 689.52, "text": " It's rather intuitive, right?"}, {"start": 689.52, "end": 691.2, "text": " You have a model of the environment."}, {"start": 691.2, "end": 695.72, "text": " You can simply use this instead of the real environment."}, {"start": 695.72, "end": 702.0400000000001, "text": " So there's a bit of pseudo code here, and they do a bunch of experiments, right?"}, {"start": 702.0400000000001, "end": 705.76, "text": " So we're primarily interested."}, {"start": 705.76, "end": 710.44, "text": " So they say they see here how our compression works."}, {"start": 710.44, "end": 714.0, "text": " This is the real frame, and this is the reconstructed frame."}, {"start": 714.0, "end": 720.0, "text": " Kind of looks, you know, captures the essence of what's going on."}, {"start": 720.0, "end": 729.64, "text": " I actually want to go down here, the vis-dume experiment."}, {"start": 729.64, "end": 737.76, "text": " So what they do here in the core-facing experiment is they kind of learn this entire thing, right?"}, {"start": 737.76, "end": 743.48, "text": " And then they learn a policy in the real world."}, {"start": 743.48, "end": 747.92, "text": " So in the environment, using this model up here, this procedure, where they always go"}, {"start": 747.92, "end": 752.36, "text": " to the environment, and here is the exact experiment set up."}, {"start": 752.36, "end": 759.64, "text": " So first they collect, again, rollouts for random policy, they train the VAE, they train"}, {"start": 759.64, "end": 771.76, "text": " the RNN, and then they learn the latent, sorry, they learn the controller using the entire"}, {"start": 771.76, "end": 775.9599999999999, "text": " model, but in kind of the real world."}, {"start": 775.96, "end": 782.2, "text": " So they always interact with the environment, but because they also have their kind of latent"}, {"start": 782.2, "end": 787.72, "text": " representation of the observation, and not directly the observation, they get a higher"}, {"start": 787.72, "end": 788.72, "text": " score."}, {"start": 788.72, "end": 797.88, "text": " And also, the policy that they use in the real environment transfers to the environment"}, {"start": 797.88, "end": 798.88, "text": " model."}, {"start": 798.88, "end": 804.9200000000001, "text": " So the policy that I learn in the true environment, it transfers to the imagined, so if they"}, {"start": 804.92, "end": 809.8399999999999, "text": " use the imagined model as an environment, it also performs well."}, {"start": 809.8399999999999, "end": 813.0799999999999, "text": " In the next experiment, they're going to try to do this the other way around."}, {"start": 813.0799999999999, "end": 820.12, "text": " They're going to try to learn only using their model of the environment and then see whether"}, {"start": 820.12, "end": 825.28, "text": " or not the policy transfers to the true environment."}, 
{"start": 825.28, "end": 826.52, "text": " So that's what they do here."}, {"start": 826.52, "end": 833.8399999999999, "text": " They collect, collect, again, a sample from the environment, they train the VAE, they train"}, {"start": 833.84, "end": 844.52, "text": " the RNN, and then they simply use this virtual environment, what they call it, in order"}, {"start": 844.52, "end": 845.52, "text": " to learn a policy."}, {"start": 845.52, "end": 852.84, "text": " And at the end, they try to transfer, use the learn policy on the actual environment."}, {"start": 852.84, "end": 861.84, "text": " And given the results, you see your..."}, {"start": 861.84, "end": 865.96, "text": " There we go."}, {"start": 865.96, "end": 869.96, "text": " So they..."}, {"start": 869.96, "end": 881.32, "text": " You see, the kind of best it does, I would say, is about here where the actual score is,"}, {"start": 881.32, "end": 889.24, "text": " you can see in this and also in this setting, is higher than the kind of previous best algorithm"}, {"start": 889.24, "end": 898.84, "text": " in the OpenAge in when you go from virtual to actual."}, {"start": 898.84, "end": 901.36, "text": " So what this means is kind of..."}, {"start": 901.36, "end": 908.5600000000001, "text": " Yeah, you can train using this imagined model and then it will actually transfer, but there's"}, {"start": 908.5600000000001, "end": 913.5600000000001, "text": " a crucial thing and that is this kind of temperature thing here."}, {"start": 913.56, "end": 920.8399999999999, "text": " You can see a lot of times they actually don't manage to reach a good score if this parameter"}, {"start": 920.8399999999999, "end": 921.8399999999999, "text": " is wrong."}, {"start": 921.8399999999999, "end": 923.0, "text": " What does this parameter do?"}, {"start": 923.0, "end": 927.88, "text": " This parameter controls, as we discussed, the stochasticity of the model."}, {"start": 927.88, "end": 936.16, "text": " So basically, the environment model doesn't directly imagine a future state, but it imagines"}, {"start": 936.16, "end": 939.2399999999999, "text": " a distribution over future states."}, {"start": 939.24, "end": 945.44, "text": " And the higher this parameter, the more stochastic this distribution is, basically the more uniform,"}, {"start": 945.44, "end": 952.04, "text": " I guess, the more entropy you have in these future states."}, {"start": 952.04, "end": 956.16, "text": " We've seen this temperature parameter here."}, {"start": 956.16, "end": 964.72, "text": " Which is important because they go into length explaining why in this entire page here"}, {"start": 964.72, "end": 965.72, "text": " that we skipped."}, {"start": 965.72, "end": 971.5600000000001, "text": " And here you see just text there."}, {"start": 971.5600000000001, "end": 975.44, "text": " Cheating the world model, which basically they say, okay, if you have a wrong model, if"}, {"start": 975.44, "end": 980.64, "text": " you have a model that's wrong of the environment, and you train a policy on it necessarily,"}, {"start": 980.64, "end": 986.6800000000001, "text": " it's going to make probably find like a policy that kind of exploits the wrongness of this"}, {"start": 986.6800000000001, "end": 987.6800000000001, "text": " model."}, {"start": 987.68, "end": 997.8, "text": " So you might be able to walk through walls or fly or ignore the fireballs or basically,"}, {"start": 997.8, "end": 1003.28, "text": " find some or find that if you stand next to a wall in your imagination, you never 
get"}, {"start": 1003.28, "end": 1004.28, "text": " hit."}, {"start": 1004.28, "end": 1006.04, "text": " Something like this, which isn't true in real world."}, {"start": 1006.04, "end": 1011.1999999999999, "text": " And so the policy will exploit that."}, {"start": 1011.1999999999999, "end": 1016.56, "text": " And to counter this, they simply basically turn up this temperature parameter, giving them"}, {"start": 1016.56, "end": 1020.4, "text": " a more stochastic procedure."}, {"start": 1020.4, "end": 1026.3999999999999, "text": " Meaning they imagine a lot of kind of different futures and they turn their policy on all of"}, {"start": 1026.3999999999999, "end": 1033.2, "text": " them or in expectation of a sample of them, which means that if the environment model"}, {"start": 1033.2, "end": 1042.2, "text": " is wrong, it is kind of, I want to say if it's wrong, this corrects for it, it doesn't."}, {"start": 1042.2, "end": 1048.96, "text": " But if it's wrong, you still sample different futures."}, {"start": 1048.96, "end": 1056.4, "text": " So if it has one wrong future, you still have the other ones to kind of punish the policy"}, {"start": 1056.4, "end": 1058.96, "text": " if it tries to exploit this one mistake."}, {"start": 1058.96, "end": 1063.48, "text": " At least that's the reasoning behind it."}, {"start": 1063.48, "end": 1067.4, "text": " So that's how they do this."}, {"start": 1067.4, "end": 1071.24, "text": " You can interact with their trained environment models online somehow."}, {"start": 1071.24, "end": 1077.68, "text": " They also give a kind of a look at what they would like to have is they would like to kind"}, {"start": 1077.68, "end": 1082.64, "text": " of, instead of collecting the environment model from random rollout, they kind of would"}, {"start": 1082.64, "end": 1087.56, "text": " try to train it and do use it again to collect more data, to train more environment model,"}, {"start": 1087.56, "end": 1093.16, "text": " then use the environment, better environment model to train more the policy and so on in"}, {"start": 1093.16, "end": 1094.16, "text": " a stepwise fashion."}, {"start": 1094.16, "end": 1095.44, "text": " But they don't actually do it."}, {"start": 1095.44, "end": 1098.16, "text": " They simply describe it."}, {"start": 1098.16, "end": 1105.3200000000002, "text": " Yeah, and the rest of the paper is a bit of related work and discussion."}, {"start": 1105.3200000000002, "end": 1112.8000000000002, "text": " It's very, it's very prosaically written, kind of different from what you're used to"}, {"start": 1112.8000000000002, "end": 1115.24, "text": " if you read a lot of these papers."}, {"start": 1115.24, "end": 1122.08, "text": " But yeah, I hope you can now you know what's going on and see you next time."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=_Z9ZP1eiKsI | Curiosity-driven Exploration by Self-supervised Prediction | https://arxiv.org/abs/1705.05363
Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
Abstract:
In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore its environment and learn skills that might be useful later in its life. We formulate curiosity as the error in an agent's ability to predict the consequence of its own actions in a visual feature space learned by a self-supervised inverse dynamics model. Our formulation scales to high-dimensional continuous state spaces like images, bypasses the difficulties of directly predicting pixels, and, critically, ignores the aspects of the environment that cannot affect the agent. The proposed approach is evaluated in two environments: VizDoom and Super Mario Bros. Three broad settings are investigated: 1) sparse extrinsic reward, where curiosity allows for far fewer interactions with the environment to reach the goal; 2) exploration with no extrinsic reward, where curiosity pushes the agent to explore more efficiently; and 3) generalization to unseen scenarios (e.g. new levels of the same game) where the knowledge gained from earlier experience helps the agent explore new places much faster than starting from scratch. | Hi there. Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised Prediction. It's a relatively short idea, so it shouldn't take too long. The fundamental idea of the paper is to tackle the reward sparseness problem in reinforcement learning. For example, if you have a Super Mario game like here, there are a number of ways you can think of the reward, but one way you could formulate it is that you simply get a plus one reward when you finish the level. Let's say you finish the level, you get plus one; if you die or don't make it in time, you get negative one. I think there's no way to not make it in... oh yeah, there's actually a time limit. So the problem here is that your algorithm needs to learn to take actions now such that it gets to the end of the level, but the reward is only at the end of the level, so step by step it has no signal to go on, because the reward is always zero. It needs to learn these long-range dependencies, and that's notoriously hard in reinforcement learning: to learn, step by step, actions that maximize some very long-term goal. You can also think of a game of chess, where your reward is going to be whether you win or lose at the end, but step by step this reward is 50-ish steps away, so you have no way of optimizing your actions step by step in a meaningful manner. There are many ways to get around this. One way that people have used is what's called reward shaping. Reward shaping means introducing, as the designer of the algorithm, additional rewards that you know are good, or that help solve the problem, or that are at least correlated with the reward you're going to get at the end. In Mario this could be: the further right you go, the more reward you get, so you get an additional reward for going right, as in the small sketch below. And coincidentally, I think in real Mario this also gives you points, but in our situation the reward is just going to be at the end.
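As a tiny illustration of such a shaped reward for a Mario-like environment (the coefficient and the progress signal are made-up assumptions, and this is exactly the kind of domain-specific hand-crafting the paper wants to avoid):

```python
# Illustrative shaped reward: sparse terminal reward plus a dense bonus for moving right.
def shaped_reward(env_reward, x_pos, prev_x_pos, shaping_coef=0.01):
    progress_bonus = shaping_coef * (x_pos - prev_x_pos)  # reward rightward progress
    return env_reward + progress_bonus
```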
You could also say that if you stomp one of the Goombas, that gives you a bit of reward too. In chess, you could say the more pieces you have, the more reward you get, so you get a bit of reward if you have more pieces than your opponent or if your opponent loses pieces, and you also get a bit of reward if you control more territory on the board, and so on. These are all things that we know correlate with the end reward, because in Mario, for example, the end of the level is actually on the right. But of course it's not perfect, because sometimes there are situations where you have to go back and go around something, or go over something, and not immediately go to the right; likewise in chess there are good sacrifices that you can make. So these additional rewards help, but they're not perfect, and the biggest problem with them is that they're very domain specific. As the developer of the algorithm you basically have to know the domain, like Super Mario, and you have to know that the goal is on the right, so you can construct your reward to reflect this. This is very domain specific; you basically have to do it for every domain again and again. In chess you have to know something about how to play chess, and so on. One way around this, and this paper proposes one method to do it, is to introduce an additional reward that is not based on the specific domain, but based on what they call curiosity, specifically curiosity by self-supervised prediction. What does that mean? The idea is not entirely new, in that people have done similar things before. If we go, for example, down here, here is this Doom environment, and what you could say is: in my agent I have a little module that's going to predict the future. So if I'm here, my agent will choose an action, like move forward, like pressing the forward key, and then I will predict how the screen is going to look. And since we know this is a 3D environment, this part of the screen is probably going to fill the whole screen because you're now closer, the perspective changes a little bit, and so on. This should be a learned model, a neural network that predicts the future from the current state and the current action, and you can train it in a supervised fashion, because you will perform some actions and collect data about what happens. So you can learn a network that predicts one step into the future, basically how the environment will look. And it is by no means a new idea to introduce rewards based on this type of learning of how the environment behaves. We've seen this in, for example, the original A3C paper, where one additional reward is something like pixel control: they consider, for this pixel here, how much can I control it with my action, how does my action influence it, how well can I predict it, and so on, and they learn to control the pixels on the screen with the actions and give a reward based on that. So the idea has been around. What this paper does specifically is to say: I'm going to predict the future, and if I am wrong about the prediction, then that gives me a reward. And that's the curiosity part.
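As a sketch of that basic idea, before the paper's refinement, the intrinsic reward is simply the error of a learned forward model. A naive version predicting the raw observation could look like the following; the network shape and scaling constant are illustrative assumptions, and the paper itself computes this error on features from an inverse-dynamics-trained encoder rather than on raw pixels, as discussed below.

```python
# Naive curiosity bonus: intrinsic reward = forward-model prediction error (illustrative).
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Predicts the next observation from the current observation and action."""
    def __init__(self, obs_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obs_dim),
        )

    def forward(self, obs, action):
        return self.net(torch.cat([obs, action], dim=-1))

def intrinsic_reward(model, obs, action, next_obs, scale=0.5):
    """Large prediction error = surprising transition = large curiosity bonus."""
    with torch.no_grad():
        pred = model(obs, action)
        return scale * ((pred - next_obs) ** 2).mean(dim=-1)
```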
Basically it means: if I have a good model of what's going to happen in the future, and I predict the future and I'm wrong, then something new has happened, something special, something that I hadn't expected. And the goal here is to get the algorithm to explore by itself, which is what you need when you don't have a reward: when you don't have a reward, what you want your algorithm to do is simply go around and explore. In a sense they're saying the way to do this is to go by curiosity, which means to actively seek out situations that you wouldn't expect. Whenever you don't expect something, that means it's something new, that means you haven't had this experience before, and that means it's a new state to explore that you have not seen before. So in the absence of any reward, you might as well go where you haven't been before; that's the essence. They then outline a number of problems that you might have with this approach. Let's first actually go to what the model looks like. That's here: this is what they call the intrinsic curiosity module. You're in a state, you have your policy, the policy gives you an action, the action goes to the environment, and the environment gives you the next state and also the reward. What they call r^e here is the extrinsic reward that you get from the environment, but they combine this with what's called an intrinsic reward that you get from the curiosity module, and that's what we've discussed: it tries to assess how new the state I'm going to be in is, how surprising it is for me. I'm going to first describe how you would build the model, how that gets you into problems, and then how to fix it. How you would build this is to have what's called a forward model. The forward model takes the action and the current state and predicts the next state; that's in here, don't worry about the phi hat right now. It predicts the next state, and then you compare this to the actual next state, you subtract, you just look at the difference between what you predicted the next state would be and what the next state really is, and that gives you the intrinsic reward: the more different these are, the higher the reward. That's what we've discussed, how different is it from what I expected. So how does that get you into problems? The authors give a very good illustrative example. Say you are in an environment, let's actually go over here: you're in an environment, you have your screen, and here is a road that you maybe need to walk along, and here are some leaves in the wind. I'm very bad at drawing leaves, so imagine these are leaves, and there's wind coming from here, shaking up these leaves and so on. If you simply try to predict this entire screen with your forward model, what's going to happen is that you will never be able to predict how these leaves are going to move, because you can't influence them. You can predict a bit from the current state, but the action you take has no influence on how these leaves are going to move, because they're influenced by the wind, and the wind is a random-ish process that you
can't control so the the authors say because of this the your algorithms always going to find these leaves basically interesting curious be curious about because it can't predict them and we've seen that the reward that they model to give an addition is based on how well you cannot predict certain state and they say okay if we do like this then the the kind of these things these random things that we can't influence will always be surprising and therefore we will always be curious about them and therefore we will always kind of look at the leaves and be amazed and get reward after reward because we can't predict them that's not the goal so what what they're arguing is that why are these leaves not important for curiosity because we can't influence them with our actions like we can influence where we go on this road because we can kind of move and the road is kind of static not governed by these random processes but the leaves we we we would like to discard them we can't influence them and therefore what they say is what we need is an encoder that takes a state and I'm going to try to delete this annotation so we need an encoder here features that takes a state and it outputs features of the state and then we're not our forward model isn't fed with the state it's fed with the features of the state and it's not going to output the next state is such but the features of the next state like the it's predict the feature and then we're going to compare that with the features of the true next state and that's what we compare so how does this encoder these features need to look and they're saying well these features should kind of only consider things about the state that are actually dependent on our actions and they have a I think a very interesting way of achieving to train such an such an encoder such a feature producing function in that they say it's going to be a neural network that we train by this training this so-called inverse model so we take this encoder and we train this inverse model on top of it and the inverse model takes the features of the last state and the new state and is trying to predict this action right here so this is this action the action we took to get from the old state to the new state so this inverse model is trained to predict what action was taken to get from the old state to the new state and by training the encoder with this inverse model like training this end to end you will make the encoders such that it only considers things that are actually relevant to predicting this action so in the leaves example it would discard the leaves it will discard anything that you can't influence with your action and therefore it will only retain features that are dependent on your action I think that's quite an interesting way to get rid of the irrelevant information that they don't want and then they can use this encoder to train this forward model and to essentially get this intrinsic reward so I find this idea quite interesting and as I said the idea of intrinsic reward and curiosity to go for exploration is not new but I think this kind of approach and I'm sure it's been around in some variants but I've just stumbled across this and this is oh sorry this is quite interesting so we're gonna take a look and you can go about the math of yourself but they do these kind of experiments and they corrupt as you can see part of the screen with noise here and they of course show like okay since the noise is not dependent on our action our features do actually discard this noise 
only focus on the part that we can actually influence power actions so that's I think all know pretty interesting they show of course that their algorithm then outperforms the kind of baseline of a3c on these sparse reward tasks and the sparser here you can see like the left is like dense reward and then sparse reward and then very sparse reward and at some point you see the a3c simply doesn't doesn't do it anymore but what's also interesting is here if you have the ICM pixels which kind of means pixel based curiosity so where we don't have this encoder where we simply trying to predict the pixels of the environment and that works if you have like this kind of sparse reward thing but if you want to if you have the very sparse reward that also fails and you actually need this encoder that discards what's not relevant for predicting the actions yeah so you can you can take a look at the rest of the paper yourself I find I find it quite interesting they analyze how their their agent explore these mazes and things and they have more experiments on like benchmark tasks so I will look at it and I'll see you next time | [{"start": 0.0, "end": 2.0, "text": " Hi there."}, {"start": 2.0, "end": 9.0, "text": " Today we're going to look at this paper, Curiosity-Driven Exploration, by self-supervised prediction."}, {"start": 9.0, "end": 13.0, "text": " It's a relatively short idea, so it shouldn't take too long."}, {"start": 13.0, "end": 21.0, "text": " So the fundamental idea of the paper is to tackle the reward sparseness problem reinforcement learning."}, {"start": 21.0, "end": 39.0, "text": " For example, if you have a Super Mario game like here, and there's a number of ways you can think of the reward, but one way you could formulate it is that you simply get kind of a plus one reward when you finish the game, like, or the level."}, {"start": 39.0, "end": 42.0, "text": " Let's say you finish the level, you get plus one."}, {"start": 42.0, "end": 49.0, "text": " If you die or don't make it in time, you get negative one."}, {"start": 49.0, "end": 51.0, "text": " I think there's no way to not make it in."}, {"start": 51.0, "end": 55.0, "text": " Oh yeah, there's actually, there's a time limit."}, {"start": 55.0, "end": 66.0, "text": " So the problem here is that your algorithm kind of needs to learn to make things now, such that it gets to the end of the level."}, {"start": 66.0, "end": 74.0, "text": " But the reward is only at the end of the level, so basically step by step it has no signal to go on, because the reward is always zero."}, {"start": 74.0, "end": 85.0, "text": " And it kind of needs to learn these long range dependencies, and that's notoriously hard in reinforcement learning to step by step learn actions that kind of maximize some very long term goal."}, {"start": 85.0, "end": 96.0, "text": " So you can also think of a game of chess where your reward is going to be whether you win or lose at the end, but step by step, it's kind of this reward is 50-ish steps away."}, {"start": 96.0, "end": 107.0, "text": " So you have no way of kind of step by step optimizing your actions in a meaningful manner."}, {"start": 107.0, "end": 115.0, "text": " So there are many ways to get around this one way that people have done is what's called reward shaping."}, {"start": 115.0, "end": 134.0, "text": " And reward shaping is trying to introduce additional rewards kind of as a designer of the algorithm that you know are kind of good or helping to solve the problem or at least correlated with the reward 
you're going to get at the end."}, {"start": 134.0, "end": 143.0, "text": " So in Mario this could be like the further right you go, the more reward you get, you get kind of an additional reward if you go right."}, {"start": 143.0, "end": 151.0, "text": " And coincidentally, I think in real Mario this also gives you points, but don't our situation is that the reward is just going to be at the end."}, {"start": 151.0, "end": 171.0, "text": " You could also say like if you if you kill the or if you if you stomp the the boom bus one boom by your stomp that actually gives you also a bit of reward in chess, you could say like the more pieces you have, that gives you a bit of reward if you have more pieces than your opponent if you opponent loses pieces."}, {"start": 171.0, "end": 176.0, "text": " And you also get a bit of reward if you get more territory on the board and so on."}, {"start": 176.0, "end": 185.0, "text": " So these are all things that we know kind of correlate with the end reward like because in Mario for example the end of the level is actually on the right."}, {"start": 185.0, "end": 197.0, "text": " But of course it's not perfect because sometimes there are situations where you kind of have to go go back and go around something or go over something and not immediately go to the right as well as in chess."}, {"start": 197.0, "end": 201.0, "text": " There are good sacrifices that you can make."}, {"start": 201.0, "end": 210.0, "text": " So these these kind of additional rewards they help help, but they're not perfect and the biggest problem with them is they're very domain specific."}, {"start": 210.0, "end": 218.0, "text": " So developer of the algorithm you basically have to know the domain like super Mario and you have to know the goal is on the right."}, {"start": 218.0, "end": 232.0, "text": " So you have to construct your reward in order to reflect this and this is this is a very domain specific basically have to do it for every domain again and again and again in chess."}, {"start": 232.0, "end": 235.0, "text": " You have to know something about chess to play and so on."}, {"start": 235.0, "end": 256.0, "text": " So one way around this and this paper proposes one method to do this is to introduce an additional reward not based on the domain specifically but based on what they call this curiosity and it's specifically curiosity by self supervised prediction."}, {"start": 256.0, "end": 266.0, "text": " So what does that mean? 
The idea is not new in that people have kind of done this before."}, {"start": 266.0, "end": 283.0, "text": " If we go for example down here so here is this kind of doom environment and what you could say is in my agent."}, {"start": 283.0, "end": 291.0, "text": " I have kind of a little module that's going to predict the future."}, {"start": 291.0, "end": 309.0, "text": " So like if I'm if I'm here then I will basically choose an action my agent will choose an action like move forward like press the forward key and then I will predict how that's going to look."}, {"start": 309.0, "end": 322.0, "text": " And of course we know this kind of a 3D environment so this probably going to be this part of the screen is going to be the full screen because you're now closer and so on the perspective changes a little bit."}, {"start": 322.0, "end": 338.0, "text": " Basically this is this should be a learned like an oral network that predicts the future from the state now and the action now and basically the you can train this in a supervised fashion because you will perform some actions you will collect some data about this."}, {"start": 338.0, "end": 357.0, "text": " So you can learn a network that is going to predict one step into the future basically how the environment will look and then this is by no means kind of a new idea to introduce rewards based on this type of learning how the environment acts."}, {"start": 357.0, "end": 377.0, "text": " We've seen this in like the a3c paper original one where the additional reward is something that pixel control where they consider like okay this pixel here how much can I control it by my action like how does my action influence it how can I predict this and so on and"}, {"start": 377.0, "end": 404.0, "text": " to to learn how to control the pixels on the screen by your actions and to give a reward based on that so that's that's been around this idea and what this paper here does specifically is they say well I'm going to predict the future and if I am wrong about the prediction then that gives me a reward and that's the curiosity part."}, {"start": 404.0, "end": 433.0, "text": " Basically it means like if I have a good model of what's going to happen in the future and then I predict the future and then I'm wrong it means something new has happened something special something that I hadn't expected and therefore if the goal is to get the algorithm to explore by itself which is what you need to do when you don't have a reward right when you don't have a reward"}, {"start": 433.0, "end": 454.0, "text": " what you want your algorithm to do is simply to go around and explore and in a sense they're saying okay the way to do is is to go by curiosity which means is to go to actively seek out environments that you haven't that you wouldn't expect basically"}, {"start": 454.0, "end": 480.0, "text": " so it whenever you don't expect something that means it's something new that means you haven't had this experience before right and that means that it's kind of a new state to explore that you haven't you have not seen this before so kind of in absence of any reward might as well go where you haven't been before and that's that's kind of the essence"}, {"start": 480.0, "end": 508.0, "text": " so they outline a number of problems that you might have with this approach they they give the example let's first actually go to what the model actually looks like so that's here you can see this is kind of what they call an intrinsic curiosity module so you have a state here 
you're in a state you have your policy and your policy gives you an action"}, {"start": 508.0, "end": 535.0, "text": " and the action goes to an environment and the environment gives you the next state and also the what's called the reward they call here are E is the extrinsic reward that you get from the environment but they also combine this within so what's called an intrinsic reward that you get from here that you get from the curiosity module"}, {"start": 535.0, "end": 562.0, "text": " and that's what we've discussed it kind of tries to assess how new is the state that I'm going to be in how surprising it is for me so the thing is that I'm going to first describe like the model how you would build it and how that gets you into problems and then how to fix it"}, {"start": 562.0, "end": 587.0, "text": " so how you would build this is to have this what's called this forward model so the forward model takes the action and the current state and it kind of predicts the next state that's in here don't worry about the fi hat right now it predicts the next state and then you compare this to the what's here to the actual next state"}, {"start": 587.0, "end": 607.0, "text": " you here is subtract you just look at the difference between what you predict the next state is going to be and what the next state really is and that gives you the intrinsic reward the more different these are the kind of higher the reward that's what we've discussed like how how much different is it from what I've expected"}, {"start": 607.0, "end": 636.0, "text": " so how does that get into problems and the authors give a very very good illustrative example of say you are in an environment let's actually go over here you're in an environment and you have your screen right and here is kind of a road that you need to maybe walk after and here are some leaves in the wind I'm very bad at drawing leaves so imagine these are leaves"}, {"start": 636.0, "end": 660.0, "text": " and there's wind right like winds coming from here and kind of shaking up these leaves and so on so if you simply try to predict this entire screen from as your forward model what's going to happen is you will never be able to kind of predict how these leaves are going to move because they're basically you can't influence them"}, {"start": 660.0, "end": 683.0, "text": " no no part of like you can predict a bit from the current state but the action you take has no influence on how these leaves are going to move because they're influenced by the wind and the wind is kind of this random ish process that that you you can't control"}, {"start": 683.0, "end": 706.0, "text": " so the the authors say because of this the your algorithms always going to find these leaves basically interesting curious be curious about because it can't predict them and we've seen that the reward that they model to give an addition is based on how well you cannot predict certain state"}, {"start": 706.0, "end": 727.0, "text": " and they say okay if we do like this then the the kind of these things these random things that we can't influence will always be surprising and therefore we will always be curious about them and therefore we will always kind of look at the leaves and be amazed and get reward after reward because we can't predict them that's not the goal"}, {"start": 727.0, "end": 747.0, "text": " so what what they're arguing is that why are these leaves not important for curiosity because we can't influence them with our actions like we can influence where we go on this road 
because we can kind of move and the road is kind of static not governed by these random processes"}, {"start": 747.0, "end": 770.0, "text": " but the leaves we we we would like to discard them we can't influence them and therefore what they say is what we need is an encoder that takes a state and I'm going to try to delete this annotation"}, {"start": 770.0, "end": 787.0, "text": " so we need an encoder here features that takes a state and it outputs features of the state and then we're not our forward model isn't fed with the state it's fed with the features of the state"}, {"start": 787.0, "end": 800.0, "text": " and it's not going to output the next state is such but the features of the next state like the it's predict the feature and then we're going to compare that with the features of the true next state and that's what we compare"}, {"start": 800.0, "end": 815.0, "text": " so how does this encoder these features need to look and they're saying well these features should kind of only consider things about the state that are actually dependent on our actions"}, {"start": 815.0, "end": 834.0, "text": " and they have a I think a very interesting way of achieving to train such an such an encoder such a feature producing function in that they say it's going to be a neural network that we train by this training this so-called inverse model"}, {"start": 834.0, "end": 851.0, "text": " so we take this encoder and we train this inverse model on top of it and the inverse model takes the features of the last state and the new state and is trying to predict this action right here"}, {"start": 851.0, "end": 859.0, "text": " so this is this action the action we took to get from the old state to the new state"}, {"start": 859.0, "end": 875.0, "text": " so this inverse model is trained to predict what action was taken to get from the old state to the new state and by training the encoder with this inverse model like training this end to end"}, {"start": 875.0, "end": 891.0, "text": " you will make the encoders such that it only considers things that are actually relevant to predicting this action so in the leaves example it would discard the leaves it will discard anything that you can't influence with your action"}, {"start": 891.0, "end": 897.0, "text": " and therefore it will only retain features that are dependent on your action"}, {"start": 897.0, "end": 914.0, "text": " I think that's quite an interesting way to get rid of the irrelevant information that they don't want and then they can use this encoder to train this forward model and to essentially get this intrinsic reward"}, {"start": 914.0, "end": 938.0, "text": " so I find this idea quite interesting and as I said the idea of intrinsic reward and curiosity to go for exploration is not new but I think this kind of approach and I'm sure it's been around in some variants but I've just stumbled across this and this is oh sorry this is quite interesting"}, {"start": 938.0, "end": 962.0, "text": " so we're gonna take a look and you can go about the math of yourself but they do these kind of experiments and they corrupt as you can see part of the screen with noise here and they of course show like okay since the noise is not dependent on our action"}, {"start": 962.0, "end": 976.0, "text": " our features do actually discard this noise only focus on the part that we can actually influence power actions so that's I think all know pretty interesting they show of course that"}, {"start": 976.0, "end": 990.0, "text": " their algorithm then 
outperforms the kind of baseline of a3c on these sparse reward tasks and the sparser here you can see like the left is like dense reward"}, {"start": 990.0, "end": 1008.0, "text": " and then sparse reward and then very sparse reward and at some point you see the a3c simply doesn't doesn't do it anymore but what's also interesting is here if you have the ICM pixels which kind of means pixel based curiosity"}, {"start": 1008.0, "end": 1030.0, "text": " so where we don't have this encoder where we simply trying to predict the pixels of the environment and that works if you have like this kind of sparse reward thing but if you want to if you have the very sparse reward that also fails and you actually need this encoder that discards what's not relevant for predicting the actions"}, {"start": 1030.0, "end": 1054.0, "text": " yeah so you can you can take a look at the rest of the paper yourself I find I find it quite interesting they analyze how their their agent explore these mazes and things and they have more experiments on like benchmark tasks so I will look at it and I'll see you next time"}] |
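To make the mechanism described in the row above concrete, here is a minimal sketch of an intrinsic-curiosity-style module in PyTorch. The class name `ICM`, the layer sizes, the feature dimension, and the choice to detach the encoder features in the forward loss are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Minimal intrinsic-curiosity sketch (assumed shapes and names, not the authors' exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, feat_dim: int = 32):
        super().__init__()
        self.n_actions = n_actions
        # Encoder phi: maps raw observations to (hopefully) action-relevant features.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        # Inverse model: predicts the action from phi(s_t) and phi(s_{t+1});
        # its loss is what shapes the encoder to keep only controllable factors.
        self.inverse = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
        # Forward model: predicts phi(s_{t+1}) from phi(s_t) and the action.
        self.fwd = nn.Sequential(nn.Linear(feat_dim + n_actions, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, obs, next_obs, action):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        a_onehot = F.one_hot(action, self.n_actions).float()
        # Inverse loss: trained end to end, it pushes the encoder to retain action-relevant features.
        logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        inverse_loss = F.cross_entropy(logits, action)
        # Forward prediction error in feature space; phi is detached so that only the
        # inverse loss shapes the encoder (one common choice, a detail the video glosses over).
        phi_next_hat = self.fwd(torch.cat([phi.detach(), a_onehot], dim=-1))
        fwd_err = 0.5 * (phi_next_hat - phi_next.detach()).pow(2).sum(dim=-1)
        intrinsic_reward = fwd_err.detach()  # surprise, i.e. prediction error, per sample
        return intrinsic_reward, inverse_loss + fwd_err.mean()
```

In use, the intrinsic reward would be scaled and added to the extrinsic reward before the policy update; the relative weighting of the two losses and of the two rewards is a tunable detail omitted here.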
Yannic Kilcher | https://www.youtube.com/watch?v=BBp0tHcirtQ | git for research basics: fundamentals, commits, branches, merging | Don't watch this if you already know how to solve a merge conflict :) | Hi there. Today we're taking a look at Git, especially Git as it is used in research collaborations. Git is a tool to collaborate, but when you work on a paper together with other people you won't use a lot of the features that Git offers and that are usually described in Git tutorials. So in this series I want to talk about the simplest way to collaborate with people on a research project using Git, and today we're going to go over just the fundamentals, which make everything else a lot easier. What you need to understand about Git is that, fundamentally, Git is a graph, and it's a graph of commits. What do I mean by this? Let's say you have your paper, you write some things, and this is version 1. Then you change a line, and that's version 2, and so on. You have a chain of versions that you would like to keep around. This is the classic example of version control, where you would like to save these versions in a way that lets you go back to any previous version at any point in time. And this is exactly what Git does, without you having to copy the file and rename it to version 2, version 3, final version, really final version, really final version corrected, and so on. So Git is fundamentally a graph of objects we call commits. A commit, which I'm going to represent as a bubble here, is simply an image of your hard drive, or of one folder of your hard drive, at a particular point in time. This folder will contain some files; let's call them file A and file B (oops, I meant to make a square here). All the files that are in this folder, which is called the Git repository (that's not entirely correct, but bear with me): when you make a commit, all these files are saved as they are into one of these bubbles, and they're kept there basically forever, in exactly that state. What you can do now is go ahead and make a second commit. You change a bunch of files; let's say file B is still the same, but file A has changed and is now A'. You make a second commit, and the second commit references the first commit. Part of every commit, except the very first one, is a pointer to its parent commit. Commits all have names, and the name of a commit is its hash. The hash includes the hashes of all the files in the commit, so a hash looks something like f5c259 and so on, and for the next commit the hash also includes the reference to the parent. That's why the pointer to the parent is an integral part of a commit; this is ultimately what makes the graph a graph: every commit references its parent. You can address every commit by its name, which as I said is its hash, and although the hash is really long, you can also simply reference a commit by its first few characters, as long as that prefix is unique; Git will let you do this whenever you need to reference a commit. All right. So we've established that a commit is a bunch of files, saved in the state they were in. Of course Git is smart and will only actually store the diff from one commit to the next, but you can just imagine that a commit is the status of a folder at a particular point in time. So let me take away these files here. There are a bunch of other concepts in Git. One concept Git has is called a tag. A tag is a name for a commit that you choose yourself; it's like a little flag that sticks to a commit, and you might call it v1, for version one. It's simply a label, and as you make new commits the tag stays where it is, so at any time, if you don't want to remember that big long hash, you can simply refer to that commit as v1, because that's the tag. That's simple enough. The next kind of little flag that you can attach to a commit is called a branch, and here is the difference between a tag and a branch. A branch is also a flag with a name, let's call it blah. The difference is that when you have checked out the blah branch, meaning you're currently looking at the commit the blah flag points to, and you make a commit on top of that commit, Git will automatically take the flag off the old commit and move it to the new commit. You might know branches from Subversion or other version control systems, and it's very similar, but in Git a branch is, just like a tag, simply a name for a commit, with the additional property that when you make a commit on top of that commit, a commit that has it as its parent, Git will move the branch, the little flag, to the new commit. There is always one branch, called master, which Git creates automatically for you, so you have this little master flag. You make a commit on top of master, which causes master to move to the new commit, and so on. So when people say they work on the master branch, it means they're simply making commits on top of whichever commit currently has the master flag. Git also allows you to move both tags and branches to any commit, so I could forcefully erase the master flag here and stick it on another commit. Sometimes, if we decide that these two commits are no good, we simply take the master flag and put it back here, and when we then make a new commit on top of master, Git creates the new commit and moves the master flag to it, because master is a branch, and we simply continue working from there. So in Git there is no need to actually delete commits or anything like that; we can simply move the branch we're working on to the commit we like, and garbage collection will at some point go and delete the commits that are no longer reachable. This is a bit more delicate once you collaborate with other people, because they might have made commits that reference the commits you just abandoned, so it's a bit tricky, but ultimately it's something you can do. The next thing we're going to talk about is multiple branches. Having multiple branches basically boils down to this: you have a few commits, you have your graph, and let's say this is your master branch, so here we have master. Now someone else says: hey, I want to try out this new feature in the code; it will probably change the code base, and maybe it will introduce some bugs, but I want to try it out. What you can do is make a new branch, f1, let's call it f1 for feature one. Then I can make a commit on top of f1, which moves the f1 flag, and I can make a second and a third commit, and so on. Meanwhile, the other people working on the project, or maybe even you yourself, keep working on top of the commit that the master branch points to. In software engineering this is typically used when one part of the team wants to implement a new feature while the other part of the team continues to do bug fixes and development on the version of the software that doesn't yet have the new feature. Since the new feature isn't complete yet, they can both work on the same code base, each on their own branch, so to speak. At the end, when feature one is ready, when people say, okay, we've implemented it, it's all good, there are no bugs, we would like to integrate feature one into the main software, what you have to do is a so-called merge. A merge is a process that generates a merge commit, and a merge commit is this thing here: as you notice, it has more than one parent, in this case two parents. Both branches are based off of this common commit, and then individual changes were made in this branch and in that branch, so it's quite possible that people changed different things, and what the merge commit needs to do is somehow bring these changes together. Both branches might even have changed the same file, but in different ways, and now the question is how we merge these different files. That's the last topic for today: how does Git do a merge? Git has a bunch of built-in algorithms that help you, and most of the time merging is automatic. If you have files A and B, and in one branch A is changed while in the other branch B is changed, Git simply assumes: this branch has changed A, the other one hasn't, so the change presumably means something, I'll take it, and the same for B. Basically, whenever something has changed in one branch and not in the other, Git assumes that the change is the thing that should survive; it assumes the change was made for a reason, and that reason should carry over. One change might be a bug fix, the other one might be the new feature. The same goes within a single file: when one branch changes something near the top of a file and the other branch changes something near the bottom, Git assumes both changes are wanted and takes both. The only time it doesn't know what to do is when both branches change the same line, or lines close enough together; there are algorithms by which Git determines when there is a so-called merge conflict, and that's the only situation where it doesn't know what to do. As a side note, this is why it's a good idea to structure files in a line-based fashion, especially when you're writing a paper: it's good practice to put every sentence on a new line and not have giant lines containing multiple sentences, because if you put every sentence on a new line you immediately see where something was changed, whereas if you have one big paragraph on a single line, Git will simply tell you that this line, which is an entire paragraph, has changed, and you don't see what's happening. So when you have a merge conflict, Git asks you what to do, and it does this in a very simple way, which we're going to look at now as a final demonstration. I have a Git repository here; as you can see, there's simply this test file in it, and I've made one commit, the initial commit. Let's look at this test file: it simply says hello. What I can do, for example, is change it to hi. When I want to make a new commit, first of all, git status will always tell you what's happening in your repository and what you can do. Here it says: changes not staged for commit, modified test.txt, and it also tells you what you can do; for example, use git checkout -- with the file name to discard changes, or use git add to update what will be committed. I'll use git add on this file, and now it says changes to be committed, shown in green, as you can see. When I now type git commit, it should commit these changes. And this is a common occurrence in Git: whenever you see a text editor opening, Git expects you to type a message, in this case a commit message, basically a log message. The lines starting with a hash sign are comments and will not go into the message; this is all described right there in those comments. The useful thing is that if you leave the message empty, Git will abort the commit, so if you notice you've done something wrong you can simply save the file with nothing in it but comments and Git will abort; that's super useful. I'll just say "added hi" and then save this file. There's nothing special going on here: it's a text editor editing a text file; you simply save the file and close the editor, and Git continues. With git log you can now see we have two commits, my initial commit and the commit called "added hi", and if you look at the test file, it says hi. What we'll do now, finally, is make two branches, as we discussed before. This is my initial commit, I've made one more commit, and we're on the branch master right now, which git status will tell you: on branch master. So this is master. The plan: we'll make a new branch called f1, make a commit on f1, meaning we move the f1 flag, then make a commit on top of master, which moves the master flag, and then we'll merge f1 back into master, so that master ends up here, and at the end we can even remove the f1 branch. And we'll do this in a way that produces a merge conflict, so that you see the whole process. So, first I want to make a branch f1; for this we can use git checkout -b, which makes a new branch f1. If the branch already exists you simply use git checkout, which means: go to where this branch is, to the commit that the branch references. We also say we put HEAD on this commit; HEAD is always the thing you're looking at, the thing you've currently checked out. So we make a new branch f1 and immediately switch to it; if I type git status it says: on branch f1. It's still the same commit, we're just on a different branch. Now we'll make a change to this file; I'm going to write hello with some extra o's. Save the file; git status says it's modified. I want to add and commit it, and there's a shortcut: git commit -a -m. The -a simply says: take all the files that have changed and add them, so I don't need to git add every changed file separately. This only applies to changed files, though; if you have completely new files that Git isn't tracking yet, you need to add them yourself. And with -m I can give the commit message directly: "more o". Cool. So now we have made this commit and moved the f1 flag to it. What we do next is go back to the commit that the master branch currently points to and make the other commit. First we go back: git checkout master, since master is still referring to that commit. As you can see, when I open the test file the extra o's are gone; it's the state from before. I can now change the file in some other way; in this case I write hello with extra e's, because I want many e's, and I commit this. Because I'm now on the branch master, this makes a new commit and moves the master flag to it: "more e". If you look at git log, you see all the commits on this branch, but you don't see the commit on the f1 branch; for that I have to go back to the f1 branch and look at the log there, and you see it's a different story: here, after the "added hi" commit there's the "more o" commit, whereas up there, after the "added hi" commit, there's the "more e" commit. By the way, merging doesn't only happen when you have different branches; the same thing happens when you collaborate with other people, who make commits while you make commits independently of them, and when you try to sync your work you often need to do a merge, and merge conflicts can happen there too. So what we do now is go back to master: git checkout master (there are shortcuts for all of these). We're on this branch right here, and what we want to do is make the merge commit; we want to merge f1 into master. So while I am on master, I say git merge f1, and it will try to merge, but it tells me: conflict, automatic merge failed, fix conflicts and then commit the result. I can say git status, and it tells me: you are currently merging, you have unmerged paths, and this test.txt file was modified in both branches. So I open the test file, and this looks very strange the first time you see it, but it's actually very intuitive. Wherever there is a line, or a block of lines, that both branches have changed, Git indicates this by writing markers directly into the file. It writes a row of less-than signs, then HEAD, which means: this is the thing you're currently looking at, which we know is master, and it changed this first line to hello with the e's. Then comes a row of equal signs, and below it the version from the f1 branch, which changed the same line to hello with the o's, and the end of the block is marked with a row of greater-than signs. So what you need to do in order to merge is simply to make this file as you wish it
is in the merged state first of all you can always start by kind of removing actually good practice maybe to remove these equal lines and then within this the limiters kind of change what you how you want the file to look in essence I simply want to have these o's here at the end I just want to many like this or like this I like that so I'm going to call that the merge and then I delete these lines so this is the file that I would like to the merged commit to have so what I can do I save this file again I say get status it still tells me it's un merged but it tells me what to do it says use get add to more resolution right I've resolved it get add test tasty get status and it says all conflicts fixed but you are still merging use get commit to conclude merge all right get commit then and I'm still I'm still have to enter a commit message which is already predefined here saying I merged the branch F1 and there's conflict the conflict but that's fine I like this message so I'm simply going to save the file right here and when I look into get log it now gives me the full story first I have this added high commit then I have the more O commit and the more e commit which were in parallel to each other and then I merged both branches into one so we're now right here what I can do now is I can even delete the F1 flag because I don't need it anymore basically which I do by get branch minus the F1 and it has deleted branch F1 so no commits are actually deleted when you delete the branch it's it's simply the little flag is deleted the only danger is when you delete the little flag the name and you're unable to reach the commit from any other but here of course we have this master and by following this edge here we can reach this commit just fine so get won't delete it or garbage collected but it will also tell you when or about to do something dangerous so don't worry alright so with this I think you should already have many tools or many many insights into Git and another video we're going to look at how to collaborate online with people which isn't much harder than this it's it's simply like two more steps to do to push and pull your work from a server together with other people alright so that was it take care | [{"start": 0.0, "end": 8.0, "text": " Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research"}, {"start": 8.0, "end": 16.0, "text": " collaborations. So Git is like a tool to collaborate but when you research like when you work"}, {"start": 16.0, "end": 22.5, "text": " on a paper together with other people you won't use a lot of the features that Git offers"}, {"start": 22.5, "end": 27.5, "text": " and that are usually described by Git. So in this series I want to talk about how what's"}, {"start": 27.5, "end": 33.0, "text": " kind of the most simple way to collaborate with people on like a research project using Git."}, {"start": 33.0, "end": 40.0, "text": " And today we're going to go over just the fundamentals which makes everything else a lot easier."}, {"start": 40.0, "end": 51.0, "text": " So what you need to understand about Git is that fundamentally Git is a graph and it's a graph of commits."}, {"start": 51.0, "end": 61.5, "text": " What I mean by this. So let's say you have your paper, you write some things and then this is kind of version 1."}, {"start": 61.5, "end": 69.5, "text": " And then you have another paper or the same paper and you kind of change this line here. That's a version 2."}, {"start": 69.5, "end": 76.5, "text": " And so on. 
You have kind of this chain of versions that you would like to keep in store."}, {"start": 76.5, "end": 88.0, "text": " This is the classic example of version control where you would like to save these versions and do it in a way that you can at any point in time go back to any version previously."}, {"start": 88.0, "end": 96.5, "text": " And this is exactly what Git does without you having to kind of rename like people usually copy the file and then rename like this version 2,"}, {"start": 96.5, "end": 102.0, "text": " version 3, you final version, really final version, really final version corrected blah blah blah."}, {"start": 102.0, "end": 109.0, "text": " So Git fundamentally is a graph and a graph of an object we call a commit."}, {"start": 109.0, "end": 121.0, "text": " So commit which I'm going to represent as a bubble here is simply a kind of an image of your hard drive or one folder of your hard drive at a particular point in time."}, {"start": 121.0, "end": 140.0, "text": " So this will contain all kind of files. Let's call this file a file B. Oops. Well meant to make a square here. But all the files that are in your folder, which is called the Git repository or it's not correct but bear with me."}, {"start": 140.0, "end": 159.0, "text": " You have this folder and all the files in this folder when you make a commit all these files are kind of saved as they are into one of these bubbles and they're saved forever basically in this status that they are."}, {"start": 159.0, "end": 172.0, "text": " So what you can do now is you can go ahead and make a second commit. So you change a bunch of files. Let's say the file B is still the same."}, {"start": 172.0, "end": 182.0, "text": " And but the file A has changed is now a prime. You make a second commit and the second commit references the first commit. So part of a commit except the very first commit."}, {"start": 182.0, "end": 196.0, "text": " Part of a commit is always a pointer to its parent commit. And especially if you look at the commits they all have names and the name of a commit is always its hash."}, {"start": 196.0, "end": 215.0, "text": " And the hash includes basically the hash of all the files that are in there. So a hash could be something like F5C259 and so on. And for the next commit the hash also includes the reference to the parent."}, {"start": 215.0, "end": 228.0, "text": " That's why the integral part of a commit is to which parent it belongs. This ultimately is what makes the graph kind of the graph."}, {"start": 228.0, "end": 247.0, "text": " And we commit and references its parent. So you can address every commit by its name as I said which is the hash of the commit. And you can also so the hash is like really long but you can also simply reference it by kind of the first couple of letters as long as that's unique."}, {"start": 247.0, "end": 263.0, "text": " So we just want to make it look like it will let you do this whenever you need to reference some commit. All right. So we've discussed that basically a commit is a bunch of files as they are and it saved in this state."}, {"start": 263.0, "end": 275.0, "text": " Of course smart. It will only kind of save the diff from one to the other commit. But you can just imagine that a commit is simply one the status of a forward writer particular point of time."}, {"start": 275.0, "end": 293.0, "text": " So let me just take away these files here. There are a bunch of other things in in get. 
So the kind of one concept that get has is called a tag."}, {"start": 293.0, "end": 306.0, "text": " And the tag is a name for a commit that you give yourself and the tag is like a little flag that sticks in a commit and you may say this V1 version one."}, {"start": 306.0, "end": 319.0, "text": " This is simply a tag and as you make new commit and so on the tax and please stay certain at any time if you don't want to remember this big long hash you can simply refer to this commit as V1 because that's the tag."}, {"start": 319.0, "end": 332.0, "text": " That kind of simple. The next form of a little flag that you can append to commit is called a branch and the branch the difference between a tag and a branch."}, {"start": 332.0, "end": 336.0, "text": " So branches also this flag and we'll call it. I don't know."}, {"start": 336.0, "end": 353.0, "text": " The difference is that when you are on this commit here right here and you make a commit on top of this commit while what's called you've checked out the blob branch."}, {"start": 353.0, "end": 368.0, "text": " So right now you're looking at blah which is this commit and you make a commit on top of this commit what it will do automatically for you is it erase this flag and move it to this next commit."}, {"start": 368.0, "end": 389.0, "text": " So this is kind of so you might know branch is kind of from subversion or other version control technologies is very similar but in get a branch is simply like a tag is simply a name for a commit but with the additional property that when you make a commit on top of that commit."}, {"start": 389.0, "end": 399.0, "text": " So when it has the commit as it's parent then get will move the branch the little flag to the new commit."}, {"start": 399.0, "end": 413.0, "text": " So basically you always have that one branch which is called master it creates this automatically for you if you just have this it's a little flag master."}, {"start": 413.0, "end": 423.0, "text": " Right and you make a commit on top of master which would cause master to go here master and so on."}, {"start": 423.0, "end": 433.0, "text": " So people usually say they work on the master branch which means they're simply making commits on top of the commit that currently has the master flag."}, {"start": 433.0, "end": 449.0, "text": " It also allows you to move around both tags and these little branches basically to any commit so I could forcefully go erase this here and simply stick the master flag here."}, {"start": 449.0, "end": 472.0, "text": " And sometimes if we kind of if we kind of decide these two commits are no good we would simply do this we would simply take the master flag put it here and then we when we make a new commit on top of the master now what we would make is we make a new commit point here then get would move the master flag because it's a branch master."}, {"start": 472.0, "end": 482.0, "text": " And then we simply continue working here working here and get will happily move along this master."}, {"start": 482.0, "end": 501.0, "text": " So in get there is no need to actually delete commits or something like this what we can simply do is is kind of move the move the branch that we're working on to the commit we like and garbage collection ultimately will at some point go and delete these two commits."}, {"start": 501.0, "end": 515.0, "text": " This is a bit more difficult once you collaborate with other people because they might actually have made commits that reference the commits that you you just kind of deleted 
or so so."}, {"start": 515.0, "end": 542.0, "text": " It's a bit tricky but ultimately this this something you can do so the next thing we're going to talk about is multiple branches having multiple branches basically boils down to you have few commits you have your graph and let's say this is your master branches so here we have master."}, {"start": 542.0, "end": 570.0, "text": " But also or let's make them on before otherwise I don't have space master so what someone else would like to do like is say hey I want to like try out this new feature in the code it will probably change the code base and so on but I want to try it out maybe"}, {"start": 570.0, "end": 594.0, "text": " it'll introduce some bugs and so on and then what you can do is you can make a new branch f1 let's call the f1 for feature one so and then you can I can make a commit on top of feature one which are the move the feature one flag to here and so on I can make second and third commit and so on"}, {"start": 594.0, "end": 623.0, "text": " meanwhile the other people working on the project or maybe even you yourself can work on top of this commit of top of the master branch so in in kind of software engineering this is typically used when one part of the team wants to implement a new feature but the other part of the team kind of continues to do bug fixes or things like this development on the version of the software that doesn't yet have"}, {"start": 623.0, "end": 652.0, "text": " the new feature but they kind of need to fix bugs and since the new features not complete yet they can both work on the same code base so each work on on their own branch so to say and at the end when feature one kind of is ready that people say like OK we've implemented it it's all good there's no bugs"}, {"start": 652.0, "end": 666.0, "text": " we would like to integrate the feature one into the main software basically what you have to do is you would have to do a so called merge"}, {"start": 666.0, "end": 693.0, "text": " a merge is a process that generates a merge commit and a merge commit is this thing here as you notice is has more than one parent it has in this case two parents where it kind of combines so from this commit here both branches are based off of this commit"}, {"start": 693.0, "end": 717.0, "text": " and then changes were made individual changes in this branch and in this branch so there's a possibility that you know people change different things and what the merge commit needs to do somehow is to bring together these changes so actually both branches might have changed the same file"}, {"start": 717.0, "end": 744.0, "text": " but in a different way and now the question is how do we merge these different files and that's kind of the last topic we're going to today how does get to a merge so when you talk about merging Git has a bunch of built in algorithms that helps you with"}, {"start": 744.0, "end": 760.0, "text": " most of the time merging is automatic so if you have like files here a and b and one in one branch a is changed some here and in one branch b is changed"}, {"start": 760.0, "end": 787.0, "text": " Git simply assumes well this one branch has changed a the other one hasn't so the changes you know they mean something I'll take them and as well and b so basically whenever something has changed in one branch and not changed in the other it will it will assume that the changes are the thing that needs to continue to live basically"}, {"start": 787.0, "end": 811.0, "text": " it assumes that the changes were made for 
reason and that reason should you know continue so one might be a bug fix the other one might be the new feature the same goes in the same file so when you have the same file and one branch in one branch something on top is changed and the other branch something kind of on the bottom is changed"}, {"start": 811.0, "end": 840.0, "text": " and simply assumes both changes are wanted and takes takes both the only kind of time when it doesn't know what to do is when both branches change the same line so when I represent this with I don't know but when both branches change the same line in the same file kind of or close by"}, {"start": 840.0, "end": 869.0, "text": " so there is algorithms that get determines when there's a so-called merge conflict that's the only time where it doesn't know what to do and so as preliminary it's a good idea to structure files in a line based fashion especially if you write kind of lot of good practices to put every sentence on a new line and not have like giant lines of you know multiple sentences"}, {"start": 869.0, "end": 878.0, "text": " because if you put every sentence on a new line then you immediately kind of see where something was changed"}, {"start": 878.0, "end": 888.0, "text": " whereas if you have this big paragraph and get will simply tell you this line has changed which is an entire paragraph and you don't see what's happening"}, {"start": 888.0, "end": 914.0, "text": " so when you have a merge conflict it asks you what to do and it does this in a very simple way we're just going to kind of take a look here as a final demonstration so I have a I get repository here let me that so as you can see there's simply this test file in here and I've just made one commit to the initial commit"}, {"start": 914.0, "end": 934.0, "text": " and let's look at this test file it simply says hello so what I can do is for example I can say hi I can when I want to make a new commit first of all get that as well always tell you kind of what you can do what's happening in your repository"}, {"start": 934.0, "end": 954.0, "text": " here it says changes not staged for commit modified test.txt and it also tells you what you can do so it tells me for example use get checkout dash dash with the file name to discard changes or use get add to update what will be committed"}, {"start": 954.0, "end": 974.0, "text": " there's a I'll use get add with this so it tells me changes to be committed now it's green as you can see so when I now type get commit it should commit these changes"}, {"start": 974.0, "end": 996.0, "text": " and this is common occurrence and get whenever you see a text editor opening get expects you to type a text message like a commit message in this case that it like a log message basically that hashtags are comments which one not go in here this is all is all described right here actually in these comments"}, {"start": 996.0, "end": 1022.0, "text": " the thing about these things is when you type an empty message then get will abort commit so if you notice you've done something wrong you can simply save this file with empty being empty being nothing but commit comments basically get will abort so it's super useful I'll just say I added high and then save this file"}, {"start": 1022.0, "end": 1036.0, "text": " so this is not a special thing all you need to do this is a this is an editor a text editor that edits a text file you simply need to save the file and close the editor and get will be like okay cool I'll continue"}, {"start": 1036.0, "end": 1048.0, "text": 
" so with get log now you can see we have two commits we have my initial commit and we have the commit called added high if you look at the test file see high"}, {"start": 1048.0, "end": 1067.0, "text": " so what we'll do now is finally will make two branches as I as we've discussed before so this is my initial commit I've made one more commit and we're on branch master right now which gets that as we'll tell you see on branch master"}, {"start": 1067.0, "end": 1088.0, "text": " so this is now master what we'll do is will make a new branch called f1 will make a commit on f1 meaning we'll move this f1 then we'll make a commit on top of master like this which which means we'll move this master"}, {"start": 1088.0, "end": 1104.0, "text": " and then we will merge f1 back into master such that kind of this master is here and at the end we can even kind of remove the f1 branch"}, {"start": 1104.0, "end": 1114.0, "text": " and we'll do this while we're having a merge conflict so that you see the whole process so okay so what I want to do is"}, {"start": 1114.0, "end": 1133.0, "text": " first I want to make a branch f1 for this we can use check out minus b for making a new branch f1 if the branch already exists you simply need to check out which means I simply go to where this branch is to the commit that the branch references to"}, {"start": 1133.0, "end": 1157.0, "text": " we also say we put head where to the to this commit head is always the thing you're looking at basically the thing you've currently checked out so make a new branch f1 and we'll immediately switch to f1 if I type status it says on branch f1 it's the all the same commit but we're just in a different branch"}, {"start": 1157.0, "end": 1181.0, "text": " so we'll make kind of a change to this file here I'm gonna say hello cool save the file status it says it's modified I want to add and commit it and there's a shortcut commit minus a minus m"}, {"start": 1181.0, "end": 1210.0, "text": " so the ape simply says all the files that have changed add them so I don't need to add get add all the it changed files separately though this only counts for kind of changed files if you have completely new files that get isn't tracking yet you need to add them yourself but so I hear with the minus a I skip the need to first add the files and with the minus m I can give directly the commit message more oh cool"}, {"start": 1210.0, "end": 1238.0, "text": " so now what we've done is we're we have made this commit here and moved the f1 flag to this commit what we'll do now is we'll go back to this commit which is currently master branch and we'll make this commit so first what we need to do is we'll go back to some commit which is a check out check out master since master is still referring to that commit"}, {"start": 1238.0, "end": 1267.0, "text": " as you can see when I open the test file there's no hello it's the status from before hello I can now say I can now change the file in some other manner in this case I say hello because I want many ease and I can say I can commit this because I'm now on the branch master it will make this new commit here and move the master branch to that"}, {"start": 1267.0, "end": 1290.0, "text": " more e if you look at get log you see all these commits on this kind of branch you don't see the commit on the f1 branch for that I would have to go back f to the f1 branch I log and you see here it's a different story"}, {"start": 1290.0, "end": 1319.0, "text": " after the added high commit there's the more oh commit 
whereas up here after the added high commit there's the more e commit so merging and by the way merging also is not only happens when you have different branches but also because it's the same thing when you collaborate with other people and these people make commits and you make commits independent of each other and you kind of try to sync as your work often you need to do a merge"}, {"start": 1319.0, "end": 1340.0, "text": " and then a merge conflicts can also happen so what we do now is we can pack to master because we've oops get checkout master there's like shortcuts for all of these but so we're on this branch right here and we want"}, {"start": 1340.0, "end": 1364.0, "text": " what we're going to do is we want to make the merge commit and so we want to merge f1 into master so while I am on master I can say get merge f1 and it will try to merge but it will tell me conflict automatic merge failed fixed conflicts and then commit the result I can say get status"}, {"start": 1364.0, "end": 1385.0, "text": " and it will tell me you're currently merging you have on merge paths and this test.txt file is both branches modified it so I'm going to the test and this is very strange if you see for the first time but it's actually very intuitive so what it will do is"}, {"start": 1385.0, "end": 1414.0, "text": " wherever the line is that both branches have changed or wherever the block of lines is that both branches have changed it will basically indicate this with by writing directly into the file so it will make these smaller smaller smaller smaller smaller smaller than sign and then it says head which means this is the thing you're currently looking at which we know it's master has changed this first line to this"}, {"start": 1414.0, "end": 1443.0, "text": " hellow then it will be like equal equal equal equal and then it will say down here will say the f1 branch has changed this line the same line to hello and it will able to notice with like the end of this with like larger larger larger larger greater than signs so what you need to do to in order to merge is simply you need to make this file as you wish it is in the merged state"}, {"start": 1443.0, "end": 1469.0, "text": " first of all you can always start by kind of removing actually good practice maybe to remove these equal lines and then within this the limiters kind of change what you how you want the file to look in essence I simply want to have these o's here at the end"}, {"start": 1469.0, "end": 1490.0, "text": " I just want to many like this or like this I like that so I'm going to call that the merge and then I delete these lines so this is the file that I would like to the merged commit to have so what I can do I save this file"}, {"start": 1490.0, "end": 1518.0, "text": " again I say get status it still tells me it's un merged but it tells me what to do it says use get add to more resolution right I've resolved it get add test tasty get status and it says all conflicts fixed but you are still merging use get commit to conclude merge all right get commit"}, {"start": 1518.0, "end": 1539.0, "text": " then and I'm still I'm still have to enter a commit message which is already predefined here saying I merged the branch F1 and there's conflict the conflict but that's fine I like this message so I'm simply going to save the file right here"}, {"start": 1539.0, "end": 1559.0, "text": " and when I look into get log it now gives me the full story first I have this added high commit then I have the more O commit and the more e commit 
which were in parallel to each other and then I merged both branches into one so we're now right here"}, {"start": 1559.0, "end": 1582.0, "text": " what I can do now is I can even delete the F1 flag because I don't need it anymore basically which I do by get branch minus the F1 and it has deleted branch F1 so no commits are actually deleted when you delete the branch"}, {"start": 1582.0, "end": 1594.0, "text": " it's it's simply the little flag is deleted the only danger is when you delete the little flag the name and you're unable to reach the commit from any other"}, {"start": 1594.0, "end": 1605.0, "text": " but here of course we have this master and by following this edge here we can reach this commit just fine so get won't delete it or garbage collected"}, {"start": 1605.0, "end": 1621.0, "text": " but it will also tell you when or about to do something dangerous so don't worry alright so with this I think you should already have many tools or many many insights into Git"}, {"start": 1621.0, "end": 1634.0, "text": " and another video we're going to look at how to collaborate online with people which isn't much harder than this it's it's simply like two more steps to do to push and pull your work"}, {"start": 1634.0, "end": 1641.0, "text": " from a server together with other people alright so that was it take care"}] |
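The branch-and-merge walkthrough in the transcript above is easier to retrace as a concrete command sequence. The sketch below replays it by shelling out to git from Python in a throwaway directory. The file name test.txt and the branch name f1 come from the transcript; the exact file contents, commit messages, and the dummy user config are illustrative assumptions, and `git init -b` needs git 2.28 or newer.

```python
import os, pathlib, subprocess, tempfile

def git(*args):
    # Thin wrapper so every step mirrors the CLI commands from the transcript.
    print("$ git " + " ".join(args))
    return subprocess.run(["git", *args], capture_output=True, text=True)

os.chdir(tempfile.mkdtemp())
test = pathlib.Path("test.txt")

git("init", "-b", "master")                 # start on a branch named master
git("config", "user.email", "demo@example.com")
git("config", "user.name", "Demo")

test.write_text("hello\n")
git("add", "test.txt")
git("commit", "-m", "initial commit")

git("checkout", "-b", "f1")                 # new branch f1; HEAD now follows it
test.write_text("hellooo\n")                # change the line on f1
git("commit", "-a", "-m", "more o")         # -a stages already-tracked files

git("checkout", "master")                   # back to the commit master points at
test.write_text("helloeee\n")               # change the same line on master
git("commit", "-a", "-m", "more e")

print(git("merge", "f1").stdout)            # both branches edited the same line:
                                            # CONFLICT ... automatic merge failed
# Resolve: write the file exactly as the merged state should look,
# then stage it and conclude the merge, as `git status` suggests.
test.write_text("helloeeooo\n")
git("add", "test.txt")
git("commit", "-m", "merge f1 into master, conflict resolved")
git("branch", "-d", "f1")                   # deletes only the label, not any commit
```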
Yannic Kilcher | https://www.youtube.com/watch?v=iDulhoQ2pro | Attention Is All You Need | https://arxiv.org/abs/1706.03762
Abstract:
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
Authors:
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin | Hi there. Today we're looking at attention is all you need by Google. Just to declare I don't work for Google just because we're looking at Google papers lately. But it's just an interesting paper and we're going to see what's the deal with it. So basically what the authors are saying is we should kind of get away from basically RNNs. So traditionally what you would do and these authors particularly are interested in NLP natural language processing. So traditionally when you have like a language task like the cat eats the mouse and you'd like to translate this to say any other language like let's say German whatever what you would do is you would try to encode this sentence into a representation and then decode it again. So somehow somehow this sentence needs to all go into say one vector and then this one vector needs to somehow be transformed into the target language. So these are traditionally called sect to sect tasks and they have been solved so far using recurrent neural networks. You might know the LSTM networks that are very popular for these tasks. What basically happens in RNN is that you go over the say source sentence here one by one here you take the word the you kind of encode it maybe with a word vector if you know what that is. So you turn it into like a vector a word vector and then you use a neural network to turn this vector into what we call a hidden state. So this h0 is a hidden state. You then take the second token here cat you again take its word vector because you need to represent it with numbers somehow. So use word vectors for that. You turn this into you put it through the same function. So here is what it's like a little E for encoder. You turn it into the same function but this time this hidden state also gets plugged in here. So the word vector instead you can actually think of having like a start hidden state here h start usually people either learn this or just initialize with zeros that kind of goes in to the encoder function. So it's always really the same function. And from the previous hidden state and the current word vector the encoder again predicts another hidden state h1 and so on. So you take the next token you turn it into a word vector you put it through this thing little E encoder function and of course this is a lot more complicated in actual like say an LSTM but the basic principle behind it. So you end up with h2 and here it have h3 h4. And the last hidden state h4 here you would use this in kind of exactly the same fashion you would plug it into like a decoder little E decoder which would output you a word D and it would also output you an next hidden state. So h5. Let's say let's let's just go on with the with the and listing of the dates and this h5 would again go into the decoder which would output like so that's how you would decode you basically these RNNs what they do is they kind of take if you look on top here they take an input, a current input and they take the last hidden state and they compute a new hidden state. In the case of the decoder they take the hidden state and they take kind of the previous, usually the previous word that you output you also feed this back into the decoder and they will output the next word. It kind of makes sense. So you would guess that the hidden state kind of encodes what the sentence means and the last word that you output you need this because maybe for grammar right. 
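As a rough sketch of the recurrent encoding loop described above, with a single tanh layer standing in for the LSTM and made-up dimensions and random word vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                         # hidden / embedding size (arbitrary)
W_h, W_x = rng.normal(size=(d, d)), rng.normal(size=(d, d))

def encoder_step(h_prev, x_vec):
    # one "little e" step: previous hidden state + current word vector -> new hidden state
    return np.tanh(W_h @ h_prev + W_x @ x_vec)

word_vectors = [rng.normal(size=d) for _ in "the cat eats the mouse".split()]
h = np.zeros(d)                               # h_start, initialised with zeros
hidden_states = []
for x in word_vectors:
    h = encoder_step(h, x)
    hidden_states.append(h)
# h (the last hidden state) is all the decoder gets in the plain seq2seq setup,
# which is why information about early words has such a long path to travel.
```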
You know what you just output so kind of the next word should be based on that. Of course you don't have to have to do it exactly this way but that's kind of what what is RNNs did. So attention is a mechanism here to basically increase the performance of the RNNs. So what attention we do is in this particular case if we look at the decoder here if it's trying to predict this word for cat then or the next word here say here it wants the next word and in essence the only the only H6 the only information it really has is what the last word was. German word for cat and what the hidden state is. So if we look at what word it actually should output in the input sentence is this here eats right and if we look at kind of the the information flow that this word has to travel. So first it needs to encode into a word vector it needs to go through this encode that's the same function for all the words so nothing specific can be learned to the word eats here right. It needs to go through this hidden state traverse again into another step this hidden state because we have two more tokens and then the next hidden state and then it goes all the way to the decoder where the first two words are decoded and still so this H6 is hidden state somehow still needs to retain the information that now the word eats somehow is kind of the word to be translated and that the decoder should find the German word for that. So that's of course very very long path there's a lot of transformations involved over these all of these hidden states and the hidden states not only do they need to remember this particular word but all of the words and the order and so on and the grammar not quite the grammar you can actually learn with the decoder themselves but kind of the meaning and the structure of the sentence. So it's very hard for an RNN to learn all of this what we call long-range dependencies so naturally you actually think well why can't we just you know decode the first word to the first word, second word to the second word it actually works pretty well in this example right like the decad cod say eats the we could just decode one by one of course that's not how translation works and translations the sentences can become rearranged in the target language like one word can become many words or it could even be an entirely different expression. So attention is a mechanism by which this decoder here in this step that we're looking at can actually decide to go back and look at particular parts of the input especially what it would do in like popular attention mechanisms is that this decoder here can decide to attend to the hidden states of the input sentence. What that means is in this particular case we would like to teach the decoder somehow that uh-huh look here I need to pay close attention to this step here because that was the step when the word eats was just encoded so it probably has a lot of information about what I would like to do right now namely translate this word eats. So this mechanism allows if you look at the information flow it simply goes through this word vector goes through one encoding step and then is that the hidden state and then the decoder can look directly at that so the path length of information is much shorter than going through all the hidden states in a traditional way. 
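The addressing idea can be written out in a few lines of NumPy: the decoder emits a query, dot-products it against the encoder's hidden states, and takes a softmax-weighted sum. All tensors here are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 8, 5                                       # hidden size, source length (arbitrary)
encoder_states = rng.normal(size=(n, d))          # h1..hn from the source sentence
query = rng.normal(size=d)                        # produced by the decoder at this step

scores = encoder_states @ query                   # dot product of the query with each state
weights = np.exp(scores) / np.exp(scores).sum()   # softmax: close to one-hot on the best match
context = weights @ encoder_states                # information drawn directly from the
                                                  # relevant encoder step, one hop away
```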
So that's where attention helps and the way that the decoder decides what to look at is like a kind of an addressing scheme you may know it from neural touring machines or or kind of other other kind of neural algorithms things so what the decoder would do is in each step it would output a bunch of keys oops sorry about that that's my hand being trippy um so what it would output is a bunch of keys so k1 through k n and what what these keys would do is they would index these hidden kind of hidden states um via a kind of a softmax architecture and we're gonna look at this I think in the actual paper we're discussing because it's going to become more clear but just kind of notice that the decoder here can decide to attend to the input sentence and kind of draw information directly from there instead of having to go just to the hidden state it's provided with. So if we go to the paper here what do these authors propose and the thing is they they ditch the RNNs they basically say attention is all you need you don't need the entire recurrent things basically in every step of this decoder of this end basically of the decoding so you want to produce the target sentence so in this step in this step in this step you can basically you don't need the recurrence you can just kind of do attention over everything and um it will be fine namely what they do is they propose this transformer architecture so what does it do it has two parts what's called an encoder and the decoder but don't kind of be confused um because this all happens at once so this is not an order then it all happens at once every all the source sentence so if we again have the cap roops that doesn't work as easy let's just do this uh uh uh this is a source sentence and then we also have a target sentence that maybe we've produced two words and we want to produce this third word here I want to produce this so we would feed the entire source sentence and also the target sentence we produce so far to this network namely the source sentence would go into this part and the target that we've produced so far would go into this part and this is then all combined and um at the end we get an output here at the output probabilities that kind of tells us the probabilities for the next word so we can choose the top probability and then repeat the entire process so basically every step in production is one training sample every step in producing sentence here before with the RNNs the entire sentence to sentence translation is one sample because we need to back propagate through all of these RNN steps because they all happen kind of in sequence here basically output of one single token is one sample and then the computation is finished the back prop happens through everything only for this one step there is no multi-step kind of uh back propagation as an RNNs um and this is kind of a paradigm shift in sequence processing because people uh we're always convinced that you kind of need these recurrent things in order to to make good um to learn these dependencies but here they basically say na na na we can just do attention over everything and it'll actually be fine if we just do one step predictions so let's go one by one so here with an input embedding and say an output embedding these these are symmetrical um so basically the tokens just get embedded with say word vectors again then there's a positional encoding this is kind of a special thing um where because you now lose this kind of sequence nature of your algorithm you kind of need to encode where 
the words are that you push through the network so the network kind of goes ah this is a word at the beginning of the sentence or ah this is a word towards the end of the sentence or that it can compare two words like which one comes first which one comes second and you do this um it's pretty easy for the networks if you do it with kind of these uh trigonometric functioning embeddings so if if I draw your sine wave and I draw your sine wave of that is double as fast and I draw your sine wave that is even faster maybe this one actually sink one two three four five no it doesn't matter you know what I mean so um I can encode the first word I can encode the first position with all down and then the second position is kind of down down up and the third position is kind of up down and so on so this is kind of a uh continuous way of binary encoding of position so if I want to compare two words I can just look at all the scales of these things and I know aha this word one word has a high here and the other word is low here so they must be pretty far away like one must be at the beginning and one must be at the end um if they happen to match in this long rate long wave and they also are both kind of low on this wave and then I can look in this wave for like oh maybe they're close together but here I really get the information which ones first which one second so these are kind of position line codings they they're not critical to this algorithm um but they just encode where the words are which of course that is important and it gives the network a significant boost in performance but it's not like it's not the the meat of the thing the meat of the thing is that now that these encodings go into the networks they simply do what they call attention here attention here and attention here so there's kind of three kinds of attention um so basically the first attention on the bottom left is simply attention as you can see over the input sentence so it it I told you before you need to take this input sentence if you look over here and you somehow need to encode it into a hidden representation and this now looks it much more like the picture I drew here and the picture I drew right at the beginning is that all at once kind of put together this hidden representation and all you do is you use attention over the input sequence which basically means you kind of pick and choose which words you look at more or less um so with the bottom right so in the the output sentence that you've produced so far you simply encoded into kind of a hidden state and then the third on the top right that's the I think the sorry I got interrupted so as I was saying the top right is the most interesting part of the attention mechanism here um we're basically it unites the kind of encoder part with the kind of dec let's not it combines the source sentence with the target sentence that you've produced so far um so as you can see um maybe here I can just uh slightly annoying um but I'm just gonna remove these kind of circles here so if you can see here there's an output going from the part that encodes the source sentence and it goes into this multi-head attention there's two connections and there's also one connection coming from the encoded um output so far here and so there's three connections going into this and um we're gonna take a look at what these three connections are so the three connections here basically are the keys values and queries um if you see here the values and the keys are what is output by the encoding part of 
the source sentence and the query is output by the encoding part of the target sentence um and these are not only one value key and query so there are many uh in this kind of multi-head attention fashion so there are just many of them instead of one but you can think of and as it's just kind of sets um so the attention computed here is what does it do so first of all it uh calculates the adult product of the keys and the queries and then it does a softmax over this and then it multiplies it by the value so what does this do if you uh dot product the keys and the queries what what you would get is so as you know if you have two vectors and the dot their dot product basically gives you the angle between the vectors um with especially in high dimensions most vectors are going to be of uh kind of a 90 degree kind of uh I know the Americans do the little the little square um so most vectors are going to be not aligned very well so their dot product will kind of be zero-ish but if a key in the query actually align with each other like um if they point into the same directions their dot product will actually be large um so what you can think of this as the the keys are kind of here the keys are just a bunch of vectors in space um and each key has an associated value so each key there is kind of a table um value one value two value three this is really annoying if I do this over text right um so again here so we have a bunch of keys right in space and we have a table with values and each key here corresponds to value value one value two value three value four um so each key is associated with one of these values and then when we introduce a query what what can it do so a query will be a vector like this and we simply compute the so this is q this is the query we compute the dot product with each of the keys and um and then we compute a softmax over this which means that one key will basically be selected so in this case it will be probably this blue key here that has the biggest dot product with the query so this is key two in this in this case um and the softmax so if you don't know what a softmax is you have you have like x1 to x and b like some numbers then you simply do you map them to the exponential function um each one of them and but also each one of them you divide by the sum over over i of e to the x i so basically this is a renormalization basically you you do the exponential function of the numbers which of course this makes the kind of big numbers even bigger so basically um what you end up with is one of these numbers x1 through xn will become very big compared to the others and then you renormalize so basically one of them will be almost one and the other ones will be almost zero simply the the maximum function you can think of in a differentiable way i mean just want to select the biggest entry in this case here um we kind of select the key that aligns most with the query which in this case will be key two and then we when we multiply this softmax thing with the with the values um so this query this this inner product um if we multiply q with k2 as an inner product and um we take the the softmax over it softmax what we'll do is i'm going to draw it upwards here we're going to induce a distribution like this and if we multiply this by the value it will basically select value two um so this is this is kind of an indexing scheme into this memory of values and um this is what then the network uses to compute further things um using so you see the output here goes into kind of more 
layers of the neural network upwards uh so basically what what you can think what does this mean you can think of here's the whoops deep i want to delete this here you can think of this uh as basically the encoder of the source sentence um right here discovers interesting things that looks ugly it discovers interesting things about the about the the source sentence and it builds key value pairs and then the encoder of the target sentence builds the queries and together they give you kind of the next to next signal so it means that the network basically says he here's a bunch of things here's a here's a bunch of things about the source sentence that you might find interesting that's the values and the keys are ways to index the values so he says here's a bunch of things that are interesting which are the values and here is how you would address these things which is the keys and then the other part of the network builds the queries it says i would like to know certain things so think of the values like attributes like here here here is the name and the the the kind of tallness and the weight of a person right and the keys are like the the actual indexes like name height weight and then the the other part of the network can decide what do i want i actually want the name so the my query is the name it will be aligned with the key name and the corresponding value would be the name of the person you would like to describe so that's how kind of these networks work together and i think it's uh it's a pretty um ingenious it's not entirely new because it has been done of course before with all the differentiable touring machines and whatnot but it's pretty cool that this actually works and actually works kind of better than our events if you simply do this so they describe a bunch of other things here i i don't think they're too important um basically the the point that they make about this attention is that it reduces path lengths and kind of that's the the main reason why it should work better um with this entire attention mechanism you reduce the amount of computation steps that information has to flow from one point in the network to another and that what brings the major improvement because all the computation steps can make you lose information and you don't want that you want short path lengths and so that's that's what this method achieves and they claim that's why it's better and it works so well so they have uh experiments you can look at them they're really good at everything of course um of course you always have state of the art um and i think i will conclude here uh if you want to check it out yourself they have extensive code on gtaub where you can build your transformer networks and with that uh have a nice day and see ya | [{"start": 0.0, "end": 7.58, "text": " Hi there. Today we're looking at attention is all you need by Google. Just to declare I"}, {"start": 7.58, "end": 13.02, "text": " don't work for Google just because we're looking at Google papers lately. But it's just an"}, {"start": 13.02, "end": 19.46, "text": " interesting paper and we're going to see what's the deal with it. So basically what the"}, {"start": 19.46, "end": 30.22, "text": " authors are saying is we should kind of get away from basically RNNs. 
So traditionally what"}, {"start": 30.22, "end": 35.82, "text": " you would do and these authors particularly are interested in NLP natural language processing."}, {"start": 35.82, "end": 47.540000000000006, "text": " So traditionally when you have like a language task like the cat eats the mouse and you'd"}, {"start": 47.54, "end": 59.379999999999995, "text": " like to translate this to say any other language like let's say German whatever what you would"}, {"start": 59.379999999999995, "end": 66.06, "text": " do is you would try to encode this sentence into a representation and then decode it again."}, {"start": 66.06, "end": 74.82, "text": " So somehow somehow this sentence needs to all go into say one vector and then this one vector"}, {"start": 74.82, "end": 82.3, "text": " needs to somehow be transformed into the target language. So these are traditionally called"}, {"start": 82.3, "end": 90.25999999999999, "text": " sect to sect tasks and they have been solved so far using recurrent neural networks. You might"}, {"start": 90.25999999999999, "end": 97.5, "text": " know the LSTM networks that are very popular for these tasks. What basically happens in"}, {"start": 97.5, "end": 105.18, "text": " RNN is that you go over the say source sentence here one by one here you take the word the"}, {"start": 105.18, "end": 112.18, "text": " you kind of encode it maybe with a word vector if you know what that is. So you turn it into"}, {"start": 112.18, "end": 119.78, "text": " like a vector a word vector and then you use a neural network to turn this vector into"}, {"start": 119.78, "end": 131.58, "text": " what we call a hidden state. So this h0 is a hidden state. You then take the second token here"}, {"start": 131.58, "end": 138.3, "text": " cat you again take its word vector because you need to represent it with numbers somehow."}, {"start": 138.3, "end": 145.78, "text": " So use word vectors for that. You turn this into you put it through the same function. So here"}, {"start": 145.78, "end": 151.78, "text": " is what it's like a little E for encoder. You turn it into the same function but this time"}, {"start": 151.78, "end": 157.54, "text": " this hidden state also gets plugged in here. So the word vector instead you can actually think"}, {"start": 157.54, "end": 165.78, "text": " of having like a start hidden state here h start usually people either learn this or just"}, {"start": 165.78, "end": 170.62, "text": " initialize with zeros that kind of goes in to the encoder function. So it's always really"}, {"start": 170.62, "end": 178.98000000000002, "text": " the same function. And from the previous hidden state and the current word vector the encoder"}, {"start": 178.98000000000002, "end": 187.22, "text": " again predicts another hidden state h1 and so on. So you take the next token you turn it into"}, {"start": 187.22, "end": 195.3, "text": " a word vector you put it through this thing little E encoder function and of course this is"}, {"start": 195.3, "end": 201.54000000000002, "text": " a lot more complicated in actual like say an LSTM but the basic principle behind it. So you"}, {"start": 201.54000000000002, "end": 210.46, "text": " end up with h2 and here it have h3 h4. And the last hidden state h4 here you would use this"}, {"start": 210.46, "end": 217.02, "text": " in kind of exactly the same fashion you would plug it into like a decoder little E decoder"}, {"start": 217.02, "end": 228.3, "text": " which would output you a word D and it would also output you an next hidden state. 
So h5."}, {"start": 228.3, "end": 236.54000000000002, "text": " Let's say let's let's just go on with the with the and listing of the dates and this h5"}, {"start": 236.54000000000002, "end": 245.06, "text": " would again go into the decoder which would output like so that's how you would decode"}, {"start": 245.06, "end": 251.94, "text": " you basically these RNNs what they do is they kind of take if you look on top here they"}, {"start": 251.94, "end": 259.58, "text": " take an input, a current input and they take the last hidden state and they compute a new"}, {"start": 259.58, "end": 266.86, "text": " hidden state. In the case of the decoder they take the hidden state and they take kind"}, {"start": 266.86, "end": 273.86, "text": " of the previous, usually the previous word that you output you also feed this back into"}, {"start": 273.86, "end": 278.62, "text": " the decoder and they will output the next word. It kind of makes sense. So you would guess"}, {"start": 278.62, "end": 284.5, "text": " that the hidden state kind of encodes what the sentence means and the last word that"}, {"start": 284.5, "end": 291.38, "text": " you output you need this because maybe for grammar right. You know what you just output"}, {"start": 291.38, "end": 298.74, "text": " so kind of the next word should be based on that. Of course you don't have to have to"}, {"start": 298.74, "end": 307.82, "text": " do it exactly this way but that's kind of what what is RNNs did. So attention is a mechanism"}, {"start": 307.82, "end": 317.22, "text": " here to basically increase the performance of the RNNs. So what attention we do is in this"}, {"start": 317.22, "end": 326.42, "text": " particular case if we look at the decoder here if it's trying to predict this word for"}, {"start": 326.42, "end": 339.82, "text": " cat then or the next word here say here it wants the next word and in essence the only the"}, {"start": 339.82, "end": 348.66, "text": " only H6 the only information it really has is what the last word was. German word for cat"}, {"start": 348.66, "end": 355.22, "text": " and what the hidden state is. So if we look at what word it actually should output in the"}, {"start": 355.22, "end": 363.14000000000004, "text": " input sentence is this here eats right and if we look at kind of the the information flow"}, {"start": 364.02000000000004, "end": 369.46000000000004, "text": " that this word has to travel. So first it needs to encode into a word vector it needs to go"}, {"start": 369.46000000000004, "end": 374.58000000000004, "text": " through this encode that's the same function for all the words so nothing specific can be learned"}, {"start": 374.58000000000004, "end": 380.66, "text": " to the word eats here right. It needs to go through this hidden state traverse again into another"}, {"start": 380.66, "end": 386.18, "text": " step this hidden state because we have two more tokens and then the next hidden state and then it"}, {"start": 386.18, "end": 394.98, "text": " goes all the way to the decoder where the first two words are decoded and still so this H6 is"}, {"start": 394.98, "end": 402.58000000000004, "text": " hidden state somehow still needs to retain the information that now the word eats somehow is"}, {"start": 402.58, "end": 411.46, "text": " kind of the word to be translated and that the decoder should find the German word for that. 
So"}, {"start": 411.46, "end": 421.06, "text": " that's of course very very long path there's a lot of transformations involved over these"}, {"start": 421.7, "end": 426.18, "text": " all of these hidden states and the hidden states not only do they need to remember this particular"}, {"start": 426.18, "end": 433.62, "text": " word but all of the words and the order and so on and the grammar not quite the grammar you can"}, {"start": 433.62, "end": 439.06, "text": " actually learn with the decoder themselves but kind of the meaning and the structure of the sentence."}, {"start": 439.06, "end": 445.62, "text": " So it's very hard for an RNN to learn all of this what we call long-range dependencies"}, {"start": 447.7, "end": 453.14, "text": " so naturally you actually think well why can't we just you know decode the first word to the"}, {"start": 453.14, "end": 458.41999999999996, "text": " first word, second word to the second word it actually works pretty well in this example right"}, {"start": 458.41999999999996, "end": 465.3, "text": " like the decad cod say eats the we could just decode one by one of course that's not how translation"}, {"start": 465.3, "end": 471.14, "text": " works and translations the sentences can become rearranged in the target language like one word"}, {"start": 471.14, "end": 478.34, "text": " can become many words or it could even be an entirely different expression. So attention is a"}, {"start": 478.34, "end": 482.82, "text": " mechanism by which this decoder here in this step that we're looking at can actually"}, {"start": 482.82, "end": 490.98, "text": " decide to go back and look at particular parts of the input especially what it would do in"}, {"start": 490.98, "end": 501.38, "text": " like popular attention mechanisms is that this decoder here can decide to attend to the hidden"}, {"start": 501.38, "end": 507.86, "text": " states of the input sentence. What that means is in this particular case we would like to teach"}, {"start": 507.86, "end": 516.02, "text": " the decoder somehow that uh-huh look here I need to pay close attention to this step here"}, {"start": 516.02, "end": 522.26, "text": " because that was the step when the word eats was just encoded so it probably has a lot of"}, {"start": 522.26, "end": 531.0600000000001, "text": " information about what I would like to do right now namely translate this word eats. So"}, {"start": 531.06, "end": 538.9, "text": " this mechanism allows if you look at the information flow it simply goes through this word vector"}, {"start": 538.9, "end": 544.5799999999999, "text": " goes through one encoding step and then is that the hidden state and then the decoder can look"}, {"start": 544.5799999999999, "end": 551.3, "text": " directly at that so the path length of information is much shorter than going through all the hidden"}, {"start": 551.3, "end": 559.06, "text": " states in a traditional way. 
So that's where attention helps and the way that the decoder decides"}, {"start": 559.06, "end": 566.02, "text": " what to look at is like a kind of an addressing scheme you may know it from neural touring machines or"}, {"start": 568.42, "end": 576.9, "text": " or kind of other other kind of neural algorithms things so what the decoder would do is in each step"}, {"start": 576.9, "end": 585.6999999999999, "text": " it would output a bunch of keys oops sorry about that that's my hand being trippy"}, {"start": 585.7, "end": 600.26, "text": " um so what it would output is a bunch of keys so k1 through k n and what what these keys would do is"}, {"start": 600.26, "end": 611.7800000000001, "text": " they would index these hidden kind of hidden states um via a kind of a softmax architecture and"}, {"start": 611.78, "end": 617.22, "text": " we're gonna look at this I think in the actual paper we're discussing because it's going to become"}, {"start": 617.22, "end": 624.9, "text": " more clear but just kind of notice that the decoder here can decide to attend to the input sentence"}, {"start": 624.9, "end": 631.14, "text": " and kind of draw information directly from there instead of having to go just to the hidden state"}, {"start": 631.14, "end": 640.02, "text": " it's provided with. So if we go to the paper here what do these authors propose and the thing is they"}, {"start": 640.02, "end": 645.54, "text": " they ditch the RNNs they basically say attention is all you need you don't need the entire recurrent"}, {"start": 645.54, "end": 651.6999999999999, "text": " things basically in every step of this decoder of this end basically of the decoding so you want"}, {"start": 651.6999999999999, "end": 659.46, "text": " to produce the target sentence so in this step in this step in this step you can basically you"}, {"start": 659.46, "end": 667.78, "text": " don't need the recurrence you can just kind of do attention over everything and um it will be fine"}, {"start": 667.78, "end": 678.02, "text": " namely what they do is they propose this transformer architecture so what does it do it has two parts"}, {"start": 678.02, "end": 684.18, "text": " what's called an encoder and the decoder but don't kind of be confused um"}, {"start": 687.14, "end": 692.26, "text": " because this all happens at once so this is not an order then it all happens at once every"}, {"start": 692.26, "end": 702.02, "text": " all the source sentence so if we again have the cap roops that doesn't work as easy let's just"}, {"start": 702.02, "end": 708.8199999999999, "text": " do this uh uh uh this is a source sentence and then we also have a target sentence that maybe we've"}, {"start": 708.8199999999999, "end": 716.5, "text": " produced two words and we want to produce this third word here I want to produce this so we would"}, {"start": 716.5, "end": 724.5, "text": " feed the entire source sentence and also the target sentence we produce so far to this network"}, {"start": 724.5, "end": 730.9, "text": " namely the source sentence would go into this part and the target that we've produced so far"}, {"start": 730.9, "end": 740.1, "text": " would go into this part and this is then all combined and um at the end we get an output here"}, {"start": 740.1, "end": 747.5400000000001, "text": " at the output probabilities that kind of tells us the probabilities for the next word so we can choose"}, {"start": 747.5400000000001, "end": 756.1800000000001, "text": " the top probability and then repeat the entire process so basically every 
step in production is one"}, {"start": 756.1800000000001, "end": 762.1800000000001, "text": " training sample every step in producing sentence here before with the RNNs the entire sentence"}, {"start": 762.1800000000001, "end": 767.78, "text": " to sentence translation is one sample because we need to back propagate through all of these RNN"}, {"start": 767.78, "end": 776.42, "text": " steps because they all happen kind of in sequence here basically output of one single token is one"}, {"start": 776.42, "end": 782.26, "text": " sample and then the computation is finished the back prop happens through everything only for this"}, {"start": 782.26, "end": 791.62, "text": " one step there is no multi-step kind of uh back propagation as an RNNs um and this is kind of a paradigm"}, {"start": 791.62, "end": 798.42, "text": " shift in sequence processing because people uh we're always convinced that you kind of need these"}, {"start": 798.42, "end": 805.3, "text": " recurrent things in order to to make good um to learn these dependencies but here they basically"}, {"start": 805.3, "end": 811.7, "text": " say na na na we can just do attention over everything and it'll actually be fine if we just do one step"}, {"start": 812.58, "end": 821.14, "text": " predictions so let's go one by one so here with an input embedding and say an output embedding these"}, {"start": 821.14, "end": 827.6999999999999, "text": " these are symmetrical um so basically the tokens just get embedded with say word vectors again"}, {"start": 827.6999999999999, "end": 833.6999999999999, "text": " then there's a positional encoding this is kind of a special thing um where because you now"}, {"start": 834.34, "end": 839.9399999999999, "text": " lose this kind of sequence nature of your algorithm you kind of need to encode where the words are"}, {"start": 839.9399999999999, "end": 844.18, "text": " that you push through the network so the network kind of goes ah this is a word at the beginning"}, {"start": 844.18, "end": 849.22, "text": " of the sentence or ah this is a word towards the end of the sentence or that it can compare two words"}, {"start": 849.22, "end": 856.1800000000001, "text": " like which one comes first which one comes second and you do this um it's pretty easy for the"}, {"start": 856.1800000000001, "end": 862.6600000000001, "text": " networks if you do it with kind of these uh trigonometric functioning embeddings so if if I draw"}, {"start": 862.6600000000001, "end": 871.78, "text": " your sine wave and I draw your sine wave of that is double as fast and I draw your sine wave that"}, {"start": 871.78, "end": 879.86, "text": " is even faster maybe this one actually sink one two three four five no it doesn't matter you"}, {"start": 879.86, "end": 887.38, "text": " know what I mean so um I can encode the first word I can encode the first position with all down"}, {"start": 888.42, "end": 897.22, "text": " and then the second position is kind of down down up and the third position is kind of up down"}, {"start": 897.22, "end": 904.98, "text": " and so on so this is kind of a uh continuous way of binary encoding of position so if I want to"}, {"start": 904.98, "end": 911.94, "text": " compare two words I can just look at all the scales of these things and I know aha this word one"}, {"start": 911.94, "end": 917.0600000000001, "text": " word has a high here and the other word is low here so they must be pretty far away like one must"}, {"start": 917.0600000000001, "end": 923.86, "text": " be at the beginning and one 
must be at the end um if they happen to match in this long rate long"}, {"start": 923.86, "end": 931.86, "text": " wave and they also are both kind of low on this wave and then I can look in this wave for like oh"}, {"start": 931.86, "end": 936.98, "text": " maybe they're close together but here I really get the information which ones first which one second"}, {"start": 937.86, "end": 944.34, "text": " so these are kind of position line codings they they're not critical to this algorithm um but"}, {"start": 944.98, "end": 950.98, "text": " they just encode where the words are which of course that is important and it gives the"}, {"start": 950.98, "end": 958.26, "text": " network a significant boost in performance but it's not like it's not the the meat of the thing the"}, {"start": 958.26, "end": 968.4200000000001, "text": " meat of the thing is that now that these encodings go into the networks they simply do what they call"}, {"start": 968.4200000000001, "end": 975.7, "text": " attention here attention here and attention here so there's kind of three kinds of attention"}, {"start": 975.7, "end": 981.94, "text": " um so basically the first attention on the bottom left is simply attention as you can see"}, {"start": 982.5, "end": 988.4200000000001, "text": " over the input sentence so it it I told you before you need to take this input sentence if you"}, {"start": 988.4200000000001, "end": 998.1, "text": " look over here and you somehow need to encode it into a hidden representation and this now looks"}, {"start": 998.1, "end": 1002.1, "text": " it much more like the picture I drew here and the picture I drew right at the beginning is that"}, {"start": 1002.1, "end": 1009.46, "text": " all at once kind of put together this hidden representation and all you do is you use attention"}, {"start": 1009.46, "end": 1014.1800000000001, "text": " over the input sequence which basically means you kind of pick and choose which words you look at"}, {"start": 1014.1800000000001, "end": 1020.58, "text": " more or less um so with the bottom right so in the the output sentence that you've produced so"}, {"start": 1020.58, "end": 1028.26, "text": " far you simply encoded into kind of a hidden state and then the third on the top right that's the"}, {"start": 1028.26, "end": 1036.26, "text": " I think the sorry I got interrupted so as I was saying the top right is the most interesting part"}, {"start": 1036.26, "end": 1044.18, "text": " of the attention mechanism here um we're basically it unites the kind of encoder part with the kind"}, {"start": 1044.18, "end": 1051.78, "text": " of dec let's not it combines the source sentence with the target sentence that you've produced so far"}, {"start": 1051.78, "end": 1064.66, "text": " um so as you can see um maybe here I can just uh slightly annoying um but I'm just gonna remove"}, {"start": 1064.66, "end": 1074.26, "text": " these kind of circles here so if you can see here there's an output going from the part that encodes"}, {"start": 1074.26, "end": 1082.02, "text": " the source sentence and it goes into this multi-head attention there's two connections and there's"}, {"start": 1082.02, "end": 1093.46, "text": " also one connection coming from the encoded um output so far here and so there's three connections"}, {"start": 1093.46, "end": 1102.42, "text": " going into this and um we're gonna take a look at what these three connections are so the three"}, {"start": 1102.42, "end": 1113.78, "text": " connections here basically are the keys values and queries um 
if you see here the values and the keys"}, {"start": 1114.5, "end": 1122.18, "text": " are what is output by the encoding part of the source sentence and the query is output by the"}, {"start": 1122.18, "end": 1129.3000000000002, "text": " encoding part of the target sentence um and these are not only one value key and query so there are"}, {"start": 1129.3, "end": 1136.18, "text": " many uh in this kind of multi-head attention fashion so there are just many of them instead of one but"}, {"start": 1136.18, "end": 1144.34, "text": " you can think of and as it's just kind of sets um so the attention computed here is what does it do"}, {"start": 1144.34, "end": 1153.46, "text": " so first of all it uh calculates the adult product of the keys and the queries and then it does a"}, {"start": 1153.46, "end": 1161.8600000000001, "text": " softmax over this and then it multiplies it by the value so what does this do if you uh dot product"}, {"start": 1161.8600000000001, "end": 1168.58, "text": " the keys and the queries what what you would get is so as you know if you have two vectors"}, {"start": 1169.38, "end": 1177.22, "text": " and the dot their dot product basically gives you the angle between the vectors um with especially"}, {"start": 1177.22, "end": 1185.22, "text": " in high dimensions most vectors are going to be of uh kind of a 90 degree kind of uh I know the"}, {"start": 1185.22, "end": 1192.34, "text": " Americans do the little the little square um so most vectors are going to be not aligned very well"}, {"start": 1192.34, "end": 1199.38, "text": " so their dot product will kind of be zero-ish but if a key in the query actually align with each"}, {"start": 1199.38, "end": 1206.34, "text": " other like um if they point into the same directions their dot product will actually be large"}, {"start": 1206.34, "end": 1214.34, "text": " um so what you can think of this as the the keys are kind of here the keys are just a bunch of vectors"}, {"start": 1214.34, "end": 1227.06, "text": " in space um and each key has an associated value so each key there is kind of a table um value one"}, {"start": 1227.06, "end": 1236.02, "text": " value two value three this is really annoying if I do this over text right um so again here so we"}, {"start": 1236.02, "end": 1244.74, "text": " have a bunch of keys right in space and we have a table with values and each key here corresponds"}, {"start": 1244.74, "end": 1252.74, "text": " to value value one value two value three value four um so each key is associated with one of these"}, {"start": 1252.74, "end": 1260.1, "text": " values and then when we introduce a query what what can it do so a query will be a vector like this"}, {"start": 1260.1, "end": 1267.1399999999999, "text": " and we simply compute the so this is q this is the query we compute the dot product with each of the"}, {"start": 1267.1399999999999, "end": 1277.78, "text": " keys and um and then we compute a softmax over this which means that one key will basically be"}, {"start": 1277.78, "end": 1284.1799999999998, "text": " selected so in this case it will be probably this blue key here that has the biggest dot product"}, {"start": 1284.18, "end": 1293.14, "text": " with the query so this is key two in this in this case um and the softmax so if you don't know"}, {"start": 1293.14, "end": 1299.6200000000001, "text": " what a softmax is you have you have like x1 to x and b like some numbers then you simply do"}, {"start": 1300.42, "end": 1309.22, "text": " you map them to the exponential function 
um each one of them and but also each one of them you"}, {"start": 1309.22, "end": 1318.02, "text": " divide by the sum over over i of e to the x i so basically this is a renormalization basically you"}, {"start": 1318.02, "end": 1323.78, "text": " you do the exponential function of the numbers which of course this makes the kind of big numbers"}, {"start": 1323.78, "end": 1332.02, "text": " even bigger so basically um what you end up with is one of these numbers x1 through xn will become"}, {"start": 1332.02, "end": 1338.34, "text": " very big compared to the others and then you renormalize so basically one of them will be almost"}, {"start": 1338.34, "end": 1343.6999999999998, "text": " one and the other ones will be almost zero simply the the maximum function you can think of in a"}, {"start": 1343.6999999999998, "end": 1350.5, "text": " differentiable way i mean just want to select the biggest entry in this case here um we kind of"}, {"start": 1350.5, "end": 1355.4599999999998, "text": " select the key that aligns most with the query which in this case will be key two and then we"}, {"start": 1355.4599999999998, "end": 1364.34, "text": " when we multiply this softmax thing with the with the values um so this query this this inner product"}, {"start": 1364.34, "end": 1377.3, "text": " um if we multiply q with k2 as an inner product and um we take the the softmax over it softmax"}, {"start": 1377.3, "end": 1384.02, "text": " what we'll do is i'm going to draw it upwards here we're going to induce a distribution like this"}, {"start": 1385.1399999999999, "end": 1393.06, "text": " and if we multiply this by the value it will basically select value two um so this is this is"}, {"start": 1393.06, "end": 1401.7, "text": " kind of an indexing scheme into this memory of values and um this is what then the network uses to"}, {"start": 1401.7, "end": 1408.6599999999999, "text": " compute further things um using so you see the output here goes into kind of more layers of"}, {"start": 1408.6599999999999, "end": 1414.4199999999998, "text": " the neural network upwards uh so basically what what you can think what does this mean"}, {"start": 1414.42, "end": 1423.6200000000001, "text": " you can think of here's the whoops deep i want to delete this here you can think of this"}, {"start": 1423.6200000000001, "end": 1430.5800000000002, "text": " uh as basically the encoder of the source sentence um right here"}, {"start": 1432.26, "end": 1439.38, "text": " discovers interesting things that looks ugly it discovers interesting things about the"}, {"start": 1439.38, "end": 1448.3400000000001, "text": " about the the source sentence and it builds key value pairs and then the encoder of the target"}, {"start": 1448.3400000000001, "end": 1455.38, "text": " sentence builds the queries and together they give you kind of the next to next signal"}, {"start": 1456.2600000000002, "end": 1462.8200000000002, "text": " so it means that the network basically says he here's a bunch of things here's a here's a bunch of"}, {"start": 1462.82, "end": 1470.1, "text": " things about the source sentence that you might find interesting that's the values"}, {"start": 1472.1, "end": 1480.1799999999998, "text": " and the keys are ways to index the values so he says here's a bunch of things that are interesting"}, {"start": 1480.1799999999998, "end": 1485.46, "text": " which are the values and here is how you would address these things which is the keys"}, {"start": 1485.46, "end": 1494.3400000000001, "text": " and then the other 
part of the network builds the queries it says i would like to know certain things"}, {"start": 1495.38, "end": 1502.18, "text": " so think of the values like attributes like here here here is the name and the the the kind of"}, {"start": 1502.18, "end": 1508.5, "text": " tallness and the weight of a person right and the keys are like the the actual indexes like name"}, {"start": 1508.5, "end": 1516.34, "text": " height weight and then the the other part of the network can decide what do i want i actually want"}, {"start": 1516.34, "end": 1522.02, "text": " the name so the my query is the name it will be aligned with the key name and the corresponding"}, {"start": 1522.02, "end": 1526.66, "text": " value would be the name of the person you would like to describe so that's how kind of these"}, {"start": 1526.66, "end": 1533.78, "text": " networks work together and i think it's uh it's a pretty um ingenious it's not entirely new because"}, {"start": 1533.78, "end": 1539.22, "text": " it has been done of course before with all the differentiable touring machines and whatnot but"}, {"start": 1539.86, "end": 1545.54, "text": " it's pretty cool that this actually works and actually works kind of better than our events"}, {"start": 1546.74, "end": 1554.98, "text": " if you simply do this so they describe a bunch of other things here i i don't think they're too"}, {"start": 1554.98, "end": 1561.46, "text": " important um basically the the point that they make about this attention is that it reduces path"}, {"start": 1561.46, "end": 1569.22, "text": " lengths and kind of that's the the main reason why it should work better um with this entire"}, {"start": 1569.22, "end": 1575.7, "text": " attention mechanism you reduce the amount of computation steps that information has to flow"}, {"start": 1575.7, "end": 1582.26, "text": " from one point in the network to another and that what brings the major improvement because all"}, {"start": 1582.26, "end": 1588.42, "text": " the computation steps can make you lose information and you don't want that you want short path"}, {"start": 1588.42, "end": 1595.22, "text": " lengths and so that's that's what this method achieves and they claim that's why it's better"}, {"start": 1595.94, "end": 1603.3000000000002, "text": " and it works so well so they have uh experiments you can look at them they're really good at everything"}, {"start": 1603.3000000000002, "end": 1612.5800000000002, "text": " of course um of course you always have state of the art um and i think i will conclude here"}, {"start": 1612.58, "end": 1619.3, "text": " uh if you want to check it out yourself they have extensive code on gtaub where you can build your"}, {"start": 1619.3, "end": 1649.1399999999999, "text": " transformer networks and with that uh have a nice day and see ya"}] |
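To make the two mechanisms from this video concrete, here is a small NumPy sketch of the sinusoidal position codes and of scaled dot-product attention, softmax(Q K^T / sqrt(d_k)) V. The 10000 base and the formula follow the paper; the random projections, sequence length, and model size are illustrative.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    # Interleaved sine/cosine waves of geometrically increasing wavelength,
    # giving every position a unique, comparable code.
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / (10000 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2], pe[:, 1::2] = np.sin(angles), np.cos(angles)
    return pe

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V  -- the key/query routing described above
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(2)
x = rng.normal(size=(6, 16)) + positional_encoding(6, 16)    # 6 tokens, d_model = 16
Q, K, V = (x @ rng.normal(size=(16, 16)) for _ in range(3))  # separate random projections
out = scaled_dot_product_attention(Q, K, V)                  # one single-head attention layer
```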
Yannic Kilcher | https://www.youtube.com/watch?v=-YiMVR3HEuY | Reinforcement Learning with Unsupervised Auxiliary Tasks | https://arxiv.org/abs/1611.05397
Abstract:
Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth.
Authors:
Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu | Hi there. Today we're looking at reinforcement learning with unsupervised auxiliary tasks by Google. So in this paper, the authors consider a reinforcement learning task and I can show you what it looks like. It looks like this kind of a maze, or this is an example that they give, where you have to navigate the maze. It's 3D and you have to navigate from pixel inputs. You have to collect apples and reach the goal and this gives you rewards. So on the left you can see what the agent actually sees, on the right you can see it from the top-down view. The problem is of course that the input is very, or the reward is very sparse, meaning that you have to navigate a lot of the maze before you even get a single point. So reinforcement learning has big trouble with this because it relies on constant reward to notice what actions are good and what actions are bad. So the authors propose that in addition to the regular loss that you would have, so your reward, which is this thing, you would also have an additional set of auxiliary tasks in here. The index c goes over the auxiliary control tasks that you specify. Each of those has a reward and you're also trying to maximize these, each with some kind of a weight here. And the thing is that the parameters that you maximize over control all of the different tasks. So they are partly shared between the tasks. So what you're hoping is that by kind of learning to do one thing, you also learn to do another thing. So the difference between this and, let's say, what has come before: we've seen kind of work like this before where you do it in more like an auto encoder setting. So for example, the agent sees the input on the left here and it kind of tries to predict what the next input will be, what the next frame will be. The thought behind this is if you can accurately predict what the next frame will be, maybe it learns something useful about the environment. In this work, it's different because now we couple a reward to these tasks. And I can show you here what the authors propose as additional reward. Sorry, I'm just going to go there. Specifically they consider these two auxiliary control tasks. So pixel changes, which means that the agent actually tries to actively change pixels. So it gets a reward for changing the pixels in the input. So it tries to maximize this. It needs to learn what it needs to do to maximize its pixel changes. And probably that will be moving around. So it will learn to kind of move around, not to move against the wall because if it moves against the wall, the pixels won't change. So it will kind of learn to move along, like how a regular human agent would also move, not into a wall, like into a dead end or something, such that the pixels always change. Of course, it's not perfect. You can also change your pixels quite a bit by simply spinning around in a circle. But this is one auxiliary task that they augment the agent with. The other one is network features. So it's kind of a meta learning here. You actually reward the agent for changing its own internal activations. So the hope is that it kind of learns something about itself. How can I activate my internal neural network units? And it gets rewarded for that. So you might want to activate a lot of them and want to learn how they're activated. So this is kind of self-introspection.
You also hope that it kind of leads to a network that does more sophisticated tasks, or that by nature of trying to get the most pixel changes and the most network feature activations, you also learn something useful for the actual task. So these are the two tasks they propose. In addition, they also do, and they have that drawing of this over here, they also do a lot of other things. Namely, on the top left, you can kind of see here, what's it called, the base agent. This is an A3C agent, meaning that it's an actor-critic. So you learn a policy and you learn a value network. We might go over this in future videos. So just consider this a standard reinforcement learning agent. You feed its experience into a replay buffer. And out of the replay buffer, you do many things. So for one, you try to learn these auxiliary tasks. Note that these are shared parameters between all of these networks. That's why the auxiliary tasks actually help. You also try to better learn your value function, and this makes all of this off-policy learning because you kind of pause the base agent training for a while. And then you train value functions some more just because it helps. You also try reward prediction in here. And the way they do it, as they explain, is kind of a skewed sampling way. So out of all the situations you can be in, the agent will have a reward very, very few times. So what they do is they simply sample out of the replay buffer, out of all the experiences they've had so far, and they sample more frequently the experiences where they actually got a reward. That way, the hope is, of course, that the agent, if you look at, I mean, zoom in here, if you look at the experience here where you actually get an apple, then the agent might learn a lot faster. Oh, there's some kind of apple there and if I move towards it I get a reward. So that's the hope, that you instantly recognize high reward situations and kind of are not so interested in non-reward situations. Of course, it does introduce bias in your sampling. And you might decide for yourself if that's good or bad. Here it seems to work. So they have a lot of experiments in this task and this Labyrinth task and they, of course, as a result of research, they reach state of the art. They're much better than anything else. No, I mean, they don't boast as much. So it's actually fair comparisons. The criticisms, so they also evaluate on Atari, the criticisms that I have are twofold. First of all, the choice of auxiliary tasks is, of course, completely up to the implementer, which means that I have to decide as an implementer of this algorithm what my auxiliary tasks will be. And here, pixel changes and network features, they seem like fairly general tasks that you could apply to a lot of these kinds of problems. But it always kind of comes down to how much knowledge about the task would you like to encode into the agent? And here, I mean, you can see it makes sense to at least have the pixel changes as an auxiliary task. But it's questionable how much kind of domain knowledge this already encodes. So the choice of these is certainly something that you have to decide as a human. And I think these are good choices. So they're not too domain specific, but also they do correspond to like some visual moving-around game task. And the other kind of criticism, so it's not really a criticism, it's just a remark, is that they do a lot of things.
So I mean, their papers about the auxiliary tasks, but they also then do these skewed sampling and the policy value learning and so on. And of course, you can kind of argue, yeah, this is all done in, you know, the reinforcement learning tasks. That's why it's a fair comparison. I guess it's a philosophical question. If you want to reach state of the art, of course, you have to first of all get a better better method here. This will be the auxiliary tasks. This is the new idea. And then implement all the tricks that the other people have discovered, which is good because you kind of reach the highest performance you can get. But also, the problem is you make it harder to compare. You make it harder to see where the improvement is coming from. Have you simply chosen better hyper parameters for the reward predictions of thing? Have you simply, is there an interaction maybe between the auxiliary tasks and skewed sampling part? All of these kind of things wash out and it's not really clear where the improvement is coming from. On the other hand, if you simply take a basic, basic algorithm, like just a3c here on the top left and you augment it with nothing but these auxiliary tasks of the bottom left. Then, and then you see an improvement, you can be relatively sure. It's due to your new idea. But of course, you won't reach any state of the art numbers because everyone that does a3c also does these tricks. Philosophical question here, I'm standing more on the side of not doing the tricks or maybe doing both. Yeah, but decide for yourself and have a nice day. | [{"start": 0.0, "end": 6.48, "text": " Hi there. Today we're looking at reinforcement learning with unsupervised auxiliary tasks"}, {"start": 6.48, "end": 13.56, "text": " by Google. So in this paper, the authors consider a reinforcement learning task and I can"}, {"start": 13.56, "end": 20.64, "text": " show you what it looks like. It looks like this kind of a maze or this is an example that"}, {"start": 20.64, "end": 26.04, "text": " they give where you have to navigate the maze. It's 3D and you have to navigate from"}, {"start": 26.04, "end": 31.56, "text": " pixel inputs. You have to collect apples and reach the goal and this gives you rewards."}, {"start": 31.56, "end": 36.36, "text": " So on the left you can see what the Asians actually see on the right. You can see it"}, {"start": 36.36, "end": 44.04, "text": " from the top-down view. The problem is of course that the input is very, or the reward is"}, {"start": 44.04, "end": 51.6, "text": " very sparse, meaning that you have to navigate a lot of maze before you even get a single"}, {"start": 51.6, "end": 58.2, "text": " point. So reinforcement learning has a big trouble with this because it relies on constant"}, {"start": 58.2, "end": 64.6, "text": " reward to notice what actions are good and what actions are bad. So the authors proposes"}, {"start": 64.6, "end": 74.92, "text": " in addition to the regular loss that you would have. So your reward, which is this thing,"}, {"start": 74.92, "end": 83.88, "text": " you would also have an additional set of auxiliary tasks in here. See goes over the auxiliary"}, {"start": 83.88, "end": 90.12, "text": " control tasks that you specify. Each of those has a reward and you're also trying to maximize"}, {"start": 90.12, "end": 97.0, "text": " these each with some kind of a weight here. And the thing is that the parameters that you"}, {"start": 97.0, "end": 103.28, "text": " maximize over control all of the different tasks. 
So they are partly shared between the"}, {"start": 103.28, "end": 108.64, "text": " tasks. So what you're hoping is that by kind of learning to do one thing, you also learn"}, {"start": 108.64, "end": 117.52, "text": " to do another thing. So the difference between this and let's say, it might have, so we've"}, {"start": 117.52, "end": 125.08, "text": " seen kind of work like this before where you do it in more like an auto encoder setting."}, {"start": 125.08, "end": 132.28, "text": " So for example, you can't agencies the input on the left here and it kind of tries to predict"}, {"start": 132.28, "end": 136.44, "text": " what the next input will be, what the next frame will be. The thought behind this is if"}, {"start": 136.44, "end": 140.68, "text": " you can accurately predict what the next frame will be, maybe it learns something useful"}, {"start": 140.68, "end": 148.76, "text": " about the environment. In this work, it's different because now we couple a reward to"}, {"start": 148.76, "end": 157.76, "text": " these tasks. And I can show you here what the authors propose as additional reward. Sorry,"}, {"start": 157.76, "end": 167.12, "text": " I'm just going to go there. Especially they consider these two auxiliary control tasks."}, {"start": 167.12, "end": 176.79999999999998, "text": " So pixel changes, which means that the agent actually tries to actively change pixels."}, {"start": 176.79999999999998, "end": 183.88, "text": " So it gets a reward for changing the pixels and the input. So it tries to maximize this."}, {"start": 183.88, "end": 189.51999999999998, "text": " It needs to learn what they need to do to maximize my pixel changes. And probably that"}, {"start": 189.51999999999998, "end": 194.4, "text": " will be moving around. So it will learn to kind of move around, not to move against the"}, {"start": 194.4, "end": 200.07999999999998, "text": " wall because if it moves against the wall, the pixels won't change. So it will kind of"}, {"start": 200.07999999999998, "end": 211.44, "text": " learn to move along the, like a, how a regular human agent would also move, not into a wall,"}, {"start": 211.44, "end": 216.64, "text": " like into a dead end or something, such that the pixels always change. Of course, it's"}, {"start": 216.64, "end": 223.56, "text": " not perfect. You can also change your pixels quite a bit by simply spinning around in a circle."}, {"start": 223.56, "end": 228.84, "text": " But this is one auxiliary tasks that they augment the agent with. The other one is network"}, {"start": 228.84, "end": 240.07999999999998, "text": " features. So it's kind of a meta learning here. You actually reward the agent for changing"}, {"start": 240.08, "end": 249.12, "text": " its own internal activations. So the hope is that it kind of learns about something about"}, {"start": 249.12, "end": 257.52000000000004, "text": " itself. How can I activate my internal neural network units? And it gets rewarded for that."}, {"start": 257.52000000000004, "end": 261.96000000000004, "text": " So you might want to activate a lot of them and want to learn how they're activated."}, {"start": 261.96000000000004, "end": 268.88, "text": " So this kind of self-interspection. 
You also hope that it kind of leads to a network that"}, {"start": 268.88, "end": 278.48, "text": " does more sophisticated tasks or that by nature of trying to get most pixel changes and"}, {"start": 278.48, "end": 284.52, "text": " the most network feature activations, that you also learn something useful for the actual"}, {"start": 284.52, "end": 293.28, "text": " task. So these are the two tasks they propose. In addition, they also do, and they have"}, {"start": 293.28, "end": 301.84, "text": " that drawing of this over here. They also do a lot of other things. Namely, on the top"}, {"start": 301.84, "end": 308.23999999999995, "text": " left, you can kind of see here that what's it called, the database agent. This is an A3C"}, {"start": 308.23999999999995, "end": 315.71999999999997, "text": " agent, meaning that it's an active critic. So you learn a policy and you learn a value"}, {"start": 315.71999999999997, "end": 321.03999999999996, "text": " network. We might over listen to future videos. So just consider this a standard reinforcement"}, {"start": 321.04, "end": 327.6, "text": " learning agent. You feed its experience into a replay buffer. And out of the replay"}, {"start": 327.6, "end": 336.24, "text": " buffer, you do many things. So for one, you try to learn these auxiliary tasks. Note that"}, {"start": 336.24, "end": 341.76, "text": " these are shared parameters between all of these networks. That's why the auxiliary tasks"}, {"start": 341.76, "end": 348.68, "text": " actually help. You also try to better learn your value function and make all this off"}, {"start": 348.68, "end": 357.24, "text": " to have policy learning because you kind of pause the base agent training for a while."}, {"start": 357.24, "end": 364.32, "text": " And then you train value functions on more just because of the helps. You also try a reward"}, {"start": 364.32, "end": 370.72, "text": " prediction in here. And the way they do it, as they explain, is kind of an excuse sampling"}, {"start": 370.72, "end": 379.92, "text": " way. So out of all the situations you can be in, the agent will have a reward very, very"}, {"start": 379.92, "end": 386.04, "text": " few times. So what they do is they simply sample out of the replay buffer, out of all the"}, {"start": 386.04, "end": 392.8, "text": " experiences they've had so far. They sample more frequently the experiences where they"}, {"start": 392.8, "end": 399.40000000000003, "text": " actually gotten a reward. That way that the hope is, of course, the agent, if you look"}, {"start": 399.4, "end": 406.59999999999997, "text": " at, I mean, zoom in here, if you look at the experience here where you actually get"}, {"start": 406.59999999999997, "end": 413.2, "text": " an apple, then the agent might learn a lot faster. Oh, there's some kind of apple there"}, {"start": 413.2, "end": 421.2, "text": " and I move towards get a reward. So that's the hope that you instantly recognize high reward"}, {"start": 421.2, "end": 427.79999999999995, "text": " situations and kind of are not so interested in non-reward situations. Of course, it doesn't"}, {"start": 427.8, "end": 433.92, "text": " introduce bias in your sampling. And you might decide for yourself if that's good or bad."}, {"start": 433.92, "end": 442.2, "text": " Here it seems to work. So they have a lot of experiments in this task and this labyrinth"}, {"start": 442.2, "end": 449.8, "text": " task and they, of course, as a result of research, they reach state of the art. 
They're much"}, {"start": 449.8, "end": 455.48, "text": " better than anything else. No, I mean, they don't boast as much. So it's actually fair"}, {"start": 455.48, "end": 463.28000000000003, "text": " comparisons. The criticisms, so they also evaluate on a target against the criticisms that I"}, {"start": 463.28000000000003, "end": 469.6, "text": " have or twofold. First of all, the choice of auxiliary tasks is, of course, completely"}, {"start": 469.6, "end": 478.36, "text": " up to the implementer, which means that I have to decide as an implementer of this algorithm"}, {"start": 478.36, "end": 482.92, "text": " what my auxiliary task will be. And here, pixel changes and network features, they seem"}, {"start": 482.92, "end": 488.6, "text": " like fairly general tasks that you could apply to a lot of these kind of problems. But it"}, {"start": 488.6, "end": 495.92, "text": " always kind of comes down to how much knowledge about the task. Would you like to quote into"}, {"start": 495.92, "end": 502.72, "text": " the actor? And here, I mean, you can see it makes sense to get at least that the pixel"}, {"start": 502.72, "end": 509.48, "text": " changes as an auxiliary task. But it's questionable how much of kind of domain knowledge this"}, {"start": 509.48, "end": 516.96, "text": " already encodes. So the fact that choice of these are certainly something that you have"}, {"start": 516.96, "end": 524.36, "text": " to decide as a human. And I think these are good choices. So they're not too domain specific,"}, {"start": 524.36, "end": 534.64, "text": " but also they do correspond to like some visual moving around game task. And the other kind"}, {"start": 534.64, "end": 542.3199999999999, "text": " of criticism, so really criticism is just that remark is that they do a lot of a lot of"}, {"start": 542.3199999999999, "end": 548.8, "text": " things. So I mean, their papers about the auxiliary tasks, but they also then do these skewed"}, {"start": 548.8, "end": 554.4399999999999, "text": " sampling and the policy value learning and so on. And of course, you can kind of argue,"}, {"start": 554.4399999999999, "end": 561.6, "text": " yeah, this is all done in, you know, the reinforcement learning tasks. That's why it's a fair"}, {"start": 561.6, "end": 568.5600000000001, "text": " comparison. I guess it's a philosophical question. If you want to reach state of the art, of"}, {"start": 568.5600000000001, "end": 574.5600000000001, "text": " course, you have to first of all get a better better method here. This will be the auxiliary"}, {"start": 574.5600000000001, "end": 584.64, "text": " tasks. This is the new idea. And then implement all the tricks that the other people have discovered,"}, {"start": 584.64, "end": 589.8000000000001, "text": " which is good because you kind of reach the highest performance you can get. But also,"}, {"start": 589.8, "end": 597.1999999999999, "text": " the problem is you make it harder to compare. You make it harder to see where the improvement"}, {"start": 597.1999999999999, "end": 602.8, "text": " is coming from. Have you simply chosen better hyper parameters for the reward predictions"}, {"start": 602.8, "end": 609.64, "text": " of thing? Have you simply, is there an interaction maybe between the auxiliary tasks and skewed"}, {"start": 609.64, "end": 614.5999999999999, "text": " sampling part? All of these kind of things wash out and it's not really clear where"}, {"start": 614.6, "end": 621.4, "text": " the improvement is coming from. 
On the other hand, if you simply take a basic, basic algorithm,"}, {"start": 621.4, "end": 628.0400000000001, "text": " like just a3c here on the top left and you augment it with nothing but these auxiliary tasks"}, {"start": 628.0400000000001, "end": 633.96, "text": " of the bottom left. Then, and then you see an improvement, you can be relatively sure."}, {"start": 633.96, "end": 638.64, "text": " It's due to your new idea. But of course, you won't reach any state of the art numbers"}, {"start": 638.64, "end": 647.36, "text": " because everyone that does a3c also does these tricks. Philosophical question here, I'm"}, {"start": 647.36, "end": 654.88, "text": " standing more on the side of not doing the tricks or maybe doing both. Yeah, but decide"}, {"start": 654.88, "end": 671.04, "text": " for yourself and have a nice day."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=56GW1IlWgMg | Learning model-based planning from scratch | https://arxiv.org/abs/1707.06170
Abstract:
Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce the "Imagination-based Planner", the first model-based, sequential decision-making agent that can learn to construct, evaluate, and execute plans. Before any action, it can perform a variable number of imagination steps, which involve proposing an imagined action and evaluating it with its model-based imagination. All imagined actions and outcomes are aggregated, iteratively, into a "plan context" which conditions future real and imagined actions. The agent can even decide how to imagine: testing out alternative imagined actions, chaining sequences of actions together, or building a more complex "imagination tree" by navigating flexibly among the previously imagined states using a learned policy. And our agent can learn to plan economically, jointly optimizing for external rewards and computational costs associated with using its imagination. We show that our architecture can learn to solve a challenging continuous control problem, and also learn elaborate planning strategies in a discrete maze-solving task. Our work opens a new direction toward learning the components of a model-based planning system and how to use them.
Authors:
Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, Sebastien Racanière, David Reichert, Théophane Weber, Daan Wierstra, Peter Battaglia | Hi there. Today we're taking a look at learning model-based planning from scratch by DeepMind. So as a recap, what is model-based planning? Basically, a model, also called an environment model, is just kind of a black box thing you can imagine, where you have a state of your current environment. You put it in there and you have an action that you want to take. You put it in there as well. And the environment model tells you what the new state, s prime here, and possibly also the new reward for taking that action is going to be. So of course, it's always good to have such an environment model because you can use it to plan ahead. But the authors here question how do you plan, and propose a new algorithm to learn this planning. For now, people have mostly used kind of heuristics to plan, either things like A-star search, where you have a maze and you want to go here, and you kind of have a heuristic, say the distance between the two points, but there's kind of walls in between. So you try to go there, but then there's a wall and you kind of explore around it. So these are kind of the techniques that have existed so far. Also, we've seen stuff like Monte Carlo tree search for AlphaGo and other things like this that are not really learned. So this paper proposes a mechanism to learn how to plan using such a model. So basically they propose an algorithm or a framework, you can say, where they have this, what you see here; this schematic tells you that you have this thing called a manager. Let me just quickly bring up my comment thingy thing. All right. So you can see here, there's this kind of manager and this manager can decide to imagine or act. If it acts, then it simply takes kind of the current state and all the things that happened so far, and decides an action to do in the world. And then it kind of trains on the action like classic reinforcement learning. But if it decides to imagine, it can use its model of the world, its imagination model, to perform an action and see what would happen if it did that action. And it can then also add that to the memory and use it to further learn; even though it didn't do the action, it can imagine what happens. And how can it imagine? The authors in particular propose different methods of imagining. This graph you see here shows the proposed methods. The first two methods, basically, so here every row is a method of imagining: the first method, the one step imagining, simply means you have the current state of the world, which is the gray blob here. And what you do is you always go from the current state of the world, imagine one step ahead. So basically you select the state to imagine from, you imagine one step. And if you decide to not take an action after that, but imagine again, because maybe you're not sure yet what you want to do, so you imagine another action, you would again go from this initial state. So this, in the horizontal direction, is time, internal time. Basically, you would again go from this state, imagine another action based on it, and so on, imagine another action, until you're satisfied you've imagined enough so you can actually take a real world step. In contrast, the end step strategy, so these are hard coded strategies, as you can see. The learned part is which action should I take, the hard coded part is where do I base this action off?
The end step strategy also selects the first state, at first, imagines one action on top of it, but then always selects that new imagined action. So you can see here it selects this one to propose this action, and then it selects that imagined action to propose yet another action. So you can see it kind of imagines one path into the future, instead of many paths, just one step ahead. And then lastly, this imagination tree strategy is basically the only one that's actually kind of a learned strategy, where the manager can now propose any previously imagined or real world states in order to imagine from. So you always have the current world state, which is the first node in the graph, you select it, of course, at the beginning there's no choice, you imagine an action on top of it, but then you can select any of these two nodes to imagine from and here again, the first is selected, an action is imagined, then you have three nodes, you can choose any of those where you want to imagine the next step. And here, I mean, in this example, the manager selects this state right here and decides to imagine another action on top of it until it is satisfied and can then actually go over to actually perform an action in the real world. And if you then decide to do an action in the real world, what you can do is you can take all of the things you've imagined and use that. So you see in this pathway here, this flows back to the manager at some point, it decides, okay, I've imagined enough, and it can use all of these imagined steps in order to take a real world step. And after the real world step, the entire thing starts again. So that's how it learns to plan. Especially interesting, of course, is this imagination tree strategy, where it actually learns to plan ahead. So the model is described in detail in a formal manner and then the paper goes over two experiments. There's this kind of spaceship task, there's pictures here: you have to get the spaceship to move around stuff and around these asteroids and get a reward. You can see different imagination trajectories here in the top row. You see the red ones are the executed actions, the blue ones are imagined ones, and you see the tree it constructs. So first it takes an action right here, just without imagining. Then it imagines one step but then decides to take another action, it imagines two actions but decides on a third one. So you see to the left in this picture you see the first action. Then it imagines one action and decides to take an action and it imagines two actions and based on these imaginations, I'm going to guess it's fairly satisfied with the one that's very close to the target. So you see it's pretty smart in that it sees that the second imagined action is fairly close to where it wants to go and it doesn't need to imagine yet another action that then actually hits the target. It can go over to performing the action right away because the imagination gives enough information. So these kinds of things are pretty cool to look at, and check out the other experiments if you want to know more. Here are even more experiments. In discrete mazes they feature multiple goals, and they feature the system optimizing not only for its reward but also for kind of internal costs. So having a budget for imagining and optimizing to not do too many imagination steps.
The kind of on this experiment the kind of thing that bugs me here is the fact that they didn't actually use the full imagination tree algorithm but the manager only selected from what you can see here. So act is act the actual action then sj zero is the kind of the first imagined state, sj case kind of the last imagined state. So basically the manager can only choose between actually acting then doing kind of this one step strategy and then doing kind of this end step strategy in each step. So it kind of limits the way it can plan but I'm going to guess that he did this because otherwise you can have trained the model and it seems a pretty reasonable simplification to make in order to get this to work. Also check out the paper if you want to see how all of these different parts are implemented. Of course you can guess most of them are neural networks and it's pretty standard so far and check out for the additional experiments. They're pretty cool and see you next time. | [{"start": 0.0, "end": 7.0, "text": " Hi there. Today we're taking a look at learning model-based planning from scratch by DeepMind."}, {"start": 7.0, "end": 16.0, "text": " So as recap, what is model-based planning? Basically, a model, also called an environment model,"}, {"start": 16.0, "end": 24.0, "text": " is just kind of a black box thing you can imagine, where you have a state of your current environment."}, {"start": 24.0, "end": 30.0, "text": " You put it in there and you have an action that you want to take. You put it in there as well."}, {"start": 30.0, "end": 41.0, "text": " And the environment model tells you what the new state as prime here and possibly also the new reward for taking that action is going to be."}, {"start": 41.0, "end": 51.0, "text": " So this, of course, it's always good to have such an environment model because you can use it to plan ahead."}, {"start": 51.0, "end": 59.0, "text": " But the author is here, question how do you plan and propose a new algorithm to learn this planning."}, {"start": 59.0, "end": 70.0, "text": " For now, people have mostly used kind of heuristics to plan either things like A-star search, where you have an amazing, you want to go here,"}, {"start": 70.0, "end": 76.0, "text": " and you kind of have a heuristic say, the distance between the two points, but there's kind of walls in between."}, {"start": 76.0, "end": 83.0, "text": " So you try to go there, but then there's a wall and you kind of explore around it."}, {"start": 83.0, "end": 98.0, "text": " So these are kind of the techniques that have existed so far. 
Also, we've seen stuff like Monte Carlo tree search for alphago and other things like this that are not really learned."}, {"start": 98.0, "end": 109.0, "text": " So this kind of paper, pros and mechanism to learn how to plan using such a model."}, {"start": 109.0, "end": 124.0, "text": " So basically they go as an algorithm or a framework, you can say, where they have this, what you see here, this schematic tells you that you have this thing called a manager."}, {"start": 124.0, "end": 131.0, "text": " Let me just quickly bring up my comment thingy thing."}, {"start": 131.0, "end": 135.0, "text": " All right."}, {"start": 135.0, "end": 144.0, "text": " So you can see here, there's this kind of manager and this manager can decide to imagine or act."}, {"start": 144.0, "end": 159.0, "text": " If it acts, then it simply takes kind of the current state and all the things that happens so far, and decides an action to do in the world."}, {"start": 159.0, "end": 164.0, "text": " And then it kind of trains on the action like classic reinforcement learning."}, {"start": 164.0, "end": 179.0, "text": " But if it decides to imagine, it can use its model of the world, its imagination model to perform an action and see what would happen if it did that action."}, {"start": 179.0, "end": 191.0, "text": " And it can then also use that to the memory and use it to further learn, even though it didn't do the action, it can imagine what happens."}, {"start": 191.0, "end": 201.0, "text": " And how can it imagine the authors in particular propose different methods of imagining?"}, {"start": 201.0, "end": 205.0, "text": " This graph you see there post methods."}, {"start": 205.0, "end": 214.0, "text": " The first two methods basically, so here if every row is a method of imagining,"}, {"start": 214.0, "end": 221.0, "text": " the first method, the one step imagining, simply means you have the current state of the world, which is the gray blob here."}, {"start": 221.0, "end": 227.0, "text": " And what you do is you always go from the current state of the world, imagine one step ahead."}, {"start": 227.0, "end": 234.0, "text": " So basically you select the state to imagine from, you imagine one step."}, {"start": 234.0, "end": 249.0, "text": " And if you decide to not take an action after that, but imagine again, because maybe you're not sure yet what you want to do, say one, imagine another action, you would again go from this initial state."}, {"start": 249.0, "end": 257.0, "text": " So this in the horizontal direction is time, time, internal time."}, {"start": 257.0, "end": 265.0, "text": " Basically, you would again go from this state, imagine another action based on it, and so on, imagine another action."}, {"start": 265.0, "end": 271.0, "text": " Until you're satisfied, you've imagined enough so you can actually take a real world step."}, {"start": 271.0, "end": 281.0, "text": " In contrast, the end step strategy, so these are hard coded strategies, as you can see."}, {"start": 281.0, "end": 291.0, "text": " It's the learned part is which action should I take, the hard coded part is where do I base this action off?"}, {"start": 291.0, "end": 302.0, "text": " The end step strategy also selects the first state, at first, imagines one action on top of it, but then always selects that new imagined action."}, {"start": 302.0, "end": 312.0, "text": " So you can see here it selects this one to propose this action, and then it selects that imagined action to propose yet another action."}, {"start": 312.0, "end": 
321.0, "text": " So you can see it kind of imagines one path into the future, instead of many paths, just one step ahead."}, {"start": 321.0, "end": 342.0, "text": " And then lastly, this imaginary strategy is basically the only one that's actually kind of a learned strategy where the manager can now propose any previously imagined or real world states in order to imagine from."}, {"start": 342.0, "end": 367.0, "text": " So you always have the current world state, which is the first node in the graph, you select it, of course, at the beginning of no choice, you imagine an action on top of it, but then you can select any of these two nodes to imagine from and here again, the first is selected, an action is imagined, then you have three nodes, you can choose any of those where you want to imagine the next step."}, {"start": 367.0, "end": 385.0, "text": " And here the, I mean, this example, the manager selects this state right here and try to decide, he decides to imagine another action on top of it until it is satisfied and can then actually go over to plan to actually perform an action in the real world."}, {"start": 385.0, "end": 402.0, "text": " And if you, if you then decide to do an action in the real world, what you want to, what you can do is you can take all of the things you've imagined and use that."}, {"start": 402.0, "end": 416.0, "text": " So you see in this pathway here, this flows back to the manager at some point, it decides, okay, I've imagined enough and they can use all of these imagined steps in order to take a real world step."}, {"start": 416.0, "end": 424.0, "text": " And after the real world step, the entire thing starts again."}, {"start": 424.0, "end": 443.0, "text": " So that's how it learns to plan, especially interesting, of course, is this imagination tree strategy where it's actually learned to, where it actually learns to plan ahead."}, {"start": 443.0, "end": 458.0, "text": " So the model is described in detail in a formal manner and then it already goes over two experiments and there's this kind of spaceship tasks where you have to, there's pictures."}, {"start": 458.0, "end": 469.0, "text": " You have to get the spaceship to move around stuff and around these asteroids and get a reward."}, {"start": 469.0, "end": 475.0, "text": " You can see different imagination kind of projectives here in the top row."}, {"start": 475.0, "end": 483.0, "text": " You see the red ones is the kind of executed actions, the blue ones are imagined ones and you see the tree it's constructed."}, {"start": 483.0, "end": 488.0, "text": " So first it takes an action right here, just without imagining."}, {"start": 488.0, "end": 500.0, "text": " Then it imagines one step but then decides to take another action, it imagines two actions but decides on a third one."}, {"start": 500.0, "end": 506.0, "text": " So you see to the left in this picture you see the first action."}, {"start": 506.0, "end": 521.0, "text": " Then it imagines one action and decides to take an action and it imagines two actions and based on these imaginations, I'm going to guess it's fairly satisfied with the one that's very close to the target."}, {"start": 521.0, "end": 539.0, "text": " So you see it's pretty smart in that it sees that the second imagined action is fairly close to where it wants to go and it doesn't need to imagine yet another action that then actually hits the target."}, {"start": 539.0, "end": 549.0, "text": " It can go over to performing the action right away because the imagination gives 
enough information."}, {"start": 549.0, "end": 559.0, "text": " So these kind of things are pretty cool to look at and check out the more experiments if you want to know."}, {"start": 559.0, "end": 573.0, "text": " Here is even more experiments. In discrete mases they feature multiple goals, they feature the system optimizing not only for its reward but also for kind of internal costs."}, {"start": 573.0, "end": 581.0, "text": " So having a budget for imagining and optimizing not doing too many imaginations step."}, {"start": 581.0, "end": 597.0, "text": " The kind of on this experiment the kind of thing that bugs me here is the fact that they didn't actually use the full imagination tree algorithm but the manager only selected from what you can see here."}, {"start": 597.0, "end": 612.0, "text": " So act is act the actual action then sj zero is the kind of the first imagined state, sj case kind of the last imagined state."}, {"start": 612.0, "end": 628.0, "text": " So basically the manager can only choose between actually acting then doing kind of this one step strategy and then doing kind of this end step strategy in each step."}, {"start": 628.0, "end": 644.0, "text": " So it kind of limits the way it can plan but I'm going to guess that he did this because otherwise you can have trained the model and it seems a pretty reasonable simplification to make in order to get this to work."}, {"start": 644.0, "end": 650.0, "text": " Also check out the paper if you want to see how all of these different parts are implemented."}, {"start": 650.0, "end": 658.0, "text": " Of course you can guess most of them are neural networks and it's pretty standard so far and check out for the additional experiments."}, {"start": 658.0, "end": 686.0, "text": " They're pretty cool and see you next time."}] |
Yannic Kilcher | https://www.youtube.com/watch?v=agXIYMCICcc | Imagination-Augmented Agents for Deep Reinforcement Learning | Commentary of
https://arxiv.org/abs/1707.06203
Abstract
We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model should be used to arrive at a policy, I2As learn to interpret predictions from a learned environment model to construct implicit plans in arbitrary ways, by using the predictions as additional context in deep policy networks. I2As show improved data efficiency, performance, and robustness to model misspecification compared to several baselines.
Authors
Théophane Weber, Sébastien Racanière, David P. Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomènech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, Razvan Pascanu, Peter Battaglia, David Silver, Daan Wierstra | Hi, today we're taking a look at imagination augmented agents for deep reinforcement learning. This is a paper by DeepMind and has been in the news a bit recently so we're going to have a look at what it's all about. Basically they claim that agents who have a model of the world perform better usually than agents who don't. But of course usually we don't have a model of the world so they make the agent learn a model of the world which you can then use to plan. Now this learning of the model can of course be imperfect because it's learned and so they provide a way to work with imperfect environment models and combine them with model for your approach. So what do we mean by models and model free? Basically what you can say is if you have a model of the world you have kind of a machine say a box and in this box you can you have a state s and you feed the state to the machine and you feed an action and the model of the world will tell you what the s prime the new state is going to be. So this is in the case where you exactly know how your environment works. Now in a model free approach what you would do is you would plan basically you would have a state and you would put that through some kind of the layer neural network and out would come what action should I take right now. So in the model based approach you're trying to try out all these actions and tell you look which one gives me kind of a desired final state and in the model free approach you simply use the rewards to go directly say here's my state what should my action be. So this paper is a combination of both. The basic architecture is here so let's start from the very right we have two paths divided along this line the final policy so which actions you're going to take and what kind of values you can expect is going to be a result of two different models that are combined. There's a model free path which means this is what we talked about simply here is the state and you simply feed it through this neural network thing all along along along outcomes a policy or an action you should take. But then there's also this other path and this is the this imagination path. This click consists of a bunch of these roll out encoders and these roll out encoders is just the agent imagining the future so the agent doing some actions and looking at how they will perform. So as this done there's this imagination core thingy what this consists of is policy network and an environment model. So and this environment model is really the core of the entire thing. So this environment model you basically learn from what you've seen so far. So so far you've taken certain actions here in certain states you use this to to learn the environment model that gives you from one state the next state and the next reward. So that's that's what you learn of course also using neural networks and whatnot and you use that environment model to to imagine the future. So here in this imagination core basically you put in your state you get out some new state and some reward you feed the new state and you imagine another action. Of course the actions aren't random the actions you also take real this thing and this is where it loops all back this is now a model free policy network that works with the environment model. 
So basically in your imagination you only use if you look at the very right here you only use this right path because your imagination doesn't need to be like super exact or super well planned you can use the model free approach that we kind of know kind of works for some problems you use this to generate your actions that you imagine and you use an environment model in order to look how these actions will play out and that's how you imagine one step of the future and you simply repeat this a couple of steps and then you have an entire what's called a rollout which consists of these pairs of states and rewards and what you do then is you encode this rollout via this encoder which is in this case an LSTM or something like this I think you encode all these states into one vector into one embedding basically for this rollout and this embedding describes kind of this future imagined path. Of course what you're going to hope is that somehow this encoding captures how you will do in the future and how good this will be so these states and rewards. Once you have a couple of these rollouts so once you've imagined a couple of different futures you then aggregate them in this aggregator. In their case they just concatenate these rollout encodings and then you feed this to the big aggregator on top so the big aggregator on top can now combine the model free path and the imagined futures so if the big aggregator things that the imagination isn't correct it can resort to the model free path but it can also think that maybe it's correct or it can be kind of if it's sure it's correct it can fully trust these rollouts and perform actions according to that. All of this is of course trained end to end there's a tiny piece we haven't looked at yet namely how this policy network on the left is learned and this is simply learned by and I have to pay attention that I'm doing the right thing here so you take this big thing here you will find out policy network and you you perform you kind of learn to copy its actions simply from the input so from this model free input over here you take this input and you take the output of your big policy network and you try to simply make a neural network that copies the outputs given these inputs and that's kind of your small policy network in here that's simply model free so this the loop closes in a way that you use your you use your your learned model to then again imagine the future but of course for imagining the future within imagining the future you can't have another instance of this network because it would be infinite recursion so you can only have a model free network all right that's it for the model of course the other a couple of tricks and how to encode these things basically they perform experiments and this is maybe what you've seen in the media so far of this game and this game is a game where you have to push around the brown boxes onto the red squares using the green avatar that you have so this game is difficult because first of all the levels are generated randomly so there's no way you can like hard code anything and second of all if you push box say this box here if you were to push it to the right into the corner you would have no way of getting it out again that's why I have to plan ahead and avoid such mistakes because yeah it's they're not fixable so once you make the mistakes you can't go back and and that's where planning comes in so handy if you imagine this future and if you're model is correct or approximately correct then you can avoid such 
mistakes of course that's yeah the difficulty in this game and that's where the planning helps um note that they don't they don't code in how the game works so all these models get its pixel input of the game and they have to kind of imagine the pixel output they're gonna get so that's um increased difficulty so technically the the method is model free in the sense that there's really no coded model of the world uh just the pixels so they um have performance comparisons in where if you and I find this on the right here interesting you can see according to the on-roll depth so how how much steps into the future you imagine you can see it kind of flattens out after only about five steps um whereas the game usually lasts for about 50 steps they say so only imagining five steps is already really helpful what I don't like here is that they compare to what they say this copy model because this is here is a standard model free in comparison so it's just a model free agent and of course it per oh no of course but it performs worse um right here and it um because it has no imagination but it also has less parameters so they're trying to compare it to something with the same amount of parameters and say oh we have this copy model agent here and what the copy model agent is doing is simply it's um for the environment model it's the same architecture but for the environment model it simply predicts the output as the input so it simply says oh you do this action the environment is going to be exactly the same as it is now and I don't like it because basically this this entire branch here becomes rather useless and so even though you have parameters in here they're not useful so it's to say that this is a comparison with the model of the same amount of parameters I don't know technically true yeah another thing that they do is they pre-train the environment model from with the model free agent so first they code a model free agent then they pre-train the environment model to then use with this agent so it's not fully learned and I can imagine they tried and it didn't work and this is how you get it to work so they um they also experiment with imperfect models and um so they train the environment model only imperfectly and as you can see here this is kind of the output you can get say if duplicates you have kind of errors you have twice your twice your your character here you have like boxes within the wall or uh all kinds of things and they basically show that if you try to classically plan using these models these bad models you uh you get nowhere this is a Monte Carlo um sampler planner using a poor model and it performance degrades significantly from when you use the good model which is right here and the uh imagination agent is not affected by kind of the bad model except that it takes kind of longer to reach its uh it's high inaccuracy all right so there's a couple of other experiments and um a couple of Pacman experiments where they show you can learn one model uh to transfer kind of to play different games in this Pacman world and that just works the more if you have very sparse rewards which you can imagine yes um if you need to plan then that's what you you get you get the ability to earn more sparse rewards because you can have a look ahead all right so i think i'll conclude here with the discussion of this paper i quite liked it and um it's a cool method combines many things and i see you next time | [{"start": 0.0, "end": 10.92, "text": " Hi, today we're taking a look at imagination 
augmented agents for deep reinforcement learning."}, {"start": 10.92, "end": 16.36, "text": " This is a paper by DeepMind and has been in the news a bit recently so we're going to"}, {"start": 16.36, "end": 20.92, "text": " have a look at what it's all about."}, {"start": 20.92, "end": 28.64, "text": " Basically they claim that agents who have a model of the world perform better usually"}, {"start": 28.64, "end": 30.560000000000002, "text": " than agents who don't."}, {"start": 30.560000000000002, "end": 37.44, "text": " But of course usually we don't have a model of the world so they make the agent learn"}, {"start": 37.44, "end": 41.8, "text": " a model of the world which you can then use to plan."}, {"start": 41.8, "end": 49.480000000000004, "text": " Now this learning of the model can of course be imperfect because it's learned and so"}, {"start": 49.480000000000004, "end": 57.0, "text": " they provide a way to work with imperfect environment models and combine them with model"}, {"start": 57.0, "end": 58.76, "text": " for your approach."}, {"start": 58.76, "end": 62.16, "text": " So what do we mean by models and model free?"}, {"start": 62.16, "end": 69.28, "text": " Basically what you can say is if you have a model of the world you have kind of a machine"}, {"start": 69.28, "end": 80.12, "text": " say a box and in this box you can you have a state s and you feed the state to the machine"}, {"start": 80.12, "end": 87.72, "text": " and you feed an action and the model of the world will tell you what the s prime the new state"}, {"start": 87.72, "end": 91.08000000000001, "text": " is going to be."}, {"start": 91.08000000000001, "end": 97.24000000000001, "text": " So this is in the case where you exactly know how your environment works."}, {"start": 97.24000000000001, "end": 107.44, "text": " Now in a model free approach what you would do is you would plan basically you would have"}, {"start": 107.44, "end": 114.24, "text": " a state and you would put that through some kind of the layer neural network and out would"}, {"start": 114.24, "end": 119.84, "text": " come what action should I take right now."}, {"start": 119.84, "end": 126.44, "text": " So in the model based approach you're trying to try out all these actions and tell you"}, {"start": 126.44, "end": 132.72, "text": " look which one gives me kind of a desired final state and in the model free approach you"}, {"start": 132.72, "end": 139.44, "text": " simply use the rewards to go directly say here's my state what should my action be."}, {"start": 139.44, "end": 145.72, "text": " So this paper is a combination of both."}, {"start": 145.72, "end": 153.32, "text": " The basic architecture is here so let's start from the very right we have two paths divided"}, {"start": 153.32, "end": 159.04, "text": " along this line the final policy so which actions you're going to take and what kind of"}, {"start": 159.04, "end": 166.92, "text": " values you can expect is going to be a result of two different models that are combined."}, {"start": 166.92, "end": 171.79999999999998, "text": " There's a model free path which means this is what we talked about simply here is the"}, {"start": 171.79999999999998, "end": 177.64, "text": " state and you simply feed it through this neural network thing all along along along"}, {"start": 177.64, "end": 183.44, "text": " outcomes a policy or an action you should take."}, {"start": 183.44, "end": 189.68, "text": " But then there's also this other path and this is the this imagination path."}, {"start": 
189.68, "end": 195.24, "text": " This click consists of a bunch of these roll out encoders and these roll out encoders is"}, {"start": 195.24, "end": 203.48, "text": " just the agent imagining the future so the agent doing some actions and looking at how"}, {"start": 203.48, "end": 204.8, "text": " they will perform."}, {"start": 204.8, "end": 217.52, "text": " So as this done there's this imagination core thingy what this consists of is policy network"}, {"start": 217.52, "end": 219.24, "text": " and an environment model."}, {"start": 219.24, "end": 223.64000000000001, "text": " So and this environment model is really the core of the entire thing."}, {"start": 223.64000000000001, "end": 229.88000000000002, "text": " So this environment model you basically learn from what you've seen so far."}, {"start": 229.88, "end": 238.07999999999998, "text": " So so far you've taken certain actions here in certain states you use this to to learn"}, {"start": 238.07999999999998, "end": 244.68, "text": " the environment model that gives you from one state the next state and the next reward."}, {"start": 244.68, "end": 252.56, "text": " So that's that's what you learn of course also using neural networks and whatnot and"}, {"start": 252.56, "end": 260.68, "text": " you use that environment model to to imagine the future."}, {"start": 260.68, "end": 268.84000000000003, "text": " So here in this imagination core basically you put in your state you get out some new state"}, {"start": 268.84000000000003, "end": 273.64, "text": " and some reward you feed the new state and you imagine another action."}, {"start": 273.64, "end": 280.12, "text": " Of course the actions aren't random the actions you also take real this thing and this"}, {"start": 280.12, "end": 286.0, "text": " is where it loops all back this is now a model free policy network that works with the"}, {"start": 286.0, "end": 287.76, "text": " environment model."}, {"start": 287.76, "end": 292.64, "text": " So basically in your imagination you only use if you look at the very right here you only"}, {"start": 292.64, "end": 300.24, "text": " use this right path because your imagination doesn't need to be like super exact or super"}, {"start": 300.24, "end": 307.2, "text": " well planned you can use the model free approach that we kind of know kind of works for some"}, {"start": 307.2, "end": 313.47999999999996, "text": " problems you use this to generate your actions that you imagine and you use an environment"}, {"start": 313.47999999999996, "end": 319.76, "text": " model in order to look how these actions will play out and that's how you imagine one"}, {"start": 319.76, "end": 329.36, "text": " step of the future and you simply repeat this a couple of steps and then you have an"}, {"start": 329.36, "end": 337.08, "text": " entire what's called a rollout which consists of these pairs of states and rewards and what"}, {"start": 337.08, "end": 343.8, "text": " you do then is you encode this rollout via this encoder which is in this case an LSTM"}, {"start": 343.8, "end": 353.47999999999996, "text": " or something like this I think you encode all these states into one vector into one embedding"}, {"start": 353.47999999999996, "end": 364.36, "text": " basically for this rollout and this embedding describes kind of this future imagined path."}, {"start": 364.36, "end": 372.16, "text": " Of course what you're going to hope is that somehow this encoding captures how you will"}, {"start": 372.16, "end": 377.32, "text": " do in the future and how good this 
will be so these states and rewards."}, {"start": 377.32, "end": 381.96000000000004, "text": " Once you have a couple of these rollouts so once you've imagined a couple of different"}, {"start": 381.96000000000004, "end": 389.2, "text": " futures you then aggregate them in this aggregator."}, {"start": 389.2, "end": 398.2, "text": " In their case they just concatenate these rollout encodings and then you feed this to the"}, {"start": 398.2, "end": 406.96, "text": " big aggregator on top so the big aggregator on top can now combine the model free path"}, {"start": 406.96, "end": 416.08, "text": " and the imagined futures so if the big aggregator things that the imagination isn't correct"}, {"start": 416.08, "end": 424.15999999999997, "text": " it can resort to the model free path but it can also think that maybe it's correct or it"}, {"start": 424.15999999999997, "end": 430.03999999999996, "text": " can be kind of if it's sure it's correct it can fully trust these rollouts and perform"}, {"start": 430.03999999999996, "end": 432.24, "text": " actions according to that."}, {"start": 432.24, "end": 437.56, "text": " All of this is of course trained end to end there's a tiny piece we haven't looked at"}, {"start": 437.56, "end": 447.56, "text": " yet namely how this policy network on the left is learned and this is simply learned"}, {"start": 447.56, "end": 454.04, "text": " by and I have to pay attention that I'm doing the right thing here so you take this big"}, {"start": 454.04, "end": 463.76, "text": " thing here you will find out policy network and you you perform you kind of learn to copy"}, {"start": 463.76, "end": 470.96, "text": " its actions simply from the input so from this model free input over here you take this"}, {"start": 470.96, "end": 484.24, "text": " input and you take the output of your big policy network and you try to simply make"}, {"start": 484.24, "end": 490.88, "text": " a neural network that copies the outputs given these inputs and that's kind of your small"}, {"start": 490.88, "end": 499.84, "text": " policy network in here that's simply model free so this the loop closes in a way that"}, {"start": 499.84, "end": 508.0, "text": " you use your you use your your learned model to then again imagine the future but of course"}, {"start": 508.0, "end": 513.4, "text": " for imagining the future within imagining the future you can't have another instance"}, {"start": 513.4, "end": 518.0, "text": " of this network because it would be infinite recursion so you can only have a model free"}, {"start": 518.0, "end": 529.68, "text": " network all right that's it for the model of course the other a couple of tricks and how"}, {"start": 529.68, "end": 540.32, "text": " to encode these things basically they perform experiments and this is maybe what you've"}, {"start": 540.32, "end": 549.2, "text": " seen in the media so far of this game and this game is a game where you have to push"}, {"start": 549.2, "end": 560.0, "text": " around the brown boxes onto the red squares using the green avatar that you have so this"}, {"start": 560.0, "end": 566.5200000000001, "text": " game is difficult because first of all the levels are generated randomly so there's"}, {"start": 566.52, "end": 576.24, "text": " no way you can like hard code anything and second of all if you push box say this box"}, {"start": 576.24, "end": 583.0799999999999, "text": " here if you were to push it to the right into the corner you would have no way of getting"}, {"start": 583.0799999999999, "end": 595.56, "text": " 
it out again that's why I have to plan ahead and avoid such mistakes because yeah it's"}, {"start": 595.56, "end": 600.4399999999999, "text": " they're not fixable so once you make the mistakes you can't go back and and that's where"}, {"start": 600.4399999999999, "end": 606.8399999999999, "text": " planning comes in so handy if you imagine this future and if you're model is correct or"}, {"start": 606.8399999999999, "end": 613.3199999999999, "text": " approximately correct then you can avoid such mistakes of course that's yeah the difficulty"}, {"start": 613.3199999999999, "end": 623.4, "text": " in this game and that's where the planning helps um note that they don't they don't code"}, {"start": 623.4, "end": 629.8, "text": " in how the game works so all these models get its pixel input of the game and they have to"}, {"start": 629.8, "end": 638.4399999999999, "text": " kind of imagine the pixel output they're gonna get so that's um increased difficulty so technically"}, {"start": 638.4399999999999, "end": 646.1999999999999, "text": " the the method is model free in the sense that there's really no coded model of the world"}, {"start": 646.2, "end": 661.4000000000001, "text": " uh just the pixels so they um have performance comparisons in where if you and I find this on the"}, {"start": 661.4000000000001, "end": 669.1600000000001, "text": " right here interesting you can see according to the on-roll depth so how how much steps into the"}, {"start": 669.16, "end": 677.88, "text": " future you imagine you can see it kind of flattens out after only about five steps um whereas the"}, {"start": 678.6, "end": 685.56, "text": " game usually lasts for about 50 steps they say so only imagining five steps is already really helpful"}, {"start": 688.36, "end": 694.12, "text": " what I don't like here is that they compare to what they say this copy model"}, {"start": 694.12, "end": 701.5600000000001, "text": " because this is here is a standard model free in comparison so it's just a model free agent"}, {"start": 702.12, "end": 713.72, "text": " and of course it per oh no of course but it performs worse um right here and it um because it has"}, {"start": 713.72, "end": 717.96, "text": " no imagination but it also has less parameters so they're trying to compare it to something with"}, {"start": 717.96, "end": 723.32, "text": " the same amount of parameters and say oh we have this copy model agent here and what the copy model"}, {"start": 723.32, "end": 734.12, "text": " agent is doing is simply it's um for the environment model it's the same architecture but for the"}, {"start": 734.12, "end": 741.8000000000001, "text": " environment model it simply predicts the output as the input so it simply says oh you do this action"}, {"start": 741.8000000000001, "end": 748.6, "text": " the environment is going to be exactly the same as it is now and I don't like it because basically"}, {"start": 748.6, "end": 758.12, "text": " this this entire branch here becomes rather useless and so even though you have parameters in here"}, {"start": 759.72, "end": 767.16, "text": " they're not useful so it's to say that this is a comparison with the model of the same amount of"}, {"start": 767.16, "end": 777.5600000000001, "text": " parameters I don't know technically true yeah another thing that they do is they pre-train"}, {"start": 777.56, "end": 783.56, "text": " the environment model from with the model free agent so first they code a model free agent"}, {"start": 784.5999999999999, "end": 790.76, "text": " then they 
pre-train the environment model to then use with this agent so it's not fully learned"}, {"start": 790.76, "end": 796.4399999999999, "text": " and I can imagine they tried and it didn't work and this is how you get it to work"}, {"start": 799.16, "end": 799.4799999999999, "text": " so"}, {"start": 799.48, "end": 812.84, "text": " they um they also experiment with imperfect models and um so they train the environment model only"}, {"start": 812.84, "end": 818.2, "text": " imperfectly and as you can see here this is kind of the output you can get say if duplicates you"}, {"start": 818.2, "end": 826.84, "text": " have kind of errors you have twice your twice your your character here you have like boxes within the"}, {"start": 826.84, "end": 835.72, "text": " wall or uh all kinds of things and they basically show that if you try to classically plan using"}, {"start": 835.72, "end": 848.9200000000001, "text": " these models these bad models you uh you get nowhere this is a Monte Carlo um sampler planner using"}, {"start": 848.9200000000001, "end": 854.9200000000001, "text": " a poor model and it performance degrades significantly from when you use the good model"}, {"start": 854.92, "end": 864.28, "text": " which is right here and the uh imagination agent is not affected by kind of the bad model"}, {"start": 865.88, "end": 872.1999999999999, "text": " except that it takes kind of longer to reach its uh it's high inaccuracy"}, {"start": 874.52, "end": 880.12, "text": " all right so there's a couple of other experiments and um a couple of Pacman experiments where"}, {"start": 880.12, "end": 888.36, "text": " they show you can learn one model uh to transfer kind of to play different games in this Pacman world"}, {"start": 888.36, "end": 899.08, "text": " and that just works the more if you have very sparse rewards which you can imagine yes um if you"}, {"start": 899.08, "end": 905.88, "text": " need to plan then that's what you you get you get the ability to earn more sparse rewards because"}, {"start": 905.88, "end": 912.68, "text": " you can have a look ahead all right so i think i'll conclude here with the discussion of this paper"}, {"start": 912.68, "end": 942.52, "text": " i quite liked it and um it's a cool method combines many things and i see you next time"}] |
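To make the imagination mechanism described in this transcript more concrete, here is a minimal PyTorch-style sketch of a single imagined rollout: a learned environment model predicts the next state and reward from the current state and an action, a small rollout policy picks the imagined actions, and an LSTM (the rollout encoder) turns the imagined (state, reward) sequence into one embedding that the aggregator would combine with the model-free features. This is only a sketch under simplifying assumptions: the class names, layer sizes, flat-vector states (the paper works on pixels), and the five-step unroll depth are illustrative and not taken from the authors' implementation.

```python
# Illustrative sketch of the imagination path (I2A-style); names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical


class EnvModel(nn.Module):
    """Learned environment model: (state, action) -> (predicted next state, predicted reward).
    States are flat vectors here for simplicity; the paper predicts pixel frames."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.n_actions = n_actions
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_actions, 128), nn.ReLU(),
            nn.Linear(128, state_dim + 1),   # last unit is the predicted reward
        )

    def forward(self, state, action):
        a = F.one_hot(action, self.n_actions).float()
        out = self.net(torch.cat([state, a], dim=-1))
        return out[..., :-1], out[..., -1]   # next_state, reward


class RolloutPolicy(nn.Module):
    """Small model-free policy used only inside the imagination; in the paper it is
    distilled from the full agent (see the distillation sketch further below)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return Categorical(logits=self.net(state)).sample()


class ImaginationRollout(nn.Module):
    """Unrolls the environment model for `depth` imagined steps and encodes the imagined
    trajectory with an LSTM. The paper runs several such rollouts and concatenates their
    encodings in the aggregator; this sketch shows a single rollout."""
    def __init__(self, state_dim, n_actions, depth=5, hidden=64):
        super().__init__()
        self.env_model = EnvModel(state_dim, n_actions)
        self.rollout_policy = RolloutPolicy(state_dim, n_actions)
        self.encoder = nn.LSTM(state_dim + 1, hidden, batch_first=True)
        self.depth = depth

    def forward(self, state):
        steps, s = [], state
        for _ in range(self.depth):                 # imagine `depth` steps into the future
            a = self.rollout_policy(s)
            s, r = self.env_model(s, a)
            steps.append(torch.cat([s, r.unsqueeze(-1)], dim=-1))
        seq = torch.stack(steps, dim=1)             # (batch, depth, state_dim + 1)
        _, (h, _) = self.encoder(seq)
        return h[-1]                                # rollout embedding, one vector per sample


if __name__ == "__main__":
    rollout = ImaginationRollout(state_dim=16, n_actions=4, depth=5)
    emb = rollout(torch.randn(2, 16))
    print(emb.shape)  # torch.Size([2, 64])
```

In the full architecture described in the segments, several of these rollout embeddings are concatenated by the aggregator and fed, together with the output of the model-free path, into the network that produces the final policy and value.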
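The segments above also explain that the small rollout policy is trained to copy the outputs of the big policy network given the same model-free input. Below is a hedged sketch of that distillation objective, assuming both networks produce action logits; the function and variable names are illustrative, not from the paper's code.

```python
# Illustrative distillation loss for the rollout policy (names are assumptions):
# the small policy is trained to imitate the full agent's action distribution,
# so it can stand in for the agent inside the imagined rollouts.
import torch
import torch.nn.functional as F


def distillation_loss(rollout_logits, agent_logits):
    """Cross-entropy between the full agent's action distribution (fixed target)
    and the rollout policy's distribution."""
    target = F.softmax(agent_logits.detach(), dim=-1)   # no gradient into the big agent
    log_probs = F.log_softmax(rollout_logits, dim=-1)
    return -(target * log_probs).sum(dim=-1).mean()


# usage with hypothetical tensors:
# loss = distillation_loss(small_policy_logits, big_agent_logits); loss.backward()
```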