Columns: video_id (string, length 11), text (string, 361-490 chars), start_second (int64, 0-11.3k), end_second (int64, 18-11.3k), url (string, 48-52 chars), title (string, 0-100 chars), thumbnail (string, 0-52 chars)
IRVdiHu1VCc
As Tim talked about earlier, that's a huge distinction. I started to look at teamwork and determination. And basically, all those platitudes they call "successories" that hang with that schmaltzy art in boardrooms around the world right now, that stuff -- it's suddenly all been turned on its head. Safety. Safety first is ... Going back to OSHA and PETA and the Humane Society:
767
793
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=767s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
What if OSHA got it wrong? I mean -- this is heresy, what I'm about to say -- but what if it's really safety third? Right? (Laughter) No, I mean, really. What I mean to say is: I value my safety on these crazy jobs as much as the people that I'm working with, but the ones who really get it done -- they're not out there talking about safety first. They know that other things come first --
793
820
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=793s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
the business of doing the work comes first, the business of getting it done. And I'll never forget, up in the Bering Sea, I was on a crab boat with the "Deadliest Catch" guys -- which I also work on -- in the first season. We were about 100 miles off the coast of Russia: 50-foot seas, big waves, green water coming over the wheelhouse, right? Most hazardous environment I'd ever seen,
820
841
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=820s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
and I was back with a guy, lashing the pots down. So I'm 40 feet off the deck, which is like looking down at the top of your shoe, you know, and it's doing this in the ocean. Unspeakably dangerous. I scamper down, I go into the wheelhouse and I say, with some level of incredulity, "Captain -- OSHA?" And he says, "OSHA? Ocean." And he points out there. (Laughter)
841
866
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=841s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
But in that moment, what he said next can't be repeated in the Lower 48. It can't be repeated on any factory floor or any construction site. But he looked at me and said, "Son," -- he's my age, by the way, he calls me "son," I love that -- he says, "Son, I'm the captain of a crab boat. My responsibility is not to get you home alive. My responsibility is to get you home rich."
866
886
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=866s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
(Laughter) "You want to get home alive, that's on you." And for the rest of that day -- safety first. I mean, I was like -- So, the idea that we create this sense of complacency when all we do is talk about somebody else's responsibility as though it's our own, and vice versa. Anyhow, a whole lot of things. I could talk at length about the many little distinctions we made
886
912
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=886s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
and the endless list of ways that I got it wrong. But what it all comes down to is this: I've formed a theory, and I'm going to share it now in my remaining 2 minutes and 30 seconds. It goes like this: we've declared war on work, as a society -- all of us. It's a civil war. It's a cold war, really. We didn't set out to do it and we didn't twist our mustache in some Machiavellian way,
912
939
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=912s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
but we've done it. And we've waged this war on at least four fronts, certainly in Hollywood. The way we portray working people on TV -- it's laughable. If there's a plumber, he's 300 pounds and he's got a giant butt crack, admit it. You see him all the time. That's what plumbers look like, right? We turn them into heroes, or we turn them into punch lines. That's what TV does.
939
962
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=939s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
We try hard on "Dirty Jobs" not to do that, which is why I do the work and I don't cheat. But, we've waged this war on Madison Avenue. So many of the commercials that come out there in the way of a message -- what's really being said? "Your life would be better if you could work a little less, didn't have to work so hard, got home a little earlier, could retire a little faster, punch out a little sooner."
962
985
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=962s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
It's all in there, over and over, again and again. Washington? I can't even begin to talk about the deals and policies in place that affect the bottom-line reality of the available jobs, because I don't really know; I just know that that's a front in this war. And right here, guys -- Silicon Valley. I mean -- how many people have an iPhone on them right now?
985
1,004
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=985s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
How many people have their BlackBerry? We're plugged in; we're connected. I would never suggest for a second that something bad has come out of the tech revolution. Good grief, not to this crowd. (Laughter) But I would suggest that innovation without imitation is a complete waste of time. And nobody celebrates imitation the way "Dirty Jobs" guys know it has to be done.
1,004
1,030
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1004s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
Your iPhone without those people making the same interface, the same circuitry, the same board, over and over -- all of that -- that's what makes it equally as possible as the genius that goes inside of it. So, we've got this new toolbox. You know? Our tools today don't look like shovels and picks. They look like the stuff we walk around with. And so the collective effect of all of that
1,030
1,057
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1030s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
has been this marginalization of lots and lots of jobs. And I realized, probably too late in this game -- I hope not, because I don't know if I can do 200 more of these things -- but we're going to do as many as we can. And to me, the most important thing to know and to really come face to face with is the fact that I got it wrong about a lot of things, not just the testicles on my chin.
1,057
1,080
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1057s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
I got a lot wrong. So, we're thinking -- by "we," I mean me -- (Laughter) that the thing to do is to talk about a PR campaign for work -- manual labor, skilled labor. Somebody needs to be out there, talking about the forgotten benefits. I'm talking about grandfather stuff, the stuff a lot of us probably grew up with but we've kind of -- you know, kind of lost a little.
1,080
1,111
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1080s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
Barack wants to create two and a half million jobs. The infrastructure is a huge deal. This war on work that I suppose exists has casualties, like any other war. The infrastructure is the first one; declining trade-school enrollments are the second one. Every single year: fewer electricians, fewer carpenters, fewer plumbers, fewer welders, fewer pipe fitters, fewer steam fitters.
1,111
1,134
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1111s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
The infrastructure jobs that everybody is talking about creating are those guys -- the ones that have been in decline, over and over. Meanwhile, we've got two trillion dollars, at a minimum, according to the American Society of Civil Engineers, that we need to expend to even make a dent in the infrastructure, which is currently rated at a D minus. So, if I were running for anything -- and I'm not --
1,134
1,157
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1134s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
IRVdiHu1VCc
I would simply say that the jobs we hope to make and the jobs we hope to create aren't going to stick unless they're jobs that people want. And I know the point of this conference is to celebrate things that are near and dear to us, but I also know that clean and dirty aren't opposites. They're two sides of the same coin, just like innovation and imitation, like risk and responsibility, like peripeteia and anagnorisis,
1,157
1,183
https://www.youtube.com/watch?v=IRVdiHu1VCc&t=1157s
Learning from dirty jobs | Mike Rowe
https://i.ytimg.com/vi/I…axresdefault.jpg
hQEnzdLkPj4
Hi there. Check out these clusters of images right here, and just have a look at how all of them are pretty much showing the same object. So here are balloons, here are birds, here are sharks or other fish. These are images from the ImageNet dataset, and you can see that these clusters are pretty much the object classes themselves. So there are all the frogs right here; here, all
0
28
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=0s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
the people that have caught fish. So the astonishing thing about this is that these clusters have been obtained without any labels of the ImageNet dataset. Of course the dataset has labels, but this method doesn't use the labels; it learns to classify images without labels. So today we're looking at this paper, "Learning to Classify Images Without Labels," by Wouter Van Gansbeke,
28
58
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=28s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans and Luc Van Gool. On a high-level overview, they have a three-step procedure. Basically, first they use self-supervised learning in order to get good representations. Second, they do a clustering -- so they do a sort of k-nearest-neighbor clustering, but they do clustering on top of those representations, but
58
95
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=58s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
they do it in a kind of special way. And then third, they do a refinement through self-labeling. So if you know what all of these are, you basically understand the paper already, but there are a few tricky steps in there, and it's pretty cool that at the end it works out like you just saw. So before we dive in, as always, if you're here and not subscribed, then please do, and if you
95
125
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=95s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
liked the video, share it out, and leave a comment if you feel like commenting. Cool. So, as we already stated, the question they ask is: is it possible to automatically classify images without the use of ground-truth annotations, or even when the classes themselves are not known a priori? Now you might think that this is outrageous: how can you classify when you don't even know
125
155
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=125s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
what the classes are, and so on? So the way you have to imagine it going forward -- they don't explicitly explain it, but it's sort of assumed -- is that if you have a dataset and you learn to classify it, what that basically means is you cluster it, right? You put some of the data points in the same clusters. Okay, and then of course the dataset -- I'm
155
186
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=155s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
going to draw the same dataset right here -- the same dataset would have an actual classification. So this would be class 0, this here maybe class 1, and this here might be class 2. Now you can't possibly know what the classes are called, or which one is the first and which one is the second. So at test time, basically, if you have a method like this that doesn't use labels, what you're
186
210
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=186s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
going to do is be as generous as possible in the assignment of these and say: look, if I assign this here to cluster 0, and this here to cluster 2, and this here to cluster 1, and I just, you know, carry over the labels, what would my accuracy be under that labeling? So you are as generous as possible with the assignment of the labels. So
210
238
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=210s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
that's how it's going to work, right? That's the way you have to keep in mind: we're basically developing an algorithm that gives us this kind of clustering of the data, and then, if that clustering partitions the data in the same way as the actual labeling -- the actual labeling with the test labels -- then we think it's a good algorithm. Okay. So they claim -- they have a,
238
266
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=238s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
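Aside: the "be as generous as possible" evaluation described above is the standard clustering-accuracy metric, computed with a one-to-one Hungarian matching between clusters and classes. A minimal sketch, assuming integer label arrays; this is illustrative, not the authors' code:

```python
# Clustering accuracy via the Hungarian algorithm (linear sum assignment):
# find the cluster-to-class mapping that maximizes agreement.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, cluster_ids):
    """Best accuracy over all one-to-one mappings cluster -> class."""
    n = max(true_labels.max(), cluster_ids.max()) + 1
    # counts[c, t] = how often cluster c coincides with true class t.
    counts = np.zeros((n, n), dtype=np.int64)
    for t, c in zip(true_labels, cluster_ids):
        counts[c, t] += 1
    # Negate so that the minimizing assignment maximizes total agreement.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(true_labels)

# Example: a permuted-but-perfect clustering scores 1.0.
y = np.array([0, 0, 1, 1, 2, 2])
c = np.array([2, 2, 0, 0, 1, 1])
print(clustering_accuracy(y, c))  # 1.0
```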
hQEnzdLkPj4
okay: "In this paper we deviate from recent works and advocate a two-step approach" -- it's actually a three-step approach -- "where feature learning and clustering are decoupled." Okay, why is that? So they argue -- and this is just a wall of text, so -- what you could do, what people have done, is just basically cluster the data. Like, who says
266
295
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=266s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
you can't use clustering algorithms? But then the question is: what do you cluster them by? You need a distance. So if I have points in 2D, it sort of makes sense to use the Euclidean distance here; but if I have images of cats and dogs and whatnot, then the Euclidean distance between the pixels is really not a good thing. But also, you might think: we
295
322
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=295s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
could use a deep neural network and then basically send the image -- that's the image right here -- through the deep neural network, and then take this last state right here -- so it goes through and through and through, and we could take either of the hidden states, or we could just take, you know, the last state, that is, the hidden representation right here -- and do
322
345
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=322s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
the clustering with that. But then of course the question is: which neural network do you take, and how do you train that neural network? And there have been a few approaches, such as DeepCluster, which try to formulate an objective for that neural network, where first you send all the images through -- you send a bunch of images through to get, in embedding
345
369
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=345s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
space, your points -- and then in embedding space you think: well, the features in the embedding space are somehow latent. Basically, the idea is: if this neural network were used to classify images, you would have a classification head on top, and a classification head -- this is like a five-class classification -- is nothing else than a linear
369
395
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=369s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
classification boundary that you put on top of this hidden representation. So if you were to use this neural network for classification, it must be possible to draw a linear boundary between the classes, and therefore things like the inner-product distance or the Euclidean distance must make sense in that space. If they don't make sense in pixel space, they must make
395
421
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=395s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
sense in the hidden-representation space, because what you're going to do with them is exactly linear classification -- the last classification head of a neural network is just a linear classifier. So the assumption, and the conclusion, is: well, in this space you should be able to cluster by Euclidean distance. So what DeepCluster does is alternate: first get the
421
448
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=421s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
representations -- you start off with a random neural network -- then cluster these representations, then basically self-label the images in a way -- now I'm way oversimplifying that technique right here -- but you have these alternating steps of clustering, then kind of finding better representations, and then clustering those representations. And what it basically says is that the CNN
448
472
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=448s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
itself is like a prior, because it's translation-invariant and works very well for natural images; the CNN itself will lead to good representations if we do it in this way. And there are some good results there, but this paper argues that if you do that, then the algorithm tends to focus a lot on very low-level features. So if the pixel on the bottom right here is blue,
472
501
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=472s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
right -- then, if the neural network by chance puts two of those images, the ones where the blue pixel is on the bottom right, close together, then in the next step it will cluster them together because they're close together, and then it will basically feed back that the new representation should put the two in the same class, right? It will feed back that it should focus even more
501
526
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=501s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
on that blue pixel. So it's very, very dependent on initialization, and it can jump super easily onto these low-level features that have nothing to do with the high-level task you're ultimately trying to solve, which is to classify these images later. So what this paper does is it says: we can eliminate this -- the fact that these methods
526
561
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=526s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
will produce neural networks that focus on low-level features. And how do we do that? We do that by representation learning. Representation learning -- you might know this as self-supervised learning -- is the task they solve in the first step of their objective. So let's go through this. This right here is an image. Now T is a transformation of that image, and in self-supervised
561
592
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=561s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
learning there are several ways you can transform an image. So for example, you can random-crop an image -- you can just cut out a piece right here and scale it up to be as large as the original image -- or you can use, for example, data augmentation, which means you take the image and -- so if there is, I don't know, a cat right here, you kind of convolve it with some
592
619
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=592s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
things -- there's like a very squiggly cat; okay, I'm terrible at this -- or you can rotate it, for example, so it's like this. Okay, so these are all, including the crops, part of this set of transformations T. So you transform it in some way, and after you've transformed it, you send your original image -- this should be red -- you send your original image and the transformed
619
652
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=619s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
image through a neural network, each one by themselves. Okay, and then you say the hidden representations here should be close to each other. This is basically the self-supervised training task. It's been shown to work very, very well as a pre-training method for classification neural networks. You have an image and its augmented version, and you
652
681
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=652s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
minimize the inner product or the Euclidean distance between the two versions in the hidden space. And the rationale is exactly the same: this hidden space, of course, should be linearly classifiable, and so the distance between those should be small. And the rationale behind having these tasks is that, well, if I flip the image -- right, if I flip the image
681
706
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=681s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
to the right -- it cannot focus on the pixel on the bottom right anymore, because that's not going to be the pixel on the bottom right here, and I'm not always going to flip it in the same direction, and sometimes I'm going to crop it. So it also can't focus on the pixel on the bottom right, because in the crop that pixel is like out here -- it's not even in the crop. So basically, what
706
727
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=706s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
you're looking to do with these self-supervised methods is to destroy this low-level information. That's all. You're looking to build a pipeline -- a neural network here -- that deliberately destroys low-level information, and you do that by coming up with tasks like these self-supervision tasks that deliberately exclude this information
727
752
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=727s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
from being used. I think that's what's going on generally in this self-supervised learning thing. Okay. So this here, as you can see, is the neural network that you train. You send both images, the original and the augmented version, through the same neural network, and then you minimize some distance, which is usually the inner product or the Euclidean distance, in this
752
776
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=752s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
embedding space. Okay. And what you train -- you can see right here -- are the parameters of this neural network. So the transformations are fixed or sampled, and the distance is fixed; you train the neural network such that your embeddings minimize this task. Now this is nothing new; this has been used for a couple of years now to get better representations via self-
776
799
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=776s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
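A minimal sketch of the augmentation-consistency idea described above, with a hypothetical tiny encoder standing in for the pretext network. Note that on its own this plain distance objective can collapse to a constant embedding; the actual pretext tasks (e.g., the noise-contrastive estimation mentioned later in the video) add negatives to prevent that:

```python
# Pull an image and its augmented view together in embedding space.
# `encoder` and the augmentations are stand-ins, not the paper's setup.
import torch
import torch.nn as nn
import torchvision.transforms as T

augment = T.Compose([
    T.RandomResizedCrop(32),       # random crop, rescaled to full size
    T.RandomHorizontalFlip(),      # flip so pixel positions move around
])

encoder = nn.Sequential(           # hypothetical tiny encoder
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 64),
)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def pretext_step(batch):
    """One step: minimize the Euclidean distance between each image's
    embedding and the embedding of an augmented view of it.
    Caution: without negatives this objective alone can collapse."""
    z1 = encoder(batch)
    z2 = encoder(torch.stack([augment(img) for img in batch]))
    loss = (z1 - z2).pow(2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```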
hQEnzdLkPj4
supervised learning. But they basically say: we can use this as an initialization step for the clustering procedure, because if we don't do that, we focus on these low-level features. Okay. And notice: you don't need any labels for this procedure; that's why it's called self-supervised. Okay, so the second part is the clustering. Now they cluster, but they
799
828
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=799s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
don't just cluster these representations -- that doesn't perform very well in their experiments. What they instead do is minimize this entire objective right here, and we'll go through it step by step. So they train a new neural network. Okay, this thing right here -- this is a new neural network. So first, you already have the neural network -- what was
828
860
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=828s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
it even called? -- the one that gives you the embedding, with the theta: it's called Φ_θ. It's the same architecture, and I think they initialize one with the other. So in step 1 you get Φ_θ; Φ_θ takes X and gives you a representation of X -- let's call it hidden X. That's via self-supervised learning. But in step 2 you train an entirely new neural
860
891
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=860s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
network, this Φ_η here, and you initialize it with the first one. But now you train it to do the following: again, you want to -- sorry -- you want to maximize the inner product right here. See, that's the inner product. You want to maximize the inner product between two things. Now that's the same thing as before: we wanted to minimize the distance between two things, and with the dot-product
891
920
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=891s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
distance, in that case, you maximize the dot product between two things. And the two things are two images that go through the same neural network, as before, right -- this and this. Now what's different here is that you input one image of the dataset -- that's the same as before, okay, so we input one image -- but before, in the self-supervised learning, we input an
920
944
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=920s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
augmented version of that, and now we input something else: we input this k right here. Now what's k? k comes from this neighbor set of X -- this is the set of neighbors of X, and these neighbors are determined with respect to this neural network right here. Okay, so what you do after step 1 is take your neural network with the good embeddings, and here is your dataset X --
944
976
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=944s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
your dataset X is this list, basically, of all the images in your dataset -- and what you're going to do is take all of them and, using that neural network you just trained, embed them into a latent space right here. Okay, this is the latent space where you have done this self-supervised training. And now, for each image right here -- so if this
976
1,005
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=976s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
is X_i -- you're going to find its K nearest neighbors, and they use, I think, five as a benchmark. So you're going to find its nearest neighbors -- its five nearest neighbors -- and you do this for each image. So this image has these five nearest neighbors, and so on. So in step 2, what you're trying to do is pull together each image and its nearest neighbors --
1,005
1,033
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1005s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
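A minimal sketch of that neighbor-mining step, using scikit-learn's NearestNeighbors; the paper's implementation (backend, distance metric) may differ, and `encoder` is assumed to be the frozen pretext network from step 1:

```python
# Embed the whole dataset with the frozen pretext encoder, then keep
# each image's K nearest neighbors in that embedding space.
import torch
from sklearn.neighbors import NearestNeighbors

@torch.no_grad()
def mine_neighbors(encoder, images, k=5):
    """Return an (N, k) array: row i holds the indices of the k nearest
    neighbors of image i, excluding i itself."""
    embeddings = encoder(images).cpu().numpy()        # (N, D) features
    index = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    _, idx = index.kneighbors(embeddings)             # (N, k+1); col 0 is the point itself
    return idx[:, 1:]                                 # drop the self-match
```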
hQEnzdLkPj4
not in this space directly; rather, you determine which ones are the nearest neighbors with this neural network, which you keep constant -- that's how you determine what the nearest neighbors are, in the first task -- and that is your N_X set for X_i. And in the second step you're trying to make the representations of any image and its nearest neighbors closer to each other.
1,033
1,062
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1033s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
Okay, so with this thing right here, you maximize the inner product between X -- after this neural network -- and a nearest neighbor of X, one that was a nearest neighbor after the first task. Now, the way they cluster here is not just, again, by putting it into an embedding space like we saw before; this thing right here, this neural network, as you can see, outputs a C-
1,062
1,097
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1062s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
dimensional vector in [0, 1]. Now C is the number of classes. You can either know that -- you don't know which class is which, you don't have labels, but you could know how many classes there are -- or you could just guess how many classes there are, and as long as you over-guess, you can still build super-clusters later. So they simply say it's in [0, 1], but they also say it
1,097
1,123
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1097s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
performs a soft assignment, so we're also going to assume that this is normalized. So for each data point X here, you have an image, you put it through this new neural network -- okay, this new neural network -- and it's going to give you basically a histogram: let's say class 1, 2 or 3 -- we guess there are 3 classes -- and
1,123
1,150
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1123s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
it's going to give you an assignment over the 3. And you also take a nearest neighbor -- here is your dataset -- you also take a nearest neighbor of that. So you look in this set N of X and you take a nearest neighbor. Maybe that's a dog -- I really can't draw a dog; yeah, that's the best I can do, I'm sorry -- and you also put that through the same network, and you're
1,150
1,181
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1150s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
saying: since they were nearest neighbors in task 1, they must share some sort of interesting high-level features, because that's what the first task was for. Therefore, I want to make them closer together in the light of this neural network right here. So this is also going to give you an assignment, maybe like this, okay? And now you train this
1,181
1,208
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1181s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
network right here to basically match these two distributions. Okay, so this is now a classifier into C classes -- but we guess C, and we don't have labels; our label is simply: my neighbors from the first task must have the same labels. That's our label. Now, they also have this term right here, which is the entropy over assignments. Okay, as you can
1,208
1,238
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1208s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
see, they minimize the following: they minimize this quantity, which has a negative in front of it, so that means they maximize this log inner product. And they also maximize the entropy, because -- sorry -- so they minimize this thing, but the entropy is a negative quantity, right? So they maximize the entropy, because here is a plus -- and now they minimize the entropy? Let's see what they
1,238
1,271
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1238s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
say: "by minimizing the following objective." Now, entropy is the negative sum of p log p, and this, if this is p -- yes, this is the probability that an image is going to be assigned to cluster c, over the entire dataset. So it's the negative of this quantity -- minus p log p -- and that is the entropy. So they're going to minimize the entropy? Let's see
1,271
1,311
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1271s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
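For reference, the objective under discussion (equation 2 of the paper), reconstructed from the description, with Φ_η(X) ∈ [0,1]^C the soft assignment, N_X the mined neighbor set, and D the dataset:

$$
\Lambda \;=\; -\frac{1}{|\mathcal{D}|}\sum_{X\in\mathcal{D}}\sum_{k\in\mathcal{N}_X}\log\big\langle \Phi_\eta(X),\,\Phi_\eta(k)\big\rangle \;+\; \lambda\sum_{c\in\mathcal{C}}\Phi_\eta^{\prime c}\log\Phi_\eta^{\prime c},
\qquad
\Phi_\eta^{\prime c}\;=\;\frac{1}{|\mathcal{D}|}\sum_{X\in\mathcal{D}}\Phi_\eta^{c}(X).
$$

The second term is the negative entropy of the mean cluster assignment, so minimizing Λ maximizes that entropy, which is exactly the "spread the predictions uniformly" behavior resolved in the next segments.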
hQEnzdLkPj4
what they say: "We include an entropy term (the second term in equation 2), which spreads the predictions uniformly across clusters C." Okay, so what we want is a uniform assignment over classes, which means we should maximize the entropy. Oh yes, okay -- they minimize this thing, and this here is the negative entropy, right? Okay. So basically what they want, over the
1,311
1,350
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1311s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
whole dataset, is that not all of the images end up in the same cluster -- well, this is cluster 1, and then this is cluster 2, and then this is cluster 3 -- so that term counteracts that. Basically, the more evenly spread the entire dataset distribution is, the higher the entropy and the lower the negative entropy, and that's the goal right here. I'm sorry -- this was -- I was
1,350
1,376
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1350s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
confused by the too many negative signs. And then you minimize the entire thing. All right. Now they say a different thing right here. They say this bracket denotes the dot-product operator -- as we saw, it's the dot product between these two distributions right here -- and the first term in equation 2 imposes on this neural network to make consistent predictions for a sample X_i
1,376
1,402
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1376s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
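A minimal PyTorch sketch of that clustering objective: a log-dot-product consistency term over mined neighbors, plus the entropy term on the mean assignment. The weight value is a placeholder, not necessarily the paper's setting:

```python
# Clustering loss as described: maximize log <Phi(x), Phi(k)> for mined
# neighbor pairs, and maximize the entropy of the mean cluster usage.
import torch

def scan_loss(anchor_probs, neighbor_probs, entropy_weight=5.0):
    """anchor_probs, neighbor_probs: (B, C) softmax outputs for images
    and one sampled mined neighbor each. entropy_weight is a placeholder
    for the lambda in the objective."""
    # Consistency: <Phi(x), Phi(k)> is maximal (1.0) when both are
    # one-hot and identical, since the vectors live on the L1 simplex.
    dots = (anchor_probs * neighbor_probs).sum(dim=1)
    consistency = -torch.log(dots.clamp(min=1e-8)).mean()
    # Entropy term: minimizing sum_c p'_c log p'_c on the batch-mean
    # assignment p' maximizes the entropy of cluster usage.
    mean_probs = anchor_probs.mean(dim=0)
    neg_entropy = (mean_probs * torch.log(mean_probs.clamp(min=1e-8))).sum()
    return consistency + entropy_weight * neg_entropy
```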
hQEnzdLkPj4
and its neighboring samples -- the neighbors of X_i. And here is an interesting thing. Note that the dot product will be maximal when the predictions are one-hot -- that means confident -- and assigned to the same cluster -- consistent. So they basically say the objective encourages confidence, because it encourages predictions to be one-hot, and it encourages consistency because, you
1,402
1,428
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1402s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
know, the distributions need to be the same -- they should be in the same cluster. Right. Now, I agree with the consistency: if you make the inner product high, then of course these two histograms will look the same, because these are ultimately vectors -- these are three-dimensional vectors; let's call them two-dimensional vectors, right? So
1,428
1,451
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1428s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
here is class 1, here is class 2. If you make the inner product high, they will agree on their predictions. But I disagree that this encourages anything to be one-hot. If you have two vectors that are both (0, 1), then (0, 1) times (0, 1) -- the inner product is going to be 1. And if you have two assignments that are 0.5 and 0.5, then is it also going to result in
1,451
1,481
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1451s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
an inner product of -- is it 0.5? Right -- so what's the inner product here? The inner product is 0.5 times 0.5 plus 0.5 times 0.5, which is 0.5. Am I dumb? (An embarrassingly long time later:) Oh, it's because of the L1 norm. Okay, okay, we got it. I am too dumb -- yes, of course, I was thinking of these vectors being normalized in L2 space,
1,481
1,523
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1481s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
where their inner products would always be 1. But of course, if you have assignments between classes and it's a probability distribution -- a histogram -- then all of the possible assignments lie on this thing right here, the L1 simplex. Now, the inner product with yourself is of course the squared length of the vector, and the length of a vector that points to one class or the other class is longer than a vector
1,523
1,553
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1523s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
that points in between. So, okay, I see -- that's where they get this "must be one-hot" from. So okay, I'll give that to them: it is actually encouraging one-hot predictions, as long as these things are normalized in L1 space, which they probably are, because they're histograms, right? Yes, that was dumbness on my part; I was trying to make a counterexample, and I'm
1,553
1,581
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1553s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
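The arithmetic from the discussion above, made concrete: on the L1-normalized simplex, confident one-hot agreement scores 1.0 while maximally uncertain agreement scores only 0.5, so maximizing the dot product does push toward one-hot predictions:

```python
import numpy as np

one_hot = np.array([1.0, 0.0])
uniform = np.array([0.5, 0.5])
print(one_hot @ one_hot)   # 1.0 -> confident and consistent
print(uniform @ uniform)   # 0.5 -> consistent but not confident
```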
hQEnzdLkPj4
like: wait a minute, this counterexample is a counterexample to my counterexample. Okay, so, yeah, that's that. So, as you can see, they are of course correct here, and now they do the first experiments. They say, basically, that after the first step -- the self-supervised training -- they can already retrieve sort-of nearest neighbors, and the
1,581
1,613
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1581s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
nearest neighbors of these images right here are the ones that you see on the right -- and after the self-supervised step, these nearest neighbors are already pretty good at sharing the high-level features. Actually crazy good, right? This flute here is in different sizes, as you can see; the fishes aren't all exactly the same; the birds -- so you can see it really focuses on sort of higher-
1,613
1,642
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1613s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
level features, but I guess that's really dependent on this higher-level task. And they also investigate this quantitatively, but I just want to focus on how good this is after only the self-supervised step. And now they do this clustering, and they could already evaluate it right here, because now they have a clustering, right? After
1,642
1,669
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1642s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
this step, they've basically pulled together the neighbors, and they have this neural network that is now assigning classes. So they could already evaluate this, and they are going to do that. But that's not good enough yet; so they do a third step, which is fine-tuning through self-labeling. Now, self-labeling is pretty much exactly what it says: you label your
1,669
1,694
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1669s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
own data with your own classifier. Now that might seem a bit outrageous, and you're basically saying: wait a minute, if I label my own data and learn a classifier on these labels, isn't it just going to come out the same? And the answer is no, right -- because your classifier doesn't -- first of all, if your classifier is something like this, right -- it just happens
1,694
1,727
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1694s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
to be like that -- and you label and learn a new classifier, it is going to be more like this, right? Because a lot of classifiers maximize these distances between the classes. So even if it's like that -- and then the second thing they do is say: okay, there are some points where we are actually more confident, such as this one; we're more confident about that one, also this one;
1,727
1,754
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1727s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
and then this one here is pretty close -- like, we're not super sure, neither about this one -- but we're very confident about these two. So we're only going to use the ones we are in fact confident about to learn the new classifier. Or, basically, you can also weight them and so on, but they go by confidence right here, as you can see in this final algorithm. So this is the entire
1,754
1,784
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1754s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
algorithm -- and I got kicked away from the algorithm; there we go. All right: Semantic Clustering by Adopting Nearest neighbors, their SCAN algorithm. So in the first step you do this pretext task -- this is the self-supervision, the representation learning -- for your entire dataset. No, sorry -- this is: you optimize your neural network with task T;
1,784
1,819
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1784s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
that's just self-supervised representation learning. Okay. Then, second, you determine the nearest-neighbor set for each X. Now, they also augment the data in that step; they do heavy data augmentation and so on. Also in the third step, the self-labeling, they do data augmentation. There are a lot of tricks in here, but ultimately the base
1,819
1,843
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1819s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
algorithm goes like this: you find your neighbor sets for each X, and then, while your clustering loss decreases, you update this clustering neural network with the loss that we saw -- the loss where you make the nearest neighbors closer to each other while still keeping the entropy high, okay? And then, after you've done this, you go through -- and it says:
1,843
1,873
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1843s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
while the length of Y increases -- what's Y? Y is all the data points that are above a certain threshold. Now you're going to filter the dataset to those above a certain threshold, and that's your dataset Y, and you retrain this same neural network: you basically fine-tune it with the cross-entropy loss on your own labels. So now you have labels Y -- okay, it's not labels as such; you
1,873
1,910
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1873s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
have the cross-entropy loss between the assignments of this network and the assignments on your filtered dataset. Okay, so you basically do the same task, but you filter by confidence, and they use a threshold -- I think of 0.7 or something like this. Now let's go into the experiments. The experiments look as follows. They do some ablations to find out where in their method the gains come
1,910
1,946
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1910s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
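A minimal sketch of that self-labeling step: keep only the predictions the clustering network is confident about, treat them as pseudo-labels, and fine-tune with cross-entropy. The 0.7 threshold is the transcript's guess, and `model` is a hypothetical stand-in; the full method also applies data augmentation here, which this sketch omits:

```python
import torch
import torch.nn.functional as F

def self_label_step(model, optimizer, images, threshold=0.7):
    """One self-labeling step: filter by confidence, then train the
    network on its own hard pseudo-labels."""
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)
        conf, pseudo = probs.max(dim=1)          # confidence + hard label
        keep = conf > threshold                  # filter by confidence
    if keep.sum() == 0:
        return None                              # nothing confident yet
    logits = model(images[keep])                 # fresh pass, grads enabled
    loss = F.cross_entropy(logits, pseudo[keep]) # train on own labels
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```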
hQEnzdLkPj4
from, and we'll just quickly go through them. If they just do the self-supervision at the beginning and then just do k-means clustering on top of that, that gives them, on CIFAR-10, a 35.9% accuracy -- so, not very good. You can't just cluster on top of these representations and then be done. If they do what they call the sample-and-
1,946
1,976
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1946s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
batch entropy loss -- this basically means you do not care about the nearest neighbors; you do this entire thing, but you only make an image's prediction close to itself and its augmentations, so you don't use any nearest-neighbor information -- that also doesn't work. Like, I wouldn't pay too much attention to whether the numbers are 10, 20 or 30; it just doesn't
1,976
1,999
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1976s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
work. Now, if you use the SCAN loss, all of a sudden you get into a regime where there is actual signal -- this is significantly above random guessing. And if you use strong data augmentation -- as I said, a lot of this has these tricks in it, of what kind of data augmentation you do and so on; so never forget that these papers, besides
1,999
2,030
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=1999s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
their idea, put in all the tricks they can -- you get 10% more. And then, if you do this self-labeling step, you get another 10% more, and this is fairly respectable: like 83.5% without ever seeing labels. It's fairly good -- but of course there are only ten classes right here, so keep that in mind; they will do it on ImageNet later. And they investigate what kind of self-
2,030
2,062
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2030s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
supervision tasks at the beginning are important. They investigate things like RotNet, feature decoupling and noise-contrastive estimation, of which noise-contrastive estimation is the best. Noise-contrastive estimation, I think, is just where, as we said, you input an image and then kind of noisy versions of it, augmented in various ways, and then you classify them together, and
2,062
2,088
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2062s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
these methods have been very successful in the last few years. Yeah. So they have various investigations into their algorithm. I want to point out this here: this is the accuracy versus confidence after the complete clustering step -- so this is now the third step, the self-labeling -- and you can see right here, as the confidence of the network goes up,
2,088
2,121
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2088s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
the actual accuracy goes up as well. So that means the network, after the clustering, really is more confident about the points it can classify more accurately. There's a correlation between how confident the network is and the actual label of the point, which is remarkable because it has never seen the label. But also see how the range here is quite small: with the standard
2,121
2,148
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2121s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
augmentation, it goes like from here to here. So where you set that threshold is fairly important; it might be quite brittle here, because you need to set the threshold right, such that some points are below it and some are above it, and you don't want to pull in points where you're not confident -- because if you pull in points from here, you only have the correct label for 75% or
2,148
2,179
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2148s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
something of them, and that means if you now self-label and learn on them, you're going to learn the wrong signal. So this step seems fairly brittle, honestly, but I don't know. Of course, they go on and investigate various things, such as how many clusters do you need, or how many nearest neighbors -- sorry -- do you need, this number K here. And you can see that if you have
2,179
2,212
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2179s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
zero neighbors, then you're doing a lot worse than if you have, let's say, five nearest neighbors. So the jump here, as you can see, is fairly high in all the datasets, but after that it sort of doesn't really matter much. So it seems like five nearest neighbors should be enough for most things. And here they just show that when they remove the false positives, their algorithm
2,212
2,238
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2212s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
actually converges to the correct clustering, the correct accuracy -- which is not surprising: if you remove the wrong samples, the ones that are wrong, then the rest of the samples are going to be right. I think that's just showing that it doesn't go into some kind of crazy downward-spiral loop or something like this. But still, it's just kind of funny. Okay. So they investigate how much
2,238
2,263
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2238s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
they improve, and they improve by quite a lot above the previous methods. They have a lot of previous methods -- this includes things like k-means and so on, GANs, the DeepCluster that we spoke about -- and this method already gets, as you can see, fairly close to good accuracy: you have like 88.6% accuracy, and that's, you know,
2,263
2,292
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2263s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
fairly remarkable on CIFAR-10 without seeing the labels. But we'll go on, and now they go into ImageNet. Now ImageNet, of course, has way more classes: it has a thousand classes, compared to CIFAR-10's ten classes. So if you think, you know, clustering ten classes -- which are fairly far apart from each other -- might work with various techniques, ImageNet with a thousand classes is way
2,292
2,319
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2292s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
more difficult. So they subsample it to 50, 100 and 200 classes, and they get okay accuracy. As you can see, they get 81% for 50 classes, where a supervised baseline would get 86%; for 200 classes they get 69%, where a supervised baseline would get 76%. So it's there, and that's quite remarkable for this low number of classes. And they figure
2,319
2,361
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2319s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
out that if they look for the samples that are kind of in the middle of their cluster, they get these prototypes right here. You can see all of these images -- I don't know if you know ImageNet, but some of the images really only have a part of the object, and so on -- so here, with the prototypical things, you really get a centered, clear shot of the object, with clearly visible features, and
2,361
2,387
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2361s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg
hQEnzdLkPj4
so on. So this sort of repeats the fact that this clustering really does go on semantic information -- of course the labels here are, you know, from the test label set; the network can't figure those out. And then they go for a thousand classes, and with a thousand classes it doesn't really work, because there might be just too many confusions right here. But they do
2,387
2,421
https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2387s
Learning To Classify Images Without Labels (Paper Explained)
https://i.ytimg.com/vi/h…axresdefault.jpg