Dataset columns (with observed string lengths):
CHANNEL_NAME: string, 1 unique value
URL: string, length 43
TITLE: string, length 12 to 100
DESCRIPTION: string, length 66 to 5k
TRANSCRIPTION: string, length 150 to 90.9k
SEGMENTS: string, length 1.05k to 146k
Yannic Kilcher
https://www.youtube.com/watch?v=TOo-HnjjuhU
[ML News] Multiplayer Stable Diffusion | OpenAI needs more funding | Text-to-Video models incoming
#mlnews #ai #mlinpl Your news from the world of Machine Learning!

OUTLINE:
0:00 - Introduction
1:25 - Stable Diffusion Multiplayer
2:15 - Huggingface: DOI for Models & Datasets
3:10 - OpenAI asks for more funding
4:25 - The Stack: Source Code Dataset
6:30 - Google Vizier Open-Sourced
7:10 - New Models
11:50 - Helpful Things
20:30 - Prompt Databases
22:15 - Lexicap by Karpathy

References:
Stable Diffusion Multiplayer
https://huggingface.co/spaces/huggingface-projects/stable-diffusion-multiplayer?roomid=room-0
Huggingface: DOI for Models & Datasets
https://huggingface.co/blog/introducing-doi
OpenAI asks for more funding
https://www.theinformation.com/articles/openai-valued-at-nearly-20-billion-in-advanced-talks-with-microsoft-for-more-funding
https://www.wsj.com/articles/microsoft-in-advanced-talks-to-increase-investment-in-openai-11666299548
The Stack: Source Code Dataset
https://huggingface.co/datasets/bigcode/the-stack?utm_source=pocket_mylist
Google Vizier Open-Sourced
https://github.com/google/vizier
New Models
https://imagen.research.google/video/
https://phenaki.github.io/
https://makeavideo.studio/?utm_source=pocket_mylist
https://dreamfusion3d.github.io/
https://arxiv.org/pdf/2210.15257.pdf
https://huggingface.co/spaces/PaddlePaddle/ERNIE-ViLG
https://github.com/PaddlePaddle/PaddleHub
Helpful Things
https://thecharlieblake.co.uk/visualising-ml-number-formats
https://griddly.ai/
https://engineering.fb.com/2022/10/18/open-source/ocp-summit-2022-grand-teton/?utm_source=twitter&utm_medium=organic_social&utm_campaign=eng2022h2
https://twitter.com/psuraj28/status/1580640841583902720?utm_source=pocket_mylist
https://huggingface.co/blog/stable_diffusion_jax
https://github.com/Lightning-AI/stable-diffusion-deploy
https://lightning.ai/docs/stable/
https://github.com/CarperAI/trlx
https://github.com/DLR-RM/rl-baselines3-zoo
https://github.com/Sea-Snell/JAXSeq
https://www.reddit.com/r/MachineLearning/comments/xoitw9/p_albumentations_13_is_released_a_python_library/?utm_source=pocket_mylist
https://twitter.com/Warvito/status/1570691960792580096?utm_source=pocket_mylist
https://arxiv.org/abs/2209.07162
https://academictorrents.com/details/63aeb864bbe2115ded0aa0d7d36334c026f0660b
https://huggingface.co/spaces/THUDM/CodeGeeX
https://ai.facebook.com/blog/gpu-inference-engine-nvidia-amd-open-source/?utm_source=twitter&utm_medium=organic_social&utm_campaign=blog
https://github.com/nerfstudio-project/nerfstudio
https://www.nerfacc.com/en/latest/
https://github.com/dstackai/dstack
https://www.reddit.com/r/MachineLearning/comments/yeyxlo/p_openai_whisper_3x_cpu_inference_speedup/?utm_source=pocket_mylist
https://github.com/MiscellaneousStuff/openai-whisper-cpu/issues/1
Prompt Databases
https://huggingface.co/datasets/poloclub/diffusiondb
https://publicprompts.art/
https://visualise.ai/
https://twitter.com/SamuelAlbanie/status/1574111928431026179/photo/1
Lexicap by Karpathy
https://karpathy.ai/lexicap/0139-large.html

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
A lot of text-to-video models have recently come out, but not only that, a lot of other stuff has happened too, such as multiplayer Stable Diffusion, and OpenAI is looking for even more money from Microsoft. Stay tuned, this is ML News.

Hello everyone, as you can see, I'm not in my usual setting, I'm actually currently in Poland. It is the last day of ML in PL, the Machine Learning in Poland conference. This conference is absolutely glorious, absolutely fantastic. It was really cool being here, it is over now, I'm going home, but next year, please be here. Or if you're a company that's looking to get rid of some money and sponsor an awesome conference, the ML in PL conference has been organized at least as well as any of the NeurIPSes or ICMLs that I've ever been to. And it is very likely that this conference is going to grow and become more notable in the next few years. There was a great lineup of keynote speakers, tutorials, and other content, and I even had the pleasure of joining in on a bit of a concert at one of the poster sessions, which was certainly a unique experience. So thanks again to the ML in PL organizers, see you there next year, alright?

So Stable Diffusion is going multiplayer. This is a Hugging Face space: there's essentially a giant canvas, and you can just come in here, drag this square somewhere, give it some kind of a description, and it will just kind of fit what you ask for into what's already there. All of this is collectively drawn by people, and I'm always afraid of it, because I don't want to destroy something, right? Because all of it is just very, very cool, what people come up with. It's another example of something I would have never thought of, but because stuff is open and released, this can be built. So absolutely cool, give it a try, and maybe this inspires you to build something that is even cooler than this. I don't know what it's going to be, but I'm sure one of you has a great idea right now.

In other Hugging Face news, they introduce DOIs, digital object identifiers, for data sets and models. DOIs are sort of a standard way in scientific literature of addressing things like papers and artifacts, and now Hugging Face is introducing these identifiers for the models and data sets on their hub. So on the hub, you're going to see this little box with which you can generate, essentially, a unique identifier for a model or a data set that is never going to change in the future. Now, you can outdate it, so you can say, well, this one is deprecated, I have a new version of this model, but it remains a unique identifier for that exact model that you have. And this is really good if you want to put it inside papers, so as to make them reproducible. And given that it is a standard, it just integrates with the whole rest of the scientific ecosystem. So definitely a big plus for anyone who does work in research.

The Wall Street Journal writes, Microsoft in advanced talks to increase investment in OpenAI. There isn't much detail in this article, but OpenAI is apparently asking for more money, more investment. Microsoft has previously invested about a billion dollars into OpenAI, and on top of that, probably rather preferential access to Azure, in exchange for OpenAI providing Microsoft preferential access to its products. It's funny because here it says, last week Microsoft announced it was integrating DALL-E 2 with various products, including Microsoft Designer, a new graphic design app, which is cool.
And the Image Creator for the Bing search app. Is that their big plan? Is that the $1 billion investment to get Bing off the ground, finally? I'm not sure. Now, keep in mind that just because OpenAI goes and asks for more money, that doesn't mean that they're bankrupt soon. It could also mean that they're planning for an even bigger push. And I don't know if OpenAI can still be considered a startup, but startups often do take on more money whenever they want to start scaling even more. Now, how much more OpenAI wants to scale, I don't know. It could also be that they're just out of money and need more.

The Stack is a data set by the BigCode project, and it's three terabytes of permissively licensed source code. This data set is fully open; you can download it if you want to train anything like a Codex model or something similar. The data set pays specific attention to the licensing of the code that is included: the code is MIT licensed, Apache licensed, BSD-3 licensed, essentially licensed such that you can do whatever you want with it. Now, that doesn't get you out of the weeds legally of doing anything and everything, because you still have to do things like provide a copyright notice if you copy one of these pieces of code verbatim. The Stack not only pays attention to this when the data is collected initially; as you can see on the Hugging Face entry on the Hugging Face hub, there are also terms of use for the Stack. And one of the terms of use is that you must always update your own copy of the Stack to the most recent usable version. This is because they have a form where you, as a source code author, can go and request removal of your source code from the Stack. So even if you've licensed your code under MIT, they don't want anyone's code in there who doesn't want to be part of the Stack. You can go and request that your code be removed, they will then do that and update the data set, and by agreeing to these terms when you download the data set, you essentially agree to always download and use the newest version, so as to propagate that removal of code. Now, as I understand it, I'm not a lawyer and this is not legal advice, but as I understand it, you are entering into a binding agreement by clicking this checkbox and clicking this button. So think about whether you want that or not, but it is good that another option is out there next to just scraping GitHub yourself, I guess.
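If you want to poke at the data without grabbing the full three terabytes, here is a minimal sketch using the Hugging Face datasets library in streaming mode. Note the data set is gated, so you need to be logged in and have accepted the terms first, and the per-language subset and field name shown here are assumptions; check the data set card for the actual layout.

```python
# Hedged sketch: stream a small slice of The Stack instead of downloading ~3 TB.
# The data set is gated, so run `huggingface-cli login` and accept the terms first.
# The "data/python" subdirectory and the "content" field are assumptions based on
# the data set card's general layout; double-check them before relying on this.
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack",
    data_dir="data/python",   # assumed per-language subset
    split="train",
    streaming=True,
    use_auth_token=True,
)

for i, example in enumerate(ds):
    print(example["content"][:200])  # assumed field holding the source code
    if i == 2:
        break
```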
Google releases Vizier open source. Vizier is a black box optimizer that works at scale: when many, many different experiments need to be hyperparameter optimized, Vizier essentially decides which hyperparameters to try next. You can run it as a service if you have a lot of parallel workers and you want to run hyperparameter optimization across them. They have APIs for users, a user here being essentially someone who wants to do hyperparameter optimization, and they have APIs for developers, which means you can plug in new optimization algorithms. So if you're a developer of a black box optimization algorithm, you can integrate it with Vizier, and they also have a benchmarking API. Apparently this thing has been running inside of Google for a while, and now they've finally decided to release it open source, so it's certainly tried and tested.

All right, now we get into the video models. There have been a few of them; they were released a while back, but I'll just summarize them briefly here.

Imagen Video is a text-to-video model. You can see a bunch of samples right here, and they look really, really cool. This is a video diffusion model, but as far as I understand it, it's kind of a combination of fully convolutional networks and super resolution networks in order to get this effect. They describe this further in a few diagrams on their website: Imagen Video uses a video U-Net architecture to capture spatial fidelity and temporal dynamics. Temporal self-attention is used in the base video diffusion model, while temporal convolutions are used in the temporal and spatial super resolution models. There is a paper to go along with it if you are interested.

Also from Google Research is Phenaki. I'm not exactly sure how to pronounce that, but it is a different text-to-video model that can produce up to minutes-long videos with changing text. Here you can see a prompt that constantly changes, and as it does, the video changes as well. Rather than being a diffusion model, this model compresses video to a tokenized representation and then essentially uses a causal autoregressive language model to continue that tokenized representation. With that, they're able to produce essentially unbounded video, as the beginning of the video simply drops out of the context. But as long as you keep feeding in, as a side input, more and more text that you want to be produced, you can see that the video keeps changing, keeps adapting, and keeps being faithful to the currently in-focus part of the prompt. What's interesting is that the training data seems to be mostly text-image pairs, with just a few text-video pairs in the mix.

Now, we're not done with the text-to-video models yet. Meta AI released Make-A-Video, yet another text-to-video model. And this one is also a bit special, because it essentially only produces a single image from text. So it is essentially a text-to-image model followed by an unsupervised video generator that starts from that image. The text-to-image part is a text-to-image model as we know them, but the video model is unsupervised: it simply learns from unlabeled video data how video behaves, and is then able to take a single picture, treat it as a single frame of a video, and make the entire video out of it. The results look really cool. What I think is cool about all of these works is that they each take a different approach to the same problem. The results they produce are all very good, and it's going to be interesting to see how this text-to-video problem will ultimately be, let's say, canonically solved. I don't know, but I'm keeping my eyes open.

Now, slightly different, but not entirely different, is DreamFusion. This isn't text-to-video, this is text-to-3D. And if you think that is relatively straightforward: no, none of these things actually involve 3D training data, at least as far as I can understand it. Rather, what they do is treat the entire scene essentially like a NeRF. They start with a random 3D scene: you pick your 3D scene, you fill a bunch of voxels and don't fill the other voxels, and then you optimize that 3D scene so that renders of it satisfy a text-to-image model, with the renders essentially acting as photographs of that scene. So it is a lot like a NeRF, except that you don't have pictures; you optimize against a text-to-image model rather than optimizing to match actual images. That is a really cool idea, and it actually seems to work pretty great.
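Just to make that "optimize a scene against a text-to-image model" idea concrete, here is a purely illustrative toy sketch of the loop, not DreamFusion's actual method or code: the scene is a toy voxel grid, render() is a trivial projection, and the guidance loss is a dummy stand-in for the score-distillation signal that the real system gets from a pretrained text-to-image diffusion model.

```python
# Illustrative-only sketch of a DreamFusion-style loop: optimize scene parameters
# so that rendered views please a text-to-image model. All components are toys.
import torch

scene = torch.rand(32, 32, 32, requires_grad=True)   # toy voxel densities (the "3D scene")
optimizer = torch.optim.Adam([scene], lr=1e-2)

def render(voxels: torch.Tensor, axis: int) -> torch.Tensor:
    """Toy differentiable 'camera': integrate densities along one axis."""
    return voxels.clamp(0, 1).mean(dim=axis)

def text_to_image_guidance(image: torch.Tensor) -> torch.Tensor:
    """Placeholder for the real guidance: DreamFusion scores the render with a
    pretrained text-to-image diffusion model conditioned on the prompt and
    backpropagates that signal (score distillation). Here: a dummy loss."""
    target = torch.full_like(image, 0.5)              # pretend the prompt wants mid-gray
    return ((image - target) ** 2).mean()

for step in range(200):
    axis = step % 3                                   # pretend to move the camera around
    loss = text_to_image_guidance(render(scene, axis))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final guidance loss:", loss.item())
```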
Now, there's other work still improving text-to-image diffusion models themselves, and ERNIE-ViLG 2.0 is one of them. This is an iteration of the previous model, and it uses a mixture of denoising experts. I don't want to go too much into it, but you can definitely see right here that the results are breathtaking, very good, and at a great resolution. There is a demo on the Hugging Face hub, but as far as I understand, the model itself isn't released; the demo and the code that they put on GitHub simply call some API where the model is actually hosted.

This is a neat tool, not directly related to machine learning, but if you've ever wondered what the difference between a bfloat16 and an fp16 is (I never knew), Charlie Blake has a very cool tool on his blog that essentially shows you the different trade-offs you make when you choose a number format. It shows you, for the different formats, what kind of ranges you can represent with them, where they're good and where they're not. You can see here clearly the difference between bfloat16 and fp16: one can represent a very large range of numbers, and the other one can represent only a small range of numbers, but with more precision.
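If you'd rather see those trade-offs as numbers than as a chart, PyTorch can print them directly; a tiny sketch:

```python
# Compare the dynamic range and precision of bfloat16 vs. float16 (float32 for reference).
import torch

for dtype in (torch.bfloat16, torch.float16, torch.float32):
    info = torch.finfo(dtype)
    # `max` shows the representable range, `eps` the relative precision near 1.0.
    print(f"{str(dtype):>15}  max={info.max:.3e}  eps={info.eps:.3e}")

# bfloat16 keeps roughly float32's ~1e38 range but with much coarser precision
# (eps ~7.8e-3), while float16 tops out around 6.5e4 yet resolves ~9.8e-4 near 1.0.
```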
Griddly JS is a tool that allows you to interact with grid-world reinforcement learning environments. There are a number of cool features right here: you can edit levels directly, you can try out the levels, you can debug your policies, you can record trajectories. So right now I don't have a trajectory, but what I can do is hit record right here, and I can move this thing around, here, here, going into the lava, and then I die. And you can see the steps I've taken right here. So you can use this for various kinds of things: debugging, investigating, and so on. If you are into reinforcement learning and you work with grid worlds, then by all means check this out.

Meta announces their new box, I guess. This is the box: Grand Teton, an open hardware design for deep learning. Essentially, they release the architecture open source. Their engineers have sat down and thought long and hard about what it takes for a great machine learning system, much like the older DGX boxes, and they essentially tell you: look, we believe that this combination of hardware, these processors, these GPUs connected like this, with these power supplies, will be a very good base for doing research. They're releasing these specs essentially for you to buy or assemble, I guess, whatever you want to do with it. But I can tell you, it is relatively hard to decide on exactly every component of such hardware, and it is really great that people who are very competent in this actually think about it and give their suggestions. So if you have a lab or a company and you really want to buy your own hardware, maybe this is a good option for you.

Hugging Face Diffusers, from version 0.5.1 onward, supports running diffusion models in JAX. If you like JAX and you like Stable Diffusion, go for it.

Muse is an open source Stable Diffusion production server. Well, it is not so much a server as it is sort of a tutorial on how to bring up a server. It is based on the Lightning Apps framework, which is open source and is kind of an easy way to bring together all the components you need to deploy machine learning things. The repository is essentially a specification of how to pull up a Stable Diffusion server, so if you want to deploy Stable Diffusion yourself, this is probably the fastest and simplest way to do so.

trlX by CarperAI is a library that allows you to do reinforcement learning on text models. You can give either some sort of a reward function, or a data set that assigns values to expert demonstrations, and you can train a language model to incorporate that. Doing reinforcement learning on text models is a relatively new domain, but it is cool to have another library to tackle the problem.

RL Baselines3 Zoo is a training framework for Stable Baselines3 reinforcement learning agents. Stable Baselines is a library that tries to give reference implementations of reinforcement learning algorithms, because they're very tricky and very hard to get right, so these are good, solid and performant reference implementations. Stable Baselines3 is the third iteration of it, and this repository right here, the zoo, contains a number of surrounding things, like scripts that make it very easy to interact with it, but also pre-trained agents and prepared hyperparameter settings that work well in different standard environments.
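To give a feel for what those reference implementations look like in code, here is a minimal Stable Baselines3 sketch; the environment and timestep budget are placeholders, not a tuned setup, and the zoo's scripts wrap exactly this kind of loop with prepared hyperparameters.

```python
# Minimal Stable Baselines3 example: train PPO on a toy Gym environment.
# The environment id and timestep budget are placeholders, not a tuned setup;
# RL Baselines3 Zoo ships scripts and hyperparameters that do this properly.
from stable_baselines3 import PPO

model = PPO("MlpPolicy", "CartPole-v1", verbose=1)  # SB3 builds the env from the id
model.learn(total_timesteps=10_000)
model.save("ppo_cartpole")

# The saved policy can be reloaded later like this.
model = PPO.load("ppo_cartpole")
```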
JAXSeq is a library that allows you to train very large language models in JAX. The cool thing is that with this library, you essentially get things like data parallelism and model parallelism for free; you can just specify them and trade them off however you want. This is due to the power and simplicity of JAX.

Albumentations, I hope I'm pronouncing that correctly, 1.3 is out, and it introduces a bunch of new image augmentations. This is a library for image augmentations, so it's good that they introduce new augmentations that fit well with the augmentations they already have. There are also a bunch of bug fixes and more. If you're looking for image augmentations in Python, this might be a good library.

This is a really cool thing you can do with diffusion models: these people have trained diffusion models on brain images and were able to create new synthetic brain images with a degree of controllability. There is a paper on arXiv if you are interested, and you can also download the data set of 100,000 synthetic brain images.

CodeGeeX is a multilingual code generation model. It is, as it says, essentially something similar to Codex, but it is released: you can actually go and download the model and use it yourself.

Meta AI releases AITemplate, which is an inference engine. The goal here is to make inference faster, and they get a lot of speedups over just running standard inference. It does two things. First, it optimizes your computation graph: if your computation graph contains a lot of little operations that could be fused together into something that's really optimal for a given hardware, or that can simply be expressed in a smarter way, then a graph optimizer can do that. In a second step, there is a compiler that compiles all of this to high-performance C++ code that runs on backend hardware, such as an NVIDIA GPU using CUDA or even an AMD GPU. So if fast inference is a concern to you, this is definitely a thing to check out.

Nerfstudio describes itself as a collaboration-friendly studio for NeRFs, but it is more like a collection, an entire collection of software for handling NeRFs: anything from training and validating to even experiencing them yourself. You can see they have a viewer that allows you to explore the NeRFs that you create and make little videos from them, but really it covers everything to do with NeRFs.

Speaking of NeRFs, NerfAcc is a PyTorch NeRF acceleration toolbox. It gets significant speedups over simply using the NeRF code that's out there; for example, a vanilla NeRF model with an 8-layer multi-layer perceptron can be trained to better quality in one hour, rather than the one to two days as in the paper.

dstack, whose logo doesn't exactly work on a dark background, is a library that wants to standardize ML workflows that you run in the cloud. Essentially, you check your workflows into GitHub, and dstack helps you run them uniformly anywhere. In a workflow, you can specify things like the workflow name, obviously, but then you can say, okay, my provider is bash, so this is essentially a bash script, and then: what are the commands? I want to pip install some stuff, I want to run this training script right here. But it also has things like artifacts, and you can also specify things like, I want to load data from this S3 bucket over there, I want to run on this cloud over there. So all of this is quite geared towards machine learning. It's certainly not the first workflow engine, or the first take on "hey, let's check our things into source control", but it is very targeted at running ML workflows in the cloud.

Several people have figured out massive speedups for the OpenAI Whisper model. For example, this person here has figured out a 3x speedup on CPU inference, and refers to a GitHub thread where someone else has found an even bigger, 3.25x, speedup. Again, it's very cool to see what people do when you just give them the model.
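I won't claim this is exactly what the linked threads did, but a standard recipe for this kind of CPU speedup is dynamic int8 quantization of the model's linear layers; a hedged sketch along those lines, with model size and audio path as placeholders:

```python
# Sketch: dynamically quantize Whisper's Linear layers to int8 for CPU inference.
# This is one common recipe for such CPU speedups, not necessarily the exact
# change from the linked thread; model size and audio file are placeholders.
import torch
import whisper

model = whisper.load_model("base", device="cpu")

quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

result = quantized.transcribe("episode.mp3", fp16=False)  # placeholder audio file
print(result["text"][:200])
```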
Lastly, I want to point to a couple of databases for stuff, mainly around Stable Diffusion. DiffusionDB is on the Hugging Face hub; it's a data set of prompts that have been entered by real users into Stable Diffusion, along with the corresponding images that they got out. Public Prompts, that's publicprompts.art in your browser, is a database of free prompts and free models. These models are mostly trained using DreamBooth, but if you're looking for inspiration for prompts and for how they turn out, then this is maybe a place to go. Likewise, visualise.ai is a website that goes a little bit more business-y: it lets you create some stuff for free with Stable Diffusion and the like, but it also acts as a bit of a marketplace for these things, such that you could also buy them or sell them. It's cool to see that different business models are trying to spring up around this ecosystem. Ultimately, someone will figure out how to really make money off of this stuff, but you know, it's good to be part of the time when people are just trying stuff and seeing what happens, not only on the research side, but also on the business side.

Also, BigScience has released PromptSource, which is an IDE for natural language prompts. This is a way to give people a bit more help and a bit more standardization when they use prompts to achieve certain goals, for example when they use prompts to tackle some of the NLP challenges that are now more and more phrased simply as prompts into these large language models, rather than as data that goes into a specially trained model for that task. So if you find yourself in this situation or a similar one, then PromptSource may be for you.

And lastly, this is a database of all Lex Fridman podcasts, transcribed. This is the website of Andrej Karpathy, and he used a simple combination of a script to download the audio from YouTube and OpenAI's Whisper to transcribe all of Lex Fridman's podcast episodes. You can go to any one of them, you can click, and they are there with time annotations and all. It's a very simple but very cool project. Thank you, Andrej.

And I thank all of you for listening. I'll be home again next week, and till then, stay hydrated. Bye-bye.
[{"start": 0.0, "end": 6.8, "text": " A lot of the text video models have recently come out, but not only that, a lot of other stuff has happened too,"}, {"start": 6.8, "end": 14.36, "text": " such as multiplayer, stable, diffusion, and OpenAI is looking for even more money from Microsoft."}, {"start": 14.36, "end": 16.36, "text": " Stay tuned, this is ML News."}, {"start": 20.84, "end": 25.84, "text": " Hello everyone, as you can see, I'm not in my usual setting, I'm actually currently in Poland."}, {"start": 25.84, "end": 30.6, "text": " It is the last day of the E-Wide, of the machine learning in Poland conference."}, {"start": 30.6, "end": 34.480000000000004, "text": " This conference is absolutely glorious, absolutely fantastic."}, {"start": 34.480000000000004, "end": 40.32, "text": " It was really cool being here, it is over now, I'm going home, but next year, please be here."}, {"start": 40.32, "end": 44.480000000000004, "text": " Or if you're a company that's looking to get rid of some money and sponsor an awesome conference,"}, {"start": 44.480000000000004, "end": 53.24, "text": " the ML and PL conference has been organized at least as well as any of the new ripses or ICMLs that I've ever been to."}, {"start": 53.24, "end": 60.2, "text": " And it is very likely that this conference is going to grow and become more notorious in the next few years."}, {"start": 60.2, "end": 64.2, "text": " So it was a great lineup of keynote speakers, decorials, and other content,"}, {"start": 64.2, "end": 69.84, "text": " and I even had the pleasure of joining in to a bit of a concert at one of the poster sessions,"}, {"start": 69.84, "end": 71.92, "text": " which was certainly a unique experience."}, {"start": 71.92, "end": 76.52000000000001, "text": " So thanks again to the ML and PL organizers, see you there next year, alright?"}, {"start": 76.52000000000001, "end": 80.92, "text": " So stable diffusion is going multiplayer, this is a hugging phase space."}, {"start": 80.92, "end": 87.16, "text": " And there's essentially a giant canvas, and you can just come in here and you drag this square somewhere,"}, {"start": 87.16, "end": 92.44, "text": " and you give it some kind of a description, and they will just kind of fit in what you're doing."}, {"start": 92.44, "end": 100.0, "text": " All of this is collectively drawn by people, and I'm always afraid, because I don't want to destroy something, right?"}, {"start": 100.0, "end": 104.2, "text": " Because all of this is just very, very cool at what people come up with."}, {"start": 104.2, "end": 113.56, "text": " Just another example of something that I would have never thought of, but because stuff is open and release, this is, you know, this can be built."}, {"start": 113.56, "end": 120.04, "text": " So absolutely cool, give it a try, and maybe this inspires you to build something that is even cooler than this."}, {"start": 120.04, "end": 124.88, "text": " I don't know what it's going to be, but I'm sure one of you has a great idea right now."}, {"start": 124.88, "end": 131.88, "text": " Another hugging phase news, they introduce DOI, digital object, and it fires project sets and models."}, {"start": 131.88, "end": 140.24, "text": " DOIs are sort of a standard way in scientific literature of addressing things like addressing papers, addressing artifacts,"}, {"start": 140.24, "end": 144.68, "text": " and now hugging phase is introducing these things for their models and data sets on the hub."}, {"start": 144.68, "end": 149.4, "text": " So on the hub, 
you're going to see this little box with which you can generate."}, {"start": 149.4, "end": 156.0, "text": " Essentially, it's a UUID for a model or a data set that is never going to change in the future."}, {"start": 156.0, "end": 159.64, "text": " Now, you can outdate it so you can say, well, this one is deprecated."}, {"start": 159.64, "end": 161.64, "text": " I have a new version of this model."}, {"start": 161.64, "end": 165.92, "text": " But it is a unique identifier to that model that you have."}, {"start": 165.92, "end": 171.07999999999998, "text": " And this is really good if you want to put it inside the papers, so as to make it reproducible."}, {"start": 171.07999999999998, "end": 176.88, "text": " And given that it is a standard, it just incorporates with the whole rest of the scientific ecosystem."}, {"start": 176.88, "end": 181.51999999999998, "text": " So definitely a big fuss for anyone who does work in research."}, {"start": 181.51999999999998, "end": 187.51999999999998, "text": " Wall Street Journal writes, Microsoft in advance talks to increase investment in OpenAI."}, {"start": 187.52, "end": 194.32000000000002, "text": " This article essentially, there isn't much detail about OpenAI is apparently asking for more money, more invested."}, {"start": 194.32000000000002, "end": 198.52, "text": " Microsoft has previously invested about $8 billion into Microsoft."}, {"start": 198.52, "end": 208.20000000000002, "text": " And on top of that, probably really preferential access to Azure in exchange that OpenAI will provide preferential access to Microsoft for its product."}, {"start": 208.20000000000002, "end": 215.52, "text": " It's funny because here it says, last week Microsoft announced it was integrating Dolly II with various products, including Microsoft Design,"}, {"start": 215.52, "end": 218.12, "text": " a new graphic designer, which is cool."}, {"start": 218.12, "end": 223.48000000000002, "text": " And the image creator for Search App Bing, is that their big plan?"}, {"start": 223.48000000000002, "end": 228.04000000000002, "text": " Is that the $1 billion investment to get Bing off the ground finally?"}, {"start": 228.04000000000002, "end": 228.72, "text": " I'm not sure."}, {"start": 228.72, "end": 235.92000000000002, "text": " Now, keep in mind that just because OpenAI goes and asks for more money, that doesn't mean that they're bankrupt-zoony."}, {"start": 235.92000000000002, "end": 239.92000000000002, "text": " It could also mean that they're planning for an even bigger push startups."}, {"start": 239.92000000000002, "end": 245.48000000000002, "text": " And I don't know if OpenAI can still be considered a startup, but startups often they do take on"}, {"start": 245.48, "end": 249.28, "text": " more money whenever they want to start scaling even more."}, {"start": 249.28, "end": 252.04, "text": " Now, how much OpenAI wants to scale even more?"}, {"start": 252.04, "end": 252.72, "text": " I don't know."}, {"start": 252.72, "end": 256.68, "text": " It could also be that they're just out of money and need more."}, {"start": 256.68, "end": 258.59999999999997, "text": " The stack is a data set."}, {"start": 258.59999999999997, "end": 264.48, "text": " It's by the Big Code project, and it's three terabyte of permissively licensed source code."}, {"start": 264.48, "end": 266.48, "text": " So this data set is fully open."}, {"start": 266.48, "end": 272.64, "text": " You can download it if you want to train anything like a Codex model or something similar."}, {"start": 
272.64, "end": 278.96, "text": " The data set pays specific attention to the licensing of the code that is included in the data set."}, {"start": 278.96, "end": 283.52, "text": " The code is MIT license, Apache license, BSD-3 license."}, {"start": 283.52, "end": 287.96, "text": " Essentially license such that you can do whatever you want with it."}, {"start": 287.96, "end": 292.47999999999996, "text": " Now, that doesn't get you out of the weeds legally of doing anything and everything,"}, {"start": 292.47999999999996, "end": 296.2, "text": " because you still have to do things like provide a copyright."}, {"start": 296.2, "end": 299.68, "text": " Notice if you copy one of these codes verbatim."}, {"start": 299.68, "end": 303.48, "text": " The stack not only pays attention to this when they collect this initially,"}, {"start": 303.48, "end": 308.08, "text": " but also as you can see on the hugging face entry and the hugging face top."}, {"start": 308.08, "end": 310.40000000000003, "text": " There are terms of use for the stack."}, {"start": 310.40000000000003, "end": 315.88, "text": " And one of the terms of use of the stack is that you must always update your own version of the stack"}, {"start": 315.88, "end": 318.24, "text": " the most recent usable version."}, {"start": 318.24, "end": 323.88, "text": " And this is because they have essentially a form where you as a source code author can go"}, {"start": 323.88, "end": 327.44, "text": " and request removal of your source code from the stack."}, {"start": 327.44, "end": 332.68, "text": " So even if you license this under MIT license, they don't want anyone's code"}, {"start": 332.68, "end": 335.2, "text": " who doesn't want to be part of the stack."}, {"start": 335.2, "end": 339.28, "text": " So you can go and request that your code be removed from the stack."}, {"start": 339.28, "end": 341.96, "text": " They will then do that, update the data set,"}, {"start": 341.96, "end": 345.64, "text": " and by agreeing to these terms, if you download the data set,"}, {"start": 345.64, "end": 349.24, "text": " you essentially agree to always download the newest version"}, {"start": 349.24, "end": 351.6, "text": " and use the newest version of the data set,"}, {"start": 351.6, "end": 355.04, "text": " such as to propagate that removal of that code."}, {"start": 355.04, "end": 358.96000000000004, "text": " Now as I understand it, not a lawyer, this is not legal advice,"}, {"start": 358.96000000000004, "end": 362.12, "text": " but as I understand it, you are entering into a binding agreement"}, {"start": 362.12, "end": 364.68, "text": " by clicking this checkbox and clicking this button."}, {"start": 364.68, "end": 367.28000000000003, "text": " So think about whether you want that or not,"}, {"start": 367.28000000000003, "end": 372.52000000000004, "text": " but it is good that another option is out there next to just scraping it up, I guess."}, {"start": 372.52000000000004, "end": 375.28000000000003, "text": " Google releases Vizier Open Source."}, {"start": 375.28000000000003, "end": 379.24, "text": " Vizier is a black box optimizer that works at scale."}, {"start": 379.24, "end": 383.88, "text": " So many, many different experiments that need to be hyper parameter optimized."}, {"start": 383.88, "end": 387.48, "text": " Vizier essentially decides which hyper parameter to try next."}, {"start": 387.48, "end": 391.28, "text": " So you can run this as a service if you have a lot of parallel workers"}, {"start": 391.28, "end": 393.96, "text": " and you 
want to run hyper parameter optimizations."}, {"start": 393.96, "end": 397.24, "text": " They have APIs for users and the user here is essentially someone"}, {"start": 397.24, "end": 399.8, "text": " who wants to do hyper parameter optimization."}, {"start": 399.8, "end": 405.44, "text": " They have APIs for developers, which means that you can put in new optimization algorithms."}, {"start": 405.44, "end": 408.84, "text": " So if you're a developer of a black box optimizational algorithm,"}, {"start": 408.84, "end": 411.44, "text": " you can integrate that with Vizier"}, {"start": 411.44, "end": 413.68, "text": " and they have a benchmarking API."}, {"start": 413.68, "end": 417.36, "text": " So apparently this thing has been running inside of Google for a while"}, {"start": 417.36, "end": 420.56, "text": " and now they finally decided to release it open source."}, {"start": 420.56, "end": 423.36, "text": " So it's certainly tried and tested."}, {"start": 423.36, "end": 425.68, "text": " All right, now we get into the video models."}, {"start": 425.68, "end": 427.56, "text": " There have been a few video models."}, {"start": 427.56, "end": 429.76, "text": " Now they have been released a while back,"}, {"start": 429.76, "end": 432.08, "text": " but I'll just summarize them briefly here."}, {"start": 432.08, "end": 435.56, "text": " Imagine video is a text to video model."}, {"start": 435.56, "end": 441.0, "text": " You can see a bunch of samples right here and they look really, really cool."}, {"start": 441.0, "end": 443.92, "text": " So this is a video diffusion model,"}, {"start": 443.92, "end": 445.76, "text": " but as far as I understand it,"}, {"start": 445.76, "end": 448.8, "text": " it's kind of a combination of fully convolutional networks"}, {"start": 448.8, "end": 453.04, "text": " and super resolution networks in order to get this effect."}, {"start": 453.04, "end": 456.0, "text": " They describe this further in a few diagrams on their websites."}, {"start": 456.0, "end": 459.04, "text": " Imagine video uses video unit architecture"}, {"start": 459.04, "end": 462.72, "text": " to capture spatial fidelity and temporal dynamics."}, {"start": 462.72, "end": 466.32, "text": " Temporal self-attention is used in the base video diffusion model"}, {"start": 466.32, "end": 469.64, "text": " while temporal convolutions are used in the temporal"}, {"start": 469.64, "end": 472.15999999999997, "text": " and spatial super resolution models."}, {"start": 472.15999999999997, "end": 475.12, "text": " There is a paper to go along with it if you are interested."}, {"start": 475.12, "end": 478.12, "text": " Now also from Google Research is Venaki."}, {"start": 478.12, "end": 480.44, "text": " I'm not exactly sure how to pronounce that,"}, {"start": 480.44, "end": 483.76, "text": " but it is a different text to video model"}, {"start": 483.76, "end": 488.32, "text": " that can produce up to minutes long videos with changing text."}, {"start": 488.32, "end": 491.56, "text": " So here you can see a prompt that constantly changes"}, {"start": 491.56, "end": 494.64, "text": " and as it does, the video changes as well."}, {"start": 494.64, "end": 497.28, "text": " So rather than being a diffusion model,"}, {"start": 497.28, "end": 502.28, "text": " this model compresses video to a tokenized representation"}, {"start": 502.28, "end": 506.35999999999996, "text": " and then essentially uses a causal autoregressive language model"}, {"start": 506.35999999999996, "end": 509.55999999999995, "text": " to 
continue that tokenized representation."}, {"start": 509.55999999999995, "end": 514.16, "text": " With that, they're able to essentially produce unbounded video"}, {"start": 514.16, "end": 517.9599999999999, "text": " as the beginning of the video simply drops out of the context."}, {"start": 517.9599999999999, "end": 520.76, "text": " But as long as you feed into the side input,"}, {"start": 520.76, "end": 523.52, "text": " more and more text that you want to be produced,"}, {"start": 523.52, "end": 526.0799999999999, "text": " you can see that the video keeps changing,"}, {"start": 526.08, "end": 529.0, "text": " keeps adapting and keeps being faithful"}, {"start": 529.0, "end": 532.72, "text": " to the currently in focus part of the prompt."}, {"start": 532.72, "end": 535.0400000000001, "text": " What's interesting is that the training data"}, {"start": 535.0400000000001, "end": 537.48, "text": " seems to be mostly text to image"}, {"start": 537.48, "end": 542.0400000000001, "text": " with just a few text to video pairs inside of the training data."}, {"start": 542.0400000000001, "end": 544.72, "text": " Now we're not done with the text to video models yet."}, {"start": 544.72, "end": 547.64, "text": " NETTA AI actually released, make a video,"}, {"start": 547.64, "end": 550.36, "text": " yet another text to video model."}, {"start": 550.36, "end": 552.36, "text": " And this one is also a bit special"}, {"start": 552.36, "end": 556.92, "text": " because it essentially only produces a single image from text."}, {"start": 556.92, "end": 560.44, "text": " So this is a essentially text to image model"}, {"start": 560.44, "end": 565.44, "text": " and then an unsupervised video generator from that image."}, {"start": 565.44, "end": 568.48, "text": " So the text to image model is essentially"}, {"start": 568.48, "end": 570.76, "text": " as we know text to image models,"}, {"start": 570.76, "end": 573.2, "text": " but then the video model is unsupervised."}, {"start": 573.2, "end": 576.9200000000001, "text": " It simply learns from unsupervised video data,"}, {"start": 576.9200000000001, "end": 581.24, "text": " how video behaves and is then able to take a single picture"}, {"start": 581.24, "end": 583.72, "text": " and let a single frame of that video"}, {"start": 583.72, "end": 585.92, "text": " and make the entire video out of it."}, {"start": 585.92, "end": 587.76, "text": " The results look really cool."}, {"start": 587.76, "end": 590.52, "text": " What I think is cool between all of these works"}, {"start": 590.52, "end": 593.6, "text": " is that they all have a different approach for the same problem."}, {"start": 593.6, "end": 595.84, "text": " The all the results they produce are very cool"}, {"start": 595.84, "end": 597.92, "text": " and it's gonna be interesting to see"}, {"start": 597.92, "end": 601.4, "text": " how this text to video problem will ultimately be"}, {"start": 601.4, "end": 603.48, "text": " like canonically solved, let's say."}, {"start": 603.48, "end": 606.44, "text": " I don't know, but I'm keeping my eyes open."}, {"start": 606.44, "end": 610.04, "text": " Now slightly different, but not entirely different is dream fusion."}, {"start": 610.04, "end": 613.24, "text": " This isn't text to video, this is text to 3D."}, {"start": 613.24, "end": 617.68, "text": " Now if you think that is relatively straightforward,"}, {"start": 617.68, "end": 622.68, "text": " then none of these things actually involve 3D training data,"}, {"start": 623.12, "end": 625.12, "text": " at 
least as far as I can understand it."}, {"start": 625.12, "end": 628.52, "text": " Rather what they do is they consider the entire scene"}, {"start": 628.52, "end": 630.0, "text": " essentially like a nerve."}, {"start": 630.0, "end": 633.9599999999999, "text": " So what they do is they start with a random 3D scene."}, {"start": 633.9599999999999, "end": 637.3199999999999, "text": " So you pick your 3D scene, you fill a bunch of voxels"}, {"start": 637.3199999999999, "end": 638.92, "text": " and don't fill the other voxels."}, {"start": 638.92, "end": 641.88, "text": " And then you optimize that 3D scene"}, {"start": 641.88, "end": 644.9599999999999, "text": " to satisfy text to image models"}, {"start": 644.9599999999999, "end": 648.1999999999999, "text": " that essentially act as photographs of that scene."}, {"start": 648.1999999999999, "end": 652.28, "text": " So it is a lot like nerve, except that you don't have pictures"}, {"start": 652.28, "end": 655.76, "text": " but you like optimize for a text to image model"}, {"start": 655.76, "end": 658.16, "text": " rather than optimizing for an actual image."}, {"start": 658.16, "end": 660.0, "text": " And that is a really cool idea"}, {"start": 660.0, "end": 662.16, "text": " and actually seems to work pretty great."}, {"start": 662.16, "end": 664.64, "text": " Now there's other work still improving text"}, {"start": 664.64, "end": 666.9599999999999, "text": " to image to fusion models themselves,"}, {"start": 666.96, "end": 670.44, "text": " but the only BILG 2.0 is one of them."}, {"start": 670.44, "end": 672.88, "text": " This is an iteration of the previous model"}, {"start": 672.88, "end": 676.32, "text": " and it is using mixture of denosing experts."}, {"start": 676.32, "end": 677.9200000000001, "text": " I don't wanna go too much into this"}, {"start": 677.9200000000001, "end": 679.9200000000001, "text": " but you can definitely see right here"}, {"start": 679.9200000000001, "end": 683.12, "text": " that the results are breathtaking"}, {"start": 683.12, "end": 685.6800000000001, "text": " and very good with a great resolution."}, {"start": 685.6800000000001, "end": 688.24, "text": " Now there is a demo on the hogging face hub,"}, {"start": 688.24, "end": 691.24, "text": " but as far as I understand this model isn't released."}, {"start": 691.24, "end": 694.48, "text": " So the demo and the code that they put on GitHub"}, {"start": 694.48, "end": 698.48, "text": " they simply calls some API where the model is actually stored."}, {"start": 698.48, "end": 703.48, "text": " This is a neat tool not directly related to machine learning,"}, {"start": 705.84, "end": 708.4, "text": " but if you've ever wondered what like the difference"}, {"start": 708.4, "end": 713.4, "text": " between a BFLOW 16 and an FP16 is, I never knew."}, {"start": 713.4, "end": 717.72, "text": " But Charlie Blake has a very cool tool on a blog"}, {"start": 717.72, "end": 721.04, "text": " that essentially shows you the different trade-offs"}, {"start": 721.04, "end": 723.9200000000001, "text": " you can make when you choose a number format."}, {"start": 723.92, "end": 725.64, "text": " So it shows you for the different numbers"}, {"start": 725.64, "end": 728.28, "text": " what kind of ranges you can represent with them"}, {"start": 728.28, "end": 730.4, "text": " where they're good at, where they're not good at."}, {"start": 730.4, "end": 732.52, "text": " So you can see here clearly the difference"}, {"start": 732.52, "end": 735.8, "text": " between a BFLOW 16 and 
an FP16."}, {"start": 735.8, "end": 738.1999999999999, "text": " One can represent a lot of numbers"}, {"start": 738.1999999999999, "end": 741.8399999999999, "text": " and the other one can represent just very small range"}, {"start": 741.8399999999999, "end": 744.4, "text": " of numbers but two more precision."}, {"start": 744.4, "end": 748.4399999999999, "text": " Gridley.js is a tool that allows you to interact"}, {"start": 748.4399999999999, "end": 751.56, "text": " with grid world reinforcement learning environments."}, {"start": 751.56, "end": 753.88, "text": " So there are a number of cool features right here."}, {"start": 753.88, "end": 756.08, "text": " You can edit levels directly."}, {"start": 756.08, "end": 757.64, "text": " You can also try out the levels."}, {"start": 757.64, "end": 759.32, "text": " You can debug your policies."}, {"start": 759.32, "end": 761.28, "text": " You can record trajectories."}, {"start": 761.28, "end": 763.32, "text": " So right now I don't have a trajectory"}, {"start": 763.32, "end": 766.0, "text": " but what I can do is I can put record right here"}, {"start": 766.0, "end": 769.2, "text": " and I can move this thing around here, here,"}, {"start": 769.2, "end": 771.72, "text": " going to the lava and then I die."}, {"start": 771.72, "end": 775.4, "text": " And you can see the steps I've taken right here."}, {"start": 775.4, "end": 778.08, "text": " So you can use this to do various kinds of things"}, {"start": 778.08, "end": 780.72, "text": " debugging, investigating, and so on."}, {"start": 780.72, "end": 782.52, "text": " If you are into reinforcement learning"}, {"start": 782.52, "end": 786.16, "text": " and you work with grid world, then by all means check this out."}, {"start": 786.16, "end": 790.28, "text": " Meta announces their new box, I guess."}, {"start": 790.28, "end": 791.48, "text": " This is the box."}, {"start": 791.48, "end": 794.12, "text": " This is an architecture for a deep learning,"}, {"start": 794.12, "end": 795.72, "text": " the grand titan."}, {"start": 795.72, "end": 799.6, "text": " Essentially they release the architecture open source."}, {"start": 799.6, "end": 803.0, "text": " So their engineers have sat down and thought long"}, {"start": 803.0, "end": 806.3199999999999, "text": " and tired about what it takes for a great machine learning"}, {"start": 806.3199999999999, "end": 809.56, "text": " system like their bit more older DGX boxes"}, {"start": 809.56, "end": 812.9599999999999, "text": " and they essentially tell you, look, we believe that"}, {"start": 812.9599999999999, "end": 815.5999999999999, "text": " this combination of hardware, this processor,"}, {"start": 815.5999999999999, "end": 819.92, "text": " is these GPUs connected like this with these power supplies,"}, {"start": 819.92, "end": 823.3599999999999, "text": " will be a very great base for doing research."}, {"start": 823.3599999999999, "end": 825.9599999999999, "text": " Yeah, they're releasing these specs essentially"}, {"start": 825.9599999999999, "end": 829.4, "text": " for you to just buy or assemble, I guess,"}, {"start": 829.4, "end": 830.64, "text": " whatever you want to do with it."}, {"start": 830.64, "end": 833.4, "text": " But I can tell you it is relatively hard"}, {"start": 833.4, "end": 836.88, "text": " to decide exactly on every component of the hardware."}, {"start": 836.88, "end": 840.84, "text": " It is really great that people who are very competent"}, {"start": 840.84, "end": 844.88, "text": " in this actually think about it 
and give their suggestions."}, {"start": 844.88, "end": 847.6, "text": " So if you have a lab or a company"}, {"start": 847.6, "end": 849.76, "text": " and you really want to buy your own hardware,"}, {"start": 849.76, "end": 852.12, "text": " maybe this is a good option for you."}, {"start": 852.12, "end": 856.12, "text": " Plugging faced fusers from version 0.5 on one,"}, {"start": 856.12, "end": 859.48, "text": " on forward supports diffusers in jacks."}, {"start": 859.48, "end": 863.44, "text": " If you like jacks, if you like stable diffusion, go for it."}, {"start": 863.44, "end": 868.0400000000001, "text": " Muse is an open source stable diffusion production server."}, {"start": 868.0400000000001, "end": 871.84, "text": " Well, it is not as much a server as it is sort of like a tutorial"}, {"start": 871.84, "end": 873.96, "text": " on how to bring up a server."}, {"start": 873.96, "end": 876.84, "text": " This is based on the Lightning apps framework,"}, {"start": 876.84, "end": 880.0400000000001, "text": " which is open source and it's kind of an easy way"}, {"start": 880.0400000000001, "end": 882.36, "text": " to bring together all the components you need"}, {"start": 882.36, "end": 884.48, "text": " to deploy machine learning things."}, {"start": 884.48, "end": 887.24, "text": " And this repository is essentially a specification"}, {"start": 887.24, "end": 889.6400000000001, "text": " on how to pull up a stable diffusion server."}, {"start": 889.6400000000001, "end": 892.2800000000001, "text": " So if you want to deploy stable diffusion yourself,"}, {"start": 892.28, "end": 896.4399999999999, "text": " this is probably the fastest and simplest way to do so."}, {"start": 896.4399999999999, "end": 900.56, "text": " Crlx by Carp for AI is a library that allows you"}, {"start": 900.56, "end": 903.72, "text": " to do reinforcement learning for text models."}, {"start": 903.72, "end": 905.68, "text": " So you can see right here, you can give either"}, {"start": 905.68, "end": 909.64, "text": " some sort of a reward function or you can give a data set"}, {"start": 909.64, "end": 913.12, "text": " that assigns values to expert demonstrations"}, {"start": 913.12, "end": 916.4399999999999, "text": " and you can train a language model to incorporate that."}, {"start": 916.4399999999999, "end": 920.0, "text": " This is a relatively new domain to do reinforcement learning"}, {"start": 920.0, "end": 924.04, "text": " on text models, but it is cool to have another library"}, {"start": 924.04, "end": 925.36, "text": " to tackle the problem."}, {"start": 925.36, "end": 928.24, "text": " RLBaseLines 3 Zoo is a training framework"}, {"start": 928.24, "end": 931.4, "text": " for stable baselines 3 in reinforcement learning agents."}, {"start": 931.4, "end": 934.28, "text": " Stable baselines is a library that tries"}, {"start": 934.28, "end": 936.04, "text": " to give reference implementations"}, {"start": 936.04, "end": 937.88, "text": " of reinforcement learning algorithms"}, {"start": 937.88, "end": 940.88, "text": " because they're very tricky and they're very hard to get right."}, {"start": 940.88, "end": 944.68, "text": " So these are good, solid and performant reference"}, {"start": 944.68, "end": 945.64, "text": " implementations."}, {"start": 945.64, "end": 948.56, "text": " Stable baselines 3 is the third iteration of it"}, {"start": 948.56, "end": 951.7199999999999, "text": " and this repository right here, the zoo contains"}, {"start": 951.7199999999999, "end": 955.3199999999999, "text": 
" a number of surrounding things like scripts"}, {"start": 955.3199999999999, "end": 957.3599999999999, "text": " that make it very easy to interact with it"}, {"start": 957.3599999999999, "end": 962.1999999999999, "text": " but also repaired agents and prepared hyperparameter settings"}, {"start": 962.1999999999999, "end": 965.52, "text": " that work well in different standard environments."}, {"start": 965.52, "end": 969.4, "text": " Jaxsec is a library that allows you to train"}, {"start": 969.4, "end": 971.7199999999999, "text": " very large language models in Jax."}, {"start": 971.7199999999999, "end": 973.7199999999999, "text": " So the cool thing is that with this library,"}, {"start": 973.7199999999999, "end": 976.1199999999999, "text": " you essentially get things like data parallelism"}, {"start": 976.1199999999999, "end": 978.04, "text": " or model parallelism or free."}, {"start": 978.04, "end": 980.52, "text": " You can just specify them and you can trade them off,"}, {"start": 980.52, "end": 981.56, "text": " however you want."}, {"start": 981.56, "end": 985.64, "text": " This is due to the power and simplicity of Jaxsec."}, {"start": 985.64, "end": 988.8399999999999, "text": " Albuminations, I hope I'm pronouncing that correctly."}, {"start": 988.8399999999999, "end": 993.8399999999999, "text": " 1.3 is out and it introduces a bunch of new image augmentations."}, {"start": 994.24, "end": 996.68, "text": " This is a library for image augmentations."}, {"start": 996.68, "end": 999.8399999999999, "text": " So it's good that they introduce new augmentations"}, {"start": 999.8399999999999, "end": 1003.4, "text": " that fits very well to the augmentations they already have."}, {"start": 1003.4, "end": 1005.28, "text": " There's also a bunch of bug fixes in more."}, {"start": 1005.28, "end": 1008.16, "text": " If you're looking for image augmentations in Python,"}, {"start": 1008.16, "end": 1009.68, "text": " this might be a good library."}, {"start": 1009.68, "end": 1012.88, "text": " This is a really cool thing you can do with diffusion models."}, {"start": 1012.88, "end": 1014.56, "text": " These people have trained diffusion models"}, {"start": 1014.56, "end": 1018.0, "text": " of rain images and were able to create"}, {"start": 1018.0, "end": 1023.0, "text": " new synthetic brain images with a degree of controllability."}, {"start": 1023.0, "end": 1026.08, "text": " Now there is a paper on archive if you are interested."}, {"start": 1026.08, "end": 1029.32, "text": " You can also download the data set of 100,000"}, {"start": 1029.32, "end": 1031.28, "text": " synthetic brain images."}, {"start": 1031.28, "end": 1035.68, "text": " CodeX is a multilingual code generation model."}, {"start": 1035.68, "end": 1038.96, "text": " This is, as it says, it's essentially something similar"}, {"start": 1038.96, "end": 1041.24, "text": " like codeX, but it is released."}, {"start": 1041.24, "end": 1043.6, "text": " You can actually go and you can download the model"}, {"start": 1043.6, "end": 1045.0, "text": " and use it yourself."}, {"start": 1045.0, "end": 1049.44, "text": " MetaAI releases AI template, which is an inference engine."}, {"start": 1049.44, "end": 1051.96, "text": " The goal here is to make inference faster."}, {"start": 1051.96, "end": 1055.6399999999999, "text": " They get a lot of speed ups over just running standard inference"}, {"start": 1055.6399999999999, "end": 1057.2, "text": " and something like I knowledge."}, {"start": 1057.2, "end": 1058.84, "text": " So this 
does two things."}, {"start": 1058.84, "end": 1062.48, "text": " First of all, it optimizes your computation graph."}, {"start": 1062.48, "end": 1065.9599999999998, "text": " If your computation graph contains a lot of little operations"}, {"start": 1065.9599999999998, "end": 1068.9599999999998, "text": " that could be used together into something"}, {"start": 1068.9599999999998, "end": 1072.12, "text": " that's really optimal for a given hardware,"}, {"start": 1072.12, "end": 1074.76, "text": " or just that can be expressed in a smarter way,"}, {"start": 1074.76, "end": 1077.36, "text": " then a graph optimizer can do that."}, {"start": 1077.36, "end": 1079.56, "text": " And in a second step, there is a compiler"}, {"start": 1079.56, "end": 1082.32, "text": " to compile all of this to highly performance"}, {"start": 1082.32, "end": 1087.32, "text": " C++ code that runs on backend hardwares, such as a GPU"}, {"start": 1087.32, "end": 1090.2, "text": " that uses CUDA or even a AMD GPU."}, {"start": 1090.2, "end": 1092.48, "text": " So if fast inference is a concern to you,"}, {"start": 1092.48, "end": 1094.4399999999998, "text": " this is definitely a thing to check out."}, {"start": 1094.4399999999998, "end": 1098.6, "text": " Nerf Studio describes itself as a collaboration friendly studio"}, {"start": 1098.6, "end": 1101.56, "text": " for nerfs, but it is more like a collection,"}, {"start": 1101.56, "end": 1103.48, "text": " an entire collection of software"}, {"start": 1103.48, "end": 1107.12, "text": " who handle nerfs, anything from training, validating,"}, {"start": 1107.12, "end": 1108.96, "text": " or even experiencing yourself."}, {"start": 1108.96, "end": 1111.2, "text": " You can see they have a viewer that allows you"}, {"start": 1111.2, "end": 1113.32, "text": " to just explore a nerfs that you do"}, {"start": 1113.32, "end": 1115.2, "text": " and make a little videos from it,"}, {"start": 1115.2, "end": 1118.4, "text": " but really it covers everything to do with nerfs."}, {"start": 1118.4, "end": 1121.52, "text": " Now speaking of nerf, nerf hack is a"}, {"start": 1121.52, "end": 1124.2, "text": " PyTorch nerf acceleration toolbox."}, {"start": 1124.2, "end": 1128.0800000000002, "text": " This gets significant speed ups over simply using"}, {"start": 1128.0800000000002, "end": 1129.24, "text": " nerf code that's out there."}, {"start": 1129.24, "end": 1131.2, "text": " For example, vanilla nerf model"}, {"start": 1131.2, "end": 1133.8400000000001, "text": " with ape layer, multi layer perceptrons"}, {"start": 1133.8400000000001, "end": 1136.96, "text": " can be trained to better quality in one hour,"}, {"start": 1136.96, "end": 1140.4, "text": " rather than one to two a days as in the paper."}, {"start": 1140.4, "end": 1144.76, "text": " This stack, the logo doesn't exactly work on dark background,"}, {"start": 1144.76, "end": 1147.56, "text": " but this stack is a library that wants to standardize"}, {"start": 1147.56, "end": 1150.64, "text": " your ML workflows that you run in the cloud."}, {"start": 1150.64, "end": 1154.96, "text": " This is essentially you check your workflows into GitHub"}, {"start": 1154.96, "end": 1158.64, "text": " and this stack helps you to run them uniformly anywhere."}, {"start": 1158.64, "end": 1162.6, "text": " So in a workflow, you can specify things like your workflow name,"}, {"start": 1162.6, "end": 1164.2, "text": " obviously, but then it starts."}, {"start": 1164.2, "end": 1166.48, "text": " You can say, okay, my provider is bash,"}, 
{"start": 1166.48, "end": 1168.28, "text": " so this is essentially a bash script."}, {"start": 1168.28, "end": 1169.64, "text": " Now what are the commands?"}, {"start": 1169.64, "end": 1171.36, "text": " I wanna pip install some stuff."}, {"start": 1171.36, "end": 1173.6, "text": " I wanna run this training script right here,"}, {"start": 1173.6, "end": 1175.84, "text": " but it also has things like artifacts,"}, {"start": 1175.84, "end": 1178.6, "text": " and you can also specify things like I wanna load data"}, {"start": 1178.6, "end": 1180.9199999999998, "text": " from this S3 bucket over there."}, {"start": 1180.9199999999998, "end": 1182.84, "text": " I wanna run on this cloud over there."}, {"start": 1182.84, "end": 1186.32, "text": " So all of this is quite geared towards machine learning."}, {"start": 1186.32, "end": 1189.48, "text": " It's certainly not the first workflow engine"}, {"start": 1189.48, "end": 1191.08, "text": " or the first iteration from,"}, {"start": 1191.08, "end": 1193.52, "text": " hey, let's check our things into source code,"}, {"start": 1193.52, "end": 1196.8799999999999, "text": " but it is very targeted at running ML workflows in the cloud."}, {"start": 1196.8799999999999, "end": 1200.36, "text": " Several people have figured out massive speed ups"}, {"start": 1200.36, "end": 1202.28, "text": " in the open air whisper model."}, {"start": 1202.28, "end": 1206.96, "text": " For example, this first here has figured out a 3x speed up"}, {"start": 1206.96, "end": 1210.72, "text": " on CPU inference, but refers to a GitHub thread"}, {"start": 1210.72, "end": 1213.28, "text": " where someone else has found an even bigger"}, {"start": 1213.28, "end": 1215.96, "text": " a 3.25 x speed up."}, {"start": 1215.96, "end": 1218.3999999999999, "text": " Again, it's very cool to see what people do"}, {"start": 1218.3999999999999, "end": 1220.24, "text": " when you just give them the model."}, {"start": 1221.28, "end": 1225.28, "text": " Lastly, I wanna point to a couple of databases for stuff."}, {"start": 1225.28, "end": 1227.2, "text": " Mainly around stable diffusion."}, {"start": 1227.2, "end": 1229.68, "text": " So diffusion DB is on the hugging phase"}, {"start": 1229.68, "end": 1232.68, "text": " it's a data set of prompts that have been entered"}, {"start": 1232.68, "end": 1235.76, "text": " by real users into stable diffusion"}, {"start": 1235.76, "end": 1238.4, "text": " and the corresponding images that they got out."}, {"start": 1238.4, "end": 1241.5600000000002, "text": " Public prompts, that's a public prompts dot art"}, {"start": 1241.5600000000002, "end": 1245.88, "text": " in your browser is a database of three prompts"}, {"start": 1245.88, "end": 1247.3200000000002, "text": " and three models."}, {"start": 1247.3200000000002, "end": 1250.24, "text": " These models are mostly trained using Dream Booth,"}, {"start": 1250.24, "end": 1253.2, "text": " but if you're looking for inspiration for prompts"}, {"start": 1253.2, "end": 1256.72, "text": " and what they turn out, then this is maybe a place to go."}, {"start": 1256.72, "end": 1260.84, "text": " Likewise, visualized.ai is a website that goes"}, {"start": 1260.84, "end": 1263.84, "text": " a little bit more business-y, so it lets you create"}, {"start": 1263.84, "end": 1266.8, "text": " some free stuff from like stable diffusion,"}, {"start": 1266.8, "end": 1269.84, "text": " but then it also acts like as a bit of a marketplace"}, {"start": 1269.84, "end": 1273.76, "text": " for these things such that 
you could also buy them or sell them."}, {"start": 1273.76, "end": 1276.0, "text": " It's cool to see that different business models"}, {"start": 1276.0, "end": 1278.8, "text": " are trying to spring up around this ecosystem."}, {"start": 1278.8, "end": 1281.48, "text": " Ultimately, someone will figure out"}, {"start": 1281.48, "end": 1283.96, "text": " how to really make money off of this stuff,"}, {"start": 1283.96, "end": 1286.04, "text": " but you know, it's good to be part of the time"}, {"start": 1286.04, "end": 1288.08, "text": " when people are just trying stuff"}, {"start": 1288.08, "end": 1291.32, "text": " and seeing what happens with not only on the research side,"}, {"start": 1291.32, "end": 1293.0, "text": " but also on the business side."}, {"start": 1293.0, "end": 1295.6, "text": " Lastly, big science has released a prompt source,"}, {"start": 1295.6, "end": 1298.8799999999999, "text": " which is an IDE for natural language prompts."}, {"start": 1298.8799999999999, "end": 1301.48, "text": " So this is a way to give people a bit more help"}, {"start": 1301.48, "end": 1305.12, "text": " and a bit more standardization when they use prompts"}, {"start": 1305.12, "end": 1307.32, "text": " to achieve certain goals, for example,"}, {"start": 1307.32, "end": 1311.36, "text": " when they use prompts to tackle some of the NLP challenges"}, {"start": 1311.36, "end": 1315.1599999999999, "text": " that are now more and more phrased simply as prompts"}, {"start": 1315.16, "end": 1316.96, "text": " into these large language models,"}, {"start": 1316.96, "end": 1320.96, "text": " rather than as data that goes into especially trained model"}, {"start": 1320.96, "end": 1321.92, "text": " for that task."}, {"start": 1321.92, "end": 1324.4, "text": " So if you find yourself in this situation"}, {"start": 1324.4, "end": 1327.48, "text": " or a similar one, then prompts or maybe for you."}, {"start": 1327.48, "end": 1332.24, "text": " And lastly, this is a database of all Lex Friedman podcasts"}, {"start": 1332.24, "end": 1333.2, "text": " transcribed."}, {"start": 1333.2, "end": 1335.3600000000001, "text": " This is the website of Andre Karpotti,"}, {"start": 1335.3600000000001, "end": 1339.72, "text": " and he used a simple combination of a download script"}, {"start": 1339.72, "end": 1342.4, "text": " from YouTube combined with OpenAI's whisper"}, {"start": 1342.4, "end": 1346.1200000000001, "text": " to transcribe all of Lex Friedman's podcast episodes."}, {"start": 1346.1200000000001, "end": 1348.16, "text": " You can go to any one of them."}, {"start": 1348.16, "end": 1352.3600000000001, "text": " You can click and they are here with time annotations"}, {"start": 1352.3600000000001, "end": 1352.88, "text": " and all."}, {"start": 1352.88, "end": 1355.48, "text": " It's a very simple but very cool project."}, {"start": 1355.48, "end": 1356.52, "text": " Thank you, Andre."}, {"start": 1356.52, "end": 1358.92, "text": " And I thank all of you for listening."}, {"start": 1358.92, "end": 1360.3600000000001, "text": " I'll be home again next week."}, {"start": 1360.3600000000001, "end": 1362.2800000000002, "text": " And till then, stay hydrated."}, {"start": 1362.2800000000002, "end": 1363.1200000000001, "text": " Bye-bye."}, {"start": 1363.12, "end": 1373.04, "text": " Ling bien."}]
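For readers who want to try the JAX/Flax support for Stable Diffusion mentioned in the segments above, here is a minimal Python sketch. The checkpoint name, the revision flag, and the exact keyword arguments are assumptions from memory of the Hugging Face diffusers Flax API and may need adjusting to the installed version.

import jax
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

# Load the Flax weights; the revision/dtype pair selects a half-precision branch (assumed names).
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    revision="bf16",
    dtype=jnp.bfloat16,
)

# One prompt per device; inputs are sharded and parameters replicated across devices.
prompts = ["a photograph of an astronaut riding a horse"] * jax.device_count()
prompt_ids = shard(pipeline.prepare_inputs(prompts))
params = replicate(params)
rng = jax.random.split(jax.random.PRNGKey(0), jax.device_count())

# jit=True compiles the sampling loop for parallel generation across devices.
images = pipeline(prompt_ids, params, rng, jit=True).images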
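The Albumentations release mentioned above is an image augmentation library for Python. A minimal sketch of how such an augmentation pipeline is typically composed; the specific transforms, probabilities, and the input file name are arbitrary examples, not the new additions from that release.

import albumentations as A
import cv2

# Compose a pipeline of augmentations; each transform is applied with its own probability.
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.2),
    A.Rotate(limit=15, p=0.5),
])

image = cv2.imread("example.jpg")                    # assumes example.jpg exists; loaded as a BGR numpy array
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)       # convert to RGB before augmenting
augmented = transform(image=image)["image"]          # returns a dict; the augmented image is under "image"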
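The Lexicap project mentioned at the end of the segments above (and the Whisper speed-up threads) boil down to running OpenAI's released Whisper model over audio files. A minimal sketch of that workflow, producing timestamped segments much like the SEGMENTS field in this dataset; the audio file name is a placeholder and this is not Karpathy's actual script.

import json
import whisper  # pip install openai-whisper

model = whisper.load_model("base")          # larger checkpoints ("medium", "large") are more accurate but slower
result = model.transcribe("episode.mp3")    # returns the full text plus per-segment timestamps

# Keep only the fields used in the SEGMENTS format: start, end, text.
segments = [{"start": s["start"], "end": s["end"], "text": s["text"]} for s in result["segments"]]
print(json.dumps(segments[:3], indent=2))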
Yannic Kilcher
https://www.youtube.com/watch?v=W5M-dvzpzSQ
The New AI Model Licenses have a Legal Loophole (OpenRAIL-M of BLOOM, Stable Diffusion, etc.)
#ai #stablediffusion #license So-called responsible AI licenses are stupid, counterproductive, and have a dangerous legal loophole in them. OpenRAIL++ License here: https://www.ykilcher.com/license OUTLINE: 0:00 - Introduction 0:40 - Responsible AI Licenses (RAIL) of BLOOM and Stable Diffusion 3:35 - Open source software's dilemma of bad usage and restrictions 8:45 - Good applications, bad applications 12:45 - A dangerous legal loophole 15:50 - OpenRAIL++ License 16:50 - This has nothing to do with copyright 26:00 - Final thoughts References: https://huggingface.co/CompVis/stable-diffusion/tree/main https://huggingface.co/spaces/CompVis/stable-diffusion-license https://huggingface.co/bigscience/bloom?text=34%2B10%3D44+%0A54%2B20%3D https://huggingface.co/spaces/bigscience/license https://huggingface.co/runwayml/stable-diffusion-v1-5 https://huggingface.co/spaces/CompVis/stable-diffusion-license/raw/main/license.txt https://www.gnu.org/philosophy/programs-must-not-limit-freedom-to-run.en.html https://www.gnu.org/philosophy/free-sw.html#four-freedoms https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license https://bigscience.huggingface.co/blog/bigscience-ethical-charter https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses https://en.wikipedia.org/wiki/Copyright#Eligible_works https://en.wikipedia.org/wiki/Creative_work https://www.pearlcohen.com/copyright-office-reiterates-that-works-created-by-ai-cannot-be-copyrighted/ https://jipel.law.nyu.edu/vol-8-no-2-1-hedrick/#II https://www.ykilcher.com/license Links: Homepage: https://ykilcher.com Merch: https://ykilcher.com/merch YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher Discord: https://ykilcher.com/discord LinkedIn: https://www.linkedin.com/in/ykilcher If you want to support me, the best thing to do is to share out the content :) If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this): SubscribeStar: https://www.subscribestar.com/yannickilcher Patreon: https://www.patreon.com/yannickilcher Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2 Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
The new responsible AI licenses that models like Stable Diffusion or BLOOM have are stupid. They conflict with open source principles. In fact, they're distinctly not open source and they have a glaring legal loophole in them. So join me as we explore the fun world of model licenses. So first things first, I am not a lawyer. This is not legal advice. These are my own opinions and the conclusions that I've come to while researching this topic, and all of it is for entertainment purposes only. Take everything with a grain of salt and with my own personal bias. That being said, if you go to the Hugging Face Hub right now and you look at Stable Diffusion, what you're going to see is this pill right here: License: CreativeML OpenRAIL-M. OpenRAIL is a new type of license. RAIL in this case is the Responsible AI License; I believe that's what the acronym stands for. Open means that it is without usage restrictions, and M stands for the model that is being licensed, as opposed to the code or the data. But Stable Diffusion isn't the only model. In fact, the first model, at least that I'm aware of, using such a license was BLOOM, which was released earlier, which is a large language model that comes out of the BigScience initiative. And it uses the very similar BigScience BLOOM RAIL 1.0 license. Now, what is this RAIL license? What is an OpenRAIL license? Essentially, it is a permissive license that lets you use the model to produce stuff and puts no restrictions on you then taking that stuff, selling that stuff and doing with that stuff whatever you want. You're also allowed to take the model and actually sell it or sell its outputs, or train it further, distill it, fine-tune it, whatever you want to do, and then make money off of it. You have no responsibility, for example as in GPL code, to then release your model again as open source. So everything seems like a very permissive Apache or MIT license that you might be familiar with if you are in software. However, there is a difference. The RAIL licenses explicitly put usage restrictions on these things. So what does that mean? If you look at one of these licenses and you scroll way down to the attachments, then you'll see usage restrictions. You agree not to use the model or derivatives of the model for any of these purposes, and some of these purposes are to defame, disparage or otherwise harass others, or to generate or disseminate verifiably false information with the purpose of harming others, and so on. There are several usage restrictions in this license, and the license makes sure that you agree that you don't use the model for any of these purposes, and whatever you do with the model, be that fine-tune it, distill it, sell it and so on, you must pass on, you must enforce continuously, these usage restrictions. So even if you take the model and you fine-tune it on your own data or something like this, then you may keep that private, but you may still not use it for any of these things. So much like a copyleft license that sort of propagates the openness of code, in this case it's not about the openness of the model, but what is propagated is the usage restrictions. So the purpose of this is that the developers of these models don't want their work to be used for anything that they consider bad or harmful or unethical. Now they are not the first people to think about something like this.
The open source software community obviously had to grapple with this topic for a long time, and they have reached a very conclusive conclusion. Is that a word, conclusive conclusion? Now let me quote from Richard Stallman on why programs must not limit the freedom to run them. This is a principle of free software and ingrained in open source software. So in this article he says: free software means software controlled by its users, rather than the reverse. Specifically, it means the software comes with four essential freedoms that software users deserve. At the head of the list is freedom 0, the freedom to run the program as you wish, in order to do what you wish. And here he goes into the arguments. Some developers propose to place usage restrictions in software licenses to ban using the program for certain purposes, but he says that would be a disastrous path. This article explains why freedom 0 must not be limited. Conditions to limit the use of a program would achieve little of their aims but would wreck the free software community. So first he describes what is evidently clear to everyone, but is still actually a part of the OpenRAIL licenses. If you look at the first usage restriction, it says you are not allowed to use the model in any way that violates any applicable national, federal, state, local or international law or regulation. As Stallman points out here, that is already covered by the law. He gives the example of fraud. He says a license condition against fraud would be superfluous in a country where fraud is a crime. And therefore the license condition that you may not break any laws is almost tautological and superfluous. But it would be okay if a license contains superfluous information; after all, lawyers want to be paid. But he goes further, and he gives the example: what if the condition were against some specialized private activity that is not outlawed? For instance, PETA proposed a license that would forbid the use of the software to cause pain to animals with a spinal column. Or there might be a condition against using a certain program to make or publish drawings of vomit. And so on. He says it's not clear these would be enforceable. Free software licenses are based on copyright law, and trying to impose usage conditions that way is stretching what copyright law permits in a dangerous way. Would you like books to carry a license condition about how you can use the information in them? Well, it's a good point, but actually this point, that these licenses are based on copyright law, is in the case of the OpenRAIL licenses in my opinion actually not a given. And that's why we're gonna look at, that's why on Hugging Face you have to click a little checkbox that you've actually read the license agreement for some of these models. Because in my opinion copyright does not apply here, but we'll get to that later. Stallman then asks: what if such conditions are legally enforceable, would that be good? And here it gets to the point: the fact is, people have very different ethical ideas about the activities that might be done using software. I happen to think those four unusual activities, the ones he mentioned above, are legitimate and should not be forbidden. And he clearly says your views about these issues might differ, and that's precisely the point. The result of such usage restrictions would be a system that you could not count on for any purpose.
Allowing usage restrictions in free software would mainly push users towards non-free software. Trying to stop users from doing something through usage restrictions in free software is as ineffective as pushing on an object through a long, straight, soft piece of cooked spaghetti. It's akin to someone with a very small hammer seeing every problem as a nail and not even acknowledging that the nail is far too big for the hammer. But not only is it ineffective, it is worse than ineffective, Stallman says. It's wrong too, because software developers should not exercise such power over what users do. Imagine selling pens with conditions about what you can write with them. If you make something that is generally useful, like a pen, people will use it to write all sorts of things, even horrible things, such as orders to torture a dissident. But you must not have the power to control people's activities through their pens. It is the same for a text editor, compiler or kernel, and in my opinion for a language model. In my opinion, Richard Stallman really hits the nail on the head here with an appropriately sized hammer. We've seen in recent years more and more an evolution in the AI world of a mentality that essentially says: we know what's good for you. And a complete disregard that other people might have different ideas. Now don't get me wrong, if you create something like this, you can put any license on it that you want. You can make any contract that you want. You can make money off it and keep it for yourself, whatever you want. But don't then also go out and say, oh, we are free, we are open, we are for everyone. No, you are not. And you need to look no further than the license itself and some of these usage restrictions. For example, you may not use this model to provide medical advice and medical results interpretation. You know how many people in the world do not have access to any medical advice at all and would actually be benefiting from some sort of medical advice? With maybe a disclaimer that, look, this is generated, don't take this as fact, but they would usually benefit from something like this. You may not use this model to generate or disseminate information for the purpose to be used in administration of justice, law enforcement, immigration or asylum processes. It's as if Silicon Valley were the entire world. For all the inclusivity and diversity that these people claim, the worldview over what's good and what's bad and what's useful and what's unethical is so narrow. How many places in the world would be immensely thankful for any help they can get with enforcing justice, with effectively administering law enforcement? Now I'm not saying that these things are good or bad per se, and I can see where these people are coming from. But it is exactly as Stallman says: it is making a pen and then telling people what they can and can't write with the pen, without any regard that in a different context what they may write may actually be good for them. And we've seen a lot of applications of language models that violate a lot of these things that actually have beneficial applications. But don't worry, there is always a method to do that. See, this here is from a blog post that accompanies the BigScience OpenRAIL license with the release of the BLOOM model. My use of the model falls under a restriction, but I still think it's not harmful and could be valuable.
Well, the blog post says: please contact the licensor of the model you are using or distributing for them to assess the case and see whether an authorization and/or license could be granted for you in this very specific case. So here is the answer: even though you may think that what you're doing is quite okay and actually beneficial, even though it technically conflicts with one of the usage restrictions, you go to them. You go to the creators of the model and ask: may I please have an exception from these usage restrictions for my particular case? And they will assess that for you. Now again, I'm not saying they can't do that. This is absolutely legal, and if that's how they want to go about releasing their model, then fine with me. But it is certainly not open, it is certainly not inclusive, it is certainly not accessible to the whole world. It is very much: we know what's good for you, and you do not have the authority to decide that for yourself. You come to us, and then we decide if it's good enough. What's even more, the rest of the license is essentially a copy-paste of rather standard terms of permissive open source licenses, such as this one: the software is provided on an as is basis, without warranties or conditions of any kind, either expressed or implied, including without limitations any warranties or conditions of title, non-infringement, merchantability or fitness for a particular purpose. You are solely responsible for determining the appropriateness of using or redistributing the model, derivatives of the model and complementary material, and assume any risks associated with your exercise of permission under this license. So the license is very unidirectional. It is: we don't trust you, we put usage restrictions on you, user of the model. But when it comes to us, nope, no liability, no warranty, no nothing, no guarantees of anything that the model does. Usually in open source software this is bidirectional. It's: I write some code, and if it misbehaves, well, you're the one using it. If I do something stupid, you choose to download or not to download it, that's it. But on the other hand, I will not come to you and tell you how to use it, or what to do with it and what not to do with it. Whereas here, it's the same thing for the creators, but not the same thing for the users. But we go on, and here is where I think the crucial part comes in, and thanks to the people on our Discord for pointing this out to me.
There is paragraph seven right here: updates and runtime restrictions. To the maximum extent permitted by law, the licensor reserves the right to restrict, remotely or otherwise, usage of the model in violation of this license. So if you violate the license and you somehow use it via an API or something like this, or there is some other means of restricting you, the licensor can do that. So far so good. But it also says they reserve the right to update the model through electronic means, or modify the output of the model based on updates. Now as far as I understand, this is not just in case of a violation of the license; they reserve the right to update the model just indefinitely. Now you may think, okay, this isn't too bad either, they can just release an update, so what? The last sentence says: you shall undertake reasonable efforts to use the latest version of this model. And this, I believe, is in fact the dangerous part. It goes beyond just the usage restrictions. First of all, it's gonna depend on what reasonable efforts means. But certainly, if you're simply downloading a model from Hugging Face and then running it, then reasonable effort would certainly include that you point your download script to the new version. If you fine-tuned your model a little bit to do something, then I guess it's up to a judge to decide whether it's reasonable effort for you to redo that fine-tuning with the new version of the base model. It might very well be. But what does that mean in practice? Well, let's for a moment assume that reasonable effort means that you actually have to upgrade, whether you're a fine-tuner or just a consumer of the original model. What someone could do if they don't like a certain model being out there, for example Stable Diffusion, if they don't like Stable Diffusion being out there just for free to use for everyone: well, they could just buy the organization that made Stable Diffusion, and therefore buy the holder of the rights to the Stable Diffusion model. They could release an update to the model that just so happens to be much worse than the previous model. But you would be forced under the license to upgrade to the newest model. You could actually not run the old model anymore. A judge is not gonna care that you explain to them that the old model is actually way better and does a better job. No, the judge will simply say: well, this is a new version of the model, you agreed to always upgrade to the newest model, so therefore you must use it. So there is a clear path for anyone with a chunk of money to destroy any of these models that are currently out there, by simply buying them, releasing an upgraded version, and then there goes your model. Now you may think that is far-fetched, but I guess both of us can think of a few places that have a lot of money and have a vested interest in such things not being freely open and freely shared around. So take your pick. Now here's the deal: I don't like these licenses. I think they're counterproductive, I think they're counter to the spirit of open source, and I think they have a paternalistic, elitist mentality: we know what's good for you. But if you are so inclined, if you must use a license with usage restrictions, if that is really your thing to do, then I have created an updated version for you. I call it the OpenRAIL++ license. The M here stands for model; feel free to adjust this to OpenRAIL-D or OpenRAIL-A licenses. The license is essentially exactly the same, you fill in a bunch of stuff. The only difference is that paragraph seven has the last sentence removed: the receiver
of the license no longer has to undertake reasonable efforts to always use the latest version of the model. That's it. If you must use usage restrictions, use the OpenRAIL++ license. Okay, now that we got that out of the way, I want to come to the last part of this video, and here I want to say again: I am not a lawyer, this is my opinion. But in my opinion, this thing is drastically different from the open source licenses that we are used to, not just in terms of the content, of it containing usage restrictions, but in fact the legal pathway of how such a license is applicable is completely different. See, open source licenses are based on copyright. Now copyright applies to a work of creative making, a creative work, as it's defined. Now creative works are defined differently from jurisdiction to jurisdiction, but here in the NYU Journal of Intellectual Property and Entertainment Law there is a post by Samantha Fink Hedrick that goes into detail on copyright and code, and how it relates to algorithms and the outputs of algorithms. And that's an important distinction. Specifically, it talks about some court decision, saying: the Seventh Circuit, however, has provided a framework that breaks down creativity into three distinct elements of originality, creativity and novelty. A work is original if it is the independent creation of its author. A work is creative if it embodies some modest amount of intellectual labor. A work is novel if it differs from existing works in some relevant aspect. For a work to be copyrightable, it must be original and creative, but need not be novel. Now all of these things are again pretty vague, but here's the deal: copyright applies automatically if you make a creative work, such as if you write a book, if you make a movie or anything like this. You automatically receive copyright for that, but that only applies to creative works. Now usually ideas are not considered creative works. You can patent certain ideas, depending on the jurisdiction, but you cannot have copyright on an idea. You only have copyright on the realization of an idea, if it is a creative work. So for example, you do not have copyright on the idea of a romance between two rival Italian families, but the work of Romeo and Juliet has copyright to it. And the same goes for source code. You do not have copyright on the idea of the Linux kernel, but copyright exists on the code itself of the kernel. That's why you can re-implement someone else's algorithm in your own code, provided you haven't copied from them and provided a judge rules that it is a substantially different implementation of the idea, and then you will be the copyright holder to that new code. Now this gets interesting when we come into the context of GitHub Copilot and things like this, but let's leave that out of the way for now. Copyright applies to creative works of, and this is sometimes very explicitly described, human authors. I've previously reported on the case of Stephen Thaler, who tries to patent or obtain copyright registrations on the outputs of his AI algorithm. For example, here is an article by Clyde Schumann of Pearl Cohen that goes into detail on how this was again and again rejected. The copyright office again concluded that the work lacked the required human authorship necessary to sustain a claim in copyright. So a human author needs to be involved in order for a work to have copyright. Source code is not the same as the output of an algorithm. For example, if you write the source code for a machine learning model, the training code, the data loading code and all of that, the
optimizer code, then you have copyright on all of that, but not automatically on the output of that code. So then you run the code, and the output of that code, of the training process, is the model. The model output is different from the source code, and it's not per se clear whether you have copyright on that model. Now Thaler here argues that his AI, his algorithm, should have copyright on that thing. But it is also thinkable that he, as the maker of the algorithm and the runner of the algorithm, has copyright on the thing. But as I understand it, both of these claims have been rejected. The courts have ruled that, while if you use something like Photoshop to make a digital painting, then yes, it's essentially a tool and you provide the creative input as a human, so you have the copyright on that final output of the algorithm, even if it's run through Photoshop. But if you simply press go on Stable Diffusion, then you do not necessarily have copyright on the output. If you enter a prompt, however, then that could be considered enough human authorship. But what I'm pretty sure of, again my opinion, is that if you simply write training code for a language model and then let that run, you do not have copyright on the resulting model, because it would not be considered in most jurisdictions as a creative work, because you have not done any sort of creative thinking, you have not come up with an idea, it is not an intent to bring an idea to life in a work. In fact, we do know that these things are essentially black boxes, so it's essentially impossible to fulfill these many provisions and standards of copyright law here. So in my opinion, you as a human don't have the copyright on the resulting model, and neither does the algorithm itself. The NYU article states: the difficult question is whether an algorithm exhibits sufficient intellectual labor, or whether we would deem an algorithm to be capable of exhibiting any intellectual labor or true creativity at all. Now obviously copyright law is much more difficult than that, but after reading through a big chunk of it, which I guess is still a tiny chunk of everything there is to know, I am fairly sure there is no copyright at all on models if they are simply trained by an algorithm, like the training code for GPT or the training code for Stable Diffusion. And therefore you can't simply say: here is the license for the model. The reason that works with code, the reason you can simply put an MIT license file next to your code on GitHub, is because without that, no one would be allowed to use your code by default. So by default you would have copyright and no one could copy it, and by putting that file there, you essentially allow that. However, here it's the other way around: you do not have a default license, you do not have a default right on the model itself. On the code, yes, but not on the model. And therefore, if you simply put that model somewhere to download, it doesn't matter whether you have a license file next to it, because I can download the model file and I have never agreed to that license. And without having agreed to that license, there is absolutely nothing you can do against me using that model for whatever purpose. And that is why, at least in my estimation, Hugging Face now implements these barriers right here: you need to agree to share your contact information to access this model. Now this is framed as, you know, you share your contact information, we just want to know who's using that model. No, no, no. You have to accept the conditions to access its files and content, and
next to the checkmark it says: I have read the license and agree with its terms. Now this isn't just to register your username with the authors. Clicking this checkbox right here is a contract. You are entering into a contract with, I guess, Hugging Face, I'm not really sure, but by doing this action you actively accept the license, and that's how it becomes enforceable. I mean, if you have different opinions, please correct me if I'm wrong, but for example, I don't see the same checkbox thing here on the BLOOM model, or on the original Stable Diffusion model, even though I guess there aren't actually any files right here. But notice the difference with something like an Apache, a GPL or an MIT license: there is automatic copyright, which essentially gets downgraded for you to be able to use it, so you essentially implicitly accept the license by doing so. Whereas here, there is no license, and you enter into a contract by clicking this checkbox. And this, in my opinion, is another downside of these licenses, because we can't simply put these models out there anymore for people to download. We are actually legally required to make sure that every person who's able to download the model has first entered into such a contract with whomever it is that makes the model available to download. And this again severely restricts the distribution capabilities of these models, and essentially centralizes an already relatively central system even more, to institutions who can actually enforce such provisions, or at least can enforce the fact that you need to enter into the agreement, such as having a website with a little checkbox that has a user login and so on. But I hope you kind of see that, even though this is all framed in terms of open source and so on, this has nothing to do with the provisions of open source. It is not based on copyright law, so the legal pathway is entirely different. On top of that, again, I would argue that these licenses are quite harmful to the ecosystems. They're very paternalistic, and I think we should move away as fast as possible from this attitude that some people absolutely know what's good for other people, and force them to come back if they have some different idea of what's ethical and unethical, useful and not useful, and make them essentially go and ask for permission for all of these things. Yeah, I don't like it. Don't do it. If you make a model, put it out there, give good information about what it can and can't do, what it might be useful for, what it might not be useful for, what the dangers of it are and whatnot, and then put the decision power and the competence with the users. Contrary to what Silicon Valley believes, the rest of the world isn't just oblivious to any ethical considerations. I know it's hard to believe, but a person can actually make competent decisions even though they're not paying twelve dollars for a pumpkin spice latte. And I hope the current run of models, for example Stable Diffusion, which is a really useful model, do get somehow retrained or re-licensed in the future to be actually open source and actually conform to the principles of free software. Until then, be careful what you enter into that prompt box. That's all from me. Again, if you want to access the OpenRAIL++ license, it's ykilcher.com/license, and I'll see you next time. Bye-bye.
[{"start": 0.0, "end": 8.0, "text": " The new responsible AI licenses that models like stable diffusion or bloom have are stupid."}, {"start": 8.0, "end": 10.24, "text": " They conflict with open source principles."}, {"start": 10.24, "end": 16.04, "text": " In fact, they're distinctly not open source and they have a glaring legal loophole in them."}, {"start": 16.04, "end": 21.6, "text": " So join me as we'll explore the fun world of model licenses."}, {"start": 21.6, "end": 23.88, "text": " So first things first, I am not a lawyer."}, {"start": 23.88, "end": 25.2, "text": " This is not legal advice."}, {"start": 25.2, "end": 29.8, "text": " These are my own opinions and the conclusions that I've come to while researching this topic"}, {"start": 29.8, "end": 33.8, "text": " and all of it is for entertainment purposes only."}, {"start": 33.8, "end": 38.08, "text": " Take everything with a grain of salt and with my own personal bias."}, {"start": 38.08, "end": 43.04, "text": " That being said, if you go to the hugging face hub right now and you look at stable diffusion,"}, {"start": 43.04, "end": 46.92, "text": " what you're going to see is this is a pill right here."}, {"start": 46.92, "end": 49.72, "text": " License Creative ML Open Rail Eb"}, {"start": 49.72, "end": 52.8, "text": " Open rail is a new type of license."}, {"start": 52.8, "end": 55.44, "text": " Rail in this case, so this is the license."}, {"start": 55.44, "end": 58.24, "text": " Rail is the responsible AI license."}, {"start": 58.24, "end": 60.24, "text": " I believe that's what the acronym stands for."}, {"start": 60.24, "end": 67.96000000000001, "text": " Open means that it is without usage restrictions and M stands for the model that is being licensed"}, {"start": 67.96000000000001, "end": 70.68, "text": " as opposed to the code or the data."}, {"start": 70.68, "end": 73.12, "text": " But stable diffusion isn't the only model."}, {"start": 73.12, "end": 77.96000000000001, "text": " In fact, the first model at least that I'm aware of using such a license was bloom,"}, {"start": 77.96000000000001, "end": 83.28, "text": " which was released earlier, which is a large language model that comes out of the big science initiative."}, {"start": 83.28, "end": 88.64, "text": " And it uses the very similar big science bloom rail 1.0 license."}, {"start": 88.64, "end": 90.92, "text": " Now, what is this rail license?"}, {"start": 90.92, "end": 92.76, "text": " What is an open rail license?"}, {"start": 92.76, "end": 97.32000000000001, "text": " Essentially, it is a permissive license that lets you use the model to produce stuff"}, {"start": 97.32000000000001, "end": 102.64, "text": " and puts no restrictions on you then taking that stuff, selling that stuff"}, {"start": 102.64, "end": 104.76, "text": " and doing with that stuff, whatever you want."}, {"start": 104.76, "end": 109.28, "text": " You're also allowed to take the model and actually sell it or sell its outputs"}, {"start": 109.28, "end": 112.44, "text": " or train it further, distill it, fine tune it,"}, {"start": 112.44, "end": 114.88, "text": " whatever you want to do and then make money off of it."}, {"start": 114.88, "end": 121.88, "text": " You have no responsibility, for example, as in GPL code, to then release your model again as open source."}, {"start": 121.88, "end": 128.2, "text": " So everything seems like a very permissive Apache or MIT license that you might be familiar"}, {"start": 128.2, "end": 129.8, "text": " if you are in software."}, {"start": 129.8, 
"end": 131.84, "text": " However, there is a difference."}, {"start": 131.84, "end": 138.04, "text": " The rail licenses explicitly put usage restrictions on these things."}, {"start": 138.04, "end": 143.95999999999998, "text": " So what does that mean? If you look at one of these licenses and you scroll way down to the attachments,"}, {"start": 143.95999999999998, "end": 146.56, "text": " then you'll see usage restrictions."}, {"start": 146.56, "end": 152.76, "text": " You agree not to use the model or derivatives of the model for any of these purposes"}, {"start": 152.76, "end": 158.39999999999998, "text": " and some of these purposes are to defame, disparage or otherwise harass others"}, {"start": 158.39999999999998, "end": 164.79999999999998, "text": " or to generate or disseminate verifiably false information with the purpose of harming others"}, {"start": 164.8, "end": 171.76000000000002, "text": " and so on. There are several usage restrictions in this license and the license make sure that you agree"}, {"start": 171.76000000000002, "end": 177.28, "text": " that you don't use the model for any of these purposes and whatever you do with the model"}, {"start": 177.28, "end": 185.64000000000001, "text": " be that fine tune it, distill it, sell it and so on, you must pass on, you must enforce continuously these usage restrictions."}, {"start": 185.64000000000001, "end": 190.60000000000002, "text": " So even if you take the model and you fine tune it on your own data or something like this,"}, {"start": 190.6, "end": 196.92, "text": " then you may keep that private but you may still not use it for any of these things."}, {"start": 196.92, "end": 201.44, "text": " So much like a copy left license that sort of propagates the openness of code."}, {"start": 201.44, "end": 207.6, "text": " In this case, it's not about the openness of the model but what is propagated is the usage restrictions."}, {"start": 207.6, "end": 215.24, "text": " So the purpose of this is that the developers of these models, they don't want their work to be used for anything"}, {"start": 215.24, "end": 218.44, "text": " that they consider bad or harmful or unethical."}, {"start": 218.44, "end": 222.24, "text": " Now they are not the first people to think about something like this."}, {"start": 222.24, "end": 227.64, "text": " The open source software community obviously had to grapple with this topic for a long time"}, {"start": 227.64, "end": 232.0, "text": " and they have reached a very conclusive conclusion."}, {"start": 232.0, "end": 234.48, "text": " Is that a word conclusive conclusion?"}, {"start": 234.48, "end": 239.96, "text": " Now let me quote from Richard Stolman on why programs must not limit the freedom to run them."}, {"start": 239.96, "end": 245.36, "text": " This is a principle of free software and ingrained in open source software."}, {"start": 245.36, "end": 252.20000000000002, "text": " So in this article he says free software means software controlled by its users rather than the reverse."}, {"start": 252.20000000000002, "end": 258.0, "text": " Specifically it means the software comes with four essential freedoms that software users deserve."}, {"start": 258.0, "end": 265.24, "text": " At the head of the list is Freedom 0, the freedom to run the program as you wish in order to do what you wish."}, {"start": 265.24, "end": 267.08000000000004, "text": " And here he goes into the arguments."}, {"start": 267.08000000000004, "end": 274.16, "text": " Some developers propose to place usage restrictions in 
software licenses to ban using the program for certain purposes"}, {"start": 274.16, "end": 277.56, "text": " but he says that would be a disastrous path."}, {"start": 277.56, "end": 280.64000000000004, "text": " This article explains why freedom 0 must not be limited."}, {"start": 280.64000000000004, "end": 287.16, "text": " Conditions to limit the use of a program would achieve little of their aims but would wreck the free software community."}, {"start": 287.16, "end": 293.8, "text": " So firstly describes what is evidently clear to everyone but is still actually a part of the open rail licenses."}, {"start": 293.8, "end": 300.0, "text": " If you look at the first usage restriction it's as you are not allowed to use the model in any way"}, {"start": 300.0, "end": 306.08, "text": " that violates any applicable national federal state, local or international law or regulation."}, {"start": 306.08, "end": 310.04, "text": " As Stolman points out here that is already covered by the law."}, {"start": 310.04, "end": 311.64, "text": " He gives the example of fraud."}, {"start": 311.64, "end": 317.36, "text": " He says a license condition against fraud would be superfluous and a contruder fraud is a crime."}, {"start": 317.36, "end": 325.12, "text": " And therefore the license condition that you may not break any laws is almost topological and superfluous."}, {"start": 325.12, "end": 330.8, "text": " But it would be okay if a license contains superfluous information after all lawyers want to be paid."}, {"start": 330.8, "end": 337.92, "text": " But he goes further and he gives the example what if the condition were against some specialized private activity that is not outlawed."}, {"start": 337.92, "end": 344.88, "text": " For instance, PTA proposed a license that would forbid the use of the software to cause pain to animals with a spinal column."}, {"start": 344.88, "end": 349.88, "text": " Or there might be a condition against using a certain program to make or publish drawings of vomit."}, {"start": 349.88, "end": 350.6, "text": " And so on."}, {"start": 350.6, "end": 362.76000000000005, "text": " He says it's not clear these would be enforceable free software licenses are based on copyright law and trying to impose usage condition that way is stretching what copyright law permits in a dangerous way."}, {"start": 362.76000000000005, "end": 367.92, "text": " Would you like books to carry a license condition about how you can use the information in them?"}, {"start": 367.92, "end": 377.64000000000004, "text": " Well it's a good point but actually this point that these licenses are based on copyright law in terms of the open-ray licenses in my opinion is actually not given."}, {"start": 377.64, "end": 386.36, "text": " And that's why we're gonna look at that's why on hugging face you have to click a little checkbox that you've actually read the license agreement for some of these models."}, {"start": 386.36, "end": 391.0, "text": " Because in my opinion copyright does not apply here but we'll get to that later."}, {"start": 391.0, "end": 396.84, "text": " The first installment asks what if such conditions are legally enforceable would that be good?"}, {"start": 396.84, "end": 404.68, "text": " And here it gets to the point the fact is people have very different ethical ideas about the activities that might be done using software."}, {"start": 404.68, "end": 411.24, "text": " I happen to think those four unusual activities the ones he mentioned above are legitimate and should not be forbidden."}, 
{"start": 411.24, "end": 416.84000000000003, "text": " And he clearly says your views about these issues might differ and that's precisely the point."}, {"start": 416.84000000000003, "end": 422.84000000000003, "text": " The result of such usage restrictions would be a system that you could not count on for any purpose."}, {"start": 422.84000000000003, "end": 433.44, "text": " Allowing usage restrictions in free software would mainly push users towards non-free software trying to stop users from doing something through usage restrictions in free software"}, {"start": 433.44, "end": 440.08, "text": " is as ineffective as pushing on an object through a long straight soft piece of cooked spaghetti."}, {"start": 440.08, "end": 448.64, "text": " It's akin to someone with a very small hammer seeing every problem as a nail and not even acknowledging that the nail is far too big for the hammer."}, {"start": 448.64, "end": 453.2, "text": " But not only is it ineffective it is worse than ineffective, Stolen says."}, {"start": 453.2, "end": 459.36, "text": " It's wrong too because software developers should not exercise such power over what users do."}, {"start": 459.36, "end": 466.96000000000004, "text": " Imagine selling pens with conditions but what you can write with them if you make something that is generally useful like a pen."}, {"start": 466.96000000000004, "end": 473.52000000000004, "text": " People will use it to write all sorts of things, even horrible things such as order to torture a dissident."}, {"start": 473.52000000000004, "end": 477.68, "text": " But you must not have the power to control people's activities through their pens."}, {"start": 477.68, "end": 483.12, "text": " It is the same for text editor, compiler or a kernel and in my opinion for a language model."}, {"start": 483.12, "end": 489.68, "text": " In my opinion Richard Stolenen really hits the nail on the head here with an appropriately sized hammer."}, {"start": 489.68, "end": 497.84000000000003, "text": " We've seen in recent years more and more an evolution in the AI world of a mentality that essentially says we know what's good for you."}, {"start": 497.84000000000003, "end": 502.24, "text": " And a complete disregard that other people might have different ideas."}, {"start": 502.24, "end": 505.44, "text": " Now don't get me wrong if you create something like this."}, {"start": 505.44, "end": 509.44, "text": " You can put any license on it that you want. You can make any contract that you want."}, {"start": 509.44, "end": 514.64, "text": " You can make money off it and keep it for yourself whatever you want. 
But don't then also go out and say,"}, {"start": 514.64, "end": 517.68, "text": " oh, we are free, we are open, we are for everyone."}, {"start": 517.68, "end": 518.88, "text": " No, you are not."}, {"start": 518.88, "end": 525.68, "text": " And it takes no further to look than actually to look at the license itself and some of these usage restrictions."}, {"start": 525.68, "end": 531.6, "text": " For example, you may not use this model to provide medical advice and medical results interpretation."}, {"start": 531.6, "end": 541.84, "text": " You know how many people in the world do not have access to any medical advice at all and would actually be benefiting from some sort of medical advice."}, {"start": 541.84, "end": 544.88, "text": " With maybe a disclaimer that look, this is generated."}, {"start": 544.88, "end": 549.44, "text": " Don't take this as fact, but they would usually benefit from something like this."}, {"start": 549.44, "end": 559.36, "text": " You may not use this model to generate or disseminate information for the purpose to be used in administration of justice, law enforcement, immigration or asylum processes."}, {"start": 559.36, "end": 567.92, "text": " This is like a like Silicon Valley is the entire world for all the inclusivity and diversity that these people claim."}, {"start": 567.92, "end": 575.12, "text": " The world view over what's good and what's bad and what's useful and what's unethical is so narrow."}, {"start": 575.12, "end": 583.12, "text": " How many places in the world would be immensely thankful to any help they can get with enforcing justice with effectively"}, {"start": 583.12, "end": 590.0, "text": " administrating law enforcement. Now I'm not saying that these things are good or bad per se and I can see where these people are coming from."}, {"start": 590.0, "end": 597.2, "text": " But it is exactly how Stalman says it is making a pen and then telling people what they can and can't write with the pen."}, {"start": 597.2, "end": 602.16, "text": " Without any regard that in a different context what they may write may actually be good for them."}, {"start": 602.16, "end": 609.44, "text": " And we've seen a lot of applications of language model that violate a lot of these things that actually have beneficial applications."}, {"start": 609.44, "end": 612.88, "text": " But don't worry, there is always a method to do that."}, {"start": 612.88, "end": 620.08, "text": " See this here is from a blog post that accompanies the big science open rail license with the release of the Bloom model."}, {"start": 620.08, "end": 626.72, "text": " My use of the model falls under a restriction but I still think it's not harmful and could be valuable."}, {"start": 626.72, "end": 633.68, "text": " Well the blog post says please contact the licensor of the model you are using or distributing for them to assess the case"}, {"start": 633.68, "end": 638.72, "text": " and see whether an authorization and or license could be granted for you in this very specific case."}, {"start": 638.72, "end": 645.44, "text": " So here is the answer even though you may think that what you're doing is quite okay and actually beneficial"}, {"start": 645.44, "end": 650.1600000000001, "text": " even though a technically conflicts with one of the usage restrictions you go to them."}, {"start": 650.1600000000001, "end": 656.88, "text": " You go to the creators of the model and ask may I please have an exception for these usage restrictions"}, {"start": 656.88, "end": 660.24, "text": " for my 
particular case and they will assess that for you."}, {"start": 660.24, "end": 665.44, "text": " Now again I'm not saying they can't do that this is absolutely legal and if that's how they want to go"}, {"start": 665.44, "end": 670.96, "text": " about releasing their model then find with me but it is certainly not open it is certainly not"}, {"start": 670.96, "end": 678.1600000000001, "text": " inclusive it is certainly not accessible to the whole world it is very much we know what's good"}, {"start": 678.1600000000001, "end": 683.9200000000001, "text": " for you and you play a you do not have the authority to decide that for yourself you come to us"}, {"start": 683.9200000000001, "end": 686.8000000000001, "text": " and then we decide if it's good enough."}, {"start": 686.8000000000001, "end": 692.8800000000001, "text": " What's even more the rest of the license is essentially it's a copy paste of rather standard terms"}, {"start": 692.88, "end": 698.24, "text": " of permissive open source licenses such as this one the software is provided on an as is"}, {"start": 698.24, "end": 703.28, "text": " basis without warranties or conditions of any kind either expressed or implied including"}, {"start": 703.28, "end": 708.24, "text": " without limitations any warranties or conditions of title non-infringement merchantability or"}, {"start": 708.24, "end": 713.4399999999999, "text": " fitness for a particular purpose. You are solely responsible for determining the appropriateness"}, {"start": 713.4399999999999, "end": 718.16, "text": " of using or redistributing the model derivatives of the model and complementary material and"}, {"start": 718.16, "end": 723.6, "text": " assume any risks associated with your exercise of permission under this license."}, {"start": 723.6, "end": 729.8399999999999, "text": " So the license is very unidirectional it is we don't trust you we put usage restrictions on you"}, {"start": 729.8399999999999, "end": 737.92, "text": " user of the model but when it comes to us nope no liability no warranty no nothing no guarantees"}, {"start": 737.92, "end": 744.3199999999999, "text": " of anything that the model does. 
Usually in open source software this is bidirectional it's"}, {"start": 744.32, "end": 750.5600000000001, "text": " I write some code if it misbehaves you know you're the one using it if I do something stupid you"}, {"start": 750.5600000000001, "end": 755.6800000000001, "text": " choose to download or not to download it that's it but on the other hand I will not come to you"}, {"start": 755.6800000000001, "end": 761.0400000000001, "text": " and tell you how to use it or what to do with it and what not to do with it whereas here same"}, {"start": 761.0400000000001, "end": 766.96, "text": " thing for the creators but not so same thing for the users but we go on and here is where I think"}, {"start": 766.96, "end": 772.6400000000001, "text": " the crucial part comes in and thanks to people on our discord for pointing this out to me."}, {"start": 772.64, "end": 778.48, "text": " There is paragraph seven right here updates and runtime restrictions to the maximum extent"}, {"start": 778.48, "end": 784.8, "text": " permitted by law licensor reserves the right to restrict remotely or otherwise usage of the model"}, {"start": 784.8, "end": 792.16, "text": " in violation of this license so if you violate the license and you somehow use it via an API or"}, {"start": 792.16, "end": 798.24, "text": " something like this or there is some other means of restricting you a licensor can do that so far"}, {"start": 798.24, "end": 804.08, "text": " so good but it also says they reserve the right to update the model through electronic means or"}, {"start": 804.08, "end": 810.88, "text": " modify the output of the model based on updates now as far as I understand this is not just in"}, {"start": 810.88, "end": 816.4, "text": " violation of the license they reserve the right to update the model just indefinitely now you"}, {"start": 816.4, "end": 822.24, "text": " may think okay this isn't too bad either you can just release an update so what the last sentence"}, {"start": 822.24, "end": 830.24, "text": " says you shall undertake reasonable efforts to use the latest version of this model and this I believe"}, {"start": 830.24, "end": 835.52, "text": " is in fact the dangerous part it goes beyond just usage restrictions or non-use"}, {"start": 835.52, "end": 841.2, "text": " jurisdictions first of all it's gonna depend on what reasonable efforts means but certainly if"}, {"start": 841.2, "end": 846.32, "text": " you're simply downloading a model from hugging face and then running it then reasonable effort"}, {"start": 846.32, "end": 852.08, "text": " would certainly include that you point your download script to the new version if you fine-tuned"}, {"start": 852.08, "end": 858.08, "text": " your model a little bit to do something then I guess it's up to a judge to decide whether it's"}, {"start": 858.08, "end": 864.8000000000001, "text": " reasonable effort for you to redo that fine-tuning with the new version of the base model it might"}, {"start": 864.8000000000001, "end": 871.2, "text": " very well be but what does that mean in practice well let's for a moment assume that reasonable"}, {"start": 871.2, "end": 877.36, "text": " effort means that you actually have to upgrade whether you're a fine-tuner or just a consumer of"}, {"start": 877.36, "end": 882.32, "text": " the original model what someone could do if they don't like a certain model being out there for"}, {"start": 882.32, "end": 888.08, "text": " example stable diffusion if they don't like stable diffusion being out there just for free to use"}, 
{"start": 888.08, "end": 893.6, "text": " for everyone well they could just buy the organization that made stable diffusion and therefore"}, {"start": 893.6, "end": 900.08, "text": " buy the holder of the rights to the stable diffusion model they could release and update to the model"}, {"start": 900.08, "end": 907.44, "text": " that just so happens to be much worse than the previous model but you would be forced under the"}, {"start": 907.44, "end": 913.36, "text": " slice to upgrade to the newest model you could actually not run the old model anymore a judge is"}, {"start": 913.36, "end": 918.1600000000001, "text": " not gonna care that you explain to them but the old model is actually way better and does a better"}, {"start": 918.1600000000001, "end": 924.0, "text": " job no the judge will simply say well this is a new version of the model you agree to always"}, {"start": 924.0, "end": 930.16, "text": " upgrade to the newest model so therefore you must use it so there is a clear path for anyone with a"}, {"start": 930.16, "end": 936.64, "text": " chunk of money to destroy any of these models that are currently out there by simply buying them"}, {"start": 936.64, "end": 942.16, "text": " releasing an upgraded version and then there goes your model now you may think that is far fetched"}, {"start": 942.16, "end": 947.28, "text": " but I guess both of us can think of a few places that have a lot of money and have a vested"}, {"start": 947.28, "end": 953.04, "text": " interest in such things not being freely open and freely shared around so take your pick now"}, {"start": 953.04, "end": 957.4399999999999, "text": " here's the deal I don't like these licenses I think they're counterproductive I think they're"}, {"start": 957.4399999999999, "end": 964.0799999999999, "text": " counter to the spirit of open source and I think they have a paternalistic elitist mentality we"}, {"start": 964.0799999999999, "end": 970.9599999999999, "text": " know what's good for you but if you are so inclined if you must use a license with usage restrictions"}, {"start": 970.9599999999999, "end": 978.16, "text": " if that is really your thing to do that then I have created an updated version for you I call it"}, {"start": 978.16, "end": 985.12, "text": " the open rail plus plus license the m here stands for model feel free to adjust this to open"}, {"start": 985.12, "end": 991.52, "text": " rail D or open rail A licenses the license is essentially exactly the same you fill in a bunch"}, {"start": 991.52, "end": 997.1999999999999, "text": " of stuff the only difference is that paragraph seven has the last sentence removed the receiver"}, {"start": 997.1999999999999, "end": 1003.1999999999999, "text": " of the license must not take reasonable efforts to always use the latest version of the model that's"}, {"start": 1003.2, "end": 1010.24, "text": " it if you must use usage restrictions use the open rail plus plus license okay now that we got"}, {"start": 1010.24, "end": 1014.4000000000001, "text": " that out of the way I want to come to the last part of this video and here I want to say again I"}, {"start": 1014.4000000000001, "end": 1023.2800000000001, "text": " am not a lawyer this is my opinion but in my opinion this thing is drastically different from the"}, {"start": 1023.2800000000001, "end": 1028.72, "text": " open source licenses that we are used to not just in terms of the content of a containing usage"}, {"start": 1028.72, "end": 1035.84, "text": " restrictions but in fact the little pathway how such a 
license is applicable is completely different"}, {"start": 1035.84, "end": 1043.76, "text": " see open source licenses are based on copyright now copyright applies to a work of creative"}, {"start": 1043.76, "end": 1049.52, "text": " making a creative work as it's defined now creative works are defined differently from jurisdiction"}, {"start": 1049.52, "end": 1055.28, "text": " to jurisdiction but here in the NYU journal for intellectual property and entertainment law"}, {"start": 1055.28, "end": 1061.44, "text": " there is a post by Samantha think headric that goes into detail of copyright and code and how it"}, {"start": 1061.44, "end": 1067.04, "text": " relates to algorithms and the outputs of algorithms and that's an important distinction specifically"}, {"start": 1067.04, "end": 1072.08, "text": " it talks about some court decision saying the seventh circuit however has provided a framework that"}, {"start": 1072.08, "end": 1078.72, "text": " breaks down creativity into three distinct elements of originality creativity and novelty a work"}, {"start": 1078.72, "end": 1084.6399999999999, "text": " is original if it is the independent creation of its author a work is creative if it embodies some"}, {"start": 1084.64, "end": 1090.0800000000002, "text": " modest amount of intellectual labor a work is novel if it differs from existing works in some"}, {"start": 1090.0800000000002, "end": 1095.76, "text": " relevant aspect for a work to be copyrightable it must be original and creative but need not be"}, {"start": 1095.76, "end": 1102.16, "text": " novel now all of these things are again pretty vague but here's the deal copyright applies"}, {"start": 1102.16, "end": 1108.24, "text": " automatically if you make a creative work such as if you write a book if you make a movie or"}, {"start": 1108.24, "end": 1115.84, "text": " anything like this you automatically receive copyright for that but that only applies to creative"}, {"start": 1115.84, "end": 1123.52, "text": " works now usually ideas are not considered creative works you can patent certain ideas depending"}, {"start": 1123.52, "end": 1129.52, "text": " on the jurisdiction but you cannot have copyright on an idea you only have copyright of on the"}, {"start": 1129.52, "end": 1136.8, "text": " realization of an idea if it is a creative work so for example you do not have copyright on that"}, {"start": 1136.8, "end": 1146.0, "text": " idea of aromas between two Italian rival families but the work of Romeo and Juliet has copyright to"}, {"start": 1146.0, "end": 1152.1599999999999, "text": " it and the same counts for source code you do not have copyright on the idea of the Linux kernel"}, {"start": 1152.1599999999999, "end": 1158.72, "text": " but copyright exists on the code itself of the kernel that's why you can re-implement someone"}, {"start": 1158.72, "end": 1164.3999999999999, "text": " else's algorithm in your own code provided you haven't copied from them and provided a judge"}, {"start": 1164.4, "end": 1169.92, "text": " rules that it is substantially different implementation of the idea and then you will be the"}, {"start": 1169.92, "end": 1176.0800000000002, "text": " copyright holder to that new code now this gets interesting when we come into the context of"}, {"start": 1176.0800000000002, "end": 1182.0800000000002, "text": " GitHub co-pilot and things like this but let's leave this out of the way for now copyright applies"}, {"start": 1182.0800000000002, "end": 1189.2, "text": " to creative works off and this is sometimes 
very explicitly described human authors i've previously"}, {"start": 1189.2, "end": 1196.8, "text": " reported on the case of Stephen Tyler that tries to patent or obtain copyright registrations on the"}, {"start": 1196.8, "end": 1203.52, "text": " work outputs of his AI algorithm for example here is an article by Clyde Schumann of Pearl Cohen"}, {"start": 1203.52, "end": 1210.0800000000002, "text": " that goes into detail of how this was again and again rejected the copyright office again concluded"}, {"start": 1210.0800000000002, "end": 1216.56, "text": " that the work lacked the required human authorship necessary to sustain a claim in copyright so a"}, {"start": 1216.56, "end": 1224.0, "text": " human author needs to be involved in order for work to have copyright source code is not the same"}, {"start": 1224.0, "end": 1231.2, "text": " as the output of an algorithm for example if you write the source code for a machine learning"}, {"start": 1231.2, "end": 1237.44, "text": " model the training code the data loading code and all of that the optimizer code then you have"}, {"start": 1237.44, "end": 1244.24, "text": " copyright on all of that but not automatically on the output of that code so then you run the code"}, {"start": 1244.24, "end": 1249.6, "text": " and the output of that code of the training process is the model the model output is different from"}, {"start": 1249.6, "end": 1254.56, "text": " the source code and it's not per se clear whether you have copyright on that model now Tyler here"}, {"start": 1254.56, "end": 1261.44, "text": " argues that he is AI his algorithm should have copyright on that thing but it is also thinkable"}, {"start": 1261.44, "end": 1266.96, "text": " that he as the maker of the algorithm and the runner of the algorithm has copyright on the thing"}, {"start": 1266.96, "end": 1272.48, "text": " but as i understand it both of these claims have been rejected the courts have ruled that while if"}, {"start": 1272.48, "end": 1278.4, "text": " you use something like photoshop to make an i-stigital painting then yes it's essentially a tool and"}, {"start": 1278.4, "end": 1283.76, "text": " you provide the creative input as a human so you have the copyright on that final output of the"}, {"start": 1283.76, "end": 1290.72, "text": " algorithm even if it's run through photoshop but if you simply press go on stable diffusion then"}, {"start": 1290.72, "end": 1297.1200000000001, "text": " you do not necessarily have copyright on the output if you enter a prompt however then that"}, {"start": 1297.12, "end": 1302.9599999999998, "text": " could be considered enough human authorship but what i'm pretty sure again opinion is that if you"}, {"start": 1302.9599999999998, "end": 1309.04, "text": " simply write training code for a language model and then let that run you do not have copyright"}, {"start": 1309.04, "end": 1315.9199999999998, "text": " on the resulting model because it would not be considered on their most jurisdictions as a creative"}, {"start": 1315.9199999999998, "end": 1321.9199999999998, "text": " work because you have not done any sort of creative thinking you have not been able to come up with"}, {"start": 1321.92, "end": 1329.2, "text": " an idea it is not an intent to bring an idea to life in a work in fact we do know that these things"}, {"start": 1329.2, "end": 1334.72, "text": " are essentially black boxes so it's essentially impossible to fulfill these many provisions and"}, {"start": 1334.72, "end": 1340.88, "text": " standards of copyright 
law here so in my opinion you as a human don't have the copyright on the"}, {"start": 1340.88, "end": 1347.2, "text": " resulting model and neither does the algorithm itself the NYU article states the difficult question"}, {"start": 1347.2, "end": 1352.96, "text": " is whether an algorithm exhibits sufficient intellectual labor or whether we would deem an algorithm"}, {"start": 1352.96, "end": 1359.04, "text": " to be capable of exhibiting any intellectual labor or true creativity at all now obviously copyright"}, {"start": 1359.04, "end": 1363.92, "text": " law is much more difficult than that but after reading through a big chunk of it which i guess is"}, {"start": 1363.92, "end": 1369.92, "text": " still a tiny chunk of everything there is to know i am fairly sure there is no copyright at all"}, {"start": 1369.92, "end": 1377.2, "text": " on models if they are simply trained by an algorithm like the training code for gpt or the training"}, {"start": 1377.2, "end": 1383.68, "text": " code for stable diffusion and therefore you can't simply say here is the license for the model"}, {"start": 1383.68, "end": 1390.72, "text": " the reason that works with code the reason you can simply put an MIT license file next to your code"}, {"start": 1390.72, "end": 1396.8000000000002, "text": " on github is because without that no one would be allowed to use your code by default so by default"}, {"start": 1396.8, "end": 1401.28, "text": " you would have copyright and no one could copy it and by putting that file there you essentially"}, {"start": 1401.28, "end": 1406.24, "text": " allow that however here it's the other way around you do not have a default license you do not"}, {"start": 1406.24, "end": 1412.32, "text": " have a default right on the model itself on the code yes but not on the model and therefore if"}, {"start": 1412.32, "end": 1417.52, "text": " you simply put that model somewhere to download it doesn't matter whether you have a license file"}, {"start": 1417.52, "end": 1423.52, "text": " next to it because i can download the model file and i have never agreed to that license and without"}, {"start": 1423.52, "end": 1429.76, "text": " having agreed to that license there is absolutely nothing you can do against me using that model for"}, {"start": 1429.76, "end": 1435.84, "text": " whatever purpose and that is why at least in my estimation hugging face now implements these barriers"}, {"start": 1435.84, "end": 1441.04, "text": " right here you need to agree to share your contact information to access this model now this is"}, {"start": 1441.04, "end": 1446.32, "text": " framed as you know you share your contact information we just want to know who's using that model"}, {"start": 1446.32, "end": 1452.6399999999999, "text": " no no no no no no no you have to accept the conditions to access its files and content and next to"}, {"start": 1452.64, "end": 1459.5200000000002, "text": " the checkmark it says i have read the license and agree with its terms now this isn't just to register"}, {"start": 1459.5200000000002, "end": 1466.3200000000002, "text": " your username with the authors clicking this checkbox right here is a contract you are entering into"}, {"start": 1466.3200000000002, "end": 1473.92, "text": " a contract with i guess hugging face i'm not really sure but by doing this action you actively accept"}, {"start": 1473.92, "end": 1479.68, "text": " the license and that's how it becomes enforceable i mean if you have different opinions please"}, {"start": 1479.68, "end": 1485.68, 
"text": " correct me if i'm wrong but for example i don't see the same checkboxy thing here on the bloom"}, {"start": 1485.68, "end": 1491.2, "text": " model or on the original stable diffusion model even though i guess there aren't actually any files"}, {"start": 1491.2, "end": 1497.44, "text": " right here but notice the difference with something like an Apache a gpl or an MIT license there"}, {"start": 1497.44, "end": 1503.6000000000001, "text": " is automatic copyright which essentially gets downgraded for you to be able to use it so you"}, {"start": 1503.6, "end": 1510.24, "text": " essentially implicitly accept the license by doing so whereas here there is no license and you"}, {"start": 1510.24, "end": 1516.56, "text": " enter into a contract by clicking this checkbox and this in my opinion is another downside of"}, {"start": 1516.56, "end": 1522.3999999999999, "text": " these licenses because we can't simply put these models out there anymore for people to download"}, {"start": 1522.3999999999999, "end": 1529.36, "text": " we actually are legally enforced to make sure that every person who's able to download the model"}, {"start": 1529.36, "end": 1535.76, "text": " first has entered into such a contract with whomever it is that makes the model available to"}, {"start": 1535.76, "end": 1541.04, "text": " download and this again severely restricts the distribution capabilities of these models and"}, {"start": 1541.04, "end": 1547.36, "text": " essentially centralizes an already relatively central system even more to institutions who can"}, {"start": 1547.36, "end": 1553.9199999999998, "text": " actually enforce such provisions or at least can enforce the fact that you need to enter into the"}, {"start": 1553.92, "end": 1559.2, "text": " agreement such as having a website with a little checkbox that has a user login and so on but"}, {"start": 1559.2, "end": 1564.8000000000002, "text": " i hope you kind of see that even though this is all framed in terms of open source and so on"}, {"start": 1564.8000000000002, "end": 1570.72, "text": " this has nothing to do with the provisions of open source it is not based on copyright law"}, {"start": 1570.72, "end": 1577.1200000000001, "text": " so the legal pathway is entirely different on top of that again i would argue that these licenses"}, {"start": 1577.1200000000001, "end": 1583.3600000000001, "text": " are quite harmful to the ecosystems they're very paternalistic and i think we should move away"}, {"start": 1583.36, "end": 1589.9199999999998, "text": " as fast as possible from this attitude that some people absolutely know what's good for other people"}, {"start": 1589.9199999999998, "end": 1596.24, "text": " and force them to come back if they have some different idea of what's ethical and unethical and"}, {"start": 1596.24, "end": 1601.28, "text": " useful and not useful and make them essentially go and ask for permission for all of these things"}, {"start": 1601.28, "end": 1606.6399999999999, "text": " yeah i don't like it uh don't do it if you make a model put it out there give good information about"}, {"start": 1606.6399999999999, "end": 1611.4399999999998, "text": " what it can and can't do what it might be useful for what it might not be useful for what the"}, {"start": 1611.44, "end": 1617.28, "text": " dangers of it are and whatnot and then put the decision power and the competence with the users"}, {"start": 1617.28, "end": 1623.68, "text": " contrary to what silicon valley believes the rest of the world isn't just oblivious to 
any ethical"}, {"start": 1623.68, "end": 1629.2, "text": " considerations i know it's hard to believe but a person can actually make competent decisions"}, {"start": 1629.2, "end": 1634.3200000000002, "text": " even though they're not paying twelve dollars for a pumpkin spice latte and i hope the current"}, {"start": 1634.3200000000002, "end": 1641.04, "text": " run of models for example stable diffusion which is really useful model do get somehow retrained"}, {"start": 1641.04, "end": 1646.96, "text": " or realized in the future to be actually open source and actually conform to the principles"}, {"start": 1646.96, "end": 1652.6399999999999, "text": " of free software until then be careful what you enter into that prompt box that's all from me"}, {"start": 1652.6399999999999, "end": 1659.84, "text": " again if you want to access the open rail plus plus license it's ykilture.com slash license and"}, {"start": 1659.84, "end": 1670.8, "text": " i'll see you next time bye bye"}]
Yannic Kilcher
https://www.youtube.com/watch?v=_NMQyOu2HTo
ROME: Locating and Editing Factual Associations in GPT (Paper Explained & Author Interview)
"#ai #language #knowledge \n\nLarge Language Models have the ability to store vast amounts of facts (...TRUNCATED)
" Hello, today we're talking about locating and editing factual associations in GPT by Kevin Meng, D(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.4, \"text\": \" Hello, today we're talking about locating and editing (...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=igS2Wy8ur5U
Is Stability turning into OpenAI?
"#stablediffusion #aiart #openai \n\nStability AI has stepped into some drama recently. They are acc(...TRUNCATED)
" Stability AI has a few growing pains. In the recent weeks, they found themselves in multiple contr(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 10.5, \"text\": \" Stability AI has a few growing pains. In the recent w(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=_okxGdHM5b8
Neural Networks are Decision Trees (w/ Alexander Mattick)
"#neuralnetworks #machinelearning #ai \n\nAlexander Mattick joins me to discuss the paper \"Neural N(...TRUNCATED)
" Hello everyone today we're talking about neural networks and decision trees I have Alexander Mattick(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 14.0, \"text\": \" Hello everyone today we're talking about neural netwo(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=3N3Bl5AA5QU
This is a game changer! (AlphaTensor by DeepMind explained)
"#alphatensor #deepmind #ai \n\nMatrix multiplication is the most used mathematical operation in all(...TRUNCATED)
" Hello there, today DeepMind published a new paper called AlphaTensor. This is a system that speeds(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.54, \"text\": \" Hello there, today DeepMind published a new paper cal(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=S-7r0-oysaU
[ML News] OpenAI's Whisper | Meta Reads Brain Waves | AI Wins Art Fair, Annoys Humans
"#mlnews #openai #ai \n\nEverything important going on in the ML world right here!\n\nSponsor: Paper(...TRUNCATED)
" OpenAI releases Whisper to open source, PyTorch moves to the Linux Foundation and Meta can read yo(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.48, \"text\": \" OpenAI releases Whisper to open source, PyTorch moves(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=xbxe-x6wvRw
[ML News] Stable Diffusion Takes Over! (Open Source AI Art)
"#stablediffusion #aiart #mlnews \n\nStable Diffusion has been released and is riding a wave of crea(...TRUNCATED)
" Stable diffusion has been released to the public and the world is creative as never before. It's a(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 6.4, \"text\": \" Stable diffusion has been released to the public and t(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=0PAiQ1jTN5k
How to make your CPU as fast as a GPU - Advances in Sparsity w/ Nir Shavit
"#ai #sparsity #gpu \n\nSparsity is awesome, but only recently has it become possible to properly ha(...TRUNCATED)
" Today I'm talking to Nir Shavit about Sparsity. Nir has long been active in the field as a (...TRUNCATED)
"[{\"start\": 0.0, \"end\": 5.68, \"text\": \" Today I'm talking to Nier Shivit about Sparsity. Nier(...TRUNCATED)
Yannic Kilcher
https://www.youtube.com/watch?v=K-cXYoqHxBc
More Is Different for AI - Scaling Up, Emergence, and Paperclip Maximizers (w/ Jacob Steinhardt)
"#ai #interview #research \n\nJacob Steinhardt believes that future AI systems will be qualitatively(...TRUNCATED)
" Hi, this is an interview with Jacob Steinhardt, who is the author of a blog post series called Mor(...TRUNCATED)
"[{\"start\": 0.0, \"end\": 7.84, \"text\": \" Hi, this is an interview with Jacob Steinhardt, who i(...TRUNCATED)

Dataset Card for "Yannic-Kilcher"

This dataset collects transcripts of videos from the Yannic Kilcher YouTube channel. Each row corresponds to one video and stores the channel name, the video URL and title, the video description, the full transcript, and a list of timestamped transcript segments, each with a start time, an end time, and the spoken text.
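
The snippet below sketches how such a row could be loaded and its timestamped segments decoded. It assumes the dataset lives on the Hugging Face Hub (the repository id shown is a placeholder), that there is a single train split, and that the columns carry upper-case names such as TITLE, URL, and SEGMENTS; none of this is specified by the card itself, so adjust the names to whatever the actual dataset exposes.

```python
# A minimal loading sketch. The repository id ("<hub-user>/Yannic-Kilcher"), the
# "train" split, and the upper-case column names are assumptions made for this
# example -- substitute whatever the actual dataset exposes.
import json

from datasets import load_dataset

ds = load_dataset("<hub-user>/Yannic-Kilcher", split="train")

row = ds[0]
print(row["TITLE"])
print(row["URL"])

# The segments column appears to hold a JSON-encoded list of
# {"start": float, "end": float, "text": str} chunks; decode it if it is a string.
segments = row["SEGMENTS"]
if isinstance(segments, str):
    segments = json.loads(segments)

# Print the first few timestamped chunks of the transcript.
for seg in segments[:3]:
    print(f'[{seg["start"]:8.2f} -> {seg["end"]:8.2f}] {seg["text"].strip()}')
```

Keeping the segments as a JSON string in the raw column and decoding on access, as above, is only one possible convention; if the column is already stored as a list of dictionaries, the isinstance check simply falls through.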
