---
license: apache-2.0
tags:
- Kandinsky
- text-image
- text2image
- diffusion
- latent diffusion
- mCLIP-XLMR
- mT5
---
# Kandinsky 2.0
Kandinsky 2.0 is the first multilingual text2image model.

* [Open In Colab](https://colab.research.google.com/drive/1uPg9KwGZ2hJBl9taGA_3kyKGw12Rh3ij?usp=sharing)
* [GitHub repository](https://github.com/ai-forever/Kandinsky-2.0)
* [Habr post](https://habr.com/ru/company/sberbank/blog/701162/)
* [Demo](https://rudalle.ru/)
**UNet size: 1.2B parameters**
![NatallE.png](https://s3.amazonaws.com/moonup/production/uploads/1669132577749-5f91b1208a61a359f44e1851.png)
It is a latent diffusion model with two multilingual text encoders:
* mCLIP-XLMR (560M parameters)
* mT5-encoder-small (146M parameters)
These encoders, together with multilingual training datasets, enable genuinely multilingual text2image generation.
![header.png](https://s3.amazonaws.com/moonup/production/uploads/1669132825912-5f91b1208a61a359f44e1851.png)
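As a rough illustration of dual-encoder conditioning, one common approach is to project each encoder's token embeddings to a shared width and concatenate them along the sequence axis before feeding them to the UNet's cross-attention. The sketch below uses random arrays with hypothetical dimensions (the actual projection sizes and conditioning scheme inside Kandinsky 2.0 may differ):

```python
import numpy as np

# Hypothetical token-embedding outputs from the two encoders.
# Shapes and widths here are illustrative assumptions, not the model's exact values.
rng = np.random.default_rng(0)
clip_emb = rng.standard_normal((77, 1024))  # stand-in for mCLIP-XLMR output
mt5_emb = rng.standard_normal((64, 512))    # stand-in for mT5-encoder-small output

d_cond = 768  # assumed UNet cross-attention width
proj_clip = rng.standard_normal((1024, d_cond)) * 0.02
proj_mt5 = rng.standard_normal((512, d_cond)) * 0.02

# Project both sequences to the shared width, then concatenate along the
# sequence axis so the UNet attends over tokens from both encoders.
cond = np.concatenate([clip_emb @ proj_clip, mt5_emb @ proj_mt5], axis=0)
print(cond.shape)  # (141, 768)
```

Concatenating along the sequence axis (rather than summing) lets cross-attention weight each encoder's tokens independently.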
# How to use
Install the package:

```bash
pip install "git+https://github.com/ai-forever/Kandinsky-2.0.git"
```

Then generate images from a text prompt (prompts can be in any supported language):

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img')
images = model.generate_text2img(
    'кошка в космосе',  # "a cat in space"
    batch_size=4,
    h=512,
    w=512,
    num_steps=75,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.01,
    guidance_scale=10,
)
```
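`generate_text2img` returns a list of `PIL.Image` objects, so the results can be written to disk with a small helper. The sketch below is a hypothetical convenience function (not part of the `kandinsky2` API); the placeholder images stand in for model output so the snippet runs without a GPU:

```python
from PIL import Image

def save_images(images, prefix="kandinsky"):
    """Save a list of PIL images as PNG files and return their paths."""
    paths = []
    for i, img in enumerate(images):
        path = f"{prefix}_{i}.png"
        img.save(path)
        paths.append(path)
    return paths

# Placeholder images standing in for model.generate_text2img(...) output.
demo = [Image.new("RGB", (64, 64), "black") for _ in range(2)]
print(save_images(demo))  # ['kandinsky_0.png', 'kandinsky_1.png']
```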
# Authors
+ Arseniy Shakhmatov: [GitHub](https://github.com/cene555), [Blog](https://t.me/gradientdip)
+ Anton Razzhigaev: [GitHub](https://github.com/razzant), [Blog](https://t.me/abstractDL)
+ Aleksandr Nikolich: [GitHub](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers)
+ Vladimir Arkhipkin: [GitHub](https://github.com/oriBetelgeuse)
+ Igor Pavlov: [GitHub](https://github.com/boomb0om)
+ Andrey Kuznetsov: [GitHub](https://github.com/kuznetsoffandrey)
+ Denis Dimitrov: [GitHub](https://github.com/denndimitrov)