|
---
license: apache-2.0
tags:
- Kandinsky
- text-image
- text2image
- diffusion
- latent diffusion
- mCLIP-XLMR
- mT5
---
|
|
|
# Kandinsky 2.0 |
|
Kandinsky 2.0 is the first multilingual text2image model.
|
[GitHub repository](https://github.com/ai-forever/Kandinsky-2.0) |
|
|
|
**UNet size: 1.2B parameters** |
|
|
|
![NatallE.png](https://s3.amazonaws.com/moonup/production/uploads/1669132577749-5f91b1208a61a359f44e1851.png) |
|
|
|
Kandinsky 2.0 is a latent diffusion model with two multilingual text encoders:
|
* mCLIP-XLMR (560M parameters) |
|
* mT5-encoder-small (146M parameters) |
|
|
|
|
|
Together with multilingual training datasets, these encoders enable genuinely multilingual text2image generation.
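For intuition, here is a minimal sketch of how two multilingual text encoders can be fused into a single conditioning sequence for the UNet's cross-attention. The checkpoint names (`xlm-roberta-large`, `google/mt5-small`), the projection width, and the concatenation strategy are illustrative assumptions, not the exact Kandinsky 2.0 implementation.

```python
# Sketch: combining two multilingual text encoders into one conditioning
# sequence for a diffusion UNet. Checkpoints, projection width, and fusion
# strategy are assumptions for illustration only.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, XLMRobertaModel, MT5EncoderModel

prompt = "кошка в космосе"  # "a cat in space"

# XLM-R backbone (stand-in for mCLIP-XLMR) and the mT5 encoder
xlmr_tok = AutoTokenizer.from_pretrained("xlm-roberta-large")
xlmr = XLMRobertaModel.from_pretrained("xlm-roberta-large")
mt5_tok = AutoTokenizer.from_pretrained("google/mt5-small")
mt5 = MT5EncoderModel.from_pretrained("google/mt5-small")

with torch.no_grad():
    emb_a = xlmr(**xlmr_tok(prompt, return_tensors="pt")).last_hidden_state  # (1, La, 1024)
    emb_b = mt5(**mt5_tok(prompt, return_tensors="pt")).last_hidden_state    # (1, Lb, 512)

# Project both sequences to a shared width and concatenate along the sequence
# axis; the result would serve as cross-attention context for the UNet.
ctx_dim = 768  # assumed conditioning width
proj_a = nn.Linear(emb_a.shape[-1], ctx_dim)
proj_b = nn.Linear(emb_b.shape[-1], ctx_dim)
context = torch.cat([proj_a(emb_a), proj_b(emb_b)], dim=1)  # (1, La + Lb, ctx_dim)
print(context.shape)
```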
|
|
|
![header.png](https://s3.amazonaws.com/moonup/production/uploads/1669132825912-5f91b1208a61a359f44e1851.png) |
|
|
|
# How to use |
|
|
|
First install the package:

```bash
pip install "git+https://github.com/ai-forever/Kandinsky-2.0.git"
```

Then run text-to-image generation:

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img')
images = model.generate_text2img(
    'кошка в космосе',  # "a cat in space"
    batch_size=4,
    h=512, w=512,
    num_steps=75,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.01,
    guidance_scale=10,
)
```
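Because the text encoders are multilingual, the same call works with prompts in other languages. The prompt list below is purely illustrative:

```python
# Generate the same scene from prompts in different languages using the
# text2img API shown above; results may vary per language.
prompts = [
    'кошка в космосе',   # Russian: "a cat in space"
    'a cat in space',    # English
    '宇宙にいる猫',        # Japanese
]
for prompt in prompts:
    images = model.generate_text2img(
        prompt, batch_size=1, h=512, w=512,
        num_steps=75, guidance_scale=10,
    )
```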
|
|
|
# Authors |
|
|
|
+ Arseniy Shakhmatov: [GitHub](https://github.com/cene555), [Blog](https://t.me/gradientdip)

+ Anton Razzhigaev: [GitHub](https://github.com/razzant), [Blog](https://t.me/abstractDL)

+ Aleksandr Nikolich: [GitHub](https://github.com/AlexWortega), [Blog](https://t.me/lovedeathtransformers)

+ Vladimir Arkhipkin: [GitHub](https://github.com/oriBetelgeuse)

+ Igor Pavlov: [GitHub](https://github.com/boomb0om)

+ Andrey Kuznetsov: [GitHub](https://github.com/kuznetsoffandrey)

+ Denis Dimitrov: [GitHub](https://github.com/denndimitrov)
|
|
|
|
|
|
|
|