# SkyPaint-Chinese-EN-v-1.0
SkyPaint is a Chinese-English bilingual text-to-image project developed by Singularity-AI. It is still being updated and optimized.
- Project repository: [SkyWorkAIGC-SkyPaint](https://github.com/SkyWorkAIGC/SkyPaint)
# Model Introduction
The SkyPaint text-to-image model consists of two main components: a prompt text encoder and a diffusion model. Our optimization is therefore split into two steps. First, building on [OpenAI-CLIP](https://github.com/openai/CLIP), we optimized the prompt text encoder so that SkyPaint can recognize both Chinese and English. We then optimized the diffusion model so that SkyPaint can produce high-quality images in a modern art style.
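These two components map onto modules of a diffusers `StableDiffusionPipeline`; the minimal sketch below simply inspects them. The `text_encoder`/`unet` attribute names come from the diffusers library, and the assumption that SkyPaint ships its prompt encoder as the pipeline's `text_encoder` is ours:
```py
from diffusers import StableDiffusionPipeline

# Load the pipeline and inspect the two components described above.
pipe = StableDiffusionPipeline.from_pretrained("SkyWork/SkyPaint")
print(type(pipe.text_encoder))  # prompt text encoder (assumed SkyCLIP-based)
print(type(pipe.unet))          # diffusion (denoising) model
```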
# Model Features
* Supports Chinese, English, and mixed Chinese-English prompt input.
* Generates high-quality images in a modern art style.
* Supports English prompts written for the official stable_diffusion_1.x models and related fine-tuned models.
* Retains the usage habits and conventions of stable_diffusion prompts.
## Introduction to the SkyCLIP Model
SkyCLIP is a CLIP model obtained with an efficient method for training Chinese-English bilingual CLIP models. The method needs only text data to distill the [OpenAI-CLIP](https://github.com/openai/CLIP) model, which greatly lowers the data requirement, and it cuts the compute needed for training by more than 90% compared with the original CLIP model, making it easy for the open source community to reproduce and fine-tune. The method changes only the text encoder of OpenAI-CLIP; the resulting encoder can be paired with the OpenAI-CLIP image encoder for image-text retrieval.
# Show Cases
![机械狗](results/1.png)
![城堡 大海 夕阳 宫崎骏动画](results/2.png)
![花落知多少](results/3.png)
![半鸡半人,强壮](results/4.png)
![鸡你太美](results/5.png)
# Trial and Experience
Please visit [SkyPaint official website](https://sky-paint.singularity-ai.com/index.html#/),
or [scan the QR code with WeChat](https://user-images.githubusercontent.com/120169448/209092358-7556d2ea-6374-4235-b2ee-77665f066d2c.jpg) to experience the model.
# Test Cases
```py
from diffusers import StableDiffusionPipeline

device = 'cuda'
pipe = StableDiffusionPipeline.from_pretrained("SkyWork/SkyPaint").to(device)

prompts = [
    '机械狗',
    '城堡 大海 夕阳 宫崎骏动画',
    '花落知多少',
    '鸡你太美',
]

for prompt in prompts:
    # Prepend the 'sai-v1 art' tag that the model was trained with.
    prompt = 'sai-v1 art, ' + prompt
    image = pipe(prompt).images[0]
    image.save("%s.jpg" % prompt)
```
## SkyCLIP Training Data Sources
* Chinese-English parallel corpora from machine translation tasks.
* The United Nations Chinese-English parallel corpus.
* [LAION](https://laion.ai/) Chinese and English corpus (partial).
* [Wukong](https://wukong-dataset.github.io/wukong-dataset/index.html) Chinese corpus (partial).
* [AI-Challenger](https://github.com/AIChallenger) translation-task Chinese and English corpus.
* Chinese and English corpus of classical Chinese poetry.
* Chinese and English corpus built from common terms in prompt handbooks/"magic books".
## SkyCLIP Training Method
We use the text_encoder of OpenAI-CLIP as the teacher model and freeze its parameters. The student model is a multilingual BERT model of the same size as the teacher. During training, the English input is passed through the teacher model to obtain the corresponding t_en_hidden_state, while the English and Chinese inputs are passed through the student model to obtain the corresponding s_en_hidden_state and s_zh_hidden_state. Loss functions built from the l1, l2, and cosine distances, among others, drive the student model's Chinese and English hidden states toward the teacher model's hidden states, as sketched below. Because parallel Chinese and English sentences are naturally of unequal length, we also add a Chinese decoder during training to bring the parallel Chinese and English as close as possible: the student model's Chinese and English hidden states serve as the decoder's hidden_state input, and a translation task assists in aligning Chinese and English.
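As a rough illustration, here is a minimal sketch of such a distillation loss in PyTorch. It assumes the hidden states have already been pooled to fixed-size sentence representations of matching shape; the actual pooling, loss weighting, and decoder-assisted alignment used for SkyCLIP are not spelled out in this document.
```py
import torch
import torch.nn.functional as F

def distill_loss(t_en, s_en, s_zh):
    """Pull the student's English (s_en) and Chinese (s_zh) sentence
    representations toward the frozen teacher's English representation
    (t_en), combining l1, l2, and cosine distance terms."""
    loss = torch.zeros((), device=t_en.device)
    for s in (s_en, s_zh):
        loss = loss + F.l1_loss(s, t_en)
        loss = loss + F.mse_loss(s, t_en)
        loss = loss + (1.0 - F.cosine_similarity(s, t_en, dim=-1)).mean()
    return loss
```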
## SkyCLIP Model Evaluation
At present we mainly evaluate the zero-shot performance of SkyCLIP on [Flickr30K-CN](https://github.com/li-xirong/cross-lingual-cap), comparing it against several related open source models with Chinese capability. To keep the comparison fair, for models that come in multiple sizes we select the variant based on OpenAI-CLIP ViT-L/14. Our evaluation procedure follows the evaluation script provided by [Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP).
**Flickr30K-CN Retrieval** (zero-shot):

| Model | Text-to-Image R@1 | Text-to-Image R@5 | Text-to-Image R@10 | Image-to-Text R@1 | Image-to-Text R@5 | Image-to-Text R@10 | MR |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Taiyi-326M | 53.8 | 79.9 | 86.6 | 64.0 | 90.4 | 96.1 | 78.47 |
| AltCLIP | 50.7 | 75.4 | 83.1 | 73.4 | 92.8 | 96.9 | 78.72 |
| Wukong | 51.9 | 78.6 | 85.9 | 75.0 | 94.4 | 97.7 | 80.57 |
| R2D2 | 42.6 | 69.5 | 78.6 | 63.0 | 90.1 | 96.4 | 73.37 |
| CN-CLIP | 68.1 | 89.7 | 94.5 | 80.2 | 96.6 | 98.2 | 87.87 |
| SkyCLIP | 58.8 | 82.6 | 89.6 | 78.8 | 96.1 | 98.3 | 84.04 |
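Here MR is the mean of the six recall values. For reference, below is a small sketch of how R@k is typically computed from a similarity matrix, assuming one ground-truth candidate per query; the real Flickr30K-CN protocol pairs each image with several captions, which the Chinese-CLIP evaluation script handles.
```py
import numpy as np

def recall_at_k(sim, ks=(1, 5, 10)):
    """sim[i, j] is the similarity of query i to candidate j;
    candidate i is assumed to be the true match for query i."""
    order = (-sim).argsort(axis=1)  # candidates sorted best-first per query
    # Rank of the ground-truth candidate for each query.
    gt_rank = (order == np.arange(sim.shape[0])[:, None]).argmax(axis=1)
    return {f"R@{k}": 100.0 * (gt_rank < k).mean() for k in ks}

# MR = mean of the six recalls (text-to-image and image-to-text, R@1/5/10).
```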
## Calculating Image-Text Similarity with SkyCLIP
```py
from PIL import Image
import requests
import torch
from transformers import BertTokenizer
from transformers import CLIPProcessor, CLIPModel, CLIPTextModel
import numpy as np

query_texts = ['一个人', '一辆汽车', '两个男人', '两个女人']  # Input prompts; replace with any text you like.

# Load the SkyCLIP Chinese-English bilingual text_encoder.
text_tokenizer = BertTokenizer.from_pretrained("./tokenizer")
text_encoder = CLIPTextModel.from_pretrained("./text_encoder").eval()
text = text_tokenizer(query_texts, return_tensors='pt', padding=True)['input_ids']

url = "http://images.cocodataset.org/val2017/000000040083.jpg"  # Replace with any image URL.

# Load the OpenAI-CLIP image encoder.
clip_model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
clip_text_proj = clip_model.text_projection
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
image = processor(images=Image.open(requests.get(url, stream=True).raw), return_tensors="pt")

with torch.no_grad():
    image_features = clip_model.get_image_features(**image)
    text_features = text_encoder(text)[0]
    # The sep_token corresponds to openai-clip's eot_token.
    sep_index = torch.nonzero(text == text_tokenizer.sep_token_id)
    text_features = text_features[torch.arange(text.shape[0]), sep_index[:, 1]]
    # Apply the text projection matrix.
    text_features = clip_text_proj(text_features)
    image_features = image_features / image_features.norm(dim=1, keepdim=True)
    text_features = text_features / text_features.norm(dim=1, keepdim=True)
    # Compute cosine similarity; logit_scale is the scale factor.
    logit_scale = clip_model.logit_scale.exp()
    logits_per_image = logit_scale * image_features @ text_features.t()
    logits_per_text = logits_per_image.t()
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print(np.around(probs, 3))
```
## Diffusion Model
We use a filtered LAION dataset as the training data and prepend 'sai-v1 art' as a tag to each caption so that the model learns the desired style and quality more quickly. Training starts from the [stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint and runs on 16 A100 GPUs for 50 hours. The current model is still being optimized, and more stable model updates will follow.
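As a minimal sketch of the caption-tagging step (the function name here is hypothetical; SkyPaint's actual training pipeline is not included in this repository):
```py
# Hypothetical preprocessing step: prepend the style tag to every caption
# before fine-tuning, mirroring how prompts are tagged at inference time.
def tag_caption(example: dict) -> dict:
    example["text"] = "sai-v1 art, " + example["text"]
    return example

print(tag_caption({"text": "a castle by the sea at sunset"}))
# {'text': 'sai-v1 art, a castle by the sea at sunset'}
```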
# License
- [CreativeML Open RAIL-M](LICENSE-MODEL)
# Join the developer group
[Scan the QR code with WeChat](https://user-images.githubusercontent.com/120169448/211474310-88048d66-bb14-4f9a-9137-91e358f7f1e3.jpg) to join the SkyPaint developer group and chat with other developers and Mini Program users.