Artiprocher committed • Commit 736bdcb • Parent(s): ca9b1e9

add model

Files changed:
- .ipynb_checkpoints/README-checkpoint.md +65 -0
- README.md +62 -0
- config.json +32 -0
- generation_config.json +7 -0
- pytorch_model.bin +3 -0
- special_tokens_map.json +7 -0
- tokenizer.json +0 -0
- tokenizer_config.json +11 -0
.ipynb_checkpoints/README-checkpoint.md
ADDED
@@ -0,0 +1,65 @@
(The content of this Jupyter checkpoint file is identical to the final README.md shown below.)
README.md
CHANGED
@@ -1,3 +1,65 @@
---
license: apache-2.0
tags:
- pytorch
- transformers
- text-generation
---

# BeautifulPrompt

## 简介 Brief Introduction

我们开源了一个自动Prompt生成模型,您可以直接输入一个极其简单的Prompt,就可以得到经过语言模型优化过的Prompt,帮助您更简单地生成高颜值图像。

We release an automatic prompt-generation model: enter an extremely simple prompt and get back a prompt optimized by the language model, helping you generate more visually appealing images with less effort.

* GitHub: [EasyNLP](https://github.com/alibaba/EasyNLP)

## 使用 Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
model = AutoModelForCausalLM.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd').eval().cuda()

raw_prompt = '1 girl'
# Wrap the raw prompt in the instruction template the model expects.
input_text = f'Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: {raw_prompt}\nOutput:'
input_ids = tokenizer.encode(input_text, return_tensors='pt').cuda()

# Sample several candidate prompts.
outputs = model.generate(
    input_ids,
    max_length=384,
    do_sample=True,
    temperature=1.0,
    top_k=50,
    top_p=0.95,
    repetition_penalty=1.2,
    num_return_sequences=5)

# Keep only the newly generated tokens and strip surrounding whitespace.
prompts = tokenizer.batch_decode(outputs[:, input_ids.size(1):], skip_special_tokens=True)
prompts = [p.strip() for p in prompts]
print(prompts)
```

## 作品展示 Gallery

| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: taylor swift, country, golden, fearless,wavehair | prompt: portrait of taylor swift as a beautiful woman, long hair, country, golden ratio, intricate, symmetrical, cinematic lighting, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration |
| ![](example1.png) | ![](example2.png) |

| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: A majestic sailing ship | prompt: a massive sailing ship, epic, cinematic, artstation, greg rutkowski, james gurney, sparth |
| ![](example3.png) | ![](example4.png) |

## 使用须知 Notice for Use

使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。

If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
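The rewritten prompts are meant to be passed on to a text-to-image model. Below is a minimal, hypothetical sketch of that downstream step using `diffusers`; the Stable Diffusion checkpoint id (`runwayml/stable-diffusion-v1-5`) and the hard-coded prompt (taken from the gallery above) are assumptions, not part of this repository.

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical downstream step: render one BeautifulPrompt output with a
# Stable Diffusion checkpoint. The checkpoint id below is an assumption and
# is not part of this repository.
pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16
).to('cuda')

# Example output prompt taken from the gallery above; in practice you would
# use one of the strings in `prompts` from the usage snippet.
prompt = 'a massive sailing ship, epic, cinematic, artstation, greg rutkowski, james gurney, sparth'
image = pipe(prompt).images[0]
image.save('sailing_ship.png')
```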
config.json
ADDED
@@ -0,0 +1,32 @@
```json
{
  "_name_or_path": "alibaba-pai/pai-bloom-1b1-text2prompt-sd",
  "apply_residual_connection_post_layernorm": false,
  "architectures": [
    "BloomForCausalLM"
  ],
  "attention_dropout": 0.0,
  "attention_softmax_in_fp32": true,
  "bias_dropout_fusion": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_dropout": 0.0,
  "hidden_size": 1536,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "masked_softmax_fusion": true,
  "model_type": "bloom",
  "n_head": 16,
  "n_inner": null,
  "n_layer": 24,
  "offset_alibi": 100,
  "pad_token_id": 3,
  "pretraining_tp": 1,
  "skip_bias_add": true,
  "skip_bias_add_qkv": false,
  "slow_but_exact": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.29.2",
  "unk_token_id": 0,
  "use_cache": true,
  "vocab_size": 250880
}
```
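The hyperparameters above describe a 24-layer BLOOM decoder with hidden size 1536, i.e. the bloom-1b1 shape. A quick sketch (assuming only `transformers` is installed and the Hub is reachable) for inspecting the config and checking the size it implies without downloading the 2.1 GB weights:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Inspect the BLOOM hyperparameters without downloading the trained weights.
config = AutoConfig.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
print(config.model_type, config.hidden_size, config.n_layer, config.n_head)

# Build a randomly initialised model with the same shape to check the size
# implied by the config (roughly 1.1B parameters for these settings).
model = AutoModelForCausalLM.from_config(config)
print(f'{sum(p.numel() for p in model.parameters()) / 1e9:.2f}B parameters')
```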
generation_config.json
ADDED
@@ -0,0 +1,7 @@
```json
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 3,
  "transformers_version": "4.29.2"
}
```
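These values are the defaults that `model.generate()` falls back to when the caller does not override them (the sampling arguments in the README snippet override the decoding behaviour but still use these token ids). A small sketch, assuming `transformers` is installed:

```python
from transformers import GenerationConfig

# Defaults applied by model.generate() unless explicitly overridden.
gen_config = GenerationConfig.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
print(gen_config.bos_token_id, gen_config.eos_token_id, gen_config.pad_token_id)
```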
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
```
version https://git-lfs.github.com/spec/v1
oid sha256:6c63faf4cb0a99a8cac144fb3faaeabb0c053b33d09c9e2d5ebee36440b8d506
size 2140165151
```
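The three lines above are a Git LFS pointer rather than the weights themselves; the ~2.1 GB binary lives on the LFS backend. A minimal sketch (assuming `huggingface_hub` is installed) that resolves the pointer to the actual file:

```python
from huggingface_hub import hf_hub_download

# Downloads (and caches) the real weights file that the LFS pointer refers to.
weights_path = hf_hub_download(
    repo_id='alibaba-pai/pai-bloom-1b1-text2prompt-sd',
    filename='pytorch_model.bin',
)
print(weights_path)
```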
special_tokens_map.json
ADDED
@@ -0,0 +1,7 @@
```json
{
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "</s>",
  "sep_token": "<sep>",
  "unk_token": "<unk>"
}
```
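These named tokens correspond to the token ids declared in config.json (unk=0, bos=1, eos=2, pad=3). A small sketch, assuming `transformers` is installed, for checking the mapping; the ids you actually see depend on the vocabulary shipped in tokenizer.json:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
# Print each declared special token and the id it resolves to; these should
# line up with the *_token_id fields in config.json.
for name, token in tok.special_tokens_map.items():
    print(name, token, tok.convert_tokens_to_ids(token))
```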
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1,11 @@
```json
{
  "add_prefix_space": false,
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": "<pad>",
  "padding_side": "left",
  "tokenizer_class": "BloomTokenizer",
  "unk_token": "<unk>"
}
```
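Note that `padding_side` is `left`, which is what a decoder-only model like BLOOM needs when generating from a padded batch: each prompt must end immediately before the position where new tokens are produced. A small sketch, assuming `transformers` is installed, using the raw prompts from the gallery above:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
# Left padding keeps every prompt flush against the generation position.
batch = tok(['1 girl', 'A majestic sailing ship'], return_tensors='pt', padding=True)
print(tok.padding_side, batch['input_ids'].shape)
```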