Guizmus committed on
Commit 19bec55
Parent: cfc07e0

Upload Tardisfusion-v2.ckpt (#2)


- Upload Tardisfusion-v2.ckpt (434c94c2bc7d8878ea17944ff66e92f951e62b05)
- Upload 11 files (f45a2a88ab35dea411bcfd2e0ae678874a66678b)
- Rename preprocessor_config.json to feature_extractor/preprocessor_config.json (bf6dc0b2f312dad2786e414ffba6762c3986adb1)
- Rename scheduler_config.json to scheduler/scheduler_config.json (16a5d827b3a399e6343fb30d8e16b366c7ce3d99)
- Rename config.json to text_encoder/config.json (24761c21fa1499128e699ac037c229dbbe349980)
- Delete pytorch_model.bin (088400d54777595eebad563124b42b719b778c35)
- Upload pytorch_model.bin (7431fc3039b5377d54b56b18cf9d27c6378326fb)
- Delete merges.txt (cb0e7f8817c81eb4856902c97cf74e6cbc7b2d52)
- Rename special_tokens_map.json to tokenizer/special_tokens_map.json (67fe7e7c28e460361057cca7749421df91eb3ca4)
- Rename tokenizer_config.json to tokenizer/tokenizer_config.json (5b212972e870301d3bafc361146ff8887c665a59)
- Delete vocab.json (c046fbb37da6b0ecef0fbf6d62d357f8a53b48f4)
- Upload 2 files (15ffa6999f57654913388f3dfd4c808e8a35e8e6)
- Delete diffusion_pytorch_model.bin (4833fec746ed4c536d377030d4b6b22e128f5e59)
- Create tokenizer/config.json (47162f072f31babf62120eeced4745e25b44ae78)
- Rename tokenizer/config.json to unet/config.json (d74b53381b8687fcfde14f0f43ccbd433f06665a)
- Upload diffusion_pytorch_model.bin (2247339202cf42769a05be75a5da7fa74f990928)
- Create vae/config.json (78aa3c5a1a11e9bfd246143f8492fc77b8efef7a)
- Upload diffusion_pytorch_model.bin (64f70f74ba5283011907696b82b001a1fddc4a5a)
- Update README.md (998561d765d73f7ccc10948a4dd5e2945cbc49db)
- Upload showcase.jpg (cb9ac46015d39ccbfe464a9ece52f81f6bae9ad6)

.gitattributes CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  Tardisfusion-v1.ckpt filter=lfs diff=lfs merge=lfs -text
+ Tardisfusion-v2.ckpt filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,13 +1,37 @@
  ---
- license: openrail
+ license: creativeml-openrail-m
  ---
  
- This model intends to replicate the generic style of the interior of a TARDIS. The first dataset (82 pictures) focuses on the console a little too much, and 2k steps of training only reinforced what was already in the base model, RunwayML 1.5.
- All Doctor Who seasons were used, with no distinction in the prompt.
- 
- * Recommended sampling methods: DPM2 a Karras, Euler a
- * Activation token: TardisRoom style
- * Best used on img2img
- * [download link](https://huggingface.co/Guizmus/Tardisfusion/resolve/main/Tardisfusion-v1.ckpt)
- 
- This first version is still rough; better ones should come soon.
+ # TARDISfusion
+ <p>
+ <img src="https://huggingface.co/Guizmus/Tardisfusion/raw/main/showcase.jpg"/><br/>
+ This is a Dreamboothed Stable Diffusion model trained on 3 style concepts.<br/>
+ The total dataset is made of 209 pictures, and training was done on RunwayML 1.5 for 2500 steps with the new VAE.
+ The following tokens will add their corresponding concepts:<br/>
+ <ul>
+ <li><b>Classic Tardis style</b>: the architectural and furniture style seen inside the TARDIS in the series before the reboot.</li>
+ <li><b>Modern Tardis style</b>: the architectural and furniture style seen inside the TARDIS in the series after the reboot.</li>
+ <li><b>Tardis Box style</b>: a style made from the TARDIS seen from the outside. Summons a TARDIS anywhere.</li>
+ </ul>
+ </p>
+ 
+ ## 🧨 Diffusers
+ 
+ This model can be used just like any other Stable Diffusion model. For more information,
+ please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
+ 
+ You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
+ 
+ ```python
+ from diffusers import StableDiffusionPipeline
+ import torch
+ 
+ model_id = "Guizmus/Tardisfusion"
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+ 
+ prompt = "a bedroom, Classic Tardis style"
+ image = pipe(prompt).images[0]
+ 
+ image.save("./TARDIS Style.png")
+ ```
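The v1 card recommended img2img use, which this release does not restate. As a hedged sketch rather than the author's documented workflow (the input file name, `strength`, and `guidance_scale` are placeholder assumptions, and older diffusers releases call the image argument `init_image`):

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Sketch only: restyle an existing photo with one of the trained tokens.
# "room.jpg" is a placeholder input; strength/guidance values are untuned guesses.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "Guizmus/Tardisfusion", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("room.jpg").convert("RGB").resize((512, 512))
image = pipe(
    prompt="a bedroom, Modern Tardis style",
    image=init_image,  # named init_image in older diffusers releases
    strength=0.6,
    guidance_scale=7.5,
).images[0]
image.save("tardis_img2img.png")
```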
Tardisfusion-v2.ckpt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d7dd51bc640826a4ed8b724153d548270e4a2fc725fa08a110cc7b4e300c6b62
+ size 2132871991
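The checkpoint itself lives in LFS; the pointer above carries its SHA-256 oid and size, which is enough to verify a download. A minimal sketch, assuming the file was saved under its repo name:

```python
import hashlib

# Compare a downloaded file against the oid recorded in its git-lfs pointer.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

expected = "d7dd51bc640826a4ed8b724153d548270e4a2fc725fa08a110cc7b4e300c6b62"
assert sha256_of("Tardisfusion-v2.ckpt") == expected, "checksum mismatch"
```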
args.json ADDED
@@ -0,0 +1,70 @@
+ {
+   "pretrained_model_name_or_path": "F:/AI/Diffusers/stable-diffusion-v1-5",
+   "pretrained_vae_name_or_path": null,
+   "revision": "fp16",
+   "tokenizer_name": null,
+   "instance_data_dir": null,
+   "class_data_dir": null,
+   "instance_prompt": null,
+   "class_prompt": null,
+   "save_sample_prompt": "a bedroom,modern tardis style",
+   "save_sample_negative_prompt": null,
+   "n_save_sample": 4,
+   "save_guidance_scale": 7.5,
+   "save_infer_steps": 50,
+   "pad_tokens": false,
+   "with_prior_preservation": true,
+   "prior_loss_weight": 1.0,
+   "num_class_images": 100,
+   "output_dir": "F:/AI/Outputs/Dreambooth/Tardisfusion",
+   "seed": 42,
+   "resolution": 512,
+   "center_crop": false,
+   "train_text_encoder": true,
+   "train_batch_size": 1,
+   "sample_batch_size": 8,
+   "num_train_epochs": 9,
+   "max_train_steps": 2500,
+   "gradient_accumulation_steps": 1,
+   "gradient_checkpointing": true,
+   "learning_rate": 1e-06,
+   "scale_lr": false,
+   "lr_scheduler": "constant",
+   "lr_warmup_steps": 0,
+   "use_8bit_adam": false,
+   "adam_beta1": 0.9,
+   "adam_beta2": 0.999,
+   "adam_weight_decay": 0.01,
+   "adam_epsilon": 1e-08,
+   "max_grad_norm": 1.0,
+   "push_to_hub": false,
+   "hub_token": null,
+   "hub_model_id": null,
+   "logging_dir": "logs",
+   "log_interval": 10,
+   "save_interval": 500,
+   "save_min_steps": 0,
+   "mixed_precision": "no",
+   "not_cache_latents": false,
+   "local_rank": -1,
+   "concepts_list": [
+     {
+       "instance_prompt": "Classic Tardis style",
+       "instance_data_dir": "F:/AI/Datasets/DrWho/ClassicTardis",
+       "class_prompt": "black and white style",
+       "class_data_dir": "F:/AI/Datasets/Regularisation/BlackNWhiteStyle"
+     },
+     {
+       "instance_prompt": "Modern Tardis style",
+       "instance_data_dir": "F:/AI/Datasets/DrWho/ModernTardis",
+       "class_prompt": "strange style",
+       "class_data_dir": "F:/AI/Datasets/Regularisation/StrangeStyle"
+     },
+     {
+       "instance_prompt": "Tardis Box style",
+       "instance_data_dir": "F:/AI/Datasets/DrWho/ExteriorTardis",
+       "class_prompt": "a phonebooth",
+       "class_data_dir": "F:/AI/Datasets/Regularisation/APhonebooth"
+     }
+   ]
+ }
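These are the Dreambooth arguments for the run described in the README: prior preservation enabled, 2500 steps at a constant 1e-06 learning rate, with each of the three style concepts regularised against its own class prompt. A minimal sketch of reading the file back (assumes args.json sits in the working directory):

```python
import json

# Inspect the three trained concepts and their prior-preservation classes.
with open("args.json") as f:
    args = json.load(f)

print(f"steps={args['max_train_steps']}, lr={args['learning_rate']}")
for concept in args["concepts_list"]:
    print(f"{concept['instance_prompt']!r} regularised against {concept['class_prompt']!r}")
```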
feature_extractor/preprocessor_config.json ADDED
@@ -0,0 +1,20 @@
+ {
+   "crop_size": 224,
+   "do_center_crop": true,
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_resize": true,
+   "feature_extractor_type": "CLIPFeatureExtractor",
+   "image_mean": [
+     0.48145466,
+     0.4578275,
+     0.40821073
+   ],
+   "image_std": [
+     0.26862954,
+     0.26130258,
+     0.27577711
+   ],
+   "resample": 3,
+   "size": 224
+ }
model_index.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_class_name": "StableDiffusionPipeline",
+   "_diffusers_version": "0.7.0.dev0",
+   "feature_extractor": [
+     "transformers",
+     "CLIPFeatureExtractor"
+   ],
+   "scheduler": [
+     "diffusers",
+     "DDIMScheduler"
+   ],
+   "text_encoder": [
+     "transformers",
+     "CLIPTextModel"
+   ],
+   "tokenizer": [
+     "transformers",
+     "CLIPTokenizer"
+   ],
+   "unet": [
+     "diffusers",
+     "UNet2DConditionModel"
+   ],
+   "vae": [
+     "diffusers",
+     "AutoencoderKL"
+   ]
+ }
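model_index.json is the pipeline manifest: each entry names the (library, class) pair used to load the matching subfolder, which is what lets StableDiffusionPipeline assemble the files below. As a sketch using the standard diffusers/transformers loaders (nothing here is specific to this repo beyond its id), components can also be loaded individually:

```python
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel

repo = "Guizmus/Tardisfusion"

# Each component loads from the subfolder named in model_index.json.
tokenizer = CLIPTokenizer.from_pretrained(repo, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(repo, subfolder="text_encoder")
unet = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
vae = AutoencoderKL.from_pretrained(repo, subfolder="vae")
```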
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "_class_name": "DDIMScheduler",
+   "_diffusers_version": "0.7.0.dev0",
+   "beta_end": 0.012,
+   "beta_schedule": "scaled_linear",
+   "beta_start": 0.00085,
+   "clip_sample": false,
+   "num_train_timesteps": 1000,
+   "set_alpha_to_one": false,
+   "steps_offset": 1,
+   "trained_betas": null
+ }
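The repo pins the DDIM configuration above, but diffusers lets you rebuild a different sampler from the same config. A hedged sketch (EulerAncestralDiscreteScheduler is an assumption standing in for the "Euler a" recommendation in the v1 card, and it postdates the 0.7.0.dev0 version recorded in these files):

```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("Guizmus/Tardisfusion")

# Rebuild an Euler-ancestral sampler from the repo's scheduler config and swap it in.
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
```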
showcase.jpg ADDED
text_encoder/config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "F:/AI/Diffusers/stable-diffusion-v1-5",
+   "architectures": [
+     "CLIPTextModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "dropout": 0.0,
+   "eos_token_id": 2,
+   "hidden_act": "quick_gelu",
+   "hidden_size": 768,
+   "initializer_factor": 1.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 77,
+   "model_type": "clip_text_model",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "projection_dim": 768,
+   "torch_dtype": "float32",
+   "transformers_version": "4.24.0",
+   "vocab_size": 49408
+ }
text_encoder/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ef8c57f31a2369b3596623bd9eb4c9b9eb7247a3f8b889f1a758f49d45c2e949
+ size 492309793
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>",
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "do_lower_case": true,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "model_max_length": 77,
+   "name_or_path": "F:/AI/Diffusers/stable-diffusion-v1-5\\tokenizer",
+   "pad_token": "<|endoftext|>",
+   "special_tokens_map_file": "./special_tokens_map.json",
+   "tokenizer_class": "CLIPTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
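Since `model_max_length` is 77 and `do_lower_case` is true, prompts are lower-cased and anything past 77 CLIP tokens is truncated, which bounds how much styling text can accompany an activation token. A small sketch for checking how much of a prompt survives (standard transformers API):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("Guizmus/Tardisfusion", subfolder="tokenizer")

# Count tokens (including <|startoftext|>/<|endoftext|>) against the 77-token limit.
ids = tokenizer("a bedroom, Classic Tardis style").input_ids
print(len(ids), "of", tokenizer.model_max_length)
```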
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
unet/config.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "_class_name": "UNet2DConditionModel",
+   "_diffusers_version": "0.7.0.dev0",
+   "_name_or_path": "F:/AI/Diffusers/stable-diffusion-v1-5",
+   "act_fn": "silu",
+   "attention_head_dim": 8,
+   "block_out_channels": [
+     320,
+     640,
+     1280,
+     1280
+   ],
+   "center_input_sample": false,
+   "cross_attention_dim": 768,
+   "down_block_types": [
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "CrossAttnDownBlock2D",
+     "DownBlock2D"
+   ],
+   "downsample_padding": 1,
+   "flip_sin_to_cos": true,
+   "freq_shift": 0,
+   "in_channels": 4,
+   "layers_per_block": 2,
+   "mid_block_scale_factor": 1,
+   "norm_eps": 1e-05,
+   "norm_num_groups": 32,
+   "out_channels": 4,
+   "sample_size": 64,
+   "up_block_types": [
+     "UpBlock2D",
+     "CrossAttnUpBlock2D",
+     "CrossAttnUpBlock2D",
+     "CrossAttnUpBlock2D"
+   ]
+ }
unet/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ce9cdc6acb9b8792a9f2cea06090107f1b2e2ec8892ce56fcad86381ce0aa9af
+ size 3438375973
vae/config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "_class_name": "AutoencoderKL",
+   "_diffusers_version": "0.7.0.dev0",
+   "_name_or_path": "F:/AI/Diffusers/stable-diffusion-v1-5",
+   "act_fn": "silu",
+   "block_out_channels": [
+     128,
+     256,
+     512,
+     512
+   ],
+   "down_block_types": [
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D",
+     "DownEncoderBlock2D"
+   ],
+   "in_channels": 3,
+   "latent_channels": 4,
+   "layers_per_block": 2,
+   "norm_num_groups": 32,
+   "out_channels": 3,
+   "sample_size": 512,
+   "up_block_types": [
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D",
+     "UpDecoderBlock2D"
+   ]
+ }
vae/diffusion_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36bb8e1b54aba3a0914eb35fba13dcb107e9f18d379d1df2158732cd4bf56a94
+ size 334711857