|
--- |
|
license: creativeml-openrail-m |
|
tags: |
|
- pytorch |
|
- diffusers |
|
- stable-diffusion |
|
- text-to-image |
|
- diffusion-models-class |
|
- dreambooth-hackathon |
|
- animal |
|
widget: |
|
- text: a photo of a zzelda cat in space |
|
--- |
|
|
|
# DreamBooth model for the zzelda concept, trained by Sanderbaduk on a dataset of cats
|
|
|
This is a Stable Diffusion model fine-tuned with DreamBooth on pictures of my mum's cat "Zelda". It can be used by including the phrase 'zzelda cat' in a prompt.
|
|
|
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! |
|
|
|
<table> |
|
<tr> |
|
<td>One of the images used for fine-tuning<br>"a photo of zzelda cat on a chair"</td>
|
<td>One of the images generated by the model<br>"a photo of zzelda cat in space"</td> |
|
</tr> |
|
<tr> |
|
<td> |
|
<img src="http://i.imgur.com/zFOzQtf.jpg" style="max-height:400px"> |
|
</td> |
|
<td> |
|
<img src="http://i.imgur.com/12Nilhg.png" style="max-height:400px"> |
|
</td> |
|
</tr> |
|
</table> |
|
|
|
## Description |
|
|
|
|
|
This is a Stable Diffusion model fine-tuned on images of my mum's cat Zelda for the animal theme. |
|
|
|
To experiment a bit, I used a custom prompt for each image based on its file name (sketched below). This works, but did not seem to make much of a difference.
|
After running into CUDA issues, I trained the model on CPU, which took around 2 hours on 32 cores.
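
For illustration only: the directory, file names, and helper below are hypothetical rather than the exact script used for training, but they show the idea of deriving a per-image prompt from a descriptive file name.

```python
from pathlib import Path

# Hypothetical helper: turn a descriptive file name such as
# "zzelda_cat_on_a_chair.jpg" into "a photo of zzelda cat on a chair".
def prompt_from_filename(path: Path) -> str:
    return "a photo of " + path.stem.replace("_", " ")

# Build one prompt per training image (the directory name is an assumption).
instance_dir = Path("instance_images")
prompts = {p.name: prompt_from_filename(p) for p in instance_dir.glob("*.jpg")}
print(prompts)
```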
|
|
|
The model works noticeably better locally than in the inference widget, where it tends to take a few more tries to get the right cat.
|
|
|
## Usage |
|
|
|
```python |
|
from diffusers import StableDiffusionPipeline |
|
|
|
# Load the fine-tuned DreamBooth weights from the Hub
pipeline = StableDiffusionPipeline.from_pretrained('Sanderbaduk/zelda-the-cat')
|
# The prompt should include the concept phrase 'zzelda cat'
image = pipeline("a photo of zzelda cat in space").images[0]
|
image |
|
``` |
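
If a GPU is available, loading the weights in half precision and moving the pipeline onto it speeds generation up considerably. A minimal sketch (the prompt and output file name are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in fp16 and move to the GPU for faster sampling
pipeline = StableDiffusionPipeline.from_pretrained(
    "Sanderbaduk/zelda-the-cat", torch_dtype=torch.float16
).to("cuda")

image = pipeline("a photo of zzelda cat in space").images[0]
image.save("zzelda_in_space.png")
```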
|
|