Srimanth Agastyaraju committed
Commit 42a9d1a
1 Parent(s): 7c6ffc8
Update README 2
Browse files
- .ipynb_checkpoints/README-checkpoint.md +18 -4
- .ipynb_checkpoints/app-checkpoint.py +1 -1
- README.md +18 -4
- app.py +1 -1
- results/tom_cruise_plain/.ipynb_checkpoints/out_11-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_12-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_13-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_15-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_16-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_17-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_19-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_2-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_21-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_23-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_24-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_25-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_27-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_28-checkpoint.png +0 -0
- results/tom_cruise_plain/.ipynb_checkpoints/out_29-checkpoint.png +0 -0
.ipynb_checkpoints/README-checkpoint.md
CHANGED
Same changes as README.md below (Jupyter auto-save checkpoint copy, +18 -4).
.ipynb_checkpoints/app-checkpoint.py
CHANGED
Same changes as app.py below (Jupyter auto-save checkpoint copy, +1 -1).
README.md
CHANGED
@@ -14,6 +14,7 @@ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-
 # Stable diffusion finetune using LoRA

 ## HuggingFace Spaces URL: https://huggingface.co/spaces/asrimanth/person-thumbs-up
+## Please note that the app on spaces is very slow due to compute constraints. For good results, please try locally.

 ## Approach

@@ -43,8 +44,8 @@ Number of epochs : 50-60
 Augmentations used : Center crop, Random Flip
 Gradient accumulation steps : Tried 1, 3, and 4 for different experiments. 4 gave decent results.

-text2image_fine-tune
-**https://wandb.ai/asrimanth/text2image_fine-tune**
+text2image_fine-tune :
+**wandb dashboard : https://wandb.ai/asrimanth/text2image_fine-tune**
 **Model card for asrimanth/person-thumbs-up-lora: https://huggingface.co/asrimanth/person-thumbs-up-lora**
 **Prompt: ```<tom_cruise> #thumbsup```**

@@ -52,12 +53,12 @@ Deployed models:

 When the above experiment failed, I had to try different datasets. One of them was "tom cruise".

-srimanth-thumbs-up-lora-plain
+srimanth-thumbs-up-lora-plain : We use the plain dataset with srimanth mentioned above.
 **wandb link: https://wandb.ai/asrimanth/srimanth-thumbs-up-lora-plain**
 **Model card for srimanth-thumbs-up-lora-plain: https://huggingface.co/asrimanth/srimanth-thumbs-up-lora-plain**
 **Prompt: ```srimanth thumbs up```**

-person-thumbs-up-plain-lora wandb
+person-thumbs-up-plain-lora wandb : We use the
 **wandb link: https://wandb.ai/asrimanth/person-thumbs-up-plain-lora**
 **Model card for asrimanth/person-thumbs-up-plain-lora: https://huggingface.co/asrimanth/person-thumbs-up-plain-lora**
 **Prompt: ```tom cruise thumbs up```**
@@ -77,5 +78,18 @@ person-thumbs-up-lora-no-cap wandb dashboard:

 ### Deployment

+To run inference locally, choose a model and run the command:
+```
+python3 inference.py
+```
+
+To run the streamlit app locally, run the command:
+```
+streamlit run app.py
+```
+
 I chose streamlit to deploy the application on HuggingFace spaces. It was developer friendly and the app logic can be found in app.py
 Streamlit app would be a great choice for an MVP.
+AWS sagemaker would be a good choice for deploying models, since it supports huggingface models with minimal friction.
+A docker container orchestrated in a kubernetes cluster would be ideal.
+In practice, evaluation of models in real-time would let us know if there is model drift and retrain accordingly.
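The new Deployment section points the reader at a local python3 inference.py run, but that script is not touched by this commit. As a rough illustration only, here is a minimal sketch of what such a script could look like for one of the published LoRA checkpoints, assuming the weights were produced with the diffusers text-to-image LoRA training example; the base model name, step count, and output file name are assumptions, not taken from the repository.

```
# Hypothetical local-inference sketch (NOT the repository's inference.py).
# Assumes a diffusers-compatible LoRA checkpoint such as
# asrimanth/person-thumbs-up-plain-lora on top of an assumed SD 1.5 base model.
import torch
from diffusers import StableDiffusionPipeline

BASE_MODEL = "runwayml/stable-diffusion-v1-5"        # assumption
LORA_REPO = "asrimanth/person-thumbs-up-plain-lora"  # from the model card above

pipe = StableDiffusionPipeline.from_pretrained(BASE_MODEL)
pipe.load_lora_weights(LORA_REPO)                    # attach the LoRA weights
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# Same prompt and default seed that the Space's app exposes.
generator = torch.Generator(device=pipe.device).manual_seed(25)
image = pipe("tom cruise thumbs up", num_inference_steps=30, generator=generator).images[0]
image.save("out.png")
```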
app.py
CHANGED
@@ -71,7 +71,7 @@ if __name__ == "__main__":
     with col1_inp:
         n_images = int(st.number_input("Enter the number of images", value=3, min_value=0, max_value=50))
     with col2_inp:
-        n_inference_steps = int(st.number_input("Enter the number of inference steps", value=
+        n_inference_steps = int(st.number_input("Enter the number of inference steps", value=5, min_value=0))
     with col_3_inp:
         seed_input = int(st.number_input("Enter the seed (default=25)", value=25, min_value=0))
     submitted = st.form_submit_button("Predict")
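The one-line fix above completes a st.number_input call that was left truncated (it now defaults to 5 inference steps). For context, a self-contained sketch of the three-column form those widgets sit in; only the three number_input calls and the submit button come from the diff, while the form key, column setup, and the final st.write are assumptions for illustration.

```
# Standalone sketch of the input form around the fixed line; form key,
# column creation, and the trailing st.write are assumed, not from app.py.
import streamlit as st

with st.form("prediction_form"):
    col1_inp, col2_inp, col_3_inp = st.columns(3)
    with col1_inp:
        n_images = int(st.number_input("Enter the number of images", value=3, min_value=0, max_value=50))
    with col2_inp:
        # This is the call the commit completes (default of 5 inference steps).
        n_inference_steps = int(st.number_input("Enter the number of inference steps", value=5, min_value=0))
    with col_3_inp:
        seed_input = int(st.number_input("Enter the seed (default=25)", value=25, min_value=0))
    submitted = st.form_submit_button("Predict")

if submitted:
    st.write(f"Would generate {n_images} image(s), {n_inference_steps} steps, seed {seed_input}.")
```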
results/tom_cruise_plain/.ipynb_checkpoints/out_11-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_12-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_13-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_15-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_16-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_17-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_19-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_2-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_21-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_23-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_24-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_25-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_27-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_28-checkpoint.png
ADDED
results/tom_cruise_plain/.ipynb_checkpoints/out_29-checkpoint.png
ADDED