ZeroCool94 committed 86a2f21 (1 parent: 5de51ae)
Update README.md
README.md CHANGED
@@ -105,7 +105,7 @@ The model was trained on the following dataset:
 
 **Hardware and others**
 - **Hardware:** 1 x Nvidia RTX 3050 8GB GPU
-- **Hours Trained:**
+- **Hours Trained:** 758 hours approximately.
 - **Optimizer:** AdamW
 - **Adam Beta 1**: 0.9
 - **Adam Beta 2**: 0.999
@@ -114,11 +114,13 @@ The model was trained on the following dataset:
 - **Gradient Checkpointing**: True
 - **Gradient Accumulations**: 4
 - **Batch:** 1
-- **Learning
+- **Learning Rate:** 1e-7
+- **Learning Rate Scheduler:** cosine_with_restarts
+- **Learning Rate Warmup Steps:** 10,000
 - **Lora unet Learning Rate**: 1e-7
 - **Lora Text Encoder Learning Rate**: 1e-7
 - **Resolution**: 512 pixels
-- **Total Training Steps:**
+- **Total Training Steps:** 2,022,799
 
 Developed by: [ZeroCool94](https://github.com/ZeroCool940711) at [Sygil-Dev](https://github.com/Sygil-Dev/)
 
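For context, the hyperparameters added in this commit map naturally onto a diffusers-style LoRA fine-tuning setup. The sketch below is illustrative only and is not part of the commit or the model's actual training script; the parameter groups `unet_lora_params` and `text_encoder_lora_params` are hypothetical placeholders, and it assumes a standard `torch.optim.AdamW` optimizer together with `diffusers.optimization.get_scheduler`.

```python
# Minimal sketch: wiring the README's hyperparameters into an optimizer and
# LR scheduler. Placeholder parameters stand in for the real LoRA weights.
import torch
from diffusers.optimization import get_scheduler

# Hypothetical LoRA parameter groups (placeholders, not the real model).
unet_lora_params = [torch.nn.Parameter(torch.zeros(4, 4))]
text_encoder_lora_params = [torch.nn.Parameter(torch.zeros(4, 4))]

optimizer = torch.optim.AdamW(
    [
        {"params": unet_lora_params, "lr": 1e-7},          # Lora unet Learning Rate
        {"params": text_encoder_lora_params, "lr": 1e-7},  # Lora Text Encoder Learning Rate
    ],
    lr=1e-7,             # Learning Rate
    betas=(0.9, 0.999),  # Adam Beta 1 / Adam Beta 2
)

lr_scheduler = get_scheduler(
    "cosine_with_restarts",        # Learning Rate Scheduler
    optimizer=optimizer,
    num_warmup_steps=10_000,       # Learning Rate Warmup Steps
    num_training_steps=2_022_799,  # Total Training Steps
)

# Remaining settings from the README, as plain constants.
gradient_accumulation_steps = 4  # Gradient Accumulations
train_batch_size = 1             # Batch
resolution = 512                 # Resolution (pixels)
```

With a batch size of 1 and 4 gradient accumulation steps, the effective batch size is 4, which is consistent with training on a single 8 GB RTX 3050.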