zaidmehdi committed
Commit ab4bbba • 1 Parent(s): 01d43bf

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +41 -5
README.md CHANGED
@@ -1,13 +1,49 @@
  ---
- title: Manga Colorizer
- emoji: 📈
  colorFrom: green
  colorTo: gray
  sdk: gradio
- sdk_version: 4.24.0
- app_file: app.py
  pinned: false
  license: mit
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
---
title: MangaColorizer
emoji: 🖌️🎨
colorFrom: green
colorTo: gray
sdk: gradio
app_file: main.py
pinned: false
license: mit
---

# MangaColorizer
This project colorizes grayscale images, in particular manga, comics, and drawings.
Given a black-and-white (grayscale) image, the model produces a colorized version of it.

[Link to the Demo](https://huggingface.co/spaces/zaidmehdi/manga-colorizer)

![Demo App](docs/images/demo_screenshot.png "Demo App")
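
A minimal sketch of how a Gradio Space like this one can expose the model (the project's actual `main.py` may differ; the `colorize` wrapper and the pre/post-processing here are assumptions):

```python
import gradio as gr
import numpy as np
import torch

def colorize(image: np.ndarray) -> np.ndarray:
    """Map a grayscale page (H, W) to an RGB image using the trained model described below."""
    x = torch.from_numpy(image).float().div(127.5).sub(1.0)  # scale pixel values to [-1, 1]
    x = x.unsqueeze(0).unsqueeze(0)                           # shape (1, 1, H, W)
    with torch.no_grad():
        # assumes `model` is the trained colorizer described below,
        # and that H and W are divisible by 4 so the output matches the input size
        y = model(x)
    y = y.squeeze(0).permute(1, 2, 0).add(1.0).mul(127.5).clamp(0, 255)
    return y.numpy().astype(np.uint8)

demo = gr.Interface(fn=colorize, inputs=gr.Image(image_mode="L"), outputs=gr.Image())
demo.launch()
```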

## How I built this project
The training data consists of 755 colored images taken from chapters of **Bleach, Dragon Ball Super, Naruto, One Piece and Attack on Titan**, with 215 additional images used for the validation set and 109 for the test set.

In the current version, I trained an encoder-decoder model from scratch with the following architecture:
```
MangaColorizer(
  (encoder): Sequential(
    (0): Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
    (5): ReLU(inplace=True)
  )
  (decoder): Sequential(
    (0): ConvTranspose2d(256, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (1): ReLU(inplace=True)
    (2): ConvTranspose2d(128, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1))
    (3): ReLU(inplace=True)
    (4): ConvTranspose2d(64, 3, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (5): Tanh()
  )
)
```
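
For readers who want to reproduce it, the printed summary above corresponds to a PyTorch module along these lines (a reconstruction from the summary; the class layout and forward pass are my assumptions, not the project's source):

```python
import torch.nn as nn

class MangaColorizer(nn.Module):
    """Encoder-decoder CNN matching the module summary printed above."""

    def __init__(self):
        super().__init__()
        # Encoder: single-channel (grayscale) input, downsampled twice with stride-2 convolutions.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Decoder: two transposed convolutions restore the resolution; Tanh keeps RGB outputs in [-1, 1].
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=3, stride=1, padding=1),
            nn.Tanh(),
        )

    def forward(self, x):
        # x: (B, 1, H, W) grayscale page -> (B, 3, H, W) colorized output
        return self.decoder(self.encoder(x))
```

The two stride-2 convolutions halve the spatial resolution twice and the two transposed convolutions double it twice, so inputs whose height and width are divisible by 4 come back out at their original size.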

The inputs to the model are the grayscale versions of the manga images, and the targets are the original colored versions.
The loss is the mean squared error (MSE) between the pixel values produced by the model and the target pixel values.
**Currently, the model achieves an MSE of 0.00859 on the test set.**
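
As an illustration of this objective, a training step could look roughly like the following (the optimizer, learning rate, and the [-1, 1] scaling of the images are assumptions, not details taken from the repository):

```python
import torch
import torch.nn as nn

model = MangaColorizer()                                    # architecture shown above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # assumed optimizer and learning rate
criterion = nn.MSELoss()                                    # pixel-wise MSE against the colored target

def training_step(grayscale: torch.Tensor, color: torch.Tensor) -> float:
    """grayscale: (B, 1, H, W), color: (B, 3, H, W), both assumed scaled to [-1, 1] to match the Tanh output."""
    optimizer.zero_grad()
    prediction = model(grayscale)         # (B, 3, H, W) colorized prediction
    loss = criterion(prediction, color)   # mean squared error over all pixel values
    loss.backward()
    optimizer.step()
    return loss.item()
```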

For more details, refer to the `docs` directory.