Spaces: Running on T4
praeclarumjj3 committed
Commit • f206f62
Parent(s): 7476063

Fix text

gradio_app.py +14 -7
gradio_app.py
CHANGED
@@ -187,12 +187,19 @@ def segment(path, task, dataset, backbone):
 
 title = "OneFormer: One Transformer to Rule Universal Image Segmentation"
 
-description = "<p style='color: #E0B941; font-size: 16px; font-weight: w600; text-align: center'> <a style='color: #E0B941;' href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a style='color: #E0B941;' href='https://arxiv.org/abs/2211.06220' target='_blank'>OneFormer: One Transformer to Rule Universal Image Segmentation</a> | <a style='color: #E0B941;' href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github</a></p>" \
-            + "<p style='color:royalblue; margin: 10px; font-size: 16px; font-weight: w400;'> \
-            [Note: Inference on CPU may take upto 2 minutes.] This is the official gradio demo for our paper <span style='color:#E0B941;'>OneFormer: One Transformer to Rule Universal Image Segmentation</span> To use <span style='color:#E0B941;'>OneFormer</span>: <br> \
-            (1) <span style='color:#E0B941;'>Upload an Image</span> or <span style='color:#E0B941;'> select a sample image from the examples</span> <br> \
-            (2) Select the value of the <span style='color:#E0B941;'>Task Token Input</span>. <br>\
-            (3) Select the <span style='color:#E0B941;'>Model</span> and <span style='color:#E0B941;'>Backbone</span>. </p>"
+description = "<p font-size: 16px; font-weight: w600; text-align: center'> <a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a> | <a href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github Repo</a></p>" \
+            + "<p font-size: 12px; text-align: center' margin: 10px font-weight: w300; text-align: center'> <a href='https://chrisjuniorli.github.io/' target='_blank'>Jiachen Li<sup>*</sup></a> <a href='https://www.linkedin.com/in/mtchiu/' target='_blank'>MangTik Chiu<sup>*</sup></a> <a href='https://alihassanijr.com/' target='_blank'>Ali Hassani</a> <a href='https://www.linkedin.com/in/nukich74/' target='_blank'>Nikita Orlov</a> <a href='https://www.humphreyshi.com/home' target='_blank'>Humphrey Shi</a></p>" \
+            + "<p text-align: center; font-size: 14px; font-weight: w300;'> \
+            OneFormer is the first multi-task universal image segmentation framework based on transformers. Our single OneFormer model achieves state-of-the-art performance across all three segmentation tasks with a single task-conditioned joint training process. OneFormer uses a task token to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.\
+            </p>" \
+            + "<p text-align: center; font-size: 14px; font-weight: w300;'> [Note: Inference on CPU may take upto 2 minutes. On a single RTX A6000 GPU, OneFormer is able to inference at more than 15 FPS.</p>"
+
+# description = "<p style='color: #E0B941; font-size: 16px; font-weight: w600; text-align: center'> <a style='color: #E0B941;' href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a style='color: #E0B941;' href='https://arxiv.org/abs/2211.06220' target='_blank'>OneFormer: One Transformer to Rule Universal Image Segmentation</a> | <a style='color: #E0B941;' href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github</a></p>" \
+#             + "<p style='color:royalblue; margin: 10px; font-size: 16px; font-weight: w400;'> \
+#             [Note: Inference on CPU may take upto 2 minutes.] This is the official gradio demo for our paper <span style='color:#E0B941;'>OneFormer: One Transformer to Rule Universal Image Segmentation</span> To use <span style='color:#E0B941;'>OneFormer</span>: <br> \
+#             (1) <span style='color:#E0B941;'>Upload an Image</span> or <span style='color:#E0B941;'> select a sample image from the examples</span> <br> \
+#             (2) Select the value of the <span style='color:#E0B941;'>Task Token Input</span>. <br>\
+#             (3) Select the <span style='color:#E0B941;'>Model</span> and <span style='color:#E0B941;'>Backbone</span>. </p>"
 
 # article =
 
@@ -202,7 +209,7 @@ setup_modules()
 
 gradio_inputs = [gr.Image(source="upload", tool=None, label="Input Image",type="filepath"),
                  gr.Radio(choices=["the task is panoptic" ,"the task is instance", "the task is semantic"], type="value", value="the task is panoptic", label="Task Token Input"),
-                 gr.Radio(choices=["COCO (133 classes)" ,"Cityscapes (19 classes)", "ADE20K (150 classes)"], type="value", value="
+                 gr.Radio(choices=["COCO (133 classes)" ,"Cityscapes (19 classes)", "ADE20K (150 classes)"], type="value", value="COCO (133 classes)", label="Model"),
                  gr.Radio(choices=["DiNAT-L" ,"Swin-L"], type="value", value="DiNAT-L", label="Backbone"),
                 ]
 gradio_outputs = [gr.Image(type="pil", label="Segmentation Overlay"), gr.Image(type="pil", label="Segmentation Map")]
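The new `description` in this commit is assembled from several string literals chained with `+` and backslash line continuations. One thing worth noting: the added lines open tags as `"<p font-size: ..."` without the `style='` prefix, so the CSS properties end up as malformed attributes rather than inline styles. A minimal sketch of the well-formed version of this string-building pattern (with abbreviated placeholder HTML, not the app's exact markup):

```python
# Sketch of the multi-line string construction used for `description`:
# backslash continuations plus `+` concatenation yield one single string.
# Note the `style='...'` wrapper, which the committed text omits.
description = "<p style='text-align: center'>" \
              + "<a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a>" \
              + "</p>" \
              + "<p style='text-align: center'>[Note: Inference on CPU may take up to 2 minutes.]</p>"

print(type(description).__name__)    # -> str
print("Project Page" in description) # -> True
```

Because the whole expression evaluates to a single `str` before Gradio sees it, the line breaks in the source have no effect on the rendered HTML.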
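The relabeled "Model" radio passes a display string such as `"COCO (133 classes)"` into `segment`, so the handler has to map that string back to a dataset key. A hypothetical helper sketching that mapping (`parse_model_choice` is not part of this commit; the real app may just compare the strings directly):

```python
import re

def parse_model_choice(choice: str):
    """Map a Radio choice like 'COCO (133 classes)' to a lowercase
    dataset key and its class count. Hypothetical helper for
    illustration only."""
    m = re.match(r"(.+?) \((\d+) classes\)", choice)
    if m is None:
        raise ValueError(f"unrecognized choice: {choice!r}")
    return m.group(1).lower(), int(m.group(2))

print(parse_model_choice("COCO (133 classes)"))  # -> ('coco', 133)
```

Parsing once at the top of the handler keeps the UI strings (which this commit tweaks) decoupled from the model-selection logic.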