praeclarumjj3 committed on
Commit
eee15d2
1 Parent(s): eac4aac
Files changed (1)
  1. gradio_app.py +4 -4
gradio_app.py CHANGED
@@ -187,12 +187,12 @@ def segment(path, task, dataset, backbone):
 
 title = "OneFormer: One Transformer to Rule Universal Image Segmentation"
 
- description = "<p style='font-size: 16px; font-weight: w600; text-align: center'> <a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a> | <a href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github Repo</a></p>" \
- + "<p style='font-size: 12px; text-align: center' margin: 10px font-weight: w300; text-align: center'> <a href='https://praeclarumjj3.github.io/' target='_blank'>Jitesh Jain<sup>*</sup></a> <a href='https://chrisjuniorli.github.io/' target='_blank'>Jiachen Li<sup>*</sup></a> <a href='https://www.linkedin.com/in/mtchiu/' target='_blank'>MangTik Chiu<sup>*</sup></a> <a href='https://alihassanijr.com/' target='_blank'>Ali Hassani</a> <a href='https://www.linkedin.com/in/nukich74/' target='_blank'>Nikita Orlov</a> <a href='https://www.humphreyshi.com/home' target='_blank'>Humphrey Shi</a></p>" \
- + "<p style='text-align: center; font-size: 14px; font-weight: w300;'> \
+ description = "<p style='font-size: 12px; margin: 5px; font-weight: w300; text-align: center'> <a href='https://praeclarumjj3.github.io/' target='_blank'>Jitesh Jain</a> <a href='https://chrisjuniorli.github.io/' target='_blank'>Jiachen Li<sup>*</sup></a> <a href='https://www.linkedin.com/in/mtchiu/' target='_blank'>MangTik Chiu<sup>*</sup></a> <a href='https://alihassanijr.com/' target='_blank'>Ali Hassani</a> <a href='https://www.linkedin.com/in/nukich74/' target='_blank'>Nikita Orlov</a> <a href='https://www.humphreyshi.com/home' target='_blank'>Humphrey Shi</a></p>" \
+ + "<p style='font-size: 16px; margin: 5px; font-weight: w600; text-align: center'> <a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a> | <a href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github Repo</a></p>" \
+ + "<p style='text-align: center; margin: 5px; font-size: 14px; font-weight: w300;'> \
  OneFormer is the first multi-task universal image segmentation framework based on transformers. Our single OneFormer model achieves state-of-the-art performance across all three segmentation tasks with a single task-conditioned joint training process. OneFormer uses a task token to condition the model on the task in focus, making our architecture task-guided for training, and task-dynamic for inference, all with a single model. We believe OneFormer is a significant step towards making image segmentation more universal and accessible.\
  </p>" \
- + "<p style='text-align: center; font-size: 14px; font-weight: w300;'> [Note: Inference on CPU may take upto 2 minutes. On a single RTX A6000 GPU, OneFormer is able to inference at more than 15 FPS.</p>"
+ + "<p style='text-align: center; font-size: 14px; margin: 5px; font-weight: w300;'> [Note: Inference on CPU may take upto 2 minutes. On a single RTX A6000 GPU, OneFormer is able to inference at more than 15 FPS.]</p>"
 
 # description = "<p style='color: #E0B941; font-size: 16px; font-weight: w600; text-align: center'> <a style='color: #E0B941;' href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | <a style='color: #E0B941;' href='https://arxiv.org/abs/2211.06220' target='_blank'>OneFormer: One Transformer to Rule Universal Image Segmentation</a> | <a style='color: #E0B941;' href='https://github.com/SHI-Labs/OneFormer' target='_blank'>Github</a></p>" \
 # + "<p style='color:royalblue; margin: 10px; font-size: 16px; font-weight: w400;'> \
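
The hunk above edits a multi-part HTML `description` built from string literals joined with `+` and trailing-backslash line continuations. A minimal sketch of the same pattern using parenthesized implicit concatenation instead, which avoids the backslashes; the `gr.Interface` wiring in the comment is an assumption based on the hunk header, since the rest of gradio_app.py is not shown in this diff:

```python
title = "OneFormer: One Transformer to Rule Universal Image Segmentation"

# Adjacent string literals inside parentheses are concatenated by the Python
# parser, so no trailing backslashes or explicit "+" operators are needed.
description = (
    "<p style='font-size: 16px; margin: 5px; font-weight: w600; text-align: center'>"
    "<a href='https://praeclarumjj3.github.io/oneformer/' target='_blank'>Project Page</a> | "
    "<a href='https://arxiv.org/abs/2211.06220' target='_blank'>ArXiv Paper</a></p>"
    "<p style='text-align: center; margin: 5px; font-size: 14px; font-weight: w300;'>"
    "[Note: Inference on CPU may take up to 2 minutes.]</p>"
)

# In the app this string would typically be passed to the demo, e.g.
# gr.Interface(fn=segment, ..., title=title, description=description)
# (hypothetical call; the actual launch code is outside this hunk).
```

Gradio renders the `description` argument as HTML/Markdown under the title, which is why the commit only needs to touch this one string to restyle the header of the Space.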