---
library_name: transformers
---

RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight)

Model Card for ViT Large (ViT-L) version


Official repository for RobustSAM: Segment Anything Robustly on Degraded Images

Project Page | Paper | Video | Dataset

Updates

  • July 2024: ✨ Training code, data and model checkpoints for different ViT backbones are released!
  • June 2024: ✨ Inference code has been released!
  • Feb 2024: ✨ RobustSAM was accepted into CVPR 2024!

Introduction

Segment Anything Model (SAM) has emerged as a transformative approach in image segmentation, acclaimed for its robust zero-shot segmentation capabilities and flexible prompting system. Nonetheless, its performance is challenged by images with degraded quality. Addressing this limitation, we propose the Robust Segment Anything Model (RobustSAM), which enhances SAM's performance on low-quality images while preserving its promptability and zero-shot generalization.

Our method builds on the pre-trained SAM model and adds only a marginal number of parameters and a modest amount of computation. The additional parameters of RobustSAM can be optimized within 30 hours on eight GPUs, demonstrating its feasibility and practicality for typical research laboratories. We also introduce the Robust-Seg dataset, a collection of 688K image-mask pairs with different degradations, designed to train and evaluate our model optimally. Extensive experiments across various segmentation tasks and datasets confirm RobustSAM's superior performance, especially under zero-shot conditions, underscoring its potential for extensive real-world application. Additionally, our method has been shown to effectively improve the performance of SAM-based downstream tasks such as single image dehazing and deblurring.
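Since the card declares `library_name: transformers`, prompted segmentation with this checkpoint can be sketched using the library's generic SAM classes. Note this is a minimal sketch under assumptions: the `SamModel`/`SamProcessor` class names, the repo id `jadechoghari/robustsam-vit-large`, and the input file are illustrative, and this checkpoint may instead require custom RobustSAM model code.

```python
# Hedged sketch: assumes the checkpoint loads through transformers' generic
# SAM classes; the class names and repo id below are assumptions, not
# confirmed by this card.
import torch
from PIL import Image
from transformers import SamModel, SamProcessor

repo = "jadechoghari/robustsam-vit-large"  # assumed repo id
processor = SamProcessor.from_pretrained(repo)
model = SamModel.from_pretrained(repo)
model.eval()

image = Image.open("degraded_photo.jpg").convert("RGB")  # hypothetical input
input_points = [[[450, 600]]]  # one (x, y) point prompt for the first image

inputs = processor(image, input_points=input_points, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the predicted low-resolution masks back to the original image size.
masks = processor.image_processor.post_process_masks(
    outputs.pred_masks, inputs["original_sizes"], inputs["reshaped_input_sizes"]
)
print(masks[0].shape)  # per-image tensor: (num_prompts, masks_per_prompt, H, W)
```

The point prompt is nested three levels deep (batch, prompts per image, coordinates), matching the convention of transformers' SAM processor; box or mask prompts would follow the same pattern.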


Comparison of computational requirements


Visual Comparison


Quantitative Comparison

Seen dataset with synthetic degradation


Unseen dataset with synthetic degradation


Unseen dataset with real degradation


Reference

If you find this work useful, please consider citing us!

@inproceedings{chen2024robustsam,
  title={RobustSAM: Segment Anything Robustly on Degraded Images},
  author={Chen, Wei-Ting and Vong, Yu-Jiet and Kuo, Sy-Yen and Ma, Sizhou and Wang, Jian},
  booktitle={CVPR},
  year={2024}
}

Acknowledgements

We thank the authors of SAM, on which our repository is based.