---
license: apache-2.0
---

# Segment Anything

**NEW** Segment Anything is now officially supported in `transformers`! Check out the [official documentation](https://huggingface.co/docs/transformers/main/en/model_doc/sam).

This repository is a mirror of the official [Segment Anything repository](https://github.com/facebookresearch/segment-anything), together with the model weights. We also provide instructions on how to easily download the model weights.

**[Meta AI Research, FAIR](https://ai.facebook.com/research/)**

[Alexander Kirillov](https://alexander-kirillov.github.io/), [Eric Mintun](https://ericmintun.github.io/), [Nikhila Ravi](https://nikhilaravi.com/), [Hanzi Mao](https://hanzimao.me/), Chloe Rolland, Laura Gustafson, [Tete Xiao](https://tetexiao.com), [Spencer Whitehead](https://www.spencerwhitehead.com/), Alex Berg, Wan-Yen Lo, [Piotr Dollar](https://pdollar.github.io/), [Ross Girshick](https://www.rossgirshick.info/)

[[`Paper`](https://ai.facebook.com/research/publications/segment-anything/)] [[`Project`](https://segment-anything.com/)] [[`Demo`](https://segment-anything.com/demo)] [[`Dataset`](https://segment-anything.com/dataset/index.html)] [[`Blog`](https://ai.facebook.com/blog/segment-anything-foundation-model-image-segmentation/)] [[`BibTeX`](#citing-segment-anything)]

![SAM design](https://raw.githubusercontent.com/facebookresearch/segment-anything/main/assets/model_diagram.png)

The **Segment Anything Model (SAM)** produces high-quality object masks from input prompts such as points or boxes, and it can be used to generate masks for all objects in an image. It has been trained on a [dataset](https://segment-anything.com/dataset/index.html) of 11 million images and 1.1 billion masks, and has strong zero-shot performance on a variety of segmentation tasks.
## Download the checkpoints

First install `huggingface_hub`:

```bash
pip install huggingface_hub
```

Then, say you want to download the file `checkpoints/sam_vit_b_01ec64.pth`:

```python
from huggingface_hub import hf_hub_download

chkpt_path = hf_hub_download("ybelkada/segment-anything", "checkpoints/sam_vit_b_01ec64.pth")
```

## Installation

The code requires `python>=3.8`, as well as `pytorch>=1.7` and `torchvision>=0.8`. Please follow the instructions [here](https://pytorch.org/get-started/locally/) to install both PyTorch and TorchVision dependencies. Installing both PyTorch and TorchVision with CUDA support is strongly recommended.

Install Segment Anything:

```
pip install git+https://github.com/facebookresearch/segment-anything.git
```

or clone the repository locally and install with

```
git clone git@github.com:facebookresearch/segment-anything.git
cd segment-anything; pip install -e .
```

The following optional dependencies are necessary for mask post-processing, saving masks in COCO format, the example notebooks, and exporting the model in ONNX format. `jupyter` is also required to run the example notebooks.

```
pip install opencv-python pycocotools matplotlib onnxruntime onnx
```

## Getting Started

First download a [model checkpoint](#model-checkpoints). Then the model can be used in just a few lines to get masks from a given prompt:

```python
from segment_anything import build_sam, SamPredictor

predictor = SamPredictor(build_sam(checkpoint="<path/to/checkpoint>"))
predictor.set_image(<your_image>)
masks, _, _ = predictor.predict(<input_prompts>)
```
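Putting the download and prediction steps together, here is a minimal end-to-end sketch: it fetches the ViT-B checkpoint from this mirror with `hf_hub_download`, builds the matching model via `sam_model_registry`, and queries a single foreground point. The image file `truck.jpg` and the point coordinates are placeholder values for illustration.

```python
import cv2
import numpy as np
from huggingface_hub import hf_hub_download
from segment_anything import SamPredictor, sam_model_registry

# Download the ViT-B checkpoint from this mirror and build the matching model.
chkpt_path = hf_hub_download("ybelkada/segment-anything", "checkpoints/sam_vit_b_01ec64.pth")
sam = sam_model_registry["vit_b"](checkpoint=chkpt_path)

predictor = SamPredictor(sam)
image = cv2.cvtColor(cv2.imread("truck.jpg"), cv2.COLOR_BGR2RGB)  # SamPredictor expects RGB
predictor.set_image(image)

# One foreground click (label 1) at pixel (x=500, y=375); placeholder coordinates.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return three candidate masks
)
print(masks.shape, scores)  # (3, H, W) boolean masks and their predicted quality scores
```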
## ONNX Export

SAM's lightweight mask decoder can be exported to ONNX format so that it can be run in any environment that supports ONNX runtime, such as in-browser as showcased in the [demo](https://segment-anything.com/demo). Export the model with

```
python scripts/export_onnx_model.py --checkpoint <path/to/checkpoint> --model-type <model_type> --output <path/to/output>
```
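Once exported, the decoder can be loaded and inspected with `onnxruntime` before wiring it into an application. This is a small sketch; the filename `sam_onnx_example.onnx` is a hypothetical stand-in for whatever path you passed to `--output` above.

```python
import onnxruntime

# Load the exported mask decoder; "sam_onnx_example.onnx" is a placeholder filename.
session = onnxruntime.InferenceSession("sam_onnx_example.onnx")

# List the decoder's expected inputs (names, shapes, dtypes) before feeding real data.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```

Note that the exported model covers only the prompt encoder and mask decoder: image embeddings still need to be computed with SAM's image encoder (e.g. by calling `SamPredictor.set_image` and then `get_image_embedding`) and passed to the ONNX session alongside the prompts.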