
Model Description

SkinSAM is based on the 12-layer ViT-B variant of SAM; the mask decoder module of SAM is fine-tuned on a combined dataset of ISIC and PH2 skin lesion images and masks. SkinSAM was trained on an NVIDIA Tesla A100 40GB GPU.
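The fine-tuning strategy above updates only the mask decoder while the rest of SAM stays frozen. The sketch below illustrates that freezing pattern in PyTorch on a tiny stand-in module (`ToySAM` and its layers are hypothetical placeholders, not the real SAM architecture):

```python
import torch
from torch import nn

# Stand-in for SAM's three components: image encoder (ViT-B), prompt
# encoder, and mask decoder. These tiny linear layers are placeholders.
class ToySAM(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Linear(16, 16)
        self.prompt_encoder = nn.Linear(16, 16)
        self.mask_decoder = nn.Linear(16, 1)

model = ToySAM()

# Freeze every parameter except those of the mask decoder, mirroring
# the fine-tuning setup described above.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("mask_decoder")

trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)
```

Only the mask decoder's weights and biases remain trainable, so the optimizer touches a small fraction of the 93.7M parameters.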

Notable evaluation results:
ISIC Dataset:

  1. IoU 78.25%
  2. Pixel Accuracy 92.18%
  3. F1 Score 87.47%

PH2 Dataset:

  1. IoU 86.68%
  2. Pixel Accuracy 93.33%
  3. F1 Score 93.95%
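The three metrics reported above can be computed from binary segmentation masks via the pixel-level confusion counts. Below is a minimal sketch (`segmentation_metrics` is a hypothetical helper, not part of the SkinSAM release), operating on flat lists of 0/1 pixel labels:

```python
def segmentation_metrics(pred, target):
    # Pixel-level confusion counts between predicted and ground-truth masks.
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, target))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, target))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, target))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, target))
    iou = tp / (tp + fp + fn)            # intersection over union
    pixel_acc = (tp + tn) / len(pred)    # fraction of correctly labeled pixels
    f1 = 2 * tp / (2 * tp + fp + fn)     # Dice / F1 score
    return iou, pixel_acc, f1

# Toy 6-pixel masks for illustration.
pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
iou, acc, f1 = segmentation_metrics(pred, target)
```

In practice these would be averaged over the ISIC and PH2 test images to produce the percentages listed above.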
The model weights are distributed in Safetensors format (93.7M parameters, F32 tensors).

