[Paper] [GitHub]

TeCoA (Mao et al., 2023) CLIP ViT-L/14 model.

Supervised adversarial fine-tuning on ImageNet, initialized from the OpenAI CLIP weights, with ℓ∞-bounded perturbations of radius 2/255.
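For context, below is a minimal, hypothetical sketch of this threat model: untargeted PGD that perturbs an image within an ℓ∞ ball of radius 2/255. It attacks the image embedding directly and assumes inputs in [0, 1]; TeCoA's actual training objective is supervised (cross-entropy against the zero-shot classifier), so treat this as illustrative only.

import torch

def pgd_linf(model, images, eps=2/255, alpha=0.5/255, steps=10):
    # Untargeted PGD: push the adversarial embedding away from the clean one.
    # Assumes `images` lie in [0, 1]; CLIP normalization must happen inside
    # `model.encode_image` (or be folded into the model) for the clamps to be valid.
    with torch.no_grad():
        clean_emb = model.encode_image(images)
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        adv_emb = model.encode_image((images + delta).clamp(0, 1))
        loss = (adv_emb - clean_emb).norm(dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient-ascent step
            delta.clamp_(-eps, eps)             # project back into the l-inf ball
            delta.grad.zero_()
    return (images + delta).detach().clamp(0, 1)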

Usage

import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/tecoa2-clip')
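Continuing from the snippet above, a minimal zero-shot classification sketch (the image path and prompts are placeholders):

import torch
from PIL import Image

tokenizer = open_clip.get_tokenizer('hf-hub:chs20/tecoa2-clip')
model.eval()

image = image_processor(Image.open('example.jpg')).unsqueeze(0)  # placeholder image path
text = tokenizer(['a photo of a dog', 'a photo of a cat'])       # hypothetical prompts

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # similarity-based probabilities over the candidate captions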

Citation

If you find this model useful, please consider citing our paper:

@inproceedings{schlarmann2024robustclip,
    title={Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models},
    author={Christian Schlarmann and Naman Deep Singh and Francesco Croce and Matthias Hein},
    year={2024},
    booktitle={ICML}
}