---
tags:
- immich
- clip
---
# Model Description
This repo contains ONNX exports for the CLIP model [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14).
The visual and textual encoders are exported as separate models so that image and text embeddings can be generated independently.
This repo is specifically intended for use with [Immich](https://immich.app/), a self-hosted photo library.
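
The exports can be loaded directly with ONNX Runtime. The sketch below is a minimal example, not Immich's own loading code; the file paths (`visual/model.onnx`, `textual/model.onnx`), input names, and input shape are assumptions to be checked against the actual repo layout and each session's `get_inputs()`.

```python
# Minimal sketch: loading the separate visual and textual encoders with onnxruntime.
# Paths and input names are assumptions; inspect the repo and session inputs first.
import numpy as np
import onnxruntime as ort

visual = ort.InferenceSession("visual/model.onnx")
textual = ort.InferenceSession("textual/model.onnx")

# Inspect the expected input names and shapes before running inference.
print([(i.name, i.shape) for i in visual.get_inputs()])
print([(i.name, i.shape) for i in textual.get_inputs()])

# Example: image embedding from a preprocessed pixel tensor of shape
# (batch, 3, 224, 224), normalized per the CLIP preprocessing config.
pixel_values = np.random.rand(1, 3, 224, 224).astype(np.float32)
image_embedding = visual.run(None, {visual.get_inputs()[0].name: pixel_values})[0]
```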