---
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
---
# CLIP-BERT training data
This dataset was used to train the CLIP-BERT model, first described in this paper.
The dataset is built from text and images from MS COCO, SBU Captions, Visual Genome QA, and Conceptual Captions.
The image features were extracted with the CLIP model openai/clip-vit-base-patch32, available on the Hugging Face Hub.
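
For reference, below is a minimal sketch of how image features can be extracted from this checkpoint with the transformers library. The exact preprocessing used to build this dataset is not specified here, and the example image URL is an assumption, so treat this as illustrative rather than the actual extraction script.

```python
# Hypothetical sketch: extracting CLIP image features with transformers.
# Not the exact pipeline used to build this dataset.
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative image URL (a COCO validation image), assumed for this example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and project it into CLIP's embedding space.
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_features = model.get_image_features(**inputs)

print(image_features.shape)  # torch.Size([1, 512])
```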