---
license: cc-by-nc-4.0
task_categories:
  - text-to-image
  - image-to-text
  - text-retrieval
modalities:
  - image
  - text
language:
  - zh
  - en
  - ja
  - ru
tags:
  - realistic
  - industry
  - mobile user interface
  - image-text matching
  - image-text retrieval
  - noisy correspondence learning
  - NCL benchmark
size_categories:
  - 100K<n<1M
---

# PC2-NoiseofWeb

This repo releases the data introduced in our paper:

**PC2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval**
Authors: Yue Duan, Zhangxuan Gu, Zhenzhe Ying, Lei Qi, Changhua Meng and Yinghuan Shi

Quick links: [arXiv (coming soon) | Published paper (coming soon) | Poster (coming soon) | Zhihu (coming soon) | Code download | Dataset download]

## Data Collection

We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains 100K cross-modal pairs consisting of website images and multilingual website meta-descriptions (98,000 pairs for training, 1,000 for validation, and 1,000 for testing). NoW has two main characteristics: it requires no human annotation, and its noisy pairs are captured naturally rather than synthesized. The source images of NoW are screenshots taken while accessing web pages on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and the captions are parsed from the meta-description field of each page's HTML source code. In NCR (the predecessor of NCL), every image in each dataset was preprocessed with the Faster R-CNN detector provided by the Bottom-up Attention Model to generate 36 region proposals, each encoded as a 2048-dimensional feature. Following NCR, we therefore release precomputed features instead of raw images for fair comparison. However, we cannot simply reuse a detector such as Faster R-CNN to extract image features, since it is trained on the real-world animals and objects of MS-COCO. To tackle this, we adopt APT as the detection model, since it is trained on MUI data, and extract 768-dimensional features for the top 36 detected objects in each image. Because the data collection process is fully automated and not curated by humans, the noise in NoW is highly authentic and intrinsic. The estimated noise ratio of this dataset is nearly 70%.
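
The sketch below is purely illustrative: APT and the actual extraction pipeline are not released, so the helper name `pack_top_k_features`, the confidence-based selection, and the zero-padding are all assumptions. It only shows how top-36 detections with 768-dimensional features could be packed into the fixed per-image layout described above.

```python
import numpy as np

def pack_top_k_features(proposal_feats, proposal_scores, k=36, dim=768):
    """Keep the k highest-scoring proposals and stack their features
    into a fixed (k, dim) array, zero-padding when fewer are detected."""
    order = np.argsort(proposal_scores)[::-1][:k]  # indices of top-k scores
    packed = np.zeros((k, dim), dtype=np.float32)
    packed[: len(order)] = proposal_feats[order]
    return packed

# Toy example: a detector returned 50 proposals with 768-d features.
rng = np.random.default_rng(0)
feats = rng.standard_normal((50, 768)).astype(np.float32)
scores = rng.random(50)
image_feature = pack_top_k_features(feats, scores)
print(image_feature.shape)  # (36, 768), one row per retained object
```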

## Data Structure


```
|-- h5100k_precomp
|   |-- dev_caps_bpe.txt
|   |-- dev_caps_bert.txt
|   |-- dev_ids.txt
|   |-- dev_ims.npy
|   |-- test_caps_bpe.txt
|   |-- test_caps_bert.txt
|   |-- test_ids.txt
|   |-- test_ims.npy
|   |-- train_caps_bpe.txt
|   |-- train_caps_bert.txt
|   |-- train_ids.txt
|   |-- train_ims.npy
|-- vocab
|   |-- now100k_precomp_vocab_bert.json
|   |-- now100k_precomp_vocab_bpe.json
|   |-- now100k_precomp_vocab_jieba.json
```
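
A minimal loading sketch, assuming line i of the caption and id files aligns with row i of the corresponding `*_ims.npy` array (the usual `*_precomp` convention inherited from NCR); the local path and the exact array shape are assumptions, not guarantees.

```python
import numpy as np

root = "h5100k_precomp"  # path to the downloaded folder (assumption)
split = "train"

# Precomputed image features; we assume one (36, 768) block per pair.
ims = np.load(f"{root}/{split}_ims.npy")

# Token-id captions and original-dataset serial numbers, one per line.
with open(f"{root}/{split}_caps_bert.txt", encoding="utf-8") as f:
    caps = [line.split() for line in f]
with open(f"{root}/{split}_ids.txt", encoding="utf-8") as f:
    ids = [line.strip() for line in f]

print(ims.shape)            # expected: (98000, 36, 768) for the train split
print(len(caps), len(ids))  # expected: 98000 each, aligned by line index
```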

Please note that, since our raw data contains some sensitive business information, we only provide the encoded image features (`*_ims.npy`) and the token ids of the tokenized text. For tokenization, we use both the Tokenizers library with BPE to produce `*_caps_bpe.txt` and BertTokenizer with the bert-base-multilingual-cased pre-trained model to produce `*_caps_bert.txt`. The vocabulary size of the BPE tokenizer is 10,000, and that of the BertTokenizer is 32,702. `*_ids.txt` records the serial number of each pair in the original 500k dataset. In the future, we may process and release the original dataset.
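
For orientation, the sketch below shows how a caption could be turned into token ids with the same pre-trained BertTokenizer named above. This is only an illustration: the ids in the released `*_caps_bert.txt` are tied to the dataset's own 32,702-entry vocabulary file (`now100k_precomp_vocab_bert.json`) and may therefore be remapped relative to the raw mBERT ids shown here, and the sample caption is invented since the raw text is not released.

```python
from transformers import BertTokenizer

# Pre-trained multilingual tokenizer named in the paragraph above.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# A toy meta-description standing in for a real (unreleased) caption.
caption = "Shop the latest mobile accessories online."
ids = tokenizer.encode(caption, add_special_tokens=True)
print(ids)                                # token ids, analogous to one line of *_caps_bert.txt
print(tokenizer.convert_ids_to_tokens(ids))  # the corresponding subword tokens
```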