---
license: cc-by-nc-4.0
task_categories:
- text-to-image
- image-to-text
- text-retrieval
modalities:
- image
- text
language:
- zh
- en
- ja
- ru
tags:
- realistic
- industry
- mobile user interface
- image-text matching
- image-text retrieval
- noisy correspondence learning
- NCL benchmark
size_categories:
- 100K<n<1M
---
# PC2-NoiseofWeb
This repo releases the data introduced in our paper
> ***PC2: Pseudo-Classification Based Pseudo-Captioning for Noisy Correspondence Learning in Cross-Modal Retrieval***
> ***Authors**: Yue Duan, Zhangxuan Gu, Zhenzhe Ying, Lei Qi, Changhua Meng and Yinghuan Shi*

Quick links: [[arXiv (coming soon)]() | [Published paper (coming soon)]() | [Poster (coming soon)]() | [Zhihu (coming soon)]() | [Code download]() | [Dataset download](https://drive.google.com/file/d/1MsR9GmRDUj4NoeL4xL8TXpes51JnpsrZ/view?usp=drive_link)]

## Data Collection
We develop a new dataset named **Noise of Web (NoW)** for NCL. It contains **100K** cross-modal pairs consisting of **website images** and **multilingual website meta-descriptions** (**98,000 pairs for training, 1,000 for validation, and 1,000 for testing**). NoW has two main characteristics: *it requires no human annotation, and its noisy pairs are captured naturally*. The source images of NoW are obtained by taking screenshots of web pages rendered on a mobile user interface (MUI) at 720 $\times$ 1280 resolution, and the captions are parsed from the meta-description field of the HTML source code. In [NCR](https://github.com/XLearning-SCU/2021-NeurIPS-NCR) (the predecessor of NCL), each image in every dataset was preprocessed with the Faster R-CNN detector provided by the [Bottom-up Attention Model](https://github.com/peteanderson80/bottom-up-attention) to generate 36 region proposals, each encoded as a 2048-dimensional feature. Following NCR, we therefore release precomputed features instead of raw images for fair comparison. However, we cannot simply use a detector such as Faster R-CNN to extract image features, since it is trained on real-world animals and objects in MS-COCO. To tackle this, we adapt [APT](https://openaccess.thecvf.com/content/CVPR2023/papers/Gu_Mobile_User_Interface_Element_Detection_via_Adaptively_Prompt_Tuning_CVPR_2023_paper.pdf) as the detection model, since it is trained on MUI data, and extract 768-dimensional features for the top 36 detected objects of each image. Because the collection process is fully automated and involves no human curation, the noise in NoW is highly authentic and intrinsic. **The estimated noise ratio of this dataset is nearly 70%**.
<div align=center>
<img width="750px" src="/figures/now-1.jpg">
</div>
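For reference, here is a minimal loading sketch (not part of the official release). It assumes the \*_ims.npy files store the 36 region features of each image as an array of shape `(num_images, 36, 768)`, matching the description above; adjust if the actual layout differs.

```python
# Minimal sketch: inspect the precomputed MUI image features.
# Assumption (not confirmed by the release): each *_ims.npy file holds
# one (36, 768) feature block per image, stacked along the first axis.
import numpy as np

train_feats = np.load("h5100k_precomp/train_ims.npy")
print(train_feats.shape)  # expected: (98000, 36, 768) for the training split
print(train_feats.dtype)  # typically float32 for precomputed features
```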
## Data Structure
```
|-- h5100k_precomp
|   |-- dev_caps_bpe.txt
|   |-- dev_caps_bert.txt
|   |-- dev_ids.txt
|   |-- dev_ims.npy
|   |-- test_caps_bpe.txt
|   |-- test_caps_bert.txt
|   |-- test_ids.txt
|   |-- test_ims.npy
|   |-- train_caps_bpe.txt
|   |-- train_caps_bert.txt
|   |-- train_ids.txt
|   |-- train_ims.npy
|-- vocab
|   |-- now100k_precomp_vocab_bert.json
|   |-- now100k_precomp_vocab_bpe.json
|   |-- now100k_precomp_vocab_jieba.json
```
Please note that, because our raw data contains sensitive business information, we only provide the **encoded image features** (\*_ims.npy) and the **token ids of the tokenized text**. For tokenization, we use [Tokenizers](https://github.com/huggingface/tokenizers) with a [BPE](https://huggingface.co/docs/tokenizers/api/models#tokenizers.models.BPE) model to produce \*_caps_bpe.txt, and [BertTokenizer](https://huggingface.co/transformers/v3.0.2/model_doc/bert.html#berttokenizer) with the [bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) pre-trained model to produce \*_caps_bert.txt. **The vocabulary size of the BPE tokenizer is 10,000, and that of the BertTokenizer is 32,702**. \*_ids.txt records the serial number of each pair in the original 500K dataset. In the future, we may process and release the original dataset publicly.
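As a rough illustration (not taken from the release), the sketch below shows one way the tokenized caption files and a vocabulary file could be read. It assumes each line of \*_caps_\*.txt holds the space-separated token ids of one caption and that the vocab JSON is a token-to-id mapping; the exact parsing may need adjustment to the actual file layout.

```python
# Hypothetical reader for the tokenized captions and BPE vocabulary.
# Assumptions (not confirmed by the release): one caption per line of
# space-separated token ids; vocab JSON maps token string -> integer id.
import json

with open("vocab/now100k_precomp_vocab_bpe.json", encoding="utf-8") as f:
    token_to_id = json.load(f)
id_to_token = {i: t for t, i in token_to_id.items()}

with open("h5100k_precomp/train_caps_bpe.txt", encoding="utf-8") as f:
    captions = [[int(tok) for tok in line.split()] for line in f if line.strip()]

print(len(captions))  # expected: 98,000 training captions
# Roughly reconstruct the first caption (yields BPE pieces, not clean text).
print(" ".join(id_to_token.get(i, "<unk>") for i in captions[0]))
```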