---
license: cc-by-nc-4.0
language:
- en
tags:
- fashion
- visual search
pretty_name: LAION — Referred Visual Search — Fashion
size_categories:
- 1M<n<10M
---
<div align="center">
<h1 align="center">LAION - Referred Visual Search - Fashion</h1>
Introduced in ***LRVS-Fashion: Extending Visual Search with Referring Instructions***
<a href="https://simon-lepage.github.io"><strong>Simon Lepage</strong></a>
—
<strong>Jérémie Mary</strong>
—
<a href="https://davidpicard.github.io"><strong>David Picard</strong></a>
<a href="https://ailab.criteo.com">CRITEO AI Lab</a>
&
<a href="https://imagine-lab.enpc.fr">ENPC</a>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2306.02928">
<img alt="ArXiV Badge" src="https://img.shields.io/badge/arXiv-2306.02928-b31b1b.svg">
</a>
</p>
<div align="center">
<div id=links>
**Useful Links**<br>
[Test set](https://zenodo.org/doi/10.5281/zenodo.11189942) —
[Benchmark Code](https://github.com/Simon-Lepage/LRVSF-Benchmark) —
[LRVS-F Leaderboard](https://huggingface.co/spaces/Slep/LRVSF-Leaderboard) —
[Demo](https://huggingface.co/spaces/Slep/CondViT-LRVSF-Demo)
</div>
</div>
## **Composition**
LAION-RVS-Fashion is composed of images from:
- **[LAION 2B EN](https://huggingface.co/datasets/laion/laion2B-en)**
- **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
- **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**
These images have been grouped based on extracted product IDs. Each product in the training set comprises at least one simple image (isolated product) and one complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) below and refer to [our paper](https://arxiv.org/abs/2306.02928) for additional details.
|Split|Products|Distractors|
|-:|:-:|:-:|
|Train|272,457|-|
|Valid|400|99,541|
|Test|2,000|2,000,014|
**Total number of training images:** 841,718.
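
A minimal sketch of grouping the metadata by product, assuming the files load through the standard `datasets` API (the split name and exact schema here are illustrative; check the repository layout):

```python
from collections import defaultdict

from datasets import load_dataset

# Hypothetical invocation; adjust the split/config to the actual repository layout.
ds = load_dataset("Slep/LAION-RVS-Fashion", split="train")

# PRODUCT_ID groups rows depicting the same product.
products = defaultdict(list)
for row in ds:
    products[row["PRODUCT_ID"]].append(row)

# Each training product should contain at least one SIMPLE and one COMPLEX image.
some_product = next(iter(products.values()))
print([r["TYPE"] for r in some_product])
```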
## **Samples**
<table style='text-align:center'>
<tbody>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/97969.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/219924.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Neck</td>
<td colspan=2>Lower Body</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>a scarf with multi-coloured stripes</td>
<td colspan=2>stella pants - dark suede</td>
</tr>
<tr></tr>
<tr>
<td></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/72317.1.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.0.jpg" style="height:200px"></td>
<td><img src="https://huggingface.co/datasets/Slep/LAION-RVS-Fashion/resolve/main/assets/108856.1.jpg" style="height:200px"></td>
</tr>
<tr>
<td><b>Categories</b></td>
<td colspan=2>Feet</td>
<td colspan=2>Bags</td>
</tr>
<tr>
<td><b>BLIP2 Captions</b></td>
<td colspan=2>neon green patent leather heels with studs</td>
<td colspan=2>the burberry small leather bag is brown and leather</td>
</tr>
</tbody>
</table>
## **Attributes**
- **URL**, **WIDTH**, **HEIGHT**, **punsafe**, **pwatermark**, **language**: Original LAION fields. Please refer to their repository.
- **TEXT**: Text originally associated with the image.
- **ENG_TEXT**: English translation of TEXT for MULTI/NOLANG images, copy of TEXT for EN.
- **TYPE**: SIMPLE (isolated products), COMPLEX (scenes), PARTIAL_COMPLEX (zoomed-in scenes).
- **PRODUCT_ID**: Product identifier, used to group images depicting the same product.
- **INDEX_SRC**: ID of parquet file originally storing this image.
- **CATEGORY**: Product category - one of `Bags, Feet, Hands, Head, Lower Body, Neck, Outwear, Upper Body, Waist, Whole Body` for products, plus `NonClothing` for some distractors.
- **blip2_caption1, blip2_caption2**: [BLIP2-FlanT5XL](https://huggingface.co/Salesforce/blip2-flan-t5-xl)-generated captions.
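
As an illustration of how these fields combine, the sketch below selects isolated product shots of a single category. It assumes a split already loaded as `ds` (see the loading sketch above); column names follow the list but should be verified against the actual schema:

```python
# Keep only isolated product images (TYPE == "SIMPLE") depicting bags.
simple_bags = ds.filter(
    lambda row: row["TYPE"] == "SIMPLE" and row["CATEGORY"] == "Bags"
)

# Integer indexing on a datasets.Dataset returns a plain dict of columns.
print(simple_bags[0]["URL"])
print(simple_bags[0]["blip2_caption1"])
```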
We also release `bootstrap_IDs.pkl`, the file used to generate the bootstrapped results of the paper. `test_subsets` is composed of [product IDs](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L229), while `dist_{N}_subsets` are [row indices](https://github.com/Simon-Lepage/CondViT-LRVSF/blob/b660d82b5775de417ba81ac846b6df004b31eb75/lrvsf/test/metrics.py#L248).
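
A sketch of reading the bootstrap file; the top-level structure is inferred from the linked metrics code and should be verified against the actual file contents:

```python
import pickle

with open("bootstrap_IDs.pkl", "rb") as f:
    bootstrap = pickle.load(f)

# Assuming a mapping keyed by subset name, e.g. "test_subsets" (product IDs)
# and "dist_{N}_subsets" (row indices), as described above.
print(list(bootstrap))
```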
---
## Citing the dataset
To cite our work, please use the following BibTeX entry:
```bibtex
@article{lepage2023lrvsf,
title={LRVS-Fashion: Extending Visual Search with Referring Instructions},
author={Lepage, Simon and Mary, Jérémie and Picard, David},
journal={arXiv:2306.02928},
year={2023}
}
```