Update links

README.md

@@ -18,8 +18,8 @@ size_categories:
 
 [Simon Lepage](https://simon-lepage.github.io), Jérémie Mary, [David Picard](https://davidpicard.github.io)
 
-[[`Paper`](
-[[`Demo`](
+[[`Paper`](https://arxiv.org/abs/2306.02928)]
+[[`Demo`](https://huggingface.co/spaces/Slep/CondViT-LRVSF-Demo)]
 [[`Code`](https://github.com/Simon-Lepage/CondViT-LRVSF)]
 [[`BibTeX`](#citing-the-dataset)]
 
@@ -33,7 +33,7 @@ LAION-RVS-Fashion is composed of images from :
 - **[LAION 2B MULTI TRANSLATED](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en)**
 - **[LAION 1B NOLANG TRANSLATED](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en)**
 
-These images have been grouped based on extracted product IDs. Each product in the training set is composed of at least a single image (isolated product), and a complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](
+These images have been grouped based on extracted product IDs. Each product in the training set is composed of at least a single image (isolated product), and a complex image (scene). We added categorical metadata and BLIP2 captions to each product. Please see the [samples](#samples) and refer to [our paper](https://arxiv.org/abs/2306.02928) for additional details.
 
 |Split|Products|Distractors|
 |-:|:-:|:-:|
@@ -110,7 +110,7 @@ To cite our work, please use the following BibTeX entry :
 @article{lepage2023condvit,
   title={Weakly-Supervised Conditional Embedding for Referred Visual Search},
   author={Lepage, Simon and Mary, Jérémie and Picard, David},
-  journal={arXiv:
+  journal={arXiv:2306.02928},
   year={2023}
 }
 ```
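The updated paragraph describes the dataset's structure: images grouped by an extracted product ID, where each training product has at least one isolated-product image and one complex scene image. A minimal Python sketch of that grouping and validity check is below; the record fields (`product_id`, `kind`, `category`, `caption`) are illustrative assumptions, not the dataset's actual schema:

```python
from collections import defaultdict

# Hypothetical flat records, one per image. Field names are assumptions
# for illustration only, not LAION-RVS-Fashion's real column names.
rows = [
    {"product_id": "A1", "kind": "product", "category": "Upper Body", "caption": "a red jacket"},
    {"product_id": "A1", "kind": "scene",   "category": "Upper Body", "caption": "a person wearing a red jacket"},
    {"product_id": "B2", "kind": "product", "category": "Bags",       "caption": "a leather handbag"},
    {"product_id": "B2", "kind": "scene",   "category": "Bags",       "caption": "a woman holding a handbag"},
]

# Group images by extracted product ID, as the README describes.
products = defaultdict(list)
for row in rows:
    products[row["product_id"]].append(row)

def is_valid_training_product(images):
    """A training product needs at least one isolated image and one scene image."""
    kinds = {img["kind"] for img in images}
    return {"product", "scene"} <= kinds

valid = {pid for pid, imgs in products.items() if is_valid_training_product(imgs)}
print(sorted(valid))  # → ['A1', 'B2']
```

Both sample products pass the check because each contributes one `product` image and one `scene` image; a product with only isolated shots would be filtered out of the training split under this constraint.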