# Image2Struct - Latex

[Paper](TODO) | [Website](https://crfm.stanford.edu/helm/image2structure/latest/) | Datasets ([Webpages](https://huggingface.co/datasets/stanford-crfm/i2s-webpage), [Latex](https://huggingface.co/datasets/stanford-crfm/i2s-latex), [Music sheets](https://huggingface.co/datasets/stanford-crfm/i2s-musicsheet)) | [Leaderboard](https://crfm.stanford.edu/helm/image2structure/latest/#/leaderboard) | [HELM repo](https://github.com/stanford-crfm/helm) | [Image2Struct repo](https://github.com/stanford-crfm/image2structure)

**License:** [Apache License](http://www.apache.org/licenses/) Version 2.0, January 2004

## Dataset description

Image2Struct is a benchmark for evaluating vision-language models on the practical task of extracting structured information from images.

This subdataset focuses on LaTeX code. The model is given an image of the expected output with the prompt:
* algorithms
* code

The last category, **wild**, was collected by taking screenshots of equations on the Wikipedia page for "equation" and its related pages.

## Uses

To load the subset `equation` of the dataset to be sent to the model under evaluation in Python:

```python
import datasets

datasets.load_dataset("stanford-crfm/i2s-latex", "equation", split="validation")
```

To evaluate a model on Image2Latex (equation) using [HELM](https://github.com/stanford-crfm/helm/), run the following commands:

```sh
pip install crfm-helm
helm-run --run-entries image2latex:subset=equation,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

You can also run the evaluation for only a specific `subset` and `difficulty`:
```sh
helm-run --run-entries image2latex:subset=equation,difficulty=hard,model=vlm --models-to-run google/gemini-pro-vision --suite my-suite-i2s --max-eval-instances 10
```

For more information on running Image2Struct using [HELM](https://github.com/stanford-crfm/helm/), refer to the [HELM documentation](https://crfm-helm.readthedocs.io/) and the article on [reproducing leaderboards](https://crfm-helm.readthedocs.io/en/latest/reproducing_leaderboards/).
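HELM computes its own metrics for this task (including image-level comparisons); purely as an illustrative sketch of what comparing a model's predicted LaTeX against the reference source could look like, a normalized text-similarity score can be computed with the standard library alone:

```python
# Illustrative only: a normalized similarity in [0, 1] between predicted
# and reference LaTeX strings, using stdlib difflib. This is NOT HELM's
# actual scoring, just a minimal sketch of text-level comparison.
from difflib import SequenceMatcher


def latex_text_similarity(predicted: str, reference: str) -> float:
    """Return SequenceMatcher's similarity ratio between two LaTeX strings."""
    return SequenceMatcher(None, predicted, reference).ratio()


reference = r"\frac{a}{b} + c^2"
predicted = r"\frac{a}{b} + c_2"  # one-character error: subscript vs superscript
print(latex_text_similarity(predicted, reference))
```

An exact reproduction scores 1.0, and the score degrades smoothly as the predicted source drifts from the reference.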

## Citation

**BibTeX:**

```tex
@misc{roberts2024image2struct,
      title={Image2Struct: A Benchmark for Evaluating Vision-Language Models in Extracting Structured Information from Images},
      author={Josselin Somerville Roberts and Tony Lee and Chi Heem Wong and Michihiro Yasunaga and Yifan Mai and Percy Liang},
      year={2024},
      eprint={TBD},
      archivePrefix={arXiv},
      primaryClass={TBD}
}
```