---
annotations_creators:
  - crowdsourced
language:
  - en
multilinguality:
  - monolingual
pretty_name: KiloGram
size_categories:
  - 1K<n<10K
source_datasets:
  - original
tags:
  - tangrams
  - reference-games
  - vision-language
viewer: false
---

Preprocessed training and evaluation data from KiloGram.

KiloGram dataset and code repo: https://github.com/lil-lab/kilogram


## File Formats

### Training Set

Texts: `train_*.json` files are all in the format `{tangramName: list(annotations)}`.

Images: colored images with parts (under `/color`) are named in the format `tangramName_{idx}.png`, where `idx` corresponds to the index of the annotation in the text file.
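
As a minimal sketch of this layout (the file and directory paths below are placeholders, not fixed names from the dataset), each training annotation can be paired with its part-colored image like this:

```python
import json
from pathlib import Path

# Placeholder paths; point these at the extracted dataset.
texts_path = Path("train_texts.json")  # any of the train_*.json files
image_dir = Path("color")              # colored images with parts

# {tangramName: list(annotations)}
with open(texts_path) as f:
    annotations = json.load(f)

# The i-th annotation for a tangram corresponds to tangramName_{i}.png.
for tangram_name, notes in annotations.items():
    for idx, note in enumerate(notes):
        image_path = image_dir / f"{tangram_name}_{idx}.png"
        print(image_path, "->", note)
```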

### Validation, Development, and Heldout Sets

Texts: `{whole, part}_{black, color}.json` are in the format `{"targets": list(imageFileNames), "images": list(imageFileNames), "texts": list(annotations)}`. We flattened all the contexts and concatenated them into one list for each entry.

E.g., the first 10 elements of `targets` are the file name of the first context's target image, repeated 10 times; the first 10 elements of `images` are the file names of the 10 images in that context; and the first 10 elements of `texts` are the corresponding 10 annotations for that context.
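
To make the flattening concrete, here is a small sketch that regroups the parallel lists back into contexts of 10 (the file name is a placeholder; any of the `{whole, part}_{black, color}.json` files works):

```python
import json

CONTEXT_SIZE = 10  # each context holds 10 images and 10 annotations

# Placeholder path; substitute the actual evaluation file.
with open("whole_black.json") as f:
    data = json.load(f)

# Regroup the flattened parallel lists into per-context chunks.
contexts = []
for start in range(0, len(data["texts"]), CONTEXT_SIZE):
    end = start + CONTEXT_SIZE
    contexts.append({
        "target": data["targets"][start],     # repeated across the chunk, so take one
        "images": data["images"][start:end],  # the context's candidate images
        "texts": data["texts"][start:end],    # the context's annotations
    })

print(f"Recovered {len(contexts)} contexts of size {CONTEXT_SIZE}")
```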

`/controlled` contains experiments with contexts controlled for the number of parts, and `/random` contains experiments without this constraint (see Appendix A.8 in the paper).

`/development/texts/augmented/aug_dev.json` and `images/augmented.tar.bz2` contain experiments in the same format as above, used to evaluate the effect of adding part information.

#### Intermediate files

`*/text/controlled/eval_batch_data.json` files are in the format `{tangramName: {numOfParts: list({"target": [tangramName_{idx}, annotation], "distractors": list(list([tangramName_{idx}, annotation]))})}}` and are used to generate the controlled experiment JSONs. Note: in these files, annotations are part descriptions concatenated with `#` rather than natural English.
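
A sketch of walking this nested structure and splitting the `#`-joined annotations (the path is a placeholder for any matching `eval_batch_data.json`):

```python
import json

# Placeholder path; any */text/controlled/eval_batch_data.json file.
with open("text/controlled/eval_batch_data.json") as f:
    batches = json.load(f)

# {tangramName: {numOfParts: [{"target": [imageName, annotation],
#                              "distractors": list of lists of such pairs}, ...]}}
for tangram_name, by_num_parts in batches.items():
    for num_parts, entries in by_num_parts.items():
        for entry in entries:
            image_name, annotation = entry["target"]
            # Annotations here are part descriptions joined by "#",
            # not natural-English sentences.
            parts = annotation.split("#")
            print(image_name, num_parts, parts)
```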

## Citation

```bibtex
@misc{ji2022abstractvisualreasoningtangram,
      title={Abstract Visual Reasoning with Tangram Shapes},
      author={Anya Ji and Noriyuki Kojima and Noah Rush and Alane Suhr and Wai Keen Vong and Robert D. Hawkins and Yoav Artzi},
      year={2022},
      eprint={2211.16492},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2211.16492},
}
```