---
license: mit
task_categories:
  - visual-question-answering
language:
  - en
  - zh
  - ja
pretty_name: Oogiri-GO
size_categories:
  - 100K<n<1M
---

# Oogiri-GO Dataset Card

**Data description:** Oogiri-GO is a multimodal, multilingual humor dataset containing more than 130,000 Oogiri samples in English (`en.jsonl`), Chinese (`cn.jsonl`), and Japanese (`jp.jsonl`). Notably, 77.95% of the samples are annotated with human preferences, namely the number of likes, indicating the popularity of a response. As illustrated in Fig. 1, Oogiri-GO contains three types of Oogiri games, depending on whether the input is text, an image, or both; for brevity, they are called "Text to Text" (T2T), "Image to Text" (I2T), and "Image & Text to Text" (IT2T), respectively.

Figure 1. Examples of the three types of LoT-based Oogiri games. Players are required to make surprising and creative humorous responses (blue box) to the given multimodal information, e.g., images, text, or both.

Each line in the JSONL files represents a sample, formatted as follows:

```json
{"type": "I2T", "question": null, "image": "5651380", "text": "It wasn't on purpose, I'm sorry!", "star": 5}
```

where `type` indicates the type of Oogiri game for the sample (T2T, I2T, or IT2T); `question` is the text question, which is `null` for types other than T2T; `image` is the image question, which is `null` for T2T samples; `text` is the text response; and `star` denotes the human preference (the number of likes).
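The schema can be checked with a few lines of Python, using the sample line quoted above (JSON `null` deserializes to Python `None`):

```python
import json

# The I2T sample line shown above
line = '''{"type": "I2T", "question": null, "image": "5651380", "text": "It wasn't on purpose, I'm sorry!", "star": 5}'''

sample = json.loads(line)
# I2T samples carry no text question, so "question" is None
print(sample["type"], sample["question"], sample["star"])
```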

**Data distribution:** The table below summarizes the distribution of samples across game types and languages. For training purposes, 95% of the samples are randomly selected to construct the training set, while the remaining 5% form the test set used for validation and analysis.
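The 95%/5% random split can be sketched as follows; this is a hypothetical helper, since the dataset's exact split procedure and seed are not published:

```python
import random

def train_test_split(samples, train_frac=0.95, seed=0):
    """Randomly split samples into train/test sets (default 95% / 5%).
    The seed and shuffling scheme here are illustrative, not official."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]
```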

| Category | English | Chinese | Japanese |
|----------|--------:|--------:|---------:|
| I2T      | 17336   | 32130   | 40278    |
| T2T      | 6433    | 15797   | 11842    |
| IT2T     | 0       | 912     | 9420     |
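Summing the per-language counts in the table confirms the "more than 130,000 samples" figure stated above:

```python
# Per-language, per-type counts from the table above
counts = {
    "English":  {"I2T": 17336, "T2T": 6433,  "IT2T": 0},
    "Chinese":  {"I2T": 32130, "T2T": 15797, "IT2T": 912},
    "Japanese": {"I2T": 40278, "T2T": 11842, "IT2T": 9420},
}
total = sum(sum(per_type.values()) for per_type in counts.values())
print(total)  # 134148
```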

For more information, see the project page: https://zhongshsh.github.io/CLoT