  - split: test
    path: data/test-*
---
# Dataset Card for ChemQA
Introducing ChemQA: a Multimodal Question-and-Answering Dataset on Chemistry Reasoning. This work is inspired by IsoBench[1] and ChemLLMBench[2].

## Content

There are five QA tasks in total:
* Counting Numbers of Carbons and Hydrogens in Organic Molecules: adapted from the 600 PubChem molecules curated in [2], evenly divided into validation and evaluation sets.
* Calculating Molecular Weights of Organic Molecules: adapted from the same 600 PubChem molecules, evenly divided into validation and evaluation sets.
* Name Conversion (SMILES to IUPAC): adapted from the same 600 PubChem molecules, evenly divided into validation and evaluation sets.
* Molecule Captioning and Editing: inspired by [2] and adapted from the dataset provided in [3], following the same training, validation, and evaluation splits.
* Retro-synthesis Planning: inspired by [2] and adapted from the dataset provided in [4], following the same training, validation, and evaluation splits.
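
To make the arithmetic behind the first two tasks concrete, here is a minimal sketch that counts atoms and computes a molecular weight from a flat molecular formula string (e.g. `C2H6O`). The function names, the atomic-mass table, and the restriction to formulas without parentheses or charges are assumptions of this toy example, not part of the dataset; the actual questions are posed over the PubChem molecules from [2].

```python
import re

# Approximate standard atomic weights (g/mol) for common organic elements.
ATOMIC_MASS = {'C': 12.011, 'H': 1.008, 'N': 14.007, 'O': 15.999, 'S': 32.06, 'P': 30.974}

# Each regex match is an element symbol followed by an optional count,
# e.g. 'C2H6O' -> [('C', '2'), ('H', '6'), ('O', '')].
FORMULA_RE = re.compile(r'([A-Z][a-z]?)(\d*)')

def atom_count(formula: str, element: str) -> int:
    """Count atoms of `element` in a flat formula: atom_count('C2H6O', 'H') -> 6."""
    return sum(int(n) if n else 1
               for elem, n in FORMULA_RE.findall(formula) if elem == element)

def molecular_weight(formula: str) -> float:
    """Molecular weight of a flat formula like 'C2H6O' (no parentheses or charges)."""
    return sum(ATOMIC_MASS[elem] * (int(n) if n else 1)
               for elem, n in FORMULA_RE.findall(formula))

print(atom_count('C2H6O', 'C'))                 # 2
print(round(molecular_weight('C2H6O'), 2))      # ethanol, 46.07
```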

## Load the Dataset

```python
from datasets import load_dataset

dataset_train = load_dataset('shangzhu/ChemQA', split='train')
dataset_valid = load_dataset('shangzhu/ChemQA', split='valid')
dataset_test = load_dataset('shangzhu/ChemQA', split='test')
```

## Reference

[1] Fu, D., Khalighinejad, G., Liu, O., Dhingra, B., Yogatama, D., Jia, R., & Neiswanger, W. (2024). IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations.

[2] Guo, T., Guo, K., Nan, B., Liang, Z., Guo, Z., Chawla, N., Wiest, O., & Zhang, X. (2023). What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks. Advances in Neural Information Processing Systems, 36, 59662–59688.

[3] Edwards, C., Lai, T., Ros, K., Honke, G., Cho, K., & Ji, H. (2022). Translation between Molecules and Natural Language. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 375–413.

[4] Irwin, R., Dimitriadis, S., He, J., & Bjerrum, E. J. (2022). Chemformer: a pre-trained transformer for computational chemistry. Machine Learning: Science and Technology, 3(1), 015022.

## Citation

```BibTeX
@misc{chemQA2024,
  title={ChemQA: a Multimodal Question-and-Answering Dataset on Chemistry Reasoning},
  author={Shang Zhu and Xuefeng Liu and Ghazal Khalighinejad},
  year={2024},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/shangzhu/ChemQA}}
}
```

## Contact

shangzhu@umich.edu