|
--- |
|
license: cc-by-4.0 |
|
configs: |
|
- config_name: validation |
|
data_files: |
|
- split: validation |
|
path: ScienceAgentBench.csv |
|
language: |
|
- en |
|
--- |
|
|
|
## ScienceAgentBench |
|
|
|
The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about their true capabilities. |
|
In this work, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation. |
|
To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery: |
|
- To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them. |
|
- We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs. |
|
- Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility. |
|
|
|
## Benchmark Access |
|
|
|
To prevent benchmark data contamination, we only provide the annotation sheet on Hugging Face, which includes all the necessary *inputs* to run an agent. |
|
|
|
To evaluate the agent outcomes, i.e., the generated code, please follow the instructions in our [GitHub repository](https://github.com/OSU-NLP-Group/ScienceAgentBench). |
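

As a quick sanity check, the annotation sheet can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative only; the repository ID is a placeholder and should be replaced with this dataset's actual Hugging Face ID.

```
# Minimal sketch (not an official loader): read the annotation sheet with
# the Hugging Face `datasets` library. The repository ID below is a
# placeholder; replace it with this dataset's actual Hugging Face ID.
from datasets import load_dataset

tasks = load_dataset("<hf-org>/ScienceAgentBench", split="validation")

print(len(tasks))           # number of tasks in the annotation sheet
print(tasks.column_names)   # fields described under "Benchmark Structure" below
``` |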
|
|
|
## Benchmark Structure |
|
|
|
- "instance_id" (str): unique id for each task |
|
- "domain" (str): scientific discipline of each task |
|
- "subtask_categories" (str): sub-tasks involved in each task |
|
- "github_name" (str): the original github repository each task is adapted from |
|
- "task_inst" (str): task goal description and output formatting instruction |
|
- "domain_knowledge" (str): expert-annotated information about the task |
|
- "dataset_folder_tree" (str): string representation of dataset directory structure for each task |
|
- "dataset_preview" (str): string representation of the first few examples/lines in dataset files used in each task |
|
- "src_file_or_path" (str): source program location in the original github repository that is adapted |
|
- "gold_program_name" (str): name of annotated program (reference solution) for each task |
|
- "output_fname" (str): output location to save the generated program for each task |
|
- "eval_script_name" (str): name of evaluation script to check success criteria for each task |
|
|
|
## Licensing Information |
|
|
|
Most tasks in ScienceAgentBench are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. |
|
We retain the original licenses for tasks adapted from [rasterio/rasterio](https://github.com/rasterio/rasterio?tab=License-1-ov-file) (Instance IDs: 32, 46, 53, 54, 84) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer?tab=License-1-ov-file) (Instance ID: 3). |
|
|
|
## Disclaimer |
|
|
|
Our benchmark is constructed by adapting open-source code and data, and we respect their creators' ownership and intellectual property. In Appendix I of our paper, we have made our best effort to cite the original papers, list the repositories, and provide their licenses. Still, we acknowledge that two repositories ([rasterio/rasterio](https://github.com/rasterio/rasterio) and [hackingmaterials/matminer](https://github.com/hackingmaterials/matminer)) are copyrighted, and we believe their terms of use are compatible with our research purpose. We welcome requests from the original authors to modify or remove the relevant tasks adapted from those two repositories if needed. |
|
|
|
## Citation |
|
|
|
If you find our code and data useful, please consider citing our paper: |
|
|
|
``` |
|
@misc{chen2024scienceagentbenchrigorousassessmentlanguage, |
|
title={ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery}, |
|
author={Ziru Chen and Shijie Chen and Yuting Ning and Qianheng Zhang and Boshi Wang and Botao Yu and Yifei Li and Zeyi Liao and Chen Wei and Zitong Lu and Vishal Dey and Mingyi Xue and Frazier N. Baker and Benjamin Burns and Daniel Adu-Ampratwum and Xuhui Huang and Xia Ning and Song Gao and Yu Su and Huan Sun}, |
|
year={2024}, |
|
eprint={2410.05080}, |
|
archivePrefix={arXiv}, |
|
primaryClass={cs.CL}, |
|
url={https://arxiv.org/abs/2410.05080}, |
|
} |
|
``` |