# Model Card for SciLitLLM1.5
SciLitLLM1.5 adapts a general large language model for effective scientific literature understanding. Starting from the Qwen2.5-7B/14B models, SciLitLLM1.5-7B/14B is trained with a hybrid strategy that integrates continual pre-training (CPT) and supervised fine-tuning (SFT) to simultaneously infuse scientific domain knowledge and enhance instruction-following capabilities on domain-specific tasks.
In this process, we identify two key challenges: (1) constructing high-quality CPT corpora, and (2) generating diverse SFT instructions. We address these challenges through a meticulous pipeline, including PDF text extraction, parsing content error correction, quality filtering, and synthetic instruction creation.
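To illustrate the quality-filtering step, a minimal heuristic filter might look like the sketch below. The function name and thresholds are hypothetical, chosen only to show the idea of discarding short or symbol-heavy chunks produced by PDF parsing; they are not the paper's actual implementation.

```python
# Hypothetical sketch of a heuristic quality filter for CPT corpus
# construction; thresholds are illustrative, not the paper's values.
def passes_quality_filter(text: str, min_chars: int = 200, min_alpha_ratio: float = 0.6) -> bool:
    """Keep a text chunk only if it is long enough and mostly alphabetic,
    which discards number tables, page headers, and parsing debris."""
    if len(text) < min_chars:
        return False
    # Ratio of letters and whitespace to total characters
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / len(text) >= min_alpha_ratio
```

A real pipeline would combine several such signals (and, as in the paper, model-based quality scoring) rather than a single ratio.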
Applying this strategy, we present SciLitLLM-7B and SciLitLLM-14B, models specialized in scientific literature understanding. We observe promising performance gains: an average improvement of 4.0% on SciAssess and 10.1% on SciRIFF compared to the leading LLMs under 10B parameters. Notably, SciLitLLM-7B even outperforms the 70B-parameter Llama3.1 and Qwen2.5 on SciRIFF, and SciLitLLM-14B achieves leading results on both benchmarks, surpassing other open-source LLMs. Further ablation studies demonstrate the effectiveness of each module in our pipeline.
See the paper for more details and the GitHub repository for the data-processing code.
## Requirements
Since SciLitLLM is based on Qwen2.5, we advise you to install `transformers>=4.37.0`; otherwise you might encounter the following error:

```
KeyError: 'qwen2'
```
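To verify your environment before loading the model, a quick version check along these lines can help. `supports_qwen2` is an illustrative helper written for this card, not part of the `transformers` library:

```python
# Check that a transformers version string is recent enough for the Qwen2
# architecture, which was added in transformers 4.37.0.
def supports_qwen2(ver: str) -> bool:
    """Return True if `ver` (e.g. "4.40.1") is at least 4.37."""
    major, minor = (int(p) for p in ver.split(".")[:2])
    return (major, minor) >= (4, 37)
```

You can feed it your installed version, e.g. `supports_qwen2(transformers.__version__)`.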
## Quickstart
Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "Uni-SMART/SciLitLLM-1.5/7B/",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Uni-SMART/SciLitLLM-1.5/7B/")

prompt = "Can you summarize this article for me?\n <ARTICLE>"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the message list into a single prompt string via the chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only newly generated tokens are decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
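For reference, `apply_chat_template` with Qwen-family tokenizers renders messages in the ChatML format. The sketch below is a simplified rendition of that template, under the assumption that SciLitLLM inherits Qwen2.5's chat template unchanged; it is for understanding the prompt layout, not a replacement for the tokenizer's own method.

```python
# Simplified rendition of the ChatML layout used by Qwen-family chat
# templates (assumption: SciLitLLM inherits Qwen2.5's template unchanged).
def render_chatml(messages, add_generation_prompt=True):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)
```

For example, a single user message "Hi" renders as `<|im_start|>user\nHi<|im_end|>\n<|im_start|>assistant\n`.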
## Citation
If you find our work helpful, please consider citing our paper:
```bibtex
@misc{li2024scilitllmadaptllmsscientific,
      title={SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding},
      author={Sihang Li and Jin Huang and Jiaxi Zhuang and Yaorui Shi and Xiaochen Cai and Mingjun Xu and Xiang Wang and Linfeng Zhang and Guolin Ke and Hengxing Cai},
      year={2024},
      eprint={2408.15545},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2408.15545},
}
```