TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
This repository provides TruthX models for the paper "TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space".
TruthX is an inference-time method that elicits the truthfulness of LLMs by editing their internal representations in truthful space, thereby mitigating hallucinations. On the TruthfulQA benchmark, TruthX yields an average improvement of 20% in truthfulness across 13 advanced LLMs.
(Figure: TruthfulQA MC1 accuracy of TruthX across 13 advanced LLMs.)
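As a rough intuition for what editing internal representations at inference time looks like, the sketch below adds a placeholder "truthful direction" to the hidden states of a few decoder layers via forward hooks. This is not the TruthX implementation (TruthX learns its edits in a truthful latent space; see the paper and GitHub repo); the model name, layer choice, scaling factor, and zero-initialized direction are all illustrative assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: add a placeholder "truthful direction" to the hidden
# states of a few decoder layers at inference time via forward hooks.
# TruthX itself learns its edits in a truthful latent space; see the paper.
model_name = "meta-llama/Llama-2-7b-chat-hf"  # any causal LM; name is illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

edit_strength = 1.0  # hypothetical scaling factor
# Placeholder direction; a learned editing vector would be used in practice.
truthful_direction = torch.zeros(model.config.hidden_size, dtype=model.dtype)

def edit_hidden_states(module, inputs, output):
    # Decoder layers may return a tuple whose first element is the hidden states.
    hidden = output[0] if isinstance(output, tuple) else output
    edited = hidden + edit_strength * truthful_direction.to(hidden.device, hidden.dtype)
    return (edited,) + output[1:] if isinstance(output, tuple) else edited

# Register the edit on a few middle layers (the layer choice is illustrative).
for layer in model.model.layers[12:16]:
    layer.register_forward_hook(edit_hidden_states)
```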
This repo provides TruthX models trained on a variety of LLMs:
- Llama-1-7B, Alpaca-7B
- Llama-2-7B, Llama-2-7B-Chat, Vicuna-7B-v1.5
- Mistral-7B-v0.1, Mistral-7B-Instruct-v0.1, Mistral-7B-Instruct-v0.2
- Baichuan2-7B-Base, Baichuan2-7B-Chat
- ChatGLM3-6B-Base, ChatGLM3-6B
Please refer to the GitHub repo and our paper for more details.
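The exact usage is documented in the GitHub repo; the snippet below is only a minimal loading sketch, assuming the TruthX-edited checkpoints are published as standard transformers checkpoints with custom modeling code. The repo id shown is an example, not a confirmed name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example repo id; check the GitHub repo for the exact checkpoint names.
model_id = "ICTNLP/Llama-2-7b-chat-TruthX"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # assumes the TruthX editing module ships as custom code
    torch_dtype=torch.float16,
).cuda()

question = "What are the benefits of eating an apple a day?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```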
License
Model weights and the inference code are released under the GNU General Public License v3.0 (GPLv3).
Citation
If this repository is useful to you, please cite it as:
@misc{zhang2024truthx,
      title={TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space},
      author={Shaolei Zhang and Tian Yu and Yang Feng},
      year={2024},
      eprint={2402.17811},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2402.17811}
}
If you have any questions, feel free to contact zhangshaolei20z@ict.ac.cn.