---
license: apache-2.0
---
|
|
|
|
|
## Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models |
|
|
|
Paper Link: https://arxiv.org/abs/2406.11736 |
|
|
|
Code Repo: https://github.com/xufangzhi/ENVISIONS |
|
|
|
|
|
|
|
## 🔥 News
|
|
|
- 🔥🔥🔥 We have released the final checkpoints after self-training!
|
|
|
|
|
## Note |
|
The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.
|
|
|
|
|
## Prompt for Zero-shot Evaluation |
|
|
|
```markdown
Generate the logical representation for the given context and question.
The context is: <context>
The question is: <question>
The logical representation is:
```
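
For reference, below is a minimal sketch of filling in this prompt and running it against a checkpoint with Hugging Face Transformers. The model id, context, and question are placeholders, not actual values from this repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the released self-trained checkpoint you want to evaluate.
MODEL_ID = "path/to/self-trained-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

context = "..."   # fill in the task context
question = "..."  # fill in the question

# Fill the zero-shot prompt template shown above.
prompt = (
    "Generate the logical representation for the given context and question.\n"
    f"The context is: {context}\n"
    f"The question is: {question}\n"
    "The logical representation is:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, i.e., the logical representation.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```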
|
|
|
|
|
## Citation |
|
If you find this work helpful, please cite the paper.
|
```
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}
```
|
|
|
|