---
inference: false
language:
- en
tags:
- instruction-finetuning
task_categories:
- text-generation
---
# xFinder-llama38it

## Model Details
xFinder-llama38it is a model specifically designed for key answer extraction in large language models (LLMs). It is trained by fine-tuning Llama3-8B-Instruct.
- Developed by: IAAR
- Fine-tuned from Model: Llama3-8B-Instruct
### Model Sources
- Repository: https://github.com/IAAR-Shanghai/xFinder
- Paper: https://arxiv.org/abs/2405.11874
## Uses
xFinder is primarily used to enhance the evaluation of LLMs by accurately extracting key answers from their outputs. It addresses the limitations of traditional regular expression (RegEx)-based extraction methods, which often fail to handle the diverse and complex outputs generated by LLMs. xFinder improves the reliability of model assessments across various tasks.
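As a rough illustration, the sketch below loads xFinder-llama38it with Hugging Face Transformers and asks it to extract a key answer from a sample LLM response. The Hub ID and the prompt layout (question, LLM output, answer range) are assumptions made for demonstration only; the exact extraction prompt template used at inference time is defined in the official repository.

```python
# A minimal sketch, assuming the model is hosted as "IAAR-Shanghai/xFinder-llama38it"
# and that a plain question / LLM output / answer-range prompt is acceptable.
# The real prompt template is defined in the official repository and may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "IAAR-Shanghai/xFinder-llama38it"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative extraction request: the question, the raw LLM response to be
# judged, and the range of valid answers the key answer must come from.
prompt = (
    "Question: What is the capital of France?\n"
    "LLM output: The capital of France is Paris, of course.\n"
    "Answer range: [A] Paris [B] Lyon [C] Marseille\n"
    "Extracted key answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

The extracted key answer can then be compared directly against the gold answer, which is what makes this step more reliable than RegEx matching on free-form model output.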
## Training Details
xFinder-llama38it is fine-tuned from Llama3-8B-Instruct. The training data consists of approximately 26.9K samples from the Key Answer Finder (KAF) dataset. This dataset is designed to enhance the accuracy and robustness of key answer extraction and covers a variety of tasks. It has been meticulously annotated by GPT-4 and human experts to ensure high-quality training and evaluation. For details of the dataset construction, see the paper.
## Evaluation
xFinder is evaluated on the fully human-annotated test and generalization sets of the KAF dataset. The results demonstrate significant improvements in extraction accuracy and robustness compared to traditional methods. For more details, please refer to the paper and try it out using the code in the repository linked above.
## Citation
```bibtex
@article{xFinder,
  title={xFinder: Robust and Pinpoint Answer Extraction for Large Language Models},
  author={Qingchen Yu and Zifan Zheng and Shichao Song and Zhiyu Li and Feiyu Xiong and Bo Tang and Ding Chen},
  journal={arXiv preprint arXiv:2405.11874},
  year={2024}
}
```