
This is the backbone SFT model used in the paper "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging".

Detailed training and evaluation information is available at https://api.wandb.ai/links/merge_exp/2qs92v6f.

For further details about this model, please refer to our paper.
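The card does not declare a library tag, but since this checkpoint is a Llama-2-7B fine-tune stored in Safetensors, it should load with the standard 🤗 Transformers API. A minimal sketch (assuming `transformers` and `torch` are installed; the loading call is a standard usage pattern, not taken from the paper):

```python
model_id = "miulab/llama2-7b-alpaca-sft-10k"


def load_model(model_id: str = model_id):
    """Load the SFT checkpoint with the generic causal-LM classes.

    Assumption: the repo follows the standard Llama-2 layout, so
    AutoTokenizer / AutoModelForCausalLM can resolve it. Downloading
    requires accepting the Llama-2 license on the Hugging Face Hub.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # torch_dtype="auto" keeps the checkpoint's native FP16 weights.
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    return tokenizer, model
```

For the DogeRM merging procedure itself, see the paper and the W&B report linked above; this snippet only loads the backbone SFT model.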

If you find this model useful, please cite our paper:

```bibtex
@article{lin2024dogerm,
  title={DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging},
  author={Lin, Tzu-Han and Li, Chen-An and Lee, Hung-yi and Chen, Yun-Nung},
  journal={arXiv preprint arXiv:2407.01470},
  year={2024}
}
```
Model size: 6.74B parameters (FP16, Safetensors).
Model: miulab/llama2-7b-alpaca-sft-10k