Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges
Abstract
Offering a promising solution to the scalability challenges associated with human evaluation, the LLM-as-a-judge paradigm is rapidly gaining traction as an approach to evaluating large language models (LLMs). However, many open questions remain about the strengths, weaknesses, and potential biases of this paradigm. In this paper, we present a comprehensive study of the performance of various LLMs acting as judges. We leverage TriviaQA as a benchmark for assessing the objective knowledge reasoning of LLMs and evaluate them alongside human annotations, which we found to have high inter-annotator agreement. Our study includes 9 judge models and 9 exam-taker models -- both base and instruction-tuned. We assess the judge models' alignment across different model sizes, families, and judge prompts. Among other results, our research rediscovers the importance of using Cohen's kappa as a metric of alignment, as opposed to simple percent agreement, showing that judges with high percent agreement can still assign vastly different scores. We find that both Llama-3 70B and GPT-4 Turbo have excellent alignment with humans, but in terms of ranking exam-taker models, they are outperformed by both JudgeLM-7B and the lexical judge Contains, which have up to 34 points lower human alignment. Through error analysis and various other studies, including on the effects of instruction length and leniency bias, we hope to provide valuable lessons for using LLMs as judges in the future.
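To make the metric point concrete, here is a minimal sketch (our illustration with invented verdicts, not the paper's code or data) of how a maximally lenient judge can reach 80% percent agreement with humans while Cohen's kappa, which corrects for chance agreement, is zero:

```python
# Hypothetical binary verdicts (1 = answer judged correct) on 10 items.
# Percent agreement looks strong, but Cohen's kappa,
# kappa = (p_o - p_e) / (1 - p_e), shows agreement no better than chance.
from sklearn.metrics import cohen_kappa_score

human = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
judge = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # a judge that marks everything correct

percent_agreement = sum(h == j for h, j in zip(human, judge)) / len(human)
kappa = cohen_kappa_score(human, judge)

print(f"percent agreement: {percent_agreement:.0%}")  # 80%
print(f"Cohen's kappa:     {kappa:.2f}")              # 0.00
```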
Community
Can LLMs serve as reliable judges ⚖️?
We aim to identify the right metrics for evaluating judge LLMs and to understand their sensitivity to prompt guidelines, engineering, and specificity. Key findings:
🌟 Top Performers: Only GPT-4 and Llama-3 70B shine among the 9 judge models. However, they still fall short of inter-human annotator agreement.
📊 Evaluation Metric: Scores assigned by judges with 80%+ alignment with humans can be 20 points apart! Cohen's kappa is a superior metric.
⚖️ Ranking vs Scoring: The judge most aligned in scores is not necessarily the most discriminative. In some cases, judge models with low alignment, such as Contains (lexical match) and JudgeLM-7B, outperform better-aligned models in terms of ranking exam-taker models, because their biases are more systematic (see the sketch after this list).
🧩 Leniency: Judge LLMs tend to be more lenient than strict.
🎭 Vulnerability: Judge LLMs can be easily tricked by controlled responses like "Yes," "Sure," and "I don't know."
🎯 Controllability: It's not easy to steer large models, while smaller models get confused by overly detailed instructions.
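As a rough illustration of the ranking and vulnerability points above, a "Contains"-style lexical judge can be sketched in a few lines. This is our reading of such a baseline, not the paper's exact implementation; the example answers are hypothetical:

```python
def contains_judge(response: str, gold_aliases: list[str]) -> bool:
    """Mark a response correct if any gold answer string appears in it
    verbatim (case-insensitive). Deterministic, so its errors are systematic,
    which helps explain why it can rank exam-taker models well despite
    modest per-item alignment with humans."""
    response_lower = response.lower()
    return any(alias.lower() in response_lower for alias in gold_aliases)

# Unlike an LLM judge, it cannot be talked into leniency:
print(contains_judge("That would be Neil Armstrong.", ["Neil Armstrong"]))  # True
print(contains_judge("Yes.", ["Neil Armstrong"]))                           # False
```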