arxiv:2402.01306

KTO: Model Alignment as Prospect Theoretic Optimization

Published on Feb 2, 2024
Authors: Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, Douwe Kiela

Abstract

Kahneman & Tversky's prospect theory tells us that humans perceive random variables in a biased but well-defined manner; for example, humans are famously loss-averse. We show that objectives for aligning LLMs with human feedback implicitly incorporate many of these biases -- the success of these objectives (e.g., DPO) over cross-entropy minimization can partly be ascribed to them being human-aware loss functions (HALOs). However, the utility functions these methods attribute to humans still differ from those in the prospect theory literature. Using a Kahneman-Tversky model of human utility, we propose a HALO that directly maximizes the utility of generations instead of maximizing the log-likelihood of preferences, as current methods do. We call this approach Kahneman-Tversky Optimization (KTO), and it matches or exceeds the performance of preference-based methods at scales from 1B to 30B. Crucially, KTO does not need preferences -- only a binary signal of whether an output is desirable or undesirable for a given input. This makes it far easier to use in the real world, where preference data is scarce and expensive.
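
Although only the abstract is shown here, the objective it describes can be sketched compactly: a DPO-style implicit reward (the policy-to-reference log-ratio) is passed through a prospect-theoretic value function centered on a KL reference point, using only a per-example desirable/undesirable label. The snippet below is a minimal PyTorch sketch under those assumptions; the function name kto_loss, the batch-mean stand-in for the KL reference point, and the default hyperparameters (beta, lambda_d, lambda_u) are illustrative choices, not the authors' reference implementation.

```python
# Minimal sketch of a KTO-style loss on binary desirable/undesirable labels.
# All names and the batch-level KL estimate are illustrative assumptions.
import torch


def kto_loss(
    policy_logps: torch.Tensor,      # log pi_theta(y|x) for each example
    reference_logps: torch.Tensor,   # log pi_ref(y|x) for the same examples
    is_desirable: torch.Tensor,      # bool: True if y was labeled desirable for x
    beta: float = 0.1,               # sharpness of the value function
    lambda_d: float = 1.0,           # weight on desirable examples
    lambda_u: float = 1.0,           # weight on undesirable examples
) -> torch.Tensor:
    """Kahneman-Tversky-style optimization from a binary feedback signal."""
    # Implicit reward: log-ratio of policy to reference, as in DPO-style HALOs.
    rewards = policy_logps - reference_logps

    # Reference point z0: a crude, detached batch-level stand-in for the
    # KL(pi_theta || pi_ref) baseline, so it acts as a fixed anchor.
    z0 = rewards.mean().detach().clamp(min=0)

    # Prospect-theoretic value: desirable outputs are pushed above the
    # reference point, undesirable outputs below it, via a sigmoid.
    value = torch.where(
        is_desirable,
        lambda_d * torch.sigmoid(beta * (rewards - z0)),
        lambda_u * torch.sigmoid(beta * (z0 - rewards)),
    )

    # Maximizing expected value == minimizing (1 - value), averaged over the batch.
    return (1.0 - value).mean()


if __name__ == "__main__":
    # Toy usage with made-up log-probabilities for 4 (x, y) pairs.
    policy = torch.tensor([-12.3, -8.1, -15.0, -9.7])
    reference = torch.tensor([-12.0, -9.0, -14.2, -10.5])
    labels = torch.tensor([True, True, False, False])
    print(kto_loss(policy, reference, labels))
```

The practical point the abstract stresses follows from the signature above: unlike preference-based objectives, each training example needs only a single output and a desirable/undesirable flag, not a paired comparison.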

Community

Hey, amazing work :)
We've summarised this and a few other papers in our blog. Hope you like it.

  1. KTO: The infamous alignment algorithm
  2. OLMoE: Open Data, Weights, Code Mixture of Experts models
  3. Mamba in the LlaMA: Distilling from Transformers to Mamba
  4. PlanSearch: Improving Code Generation via Planning

https://datta0.substack.com/p/ai-unplugged-19-kto-for-model-alignment


Models citing this paper: 20

Datasets citing this paper: 4

Spaces citing this paper: 11

Collections including this paper: 4