# A Case Study: Language Modeling with Pairwise Preference

An autoregressive language model is described as a repeated application of the next-token conditional probability, as in

\begin{align} p(w_1, w_2, \ldots, w_T) = \prod_{t=1}^T p(w_t | w_{ \lt t}). \end{align}

A conditional autoregressive language model is exactly the same except that it is conditioned on another variable $X$:

\begin{align} p(w_1, w_2, \ldots, w_T | x) = \prod_{t=1}^T p(w_t | w_{ \lt t}, x). \end{align}

There are many different ways to build a neural network to implement the next-token conditional distribution. We do not discuss any of those approaches, as they are out of the course's scope. An interesting property of a language model is that it can be used for two purposes:

• Scoring a sequence: we can use $p(w_1, w_2, \ldots, w_T | x)$ to score an answer sequence $w$ given a query $x$.
• Approximately finding the best sequence: we can use approximate decoding to find $\arg\max_w p(w | x)$.
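These two uses can be illustrated with a minimal sketch. Here the "language model" is a hypothetical tabular conditional distribution over a three-token vocabulary; the query `"q1"` and all probabilities are made up for illustration:

```python
import math

def next_token_probs(prev, x):
    # Hypothetical next-token distributions p(w_t | w_{t-1}, x);
    # each row sums to one. A real model would use a neural network.
    table = {
        ("<bos>", "q1"): {"<eos>": 0.1, "yes": 0.7, "no": 0.2},
        ("yes", "q1"):   {"<eos>": 0.8, "yes": 0.1, "no": 0.1},
        ("no", "q1"):    {"<eos>": 0.6, "yes": 0.2, "no": 0.2},
    }
    return table[(prev, x)]

def log_prob(seq, x):
    # Scoring: chain rule gives log p(w_1..w_T | x) = sum_t log p(w_t | w_{<t}, x).
    lp, prev = 0.0, "<bos>"
    for w in seq:
        lp += math.log(next_token_probs(prev, x)[w])
        prev = w
    return lp

def greedy_decode(x, max_len=5):
    # Approximate argmax_w p(w | x): pick the most likely token at each step.
    seq, prev = [], "<bos>"
    for _ in range(max_len):
        probs = next_token_probs(prev, x)
        w = max(probs, key=probs.get)
        seq.append(w)
        prev = w
        if w == "<eos>":
            break
    return seq
```

Greedy decoding is only one of many approximate decoding schemes; it need not find the true argmax, but it uses the same next-token conditionals as scoring does.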

This allows us to perform causal inference and outcome maximization simultaneously. Consider the problem of query-based text generation, where the goal is to produce an open-ended answer $w$ to a query $x$. Because it is often impossible to give an absolute score to an answer $w$ given a query $x$, it is customary to ask a human annotator for a relative ranking between two (or more) answers $w_+$ and $w_-$ given a query $x$. Without loss of generality, let $w_+$ be the answer preferred over $w_-$. We assume that there exists a strict total order among all possible answers. That is,

• Irreflexive: $r(w|x) \lt r(w|x)$ cannot hold.
• Asymmetric: If $r(w|x) \lt r(w'|x)$, then $r(w|x) \gt r(w'|x)$ cannot hold.
• Transitive: If $r(w|x) \lt r(w'|x)$ and $r(w'|x) \lt r(w''|x)$, then $r(w|x) \lt r(w''|x)$.
• Connected: If $w \neq w'$, then either $r(w|x) \lt r(w'|x)$ or $r(w|x) \gt r(w'|x)$ holds.

In other words, we can enumerate all possible answers according to their (unobserved) ratings on a 1-dimensional line.
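Equivalently, any injective map from answers into the reals induces such a strict total order via the usual $\lt$ on real numbers. A small sketch, with entirely hypothetical latent ratings for three candidate answers, checks the four properties directly:

```python
from itertools import permutations

# Hypothetical latent ratings r(w|x) for three candidate answers to one query.
rating = {"w_a": 0.3, "w_b": 1.7, "w_c": -0.5}

def preferred(w1, w2):
    # True iff r(w1|x) < r(w2|x), i.e. w2 is strictly preferred over w1.
    return rating[w1] < rating[w2]

# Irreflexive: no answer is strictly preferred over itself.
assert all(not preferred(w, w) for w in rating)

# Asymmetric and connected: for each distinct pair, exactly one direction holds.
for w1, w2 in permutations(rating, 2):
    assert preferred(w1, w2) != preferred(w2, w1)

# Transitive: inherited from "<" on the reals.
for w1, w2, w3 in permutations(rating, 3):
    if preferred(w1, w2) and preferred(w2, w3):
        assert preferred(w1, w3)
```

Sorting answers by their ratings then lays them out on the 1-dimensional line described above.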

A non-causal approach. It is then relatively trivial to train this language model, assuming that we have a large amount of triplets

\begin{align} D=\left\{(x^1, w^1_+, w^1_-), \ldots, (x^N, w^N_+, w^N_-)\right\}. \end{align}

For each triplet, we ensure that the language model puts a higher probability on $w_+$ than on $w_-$ given $x$ by minimizing the following loss function:

\begin{align} L_{\mathrm{pairwise}}(p) = \frac{1}{N} \sum_{n=1}^N \max(0, m-\log p(w^n_+|x^n) + \log p(w^n_- | x^n)), \end{align}

where $m \in [0, \infty)$ is a margin hyperparameter. For each triplet, the loss inside the summation is zero if the language model puts a higher log-probability on $w_+$ than on $w_-$ by at least the margin $m$. This loss alone is however not enough to obtain a well-trained language model from which we can produce a high-quality answer, because we only have pairwise preference triplets over reasonable answers. A language model trained in this way is not encouraged to put low probabilities on gibberish. We avoid this issue by ensuring that the language model puts reasonably high probabilities on all reasonable answers, by additionally minimizing the following loss function:

\begin{align} L_{\mathrm{likelihood}}(p) = - \frac{1}{2N} \sum_{n=1}^N \left( \log p(w^n_+ | x^n) + \log p(w^n_- | x^n) \right), \end{align}

which corresponds to the so-called negative log-likelihood loss.
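Both losses can be sketched in plain Python, assuming the per-sequence log-probabilities $\log p(w^n_\pm | x^n)$ have already been computed by some model (the numbers in the test are made up):

```python
def pairwise_margin_loss(logp_pos, logp_neg, m=1.0):
    # (1/N) * sum_n max(0, m - log p(w+|x) + log p(w-|x)):
    # zero for a triplet once w+ beats w- by at least the margin m.
    n = len(logp_pos)
    return sum(max(0.0, m - lp + ln) for lp, ln in zip(logp_pos, logp_neg)) / n

def likelihood_loss(logp_pos, logp_neg):
    # Negative log-likelihood averaged over both answers of every triplet,
    # keeping probability mass on all reasonable answers.
    n = len(logp_pos)
    return -sum(lp + ln for lp, ln in zip(logp_pos, logp_neg)) / (2 * n)
```

In practice the two losses are minimized jointly, e.g. as a weighted sum, so that ranking the preferred answer higher does not come at the cost of assigning vanishing probability to both.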

A causal consideration. This approach works well under the assumption that the answer $w$ embeds only its content. This is unfortunately not the case: any answer is a combination of content and style, and the latter should not be the basis on which the answer is rated. For instance, one aspect of style is verbosity. A longer answer is often rated more highly, because of a human rater's subconscious bias that a better answer writer would be able to write a longer answer, although there is no reason why there should not be a better and more concise answer. This process can be described as the graph below, where $r$ is the rating and $s$ is the style:

The direct effect of $w$ on the rating $r$ is based on the content, but there is also a spurious correlation between $w$ and $r$ via the style $s$. For instance, $s$ could encode the verbosity, which affects both how $w$ is written and how a human rater perceives its quality and gives the rating $r$. In the naive approach above, the language model, as a scorer, will fail to distinguish between these two effects and capture both, which is clearly undesirable; a longer answer is not necessarily a better answer. In other words, a language model $p_0$ trained in the purely supervised way above will score $w$ highly for both causal and spurious (via $s$) reasons. An answer $w$ sampled from $p_0$ can then be considered dependent not only upon the question $x$ itself but also upon an unobserved style variable $s$.

Direct preference optimization[1] or unlikelihood learning[2]. We can resolve this issue by combining two ideas we have studied earlier: randomized controlled trials (RCTs) and inverse probability weighting (IPW). First, we sample two answers, $w$ and $w'$, from the model $p_0$ already trained with supervised learning above:

\begin{align} w, w' \sim p_0(w|x). \end{align}

These two answers (approximately) maximize the estimated outcome (rating) by capturing both the content and style. One interesting side-effect of imperfect learning and inference (generation) is that both of these answers would largely share the style. If we use $s'$ to denote that style, we can think of each answer as sampled from $w | x, s'$. With a new language model $p_1$ (potentially initialized from $p_0$), we can compute the rating after removing the dependence on the style $s$ by IPW:

\begin{align} \hat{r}(w|x) = \frac{p_1(w|x)}{p_0(w|x)}. \end{align}

This reminds us of the $\mathrm{do}$ operation, resulting in the following modified graph:

Of course, this score $\hat{r}$ does not mean anything yet, since $p_1$ has not been trained. We train $p_1$ by asking an expert to provide their preference between $w$ and $w'$. Without loss of generality, let $w$ be the preferred answer over $w'$, that is, $w_+=w$ and $w_-=w'$. We train $p_1$ by minimizing

\begin{align} L'_{\mathrm{pairwise}}(p_1) = \frac{1}{N} \sum_{n=1}^N \max\left(0, m-\log \frac{p_1(w^n_+|x^n)}{p_0(w^n_+|x^n)} + \log \frac{p_1(w^n_- | x^n)}{p_0(w^n_- | x^n)} \right), \end{align}

where we assume we have $N$ pairs and $m$ is a margin as before. It is possible to replace the margin loss with another loss function, such as a log loss or a linear loss. This procedure encourages $p_1$ to capture only the direct (causal) effect of the answer on the rating, dissecting out the indirect (spurious) effect via the style $s$. Once training is done, we use $p_1$ to produce a better answer, which depends less on the spurious correlation between the answer and the rating via the style. Because this procedure is extremely implicit about the existence of and the dependence on the style, it can be beneficial to repeat it for multiple rounds in order to further remove the effect of the spurious correlation and improve the quality of a generated answer [3].
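This modified pairwise loss can also be sketched in plain Python, again assuming the per-sequence log-probabilities under $p_1$ and $p_0$ are given (all numbers in the test are hypothetical):

```python
def ipw_pairwise_loss(logp1_pos, logp0_pos, logp1_neg, logp0_neg, m=1.0):
    # (1/N) * sum_n max(0, m - log[p1(w+|x)/p0(w+|x)] + log[p1(w-|x)/p0(w-|x)]):
    # the margin loss applied to IPW-corrected scores log p1 - log p0.
    losses = []
    for a, b, c, d in zip(logp1_pos, logp0_pos, logp1_neg, logp0_neg):
        losses.append(max(0.0, m - (a - b) + (c - d)))
    return sum(losses) / len(losses)
```

At initialization, when $p_1 = p_0$, both log-ratios vanish and every term equals the margin $m$; training then pushes the ratio for $w_+$ above that for $w_-$, rather than pushing the raw probability of $w_+$ up, which is what removes the shared dependence on the style.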

## General references

Cho, Kyunghyun (2024). "A Brief Introduction to Causal Inference in Machine Learning". arXiv:2405.08793 [cs.LG].

## References

1. "Direct preference optimization: Your language model is secretly a reward model" (2024). Advances in Neural Information Processing Systems 36.
2. "Neural text generation with unlikelihood training" (2019). arXiv preprint arXiv:1908.04319.
3. "Training language models to follow instructions with human feedback" (2022). Advances in Neural Information Processing Systems 35.