## How to know when to gamble

Here’s something neat I found in Daniel Kahneman’s book. Suppose you’re offered $N$ opportunities to take a particular bet; in each case, you gain $r$ with probability $p$, or lose $l$ with probability $1-p$ (both $r$ and $l$ are positive quantities). We can label each outcome of such a trial by the number of bets $k$ that you win; the monetary value $V(k)$ associated with a particular outcome is
$$V(k) = kr - (N - k)\,l.$$

The expected monetary outcome for a single bet ($N=1$) in particular is then $E_0 = E[V] = pr - (1-p)l$ . Assume it’s nominally a good deal, in the sense of having positive expected value:
$$E_0 > 0.$$

The expected return from $N$ bets, all independent and identical, is $NE_0$, so (up to perhaps a fixed threshold, set by my total initial funds and the probability of my losses exceeding them), if a single bet is a good deal for me, $N$ bets are also a good deal, with a constant increase in expected wealth per bet. And if a single bet is a bad deal, $E_0 < 0$, then $N$ bets are bad as well.
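To make this concrete, here’s a minimal Python sketch. The numbers (win $r = \$200$ or lose $l = \$100$ at even odds) are my own illustrative choices, not from the text:

```python
p, r, l = 0.5, 200, 100        # hypothetical bet: win $200 w.p. 1/2, else lose $100

E0 = p * r - (1 - p) * l       # expected monetary value of a single bet
assert E0 > 0                  # nominally a good deal

def expected_return(N):
    # Expectation is linear: N independent, identical bets return N * E0 on average.
    return N * E0
```

With these numbers, $E_0 = 50$, so a hundred bets carry an expected return of $5{,}000$.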

Now suppose the value you assign to each outcome is not just a linear function of the expected wealth. In particular, suppose you’re loss averse: a loss of a certain magnitude brings more pain than a gain of the same magnitude brings joy. We can represent by $U$ the “utility”, or effective value, that you actually assign to a particular event. In general this won’t be equal to the monetary value, but I’m going to assume it is at least in principle expressible as a dollar amount. Returning to the gambling experiment from above, we can model loss aversion by weighting all negative outcomes by a factor $\alpha > 1$:
$$U(k) = \begin{cases} V(k) & V(k) \geq 0 \\ \alpha\, V(k) & V(k) < 0 \end{cases}$$

where the label $k$ denotes the total number of wins.
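As a sketch, the monetary value and the loss-averse utility can be written out like this (the bet parameters are hypothetical; $\alpha = 2$ is one illustrative choice of loss-aversion factor):

```python
def V(k, N, r=200, l=100):
    """Monetary value after winning k of N bets: k wins of r, (N - k) losses of l."""
    return k * r - (N - k) * l

def U(k, N, alpha=2.0, r=200, l=100):
    """Loss-averse utility: negative monetary outcomes are amplified by alpha > 1."""
    v = V(k, N, r, l)
    return v if v >= 0 else alpha * v
```

For a single bet, a win feels like $+200$ but a loss feels like $-200$ rather than $-100$, which is exactly the asymmetry described above.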

Now consider a bet which, if taken only once, has a positive expected value, but which you still regard as a bad deal:
$$E_0 > 0 \quad \text{and} \quad E[U] < 0.$$

Using the same notation as above, the second condition means
$$pr - \alpha (1-p)\, l < 0.$$

Certainly you ought not to take the single bet. But what if $N$ is greater than one – what if it’s much larger than one? Crucially, your utility will be measured based on the outcome of the whole chain of bets – not one bet at a time.

To answer this, we just need to count how likely we are to win $k$ times, and work out what utility we derive in each such case. Since the bets are independent, the central limit theorem tells us that the distribution of $k$ values from $N$ bets is, for large $N$, approximately a normal distribution with mean $Np$ and standard deviation $\sqrt{Np(1-p)}$ – in particular, a width proportional to $\sqrt{N}$.
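A quick way to check this convergence is to compare the exact binomial probabilities against the normal curve directly (the values $p = 1/2$, $N = 100$ here are hypothetical):

```python
from math import comb, sqrt, exp, pi

p, N = 0.5, 100          # hypothetical win probability and number of bets

def p_binomial(k):
    """Exact probability of winning k of N independent bets."""
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Central-limit approximation: normal with mean Np and std sqrt(Np(1-p)).
mu, sigma = N * p, sqrt(N * p * (1 - p))

def p_normal(k):
    return exp(-((k - mu) ** 2) / (2 * sigma**2)) / (sigma * sqrt(2 * pi))
```

At $N = 100$ the two already agree to roughly three decimal places near the mean.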

Each outcome can be labeled by its $k$ value. In Figure 1, I’ve plotted the probability $p(k)$ of winning a particular number $k$ of bets for a normal distribution with a mean value that’s small compared to the distribution width – this qualitatively describes the distribution of outcomes when the number of bets $N$ is small. The red shading indicates outcomes where the nominal monetary value is negative. Below, I’ve also plotted the “reward density”, $R(k) = p(k) \cdot U(k)$, which indicates how much a particular outcome $k$ contributes to the expected utility of the experiment. In other words, the area under the $R(k)$ curve equals, on average, how much utility one receives from participating in the $N$-bet experiment:

Red shading indicates areas where $R(k) < 0$, and blue shading, $R(k) > 0$. There are two curves plotted here. The dashed curve is the reward density for a non-loss-averse individual, for whom $U$ and $V$ are equal everywhere. Since the red area on the left side of the graph is a bit smaller than the blue area on the right side, this individual will, on average, come out ahead from participating. The solid curve represents the loss-averse individual, whose negative outcomes are amplified by a factor $\alpha = 2$. For them, the red area on the left outweighs the blue on the right, and on average they consequently fare poorly.

The next plot indicates what happens when we allow $N$ to grow. The middle of the distribution gets pulled out to the right, putting less area in the red. The fraction of events with negative outcomes decreases, because the mean of the distribution grows faster than the width; as a result, the relevance of negative-value outcomes is sharply diminished. In a practical sense: the degree to which loss aversion actually matters in evaluating a sequence of bets decreases as $N$ grows. What’s more, it decreases fast – that residual red area under the $R(k)$ curve is exponentially small in $N$. So, assuming you know you’re in a large-$N$ situation, the best strategy is to more or less neglect any loss-averse tendencies, knowing they’re not relevant to the overall outcome.
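This shrinking penalty can be computed exactly by summing the loss-averse utility over the binomial distribution of wins. The sketch below uses hypothetical parameters (win $200 or lose $100 at even odds, $\alpha = 2$, all my own illustrative choices):

```python
from math import comb

def expected_utility(N, p=0.5, r=200, l=100, alpha=2.0):
    """Exact E[U] for N bets: sum utility over the binomial distribution of wins k."""
    total = 0.0
    for k in range(N + 1):
        v = k * r - (N - k) * l          # monetary value of winning k of N bets
        u = v if v >= 0 else alpha * v   # loss-averse utility
        total += comb(N, k) * p**k * (1 - p)**(N - k) * u
    return total

# Expected utility per bet for increasing N; with these numbers E0 = 50.
per_bet = [expected_utility(N) / N for N in (1, 10, 100)]
```

With these parameters the per-bet utility is exactly zero at $N = 1$ (the single bet is a wash for the loss-averse gambler) and climbs toward $E_0 = 50$ as $N$ grows, with only a tiny residual penalty left by $N = 100$.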

In a practical sense, what should one make of this? It’s certainly a contrived scenario – in particular, the assumption of independence seems quite dubious if we’re considering applying this to human affairs. One should expect all humans involved in any sequence of transactions to take past information into account. But I think it’s a very helpful model for illustrating the qualitative differences between long term and short term goals. In the second example above, each particular bet ‘hurts’ the loss-averse gambler as much as in the first example, so if they’re taking these outcomes one at a time, they may find it quite tempting to quit early and cut their losses – leading to worse financial outcomes on average. A gambler who can distance themselves psychologically from the individual bets and teach themselves to feel outcomes only over longer timescales will be able to tolerate larger values of $N$, and consequently enjoy higher expected profit. It pays, then, to keep one’s eyes on the horizon.

What sorts of human endeavours might be modeled this way? Off the top of my head:

• First dates (not independent, but should be correlated in your favor!)
• Job applications

And where will this intuition fail?

• Actual gambling (negative expected value)
• Sunk-cost situations, i.e. repeatedly trying to salvage a particular failing project – in this case the ‘bets’ are your repeated attempts to salvage the situation, which may be highly correlated if a single causal factor is constant.

There’s a lot more on ‘thinking like a trader’ and the dangers of normal distributions in Nassim Taleb’s book *The Black Swan* (I’ve read about half so far, recommended).