Utility Functions and Decision Analysis
(2001-07-04)
What is the concept of "utility", as used in decision analysis?
How do utilities differ from expectations? How are utilities used?
-
A utility is a numerical rating assigned to every possible outcome
a decision maker may be faced with.
(In a choice between several alternative prospects,
the one with the highest utility is always preferred.)
To qualify as a true utility scale, however, the rating must be such that
the utility of any uncertain prospect is equal to the expected value
(the mathematical expectation) of the utilities of all its possible outcomes
(which could be either "final" outcomes or uncertain prospects themselves).
When decisions are made by a so-called rational agent, whose preferences are transitive
(if A is preferred to B and B to C, then A must be preferred to C), it should be
clear that some numerical scale can be devised to rate any possible outcome,
"simply" by comparing and ranking all the outcomes involved.
Determining equivalence in money terms may be helpful in such a systematic process
but it's not theoretically indispensable.
What may be less clear, however, is how to devise such a rating system so that
it would possess the above fundamental property required of a utility scale.
One theoretical way to do so is to compare prospects and/or final outcomes
to tickets entitling the holder to a chance at winning some jackpot,
which is at least as valuable as any outcome under consideration.
A ticket with a face value of 75% means a chance of winning the jackpot
with a probability of 0.75 and it will be assigned a utility of 0.75.
Anything which is estimated to be just as valuable as such a ticket (no more, no less)
will be assigned a utility of 0.75 as well.
The scale so defined does have the property required of utility scales.
Consider, for example, a prospect which may have one of two outcomes:
- The first outcome has a probability of 0.3 and a utility of 0.6
(it could be a ticket with a 60% face value).
- The second outcome has a probability of 0.7 and a utility of 0.2
(it could be a ticket with a 20% face value).
When these two outcomes actually consist of lottery tickets, the whole thing
is equivalent
(think long and hard about this) to having a chance
to win the jackpot with probability
0.3 × 0.6  +  0.7 × 0.2  =  0.32 .
The prospect has therefore, by definition, a utility of 0.32,
and we do observe that the result has been computed with the same rule as a
mathematical expectation.
It would be so in any other case involving either lottery tickets or
things/situations previously assigned a utility
(by direct or indirect comparisons with such tickets).
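To make the bookkeeping concrete, here is a minimal Python sketch of the above rule
(the function name prospect_utility is ours; the numbers are those of the two-outcome example):

    # Utility of an uncertain prospect = expected utility of its outcomes.
    # Outcomes are (probability, utility) pairs; on the "ticket" scale, each
    # utility is the probability of winning the reference jackpot.
    def prospect_utility(outcomes):
        return sum(p * u for p, u in outcomes)

    # The two-outcome example discussed above:
    print(prospect_utility([(0.3, 0.6), (0.7, 0.2)]))   # 0.32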
The utilities introduced above are between 0 and 1,
but no such restriction is in fact required.
The key observation is that we may either translate or rescale a utility scale
without affecting at all the decisions it implies:
Each side of every comparison is translated or rescaled the same way, which does
not affect inequalities as long as the scaling factor is positive.
In particular, we may keep the same utility scale if we're faced with an outcome
more valuable than whatever jackpot we first considered.
If that jackpot is estimated to be just as desirable as a chance of
winning the bigger prize with probability p, we may assign a utility 1/p to the
bigger prize (and this, of course, is larger than 1).
Similarly, the original "ticket" scale may have to be extended to assign
negative utilities to certain undesirable situations.
Considering such a situation "in context",
as an outcome of a prospect whose other outcomes are quite positive, allows
the semi-direct use of the "ticket" scale to evaluate its negative utility.
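A quick numerical illustration of that invariance (the particular prospects and the
scaling constants below are arbitrary choices of ours):

    # Translating or rescaling a utility scale (u -> a*u + b, with a > 0)
    # leaves every preference between prospects unchanged.
    a, b = 3.0, -2.0                      # any positive a and any b will do
    g1 = [(0.3, 0.6), (0.7, 0.2)]         # (probability, utility) pairs
    g2 = [(0.5, 0.5), (0.5, 0.1)]

    def eu(g):
        return sum(p * v for p, v in g)

    def rescaled(g):
        return [(p, a * v + b) for p, v in g]

    print(eu(g1) > eu(g2))                        # True  (0.32 > 0.30)
    print(eu(rescaled(g1)) > eu(rescaled(g2)))    # True  (same verdict)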
It should be stressed that, even when there is no such thing as a "top prize",
the utilities of all prospects are necessarily bounded.
(Recall the difference between a maximum, which is achieved in at least one
case, and an upper limit, which may not be.
Utilities have an upper limit, not necessarily a maximum.)
This may be visualized by considering that the utility function of money,
which is normally nondecreasing (!), may either have an asymptote or be constant above
a certain point.
For a proof that utilities must be bounded,
see our discussion of the St. Petersburg Paradox, below...
In real life, utilities are not linearly related to money values
(or else the lotteries would go out of business), which is another way to say that
the mathematical expectation of a monetary gamble need not be the proper
utility measure to use. The monetary expectation is only a special
example of a utility,
which is mathematically acceptable but not at all realistic.
It is, unfortunately, given in elementary texts (which do not introduce the utility
concept) as the sole basis for a rational analysis of gambling decisions.
This is clearly not so in practice:
For example, you may be willing to pay one dollar for
one (unfair) chance in 2,000,000 at $1,000,000,
but very few people (if any) would pay even
$499,999 for a one-in-two chance at $1,000,000.
(However, someone might take that last bet in a very special situation
where an immediate gain of $1,000,000 would make a lifelong dream possible,
whereas the loss of even half a million would not be considered nearly
as significant in the long run.)
The rational basis for such choices lies in the utilities involved.
Before you analyze choices, you have to determine the relevant "utility curve" carefully
when it comes to actual possible outcomes: If your current wealth is W, what would be the
exact utility rating to you of a total wealth equal to W, W-1, W-499999, or W+1000000?
How does that compare to nonmonetary things like the loss of a limb? Above or below
the knee? What's a relationship or a marriage worth? What about social status?
Recognition? Public ridicule? Will you go out naked for $10,000, for $10,
or would someone have to pay you not to expose yourself every day?
Everything that carries any weight at all in your choices has to be assigned some
utility on your own personal scale, which you may only build by introspection or,
better, retrospection (recalling relevant past choices).
In some cases, comparisons with the ubiquitous money scale may help.
Although the utility function (u) which gives utility as a function of money
(total wealth) is not normally a linear function, it may have a simple
mathematical form under certain common assumptions (see below).
One caveat is that nonmonetary gratifications often play a role in actual choices which
seem based solely on monetary exchanges:
There's some playful element in any lottery, which increases the appeal of
purchasing a lottery ticket. Lottery operators know this very well and they design
their lottery "games" with this in mind.
Note also that it's the entire situation you've reached at some point that is
assigned a utility rating, not the various components of that situation
(money, health, happiness, etc.).
It's worth noting that if you assume that your attitude towards money does
not depend on how much of it you have right now (which is probably true within limits),
then the monetary part of your own utility function u
must be (up to irrelevant rescaling) an exponential function of your wealth.
(It could also be linear, but this is usually disallowed on the grounds that a proper
utility function must be bounded when the stakes are potentially unbounded.)
Indeed, this assumption states that your
utility function u is such that the quantity h is irrelevant in your preference between
something of utility p u(a+h) + (1-p) u(b+h) and
something of utility q u(c+h) + (1-q) u(d+h).
For now, we'll leave it up to the reader to show that this is true if [easy] and
only if [tougher] the function u is either linear [ruled out] or of the
following form, up to some irrelevant rescaling:
u(x) = 1 - exp( -x/r )
Although x should normally be equal to one's entire wealth, changing the
"zero point" merely rescales linearly an exponential function and is therefore
irrelevant to decisions (as explained above).
It's therefore customary, when using the exponential utility function,
to consider that x is the amount to be gained or lost in a given gamble.
Separate gambles can be analyzed separately with an exponential utility
function (that's not true of any other).
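Here is a small numerical sketch of that separation property (the two gambles and the
risk tolerance are made-up values of ours):

    import math

    r = 1000.0                            # assumed risk tolerance, in dollars
    def u(x, h=0.0):
        return 1.0 - math.exp(-(x + h) / r)

    # Two hypothetical gambles, as (probability, dollar payoff) pairs:
    g1 = [(0.5, 300.0), (0.5, -100.0)]
    g2 = [(0.9,  80.0), (0.1,  50.0)]

    def eu(g, h=0.0):
        return sum(p * u(x, h) for p, x in g)

    # With an exponential utility, the preference between g1 and g2 is the
    # same for every background wealth h (all printed verdicts agree):
    for h in (0.0, -200.0, 500.0, 10000.0):
        print(h, eu(g1, h) > eu(g2, h))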
In the above expression for an exponential utility function of money,
the constant amount r (measured in the same money unit used for the variable x) is
called the risk tolerance.
For more general utility functions, the
risk tolerance is not a constant and may be defined at each point x
of the utility curve as equal to the quantity -u'(x)/u"(x)
(notice that this definition is independent of the allowed linear rescaling
of the utility function).
Portfolio managers will tell you that an investor's risk tolerance
is roughly proportional to his assets
(at least that's what most of them assume to be true).
This may be interpreted in either one of two ways:
- EITHER: When prospects are analyzed, the risk tolerance used in the analysis
of future uncertainty is the constant corresponding to the current situation.
At the next step, when certain events have actually come to pass,
a different constant will be used to make a slightly different analysis,
using the new risk tolerance corresponding to the new situation.
- OR: The utility function used to make strategic decisions incorporates
the future variability of the investor's risk tolerance.
For example, if the risk tolerance is indeed proportional to wealth (r = kx),
the utility function is then a solution of the differential equation
k x u"(x) + u'(x) = 0.  Solve this by letting y be u'(x), so that
k x dy + y dx = 0 (or k dy/y + dx/x = 0), which means
that y is proportional to x^(-1/k).
Therefore, up to some irrelevant rescaling,
the utility u is also a power of x, namely  -x^(1-1/k).
For this function to have an upper limit, the exponent should be negative.
This is to say that we must have k<1.
The rule of thumb [that's all it is]
in the corporate world seems to be that the management of most companies
behaves as if k=1/6 (risk tolerance = one sixth of equity).
With the second of the above interpretations, this would mean the
utility function of a major corporation (unless it's close to bankruptcy)
would typically be  -1/x^5.
Rather surprisingly, interviews of experienced corporate decision makers
seem to be consistent with this...
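Both of the above risk-tolerance computations are easy to check symbolically;
here's a short sketch using Python's sympy (the choice k = 1/6 matches the rule of thumb just quoted):

    import sympy as sp

    x, r = sp.symbols('x r', positive=True)

    # Exponential utility: the risk tolerance -u'/u'' is the constant r.
    u_exp = 1 - sp.exp(-x / r)
    print(sp.simplify(-sp.diff(u_exp, x) / sp.diff(u_exp, x, 2)))   # r

    # Power utility -x^(1-1/k): the risk tolerance is proportional to x.
    k = sp.Rational(1, 6)
    u_pow = -x**(1 - 1/k)                                           # -1/x^5
    print(sp.simplify(-sp.diff(u_pow, x) / sp.diff(u_pow, x, 2)))   # x/6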
(2001-07-04)
The so-called "St. Petersburg Game" is played with a fair coin, which is tossed
until heads appears. If the game lasts for n+1 tosses, the player receives
2^n dollars.
Namely: $1 if heads appears at the very first toss,
$2 if it does at the second toss, then $4, $8, $16, $32, $64, $128, etc.
What's a decent price to pay for the privilege of playing this game?
-
This is called the "St. Petersburg Paradox":
The mathematical expectation of the Petersburg Game is infinite,
since it would be the sum of the divergent series:
(1/2)(1) + (1/4)(2) + (1/8)(4) + (1/16)(8) + ... = 1/2 + 1/2 + 1/2 + 1/2 + ...
However, it's clear that nobody would ever pay more than a few dollars for a
shot at this type of gamble... Why?
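A quick Monte Carlo sketch makes the tension vivid: the expectation is infinite,
yet the empirical average over many plays remains modest
(it grows roughly like the logarithm of the number of plays):

    import random

    def play():
        # Toss a fair coin until heads; n tails before heads pays 2^n dollars.
        n = 0
        while random.random() < 0.5:
            n += 1
        return 2 ** n

    for trials in (10**3, 10**4, 10**5):
        print(trials, sum(play() for _ in range(trials)) / trials)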
When the question was first posed,
early in the 18th century, it was still believed that the value of a gamble should only
be based on its "fair" price, which is another name for its
mathematical expectation.
The fact that it clearly cannot be so with the above
game ultimately led to the introduction of the modern concept of the
utility of a prospect.
The discussion originated with a
correspondence between the Swiss mathematician, residing in Basel, Nicolas Bernoulli
(1687-1759, not to be confused with his well-known father also called Nicolas, 1662-1705),
and
Pierre Rémond de Montmort (1678-1719), in Paris.
Montmort had authored a successful book entitled
"Essay d'analyse sur les jeux de hazard" (Paris, 1708).
Bernoulli was making
suggestions for a future edition, focusing on a set of 5 problems to appear on page 402,
including "Problem 5", which essentially describes the Petersburg Game...
The very first letter from Bernoulli (dated September 9, 1713) mentions a die
instead of a fair coin, but the lower probability (1/6) of terminating the game at each toss
makes the expectation series diverge even more rapidly.
(Bernoulli introduces other payoff sequences which are not necessarily paradoxical,
so that Montmort initially missed his point.)
A few years later,
Gabriel Cramer (1704-1752) was prompted
to address the issue from London, in a
letter to Bernoulli, dated May 21, 1728.
(Since he turned 20, in 1724, Cramer had been sharing a chair of mathematics in Geneva
with Giovanni Ludovico Calandrini, under an arrangement that called for one of them to
travel while the other was teaching.)
Cramer restated the game in its modern form, for the sake of simplicity,
with a fair coin instead of a die.
He went on to say that "mathematicians estimate money in proportion to its quantity,
and men of good sense in proportion to the usage that they may make of it".
Cramer then quantified that statement in terms of what we would now call a
"utility function" (he used the term "moral value of goods" instead).
Cramer's first example of a utility function
was simply proportional to the money amount up to a certain point
(he used 2^24 coins, for convenience)
and constant thereafter.
His second example was a utility function of money proportional to the square root
of the amount of money.
Either of these utility functions does assign a finite utility
to the original Petersburg game, but the second one
would fail to resolve the issue if the payoff sequence was increasing faster
(for example, if the player were paid
4^n dollars for completing n+1 tosses).
In fact,
this very example may be used to show that any
utility function must have an upper bound, or else one could exhibit an infinite
sequence of prospects, the n-th of which has a utility at least equal to
2^n.
Offering the n-th such prospect as payoff for successfully
completing n tosses in a Petersburg game would assign infinite "utility"
to such a game, which is not acceptable.
(The basic tenet of the utility concept assigns a finite utility rating
to a single prospect, which is what the whole Petersburg game is.)
This revived the issue originally raised by Nicolas Bernoulli,
who asked the opinion of his brilliant cousin, Daniel Bernoulli (1700-1782).
At that time, Daniel was professor of mathematics in St. Petersburg,
and his influential work on the subject would later be published (in 1738) by the
St. Petersburg Academy, which is how the paradox got its modern name.
Back in 1731, Daniel Bernoulli rediscovered (independently of Cramer)
the modern notion of utilities, which Nicolas Bernoulli kept rejecting...
Daniel also made a point which Cramer had missed entirely, namely that it is
generally crucial to consider only the entire wealth of the player and assign
a utility only to the whole thing, as the marginal utility of
an additional coin will depend on the rest of one's fortune.
Bernoulli ventured the guess that the additional utility (du) of an additional dollar (dw)
could be inversely proportional to one's entire wealth w.
This assumption (du = k dw/w) makes utility (u) a logarithmic function of the
total wealth (w).
As we are free to rescale utilities, it may then be stated without loss of generality
that this translates into u(w) = ln(w).
However, this logarithmic "utility" function suffers
from the same flaw as Cramer's square-root function of money, because it's
not bounded either: If a successful sequence of n+1 tosses were paid
exp(k·2^n) dollars, the game would still end up having an
infinite "utility", even for a small value of the parameter k.
With a small value like k=0.01, there's an unattractive sequence of payoffs at
first, then the growth becomes explosive:
$1.01, $1.02, $1.04, $1.08, $1.17, $1.38, $1.90, $3.60, $12.94, $167.34, $28001.13,
$784063053.14, ...
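That schedule is easy to reproduce (a two-line check of the figures above):

    import math

    k = 0.01
    print([round(math.exp(k * 2**n), 2) for n in range(12)])
    # Each term of the log-utility expectation is (k*2^n) / 2^(n+1) = k/2,
    # so the series diverges: the logarithmic "utility" is indeed infinite.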
This sequence of payoffs is clearly worth a substantial premium,
but consider the related schedule where you get paid $1.00 for
any successful sequence of fewer than 100 tosses and exp(k·2^(n-100)) dollars
thereafter. That gamble is worth $1.00 to absolutely anybody,
in spite of the fact that its logarithmic "utility" is infinite...
There is no way around it. Utilities are always bounded.
If we're presented with a theoretical problem where payoffs are unbounded,
as they are in the Petersburg Gamble, then the utility function itself must have an
upper limit (in practical situations, potential payoffs are always bounded,
which makes the exact mathematical form of the utility function irrelevant
beyond a certain point and the issue does not arise because of such practical limits).
If a tool, like Bernoulli's logarithmic utilities, fails to make sense of the
Petersburg Gamble for some particular payoff schedule,
then it clearly cannot be trusted to analyze any other schedule.
It turns out that only very few utility functions allow a self-consistent
analysis fully compatible with the nature of the question we are asked.
In fact, we only have the freedom to choose a single scalar parameter
(the player's so-called risk tolerance)! Read on:
There's a hidden assumption in this and other similar theoretical puzzles,
which we must make explicit in order to solve the riddle:
The question is asked out of context and must be answered likewise if
it is to be answered at all.
We are not to involve sordid details about the
rest of the player's life (size of bank accounts, mortgages, etc.).
That approach is logically consistent only with the assumption of an exponential
utility function of money, which is the only type of utility function
where decisions about a particular prospect are not influenced by the
rest of one's situation...
It does not make sense to analyze an isolated gamble except
by assuming an exponential utility function, since no other
utility function of money even permits such isolation.
This is a theoretical argument, of course, but it's clearly appropriate
for a theoretical question like the one at hand...
Since we must, we shall happily assume that the player's
utility function of money is
of the form 1-exp(-x/r) for some parameter r
(which is a dollar amount, usually called risk tolerance).
In this, x should generally be the player's total wealth, but the unique properties
of the exponential function allow us to consider that x is simply the
amount gained or lost in the gamble(s) at hand
(since changing the zero point on the money scale merely rescales exponential
utilities without affecting the comparisons relevant for decisions).
We do not have such freedom with
a more general utility function, as Daniel Bernoulli first recognized.
Also, since additive and/or (positive) multiplicative constants in the utility function
do not affect decisions, we may as well use u(x) = -exp(-x/r)
as the utility of gaining (or losing) x dollars in the gamble at hand.
(The only aesthetic thing lost in the rescaling is that we no longer
have a utility of 0 for a gain of 0.)
It's interesting to observe that the exponential utility function
(with a positive risk tolerance)
does not have a lower bound.
Therefore it could not be used to analyze a gamble with unbounded negative payoffs (or fees).
This is not surprising in view of the fact that such a gamble is clearly a major
decision which cannot be considered independently of the rest of the
player's situation, because the entire wealth of the player (and more) is at risk.
Everybody's actual overall utility function is bounded on both sides (if you can't
possibly repay a huge debt, it makes very little difference if it is $100,000,000 or
$200,000,000). The decision of whether to play the Petersburg Game is a minor one
for which the exponential utility function is entirely appropriate.
The decision to bankroll
such a game would be a major one, even for a risk-loving entity (if one was
ever foolish enough to be attracted by the tiny fees ordinary players are willing to risk).
After this long preamble, the rest is easy.
Let's call u(x) the utility of having x more dollars than initially.
If you pay y dollars for the privilege to play, the utility of playing the
Petersburg game is clearly
Σ u(2^n - y) / 2^(n+1)        [sum over n = 0 to ∞]
and the gamble should be accepted if and only if this is greater than u(0).
In the particular case where u is exponential, this is equivalent to comparing
Σ u(2^n) / 2^(n+1)
and u(y), namely the utility of the free gamble
and the utility of a so-called certainty equivalent (CE).
The CE is whatever (minimum) amount of money we would be willing to receive as a
compensation for giving up the right to gamble.
It may not be quite the same as the (maximum) price we're willing to pay to acquire
that right!
Only in the case of the exponential (or linear) utility function are these two amounts always
equal.
The CE is the quantity actually computed in
Cramer's original text, based on a square-root utility function.
It was probably silently assumed at the time that the CE would not be too different from
the price one would be willing to pay.
However, rigorously speaking, the minimum acceptable selling price (the CE) and the maximum
acceptable buying price are only equal in the case of the exponential
(or linear) utility function!
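The distinction is easy to demonstrate numerically. The sketch below uses a hypothetical
square-root utility of total wealth (initial wealth assumed to be $100) and a 50/50 gamble
paying $0 or $100; the two prices come out distinctly different, whereas an exponential
utility would make them coincide:

    import math

    w = 100.0                   # assumed initial wealth, in dollars
    u = math.sqrt               # hypothetical utility of total wealth

    # Utility of owning the (free) gamble:
    eu_own = 0.5 * u(w) + 0.5 * u(w + 100.0)

    # Selling price s (the CE) solves u(w + s) = eu_own:
    s = eu_own**2 - w

    # Buying price b solves 0.5*u(w-b) + 0.5*u(w-b+100) = u(w); bisection:
    lo, hi = 0.0, 100.0
    for _ in range(60):
        b = (lo + hi) / 2
        if 0.5 * u(w - b) + 0.5 * u(w - b + 100.0) > u(w):
            lo = b
        else:
            hi = b

    print(s, b)     # about $45.71 (selling) vs. $43.75 (buying)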
All told, if a player has an exponential utility function with a
risk tolerance equal to r (expressed in dollars),
the highest price (y) s/he will be willing to pay for a shot at the Petersburg game
is given by the relation:
exp(-y/r)   =   Σ  exp(-2^n/r) / 2^(n+1)        [sum over n = 0 to ∞]
Once we evaluate the sum on the RHS, this is easy to solve for y
(just take the natural logarithms of both sides and multiply by -r).
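In Python, the whole computation takes only a few lines (the function name and the
sample values of r below are ours):

    import math

    def petersburg_price(r, terms=200):
        # Maximum buying price y for the Petersburg game, assuming an
        # exponential utility with risk tolerance r (in dollars):
        # y = -r * ln(RHS), with the RHS summed until its terms underflow.
        s = sum(math.exp(-(2.0**n) / r) / 2.0**(n + 1) for n in range(terms))
        return -r * math.log(s)

    for r in (10.0, 100.0, 1872.28):
        print(r, petersburg_price(r))
    # The last value should come out near $6, matching the example
    # quoted at the end of this page.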
The computation is best done numerically (see table below) for midrange values of r,
but we may also want to investigate what happens when r
is very large or very small:
For large values of r, we may observe that,
when r is much larger than 2^n, each term of the sum is roughly
equal to  1/2^(n+1) - 1/(2r).
This near-constancy goes on for a number of terms
roughly equal to the base-2 logarithm of r,
after which the terms vanish exponentially fast.
(Notice how the exponential utility function turns out to behave very much like
the original "moral value" function proposed by Cramer in 1728;
proportional to the money at first, then nearly constant after a certain threshold.)
We may thus expect the RHS to be equal to about
k - ln(r)/(2r ln(2))  for some constant k, which turns out to
be equal to 1.
The natural logarithm of that, for large values of r, would therefore be
-ln(r)/(r ln(4)), so that y is roughly equal to
ln(r)/ln(4) for large values of r
(actually, it's about 0.5549745 above that).
On the other hand, when r is very small,
the sum on the RHS essentially reduces to its first term, so that y is
extremely close to 1+r ln(2) .
The rest of the expansion is smaller than any power of r,
since the leading term equals  (-r/2) exp(-1/r).
In particular, a player with a risk tolerance of zero
(r = 0) will only pay $1 for the gamble, since this is the amount
s/he is guaranteed to get back...
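Both asymptotic estimates are easy to test against the numerical sketch given above
(agreement improves as r moves toward the relevant extreme; the sample values of r are ours):

    import math
    # Reuses petersburg_price() from the previous sketch.

    for r in (1e4, 1e8):       # large r:  y ~ ln(r)/ln(4) + 0.5549745
        print(petersburg_price(r), math.log(r) / math.log(4) + 0.5549745)

    for r in (0.01, 0.1):      # small r:  y ~ 1 + r ln(2)
        print(petersburg_price(r), 1 + r * math.log(2))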
For educational purposes, we've included what a similar analysis would entail
for a nonexponential utility function (last two columns of the above table).
The utility function chosen is such that wealth (or equity) is 6 times the
risk tolerance appearing in the first column.
The entire fortune of the player is thus taken into
account (something we avoided with the exponential function).
Note that the price for which the player is willing to sell a right to play
(the CE, or certainty equivalent) is different from the price he would be
willing to pay to acquire such a right, although this is only significant at low
levels of risk tolerance (both prices are always equal for an exponential utility).
At a zero risk tolerance, it's the buying price
which is equal to $1 (since we're guaranteed to get $1 back, no matter what), whereas
the selling price may be significantly greater
(the value is  ½·63^(1/5),
or about $1.145086, in this particular case).
That's because a nonexponential utility function integrates future variations of the
risk tolerance and this influences the decision, which is not solely based
on the current instantaneous value of the player's risk
tolerance -u'(x)/u"(x)...
You may wish to use the table backwards:
Determine by introspection what the Petersburg Gamble is worth to you
and you will know roughly what your risk tolerance is.
For example, if you decide that a Petersburg game is worth $6,
your risk tolerance is $1872.28.
The method may not be very accurate because
you are essentially guessing on a logarithmic scale which amplifies errors
(estimating the game to be worth $6.05 would correspond to a
risk tolerance of $2008.07).
However, it's only the order of magnitude of your risk tolerance
which counts for many decisions, and the Petersburg game will allow
you to evaluate that.
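Since the price is an increasing function of the risk tolerance, this inversion is a
short bisection on top of the numerical sketch given earlier (the search bounds are
arbitrary but safe choices of ours):

    # Reuses petersburg_price() from the sketch above.
    def risk_tolerance(y, lo=0.01, hi=1e9):
        # Find r such that petersburg_price(r) = y, by bisection on a
        # logarithmic scale (the price increases monotonically with r).
        for _ in range(100):
            mid = (lo * hi) ** 0.5
            if petersburg_price(mid) < y:
                lo = mid
            else:
                hi = mid
        return (lo * hi) ** 0.5

    print(risk_tolerance(6.00))   # should land near the quoted $1872.28
    print(risk_tolerance(6.05))   # should land near the quoted $2008.07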