Decisions Under Risk and Uncertainty

By Thomas, C.R., Maurice, S.C.

Edited by Paul Ducham


When the outcome of a decision is not known with certainty, a manager faces a decision-making problem under either conditions of risk or conditions of uncertainty. A decision is made under risk when a manager can make a list of all possible outcomes associated with a decision and assign a probability of occurrence to each one of the outcomes. The process of assigning probabilities to outcomes sometimes involves rather sophisticated analysis based on the manager’s extensive experience in similar situations or on other data. Probabilities assigned in this way are objective probabilities. In other circumstances, in which the manager has little experience with a particular decision situation and little or no relevant historical data, the probabilities assigned to the outcomes are derived in a subjective way and are called subjective probabilities. Subjective probabilities are based upon hunches, “gut feelings,” or personal experiences rather than on scientific data.

An example of a decision made under risk might be the following: A manager decides to spend $1,000 on a magazine ad believing there are three possible outcomes for the ad: a 20 percent chance the ad will have only a small effect on sales, a 60 percent chance of a moderate effect, and a 20 percent chance of a very large effect. This decision is made under risk because the manager can list each potential outcome and determine the probability of each outcome occurring.

In contrast to risk, uncertainty exists when a decision maker cannot list all possible outcomes and/or cannot assign probabilities to the various outcomes. When faced with uncertainty, a manager would know only the different decision options available and the different possible states of nature. The states of nature are the future events or conditions that can influence the final outcome or payoff of a decision but cannot be controlled or affected by the manager. Even though both risk and uncertainty involve less-than-complete information, there is more information under risk than under uncertainty.

An example of a decision made under uncertainty would be, for a manager of a pharmaceutical company, the decision of whether to spend $3 million on the research and development of a new medication for high blood pressure. The payoff from the research and development spending will depend on whether the president’s new health plan imposes price regulations on new drugs. The two states of nature facing the manager in this problem are (1) government does impose price regulations or (2) government does not impose price regulations. While the manager knows the payoff that will occur under either state of nature, the manager has no idea of the probability that price regulations will be imposed on drug companies. Under such conditions, a decision is made under uncertainty.

This important distinction between conditions of uncertainty and conditions of risk will be followed throughout this chapter. The decision rules employed by managers when outcomes are not certain differ under conditions of uncertainty and conditions of risk.


Before we can discuss rules for decision making under risk, we must first discuss how risk can be measured. The most direct method of measuring risk involves the characteristics of a probability distribution of outcomes associated with a particular decision. This section will describe these characteristics.

Probability Distributions

A probability distribution is a table or graph showing all possible outcomes (payoffs) for a decision and the probability that each outcome will occur. The probabilities can take values between 0 and 1, or, alternatively, they can be expressed as percentages between 0 and 100 percent. If all possible outcomes are assigned probabilities, the probabilities must sum to 1 (or 100 percent); that is, the probability that some other outcome will occur is 0 because there is no other possible outcome.

To illustrate a probability distribution, we assume that the director of advertising at a large corporation believes the firm’s current advertising campaign may result in any one of five possible outcomes for corporate sales. The probability distribution for this advertising campaign is as follows:

Sales (units)     Probability
47,500            10%
50,000            20%
52,500            30%
55,000            25%
57,500            15%

Each outcome has a probability greater than 0 but less than 100 percent, and the sum of all probabilities is 100 percent (= 10 + 20 + 30 + 25 + 15). This probability distribution is represented graphically in Figure 15.1.

From a probability distribution (either in tabular or in graphical form), the riskiness of a decision is reflected by the variability of outcomes indicated by the different probabilities of occurrence. For decision-making purposes, managers often turn to mathematical properties of the probability distribution to facilitate a formal analysis of risk. The nature of risk can be summarized by examining the central tendency of the probability distribution, as measured by the expected value of the distribution, and by examining the dispersion of the distribution, as measured by the standard deviation and coefficient of variation. We discuss first the measure of central tendency of a probability distribution.

Expected Value of a Probability Distribution

The expected value of a probability distribution of decision outcomes is the weighted average of the outcomes, with the probabilities of each outcome serving as the respective weights. The expected value of the various outcomes of a probability distribution is

E(X) = p1X1 + p2X2 + . . . + pnXn

where Xi is the ith outcome of a decision, pi is the probability of the ith outcome, and n is the total number of possible outcomes in the probability distribution. Note that the computation of expected value requires the use of fractions or decimal values for the probabilities pi, rather than percentages. The expected value of a probability distribution is often referred to as the mean of the distribution.

The expected value of sales for the advertising campaign associated with the probability distribution shown in Figure 15.1 is

E(sales) = (0.10)(47,500) + (0.20)(50,000) + (0.30)(52,500)

+ (0.25)(55,000) + (0.15)(57,500)

= 4,750 + 10,000 + 15,750 + 13,750 + 8,625

= 52,875

While the amount of actual sales that occur as a result of the advertising campaign is a random variable possibly taking values of 47,500, 50,000, 52,500, 55,000, or 57,500 units, the expected level of sales is 52,875 units. If only one of the five levels of sales can occur, the level that actually occurs will not equal the expected value of 52,875, but expected value does indicate what the average value of the outcomes would be if the risky decision were to be repeated a large number of times.
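The calculation above is easy to verify with a short script. This is a minimal sketch using the outcome and probability values from the advertising example:

```python
# Probability distribution for the advertising campaign:
# five possible sales outcomes (units) and their probabilities.
outcomes = [47_500, 50_000, 52_500, 55_000, 57_500]
probabilities = [0.10, 0.20, 0.30, 0.25, 0.15]

# A valid probability distribution must sum to 1 (100 percent).
assert abs(sum(probabilities) - 1.0) < 1e-9

# Expected value: the probability-weighted average of the outcomes.
expected_sales = sum(p * x for p, x in zip(probabilities, outcomes))
print(expected_sales)  # 52875.0
```

As the text notes, 52,875 is not itself a possible outcome; it is the average level of sales that would result if the risky decision could be repeated many times.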

Dispersion of a Probability Distribution

As you may recall from your statistics classes, probability distributions are generally characterized not only by the expected value (mean) but also by the variance. The variance of a probability distribution measures the dispersion of the distribution about its mean. Figure 15.2 shows the probability distributions for the profit outcomes of two different decisions, A and B. Both decisions, as illustrated in Figure 15.2, have identical expected profit levels but different variances. The larger variance associated with making decision B is reflected by a larger dispersion (a wider spread of values around the mean). Because distribution A is more compact (less spread out), A has a smaller variance.

The variance of a probability distribution of the outcomes of a given decision is frequently used to indicate the level or degree of risk associated with that decision. If the expected values of two distributions are the same, the distribution with the higher variance is associated with the riskier decision. Thus in Figure 15.2, decision B has more risk than decision A. Furthermore, variance is often used to compare the riskiness of two decisions even though the expected values of the distributions differ.

Mathematically, the variance of a probability distribution of outcomes Xi, denoted by σ²x, is the probability-weighted sum of the squared deviations about the expected value of X:

σ²x = p1[X1 - E(X)]² + p2[X2 - E(X)]² + . . . + pn[Xn - E(X)]²
As an example, consider the two distributions illustrated in Figure 15.3. As is evident from the graphs and demonstrated in the following table, the two distributions have the same mean, 50. Their variances differ, however. Decision A has a smaller variance than decision B, and it is therefore less risky. The calculations of the expected value and variance for each distribution are shown here:


Because variance is a squared term, it is usually much larger than the mean. To avoid this scaling problem, the standard deviation of the probability distribution is more commonly used to measure dispersion. The standard deviation of a probability distribution, denoted by σx, is the square root of the variance:

σx = √σ²x
The standard deviations of the distributions illustrated in Figure 15.3 and in the preceding table are σA = 8.94 and σB = 11.40. As in the case of the variance of a probability distribution, the higher the standard deviation, the more risky the decision.

Managers can compare the riskiness of various decisions by comparing their standard deviations, as long as the expected values are of similar magnitudes. For example, if decisions C and D both have standard deviations of 52.5, the two decisions can be viewed as equally risky if their expected values are close to one another. If, however, the expected values of the distributions differ substantially in magnitude, it can be misleading to examine only the standard deviations. Suppose decision C has a mean outcome of $400 and decision D has a mean outcome of $5,000 but the standard deviations remain 52.5. The dispersion of outcomes for decision D is much smaller relative to its mean value of $5,000 than is the dispersion of outcomes for decision C relative to its mean value of $400.

When the expected values of outcomes differ substantially, managers should measure the riskiness of a decision relative to its expected value. One such measure of relative risk is the coefficient of variation for the decision’s distribution. The coefficient of variation, denoted by υ, is the standard deviation divided by the expected value of the probability distribution of decision outcomes:

υ = σx / E(X)
The coefficient of variation measures the level of risk relative to the mean of the probability distribution. In the preceding example, the two coefficients of variation are υC = 52.5/400 = 0.131 and υD = 52.5/5,000 = 0.0105.
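All three dispersion measures follow directly from a distribution's outcomes and probabilities. The sketch below applies them to the advertising distribution from Figure 15.1 and then reproduces the two coefficients of variation for decisions C and D (52.5/400 and 52.5/5,000):

```python
from math import sqrt

outcomes = [47_500, 50_000, 52_500, 55_000, 57_500]
probabilities = [0.10, 0.20, 0.30, 0.25, 0.15]

# Mean: probability-weighted average of the outcomes.
mean = sum(p * x for p, x in zip(probabilities, outcomes))

# Variance: probability-weighted sum of squared deviations about the mean.
variance = sum(p * (x - mean) ** 2 for p, x in zip(probabilities, outcomes))

# Standard deviation: square root of the variance.
std_dev = sqrt(variance)

# Coefficient of variation: risk relative to the mean.
cv = std_dev / mean

print(round(variance))         # 8921875
print(round(std_dev))          # 2987
print(round(52.5 / 400, 3))    # 0.131  (decision C)
print(round(52.5 / 5_000, 4))  # 0.0105 (decision D)
```

Note how the coefficient of variation corrects for scale: decision D's outcomes are far less dispersed relative to its $5,000 mean than decision C's are relative to its $400 mean, even though both standard deviations equal 52.5.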



As we just mentioned, managers differ in their willingness to undertake risky decisions. Some managers avoid risk as much as possible, while other managers actually prefer more risk to less risk in decision making. To allow for different attitudes toward risk taking in decision making, modern decision theory treats managers as deriving utility or satisfaction from the profits earned by their firms. Just as consumers derive utility from the consumption of goods in consumer theory, managers are assumed to derive utility from earning profits. Expected utility theory postulates that managers make risky decisions in a way that maximizes the expected utility of the profit outcomes. While expected utility theory does provide a tool for decisions under risk, the primary purpose of the theory, and the reason for presenting this theory here, is to explain why managers make the decisions they do make when risk is involved. We want to stress that expected utility theory is an economic model of how managers actually make decisions under risk, rather than a rule dictating how managers should make decisions under risk.

Suppose a manager is faced with a decision to undertake a risky project or, more generally, must make a decision to take an action that may generate a range of possible profit outcomes, π1, π2, . . . , πn, that the manager believes will occur with probabilities p1, p2, . . . , pn, respectively. The expected utility of this risky decision is the sum of the probability-weighted utilities of each possible profit outcome:

E[U(π)] = p1U(π1) + p2U(π2) + . . . + pnU(πn)

where U(π) is a utility function for profit that measures the utility associated with a particular level of profit. Notice that expected utility of profit is different from the concept of expected profit, which is the sum of the probability-weighted profits. To understand expected utility theory, you must understand how the manager’s attitude toward risk is reflected in the manager’s utility function for profit. We now discuss the concept of a manager’s utility of profit and show how to derive a utility function for profit. Then we demonstrate how managers could employ expected utility of profit to make decisions under risk.

A Manager’s Utility Function for Profit

Since expected utility theory is based on the idea that managers enjoy utility or satisfaction from earning profit, the nature of the relation between a manager’s utility and the level of profit earned plays a crucial role in explaining how managers make decisions under risk. As we now show, the manager’s attitude toward risk is determined by the manager’s marginal utility of profit.

It would be extremely unusual for a manager not to experience a higher level of total utility as profit increases. Thus the relation between an index of utility and the level of profit earned by a firm is assumed to be an upward-sloping curve. The amount by which total utility increases when the firm earns an additional dollar of profit is the marginal utility of profit:

MUprofit = ΔU(π) / Δπ

where U(π) is the manager’s utility function for profit. The utility function for profit gives an index value to measure the level of utility experienced when a given amount of profit is earned. Suppose, for example, the marginal utility of profit is 8. This means a $1 increase in profit earned by the firm causes the utility index of the manager to increase by eight units. Studies of attitudes toward risk have found most business decision makers experience diminishing marginal utility of profit. Even though additional dollars of profit increase the level of total satisfaction, the additional utility from extra dollars of profit typically falls for most managers.

The shape of the utility curve for profit plays a pivotal role in expected utility theory because the shape of U(π) determines the manager’s attitude toward risk, which determines which choices a manager makes. Attitudes toward risk may be categorized as risk averse, risk neutral, or risk loving. People are said to be risk averse if, facing two risky decisions with equal expected profits, they choose the less risky decision. In contrast, someone choosing the more risky decision, when the expected profits are identical, is said to be risk loving. The third type of attitude toward risk arises for someone who is indifferent between risky situations when the expected profits are identical. In this last case, a manager ignores risk in decision making and is said to be risk neutral.

Figure 15.5 shows the shapes of the utility functions associated with the three types of risk preferences. Panel A illustrates a utility function for a risk-averse manager. The utility function for profit is upward-sloping, but its slope diminishes as profit rises, which corresponds to the case of diminishing marginal utility. When profit increases by $50,000 from point A to point B, the manager experiences an increase in utility of 10 units. When profit falls by $50,000 from point A to point C, utility falls by 15 units. A $50,000 loss of profit creates a larger reduction in utility than a $50,000 gain would add to utility. Consequently, risk-averse managers are more sensitive to a dollar of lost profit than to a dollar of gained profit and will place an emphasis in decision making on avoiding the risk of loss.

In Panel B, the marginal utility of profit is constant (ΔU/Δπ = 15/50 = 0.3), and the loss of $50,000 reduces utility by the same amount that a gain of $50,000 increases it. In this case, a manager places the same emphasis on avoiding losses as on seeking gains. Managers are risk neutral when their utility functions for profit are linear or, equivalently, when the marginal utility of profit is constant.

Panel C shows a utility function for a manager who makes risky decisions in a risk-loving way. The extra utility from a $50,000 increase in profit (20 units) is greater than the loss in utility suffered when profit falls by $50,000 (10 units). Consequently, a risk-loving decision maker places a greater weight on the potential for gain than on the potential for loss. We have now developed the following relation.

Relation A manager’s attitude toward risky decisions can be related to his or her marginal utility of profit. Someone who experiences diminishing (increasing) marginal utility for profit will be a risk-averse (risk-loving) decision maker. Someone whose marginal utility of profit is constant is risk neutral.
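This relation can be illustrated numerically. In the sketch below, the concave, linear, and convex utility functions (square root, identity, and square) are illustrative choices, not the functions graphed in Figure 15.5; the point is only that concavity makes a loss of a given size hurt more than an equal gain helps:

```python
from math import sqrt

base, delta = 100_000, 50_000  # profit level and a +/- $50,000 swing

def gain_and_loss(u):
    """Utility added by a +delta gain and utility lost from a -delta loss."""
    gain = u(base + delta) - u(base)
    loss = u(base) - u(base - delta)
    return gain, loss

# Risk averse: diminishing marginal utility (concave, e.g., square root).
g, l = gain_and_loss(sqrt)
assert g < l  # the loss reduces utility more than the gain adds

# Risk neutral: constant marginal utility (linear).
g, l = gain_and_loss(lambda x: x)
assert g == l  # gains and losses get equal weight

# Risk loving: increasing marginal utility (convex, e.g., square).
g, l = gain_and_loss(lambda x: x ** 2)
assert g > l  # the potential gain outweighs the potential loss
```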

Deriving a Utility Function for Profit

As discussed earlier, when managers make decisions to maximize expected utility under risk, it is the utility function for profit that determines which decision a manager chooses. We now show the steps a manager can follow to derive his or her own utility function for profit, U(π). Recall that the utility function does not directly measure utility but does provide a number, or index value, and that it is the magnitude of this index that reflects the desirability of a particular profit outcome.

The process of deriving a utility function for profit is conceptually straightforward. It does, however, involve a substantial amount of subjective evaluation. To illustrate the procedure, we return to the decision problem facing the manager of Chicago Rotisserie Chicken (CRC). Recall that CRC must decide where to locate the next restaurant. The profit outcomes for the three locations range from $1,000 to $6,000 per week. Before the expected utilities of each location can be calculated, the manager must derive her utility function for profits covering the range $1,000 to $6,000.

The manager of CRC begins the process of deriving U(π) by assigning minimum and maximum values that the index will be allowed to take. For the lower bound on the index, suppose the manager assigns a utility index value of 0—although any number, positive or negative, will do—to the lowest profit outcome of $1,000. For the upper bound, suppose a utility index value of 1 is assigned—any value greater than the value of the lower bound will do—to the highest profit outcome of $6,000. Again, we emphasize, choosing 0 and 1 for the lower and upper bounds is completely arbitrary, just as long as the upper bound is greater algebraically than the lower bound. For example, lower and upper bounds of -12 and 50 would also work just fine. Two points on the manager’s utility function for profit are

U($1,000) = 0 and U($6,000) = 1

Next, a value of the utility index for each of the remaining possible profit outcomes between $1,000 and $6,000 must be determined. In this case, examining profit in increments of $1,000 is convenient. To find the value of the utility index for $5,000, the manager employs the following subjective analysis: The manager begins by considering two decision choices, A and B, where decision A involves receiving a profit of $5,000 with certainty and risky decision B involves receiving either a $6,000 profit with probability p or a $1,000 profit with probability 1 - p. Decisions A and B are illustrated in Figure 15.6. Now the probability p that will make the manager indifferent between the two decisions A and B must be determined. This is a subjective determination, and any two managers likely will find different values of p depending on their individual preferences for risk.

Suppose the manager of Chicago Rotisserie Chicken decides p = 0.95 makes decisions A and B equally desirable. In effect, the manager is saying that the expected utility of decision A equals the expected utility of decision B. If the expected utilities of decisions A and B are equal, E(UA) = E(UB):

1 * U($5,000) = 0.95 * U($6,000) + 0.05 * U($1,000)

Only U($5,000) is unknown in this equation, so the manager can solve for the utility index for $5,000 of profit:

U($5,000) = (0.95 * 1) + (0.05 * 0) = 0.95

The utility index value of 0.95 is an indirect measure of the utility of $5,000 of profit. This procedure establishes another point on the utility function for profit. The sum of $5,000 is called the certainty equivalent of risky decision B because it is the dollar amount that the manager would be just willing to trade for the opportunity to engage in risky decision B. In other words, the manager is indifferent between having a profit of $5,000 for sure or making a risky decision having a 95 percent chance of earning $6,000 and a 5 percent chance of earning $1,000. The utility indexes for $4,000, $3,000, and $2,000 can be established in exactly the same way.
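The indifference condition can be checked directly. A minimal sketch of the calculation for the $5,000 certainty equivalent, using the utility index values assigned in the text:

```python
# Utility index bounds assigned by the manager.
u_low = 0.0    # U($1,000)
u_high = 1.0   # U($6,000)

# Subjective probability that makes the certain $5,000 (decision A)
# and the risky gamble (decision B) equally desirable for this manager.
p = 0.95

# Expected utility of risky decision B, which must equal U($5,000).
u_5000 = p * u_high + (1 - p) * u_low
print(u_5000)  # 0.95
```

Repeating this step with the manager's subjective probabilities for $4,000, $3,000, and $2,000 yields the remaining points on the utility function.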

This procedure for finding a utility function for profit is called the certainty equivalent method. We now summarize the steps for finding a utility function for profit, U(π), in a principle.

Principle To implement the certainty equivalent method of deriving a utility of profit function, the following steps can be employed:

1. Set the utility index equal to 1 for the highest possible profit (πH) and 0 for the lowest possible profit (πL).

2. Define a risky decision to have probability p0 of profit outcome πH and probability (1 - p0) of profit outcome πL. For each possible profit outcome π0 (πL < π0 < πH), the manager determines subjectively the probability p0 that gives that risky decision the same expected utility as receiving π0 with certainty:

p0U(πH) + (1 - p0) U(πL) = U(π0)

The certain sum π0 is called the certainty equivalent of the risky decision. Let the subjective probability p0 serve as the utility index for measuring the level of satisfaction the manager enjoys when earning a profit of π0.

Figure 15.7 illustrates the utility function for profit for the manager of Chicago Rotisserie Chicken. The marginal utility of profit diminishes over the entire range of possible profit outcomes ($1,000 to $6,000), and so this manager is a risk-averse decision maker.

Maximization of Expected Utility

When managers choose among risky decisions in accordance with expected utility theory, the decision with the greatest expected utility is chosen. Unlike maximization of expected profits, maximizing expected utility takes into consideration the manager’s preferences for risk. As you will see in this example, maximizing expected utility can lead to a different decision than the one reached using the maximization of expected profit rule.

Return once more to the location decision facing Chicago Rotisserie Chicken. The manager calculates the expected utilities of the three risky location decisions using her own utility function for profit shown in Figure 15.7. The expected utilities for the three cities are calculated as follows:

Atlanta E(UA) = 0U($1,000) + 0.2U($2,000) + 0.3U($3,000) + 0.3U($4,000) + 0.2U($5,000) + 0U($6,000)

= 0 + (0.2)(0.5) + (0.3)(0.7) + (0.3)(0.85) + (0.2)(0.95) + 0

= 0.755

Boston E(UB) = 0.1U($1,000) + 0.15U($2,000) + 0.15U($3,000) + 0.25U($4,000) + 0.2U($5,000) + 0.15U($6,000)

= (0.1)(0) + (0.15)(0.50) + (0.15)(0.7) + (0.25)(0.85) + (0.2)(0.95) + (0.15)(1)

= 0.733

Cleveland E(UC) = 0.3U($1,000) + 0.1U($2,000) + 0.1U($3,000) + 0.1U($4,000) + 0.1U($5,000) + 0.3U($6,000) 

= (0.3)(0) + (0.1)(0.5) + (0.1)(0.7) + (0.1)(0.85) + (0.1)(0.95) + (0.3)(1.0) 

= 0.600

To maximize the expected utility of profits, the manager of Chicago Rotisserie Chicken chooses to open its new restaurant in Atlanta. Even though Boston has the highest expected profit [E(π) = $3,750], it also carries a relatively high level of risk (σ = 1,545), and the risk-averse manager at CRC prefers to avoid the risk of locating the new restaurant in Boston. In this case of a risk-averse decision maker, the manager chooses the less risky Atlanta location over the more risky Cleveland location even though both locations have identical expected profit levels.

To show what a risk-neutral decision maker would do, we constructed a utility function for profit that exhibits constant marginal utility of profit, which, as we have explained, is the condition required for risk neutrality. This risk-neutral utility function is presented in columns 1 and 2 of Table 15.1. Marginal utility of profit, in column 3, is constant, as it must be for risk-neutral managers. From the table you can see that the expected utilities of profit for Atlanta, Boston, and Cleveland are 0.50, 0.55, and 0.50, respectively. For a risk-neutral decision maker, locating in Boston is the decision that maximizes expected utility. Recall that Boston also is the city with the maximum expected profit [E(π) = $3,750]. This is not a coincidence. As we explained earlier, a risk-neutral decision maker ignores risk when making decisions and relies instead on expected profit to make decisions in risky situations. Under conditions of risk neutrality, a manager makes the same decision by maximizing either the expected value of profit, E(π), or the expected utility of profit, E[U(π)].

Finally, consider how a manager who is risk loving decides on a location for CRC’s new restaurant. In Table 15.2, columns 1 and 2 show a utility function for profit for which marginal utility of profit is increasing. Column 3 shows the marginal utility of profit, which, as it must for a risk-loving manager, increases as profit increases. The expected utilities of profit outcomes for Atlanta, Boston, and Cleveland are 0.32, 0.41, and 0.43, respectively. In the case of a risk-loving decision maker, Cleveland is the decision that maximizes expected utility. If Atlanta and Cleveland were the only two sites being considered, then the risk-loving manager would choose Cleveland over Atlanta, a decision that is consistent with the definition of risk loving. We now summarize our discussion in the following principle.
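The three comparisons can be reproduced in one place. In the sketch below, the risk-averse utility indexes are the ones used in the text's calculations (0, 0.5, 0.7, 0.85, 0.95, 1); the risk-neutral and risk-loving indexes are illustrative linear and convex functions, not the actual entries of Tables 15.1 and 15.2, so they reproduce the rankings rather than the exact expected-utility values:

```python
# Probability distributions over weekly profit ($1,000 ... $6,000)
# for the three candidate locations.
profits = [1_000, 2_000, 3_000, 4_000, 5_000, 6_000]
locations = {
    "Atlanta":   [0.00, 0.20, 0.30, 0.30, 0.20, 0.00],
    "Boston":    [0.10, 0.15, 0.15, 0.25, 0.20, 0.15],
    "Cleveland": [0.30, 0.10, 0.10, 0.10, 0.10, 0.30],
}

def best_location(utility):
    """Return the location maximizing expected utility, and all the E[U] values."""
    eu = {
        city: sum(p * utility(x) for p, x in zip(probs, profits))
        for city, probs in locations.items()
    }
    return max(eu, key=eu.get), eu

# Risk averse: the utility indexes derived in the text (Figure 15.7).
index = {1_000: 0.0, 2_000: 0.5, 3_000: 0.7, 4_000: 0.85, 5_000: 0.95, 6_000: 1.0}
city, eu = best_location(index.get)
print(city, round(eu["Atlanta"], 3))  # Atlanta 0.755

# Risk neutral: any linear index ranks cities by expected profit.
city, _ = best_location(lambda x: (x - 1_000) / 5_000)
print(city)  # Boston

# Risk loving: a convex index (increasing marginal utility of profit).
city, _ = best_location(lambda x: ((x - 1_000) / 5_000) ** 2)
print(city)  # Cleveland
```

The risk-neutral case also confirms the text's observation that maximizing expected utility with a linear utility function picks the same city (Boston) as maximizing expected profit.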

Principle If a manager behaves according to expected utility theory, decisions are made to maximize the manager’s expected utility of profits. Decisions made by maximizing expected utility of profit reflect the manager’s risk-taking attitude and generally differ from decisions reached by decision rules that do not consider risk. In the case of a risk-neutral manager, the decisions are identical under either maximization of expected utility or maximization of expected profit.