——
Should I switch, or stay?
——

Let’s begin our review all the way back at the beginning, with our very first email about risk…

Bad Priors.

Everyone comes to situations of risk with pre-existing opinions. We all use our experiences to try to make sense of how the world works. These are our priors.


Priors act as a map of the territory of reality; we survey our past experiences, build abstract mental models from them, and then use those mental models to help us understand the world.


But priors can be misleading, even when they’re based on real experiences. Why?

For one, we often mistake group-indexed averages for individually-indexed averages: a statistic that describes a population (say, the average patient’s response to a drug) may tell us very little about what will happen to any one of us.


For another, we often mistake uncertainty for risk.


Risk is a situation in which the variables (how likely a scenario is to happen, what we stand to lose or gain if it does) are known.

When we face risk, our best tool for decision-making is statistical analysis.


Imagine playing Russian Roulette: there’s one gun, one bullet, and six chambers. You can calculate your odds of success or failure.
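
Here’s a quick sketch of that calculation in Python. The multi-pull line assumes the cylinder is re-spun before each pull:

```python
# Known-variable Russian Roulette: 1 bullet, 6 chambers.
bullets = 1
chambers = 6

p_lose_one_pull = bullets / chambers       # 1/6 ≈ 0.167
p_survive_one_pull = 1 - p_lose_one_pull   # 5/6 ≈ 0.833

# If the cylinder is re-spun before every pull, survival compounds:
p_survive_three_pulls = p_survive_one_pull ** 3  # ≈ 0.579

print(p_lose_one_pull, p_survive_one_pull, p_survive_three_pulls)
```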


Uncertainty is a situation in which the variables are unknown. Imagine a version of Russian Roulette where you don’t get to know how many bullets there are, or even how many chambers in the gun.

Not only can you not calculate your odds in this scenario; trying to do so will only give you a false sense of confidence.


When we face uncertainty, our best decision-making tool is game theory.


Mistaking risk for certainty is called the zero-risk illusion.


This is what happens when we get a positive result on a medical test, and convince ourselves there’s no way the test could be wrong.


But there’s a more subtle (and often more damaging) illusion to think about:
Mistaking uncertainty for risk. This is known as the calculable-risk illusion.


To understand how we get to this illusion, we have to understand a bit about derivatives.

Because the world is infinitely complex, we can’t always interact directly with the things we care about (referred to as the underlying).


For example, we may care about the health of a company – how happy their employees are, how big their profit margin is, how much money they have in savings.


But it’s hard to really get a grip on all those variables.


To get around problems like this, we often look at some other metric (referred to as the derivative) that we believe is correlated with the thing we care about.


For example: we care about the health of the company (the underlying). But because that’s so complex, we choose to pay attention to the stock price of the company instead (the derivative). That’s because we believe that the two are correlated: if the health of the company improves, the stock price will rise.


The relationship between the underlying and the derivative is called the basis.

If you understand the basis, you can use a derivative to understand the underlying.


But the world is complicated. We often DON’T really understand the basis. Maybe we mistook correlation for causation. Or maybe we DID understand the basis, but it changed over time.


The problem is that we rarely stop to re-examine our assumptions about how the world works.

This puts us in a situation where we mistake uncertainty for risk. We think we have enough information to calculate the odds. We think we can use statistical analysis to figure out the right thing to do.


The problem is that we often don’t have enough information. This is the “Turkey Problem”: every single data point tells us the farmer treats us well.
And that’s true…right up until Thanksgiving Day.


We cruise along, comforted by seemingly-accurate mathematical models of the world…only to be shocked when the models blow up and everything falls apart.


That’s the calculable-risk illusion.


This is how our maps can stop matching our territory.


OK – so we know that when situations are uncertain (and that’s much, if not most, of the time), we’re supposed to use game theory.


What are some examples of using game theory to help make decisions?


One example is the Common Knowledge Game.

Common knowledge games are situations in which we act based on what we believe other people believe.


Think of a beauty contest where voting for the winning contestant wins you money: it’s not about whom you like best (first-order decision making)…
Or whom you think other people like best (second-order decision making)…


But whom you think other people will think other people like best (third-order decision making).


So: how do we know what other people know?


Watch the missionaries.


As in the case of the eye-color tribe, a system’s static equilibrium is shattered when public statements are made.


Information is injected into the system in such a way that everyone knows that everyone else knows.


Our modern equivalent is the media. We have to ask ourselves where other people think other people get their information.

Whatever statements come from these sources will affect public behavior…
…Not because any new knowledge is being created, but because everyone now knows that everyone else heard the message.


(This, by the way, is why investors religiously monitor the Federal Reserve. It’s not because the Fed tells anyone anything new about the state of the economy. It’s because it creates “common knowledge.”)

Whew! That’s a lot of stuff.


Let’s try to bring all these different ideas together in one fun example:

The Monty Hall Problem.

Monty Hall was a famous television personality, best known as the host of the game show Let’s Make a Deal.


Let’s Make a Deal featured a segment that became the setting for a famous logic problem…


One that excellently displays how our maps can become disconnected from the territory.


The problem was popularized by Marilyn vos Savant in her column for Parade magazine. Here’s the problem as she formulated it:


Suppose you are on a game show, and you’re given the choice of three doors.

Behind one door is a car, behind the others, goats.


The rules are that you can pick any door you want, and you’ll also get a chance to switch if you want.

You pick a door, say number 1, and the host, who knows what’s behind the doors, opens another door, say number 3, which has a goat.


He says to you, “Do you want to pick door number 2?”

Is it to your advantage to switch your choice of doors?


Take a minute to think it through and come up with your own answer.
Let’s start by asking ourselves:


Is this a scenario of risk or uncertainty?


The answer is risk.

We know the odds, and can calculate our chances to win. That means statistical analysis is our friend.


So how do we calculate our odds?


The typical line of reasoning will go something like this:


Each door has a 1/3 probability of having the car behind it.


One door has been opened, which eliminates 1/3 of my chances.


Therefore, the car must be behind one of these two doors. That means I have a 50/50 chance of having picked the right door.


That means there’s no difference between sticking with this door or switching.


While this conclusion seems obvious (and believe me, this is the conclusion I came to)…


It turns out to be wrong. 🙂


Remember our discussion of medical tests?


To figure out how to think about our risk level, we imagined a group of 1,000 people all taking the same test.


We then used the false positive rate to figure out how many people who didn’t have the disease would nevertheless test positive.
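
As a refresher, here’s what that kind of head-count (natural frequency) calculation looks like in Python. The rates below are illustrative assumptions, not the numbers from that earlier email:

```python
# Illustrative rates only -- the original email's numbers aren't quoted here.
population = 1000
prevalence = 0.01           # assume 1% of people actually have the disease
false_positive_rate = 0.05  # assume 5% of healthy people still test positive
sensitivity = 1.0           # assume every sick person tests positive

sick = population * prevalence                   # 10 people
healthy = population - sick                      # 990 people

true_positives = sick * sensitivity              # 10 positives who are sick
false_positives = healthy * false_positive_rate  # ~50 positives who aren't

# Of everyone who tests positive, what fraction is actually sick?
p_sick_given_positive = true_positives / (true_positives + false_positives)
print(round(p_sick_given_positive, 2))  # ≈ 0.17 -- far from certain
```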


Let’s apply a similar tool here.


Imagine three people playing this game. Each person picks a different door.
I’ll quote here from the book Risk Savvy, where I first learned about the Monty Hall Problem:


Assume the car is behind door 2.


The first contestant picks door 1. Monty’s only option is to open door 3, and he offers the contestant the opportunity to switch.


Switching to door 2 wins.


The second contestant picks door 3. This time, Monty has to open door 1, and switching to door 2 again wins.


Only the third contestant who picks door 2 will lose when switching.


Now it is easier to see that switching wins more often than staying, and we can calculate exactly how often: in two out of three cases.


This is why Marilyn recommended switching doors.

It becomes easier to imagine the potential outcomes if we picture a large group of people going through the same situation.

In this scenario, the best answer is to always switch.
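
If you’d rather check that than take it on faith, here’s a minimal Monte Carlo sketch in Python. It assumes the classic rules: Monty always opens a goat door you didn’t pick, and always offers the switch:

```python
import random

def play(switch: bool) -> bool:
    """One round of the classic Monty Hall game. Returns True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither your pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.667
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.333
```

Switching wins whenever your first pick was a goat, which happens two times out of three.
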
Here’s an interesting twist, though:


Should you actually use this strategy on Let’s Make a Deal?


This is where the calculable-risk illusion rears its ugly head.

At the beginning of our discussion, I said the Monty Hall Problem was an example of risk. Our odds are calculable, and we understand the rules.

That’s why statistical analysis is helpful.


But reality is often far more complicated than any logic puzzle.


The question we need to ask in real life is: Will Monty ALWAYS give me the chance to switch?


For example, Monty might only let me switch if I chose the door with the car behind it.


If that’s the case, always switching is a terrible idea!
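
To see just how terrible, here’s the same kind of simulation under that assumed host rule, where Monty only offers the switch when you’re already sitting on the car:

```python
import random

def biased_monty(always_switch: bool, trials: int = 100_000) -> float:
    """Monty offers a switch ONLY when the contestant holds the winning door."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        if pick == car and always_switch:
            # The offer only appears here -- and switching moves you off the car.
            opened = random.choice([d for d in doors if d != pick])
            pick = next(d for d in doors if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(biased_monty(True))   # 0.0 -- always accepting the switch never wins
print(biased_monty(False))  # ~0.333 -- staying keeps your baseline 1/3
```

Under this rule, the offer itself is information: the moment Monty lets you switch, you already have the car.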


The real Monty Hall was actually asked about this in The New York Times.

Hall explicitly said that he had complete control over how the game progressed, and that he used that power to play on the psychology of the contestant.


For example, he might open their door immediately if it was a losing door, might offer them money not to switch from a losing door to a winning one, or might only allow them the opportunity to switch if they had a winning door.

Hall in his own words:


“After I showed them there was nothing behind one door, [Contestants would think] the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure.”


“If the host is required to open a door all the time and offer you a switch, then you should take the switch…But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood.”


You can see this play out in this specific example, taken again from Risk Savvy:


After one contestant picked door 1, Monty opened door 3, revealing a goat.
While the contestant thought about switching to door 2, Monty pulled out a roll of bills and offered $3,000 in cash not to switch.


“I’ll switch to it,” insisted the contestant.


“Three thousand dollars,” Monty Hall repeated, “Cash. Cash money. It could be a car, but it could be a goat. Four thousand.”


The contestant resisted the temptation. “I’ll try the door.”


“Forty-five hundred. Forty-seven. Forty-eight. My last offer: Five thousand dollars.”


“Let’s open the door.” The contestant again rejected the offer.


“You just ended up with a goat,” Monty Hall said, opening the door.


And he explained: “Now do you see what happened there? The higher I got, the more you thought that the car was behind door 2. I wanted to con you into switching there, because I knew the car was behind 1. That’s the kind of thing I can do when I’m in control of the game.”

What’s really happening here?


The contestant is committing the calculable-risk illusion.


They’re mistaking uncertainty for risk.


They think the game is about judging the probability that their door contains either a car or a goat.


But it isn’t.


The game is about understanding Monty Hall’s personality.


Whenever we shift from playing the game to playing the player, we have made the move from statistical analysis to game theory.


Instead of wondering what the probabilities are, we need to take into account:


1. Monty’s past actions, his personality, his incentives (to make the TV show dramatic and interesting)…


2. As well as what HE knows (which door has a car behind it)…


3. And what HE knows WE know… (that he knows which door has a car behind it)


4. And how that might change his behavior (since he knows we know he knows where the goats are, and he expects us to expect him to offer money if we picked the right door, he might do the opposite).


The map-territory problem can get us if we refuse to use statistical analysis where it’s warranted…


And when we keep using statistical analysis when it isn’t.


Now that we’ve seen some of these ideas in action, it’s FINALLY time to start addressing the root cause of all these emails:


The Coronavirus Pandemic.


We’ll be bringing all these mental models to bear on a tough problem:


How do I decide what to do, when so much is uncertain? And WHY is all of this so hard to understand?
