
All Woods Must Fail

O!
Wanderers in the shadowed land
Despair not!
For though dark they stand,
All woods there be must end at last,
And see the open sun go past:
The setting sun, the rising sun,
The day’s end,
or the day begun.
For east or west all woods must fail.
J. R. R. Tolkien

You wake suddenly into a room you do not recognize.

This is not your bed.

Not your dresser.

Not your table.

The floor is rough-hewn wood. There are windows, but they are opaque. Light filters through, but nothing of the environment is visible.

You blink; you give yourself a moment to collect your thoughts, to remember.
Nothing comes.

You cautiously place a foot on the floor: cool, solid, unfamiliar.

You tiptoe to the bedroom door.

The knob is large, brass. It looks ancient.

Above the door knob is a large brass plate. In its center there is a keyhole.
You bend down.

You close one eye and peer out.

What’s on the other side of the door?

Forest.

Forest forever, in every direction.

—–

For all our pretending…

Our intellectual strutting and preening, our claims of omnipotence and rationality, our technological marvels and accomplishments…

The world is as uncertain as ever.

Whenever humanity’s understanding seems to encroach, fast and sure, onto the ends of the universe…
I try to remind myself of the scale of what we’re discussing.

I think about chess.

Chess has 16 pieces per player and 64 spaces.

The rules are defined.

Everything that needs to be known is known.

But there are more potential games of chess than there are subatomic particles in the universe.
It is effectively infinite…

Despite its simplicity.

That’s been my biggest takeaway from studying game theory, risk, and COVID-19 these past few months:

The universe of unknown unknowns is impossibly vast…

Even if we understand the pieces.

Even if we think we understand how they all fit together.

I’ll give you one more example, before we head off into the forest in search of practical solutions…
Isaac Newton published the Philosophiæ Naturalis Principia Mathematica in 1687.
In it, he proposed three laws of motion:

1: An object either remains at rest or continues to move at a constant velocity, unless acted upon by a force.

2: The vector sum of the forces on an object is equal to the mass of that object multiplied by its acceleration.

3: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

We’ve had 333 years to sit and think about these laws.

In that time, we’ve managed to invent computers with computational powers exceeding anything a human being is capable of.

With these tools – Newton’s Laws and our computers – we can precisely model the movements of bodies through space.

If we know their starting points and their velocities, we can perfectly plot the paths they’ll take.
We can literally calculate their future.

Of course, for each body we add into the problem, the calculations get more complex.
Eventually the interactions become so intricate that the system is impossible to calculate. It becomes chaotic, non-repeating.

Infinite.

How many bodies does it take for the problem to become incalculable?

With our 333 years of pondering Newton’s Laws?

With our super-powerful computers?

With all the human knowledge in all the world?

How many bodies?

Three.

—–

The door swings open.

It creaks, briefly, but the sound fades, absorbed into the thick, humid air.

Trees in every direction. They are massive, towering things.

Sun filters through the pine needles and dapples the ground like so many little spotlights. It’s not morning, but it’s hard to tell exactly where the sun is overhead.

The trees seem to come straight up to the door. There’s room to walk, but only just.
It should feel oppressive, like they are crowding you. Instead, it feels like you’ve interrupted a conversation.

You step out; the forest floor is soft and dry. As you look around, the door behind you swings shut.

You reach out, but it latches. You try to open it but it’s locked.

You take a breath and hold it.

The sweet taste of undergrowth, copper in the soil, a sense memory of an old Christmas tree.

Which way do you go?

—-

Complexity is at the root of the universe.

So uncertainty is at the root of the universe.

So anxiety is at the root of the universe.

Anxiety is a perfectly normal reaction to the impossible task of trying to understand and predict a chaotic infinity of possibilities…

With a very limited, very non-infinite mind.

Despite that fact, we all have to wake up each day and do what needs to be done; to honor our commitments to ourselves and one another.

How do we navigate an uncertain world?

We choose the best path we can with the minimum amount of anxiety.

We use simple systems that allow us to quickly compare risks across categories.

We acknowledge our tendency to endlessly re-think, re-play, and re-consider our decisions…

And figure out how to let go.

We do the best we can, while minimizing our chances of losing too much.

In other words:

Heuristics…

Micromorts…

and MinMax Regret.

We discussed these concepts in an earlier post, so I won’t belabor them now.
Instead, what I want to do in this email is spell out…

Step by step….

Exactly how you can use these ideas to get a simple, practical estimate of how much risk you are willing to take on…

And to use that estimate to help you make the everyday decisions that affect your life.

—–

You walk until you get tired.

Something’s wrong, but you’re not sure what.

You don’t know where you are, so you could’ve chosen any direction at all.

You decided to simply go wherever the forest seems less dense, more open.

After a while (hours? days?) the trees have gotten further and further apart.

The slightly-more-open terrain has made walking easier.

You’re making more progress; towards what, you don’t know.

Every now and then you reach out to touch one of the passing trees; to trail your fingers along its bark.

The rough bumps and edges give you some textural variation, a way of marking the passing of time.

You look up. The sun doesn’t seem to have moved.

The sunlight still dapples. It’s neither hot nor cold. It isn’t much of anything.

Then you realize:

You haven’t heard a single sound since you’ve been out here.

Not even your own footsteps.

—–

Every good heuristic has a few components:

A way to search for the information we need…

A clear point at which to stop…

And a way to decide.

Let’s take each of these in turn.

Searching

We’ve discussed the “fog of pandemic” at length over the past few months.

With so much information, from so many sources, how do we know what to trust?

How do we know what’s real?

The truth is, 

we don’t.

In the moment, it is impossible to determine what’s “true” or “false.” As a group we may slowly get more accurate over time. Useful information builds up and gradually forces out less-useful information.

But none of that helps us right here, right now – which is when we have to make our decisions.

So what do we do?

We apply a heuristic to the search for information.

What does this mean?

Put simply: set a basic criterion for when you’ll take a piece of information seriously, and ignore everything that doesn’t meet that criterion.

Here’s an example of such a heuristic:

Take information seriously only when it is reported by both the New York Times and the Wall Street Journal.

Why does this work?

1. These are “credible” sources that are forced to fact-check their work.

2. These sources are widely monitored and criticized, meaning that low-quality information will often be called out.

3. These sources are moderate-left (NYT) and moderate-right (WSJ). Thus, information that appears in both will be less partisan on average.

While this approach to vetting information might be less accurate than, say, reading all of the best epidemiological journals and carefully weighing the evidence cited….

Have you ever actually done that?

Has anyone you know ever done that?

Have half the people on Twitter who SAY they’ve done that, actually done that?

Remember:

Our goal is not only to make the best decisions possible…

It’s to decrease our anxiety along the way.

Using a simple search heuristic allows us to filter information quickly, discarding the vast majority of noise and focusing as much as possible on whatever signal there is.

You don’t have to use my heuristic; you can make your own.

Swap in any two ideologically-competing and well-known sources for the NYT and the WSJ.
Specifically, focus on publications that have:

– A public-facing corrections process
– A fact-checking process
– Social pressure (people get upset when they “get it wrong”)
– Differing ideological bents
– Print versions (television and internet tend to be too fast to properly fact-check)

Whenever a piece of information needs to be assessed, ask:

Is this information reported in both of my chosen sources?

If not, ignore it and live your life.
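If it helps to see the rule pinned down precisely: the whole heuristic fits in a few lines of code. Here’s a minimal sketch (my own illustration – the source names are just the examples from above):

```python
def take_seriously(reporting_sources: set[str],
                   trusted: frozenset[str] = frozenset({"NYT", "WSJ"})) -> bool:
    """Search heuristic: act on a claim only if EVERY trusted source reports it."""
    return trusted <= reporting_sources  # subset test: all trusted sources present

print(take_seriously({"NYT", "WSJ", "Twitter"}))  # True - both trusted sources report it
print(take_seriously({"NYT", "Twitter"}))         # False - ignore it and live your life
```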

Stopping

When do you stop looking for more information, and simply make a decision?
This is a complicated problem. It’s even got its own corner of mathematics, called optimal stopping.

In our case, we need a way to prevent information overload…the constant sense of revision that happens when we’re buffeted by an endless stream of op-eds, breaking news, and recent developments.

I’ve written about this a bit in my blog post on information pulsing.

The key to reducing the amount of anxiety caused by the news is to slow its pulse.

If we control the pace at which information flows into our lives, we control the rate at which we need to process that information, and we reduce the cognitive load it requires.

My preferred pace is once a week.

I get the paper every Sunday. I like the Sunday paper because it summarizes the week’s news. Anything important that happened that week shows up in the Sunday paper in some shape or form.

The corollary is that I deliberately avoid the news every other day of the week.

No paper, no radio, no TV news, nothing online.

This gives me mental space to pursue my own goals while keeping me informed and preventing burnout.

Presuming that we’re controlling the regular pulse of information into our lives, we also need a stopping point for decision making.

Re-examining your risk management every single week is too much.

Not only is it impractical, it predisposes us to over-fitting – trying too hard to match our mental models to the incoming stream of data.

My recommendation for now is to re-examine your COVID risk management decisions once a month.

Once a month is enough to stay flexible, which I think is necessary in an environment that changes so rapidly.

But it’s not so aggressive that it encourages over-fitting, or causes too much anxiety.

We are treating our risk management like long-term investments.

Check on your portfolio once a month to make sure things are OK, but put it completely out of your head the rest of the time.

—-

You walk on, always following the less-wooded trail.

The trees are more sparse now.

It’s easier to walk, easier to make your way.

Eventually, you come to a clearing.

Your legs ache. You find a small log and sit down, taking a breath.

The air is warm. It hangs over you.

You breathe again.

Your eyes close.

Maybe you sleep.

You’re not sure.

None of it seems real.

Maybe you’re still dreaming.

But maybe you aren’t.

You could lie down, here. The ground is soft. There’s a place to comfortably lay your head.
It would be easy enough to drift away. It would be pleasant.

Or, you could push on.

Keep walking.

Maybe progress is being made.

Maybe it isn’t that far.

But maybe it is.

———

Deciding

We come now to the final stage of our process – deciding.

We’ve set parameters for how we’ll search for information…

And rules for how we’ll stop searching.

Now we need to use the information we take in to make useful inferences about the world – and use those inferences to determine our behavior.

This stage has a few more steps to it.

Here’s the outline:

1. Get a ballpark risk estimate using micromorts for your state.
2. Play the common knowledge game.
3. Establish the personal costs of different decisions within your control.
4. Choose the decision that minimizes the chances of your worst-case scenario.

Let’s break each of these down in turn.

1. Get a ballpark risk estimate using micromorts for your state.

I’ve actually built you a handy COVID-19 Micromort Calculator that will calculate your micromorts per day and month based on your state’s COVID-19 data.

But if you don’t want to use my calculator, here’s how to do this on your own:

– Find the COVID-19 related deaths in your state for the last 30 days. Why your state? Because COVID-19 is highly variable depending on where you live.

– Find the population of your state (just google “My State population” and it should come right up).


(Note: my COVID-19 Micromort Calculator pulls all this data for you.)

– Go to this URL: https://rorystolzenberg.github.io/micromort-calculator

– Enter the state’s COVID-19 deaths in the “Deaths” box.

– Enter the state population in the “People In Jurisdiction” box.

– In the “Micromorts per day” section put “30” in the “Days Elapsed” box.

Your calculator should look something like this:

You’ve now calculated the average micromorts of risk per day in your state.
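If you’d rather check the arithmetic yourself than trust either calculator, here’s a minimal sketch of the same computation. The example numbers are hypothetical placeholders, not real state data – plug in your own figures:

```python
def micromorts_per_day(deaths: int, population: int, days: int = 30) -> float:
    """Average micromorts of risk per day over the period.

    One micromort = a one-in-a-million chance of dying.
    """
    death_risk = deaths / population            # probability of dying over the period
    total_micromorts = death_risk * 1_000_000   # convert that probability to micromorts
    return total_micromorts / days              # spread evenly across the period

# Hypothetical example: 500 deaths over 30 days in a state of 3.6 million people.
print(f"{micromorts_per_day(500, 3_600_000):.2f} micromorts per day")  # ~4.63
```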

To compare this risk to other risks, situate your micromorts on this spreadsheet:
https://docs.google.com/spreadsheets/d/1xo3VrcDu6sMNGyykGKxgvRXv9kCwnFNAMHBoyc1y67Q/edit?usp=sharing

Take a look at the list and figure out how much risk we’re really talking about.
For example, the risk level in the image above is 4.63 micromorts – let’s round that to 5.
That means that I have about as much risk of dying from COVID-19 as I would of dying during a scuba dive, and more risk than I’d take on during a rock climb.

It’s also riskier than would be allowed at a workplace in the UK.

However, it’s less risky than going under general anesthetic, or skydiving.

Keep in mind, however, that these risks are per day.

Comparing apples to apples, I can ask:

“My COVID-19 risk is equivalent to the risk of going scuba diving every single day. Is that an acceptable risk level for me?”

2. Play the common knowledge game.

Now that we’ve got a rough estimate of risk, let’s think about other people.
You know.

Them.

Statistical risk matters, obviously.

But COVID-19 has a unique property:

It’s viral.

Literally.

If I’m riding a motorcycle, my risk does not increase if other people ride motorcycles, too.

For COVID-19? The actions of others have a big effect on my personal risk level.

This is where the common knowledge game comes in handy.

(You’ll recall our discussion of Common Knowledge games in previous emails, namely The Beauty Contest, Missionaries, and Monty Hall.)

We don’t need to just weigh our own options…

We need to weigh what we think other people will do.

As an example:

My own state, Connecticut, has seen declining case numbers of COVID-19 for a few months now.
That gradual decline has led to a loosening of restrictions and a general increase in economic activity.

And that’s great!

But when it comes to sending our child to school next year, I’m still extremely worried.

Why?

Because I’m assuming that other people will see the declining case count as an indication that they can take on more risk.

What happens when people take on more risk in a pandemic?

Case numbers go up.

I ran into a similar issue early in the pandemic with regards to where I work.

I have a small office in a building downtown.

My room is private but other people share the space immediately outside my door.
Throughout the highest-risk days of the pandemic, when everyone else was staying home, I kept coming into the office to work.

Why?

Because everyone else was staying home.

They reacted rationally to the risks, and so my office building was empty.
Since it was only me, my personal risk remained low.

Now that risk levels are lower, people have started coming back to work…

Which means I am now more likely to work from home.

Why?

My actual risk remains the same, or higher, since more people have COVID-19 now than they did in the beginning.

But because case counts are declining, people feel safer and are more likely to come into the office, increasing my exposure.

Again:

Statistical risk matters…

But so does what other people do about that risk.

So.

While our micromort number is extremely useful, we need to run it through a filter:

How do I think the people around me will react to this level of risk?

What is “common knowledge” about our risk level?

What are the “missionaries” (news sources that everyone believes everyone listens to) saying, and how will that affect behavior?

Factor this into your decision-making.

—–

You keep moving.

Little by little, the ground becomes a trail, and the trail becomes a path.

Have other people been this way?

It’s hard to tell.

Maybe just deer.

But it’s a path. A way forward.

You think you detect some slight movement in the sun overhead.

Maybe, just maybe, time is passing after all.

With the path, there’s something to cut through the sameness – some way to judge distance.

Forward movement is forward movement.

You keep moving.

And then, something you never expected:

A fork.

Two paths.

One to the left, one to the right.

They each gently curve in opposite directions. You can’t see where they lead.

Something touches your back.

The wind.

Wind? you wonder. Was it always there?

Which way do I go?

—–

3. Establish the personal costs of different decisions within your control.

We’ve thought about risk, and we’ve thought about how other people will react.

Let’s take a moment to think about costs.

Every decision carries a cost.

It could simply be an opportunity cost (“If I do this, I can’t do this other thing…”)

Or the cost could be more tangible (“If I don’t go to work, I’ll lose my job.”)

One of the things that’s irritating about our discourse over COVID-19 is the extent to which people seem to assume that their preferred action is obviously the right way to go…while ignoring its costs.

Yes, lockdowns carry very real costs – economic, emotional, physical.

Yes, not going into lockdowns carries very real costs – hospitalizations, deaths, economic losses.

Even wearing masks – something I am 100% in favor of – has costs. It’s uncomfortable for some, hampers social interaction, is inconvenient, etc.

We can’t act rationally if we don’t consider the costs.

So let’s do that.

Think through your potential outcomes.

You could get sick.

You could die.

There’s always that.

What else?

Maybe the kids miss a year of school.

What would the emotional repercussions be?

Or logistical?

Could you lose your job?

Lose income?

Have trouble paying bills?

What if there are long-term health effects?

What if the supply chain gets disrupted again…what if food becomes hard to find?

Think everything through.

Feel free to be dire and gloomy here…we’re looking for worst-case scenarios, not what is likely to happen.

Once you’ve spent some time figuring this out, make a quick list of your worst-cases.

Feel them emotionally.

We’re not trying to be perfectly rational here. We’re getting in touch with our emotional reality.

We’re not saying, “What’s best for society? What do people want me to do?”

We’re asking:

Which of these scenarios would cause me the most regret?

Regret is a powerful emotion.

It is both social and personal. In many cases, we would rather feel pain than regret.

“’Tis better to have loved and lost, than never to have loved at all.”

Rank your potential outcomes by “most regret” to “least regret.”

Which one is at the top?

Which outcome would you most regret?

THAT’S your worst-case scenario.

4. Choose the decision that minimizes the chances of your worst-case scenario.

Once you know:

– your rough statistical risk (micromorts)
– how other people will react (common knowledge game)
– and your own worst-case scenario (regret)

…You can start putting a plan in place to minimize your risk.

Here we are utilizing a strategy of “MinMax Regret.”

The goal is not to say “how can I optimize for the best possible scenario”….

…Because that’s difficult to do in such uncertain times.

It’s much easier to simply cover our bases and make sure that we do everything in our power to protect ourselves.
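To make the decision rule itself concrete, here’s a toy sketch of a MinMax Regret calculation. The actions, scenarios, and payoff scores are all hypothetical – only the procedure matters:

```python
# Payoffs are made-up "how good is this outcome" scores; higher is better.
actions = {
    "keep normal routine": {"mild wave": 10, "bad wave": -100},
    "strict isolation":    {"mild wave": -5, "bad wave": -10},
}
scenarios = ["mild wave", "bad wave"]

# Regret = best achievable payoff in a scenario minus what this action gets.
best = {s: max(payoffs[s] for payoffs in actions.values()) for s in scenarios}
max_regret = {name: max(best[s] - payoffs[s] for s in scenarios)
              for name, payoffs in actions.items()}

print(max_regret)                           # {'keep normal routine': 90, 'strict isolation': 15}
print(min(max_regret, key=max_regret.get))  # 'strict isolation' minimizes worst-case regret
```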

Thinking about your worst case scenario from Step 3, what can you do to ensure it doesn’t happen?

What stays? What goes?

Restaurants?

Visiting your parents?

Play dates for the kids?

What are you willing to give up in order to ensure the highest regret scenario doesn’t happen?

My own worst-case scenario?

Getting someone in a high risk category (like my Mom, or my son, who has asthma) sick.

What am I willing to give up to avoid that?

Eating at restaurants is out…

But we’ll get take out and eat outside.

Business trips are out. Easy choice.

I wear a mask.

I haven’t visited my mom, even though we miss her.

Can’t get her sick if we don’t see her.

But I visited my grandmother by talking to her through her window, with a mask on.

I’m not saying these decisions are objectively right or wrong…

But they were consistent with my goal:

Avoid the regret of getting the vulnerable people I love sick.

Once you’ve thought this through…

What’s my current risk?

How will other people react?

What’s my worst case scenario?

What am I willing to give up to minimize the possibility of that happening?

…Set up some ground rules.

What you’ll do, what you won’t.

What you’ll avoid, what you’ll accept.

And then don’t think about it at all until next month.

Give yourself the unimaginable relief…

Of deciding….

And then?

Forgetting.

—–

How long has it been?

Time seems to have stopped.

Or perhaps, moved on.

You keep walking, mostly as a way of asserting control.

My choice. Keep walking.

The path curves for a bit, then straightens back out.

Slowly but surely, it gets wider and wider…the edges of the forest on either side drifting further and further apart.

It’s like a curtain drawing back.

Your eyes were on the road, but as you look up now you realize….
You’re not in the forest anymore.

You’re not even on the path.

It’s open, all around.

Wide, impossibly wide. The sky and the earth touch each other.

The horizon is everywhere.

You’re glad you kept walking.

You’re glad you didn’t stop.

All woods must fail, you think.

As long as you keep walking.

Monty Hall

——
Should I switch, or stay?
—–

Let’s begin our review all the way back at the beginning, with our very first email about risk…

Bad Priors.

Everyone comes to situations of risk with pre-existing opinions. We all use our experiences to try and make sense of how the world works. These are our priors.


Priors act as a map of the territory of reality; we survey our past experiences, build abstract mental models from them, and then use those mental models to help us understand the world.


But priors can be misleading, even when they’re based on real experiences. Why?

For one, we often mistake group-indexed averages for individually-indexed averages.


For another, we often mistake uncertainty for risk.


Risk is a situation in which the variables (how likely a scenario is to happen, what we stand to lose or gain if it does) are known.

When we face risk, our best tool for decision-making is statistical analysis.


Imagine playing Russian Roulette; there’s one gun, one bullet, and 6 chambers. You can calculate your odds of success or failure.


Uncertainty is a situation in which the variables are unknown. Imagine a version of Russian Roulette where you don’t get to know how many bullets there are, or even how many chambers in the gun.

Not only can you not calculate your odds in this scenario, trying to do so will only give you a false sense of confidence.


When we face uncertainty, our best decision-making tool is game theory.


Mistaking risk for certainty is called the zero-risk illusion.


This is what happens when we get a positive result on a medical test, and convince ourselves there’s no way the test could be wrong.


But there’s a more subtle (and often more damaging) illusion to think about:
Mistaking uncertainty for risk. This is known as the calculable-risk illusion.


To understand how we get to this illusion, we have to understand a bit about derivatives.

Because the world is infinitely complex, we can’t always interact directly with the things we care about (referred to as the underlying).


For example, we may care about the health of a company – how happy their employees are, how big their profit margin is, how much money they have in savings.


But it’s hard to really get a grip on all those variables.


To get around problems like this, we often look at some other metric (referred to as the derivative) that we believe is correlated with the thing we care about.


For example: we care about the health of the company (the underlying). But because that’s so complex, we choose to pay attention to the stock price of the company instead (the derivative). That’s because we believe that the two are correlated: if the health of the company improves, the stock price will rise.


The relationship between the underlying and the derivative is called the basis.

If you understand the basis, you can use a derivative to understand the underlying.


But the world is complicated. We often DON’T really understand the basis. Maybe we mistook correlation for causation. Or maybe we DID understand the basis, but it changed over time.


The problem is that we rarely stop to re-examine our assumptions about how the world works.

This puts us in a situation where we mistake uncertainty for risk. We think we have enough information to calculate the odds. We think we can use statistical analysis to figure out the right thing to do.


The problem is that we often don’t have enough information. This is the “Turkey Problem”: every single data point tells us the farmer treats us well.
And that’s true…right up until Thanksgiving Day.


We cruise along, comforted by seemingly-accurate mathematical models of the world…only to be shocked when the models blow up and everything falls apart.


That’s the calculable-risk illusion.


This is how our maps can stop matching our territory.


OK – so we know that when situations are uncertain (and that’s a lot, if not most, of the time), we’re supposed to use game theory.


What are some examples of using game theory to help make decisions?


One example is the Common Knowledge Game.

Common knowledge games are situations in which we act based on what we believe other people believe.


Like a beauty contest where voting for the winning contestant wins you money, it’s not about whom you like best (first-order decision making)…
Or whom you think other people like best (second-order decision making)…


But whom you think other people will think other people like best (third-order decision making).


So: how do we know what other people know?


Watch the missionaries.


As in the case of the eye-color tribe, a system’s static equilibrium is shattered when public statements are made.


Information is injected into the system in such a way that everyone knows that everyone else knows.


Our modern equivalent is the media. We have to ask ourselves where other people think other people get their information.

Whatever statements come from these sources will affect public behavior…
…Not because any new knowledge is being created, but because everyone now knows that everyone else heard the message.


(This, by the way, is why investors religiously monitor the Federal Reserve. It’s not because the Fed tells anyone anything new about the state of the economy. It’s because it creates “common knowledge.”)

Whew! That’s a lot of stuff.


Let’s try to bring all these different ideas together in one fun example:

The Monty Hall Problem.

Monty Hall was a famous television personality, best-known as the host of the game show Let’s Make a Deal.


Let’s Make a Deal featured a segment that became the setting for a famous logic problem…


One that excellently displays how our maps can become disconnected from the territory.


The problem was popularized by Marilyn vos Savant in her column in the American magazine Parade. Here’s the problem as she formulated it:


Suppose you are on a game show, and you’re given the choice of three doors.

Behind one door is a car, behind the others, goats.


The rules are that you can pick any door you want, and you’ll also get a chance to switch if you want.

You pick a door, say number 1, and the host, who knows what’s behind the doors, opens another door, say number 3, which has a goat.


He says to you, “Do you want to pick door number 2?”

Is it to your advantage to switch your choice of doors?


Take a minute to think it through and come up with your own answer.
Let’s start by asking ourselves:


Is this a scenario of risk or uncertainty?


The answer is risk.

We know the odds, and can calculate our chances to win. That means statistical analysis is our friend.


So how do we calculate our odds?


The typical line of reasoning will go something like this:


Each door has a 1/3 probability of having the car behind it.


One door has been opened, which eliminates 1/3 of my chances.


Therefore, the car must be behind one of these two doors. That means I have a 50/50 chance of having picked the right door.


That means there’s no difference between sticking with this door or switching.


While this conclusion seems obvious (and believe me, this is the conclusion I came to)…


It turns out to be wrong. 🙂


Remember our discussion of medical tests?


To figure out how to think about our risk level, we imagined a group of 1,000 people all taking the same tests.


We then used the false positive rate to figure out how many people would test positive that didn’t have the disease.


Let’s apply a similar tool here.


Imagine three people playing this game. Each person picks a different door.
I’ll quote here from the book Risk Savvy, where I first learned about the Monty Hall Problem:


Assume the car is behind door 2.


The first contestant picks door 1. Monty’s only option is to open door 3, and he offers the contestant the opportunity to switch.


Switching to door 2 wins.


The second contestant picks door 3. This time, Monty has to open door 1, and switching to door 2 again wins.


Only the third contestant who picks door 2 will lose when switching.


Now it is easier to see that switching wins more often than staying, and we can calculate exactly how often: in two out of three cases.


This is why Marilyn recommended switching doors.

It becomes easier to imagine the potential outcomes if we picture a large group of people going through the same situation.

In this scenario, the best answer is to always switch.
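And if you’d rather check the math than imagine the contestants, a quick simulation (my own sketch, not from Risk Savvy) gives the same two-out-of-three answer:

```python
import random

def play(switch: bool) -> bool:
    """One round of the Monty Hall game. Returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty, who knows where the car is, opens a door that is neither the
    # contestant's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Staying wins about 1/3 of the time; switching wins about 2/3.
```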
Here’s an interesting twist, though:


Should you actually use this strategy on Let’s Make a Deal?


This is where the calculable-risk illusion rears its ugly head.

In the beginning of our discussion, I said the Monty Hall Problem was an example of risk. Our odds are calculable, and we understand the rules.

That’s why statistical analysis is helpful.


But reality is often far more complicated than any logic puzzle.


The question we need to ask in real life is: Will Monty ALWAYS give me the chance to switch?


For example, Monty might only let me switch if I chose the door with the car behind it.


If that’s the case, always switching is a terrible idea!


The real Monty Hall was actually asked about this question in The New York Times.

Hall explicitly said that he had complete control over how the game progressed, and that he used that power to play on the psychology of the contestant.


For example, he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might only allow them the opportunity to switch if they had a winning door.

Hall in his own words:


“After I showed them there was nothing behind one door, [Contestants would think] the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure.”


“If the host is required to open a door all the time and offer you a switch, then you should take the switch…But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood.”


You can see this play out in this specific example, taken again from Risk Savvy:


After one contestant picked door 1, Monty opened door 3, revealing a goat.
While the contestant thought about switching to door 2, Monty pulled out a roll of bills and offered $3,000 in cash not to switch.


“I’ll switch to it,” insisted the contestant.


“Three thousand dollars,” Monty Hall repeated, “Cash. Cash money. It could be a car, but it could be a goat. Four thousand.”


The contestant resisted the temptation. “I’ll try the door.”


“Forty-five hundred. Forty-seven. Forty-eight. My last offer: Five thousand dollars.”


“Let’s open the door.” The contestant again rejected the offer.


“You just ended up with a goat,” Monty Hall said, opening the door.


And he explained: “Now do you see what happened there? The higher I got, the more you thought that the car was behind door 2. I wanted to con you into switching there, because I knew the car was behind 1. That’s the kind of thing I can do when I’m in control of the game.”

What’s really happening here?


The contestant is committing the calculable-risk illusion.


They’re mistaking uncertainty for risk.


They think the game is about judging the probability that their door contains either car or goat.


But it isn’t.


The game is about understanding Monty Hall’s personality.


Whenever we shift from playing the game to playing the player, we have made the move from statistical analysis to game theory.


Instead of wondering what the probabilities are, we need to take into account:


1. Monty’s past actions, his personality, his incentives (to make the TV show dramatic and interesting)…


2. As well as what HE knows (which door has a car behind it)…


3. And what HE knows WE know… (that he knows which door has a car behind it)


4. And how that might change his behavior (since he knows we know he knows where the goats are, and he expects us to expect him to offer money if we picked the right door, he might do the opposite).


The map-territory problem can get us if we refuse to use statistical analysis where it’s warranted…


And when we keep using statistical analysis when it isn’t.


Now that we’ve seen some of these ideas in action, it’s FINALLY time to start addressing the root cause of all these emails:


The Coronavirus Pandemic.


We’ll be bringing all these mental models to bear on a tough problem:


How do I decide what to do, when so much is uncertain? And WHY is all of this so hard to understand?

All In Our Heads


What’s left to do but go to the beach?


As we come into the home stretch of our latest series of emails…
(Dealing with risk, uncertainty, and how to think about the Coronavirus pandemic)…


You may have noticed that I’ve been avoiding something.
I’ve been completely mute about a critically important component of understanding your COVID-19 risk:

How risky it actually is.


There hasn’t been a single statistic, figure, fatality rate or case number.


No models.


No predictions.


Nothing.


And this glaring omission is the topic of this email.


I am going to try and make an argument I have been building to for two months now.


Namely:


We cannot know how risky COVID-19 is…


And trying to find out is only making it worse.


If that sounds like a problematic statement to you, I get it.


All I can ask is that you stick with me through this email.


Let’s start where ALL good philosophical discussions start:


On the internet.


I’d like to share some examples of recent COVID-19 posts I’ve found.


Before I do, a few points:


– It doesn’t matter when these arguments were made.

– I’m not arguing for or against any of the arguments made in these posts.


– These are purely meant as examples of rhetoric, so don’t get hung up on any of the numbers they use or don’t use.


Cool?

Cool.

All of these posts were made by very smart people.


All of these people are using publicly-available COVID data to make their arguments.


And while I’ve only given you a few quick screenshots, you can tell these people are arguing forcefully and rationally.


These are smart people, being smart.


So let me ask you:


Do you feel smarter?


Now that you’ve read these, do you understand the situation better?

My guess is….

No.


My guess is that you’ve actually read several threads, or posts, or articles like these.


Well-argued, quoting numerous “facts,” dutifully citing their sources…


And you’ve come away more confused than before.


I know that that’s been my experience.

To dig into why this is the case, we have to take a bit of a journey.


We’re going to start with COVID-19, follow it all the way back to the roots of Western Philosophy, make a hard left into game theory…

And end up back at COVID-19 again.


Let’s start with a (seemingly) basic question:


Why is it so hard for smart people to agree on even basic facts about Coronavirus?


I’m not talking about your Uncle who gets all his news from YouTube because he thinks aliens control the media.


I’m talking smart people with educated backgrounds.

People who understand bias, who understand systems.


How is it possible that even THEY can’t agree on basic things like how dangerous the virus is?


There are two basic categories of problems with understanding Coronavirus:


One is logistical (it’s just really hard to get good information)…


And one is epistemological (it’s just really hard for us to know things, in general).


Let’s start with the first category – logistical.

Gathering accurate data about Coronavirus is extremely difficult.


For one, the size and scale of the outbreak makes this a unique event.


There are very few historical parallels, and none within living memory of most of the people who study this kind of thing.


Two, Coronaviruses (the general family that our current pandemic comes from) are not well understood.


Funding to study them, before the pandemic, was hard to come by.


Not only that, but Coronaviruses are notoriously difficult to grow in lab cultures.


All of this means that only a handful of scientists specialized in Coronaviruses…leaving us woefully unprepared.


On top of a general lack of knowledge, we have to be careful to separate Coronavirus and COVID-19 (the disease that the virus causes).


While Coronavirus doesn’t seem to vary much around the world, COVID-19 does. That’s because the disease affects people differently depending on their unique health risks, the society in which they live, etc.


If you’re overweight, your experience of COVID-19 may be very different than someone who’s not.


Same if the area where you live has a lot of pollution.


Or if you smoke.


Or if you’re above 40.


Or if you’re diabetic.


All this makes the overall impact of the disease extremely hard to predict. We’re not sure what the important risk factors are, or how dramatically they impact the disease’s progression.

Take the fatality rate, for example.


Personally, I’ve seen people claim the fatality rate is as high as 7%, and others say it’s as low as 0.05%.

Why is this so hard to figure out?


The number we’re talking about is referred to as the “case fatality rate,” or CFR. CFR is the percentage of people diagnosed with COVID-19 who die.

That seems pretty straightforward.

But, as we mentioned above, the disease’s effects vary from country to country.


CFR also changes based on how many people you test, and WHO you test – if you test only people in the emergency room, or people in high-risk demographics, the percentage of fatalities will be higher. If you test everyone on Earth, the percentage of fatalities will be lower.


The CFR will also change based on the quality of treatment; after all, better treatment should result in a better chance of living.


That means that CFR will not only change from country to country, but will change within the same country over time as treatments evolve and testing ramps up.
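A quick worked example makes the testing effect concrete. These numbers are purely hypothetical – chosen only to mirror the 7% and 0.05% figures above:

```python
# CFR = deaths / diagnosed cases, so it depends entirely on who gets tested.
deaths = 70

# Narrow testing: only severe, hospitalized patients are diagnosed.
severe_cases = 1_000
print(f"CFR with narrow testing: {deaths / severe_cases:.1%}")    # 7.0%

# Broad testing: mild and asymptomatic infections are diagnosed too.
all_infections = 140_000
print(f"CFR with broad testing:  {deaths / all_infections:.3%}")  # 0.050%
```

Same virus, same deaths – two wildly different fatality rates.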

Based solely on the logistical challenges around understanding an event of this scale, we should expect a great deal of uncertainty.


But.


Our fundamental problem with understanding coronavirus is NOT that we lack data or smart people to help us process that data.


Our problem is that data doesn’t help us at all.


How could this be the case?


If we need to understand something, won’t more data help with that?


This brings us to the second category of problem that I mentioned earlier:
Epistemological.

See, the thing that I’ve found so completely fascinating about this pandemic…


Is how directly it brings us out of the routine of our everyday lives…
Right into the domains of Western Philosophy.


What I’m going to argue here is that our struggle to understand coronavirus is directly related to our struggle to understand anything.


See, normally, we get away with not really having to know how we know stuff.


We operate according to “generally accepted best practice” or “custom,” and get along fine.


But Coronavirus is different.


Someone might say, “I just listen to what a majority of epidemiologists say.”


But how do we know which epidemiologist to listen to?


Or how many epidemiologists need to agree to constitute a majority?


Or whether the majority believing something is even evidence that it’s true?


Or whether it’s all a media plot?


We’ve officially left the realm of epidemiology…


And entered the realm of epistemology (the theory of knowledge).

This is why our online arguments over fatality rates go nowhere:


They aren’t about what we know…

They’re about how we know what we know.


And with that, I’d like to introduce you to Karl Popper and Thomas Kuhn.

Karl Popper was an Austrian-British philosopher who lived from 1902 to 1994.


Popper was an incredibly influential philosopher of science, and I will be the very first to admit that my understanding of his work is limited. I highly encourage you to research Popper on your own, rather than taking my own interpretation of him as fact.

That said, Popper’s primary contribution to my own mental models has been this:


Theories can never be empirically proven, only falsified.


Here’s what that means to me:


We can never be absolutely sure that what we believe corresponds exactly to external reality.


Western Philosophy in general has spent the last 200 years more or less demolishing the idea that we can somehow escape our own mental constructs, sense impressions, and physical limitations to achieve a “God’s-Eye View” of the world as it is.


That doesn’t mean there is no external reality, necessarily; just that we can never be 100% sure that we know exactly what it is.


So, how do we go about gaining knowledge?


Popper argues that we gain knowledge by falsifying beliefs, rather than “proving” them, per se.


Let’s take a hypothetical example:


Say a friend of yours claims that the Dodo is extinct.


This may or may not be “true” in the absolute sense. And, indeed, even if we haven’t seen a Dodo in a century or more, we can’t be absolutely certain, for instance, that there isn’t some cave somewhere with a hidden colony of Dodos.


There remains a possibility, even if it’s small, that the Dodo is not extinct.
However, I can falsify the statement “the Dodo is extinct” very easily, simply by locating a living Dodo.

Thus, knowledge proceeds negatively; not by discovering absolute truths, but by systematically falsifying the untrue.


Knowledge, then, is a process by which we become more and more accurate over time through falsification.


That may seem like it makes sense, but if knowledge is ONLY falsification, why do we feel like we “know” things?


How do some ideas become “accepted wisdom”?


To understand this piece of the puzzle, we come to Thomas Kuhn.


Thomas Kuhn was an American philosopher of science who lived from 1922 to 1996.


Kuhn’s most famous work is The Structure of Scientific Revolutions, in which he proposed a model for understanding how science advances.


Let’s pick up where we left off with Popper.


Popper proposed that knowledge is about finding what isn’t true, a process that becomes more accurate over time…


(even though we can never be 100% sure what we know is right).


Imagine you’re a scientist studying the weather.


You perform several experiments, testing (and often disproving) your hypotheses.


In one experiment, you discover an interesting connection between electrical currents and snowfall in Ohio.


You publish your findings in the prestigious Journal of Ohio Snowfall Studies.


I, upon reading your article, decide to try and replicate your findings.


Upon doing so, I find myself unable to recreate the results that got you so excited.
In fact, MY findings indicate there is no connection between electrical currents and Ohio snowfall.


I publish MY findings, which seem to falsify YOUR findings.


A colleague, interested in the resulting debate, then attempts to reconcile the two sets of findings by offering a theory that a third element (blood pressure of nearby raccoons) is actually the cause of the phenomenon.


A FOURTH scientist then tries to disprove the Stressed-Out-Raccoon Hypothesis…


And so on.


Note that the scientific process described above is largely destructive; it is through gradual falsification that knowledge progresses.
Over time, these layers of falsification build up, leaving very few ideas unscathed.


The ideas that are left standing gradually become accepted wisdom. So accepted wisdom is just “all the stuff we haven’t been able to falsify.”


Kuhn’s insight was that just because something is “accepted wisdom” does not mean it is true.


In fact, it is through the accumulation of accepted wisdom that entire eras of scientific progress are overturned.


How is this possible?


Our understanding of the world is never complete. Remember – Popper argues that we can’t ever have a fully accurate, absolutely certain view of the world.


The best we can get are theories that are useful; theories that prove their worth by making the world a better place.


But a theory’s usefulness doesn’t guarantee its accuracy.


For example, I might believe that singing to my plants every day helps them grow.


Now, that theory might be useful; it might encourage me to water the plants every day. It might lower my overall stress levels by getting me to spend time outside.


But that doesn’t mean singing to the plants really helps them grow in and of itself.


Kuhn argued that something similar happens in the sciences. Over time, through experimentation, we find useful ideas; ideas that help us progress.
But because our understanding is always limited, and there is no possibility of being 100% certain, mistakes will always creep into the systems of knowledge we build.


We can’t avoid this; it’s inevitable. Humans make mistakes, and so mistakes are built into everything that we do.


In science and philosophy, these mistakes manifest as seemingly unsolvable problems, contradictory bits of knowledge, and straight-up weirdness that just doesn’t “fit” what we “know” to be “true.”


In Kuhn’s formulation, when these questions become too much to bear – when the blind spots in our picture of the world become too obvious – a revolution occurs.


This means that the “paradigm” (Kuhn’s word for the scientific consensus of how the world works) becomes undone…and a new paradigm takes its place.


Just as Copernicus replaced the Ptolemaic System…and Newton undermined Copernicus…and the Theory of Relativity destroyed Newton…


So does scientific consensus advance…


By destroying itself.


In other words:


Knowledge is an act of creative destruction.


Now.


We’ve gone very high-level here…


So let’s bring this back to Earth:


What does this have to do with Coronavirus?


I’m certainly not arguing that Coronavirus is the herald of a scientific revolution.


The basic tools for understanding what’s happening (epidemiology, statistical analysis, modeling, etc) have been around for a long time.
What I’m arguing is that our attempts to “fix” our knowledge of the virus and to “get it right” are actually making it harder to understand.


Stay with me, now. 🙂


We discussed, above, the idea that science moves forward through a process of creative destruction by systematically falsifying and undermining itself.
Typically, this process is contained within the pages of the various journals and academic conferences of a given field.


Want to debate the current state of genetics? There are designated places to do so.


Want to get into an argument about geology? Figure out where that debate is occurring and get started!


The debate over Coronavirus, though, is not happening within the pages of academic journals.


It is not limited to scholarly conferences and meetups.


It is occurring everywhere at once.


It’s on the internet.


The news.


The paper.


Twitter.


Facebook.


Your friends are discussing it.


Your neighbors bring it up.


Just as the virus itself is pandemic (meaning, literally everywhere), so is information about the virus.


Not only is it everywhere, everyone is weighing in.


Epidemiologists? Sure. They’re celebrities now.


But also:


General practitioners, statisticians, politicians, teachers, plastic surgeons, marketers, historians, plumbers, pundits, accountants…


Everyone has an opinion.


When you combine:


Impossibly large amounts of uncertain data….


With an impossibly large amount of people interpreting that data…


You get a massive, exponential increase in the amount of information in the system.


It isn’t that we “don’t know” enough about COVID-19 and the Coronavirus…


It’s that we know too much, and have no way of discerning what’s right.


Let’s return, briefly, to the idea of the common knowledge game that we addressed in “The Beauty Contest” and “Missionaries.”


How could anyone possibly comprehend such a mess of information?


We can’t. It’s impossible.


So…what do we do instead?


We turn to Missionaries.


Missionaries are the people we look to in order to know what other people know.


They’re the news channels, pundits, journalists…the sources, not of “truth,” but of “common knowledge.”


The problem with looking to Missionaries now, however, is that they’re wrong.


And I know they’re wrong, because they have to be wrong.


Remember:


Logistically, it’s nearly impossible to get a grip on what COVID-19 does. Science is certainly our best option.


But epistemologically, science only advances by undermining itself.
And today, that process is being exponentially multiplied across every possible news outlet, Twitter feed and YouTube channel…


Meaning that all anyone hears…


All anyone sees…


Is missionary after missionary contradicting themselves.


Science is working; don’t get me wrong.


Slowly, but surely, we are getting closer and closer to the truth.
But what we as a culture are perceiving (possibly for the first time in Western Society)


Is just how self-destructive scientific progress actually is.


And while I don’t think we can understand the absolute chaos of COVID-19 information…


I do think we can use game theory to predict how people will react to seeing large-scale falsification in progress for the first time.


I think people will simply conclude that no one knows anything, and that worrying about it is too cognitively-taxing.


For fun, I googled “beaches packed” just now.


Here’s the very first article that came up:


When the message gets too chaotic, too impenetrable? People stop listening.
To be clear, I absolutely do not blame anyone who has started tuning COVID-19 out at this point.


Yes, other countries had wildly different approaches, and different outcomes.


Yes, different messaging and different strategies could have prevented countless deaths here in the United States.


But that didn’t happen.


Now, what we’re left with is an impenetrable “fog of pandemic” made exponentially worse by our desire for more information.


In our attempt to understand the virus we have undermined any attempt to address it.


By putting science on display, we have guaranteed our loss of faith in it.


What’s left to do but go to the beach?

Missionaries


—–
How do we create Common Knowledge?
—–

We start this week with a riddle. It’s a famous one, so no Googling…give yourself the chance to try and think it through. 🙂

Here it is as written in Terence Tao’s Blog:

“There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours.

Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces).

If a tribesperson does discover his or her own eye color, then their religion compels them to leave the island at noon the following day.

All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).

Note: for the purposes of this logic puzzle, “highly logical” means that any conclusion that can logically deduced from the information and observations available to an islander, will automatically be known to that islander.

Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).

One day, a blue-eyed foreign missionary visits the island and wins the complete trust of the tribe.

One evening, he addresses the entire tribe to thank them for their hospitality.

However, not knowing the customs, the missionary makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.

What effect, if anything, does this faux pas have on the tribe?

With this (admittedly strange) riddle we return to the Common Knowledge game.

You’ll recall that we introduced the idea of a Common Knowledge game in last week’s email…

(And if you missed it, I’ve gone ahead and posted it on the blog in order to make sure we don’t leave our new subscribers behind.)

…as what happens when many people are making second, third, and fourth order decisions.

The crowd acts not based on how each individual person thinks, but on what they think about what other people think.

The question then becomes:

How do we know what other people think?

To be clear, I don’t think we ever know what other people think…not really.

For our purposes we are most interested in figuring out how other people decide what they think other people think.

In other words: “How do we know what’s common knowledge?”

First, let’s define common knowledge.

In one sense, common knowledge is just stuff that pretty much everyone knows.

The earth orbits the sun.

The Statue of Liberty is in New York City.

But common knowledge is more than just what we know…it’s what we know other people know.

From Wikipedia:

“Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.”

It’s not just what you know – it’s what you know that I know you know I know.

How do we figure that out?

This brings us back to the island of the eye-color tribe.

Let’s start with the answer and work backwards (if you still want to take a stab at solving the riddle yourself, don’t read any further).

What effect does the missionary’s pronouncement have on the tribe?

All 100 tribe members with blue eyes leave the island at noon on the 100th day after the missionary’s statement.

Why?

To help work this out, imagine that there was only one person with blue eyes on the island. What would happen then?

The missionary would pronounce that they see a person with blue eyes.

The blue-eyed islander would immediately realize that the missionary must be referring to them; after all, they know that every other islander has brown eyes. Therefore, they would leave the island at noon the next day.

Now, let’s make things slightly more complicated, and imagine that there are two blue-eyed islanders – let’s call them Marty and Beth.

The missionary states that they see a person with blue eyes.

Marty thinks: “Wow! He just called out Beth as having blue eyes. That means Beth will leave the island tomorrow.”

Beth thinks: “Yup – he’s talking about Marty. Poor Marty! He’ll have to leave the island tomorrow.”

Tomorrow rolls around. Both Beth and Marty gather around with everyone else to watch the blue-eyed islander leave the island…

And no one leaves.

Beth and Marty stare at each other. The other islanders stand around awkwardly.

Beth thinks: “Wait a minute…Marty has blue eyes. He should know that he needs to leave, because he knows everyone else’s eye color, and can deduce that his eyes are blue.

“But if he didn’t leave, that means that he thinks he doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes – apart from Marty’s – are brown, that means….”

Marty thinks: “Wait a minute…Beth has blue eyes. She should know that she needs to leave, because she knows everyone else’s eye color, and can deduce that her eyes are blue.

“But if she didn’t leave, that means that she thinks she doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes – apart from Beth’s – are brown, that means….”

Beth and Marty together: “MY EYES MUST BE BLUE!”

And so Beth and Marty leave the island together at noon the next day.

This process continues for each new islander we add to the “blue eyes” group. And so the generalized rule for the riddle is:

If N is the number of blue-eyed islanders, nothing happens until the Nth day, whereupon all blue-eyed islanders leave simultaneously.
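If you want to see that induction run mechanically, here’s a minimal sketch in Python – the day-counting logic mirrors the argument above, but the names and structure are mine, not part of the original riddle:

```python
# A minimal sketch of the puzzle's day-by-day logic (names and structure
# are mine, not from the original riddle).

def departure_day(n_blue: int) -> int:
    """Day after the announcement on which all blue-eyed islanders leave."""
    known_minimum = 1  # the missionary makes "at least one blue-eyed person" common knowledge
    day = 1
    while True:
        # Each blue-eyed islander sees n_blue - 1 blue-eyed people. Once the
        # commonly known minimum exceeds what they can see, their own eyes
        # must be blue -- and they leave at noon that day.
        if known_minimum > n_blue - 1:
            return day
        # Nobody left today, so "at least known_minimum + 1 blue-eyed people"
        # becomes common knowledge overnight.
        known_minimum += 1
        day += 1

for n in (1, 2, 3, 100):
    print(f"{n} blue-eyed islander(s): all leave at noon on day {departure_day(n)}")
```

Running it prints days 1, 2, 3, and 100 – nothing visible happens for 99 days, and then everyone leaves at once.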

There’s a critical element to this riddle that many people (namely, me) miss:

Everyone already knows everyone else’s eye color.

It isn’t that the islanders are learning anything truly new about other people. They aren’t. It has nothing to do with what people know.

What changes the situation on the island is the missionary’s public announcement.

It’s not that people suddenly know something – it’s that they know that other people know.

Common Knowledge is created by public statement, even when all the information was already available privately.

It isn’t about eye color; it’s about the missionary.

Once we hear the missionary’s message, it takes time for everyone to process. It takes time for people to make strategic decisions.

The more ambiguity in the message, the more time it takes to have an effect. (For example, “Beth has blue eyes” is an unambiguous message that would have had an immediate effect. “I see someone with blue eyes” is ambiguous, and takes N days to have an effect.)

We asked a question at the beginning:

How do we know what other people think?

The answer: we create an understanding of what other people know by listening to the missionaries we believe everyone else listens to.

The message of the missionary has to be widely propagated and public. What are the channels that “everyone” watches? What are the sources that “everyone” follows?

Once the message is broadcast, there’s a time delay on the effect based on the amount of ambiguity in the message and the difficulty of the strategic decision-making that follows.

But.

Once everyone hears the message…

And everyone updates their beliefs about what other people believe…

Change can be both sweeping and immediate.

This is how massive changes can seemingly happen overnight. How “stable” regimes can be toppled, social movements can ignite, and stable equilibria everywhere can be dramatically and irrevocably disrupted.

The Common Knowledge game teaches us some critical lessons:

1.) It isn’t the knowledge itself that matters, but what we believe other people know.

2.) You had better be aware of who the missionaries are, and what they’re saying.

Otherwise, it might soon be YOUR turn to leave the island.

The Beauty Contest

This post was originally an email sent to the Better Questions Email List.

For more like it, please sign up – it’s free.

Since we’ve all had enough of being cooped up inside and self-quarantining…

Let’s imagine a simpler time.

You’re walking down an ocean boardwalk.

You can feel the sun on your neck, warming you all the way through.

The smells of salt air, cotton candy, roasting peanuts linger in the air.
The sounds of summer wash over you:
The gentle murmur of ocean waves…
The joyous cries of children playing…

The mechanical music of various sidewalk rides and games…
The calls of seagulls floating overhead.

You work your way through the crowds. You’re not headed anywhere in particular, just enjoying the scene.
As you walk you come across a large crowd gathered around an elevated stage. You stop to see what’s going on.
Three men stand on the stage next to each other. A carnival barker shouts through a megaphone:

“Ladies and gentlemen! This is your chance to be handsomely rewarded! Great prizes in store for those who are the best judge of beauty! How finely tuned is your romantic apparatus? Now is the time to find out!”

The barker gestures to the men, who smile and wave.

The three handsome contestants of the beauty contest.

“Each one of you gets a vote – which of these dashing young men is the handsomest?

Which of these rugged exemplars of American manhood is the most appealing?

Vote, and if your choice gets the most votes, a fabulous prize will be yours!”

Someone hands out slips of paper to the crowd; you’ve got a few moments to decide on your vote. You’ve got to admit…you like the sound of “fabulous prizes.” It’d be nice to win.

So…

Who do you vote for?
Let’s leave aside the obvious fact that it would be impossible to choose. Putting anyone in such a hopeless scenario would be cruel.

Underlying this situation is one of the most powerful mental models I’ve ever come across.

It’s an idea that – once you get it – changes how you view the world around you.

Rather than telling you, though…let’s see if we can reason it out. 🙂

How do we decide who to vote for?

The simplest answer is, “Vote for whomever you like the most.”

Are you attracted to the cool, mysterious vibe of contestant 1?

Vote for contestant 1.
Maybe you’re a huge pie fan, and so you’re drawn to the caring and lovable vibe of contestant 2.

Vote for contestant 2.

Or maybe it’s the raw, animal sexuality of the muscular contestant 3 that draws you in.

Vote for contestant 3.

This is first-order decision making. Just vote for your preference!

But is this the best way to vote, if our goal is to win the prize?

Probably not.

If we want to win, we need to make sure that we vote for the winning contestant.

That means we need to think not just about who we like, but how other people will vote.
How do you think other people will vote, on average?

We might say:

“Well, I prefer 1, because I like babies. But I think most people are attracted to hyper-masculine muscle-men, so most people will vote for 3…therefore I’ll vote for 3.”

Or we might think:

“While I prefer 3, I think that much raw machismo will simply be too over-stimulating for most. They’ll be so simultaneously titillated and intimidated that they will, in a kind of sexual panic, vote for the more soothing and comforting presence of 2.”
We’re now making decisions based on how we believe other people will vote.

This is second-order decision making:

Making decisions based on how we believe other people will vote.

Let me draw your attention to two important points here:
1. Second-order decision making requires us to have an opinion about what other people think.
We’re making guesses about what’s in other people’s heads. Hard to do.

2. Second-order decision making assumes that everyone else is making first-order decisions.

“They’re voting based on their preferences, while I’m voting based on what I think their preferences are.”

But is this the best way to vote, if we want to win the prize?
Probably not.
Think about this:

Do you believe you are the only person to come up with second-order thinking?

Do you believe you’re the only one smart enough to realize you shouldn’t vote based on your preferences, but on what you believe about other people’s preferences?
Probably not.

If you’ve discovered second-order decision making, it’s probable that other people have as well.
And if they’ve discovered second-order decision making, that means they’re not deciding based on their preferences, but rather on what they believe the preferences of other people will be.
This brings us to third-order decision making:

Making decisions based on what we believe other people believe about what other people think.
So, it’s not:

“Who do I think is most attractive?”
And it’s not:

“Who do I think most people find most attractive?”

It’s:

“Who do most people think most people think is most attractive?”

There’s no reason to stop there, either.
We could keep going to fourth- and fifth-order decision making if we wanted.
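To make the ladder concrete, here’s a minimal sketch in Python of the first three orders. Every number is invented purely for illustration; only the structure – your taste, your beliefs about the crowd’s taste, your beliefs about the crowd’s beliefs – comes from the reasoning above:

```python
# A minimal sketch of first-, second-, and third-order voting.
# All scores below are made up for illustration.

def best(scores: dict[int, float]) -> int:
    """Contestant with the highest score."""
    return max(scores, key=scores.get)

# First order: your own taste.
my_preference = {1: 0.9, 2: 0.4, 3: 0.3}

# Second order: what you believe the average voter's taste is.
crowd_taste = {1: 0.2, 2: 0.3, 3: 0.5}

# Third order: what you believe the average voter believes the crowd's taste is.
crowd_belief_about_crowd = {1: 0.2, 2: 0.6, 3: 0.2}

print("First-order vote: ", best(my_preference))            # -> contestant 1
print("Second-order vote:", best(crowd_taste))               # -> contestant 3
print("Third-order vote: ", best(crowd_belief_about_crowd))  # -> contestant 2
```

Notice that each order can pick a different winner from the very same stage of contestants – the vote depends entirely on which layer of belief you act on.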

This idea is nicely summarized in the battle-of-wits scene from The Princess Bride.

It’s this kind of thinking…

But spread out among a large group of people.

In game theory, this is called the Common Knowledge Game.

How do we decide best, when the best decision depends on how other people decide…

And everyone is making second- or third-order decisions?
How do we think about what other people think about what other people think?

Do I put the Iocaine powder in my glass, or yours?

The Common Knowledge Game, once you start to look for it, is everywhere…

And it’s critical to thinking clearly about situations like a pandemic.
