Learn, Make, Teach.


The Logical Thinking Process, Part One: Five Simple Pieces

What if there was a step-by-step way to solve any problem?

A simple, straightforward path to getting everything you ever wanted?

To avoiding pitfalls and obstacles? To maximizing your potential and living your best life?

Well…there is.

It’s called the Logical Thinking Process.

It’s got five simple pieces.

Five simple ways to think more effectively and see the world more clearly.

Over the next few emails, I’m going to be showing you how you can immediately apply these steps to your own life.

But first:

Let’s talk about problems.


What is the Logical Thinking Process?

“A bad system will beat a good person every time.”

  • W. Edwards Deming

Every human organization is a system.

This applies to “organizations” like your workplace…

But also to your relationships.

Your marriage is an organization of two. Your family is an organization of three or four or five or more.

The social groups you interact with are systems. The restaurant you order takeout from is a system….and, of course, the app you used to order is a system as well.

Systems grow in complexity as they grow in size. Changes in one part of the system affect other parts of the system. Often, these chains of causation grow so long and so complicated that the true “inciting event” may be completely invisible to us.

It’s the “butterfly effect”: we don’t see snow and know that a butterfly has flapped its wings in India. We see the effect, but we rarely understand the cause.

That doesn’t stop us from placing blame, though.

When something happens we tend to assume the cause is “proximal” – i.e., nearby. What happened immediately before our problem emerged? Surely, that must be the cause, right?

These dual tendencies – to miss the true causes of an event, and to place blame for the problems we run into – lead us to blame ourselves for most of our problems. After all, WE are the proximal cause of most of the things that happen to us.

Weigh more than I’d like? That’s my fault. I should have more willpower.

Needed to study, but didn’t? That’s my fault. Why can’t I focus?

Hurt someone I love? That’s my fault. Why am I so careless?

It’s a very, very short leap from my personal shortcomings are the source of my problems…

To I am not a very good person.

And that’s a tough place to get out of, once you’re there.

But, as Deming said: A bad system will beat a good person every time.

It’s pointless to shame ourselves, degrade ourselves, beat ourselves up…

If we haven’t at least tried to address our problem from a systems perspective, first.

Weigh more than I’d like? Maybe if I remove trigger foods from the house…

Needed to study, but didn’t? Maybe it’s too noisy…what if I got noise-canceling headphones…?

Hurt someone I love? Is there something in our dynamic that sets me off? Could I prevent that from happening…?

That’s what the Logical Thinking Process is all about:

We don’t focus on the person…

We focus on the systems that the person operates within.

And that makes all the difference.


What is the Logical Thinking Process, anyway?

It’s a process. You can run absolutely any problem you want through it – from purely personal issues to international crises.

There are five steps.

Each step has a specific purpose, and each will move you further towards solving your target problem.

The first step, the Goal Tree, is used to define a single goal we aim to achieve and what is necessary to get there.

Next is the Current Reality Tree, which explores why we have not already reached the goal. What’s in our way?

Once we define why we haven’t already solved our problem, we often discover deep and seemingly-insurmountable conflicts within us. The third step is to resolve these conflicts with a Conflict Resolution Diagram.

The fourth step, the Future Reality Tree, is used to map out a strategy to achieve our goal.

Finally, the Prerequisite Tree is used to define the individual steps you need to take right now.

And that’s it.

Sometimes, you need all five steps to address a problem…

Sometimes, you’ll only need one or two.

I said these steps were simple…and they are.

That doesn’t mean that they’ll be easy, however.

Thinking logically – really examining our biases and assumptions about the world – can be difficult.

The payoffs, however, are incredible…and will radically transform your life.

Next week, we get right into things…with the Goal Tree.

All Woods Must Fail

O!
Wanderers in the shadowed land
Despair not!
For though dark they stand,
All woods there be must end at last,
And see the open sun go past:
The setting sun, the rising sun,
The day’s end,
or the day begun.
For east or west all woods must fail.
J. R. R. Tolkien

You wake suddenly into a room you do not recognize.

This is not your bed.

Not your dresser.

Not your table.

The floor is rough-hewn wood. There are windows, but they are opaque. Light filters through, but nothing of the environment is visible.

You blink; you give yourself a moment to collect your thoughts, to remember.
Nothing comes.

You cautiously place a foot on the floor: cool, smooth, unfamiliar.

You tiptoe to the bedroom door.

The knob is large, brass. It looks ancient.

Above the door knob is a large brass plate. In its center there is a keyhole.

You bend down.

You close one eye and peer out.

What’s on the other side of the door?

Forest.

Forest forever, in every direction.

—–

For all our pretending…

Our intellectual strutting and preening, our claims of omnipotence and rationality, our technological marvels and accomplishments…

The world is as uncertain as ever.

Whenever humanity’s understanding seems to encroach, fast and sure, onto the ends of the universe…

I try to remind myself of the scale of what we’re discussing.

I think about chess.

Chess has 16 pieces per player and 64 squares.

The rules are defined.

Everything that needs to be known is known.

But there are more potential games of chess than there are subatomic particles in the universe.

It is, for all practical purposes, infinite…

Despite its simplicity.

That’s been my biggest takeaway from studying game theory, risk, and COVID-19 these past few months:

The universe of unknown unknowns is impossibly vast…

Even if we understand the pieces.

Even if we think we understand how they all fit together.

I’ll give you one more example, before we head off into the forest in search of practical solutions…

Isaac Newton published the Philosophiæ Naturalis Principia Mathematica in 1687.

In it, he proposed three laws of motion:

1: An object either remains at rest or continues to move at a constant velocity, unless acted upon by a force.

2: The vector sum of the forces on an object is equal to the mass of that object multiplied by its acceleration.

3: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

We’ve had 333 years to sit and think about these laws.

In that time, we’ve managed to invent computers with computational powers exceeding anything a human being is capable of.

With these tools – Newton’s Laws and our computers – we can precisely model the movements of bodies through space.

If we know their starting points and their velocities, we can perfectly plot the paths they’ll take.

We can literally calculate their future.

Of course, for each body we add into the problem, the calculations get more complex.

Eventually, the interactions become so intricate that the system is impossible to calculate. It becomes chaotic, non-repeating.

Infinite.

How many bodies does it take for the problem to become incalculable?

With our 333 years of pondering Newton’s Laws?

With our super-powerful computers?

With all the human knowledge in all the world?

How many bodies?

Three.
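If you want to watch the chaos for yourself, here’s a toy Python sketch. Everything in it is invented for illustration – unit masses, made-up starting positions, G = 1, and a softening term so close passes don’t blow up numerically. Nudge one body by a billionth of a unit, and the two runs part ways:

import math

def accelerations(pos, masses, soft=1e-6):
    # Newtonian gravity (G = 1), softened so close passes stay finite
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy + soft) ** 1.5
            acc[i][0] += masses[j] * dx / r3
            acc[i][1] += masses[j] * dy / r3
    return acc

def simulate(pos, vel, masses, dt=0.001, steps=20000):
    pos = [p[:] for p in pos]
    vel = [v[:] for v in vel]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for i in range(len(pos)):
            vel[i][0] += acc[i][0] * dt
            vel[i][1] += acc[i][1] * dt
            pos[i][0] += vel[i][0] * dt
            pos[i][1] += vel[i][1] * dt
    return pos

masses = [1.0, 1.0, 1.0]
start = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.7]]
vel = [[0.0, -0.4], [0.0, 0.4], [0.3, 0.0]]

run_a = simulate(start, vel, masses)
nudged = [p[:] for p in start]
nudged[2][0] += 1e-9  # move the third body by one billionth of a unit
run_b = simulate(nudged, vel, masses)

gap = max(math.hypot(a[0] - b[0], a[1] - b[1]) for a, b in zip(run_a, run_b))
print(f"Largest position difference after 20 time units: {gap:.6f}")

Two bodies, and the nudge would stay a nudge forever. Three, and it compounds.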

—–

The door swings open.

It creaks, briefly, but the sound fades, absorbed into the thick, humid air.

Trees in every direction. They are massive, towering things.

Sun filters through the pine needles and dapples the ground like so many little spotlights. It’s not morning, but it’s hard to tell exactly where the sun is overhead.

The trees seem to come straight up to the door. There’s room to walk, but only just.
It should feel oppressive, like they are crowding you. Instead, it feels like you’ve interrupted a conversation.

You step out; the forest floor is soft and dry. As you look around, the door behind you swings shut.

You reach out, but it latches. You try to open it but it’s locked.

You take a breath and hold it.

The sweet taste of undergrowth, copper in the soil, a sense memory of an old Christmas tree.

Which way do you go?

—-

Complexity is at the root of the universe.

So uncertainty is at the root of the universe.

So anxiety is at the root of the universe.

Anxiety is a perfectly normal reaction to the impossible task of trying to understand and predict a chaotic infinity of possibilities…

With a very limited, very non-infinite mind.

Despite that fact, we all have to wake up each day and do what needs to be done; to honor our commitments to ourselves and one another.

How do we navigate an uncertain world?

We choose the best path we can with the minimum amount of anxiety.

We use simple systems that allow us to quickly compare risks across categories.

We acknowledge our tendency to endlessly re-think, re-play, and re-consider our decisions…

And figure out how to let go.

We do the best we can, while minimizing our chances of losing too much.

In other words:

Heuristics…

Micromorts…

and MinMax Regret.

We discussed these concepts in an earlier post, so I won’t belabor them now.
Instead, what I want to do in this email is spell out…

Step by step….

Exactly how you can use these ideas to get a simple, practical estimate of how much risk you are willing to take on…

And to use that estimate to help you make the everyday decisions that affect your life.

—–

You walk until you get tired.

Something’s wrong, but you’re not sure what.

You don’t know where you are, so you could’ve chosen any direction at all.

You decided to simply go wherever the forest seems less dense, more open.

After a while (hours? days?) the trees have gotten further and further apart.

The slightly-more-open terrain has made walking easier.

You’re making more progress; towards what, you don’t know.

Every now and then you reach out to touch one of the passing trees; to trail your fingers along its bark.

The rough bumps and edges give you some textural variation, a way of marking the passing of time.

You look up. The sun doesn’t seem to have moved.

The sunlight still dapples. It’s neither hot nor cold. It isn’t much of anything.

Then you realize:

You haven’t heard a single sound since you’ve been out here.

Not even your own footsteps.

—–

Every good heuristic has a few components:

A way to search for the information we need…

A clear point at which to stop…

And a way to decide.

Let’s take each of these in turn.

Searching

We’ve discussed the “fog of pandemic” at length over the past few months.

With so much information, from so many sources, how do we know what to trust?

How do we know what’s real?

The truth is, 

we don’t.

In the moment, it is impossible to determine what’s “true” or “false.” As a group we may slowly get more accurate over time. Useful information builds up and gradually forces out less-useful information.

But none of that helps us right here, right now – which is when we have to make our decisions.

So what do we do?

We apply a heuristic to the search for information.

What does this mean?

Put simply: set a basic criterion for when you’ll take a piece of information seriously, and ignore everything that doesn’t meet that criterion.

Here’s an example of such a heuristic:

Take information seriously only when it is reported by both the New York Times and the Wall Street Journal.

Why does this work?

1. These are “credible” sources that are forced to fact-check their work.

2. These sources are widely monitored and criticized, meaning that low-quality information will often be called out.

3. These sources are moderate-left (NYT) and moderate-right (WSJ). Thus, information that appears in both will be less partisan on average.

While this approach to vetting information might be less accurate than, say, reading all of the best epidemiological journals and carefully weighing the evidence cited….

Have you ever actually done that?

Has anyone you know ever done that?

Have half the people on Twitter who SAY they’ve done that, actually done that?

Remember:

Our goal is not only to make the best decisions possible…

It’s to decrease our anxiety along the way.

Using a simple search heuristic allows us to filter information quickly, discarding the vast majority of noise and focusing as much as possible on whatever signal there is.

You don’t have to use my heuristic; you can make your own.

Swap in any two ideologically-competing and well-known sources for the NYT and the WSJ.
Specifically, focus on publications that have:

– A public-facing corrections process
– A fact-checking process
– Social pressure (people get upset when they “get it wrong”)
– Differing ideological bents
– Print versions (television and internet tend to be too fast to properly fact-check)

Whenever a piece of information needs to be assessed, ask:

Is this information reported in both of my chosen sources?

If not, ignore it and live your life.
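If it helps, the whole heuristic fits in a few lines of Python. This is just a sketch of the rule above; the outlet names are the example from this email, and you’d swap in your own pair:

TRUSTED = {"New York Times", "Wall Street Journal"}

def take_seriously(reported_by):
    """Act on a claim only if every trusted source reported it."""
    return TRUSTED <= set(reported_by)  # subset test: both must be present

print(take_seriously(["New York Times", "Wall Street Journal", "Twitter"]))  # True
print(take_seriously(["New York Times", "Twitter"]))  # False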

Stopping

When do you stop looking for more information, and simply make a decision?

This is a complicated problem. It’s even got its own corner of mathematics, called optimal stopping.

In our case, we need a way to prevent information overload…the constant sense of revision that happens when we’re buffeted by an endless stream of op-eds, breaking news, and recent developments.

I’ve written about this a bit in my blog post on information pulsing.

The key to reducing the amount of anxiety caused by the news is to slow its pulse.

If we control the pace at which information flows into our lives, we control the rate at which we need to process that information and reduce the cognitive load it requires.

My preferred pace is once a week.

I get the paper every Sunday. I like the Sunday paper because it summarizes the week’s news. Anything important that happened that week shows up in the Sunday paper in some shape or form.

The corollary is that I deliberately avoid the news every other day of the week.

No paper, no radio, no TV news, nothing online.

This gives me mental space to pursue my own goals while keeping me informed and preventing burnout.

Presuming that we’re controlling the regular pulse of information into our lives, we also need a stopping point for decision making.

Re-examining your risk management every single week is too much.

Not only is it impractical, it predisposes us to over-fitting – trying too hard to match our mental models to the incoming stream of data.

My recommendation for now is to re-examine your COVID risk management decisions once a month.

Once a month is enough to stay flexible, which I think is necessary in an environment that changes so rapidly.

But it’s not so aggressive that it encourages over-fitting, or causes too much anxiety.

We are treating our risk management like long-term investments.

Check on your portfolio once a month to make sure things are OK, but put it completely out of your head the rest of the time.

—-

You walk on, always following the less-wooded trail.

The trees are more sparse now.

It’s easier to walk, easier to make your way.

Eventually, you come to a clearing.

Your legs ache. You find a small log and sit down, taking a breath.

The air is warm. It hangs over you.

You breathe again.

Your eyes close.

Maybe you sleep.

You’re not sure.

None of it seems real.

Maybe you’re still dreaming.

But maybe you aren’t.

You could lie down, here. The ground is soft. There’s a place to comfortably lay your head.
It would be easy enough to drift away. It would be pleasant.

Or, you could push on.

Keep walking.

Maybe progress is being made.

Maybe it isn’t that far.

But maybe it is.

———

Deciding

We come now to the final stage of our process – deciding.

We’ve set parameters for how we’ll search for information…

And rules for how we’ll stop searching.

Now we need to use the information we take in to make useful inferences about the world – and use those inferences to determine our behavior.

This stage has a few more steps to it.

Here’s the outline:

1. Get a ballpark risk estimate using micromorts for your state.
2. Play the common knowledge game.
3. Establish the personal costs of different decisions within your control.
4. Choose the decision that minimizes the chances of your worst-case scenario.

Let’s break each of these down in turn.

1. Get a ballpark risk estimate using micromorts for your state.

I’ve actually built you a handy COVID-19 Micromort Calculator that will calculate your micromorts per day and month based on your state’s COVID-19 data.

But if you don’t want to use my calculator, here’s how to do this on your own:

– Find the COVID-19 related deaths in your state for the last 30 days. Why your state? Because COVID-19 is highly variable depending on where you live.

– Find the population of your state (just google “My State population” and it should come right up).


(note: My COVID-19 Micromort calculator pulls all this data for you).

– Go to this URL: https://rorystolzenberg.github.io/micromort-calculator

– Enter the state’s COVID-19 deaths in the “Deaths” box.

– Enter the state population in the “People In Jurisdiction” box.

– In the “Micromorts per day” section put “30” in the “Days Elapsed” box.

Your calculator should look something like this:

You’ve now calculated the average micromorts of risk per day in your state.
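If you’d rather see the arithmetic than trust a web form, it’s just division. Here’s a minimal Python sketch; the deaths and population figures are invented placeholders (chosen so they happen to reproduce the 4.63 figure in the example below):

deaths_30d = 486         # assumed: your state's COVID-19 deaths, last 30 days
population = 3_500_000   # assumed: your state's population

# One micromort = a one-in-a-million chance of dying.
micromorts_month = deaths_30d / population * 1_000_000
micromorts_day = micromorts_month / 30

print(f"{micromorts_day:.2f} micromorts/day ({micromorts_month:.0f}/month)")
# With these made-up inputs: 4.63 micromorts/day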

To compare this risk to other risks, situate your micromorts on this spreadsheet:
https://docs.google.com/spreadsheets/d/1xo3VrcDu6sMNGyykGKxgvRXv9kCwnFNAMHBoyc1y67Q/edit?usp=sharing

Take a look at the list and figure out how much risk we’re really talking about.
For example, the risk level in the image above is 4.63 micromorts – let’s round that to 5.
That means that I have about as much risk of dying from COVID-19 as I would of dying during a scuba dive, and more risk than I’d take on during a rock climb.

It’s also riskier than would be allowed at a workplace in the UK.

However, it’s less risky than going under general anesthetic, or skydiving.

Keep in mind, however, that these risks are per day.

Comparing apples to apples, I can ask:

“My COVID-19 risk is equivalent to the risk of going scuba diving every single day. Is that an acceptable risk level for me?”

2. Play the common knowledge game.

Now that we’ve got a rough estimate of risk, let’s think about other people.
You know.

Them.

Statistical risk matters, obviously.

But COVID-19 has a unique property:

It’s viral.

Literally.

If I’m riding a motorcycle, my risk does not increase if other people ride motorcycles, too.

For COVID-19? The actions of others have a big effect on my personal risk level.

This is where the common knowledge game comes in handy.

(You’ll recall our discussion of Common Knowledge games in previous emails, namely The Beauty Contest, Missionaries, and Monty Hall.)

We don’t need to just weigh our own options…

We need to weigh what we think other people will do.

As an example:

My own state, Connecticut, has seen declining case numbers of COVID-19 for a few months now.
That gradual decline has led to a loosening of restrictions and a general increase in economic activity.

And that’s great!

But when it comes to sending our child to school next year, I’m still extremely worried.

Why?

Because I’m assuming that other people will see the declining case count as an indication that they can take on more risk.

What happens when people take on more risk in a pandemic?

Case numbers go up.

I ran into a similar issue early in the pandemic with regards to where I work.

I have a small office in a building downtown.

My room is private but other people share the space immediately outside my door.
Throughout the highest-risk days of the pandemic, when everyone else was staying home, I kept coming into the office to work.

Why?

Because everyone else was staying home.

They reacted rationally to the risks, and so my office building was empty.
Since it was only me, my personal risk remained low.

Now that risk levels are lower, people have started coming back to work…

Which means I am now more likely to work from home.

Why?

My actual risk remains the same, or higher, since more people have COVID-19 now than they did in the beginning.

But because case counts are declining, people feel safer and are more likely to come into the office, increasing my exposure.

Again:

Statistical risk matters…

But so does what other people do about that risk.

So.

While our micromort number is extremely useful, we need to run it through a filter:

How do I think the people around me will react to this level of risk?

What is “common knowledge” about our risk level?

What are the “missionaries” (news sources that everyone believes everyone listens to) saying, and how will that affect behavior?

Factor this into your decision-making.

—–

You keep moving.

Little by little, the ground becomes a trail, and the trail becomes a path.

Have other people been this way?

It’s hard to tell.

Maybe just deer.

But it’s a path. A way forward.

You think you detect some slight movement in the sun overhead.

Maybe, just maybe, time is passing after all.

With the path, there’s something to cut through the sameness – some way to judge distance.

Forward movement is forward movement.

You keep moving.

And then, something you never expected:

A fork.

Two paths.

One to the left, one to the right.

They each gently curve in opposite directions. You can’t see where they lead.

Something touches your back.

The wind.

Wind? you wonder. Was it always there?

Which way do I go?

—–

3. Establish the personal costs of different decisions within your control.

We’ve thought about risk, and we’ve thought about how other people will react.

Let’s take a moment to think about costs.

Every decision carries a cost.

It could simply be an opportunity cost (“If I do this, I can’t do this other thing…”)

Or the cost could be more tangible (“If I don’t go to work, I’ll lose my job.”)

One of the things that’s irritating about our discourse over COVID-19 is the extent to which people seem to assume that any action is obviously the right way to go…while ignoring its costs.

Yes, lockdowns carry very real costs – economic, emotional, physical.

Yes, not going into lockdowns carries very real costs – hospitalizations, deaths, economic losses.

Even wearing masks – something I am 100% in favor of – has costs. It’s uncomfortable for some, hampers social interaction, is inconvenient, etc.

We can’t act rationally if we don’t consider the costs.

So let’s do that.

Think through your potential outcomes.

You could get sick.

You could die.

There’s always that.

What else?

Maybe the kids miss a year of school.

What would the emotional repercussions be?

Or logistical?

Could you lose your job?

Lose income?

Have trouble paying bills?

What if there are long-term health effects?

What if the supply chain gets disrupted again…what if food becomes hard to find?

Think everything through.

Feel free to be dire and gloomy here…we’re looking for worst-case scenarios, not what is likely to happen.

Once you’ve spent some time figuring this out, make a quick list of your worst-cases.

Feel them emotionally.

We’re not trying to be maximally rational here. We’re getting in touch with our emotional reality.

We’re not saying, “What’s best for society? What do people want me to do?”

We’re asking:

Which of these scenarios would cause me the most regret?

Regret is a powerful emotion.

It is both social and personal. In many cases, we would rather feel pain than regret.

“Tis better to have loved and lost, than to have never loved at all.”

Rank your potential outcomes from “most regret” to “least regret.”

Which one is at the top?

Which outcome would you most regret?

THAT’S your worst-case scenario.
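If it helps to make the ranking step concrete, here’s a trivial sketch. The scenarios and gut-level scores are invented; yours will be different:

# Score each worst-case by gut-level regret (0-10); invented examples.
outcomes = {
    "getting someone vulnerable sick": 10,
    "losing my job": 8,
    "kids missing a year of school": 6,
    "getting sick myself": 5,
}

for scenario in sorted(outcomes, key=outcomes.get, reverse=True):
    print(outcomes[scenario], scenario)

print("Worst-case scenario:", max(outcomes, key=outcomes.get))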

4. Choose the decision that minimizes the chances of your worst-case scenario.

Once you know:

– your rough statistical risk (micromorts)
– how other people will react (common knowledge game)
– and your own worst-case scenario (regret)

…You can start putting a plan in place to minimize your risk.

Here we are utilizing a strategy of “MinMax Regret.”

The goal is not to say “how can I optimize for the best possible scenario”….

…Because that’s difficult to do in such uncertain times.

It’s much easier to simply cover our bases and make sure that we do everything in our power to protect ourselves.

Thinking about your worst case scenario from Step 3, what can you do to ensure it doesn’t happen?

What stays? What goes?

Restaurants?

Visiting your parents?

Play dates for the kids?

What are you willing to give up in order to ensure the highest regret scenario doesn’t happen?

My own worst-case scenario?

Getting someone in a high risk category (like my Mom, or my son, who has asthma) sick.

What am I willing to give up to avoid that?

Eating at restaurants is out…

But we’ll get take out and eat outside.

Business trips are out. Easy choice.

I wear a mask.

I haven’t visited my mom, even though we miss her.

Can’t get her sick if we don’t see her.

But I visited my grandmother by talking to her through her window, with a mask on.

I’m not saying these decisions are objectively right or wrong…

But they were consistent with my goal:

Avoid the regret of getting the vulnerable people I love sick.

Once you’ve thought this through…

What’s my current risk?

How will other people react?

What’s my worst case scenario?

What am I willing to give up to minimize the possibility of that happening?

…Set up some ground rules.

What you’ll do, what you won’t.

What you’ll avoid, what you’ll accept.

And then don’t think about it at all until next month.

Give yourself the unimaginable relief…

Of deciding….

And then?

Forgetting.

—–

How long has it been?

Time seems to have stopped.

Or perhaps, moved on.

You keep walking, mostly as a way of asserting control.

My choice. Keep walking.

The path curves for a bit, then straightens back out.

Slowly but surely, it gets wider and wider…the edges of the forest on either side drifting further and further apart.

It’s like a curtain drawing back.

Your eyes have been on the road, but as you look up now, you realize…

You’re not in the forest anymore.

You’re not even on the path.

It’s open, all around.

Wide, impossibly wide. The sky and the earth touch each other.

The horizon is everywhere.

You’re glad you kept walking.

You’re glad you didn’t stop.

All woods must fail, you think.

As long as you keep walking.

One In a Million


What’s Worse Than Death?

Right here, at the beginning of our penultimate COVID-19 email…


I’d like to take a moment to mourn.

It’s very possible you haven’t taken a moment to let it all settle in.


What we’ve lost.


For us, it was Dolly, the wonderful woman who knit my kids’ caps in the winter and always had a kind word for me.


It was the end of Max’s pre-school year – the sudden vanishing of his friends, teachers, and daily schedule.


It was the loss of Oliver’s birthday party. He loves parties. He’s turning 6 and it feels like a very big deal to him. He doesn’t quite get why no one can be there to celebrate.


It was the surprise 40th birthday party my wife had planned…


The ability for my wife to go to the gym (or, indeed, have any time to herself at all)…


The tension we feel now when a neighbor’s kid comes over to play.


There’s no sense of time.


It’s everywhere, constantly.


We’re always afraid, or unsure, or angry, or judgmental, or worried.


For you, it might have been a friend or loved one.


Or your job.


Or your health.


Or maybe it was just the ability to duck out to the store for a few minutes.


To grab a bite to eat and chat with your server.


Whether you measure the cost in lives,


or economic impact,


or disruption of everyday routines,


or the pervasive anxiety and loss that now seem woven into the very fiber of everyday life…


The cost of the Coronavirus pandemic has been high.


And while the actual virus has not been evenly distributed…


There isn’t anyone who hasn’t paid part of that cost.


And I could see you reading this series of emails we’ve been working on and coming away pretty bummed out about the future.


After all, the case we’ve been building has been fairly pessimistic in regards to our ability to understand what we’re going through.


In Bad Priors, I wrote that people primarily understand probabilities by referring to their past experiences (called “priors”).


In Map Meets Territory, I argued that base rates (the average outcomes of similar events) often provide a better view on how the world really works than priors do.

In 8 Months To Live, we complicated that picture a bit by pointing out that individuating data is necessary…even if it sometimes throws us off track.


In No Basis, we discussed the difference between risk (where the probabilities are known) and uncertainty (where they aren’t). We also explored ways in which our use of both priors and base rates can lead us astray when the underlying relationships between things change over time.


How should we make decisions? By using statistical analysis for situations of risk, and game theory for situations of uncertainty.


In The Beauty Contest, we discussed one such application of game theory: the common knowledge game, where we act based on what we believe other players believe.


That raised the question of how we know what other people believe.


In Missionaries, we discussed the role of “common knowledge,” and how injections of information by well-known public authorities can have widespread effects on seemingly-stable systems.


By that point, we’d covered both decision-making tool sets: statistical analysis (priors and base rates) and game theory (common knowledge games being just one example). We then addressed the core problem behind all of this: how can we tell if what we’re facing is risk, or uncertainty?


In False Positive, we discussed zero-risk illusion (where a sense of certainty leads us to overlook probabilities, as with medical testing) and the calculable-risk illusion (where we think we know the odds, but don’t know what we don’t know…as in the “Turkey Problem.”)


In Monty Hall, we brought all of these ideas together to discuss the “Monty Hall Problem,” a fascinating riddle that combines probability, statistical analysis, game theory, and, of course, our tendency to confuse which is which.


In All In Our Heads, we finally got around to discussing the virus itself. I argued that part of why COVID-19 is so frustratingly hard to understand is that we mistake the very public scientific process (which is self-destructive by nature) for “getting it wrong.”


The problem isn’t lack of information, but having too much information with no widely-accepted criteria for choosing what to believe.


That’s led many of us to simply tune out, or pick whatever interpretation of the data best fits our desired outcome.


So.


Where does this all leave us?


Should we, as I implied in our last email, simply throw up our hands and go to the beach?


If not, when CAN we go to the beach again?


Is there a way of navigating the anxiety, of taking back control over our lives…


Or is it just loss after loss until there’s nothing left?


Because let’s be very, very real for a moment:

Lives are not the only thing we can lose.

Schadenfreude

Schadenfreude is pleasure derived from the misfortune of another.

One of the disturbing effects of COVID-19 has been the way that it has encouraged schadenfreude to make inroads into American life.

Do you recognize this guy?

If you recognized him, it’s probably from one of dozens of social media posts that went viral about his death.

His name was Richard Rose.


Rose made several sarcastic, snarky, anti-mask posts on his Facebook timeline.

He then contracted the disease…

Experienced complications…

…And passed away.


He was 37.


Facebook left his profile up “as a memorial.” Predictably, these last posts now serve as places for people to dunk on Rose post-mortem:

It’s the same on Twitter and Instagram.


Let’s be clear:


1.) I think mask-wearing is an obvious and low-cost way to hopefully lessen the spread of COVID-19.


2.) I’m very tired of the “anti-masker” discourse and find their argument unconvincing.


But you know what else I am?


Mad as hell.


You know what Richard Rose was?


An American.


A person.


Go through his posts. Scroll down a bit further on the page.


What’s he doing?


Posting dumb-ass memes.


Making corny jokes.


Sharing pictures of nights out with his friends.


Talking about NASCAR.


Are some of his posts tasteless? Yeah.


Do I disagree with most of them? Sure.


Was this guy wrong about a lot of stuff? Probably.


But you know what?


All of us are wrong.


You know who else thought that masks wouldn’t help to fight COVID-19?


The WHO and the CDC.


You know, the “experts” all of us “right thinking” people put our trust in.


They explicitly stated that masks wouldn’t help to fight the spread of coronavirus.


And yeah, maybe Richard Rose didn’t listen to the same experts you do, or watch the same news channels, or read the same papers.


Maybe he didn’t update his priors when he should have.


But I can absolutely guarantee that every single one of us is doing the exact same thing about something.


Maybe not to the same degree. Maybe not about the same issues.


But we’re all doing it.


We’re all wrong.


And you know what?


None of us deserve to die for it.


You don’t.


And Richard Rose didn’t.


He was a 37-year-old man.


He had friends.


He had a life.


He had value.


The moment you fall into the trap of hoping – just a little bit – that the other side “gets what’s coming to them…”


That they “learn their lesson…”


And “pay for their mistakes…”


The moment you start wanting to “own the libs…”


Or “shut up the Trumpers…”


More than you want to save lives?


The moment you start caring more about being right than about being human?


You throw away the only tool for change we really have:


Empathy.


Empathy is how we bridge the gap.


Empathy is how we work together to solve problems.


Empathy is how we discover – and fight for – shared values.


Without empathy, there is no “us.”


There is no country.


There is no “greater good.”


There’s just our party vs. their party.


Our numbers vs. their numbers.


And then every conversation looks like this:

What if they had “One foot in the grave?”


Not to be insensitive.


Give up the work of empathy and you start to believe that life matters less if you’re old…


Or you’re fat…


Or you’re a minority…


Or you’re from the wrong part of the country.


The wrong party.


The wrong side of the debate.


The wrong side of history.


You want the country to be less polarized?


You want to push back in the other direction?


Start caring about people more than you care about being right.


Stop treating being wrong as a cardinal sin when THEY do it, and as a simple mistake when WE do it.


Stop dancing on graves and start helping.


Because we can’t do this alone.


Remember:


The virus can take your life.


But your empathy? Your humanity?


You have to give that away.


——————–


Let’s talk numbers.


Despite all my discussion of the self-destructive nature of scientific knowledge..


Of the massive influx of noise into the system…


Of the layer upon layer of game theory and statistics and yadda yadda yadda…


We all still have to decide what we’re going to do.


How we’re going to live.


How much risk to take on.


Do I send the kids back to school?


Is it OK to go to the movies?


We don’t get to opt-out just because it’s hard.


And I think we can all agree:


If the “authorities” were going to swoop in and figure this out, they’d have done it by now.


No one’s coming.


We’re on our own.


This responsibility to constantly choose – to make decisions that feel like they COULD be life or death – can be anxiety-inducing.


Even if you think the whole thing is overblown, navigating the topsy-turvy terrain of our all-new-everyday-lives is exhausting.


So.


Let’s talk strategy.


By the end of these emails, I’m going to try and leave you with a:


1. Simple
2. Concrete
3. Specific
4. Data-driven


…plan for wading through the endless sea of information and planning your OWN Coronavirus mitigation strategy.


Really.


To do that, I need to spend a little bit of time introducing you to three more (last!) important concepts:


Micromorts…


Heuristics…


And MinMax Regret.

Heroin on Mount Everest

Let’s start with a coin flip.


If you flip a coin 20 times, what are the odds you’ll get 20 heads?


It’s about one in a million.
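The arithmetic, if you’re curious: each flip halves your odds, so 20 heads in a row comes out to (1/2)^20 = 1/1,048,576. Close enough to one in a million.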


This is a very useful little number.


We each take on roughly a one-in-a-million chance of dying simply by getting out of bed. On average, each person has about a one in a million chance of not making it to suppertime.


That one-in-a-million chance of dying?


That’s a micromort.


Now, your individual risk varies, of course; it changes depending on which country you’re in (it’s actually higher in the US), how old you are, your health, and so on.


But as a unit of risk, it provides a useful jumping off point.


If everyday life exposes us to about one micromort of risk per day, we can compare the fatality rates of different activities in micromorts.


It becomes a level playing field where assessing risk is easier.


For example:


Eating 40 tablespoons of peanut butter in a sitting? One micromort.


You also take one additional micromort for every two days you live in New York City or Boston.


Driving a car gets you one additional micromort for every 100 miles.
A motorcycle? That’s 17 micromorts per 100 miles.


Skydiving adds approximately eight to nine micromorts per jump.


Running a marathon? Roughly 7 micromorts per run.


Going swimming? That’s actually 12 micromorts.


Playing American Football gets you 20 micromorts.


Using heroin? 30 micromorts per injection.


Giving birth is worth 170 micromorts.


Scaling Mt. Everest? A whopping 40,000 micromorts.


Of course, your circumstances affect your individual risk.


But calculating individual risk is incredibly complicated.


Micromorts provide us a fast, easy way of comparing risk levels. If you’re an avid football player but would never THINK of injecting heroin, micromorts provide an easy way of comparing the two.


They also provide a useful way of cutting through the statistical noise surrounding COVID-19.


We’ll get to those calculations in a bit…


But even after we tally up our micromorts, we may still have questions:


Are cases going up because of testing, or spread?


Are deaths really surging, or is this a backlog of unreported cases?


Are hospitals getting paid for every COVID-19 death they report?


These are all totally legitimate, interesting questions.


The problem is, neither you nor I have any reliable way of finding out…


And we need to make decisions affecting our safety, and the safety of others…


Right now.


That brings us to heuristics.

Close Enough For Government Work

My Dad had a saying:


“Close only counts in grenades and horseshoes.”


His point was that “kind of” being right wasn’t enough. Accuracy counts.


And he was right.


But.


As we’ve seen, it’s impossible to have a perfect understanding of what COVID-19 does, or the risks it poses.


Sure, a consensus is slowly forming.


But the consensus has been wrong in the past, and it would be intellectually irresponsible to suggest that we’ve definitely got it right now because “this time it’s different.”


Yes, we need to pay attention to what the experts say. Doctors, epidemiologists, statisticians – these people bring decades of knowledge to bear on a complicated problem.


But complicated problems are just that: complicated.


The good news is that, despite my Dad’s advice, close does count…a lot.


We don’t need a perfect understanding of the future to manage our risk.


After all, we do this every day with our investments.


You split your savings between bonds and stocks, money market and securities. You keep some of it in cash.


You don’t do this because you know what the stock market is going to do. The future is always uncertain.


But we can be certain about that uncertainty.


We don’t have to know exactly what’s going to happen to know that anything could happen. Every day, we accept our inability to predict the future and adjust our strategies accordingly.


How? We use heuristics. Rules of thumb.


For investing, it’s “invest more in bonds as you get closer to retirement.”


Simple, straightforward, no crystal ball necessary.


And there’s research showing that general rules of thumb (or “heuristics”) perform as well as, or better than, complicated mathematical decision models that try to predict the future.


How?


In cognitive science there is something known as the “accuracy-effort tradeoff.”


The basic idea is that accuracy takes effort. There’s a cost to going for perfect accuracy.


That may seem counter-intuitive, so let’s use a familiar example:


You’re playing baseball.


You’re out in left field.


The batter hits the ball up into the air. It’s headed in your direction.


How should you figure out the best place to run to, in order to catch the ball?


Complex mathematical equations are probably the most accurate way of predicting the path of a flying object.


We could model the arc of the baseball, given enough time and effort.


But that’s the thing: that accuracy requires time and effort.


And since baseball players are not likely to be able to do differential calculus in their heads during a game…


How do we figure out where the ball is headed?


We use heuristics. Rules of thumb.


The baseball problem, for example, is solved using the Gaze Heuristic:


Fix your gaze on the ball, start running, and adjust your running speed so that the angle of gaze remains constant.


Studies show that people using the gaze heuristic will consistently end up at the exact spot where the ball hits the ground.


Not bad, eh?


But here’s an element of this you may have missed:


We require zero knowledge of the variables affecting the ball to use the Gaze heuristic.


We don’t need to know how hard the ball was hit, or the wind speed, or the weather.


Nothing.


We simply look, apply the rule, and act.


Simple. Efficient.


And surprisingly accurate.
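There’s tidy math behind that accuracy. One way to see it: for an observer standing exactly where the ball will land, the tangent of the gaze angle rises at a perfectly constant rate (the lab-studied cousin of the rule above, sometimes called optical acceleration cancellation, exploits exactly this). Here’s a minimal Python sketch, with assumed launch numbers, checking the constant-rate claim:

import math

g = 9.81
vx, vy = 18.0, 22.0      # assumed launch velocity (m/s)
T = 2 * vy / g           # time of flight back to launch height
landing_x = vx * T       # where the ball comes down

for frac in (0.2, 0.4, 0.6, 0.8):
    t = frac * T
    ball_x = vx * t
    ball_y = vy * t - 0.5 * g * t * t
    gap = landing_x - ball_x   # horizontal distance to the landing spot
    tan_gaze = ball_y / gap    # tangent of gaze angle, seen from that spot
    print(f"t={t:5.2f}s  tan(gaze)/t = {tan_gaze / t:.4f}")  # constant: g/(2*vx)

Run it and the ratio never budges. Keep the ball’s image rising steadily while you run, and you end up standing exactly where it lands.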


Yes, we can get more accurate if we measure everything exactly and do the math.


But that requires effort. And if we can still get a good result from the heuristic, our rule of thumb ends up being the most efficient choice.


Even though there’s not a feasible way for us to truly understand everything about COVID-19…


If we wise up to the fact that COVID-19 is a situation of uncertainty (where the odds and risks are unknown)…


Rather than buying into the media narrative that it’s a situation of risk (where we totally are getting the odds right this time, even though we got it wrong literally every other time)…


We can still make rational decisions about how much risk we’re willing to take on.


Just stop trying to do the math, and start using heuristics.

Worse Than Death

If we accept that using heuristics can help us make rational decisions under uncertainty…


How do we do that, exactly?


First, we need to figure out what we’re up against.


What are our risks? What are our potential payoffs?


One of the critical insights we get from game theory is that rational actors can have very different ideas on what the “best” and “worst” outcomes of a game are.


You might, for example, prioritize personal liberty over physical well-being (Ben Franklin: “Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety.”)


Someone else might think that wearing a mask is a small price to pay for even a marginal decrease in risk to their community (JFK: “Ask not what your country can do for you…”)


Both of these people can be perfectly rational, despite having very different conclusions.


In game theoretical terms, rational doesn’t mean “I agree with your evaluation of payoffs and risks.”


It means your actions are consistent with your evaluations.


Everything comes down to what we value.


So…


What do you value?


Let’s make a list of some of the possible risks of COVID-19.


Death is an obvious one.


Even if you think COVID-19 is overblown media hype – let’s say, no more dangerous than the average flu…


There’s still a chance you could die. After all, the flu can kill you, right?


Death is the risk that gets all the press because…well. It’s kind of an all-or-nothing sort of deal.


When you’re dead, you’re dead. You’re out of the game.


Are there other risks to COVID-19?


Sure.


You could get someone else sick.

Even if you’re very unlikely to die from COVID-19, you could pass it to someone who is.

How would you feel if you knew that someone died because of you – even if it was an accident?

Maybe you feel responsibility for that. Maybe you don’t.

How would you feel if it was someone you knew?

A grandparent?

A neighbor?

A son or a daughter?

I’m not saying these risks are likely – I’m just saying they exist.

What other risks are there?

It’s unclear what the long-term consequences of COVID-19 are.

Some say there are no long-term consequences.

Others complain of symptoms lasting months, or of structural damage to the lungs that could pose a problem for the rest of your life.

This is highly uncertain, of course. Just like everything else.

But the risk is there. So it goes on our list.

What else?

Hospitalization is a risk.

There’s a significant economic impact to going to the hospital.

Maybe your insurance covers it, maybe it doesn’t. Maybe you have money saved up, maybe you don’t.

Time off work might be an issue for you.

Add that to the list.

I’m sure you can think of some more. Add those to the list as well.

Of course, we have to weigh the risks of reacting to COVID-19, as well.

Lockdowns bring their own risks. Lack of exercise, lack of social interaction, depression.

All those go on the list.

There’s the economic impact.

You might have lost your job. Or you might be right on the edge.

Others might lose their jobs as well. A staggering number already have, with US jobless numbers for May rumored to be near 20%.

That’s a lot of desperate people.

Add it all to the list.

We could keep going, but I think you get the picture:

No matter what we do, there will be serious consequences.

There’s always an opportunity cost.

So…

How do we choose?

Game theory’s goal is always to analyze the potential strategies in any game, finding the ideal solution.

But in some games, finding the ideal solution is difficult because the outcome depends on what the other players do.

If you’re uncertain about how the other players will act, how do you choose the path forward?

The two most common game theory strategies are:

Maximax, where you take the action with the highest potential payoff (i.e., you maximize your maximum return.)

This strategy generally correlates with the highest risk, but it’s got the highest possible reward, too.

Maximin is a strategy where you maximize your minimum payoff. You cap your downside, knowing that even in the worst case scenario, you’ll get something decent.

This strategy reduces your potential losses, but at the cost of losing out on big payoffs.

Both of these strategies are all about the numbers.

For purely rational people, they make a lot of sense.

But we aren’t purely rational, are we?

No.

We’ve got emotions to deal with.

We’re loss averse – meaning we feel more pain when we lose a dollar than we feel joy when we gain a dollar.

Luckily, there’s a strategy that incorporates that human messiness into a nice little package:

MinMax Regret.

MinMax regret is unlike the other strategies we’ve discussed so far, in that it isn’t concerned with the raw numerical values of your payoffs.

Instead, MinMax regret seeks to minimize the maximum amount of regret you’d feel.

Regret is a little more nuanced than pure mathematical payoffs.

For example, you may feel that a certain investment is riskier than you’re comfortable with.

But how would you feel if your friends were already invested?

What if they all get rich, and you were the only one who stayed out?

Sometimes, the regret you’d feel being the only one missing out on a massive payday is actually worse than the potential monetary loss.

Regret, in a game theoretical sense, is the difference between the decision you made and the optimal decision you could have made.

There’s an opportunity cost to everything we do…and that needs to get factored in.
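Here’s what that looks like with numbers. The payoffs below are invented, purely to show the mechanics: score each decision under each scenario, compute regret as the gap between what you chose and the best choice for that scenario, then pick the decision whose worst regret is smallest:

# Invented payoffs: rows are decisions, columns are scenarios.
payoffs = {
    "no precautions": {"mild outbreak": 10, "severe outbreak": -100},
    "moderate":       {"mild outbreak": 6,  "severe outbreak": -20},
    "full isolation": {"mild outbreak": -5, "severe outbreak": -10},
}
scenarios = ["mild outbreak", "severe outbreak"]

# Best achievable payoff in each scenario, in hindsight.
best = {s: max(row[s] for row in payoffs.values()) for s in scenarios}

# Regret = best hindsight payoff minus what this decision would earn.
max_regret = {
    d: max(best[s] - row[s] for s in scenarios) for d, row in payoffs.items()
}

for d, r in max_regret.items():
    print(f"{d}: worst-case regret {r}")

print("MinMax Regret picks:", min(max_regret, key=max_regret.get))

With these made-up numbers, the middle path wins: it’s never the best option in hindsight, but it’s never catastrophically far from it either. (For contrast, on the same numbers, maximax picks “no precautions” and maximin picks “full isolation.”)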

Let’s bring this back to COVID-19.

We’ve made a giant list of all of our risks…

Death, hospitalization, feeling ill, getting someone else sick, losing a job, someone else losing their job.

Let’s leave our “objective” assessments to the side for the moment…

And really feel these alternatives.

Imagine them.

Picture them.

Visualize the scene. Get the details right.

Who’s there?

What is it like?

What’s going through your mind?

Spend too much time thinking about this statistic or that statistic…

And you lose touch with the lived reality of what we’re talking about.

None of us are going to live through a statistic.

ALL of us are experiencing the lived reality of COVID-19.

We’ll also have to live through the repercussions of whatever we do.

Maybe the guilt of getting someone else sick.

Or the shame of having been wrong.

Or the frustration at taking every precaution and still getting the disease.

Or the anger of sacrificing, only to find out that it all meant nothing.

There’s always an opportunity cost.

And sometimes, there’s regret.

Question is:

Which is worse?

Monty Hall

——
Should I switch, or stay?
—–

Let’s begin our review all the way back at the beginning, with our very first email about risk…

Bad Priors.

Everyone comes to situations of risk with pre-existing opinions. We all use our experiences to try and make sense of how the world works. These are our priors.


Priors act as a map of the territory of reality; we survey our past experiences, build abstract mental models from them, and then use those mental models to help us understand the world.


But priors can be misleading, even when they’re based on real experiences. Why?

For one, we often mistake group-indexed averages for individually-indexed averages.


For another, we often mistake uncertainty for risk.


Risk is a situation in which the variables (how likely a scenario is to happen, what we stand to lose or gain if it does) are known.

When we face risk, our best tool for decision-making is statistical analysis.


Imagine playing Russian Roulette; there’s one gun, one bullet, and 6 chambers. You can calculate your odds of success or failure.


Uncertainty is a situation in which the variables are unknown. Imagine a version of Russian Roulette where you don’t get to know how many bullets there are, or even how many chambers in the gun.

Not only can you not calculate your odds in this scenario, trying to do so will only give you a false sense of confidence.


When we face uncertainty, our best decision-making tool is game theory.


Mistaking risk for certainty is called the zero-risk illusion.


This is what happens when we get a positive result on a medical test, and convince ourselves there’s no way the test could be wrong.


But there’s a more subtle (and often more damaging) illusion to think about:

Mistaking uncertainty for risk. This is known as the calculable-risk illusion.


To understand how we get to this illusion, we have to understand a bit about derivatives.

Because the world is infinitely complex, we can’t always interact directly with the things we care about (referred to as the underlying).


For example, we may care about the health of a company – how happy their employees are, how big their profit margin is, how much money they have in savings.


But it’s hard to really get a grip on all those variables.


To get around problems like this, we often look at some other metric (referred to as the derivative) that we believe is correlated with the thing we care about.


For example: we care about the health of the company (the underlying). But because that’s so complex, we choose to pay attention to the stock price of the company instead (the derivative). That’s because we believe that the two are correlated: if the health of the company improves, the stock price will rise.


The relationship between the underlying and the derivative is called the basis.

If you understand the basis, you can use a derivative to understand the underlying.


But the world is complicated. We often DON’T really understand the basis. Maybe we mistook causation for correlation. Or maybe we DID understand the basis, but it changed over time.


The problem is that we rarely re-examine our assumptions about how the world works.

This puts us in a situation where we mistake uncertainty for risk. We think we have enough information to calculate the odds. We think we can use statistical analysis to figure out the right thing to do.


The problem is that we often don’t have enough information. This is the “Turkey Problem”: every single data point tells us the farmer treats us well.
And that’s true…right up until Thanksgiving Day.


We cruise along, comforted by seemingly-accurate mathematical models of the world…only to be shocked when the models blow up and everything falls apart.


That’s the calculable-risk illusion.


This is how our maps can stop matching our territory.


OK – so we know that when situations are uncertain (and that’s a lot, if not most, of the time), we’re supposed to use game theory.


What are some examples of using game theory to help make decisions?


One example is the Common Knowledge Game.

Common knowledge games are situations in which we act based on what we believe other people believe.


Like a beauty contest where voting for the winning contestant wins you money, it’s not about whom you like best (first-order decision making)…
Or whom you think other people like best (second-order decision making)…


But whom you think other people will think other people like best (third-order decision making).


So: how do we know what other people know?


Watch the missionaries.


As in the case of the eye-color tribe, a system’s static equilibrium is shattered when public statements are made.


Information is injected into the system in such a way that everyone knows that everyone else knows.


Our modern equivalent is the media. We have to ask ourselves where other people think other people get their information.

Whatever statements come from these sources will affect public behavior…
…Not because any new knowledge is being created, but because everyone now knows that everyone else heard the message.


(This, by the way, is why investors religiously monitor the Federal Reserve. It’s not because the Fed tells anyone anything new about the state of the economy. It’s because it creates “common knowledge.”)

Whew! That’s a lot of stuff.


Let’s try to bring all these different ideas together in one fun example:

The Monty Hall Problem.

Monty Hall was a famous television personality, best known as the host of the game show Let’s Make a Deal.


Let’s Make a Deal featured a segment that became the setting for a famous logic problem…


One that excellently displays how our maps can become disconnected from the territory.


The problem was popularized by Marilyn vos Savant in her Parade magazine column. Here’s the problem as she formulated it:


Suppose you are on a game show, and you’re given the choice of three doors.

Behind one door is a car, behind the others, goats.


The rules are that you can pick any door you want, and you’ll also get a chance to switch if you want.

You pick a door, say number 1, and the host, who knows what’s behind the doors, opens another door, say number 3, which has a goat.


He says to you, “Do you want to pick door number 2?”

Is it to your advantage to switch your choice of doors?


Take a minute to think it through and come up with your own answer.

Let’s start by asking ourselves:


Is this a scenario of risk or uncertainty?


The answer is risk.

We know the odds, and can calculate our chances to win. That means statistical analysis is our friend.


So how do we calculate our odds?


The typical line of reasoning will go something like this:


Each door has a 1/3 probability of having the car behind it.


One door has been opened, which eliminates 1/3 of my chances.


Therefore, the car must be behind one of these two doors. That means I have a 50/50 chance of having picked the right door.


That means there’s no difference between sticking with this door or switching.


While this conclusion seems obvious (and believe me, this is the conclusion I came to)…


It turns out to be wrong. 🙂


Remember our discussion of medical tests?


To figure out how to think about our risk level, we imagined a group of 1,000 people all taking the same tests.


We then used the false positive rate to figure out how many people would test positive that didn’t have the disease.


Let’s apply a similar tool here.


Imagine three people playing this game. Each person picks a different door.
I’ll quote here from the book Risk Savvy, where I first learned about the Monty Hall Problem:


Assume the car is behind door 2.


The first contestant picks door 1. Monty’s only option is to open door 3, and he offers the contestant the opportunity to switch.


Switching to door 2 wins.


The second contestant picks door 3. This time, Monty has to open door 1, and switching to door 2 again wins.


Only the third contestant who picks door 2 will lose when switching.


Now it is easier to see that switching wins more often than staying, and we can calculate exactly how often: in two out of three cases.


This is why Marilyn recommended switching doors.

It becomes easier to imagine the potential outcomes if we picture a large group of people going through the same situation.

In this scenario, the best answer is to always switch.
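If picturing three contestants doesn’t convince you, brute force will. Here’s a quick Monte Carlo sketch in Python – my own illustration, not something from Risk Savvy:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of the classic Monty Hall game; return True on a win."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty, who knows where the car is, opens a goat door you didn't pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print("stay:  ", sum(play(False) for _ in range(trials)) / trials)  # ~0.33
print("switch:", sum(play(True) for _ in range(trials)) / trials)   # ~0.67
```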
Here’s an interesting twist, though:


Should you actually use this strategy on Let’s Make a Deal?


This is where the calculable-risk illusion rears its ugly head.

In the beginning of our discussion, I said the Monty Hall Problem was an example of risk. Our odds are calculable, and we understand the rules.

That’s why statistical analysis is helpful.


But reality is often far more complicated than any logic puzzle.


The question we need to ask in real life is: Will Monty ALWAYS give me the chance to switch?


For example, Monty might only let me switch if I chose the door with the car behind it.


If that’s the case, always switching is a terrible idea!


The real Monty Hall was actually asked about this question in The New York Times.

Hall explicitly said that he had complete control over how the game progressed, and that he used that power to play on the psychology of the contestant.


For example, he might open their door immediately if it was a losing door, might offer them money to not switch from a losing door to a winning door, or might only allow them the opportunity to switch if they had a winning door.

Hall in his own words:


“After I showed them there was nothing behind one door, [Contestants would think] the odds on their door had now gone up to 1 in 2, so they hated to give up the door no matter how much money I offered. By opening that door we were applying pressure.”


“If the host is required to open a door all the time and offer you a switch, then you should take the switch…But if he has the choice whether to allow a switch or not, beware. Caveat emptor. It all depends on his mood.”


You can see this play out in this specific example, taken again from Risk Savvy:


After one contestant picked door 1, Monty opened door 3, revealing a goat.
While the contestant thought about switching to door 2, Monty pulled out a roll of bills and offered $3,000 in cash not to switch.


“I’ll switch to it,” insisted the contestant.


“Three thousand dollars,” Monty Hall repeated, “Cash. Cash money. It could be a car, but it could be a goat. Four thousand.”


The contestant resisted the temptation. “I’ll try the door.”


“Forty-five hundred. Forty-seven. Forty-eight. My last offer: Five thousand dollars.”


“Let’s open the door.” The contestant again rejected the offer.


“You just ended up with a goat,” Monty Hall said, opening the door.


And he explained: “Now do you see what happened there? The higher I got, the more you thought that the car was behind door 2. I wanted to con you into switching there, because I knew the car was behind 1. That’s the kind of thing I can do when I’m in control of the game.”

What’s really happening here?


The contestant is committing the calculable-risk illusion.


They’re mistaking risk for uncertainty.


They think the game is about judging the probability that their door contains either car or goat.


But it isn’t.


The game is about understanding Monty Hall’s personality.


Whenever we shift from playing the game to playing the player, we have made the move from statistical analysis to game theory.


Instead of wondering what the probabilities are, we need to take into account:


1. Monty’s past actions, his personality, his incentives (to make the TV show dramatic and interesting)…


2. As well as what HE knows (which door has a car behind it)…


3. And what HE knows WE know… (that he knows which door has a car behind it)


4. And how that might change his behavior (since he knows we know he knows where the goats are, and he expects us to expect him to offer money if we picked the right door, he might do the opposite).


The map-territory problem can get us if we refuse to use statistical analysis where it’s warranted…


And when we keep using statistical analysis when it isn’t.


Now that we’ve seen some of these ideas in action, it’s FINALLY time to start addressing the root cause of all these emails:


The Coronavirus Pandemic.


We’ll be bringing all these mental models to bear on a tough problem:


How do I decide what to do, when so much is uncertain? And WHY is all of this so hard to understand?

All In Our Heads


What’s left to do but go to the beach?


As we come into the home stretch of our latest series of emails…
(Dealing with risk, uncertainty, and how to think about the Coronavirus pandemic)…


You may have noticed that I’ve been avoiding something.
I’ve been completely mute about a critically important component of understanding your COVID-19 risk:

How risky it actually is.


There hasn’t been a single statistic, figure, fatality rate or case number.


No models.


No predictions.


Nothing.


And this glaring omission is the topic of this email.


I am going to try and make an argument I have been building to for two months now.


Namely:


We cannot know how risky COVID-19 is…


And trying to find out is only making it worse.


If that sounds like a problematic statement to you, I get it.


All I can ask is that you stick with me through this email.


Let’s start where ALL good philosophical discussions start:


On the internet.


I’d like to share some examples of recent COVID-19 posts I’ve found.


Before I do, a few points:


– It doesn’t matter when these arguments were made.

– I’m not arguing for or against any of the arguments made in these posts.


– These are purely meant as examples of rhetoric, so don’t get hung up on any of the numbers they use or don’t use.


Cool?

Cool.

All of these posts were made by very smart people.


All of these people are using publicly-available COVID data to make their arguments.


And while I’ve only given you a few quick screenshots, you can tell these people are arguing forcefully and rationally.


These are smart people, being smart.


So let me ask you:


Do you feel smarter?


Now that you’ve read these, do you understand the situation better?

My guess is….

No.


My guess is that you’ve actually read several threads, or posts, or articles like these.


Well-argued, quoting numerous “facts,” dutifully citing their sources…


And you’ve come away more confused than before.


I know that that’s been my experience.

To dig into why this is the case, we have to take a bit of a journey.


We’re going to start with COVID-19, follow it all the way back to the roots of Western Philosophy, make a hard left into game theory…

And end up back at COVID-19 again.


Let’s start with a (seemingly) basic question:


Why is it so hard for smart people to agree on even basic facts about Coronavirus?


I’m not talking about your Uncle who gets all his news from YouTube because he thinks aliens control the media.


I’m talking smart people with educated backgrounds.

People who understand bias, who understand systems.


How is it possible that even THEY can’t agree on basic things like how dangerous the virus is?


There are two basic categories of problems with understanding Coronavirus:


One is logistical (it’s just really hard to get good information)…


And one is epistemological (it’s just really hard for us to know things, in general).


Let’s start with the first category – logistical.

Gathering accurate data about Coronavirus is extremely difficult.


For one, the size and scale of the outbreak makes this a unique event.


There are very few historical parallels, and none within living memory of most of the people who study this kind of thing.


Two, Coronaviruses (the general family that our current pandemic comes from) are not well understood.


Funding to study them, before the pandemic, was hard to come by.


Not only that, but Coronaviruses are notoriously difficult to grow in lab cultures.


All of this means that only a handful of scientists specialized in Coronaviruses…leaving us woefully unprepared.


On top of a general lack of knowledge, we have to be careful to separate Coronavirus and COVID-19 (the disease that the virus causes).


While Coronavirus doesn’t seem to vary much around the world, COVID-19 does. That’s because the disease affects people differently depending on their unique health risks, the society in which they live, etc.


If you’re overweight, your experience of COVID-19 may be very different than someone who’s not.


Same if the area where you live has a lot of pollution.


Or if you smoke.


Or if you’re above 40.


Or if you’re diabetic.


All this makes the overall impact of the disease extremely hard to predict. We’re not sure what the important risk factors are, or how dramatically they impact the disease’s progression.

Take the fatality rate, for example.


Personally, I’ve seen people claim the fatality rate is as high as 7%, and others say it’s as low as .05%.

Why is this so hard to figure out?


The number we’re talking about is referred to as the “case fatality rate,” or CFR. CFR is the percentage of people diagnosed with COVID-19 who die.

That seems pretty straightforward.

But, as we mentioned above, the disease’s effects vary from country to country.


CFR also changes based on how many people you test, and WHO you test – if you test only people in the emergency room, or people in high-risk demographics, the percentage of fatalities will be higher. If you test everyone on Earth, the percentage of fatalities will be lower.


The CFR will also change based on the quality of treatment; after all, better treatment should result in a better chance of living.


That means that CFR will not only change from country to country, but will change within the same country over time as treatments evolve and testing ramps up.
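Here’s a toy example of the testing effect alone – all numbers invented purely for illustration:

```python
# Invented numbers: the same outbreak, measured two different ways.
deaths = 70

# Test only the very sick, and the denominator stays small...
cfr_severe_only = deaths / 1_000      # 7.0% "case fatality rate"

# ...test widely, catching mild and asymptomatic cases, and it shrinks.
cfr_mass_testing = deaths / 35_000    # 0.2% "case fatality rate"

print(f"{cfr_severe_only:.1%} vs. {cfr_mass_testing:.1%}")
# Same virus, same deaths -- a 35x difference in CFR, driven entirely
# by WHO got tested. Better treatment shifts the numerator the same way.
```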

Based solely on the logistical challenges around understanding an event of this scale, we should expect a great deal of uncertainty.


But.


Our fundamental problem with understanding coronavirus is NOT that we lack data or smart people to help us process that data.


Our problem is that data doesn’t help us at all.


How could this be the case?


If we need to understand something, won’t more data help with that?


This brings us to the second category of problem that I mentioned earlier:
Epistemological.

See, the thing that I’ve found so completely fascinating about this pandemic…


Is how directly it brings us out of the routine of our everyday lives…
Right into the domains of Western Philosophy.


What I’m going to argue here is that our struggle to understand coronavirus is directly related to our struggle to understand anything.


See, normally, we get away with not really having to know how we know stuff.


We operate according to “generally accepted best practice” or “custom,” and get along fine.


But Coronavirus is different.


Someone might say, “I just listen to what a majority of epidemiologists say.”


But how do we know which epidemiologist to listen to?


Or how many epidemiologists need to agree to constitute a majority?


Or whether the majority believing something is even evidence that it’s true?


Or whether it’s all a media plot?


We’ve officially left the realm of epidemiology…


And entered the realm of epistemology (the theory of knowledge).

This is why our online arguments over fatality rates go nowhere:


They aren’t about what we know…

They’re about how we know what we know.


And with that, I’d like to introduce you to Karl Popper and Thomas Kuhn.

Karl Popper was an Austrian-British philosopher who lived from 1902 to 1994.


Popper was an incredibly influential philosopher of science, and I will be the very first to admit that my understanding of his work is limited. I highly encourage you to research Popper on your own, rather than taking my own interpretation of him as fact.

That said, Popper’s primary contribution to my own mental models has been this:


Theories can never be empirically proven, only falsified.


Here’s what that means to me:


We can never be absolutely sure that what we believe corresponds exactly to external reality.


Western Philosophy in general has spent the last 200 years more or less demolishing the idea that we can somehow escape our own mental constructs, sense impressions, and physical limitations to achieve a “God’s-Eye View” of the world as it is.


That doesn’t mean there is no external reality, necessarily; just that we can never be 100% sure that we know exactly what it is.


So, how do we go about gaining knowledge?


Popper argues that we gain knowledge by falsifying beliefs, rather than “proving” them, per se.


Let’s take a hypothetical example:


Say a friend of yours claims that the Dodo is extinct.


This may or may not be “true” in the absolute sense. And, indeed, even if we haven’t seen a Dodo in a century or more, we can’t be absolutely certain, for instance, that there isn’t some cave somewhere with a hidden colony of Dodos.


There remains a possibility, even if it’s small, that the Dodo is not extinct.
However, I can falsify the statement “the Dodo is extinct” very easily, simply by locating a living Dodo.

Thus, knowledge proceeds negatively; not by discovering absolute truths, but by systematically falsifying the untrue.


Knowledge, then, is a process by which we become more and more accurate over time through falsification.


That may seem like it makes sense, but if knowledge is ONLY falsification, why do we feel like we “know” things?


How do some ideas become “accepted wisdom”?


To understand this piece of the puzzle, we come to Thomas Kuhn.


Thomas Kuhn was an American philosopher of science who lived from 1922 to 1996.


Kuhn’s most famous work is The Structure of Scientific Revolutions, in which he proposed a model for understanding how science advances.


Let’s pick up where we left off with Popper.


Popper proposed that knowledge is about finding what isn’t true, a process that becomes more accurate over time…


(even though we can never be 100% sure what we know is right).


Imagine you’re a scientist studying the weather.


You perform several experiments, testing (and often disproving) your hypotheses.


In one experiment, you discover an interesting connection between electrical currents and snowfall in Ohio.


You publish your findings in the prestigious Journal of Ohio Snowfall Studies.


I, upon reading your article, decide to try and replicate your findings.


Upon doing so, I find myself unable to recreate the results that got you so excited.
In fact, MY findings indicate there is no connection between electrical currents and Ohio snowfall.


I publish MY findings, which seem to falsify YOUR findings.


A colleague, interested in the resulting debate, then attempts to reconcile the two sets of findings by offering a theory that a third element (blood pressure of nearby raccoons) is actually the cause of the phenomena.


A FOURTH scientist then tries to disprove the Stressed-Out-Raccoon Hypothesis…


And so on.


Note that the scientific process described above is largely destructive; it is through gradual falsification that knowledge advances.
Over time, these layers of falsification build up, leaving very few ideas unscathed.


The ideas that are left standing gradually become accepted wisdom. So accepted wisdom is just “all the stuff we haven’t been able to falsify.”


Kuhn’s insight was that just because something is “accepted wisdom” does not mean it is true.


In fact, it is through the accumulation of accepted wisdom that entire eras of scientific progress are overturned.


How is this possible?


Our understanding of the world is never complete. Remember – Popper argues that we can’t ever have a fully accurate, absolutely certain view of the world.


The best we can get are theories that are useful; theories that prove their worth by making the world a better place.


But a theory’s usefulness doesn’t guarantee its accuracy.


For example, I might believe that singing to my plants every day helps them grow.


Now, that theory might be useful; it might encourage me to water the plants every day. It might lower my overall stress levels by getting me to spend time outside.


But that doesn’t mean singing to the plants really helps them grow in and of itself.


Kuhn argued that something similar happens in the sciences. Over time, through experimentation, we find useful ideas; ideas that help us progress.
But because our understanding is always limited, and there is no possibility of being 100% certain, mistakes will always creep into the systems of knowledge we build.


We can’t avoid this; it’s inevitable. Humans make mistakes, and so mistakes are built into everything that we do.


In science and philosophy, these mistakes manifest as seemingly unsolvable problems, contradictory bits of knowledge, and straight-up weirdness that just doesn’t “fit” what we “know” to be “true.”


In Kuhn’s formulation, when these questions become too much to bear – when the blind spots in our picture of the world become too obvious – a revolution occurs.


This means that the “paradigm” (Kuhn’s word for the scientific consensus of how the world works) becomes undone…and a new paradigm takes its place.


Just as Copernicus replaced the Ptolemaic System…and Newton undermined Copernicus…and the Theory of Relativity destroyed Newton…


So does scientific consensus advance…


By destroying itself.


In other words:


Knowledge is an act of creative destruction.


Now.


We’ve gone very high-level here…


So let’s bring this back to Earth:


What does this have to do with Coronavirus?


I’m certainly not arguing that Coronavirus is the herald of a scientific revolution.


The basic tools for understanding what’s happening (epidemiology, statistical analysis, modeling, etc) have been around for a long time.
What I’m arguing is that our attempts to “fix” our knowledge of the virus and to “get it right” are actually making it harder to understand.


Stay with me, now. 🙂


We discussed, above, the idea that science moves forward through a process of creative destruction by systematically falsifying and undermining itself.
Typically, this process is contained within the pages of the various journals and academic conferences of a given field.


Want to debate the current state of genetics? There are designated places to do so.


Want to get into an argument about geology? Figure out where that debate is occurring and get started!


The debate over Coronavirus, though, is not happening within the pages of academic journals.


It is not limited to scholarly conferences and meet ups.


It is occurring everywhere at once.


It’s on the internet.


The news.


The paper.


Twitter.


Facebook.


Your friends are discussing it.


Your neighbors bring it up.


Just as the virus itself is everywhere, so is information about the virus.


Not only is it everywhere, everyone is weighing in.


Epidemiologists? Sure. They’re celebrities now.


But also:


General practitioners, statisticians, politicians, teachers, plastic surgeons, marketers, historians, plumbers, pundits, accountants…


Everyone has an opinion.


When you combine:


Impossibly large amounts of uncertain data….


With an impossibly large number of people interpreting that data…


You get a massive, exponential increase in the amount of information in the system.


It isn’t that we “don’t know” enough about COVID-19 and the Coronavirus…


It’s that we know too much, and have no way of discerning what’s right.


Let’s return, briefly, to the idea of the common knowledge game that we addressed in “The Beauty Contest” and “Missionaries.”


How could anyone possibly comprehend such a mess of information?


We can’t. It’s impossible.


So…what do we do instead?


We turn to Missionaries.


Missionaries are the people we look to in order to know what other people know.


They’re the news channels, pundits, journalists…the sources, not of “truth,” but of “common knowledge.”


The problem with looking to Missionaries now, however, is that they’re wrong.


And I know they’re wrong, because they have to be wrong.


Remember:


Logistically, it’s nearly impossible to get a grip on what COVID-19 does. Science is certainly our best option.


But epistemologically, science only advances by undermining itself.
And today, that process is being exponentially multiplied across every possible news outlet, Twitter feed and YouTube channel…


Meaning that all anyone hears…


All anyone sees…


Is missionary after missionary contradicting themselves.


Science is working; don’t get me wrong.


Slowly, but surely, we are getting closer and closer to the truth.
But what we as a culture are perceiving (possibly for the first time in Western Society)…


Is just how self-destructive scientific progress actually is.


And while I don’t think we can understand the absolute chaos of COVID-19 information…


I do think we can use game theory to predict how people will react to seeing large-scale falsification in progress for the first time.


I think people will simply conclude that no one knows anything, and that worrying about it is too cognitively taxing.


For fun, I googled “beaches packed” just now.


Here’s the very first article that came up:


When the message gets too chaotic, too impenetrable? People stop listening.
To be clear, I absolutely do not blame anyone who has started tuning COVID-19 out at this point.


Yes, other countries had wildly different approaches, and different outcomes.


Yes, different messaging and different strategies could have prevented countless deaths here in the United States.


But that didn’t happen.


Now, what we’re left with is an impenetrable “fog of pandemic” made exponentially worse by our desire for more information.


In our attempt to understand the virus we have undermined any attempt to address it.


By putting science on display, we have guaranteed our loss of faith in it.


What’s left to do but go to the beach?

Missionaries

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

—–
How do we create Common Knowledge?
—–

We start this week with a riddle. It’s a famous one, so no Googling…give yourself the chance to try and think it through. 🙂

Here it is as written in Terence Tao’s Blog:

“There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours.

Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces).

If a tribesperson does discover his or her own eye color, then their religion compels them to leave the island at noon the following day.

All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).

Note: for the purposes of this logic puzzle, “highly logical” means that any conclusion that can be logically deduced from the information and observations available to an islander, will automatically be known to that islander.

Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).

One day, a blue-eyed foreign missionary visits the island and wins the complete trust of the tribe.

One evening, he addresses the entire tribe to thank them for their hospitality.

However, not knowing the customs, the missionary makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.

What effect, if anything, does this faux pas have on the tribe?

With this (admittedly strange) riddle we return to the Common Knowledge game.

You’ll recall that we introduced the idea of a Common Knowledge game in last week’s email…

(And if you missed it, I’ve gone ahead and posted it on the blog in order to make sure we don’t leave our new subscribers behind.)

…as what happens when many people are making second, third, and fourth order decisions.

The crowd acts not based on how each individual person thinks, but on what they think about what other people think.

The question then becomes:

How do we know what other people think?

To be clear, I don’t think we ever know what other people think…not really.

For our purposes we are most interested in figuring out how other people decide what they think other people think.

In other words: “How do we know what’s common knowledge?”

First, let’s define common knowledge.

In one sense, common knowledge is just stuff that pretty much everyone knows.

The earth orbits the sun.

The Statue of Liberty is in New York City.

But common knowledge is more than just what we know…it’s what we know other people know.

From Wikipedia:

“Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.”

It’s not just what you know – it’s what you know that I know you know I know.

How do we figure that out?

This brings us back to the island of the eye-color tribe.

Let’s start with the answer and work backwards (if you still want to take a stab at solving the riddle yourself, don’t read any further).

What effect does the missionary’s pronouncement have on the tribe?

All 100 tribe members with blue eyes leave the island at noon on the 100th day after the missionary’s statement.

Why?

To help work this out, imagine that there was only one person with blue eyes on the island. What would happen then?

The missionary would pronounce that they see a person with blue eyes.

The blue-eyed islander would immediately realize that the missionary must be referring to them; after all, they know that every other islander has brown eyes. Therefore, they would leave the island at noon the next day.

Now, let’s make things slightly more complicated, and imagine that there are two blue-eyed islanders – let’s call them Marty and Beth.

The missionary states that they see a person with blue eyes.

Marty thinks: “Wow! He just called out Beth as having blue eyes. That means Beth will leave the island tomorrow.”

Beth thinks: “Yup – he’s talking about Marty. Poor Marty! He’ll have to leave the island tomorrow.”

Tomorrow rolls around. Both Beth and Marty gather around with everyone else to watch the blue-eyed islander leave the island…

And no one leaves.

Beth and Marty stare at each other. The other islanders stand around awkwardly.

Beth thinks: “Wait a minute…Marty has blue eyes. He should know that he needs to leave, because he knows everyone else’s eye color, and can deduce that his eyes are blue.

But if he didn’t leave, that means that he thinks he doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes are brown, that means….”

Marty thinks: “Wait a minute…Beth has blue eyes. She should know that she needs to leave, because she knows everyone else’s eye color, and can deduce that her eyes are blue.

But if she didn’t leave, that means that she thinks she doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes are brown, that means….”

Beth and Marty together: “MY EYES MUST BE BLUE!”

And so Beth and Marty leave the island together at noon the next day.

This process continues for each new islander we add to the “blue eyes” group. And so the generalized rule for the riddle is:

If N is the number of blue-eyed islanders, nothing happens until the Nth day, whereupon all blue-eyed islanders leave simultaneously.
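Here’s that induction written as code – a minimal sketch of the rule each islander follows, not a full simulation of their reasoning:

```python
# Each blue-eyed islander reasons: "I can see k blue-eyed people. If my
# own eyes are brown, those k are living the k-person version of this
# puzzle, and will all leave on day k. If day k passes and they're
# still here, my eyes must be blue too -- so I leave on day k + 1."

def departure_day(n_blue: int) -> int:
    """Day on which all n_blue blue-eyed islanders leave (n_blue >= 1)."""
    if n_blue == 1:
        # Sees no other blue eyes; the announcement can only mean them.
        return 1
    # Watch the (n_blue - 1)-person puzzle fail to resolve, then leave
    # the day after it should have ended.
    return departure_day(n_blue - 1) + 1

for n in (1, 2, 3, 100):
    print(f"{n} blue-eyed islander(s): all leave at noon on day {departure_day(n)}")
```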

There’s a critical element to this riddle that many people (namely, me) miss:

Everyone already knows everyone else’s eye color.

It isn’t that the islanders are learning anything truly new about other people. They aren’t. It has nothing to do with what people know.

What changes the situation on the island is the missionary’s public announcement.

It isn’t that people suddenly know something new – it’s that they know that other people know.

Common Knowledge is created by public statement, even when all the information was already available privately.

It isn’t about eye color; it’s about the missionary.

Once we hear the missionary’s message, it takes time for everyone to process. It takes time for people to make strategic decisions.

The more ambiguity in the message, the more time it takes to have an effect (for example, “Beth has blue eyes” is an unambiguous message that would have had an immediate effect. “I see someone with blue eyes” is ambiguous, and takes N days to have an effect.)

We ask the question at the beginning:

How do we know what other people think?

The answer: we create an understanding of what other people know by listening to the missionaries we believe everyone else listens to.

The message of the missionary has to be widely propagated and public. What are the channels that “everyone” watches? What are the sources that “everyone” follows?

Once the message is broadcast, there’s a time delay on the effect based on the amount of ambiguity in the message and the difficulty of the strategic decision-making that follows.

But.

Once everyone hears the message…

And everyone updates their beliefs about what other people believe…

Change can be both sweeping and immediate.

This is how massive changes can seemingly happen overnight. How “stable” regimes can be toppled, social movements can ignite, and stable equilibria everywhere can be dramatically and irrevocably disrupted.

The Common Knowledge game teaches us some critical lessons:

1.) It isn’t the knowledge itself that matters, but what we believe other people know.

2.) You had better be aware of who the missionaries are, and what they’re saying.

Otherwise, it might soon be YOUR turn to leave the island.

False Positive

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

—-
How scared should I be?
—-

For the past few weeks we’ve been discussing risk and uncertainty.

Risk is a situation in which we know the odds, know the payoffs, and can plan accordingly.

Uncertainty is a situation where we don’t know the odds, or are unsure about the payoffs.

Nearly every difficult decision falls into one of these two categories.

Understanding the difference is critical to making wise decisions…

But we screw it up.

All the time.

In fact, we make this error so often that it has a special name:

The Illusion of Certainty.

The illusion of certainty comes in two flavors:

The zero-risk illusion (where risk is mistaken for certainty)…

And the calculable-risk illusion (where uncertainty is mistaken for risk).

These are the two primary ways our maps stop reflecting the territory…

The engine of our mistakes.

Let’s start with the zero-risk illusion.

We encounter the zero-risk illusion whenever we enter a risk situation (with calculable odds of winning or losing)…

But believe that we know for sure what will happen.

A simple example:

You come across a man on the street. He waves you down.

He holds a single playing card in his hand. He shows it to you – an Ace of Spades.

“I’m running a memory experiment,” he explains.

He turns the card face-down in his palm.

“I’ll give you 50$,” he says, “if you can name the card in my hand.”

You ponder this.

He showed you the card; you can easily recall it was the Ace of Spades.

Winning the 50$ seems like a sure thing.

You tell the man your guess.

He turns the card over, revealing….a Joker.

What happened?

Well, as it turns out, the man is a magician, and you’re on one of those TV shows where people embarrass the overly-credulous.

Instead of 50$, your co-workers make fun of you for several weeks as your beet-red face is beamed across the country for all to see.

This was a situation of risk that was mistaken for certainty.

You didn’t know he was a magician, and so assumed the card in his palm would be the card he showed you.

You fell victim to the zero-risk illusion.

That might seem a bit far-fetched, though, so let’s examine another scenario where this illusion occurs.

You go for your annual check up and get the usual series of blood tests.

Your doctor enters the room carrying a clipboard. She looks very concerned. She stares at you and sighs.

“I’m sorry,” she says, “but you have Barrett’s Syndrome. It’s an incredibly rare condition characterized by having a gigantic brain and devastatingly high levels of attractiveness…

…There is no known cure.”


Is the room spinning? you think. Your skin feels flush.

“What’s the prognosis, doc?”

She looks you right in the eyes. She appears both empathetic and strong. She’s good at this, you think.

“The average life expectancy of someone with Barrett’s Syndrome is…
….8 months.”

Some of you may remember this scene from an earlier email (titled “8 Months to Live”) in which we discussed the importance of individual vs. group indexing.

For the moment, though, let’s forget that discussion.

What if you had asked a different question?

Let’s go in a new direction this time.

We rejoin our scene, already in progress:

“The average life expectancy of someone with Barrett’s Syndrome is…
….8 months.”

You pause. You furrow your brow.

“Not good, doc. Can I ask – how sure are we? How good is this test?”

The doctor nods, as if she understands why you would ask, but that look of sympathy-slash-pity hasn’t left her face.

“I’m sorry, I know you’re looking for some hope right now…but the test is extremely accurate. It’s nearly 100% accurate in detecting the disease when it’s present, and the false positive rate is only 5 in 100,000.”

“Hmmmm. Doesn’t sound good for me, I guess. Let me ask you, doc – exactly how many people have this syndrome? Maybe I can join a support group.”

“The disease is very rare. Only 0.01% of the population has Barrett’s syndrome.”

She clearly expects you to be resigned to your fate.

Instead, you are….smiling?

How scared should you be?

We trust so much in technology that it can cause us to fall victim to the zero-risk illusion.

Because we believe medical science to be accurate, receiving a positive test result for something as devastating as Barrett’s Syndrome can cause extreme anxiety.

But let’s think about those numbers a bit.

Let’s start with a random selection of 100,000 people.

Based on what we know about Barrett’s Syndrome, how many people in this population should have the disease?

Based on that 0.01% number, we’d expect ten people to have Barrett’s Syndrome in a population of 100,000.

Because the test is very accurate in detecting the disease, we’d expect all those people to test positive.

Our false positive rate was 5 out of 100,000, which means that out of our group of 100,000 we should also expect 5 people to test positive that don’t have the disease.

That means that we have 10 real positives….and 5 false ones.

So if you test positive for Barrett’s Syndrome, the odds that you actually have the disease are 2-to-1 – about a 67% chance.

Not great, but not certain, either.
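Here’s the same natural-frequency arithmetic written out in code, using the email’s invented numbers:

```python
# Natural frequencies for the (invented) Barrett's Syndrome test.
population = 100_000
true_positives = population * (10 / 100_000)  # 10 sick people, all detected
false_positives = population * (5 / 100_000)  # 5 healthy people flagged anyway

p_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {p_sick_given_positive:.0%}")  # 67%

# Even with a "5 in 100,000" false-positive rate, 1 in every 3
# positive results is wrong -- because the disease itself is so rare.
```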

This scenario, while hypothetical, plays out every day across the country.

Routine medical screenings with seemingly low false-positive rates produce far more wrong diagnoses than you might expect – simply because of the scale at which they’re administered.

In this situation, we have a risk – about 2-to-1 odds – of having Barrett’s Syndrome. But that risk seems like certainty.

The other form of the illusion of certainty is the calculable-risk illusion…and it’s the one which feels most appropriate to our current global situation.

The calculable-risk illusion occurs when we fool ourselves into thinking that we know the odds.

We trick ourselves into believing we know more than we do, that we’re making a rational calculation…

When, in reality, no “rational calculation” is possible.

Donald Rumsfeld put this quite famously:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say we know there are some things we do not know.

But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

The calculable-risk illusion occurs when we mistake unknown unknowns for known unknowns.

This – to use the framework we’ve been developing over the past few weeks – is when we need to leave the world of statistical analysis and enter the world of game theory….

To stop playing the game and start playing the players.

Failure to do so can have extremely dire consequences.

My favorite example of this is called the “Turkey Problem,” and comes from Nassim Taleb’s Antifragile, which I’ll quote here:

A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.”

The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey.
So with the butcher surprising it, the turkey will have a revision of belief—right when its confidence in the statement that the butcher loves turkeys is maximal and “it is very quiet” and soothingly predictable in the life of the turkey.

“The absence of evidence is not evidence of absence.” The turkey predicts with “increasing certainty” that the farmer will continue to feed him, because he has no evidence it will ever be otherwise.

Because negative events happen less often does not mean they are less dangerous; in fact, it is usually the opposite. Don’t look at evidence, which is based on the past; look at potential. And don’t just look for “evidence this theory is wrong” – is there any evidence that it’s right?

The turkey is not wrong in saying that its life is quiet and peaceful; after all, all historical evidence tells him this is so.

His error was mistaking uncertainty for risk, believing he understood the odds.
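One way to put the turkey’s “increasing statistical confidence” into code is with Laplace’s rule of succession – my choice of model here, not Taleb’s:

```python
# The turkey's confidence that tomorrow brings food, modeled with
# Laplace's rule of succession (an assumption of mine, not Taleb's):
#   P(fed tomorrow) = (days fed so far + 1) / (days observed + 2)

for day in (1, 10, 100, 1000):
    p_fed_tomorrow = (day + 1) / (day + 2)
    print(f"day {day:4d}: P(the butcher loves turkeys) = {p_fed_tomorrow:.3f}")

# By day 1000 the model is ~99.8% confident -- and day 1001 is the one
# where it is really not a very good idea to be a turkey. The model was
# never wrong about the past; it was blind to an unknown unknown.
```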

If you’ve seen me rant on Twitter or in my various Live Streams about “epistemic humility…”

This is why.

It is not purely a moral concern:

Humility in the face of a complex universe is a means of survival.

Because:

There are known knowns…

And known unknowns…

But it’s in the Unknown Unknowns…

…that the universe hides its hatchets.

No Basis

——–
On what basis?
——–

Over the past few weeks we’ve built a system for understanding how we make decisions.

First, we needed to understand that people come to problems with different priors – good and bad.

Then, we needed to understand the importance of consulting the base rate.
Last week, we added a wrinkle:

The usefulness of the base rate depends a lot on whether it is group indexed or individually indexed.

Why spend so much time on probabilities?

Because we need to understand probabilities to estimate our level of risk.

After all, nearly every decision we make entails some form of risk…whether we know it or not.

This week, we introduce a big idea that I will reference many times throughout the rest of this series:

The difference between risk and uncertainty.

Risk is a situation where you have an idea of your potential costs and payoffs.

“Hmm. I’ve got a 50% chance of winning 100$, and it costs 60$ to play. Is this worth it?”

When you’re faced with risk, statistical analysis is your friend.

“Well, let’s see. 50% of 100$ is 50$, so that’s my average payoff. The cost is 60$, so I would average out at a 10$ loss. That’s not a good bet.”
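That back-of-the-envelope reasoning, written out:

```python
# Expected value of the bet described above.
p_win, payoff, cost = 0.50, 100, 60

expected_value = p_win * payoff - cost
print(expected_value)  # -10.0 -> you lose 10$ per play, on average
```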

Uncertainty is a situation where you don’t know the potential payoffs or costs.

“Hmmm. I’ve got some chance to win some money. I don’t know how much, or what my chances are. Is this worth it?”

Using statistical analysis in situations of uncertainty will almost always lead you astray. Instead, we need to turn to game theory (which we’ll discuss in a future email).

If I could leave you with a single takeaway, it would be this:

To act rationally, we need to understand whether we are experiencing risk or uncertainty.

This is much harder than it sounds.

To explain why, let’s borrow a bit from the world of investment…

And discuss derivatives.

A “derivative” is something that shares a relationship with something else you care about.

The thing you care about is called the “underlying.”

Sometimes it’s hard to affect the underlying. It can be easier to interact with the derivative instead.

I’ll pick an embarrassing personal issue as an example, because why not?
Let’s discuss body fat and attractiveness.

I was (and sometimes still am) insecure about how I look. I think this is a pretty common feeling. I didn’t think of myself as physically attractive, and I wanted to improve that.

My physical attractiveness is the underlying. The thing I cared about.

It’s hard to directly change your attractiveness. Your facial features, bone structure, facial symmetry, etc, are permanent, short of serious surgery.

So, instead of directly changing my attractiveness, I looked for a derivative, something that was easier to change.

The derivative I settled on was body fat percentage.

“The less body fat I have,” I reasoned, “the more attractive I will be. Body fat and attractiveness are related, so by changing the former I can improve the latter.”

(Of course, this sounds well-reasoned in this description. I’m leaving out all the self-loathing, etc., but I can assure you it was there).

The relationship between the derivative and the underlying is the basis.

In my head, the basis here was simple: as body fat goes down, attractiveness goes up.

When we express the basis in this way – as a formula that helps us to decide on a strategy – we are solving a problem via algorithm.

“If this, then that.”

X=Y.
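To make that concrete, here’s a hypothetical sketch of this body-fat algorithm in code – the numbers are made up for illustration:

```python
# A mental model of the basis, as an if-this-then-that algorithm.
# The numbers are invented; the point is the fixed, assumed relationship.

def predicted_attractiveness(body_fat_pct: float) -> float:
    """Derivative in, underlying out: assumes the basis never changes."""
    return 100 - 2 * body_fat_pct  # "as body fat goes down, attractiveness goes up"

print(predicted_attractiveness(25.0))  # 50
print(predicted_attractiveness(10.0))  # 80 -- the algorithm says: cut fat!
```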

Humans are hard-wired algorithmic problem solvers. Our super-power is the ability to notice the basis and use algorithms to predict the future.

We are pattern-seekers, always trying to understand what the basis is
(“I’ve noticed that the most attractive men have less body fat…”)

And once we think we know the basis, we tend to use it to try to predict the future…

(“If I lose body fat, I will become more attractive…”)

Or explain the present…

(“He is attractive because he has little body fat.”)

The amazing thing about this kind of judgement is that it’s often more accurate and useful than, for example, complex statistical regression or series analysis.

Simple rules of thumb have served us well for thousands of years.

But there is a danger hidden inside this way of thinking.

Let’s introduce one more concept:

Basis risk.

Basis risk is the damage that could occur if the relationship between the underlying and derivative isn’t what you thought…

…Or doesn’t perform as you expected.

Example:

You believe that the more water you drink (the derivative)…

The better you will feel (the underlying).

Thus, the basis is:

Drink more water = feel better.

So you drink 3 gallons of water a day from your tap.

But.

You didn’t realize that your tap water comes from a well located just off the grounds of a decommissioned nuclear power plant.

The water you’re drinking contains trace amounts of radiation that will, over time, cause you to grow 17 additional eyeballs.

In small amounts, the effect was unnoticeable…

At your current rate of 3 gallons a day the effect is…

EYE-CATCHING

(hold for applause)

Anyway.

Your problem was misunderstanding the basis.

It wasn’t:

Drink more water = feel better

It was:

Drink more water = feel worse.

The basis risk was severe health complications and an exorbitantly high optometrist bill.

We love to solve problems via algorithm, but if the relationship between derivative and underlying isn’t what we thought it was – or isn’t as tight as we thought it was…

Disaster follows.

Always.

It’s critical that we get the basis right. We must understand how changes to the derivative affect the underlying.

But this is much harder to do than it might seem.

For one, the world is complex.

Things that seem related often aren’t; things that ARE related don’t behave in the ways we expect.

Every part of the system affects every other part; the chain of causation can be difficult to pry apart.

But even when we DO work out the basis correctly, it can change over time.

Let’s return to my struggles with body image; specifically, the relationship between body fat and attractiveness.

Assume, for a moment, that you believe my presumptions to be true, and that less body fat really does make someone more attractive.

(By the way, there’s a huge amount of evidence that this isn’t true at all, as the excessive amount of internet drooling over this guy shows.)

Will that basis always be true?

After all, we’re not discussing laws of nature here. We’re discussing people – messy, complicated, and ever-evolving.

We don’t need to resort to hypotheticals to imagine a world in which body fat was considered attractive…

We can find examples of idealized bodies with non-zero body fat percentages in the ancient world:

Roman Statues showing classically “ideal” bodies with non-zero body fat percentages.

Even today, “curvy” bodies are attractive:

Some modern examples of “curvy” body types.

The basis between body fat and attractiveness is ambiguous, and has changed over time.

Whether it’s a “dad bod” on TV or a Roman statue, less body fat isn’t ALWAYS better for attractiveness.

If body-image-problems-Dan doesn’t update his algorithm…

He could end up dieting, stressing, struggling, even hurting his long-term health…

And actually decrease his overall attractiveness, the very thing he was trying to improve.

(Why am I speaking in the third-person now?)

This is the basis risk.

This is what happens when the relationship between derivative and underlying changes over time.

This is what happens when we drift from risk…

To uncertainty.

The algorithm stops working.

The formula says “X” when it should say “Y”…

And everyone suffers as a result.

All of us are INCREDIBLE at creating algorithms and TERRIBLE at updating them.

We tend to view updating algorithms as “changing our minds” or “being wrong…”
Rather than as acknowledging that the world is complex…

And that even if we were right yesterday, that doesn’t mean we’re right today.

The key to managing our basis risk is constantly monitoring how well the underlying and the derivative track with one another.

The moment these start to drift apart, we need to be able to admit that the correlation isn’t what we once thought it was….and to update our algorithms.

Maybe that way we can actually have our cake…

And eat it, too.

8 Months To Live

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

——
How worried should I be that I have 8 months to live?
——

So far, we’ve discussed how we make decisions involving probability:

We form beliefs about how likely certain events are based on our own experiences.

– These beliefs are “Priors.”

– They act as a map, guiding us through uncertain territory.

– The base rate is the probability of an event occurring. To find it, we look at the average outcome of similar events in the past.

– Base rates are often far more useful for decision making than our priors.

– But because people love narratives – individuating data – we often ignore base rates. We say “it’s different this time.”

So:

Is decision-making just a matter of asking “what’s the base rate?”

No.

Because just as our priors can be misleading if they are not representative…

Base rates can be misleading as well.

Imagine this scenario:

You’re a 20-year-old personal trainer. Let’s call you Jill.

(You remember Jill – our personal trainer slash administrative assistant from last week?)

This is your fifth trip to the doctor’s office this month.

You wait, nervously sipping a protein shake.

It’s been days of invasive tests, blood samples, medical forms. Will they finally be able to find what’s wrong with me?

The door opens. Your doctor enters.

You look up hopefully, but her face is grim. Stoic. Professional.

She sighs, sits down in front of you, and says:

“I’m sorry…but based on your test results, you most likely have Barrett’s Syndrome.

It’s a rare condition characterized by having a gigantic brain and devastatingly-high levels of attractiveness.

There is no known cure.”

You lower your protein shake. Is the room spinning? Your skin feels flush.

“What’s the prognosis, doc?”

She looks you right in the eyes.

She’s both empathetic and strong. Wow, she’s good at this, you think.

“The average life expectancy of someone with Barrett’s Syndrome is…

….8 months.”

The room goes dark. She has a kind voice.

Your last conscious perception is of your protein shake, falling to the floor and spilling everywhere.

Let’s ask an important question:

How worried should Jill be?

On the face of things, she should be pretty worried.

The base rate here is clear:

The average patient lives for 8 months after receiving this diagnosis.

But all averages are not created equal.

To help us understand Jill’s predicament, we need to bring in two important mental models:

Mean vs. median…

And Individual indexing vs. Group indexing.

Let’s start with mean and median.

There are different measures of “average,” or central tendency:

The mean, which is how most of us think of averages, is the total of all the values divided by the number of values.

The median, another way of measuring “central tendency,” is the middle point.

If you line up 5 kids by height, the middle child will be shorter than two and taller than two. That’s the median.

When we hear “average life expectancy of 8 months,” our natural reaction is to extrapolate.

We think “the average is 8 months, so I am likely to have only 8 months to live.”

But what if that 8 months is the mean?

That would mean that half of people would live longer than 8 months. It all depends on which average we’re talking about.
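A toy example – survival times invented purely for illustration – shows how far apart the two kinds of average can sit:

```python
import statistics

# Invented survival times, in months, for ten patients. Right-skewed:
# a hard floor near zero, and a long tail of survivors to the right.
survival_months = [2, 3, 4, 5, 6, 8, 14, 24, 60, 240]

print(statistics.median(survival_months))  # 7.0  -> half live LONGER than this
print(statistics.mean(survival_months))    # 36.6 -> dragged upward by the tail
```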

Does this sound far-fetched?

I’d agree with you, except this exact thing happened to evolutionary biologist Stephen Jay Gould.

From Gould’s wonderful essay, The Median Isn’t The Message:

“In July 1982, I learned that I was suffering from abdominal mesothelioma, a rare and serious cancer usually associated with exposure to asbestos.”

“The literature couldn’t have been more brutally clear: mesothelioma is incurable, with a median mortality of only eight months after discovery. I sat stunned for about fifteen minutes, then smiled and said to myself: so that’s why they didn’t give me anything to read. Then my mind started to work again, thank goodness.

“When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good.”


We tend to think of averages as “real” – as concrete things, out there in the universe.

But they aren’t.

Averages are abstractions – a way of thinking about the world. They aren’t determinants.

And while the average doesn’t truly exist except in our minds…

Variations around the average are all that exist.

It reminds me of these images of “the average” person that made their way around the internet a few years ago:

This person is a figment; they don’t exist. And we know that.

But it’s hard to shake the feeling that that “8 months” number means something for Jill.

After all, it isn’t based on nothing. So how do we use it?

Let’s come back to Gould. He’s been diagnosed with mesothelioma, which has a median survival time of 8 months.

If the median is 8 months, then half of mesothelioma patients must live longer than that.

But which half?

“I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation’s best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.”
The 8-month survival number is a group index.

It accounts for everyone and mashes their survival rates together.

But Jill isn’t everyone; she’s not the average.

What we need to know is:

What’s the survival rate for people like Jill?

That’s an individual index.

Jill’s healthy, she’s young, she’s fit.

And because of this, Jill is likely on the other side of the median.

Back to Gould:

“I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call “right skewed.” (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right.

In skewed distributions, variation to one side of the central tendency is more stretched out – left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned.

After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn’t much room for the distribution’s lower (or left) half – it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives.”

In fact, Gould ended up living for another twenty years – before eventually succumbing to a different disease.

While averages are useful, we always need to account for the ways in which our situation is not average…

In other words, we need to take into account our individuating data.

Here is where we try to bridge the gap between map and territory:

By understanding that our priors can be useful…

But only when used to further our understanding of the base rate.

Combining the narrative power of human thought…

With our ability to see patterns at a high level…

…is how good decisions are made.

More art than science.

Map Meets Territory

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

——
What is the territory?
——

In our last email, we discussed the fact that we rarely come to a decision without any data.

We have pre-existing beliefs about how likely or unlikely certain outcomes are.

These beliefs are called priors, and they influence our decision-making both before…

(“Based on how many Toms I know, what percentage of men do I think are named Tom?”)

…and after the fact…

(“Google says 95% of men are named Tom, but that can’t be right, because I haven’t met that many Toms.”)

Whether our priors reflect reality depends on how representative our experiences are.

Because of this, our priors can be more or less accurate, even when based on real experiences.
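
One way to picture a prior is as a running tally that new experience updates. Here’s a toy sketch in Python – every number in it is invented for illustration:

```python
# A prior as a running tally (all numbers invented).
# Past experience: of 50 men I remember meeting, 2 were named Tom.
prior_toms, prior_men = 2, 50

# New experience: I meet 20 more men, and 1 of them is a Tom.
new_toms, new_men = 1, 20

# Updating the estimate just pools the old and new counts.
# (This is the intuition behind a formal Beta-Binomial update:
# prior evidence and new evidence are weighed together.)
estimate = (prior_toms + new_toms) / (prior_men + new_men)
print(f"Updated guess: {estimate:.1%} of men are named Tom")
```

The more men I’ve already met, the more the old tally dominates – which is why a prior built on lots of unrepresentative experience can be so stubbornly wrong.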

You can think of priors as quick approximations used to help make guesses about complicated problems.

In this way, they are much like maps – they help us get to where we want to go, but they are an imperfect reflection of reality (just as a map necessarily leaves out huge amounts of detail).

This is why:

“The map is not the territory.”

– Alfred Korzybski

If priors are our map…

Then what is the territory, exactly?

Take a hypothetical woman – we’ll call her Jill.

You don’t know much about Jill…only that her friends call her “a bit wild, and very outgoing.”

I’ll ask you to make a guess about Jill…

Is it more likely that Jill is:

  • An administrative assistant?
  • Or a personal trainer?

Now that you’ve read last week’s email on Bad Priors, you may have a sense of how you make a guess like this…

You’ll consult your priors and compare the administrative assistants and personal trainers you’ve met to your image of Jill.

And herein lies a problem.

Human beings love narratives, and when presented with striking (but perhaps misleading) information we use those narratives to help us make decisions.

In this case, Jill’s outgoing nature seems to make her a perfect fit for personal training – she’ll like talking to clients and have lots of energy.

Meanwhile, her wild side seems to make her a poor choice for a quiet office setting.

This seems to make sense…

But we’ve been strung along by the narrative of who Jill is…

And we’ve ignored the base rate.

The base rate is the likelihood that some event will happen, based on the average outcome of similar events.

If 1 out of every 100 students at your high school drops out, the base rate of dropping out at your school is 1%.

If 10 people each year in your town are killed by escaped laboratory mice driven by an endless thirst for revenge, and your town has 10,000 residents, then the base rate of being killed by MurderMice is 10/10,000 (or 0.1%).

Let’s think about how this applies to Jill.

Google tells me there are about 374,000 personal trainers in the US – let’s assume that’s low and round up to 500,000.

Meanwhile, there are over 3 million administrative assistants in the US.

Even if we assume that Jill’s personality doubles her chance of becoming a trainer…

(something I’d be very unsure about)

…she’s still 3 times more likely to be a secretary.
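
Here’s that arithmetic as a minimal Python sketch, using the rounded numbers above (the “doubles her chances” figure is the same rough assumption, not a measured fact):

```python
# Base-rate arithmetic for Jill, using the rounded numbers above.
administrative_assistants = 3_000_000
personal_trainers = 500_000

# Before we hear anything about her personality: 6-to-1 odds.
prior_odds = administrative_assistants / personal_trainers

# Rough assumption: her personality doubles the chance she's a
# trainer. This is Bayes' rule in odds form: the evidence favors
# "trainer" by a factor of 2, so the assistant-to-trainer odds
# get divided by 2.
personality_factor = 2
posterior_odds = prior_odds / personality_factor

print(f"Prior odds (assistant : trainer): {prior_odds:.0f} to 1")
print(f"Odds after the personality story: {posterior_odds:.0f} to 1")
```

Even a generous story only moves the needle from six-to-one to three-to-one. The base rate still dominates.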

Humans love stories.

And because of that, we tend to put far too much weight on “individuating data”…

Characteristics we recognize, patterns we’ve seen in our own lives.

We consult our priors, notice patterns, construct narratives from those patterns, and then use those narratives to predict what will happen.

In doing so, we ignore the base rate – and, perhaps, an uncomfortable truth:

It is the average that is predictive, not the individual.

Knowing the average number of deaths on airplanes is more predictive for us than a friend telling us about their dramatic near-death experience…

The average outcome of investing in high-risk, high-volatility stocks is more instructive for us than the story of a neighbor who made his billions investing in a startup that breeds MurderMice.

If our priors are the map…

The base rate is the territory.

Wherever possible, we need to not simply consult our own priors…

…but learn to consult the base rate as well.

This is a simple principle in theory, but incredibly hard to apply in real life.

The pull of narrative is incredibly strong in us. We are trained to look for the particular, for the individuating, for the specific.

We think in stories, not in averages.

But averages help us make better decisions.

Asking “what happened before?” is just as valuable as, if not more valuable than, asking “what seems like it’s going to happen now?”

When our priors lead us away from understanding the base rate…

our map diverges from the territory…

and our decisions become more and more inaccurate.

And sooner or later, we look up from the map…

And find ourselves in uncharted territory.
