
All In Our Heads


What’s left to do but go to the beach?


As we come into the home stretch of our latest series of emails…
(Dealing with risk, uncertainty, and how to think about the Coronavirus pandemic)…


You may have noticed that I’ve been avoiding something.
I’ve been completely mute about a critically important component to understanding your COVID-19 risk:

How risky it actually is.


There hasn’t been a single statistic, figure, fatality rate or case number.


No models.


No predictions.


Nothing.


And this glaring omission is the topic of this email.


I am going to try and make an argument I have been building to for two months now.


Namely:


We cannot know how risky COVID-19 is…


And trying to find out is only making it worse.


If that sounds like a problematic statement to you, I get it.


All I can ask is that you stick with me through this email.


Let’s start where ALL good philosophical discussions start:


On the internet.


I’d like to share some examples of recent COVID-19 posts I’ve found.


Before I do, a few points:


– It doesn’t matter when these arguments were made.

– I’m not arguing for or against any of the arguments made in these posts.


– These are purely meant as examples of rhetoric, so don’t get hung up on any of the numbers they use or don’t use.


Cool?

Cool.

All of these posts were made by very smart people.


All of these people are using publicly-available COVID data to make their arguments.


And while I’ve only given you a few quick screenshots, you can tell these people are arguing forcefully and rationally.


These are smart people, being smart.


So let me ask you:


Do you feel smarter?


Now that you’ve read these, do you understand the situation better?

My guess is….

No.


My guess is that you’ve actually read several threads, or posts, or articles like these.


Well-argued, quoting numerous “facts,” dutifully citing their sources…


And you’ve come away more confused than before.


I know that that’s been my experience.

To dig into why this is the case, we have to take a bit of a journey.


We’re going to start with COVID-19, follow it all the way back to the roots of Western Philosophy, make a hard left into game theory…

And end up back at COVID-19 again.


Let’s start with a (seemingly) basic question:


Why is it so hard for smart people to agree on even basic facts about Coronavirus?


I’m not talking about your Uncle who gets all his news from YouTube because he thinks aliens control the media.


I’m talking smart people with educated backgrounds.

People who understand bias, who understand systems.


How is it possible that even THEY can’t agree on basic things like how dangerous the virus is?


There are two basic categories of problems with understanding Coronavirus:


One is logistical (it’s just really hard to get good information)…


And one is epistemological (it’s just really hard for us to know things, in general).


Let’s start with the first category – logistical.

Gathering accurate data about Coronavirus is extremely difficult.


For one, the size and scale of the outbreak makes this a unique event.


There are very few historical parallels, and none within living memory of most of the people who study this kind of thing.


Two, Coronaviruses (the general family that our current pandemic comes from) are not well understood.


Funding to study them, before the pandemic, was hard to come by.


Not only that, but Coronaviruses are notoriously difficult to grow in lab cultures.


All of this means that only a handful of scientists specialized in Coronaviruses…leaving us woefully unprepared.


On top of a general lack of knowledge, we have to be careful to separate Coronavirus and COVID-19 (the disease that the virus causes).


While Coronavirus doesn’t seem to vary much around the world, COVID-19 does. That’s because the disease affects people differently depending on their unique health risks, the society in which they live, etc.


If you’re overweight, your experience of COVID-19 may be very different than someone who’s not.


Same if the area where you live has a lot of pollution.


Or if you smoke.


Or if you’re above 40.


Or if you’re diabetic.


All this makes the overall impact of the disease extremely hard to predict. We’re not sure what the important risk factors are, or how dramatically they impact the disease’s progression.

Take the fatality rate, for example.


Personally, I’ve seen people claim the fatality rate is as high as 7%, and others say it’s as low as .05%.

Why is this so hard to figure out?


The number we’re talking about is referred to as the “case fatality rate,” or CFR. CFR is the percentage of people diagnosed with COVID-19 who die.

That seems pretty straightforward.

But, as we mentioned above, the disease’s effects vary from country to country.


CFR also changes based on how many people you test, and WHO you test – if you test only people in the emergency room, or people in high-risk demographics, the percentage of fatalities will be higher. If you test everyone on Earth, the percentage of fatalities will be lower.


The CFR will also change based on the quality of treatment; after all, better treatment should result in a better chance of living.


That means that CFR will not only change from country to country, but will change within the same country over time as treatments evolve and testing ramps up.
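
To make that concrete, here’s a minimal Python sketch of how the same outbreak can produce wildly different CFRs depending on who gets tested. Every number below is invented purely for illustration:

```python
# Hypothetical outbreak measured two different ways -- all numbers invented.
deaths = 1_000

# Regime A: only the sickest, hospitalized patients get tested.
confirmed_cases_narrow = 20_000
cfr_narrow = deaths / confirmed_cases_narrow     # 0.05 -> 5.0%

# Regime B: broad community testing also catches mild and asymptomatic cases.
confirmed_cases_broad = 200_000
cfr_broad = deaths / confirmed_cases_broad       # 0.005 -> 0.5%

print(f"CFR with narrow testing: {cfr_narrow:.1%}")   # 5.0%
print(f"CFR with broad testing:  {cfr_broad:.1%}")    # 0.5%
```

Same deaths, same virus – a ten-fold difference in the headline number, purely from who got tested.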

Based solely on the logistical challenges around understanding an event of this scale, we should expect a great deal of uncertainty.


But.


Our fundamental problem with understanding coronavirus is NOT that we lack data or smart people to help us process that data.


Our problem is that data doesn’t help us at all.


How could this be the case?


If we need to understand something, won’t more data help with that?


This brings us to the second category of problem that I mentioned earlier:
Epistemological.

See, the thing that I’ve found so completely fascinating about this pandemic…


Is how directly it brings us out of the routine of our everyday lives…
Right into the domains of Western Philosophy.


What I’m going to argue here is that our struggle to understand coronavirus is directly related to our struggle to understand anything.


See, normally, we get away with not really having to know how we know stuff.


We operate according to “generally accepted best practice” or “custom,” and get along fine.


But Coronavirus is different.


Someone might say, “I just listen to what a majority of epidemiologists say.”


But how do we know which epidemiologist to listen to?


Or how many epidemiologists need to agree to constitute a majority?


Or whether the majority believing something is even evidence that it’s true?


Or whether it’s all a media plot?


We’ve officially left the realm of epidemiology…


And entered the realm of epistemology (the theory of knowledge).

This is why our online arguments over fatality rates go nowhere:


They aren’t about what we know…

They’re about how we know what we know.


And with that, I’d like to introduce you to Karl Popper and Thomas Kuhn.

Karl Popper was an Austrian-British philosopher who lived from 1902 to 1994.


Popper was an incredibly influential philosopher of science, and I will be the very first to admit that my understanding of his work is limited. I highly encourage you to research Popper on your own, rather than taking my own interpretation of him as fact.

That said, Popper’s primary contribution to my own mental models has been this:


Theories can never be empirically proven, only falsified.


Here’s what that means to me:


We can never be absolutely sure that what we believe corresponds exactly to external reality.


Western Philosophy in general has spent the last 200 years more or less demolishing the idea that we can somehow escape our own mental constructs, sense impressions, and physical limitations to achieve a “God’s-Eye View” of the world as it is.


That doesn’t mean there is no external reality, necessarily; just that we can never be 100% sure that we know exactly what it is.


So, how do we go about gaining knowledge?


Popper argues that we gain knowledge by falsifying beliefs, rather than “proving” them, per se.


Let’s take a hypothetical example:


Say a friend of yours claims that the Dodo is extinct.


This may or may not be “true” in the absolute sense. And, indeed, even if we haven’t seen a Dodo in centuries, we can’t be absolutely certain, for instance, that there isn’t some cave somewhere with a hidden colony of Dodos.


There remains a possibility, even if it’s small, that the Dodo is not extinct.
However, I can falsify the statement “the Dodo is extinct” very easily, simply by locating a living Dodo.

Thus, knowledge proceeds negatively; not by discovering absolute truths, but by systematically falsifying the untrue.


Knowledge, then, is a process by which we become more and more accurate over time through falsification.


That may seem like it makes sense, but if knowledge is ONLY falsification, why do we feel like we “know” things?


How do some ideas become “accepted wisdom”?


To understand this piece of the puzzle, we come to Thomas Kuhn.


Thomas Kuhn was an American philosopher of science who lived from 1922 to 1996.


Kuhn’s most famous work is The Structure of Scientific Revolutions, in which he proposed a model for understanding how science advances.


Let’s pick up where we left off with Popper.


Popper proposed that knowledge is about finding what isn’t true, a process that becomes more accurate over time…


(even though we can never be 100% sure what we know is right).


Imagine you’re a scientist studying the weather.


You perform several experiments, testing (and often disproving) your hypotheses.


In one experiment, you discover an interesting connection between electrical currents and snowfall in Ohio.


You publish your findings in the prestigious Journal of Ohio Snowfall Studies.


I, upon reading your article, decide to try and replicate your findings.


Upon doing so, I find myself unable to recreate the results that got you so excited.
In fact, MY findings indicate there is no connection between electrical currents and Ohio snowfall.


I publish MY findings, which seem to falsify YOUR findings.


A colleague, interested in the resulting debate, then attempts to reconcile the two sets of findings by offering a theory that a third element (blood pressure of nearby raccoons) is actually the cause of the phenomena.


A FOURTH scientist then tries to disprove the Stressed-Out-Raccoon Hypothesis…


And so on.


Note that the scientific process described above is largely destructive; it is through gradual falsification that knowledge progresses.
Over time, these layers of falsification build up, leaving very few ideas unscathed.


The ideas that are left standing gradually become accepted wisdom. So accepted wisdom is just “all the stuff we haven’t been able to falsify.”


Kuhn’s insight was that just because something is “accepted wisdom” does not mean it is true.


In fact, it is through the accumulation of accepted wisdom that entire eras of scientific progress are overturned.


How is this possible?


Our understanding of the world is never complete. Remember – Popper argues that we can’t ever have a fully accurate, absolutely certain view of the world.


The best we can get are theories that are useful; theories that prove their worth by making the world a better place.


But a theory’s usefulness doesn’t guarantee its accuracy.


For example, I might believe that singing to my plants every day helps them grow.


Now, that theory might be useful; it might encourage me to water the plants every day. It might lower my overall stress levels by getting me to spend time outside.


But that doesn’t mean singing to the plants really helps them grow in and of itself.


Kuhn argued that something similar happens in the sciences. Over time, through experimentation, we find useful ideas; ideas that help us progress.
But because our understanding is always limited, and there is no possibility of being 100% certain, mistakes will always creep into the systems of knowledge we build.


We can’t avoid this; it’s inevitable. Humans make mistakes, and so mistakes are built into everything that we do.


In science and philosophy, these mistakes manifest as seemingly unsolvable problems, contradictory bits of knowledge, and straight-up weirdness that just doesn’t “fit” what we “know” to be “true.”


In Kuhn’s formulation, when these questions become too much to bear – when the blind spots in our picture of the world become too obvious – a revolution occurs.


This means that the “paradigm” (Kuhn’s word for the scientific consensus of how the world works) becomes undone…and a new paradigm takes its place.


Just as Copernicus replaced the Ptolemaic System…and Newton undermined Copernicus…and the Theory of Relativity destroyed Newton…


So does scientific consensus advance…


By destroying itself.


In other words:


Knowledge is an act of creative destruction.


Now.


We’ve gone very high-level here…


So let’s bring this back to Earth:


What does this have to do with Coronavirus?


I’m certainly not arguing that Coronavirus is the herald of a scientific revolution.


The basic tools for understanding what’s happening (epidemiology, statistical analysis, modeling, etc) have been around for a long time.
What I’m arguing is that our attempts to “fix” our knowledge of the virus and to “get it right” are actually making it harder to understand.


Stay with me, now. 🙂


We discussed, above, the idea that science moves forward through a process of creative destruction by systematically falsifying and undermining itself.
Typically, this process is contained within the pages of the various journals and academic conferences of a given field.


Want to debate the current state of genetics? There are designated places to do so.


Want to get into an argument about geology? Figure out where that debate is occurring and get started!


The debate over Coronavirus, though, is not happening within the pages of academic journals.


It is not limited to scholarly conferences and meet ups.


It is occurring everywhere at once.


It’s on the internet.


The news.


The paper.


Twitter.


Facebook.


Your friends are discussing it.


Your neighbors bring it up.


Just as the virus itself is ubiquitous (literally everywhere), so is information about the virus.


Not only is it everywhere, everyone is weighing in.


Epidemiologists? Sure. They’re celebrities now.


But also:


General practitioners, statisticians, politicians, teachers, plastic surgeons, marketers, historians, plumbers, pundits, accountants…


Everyone has an opinion.


When you combine:


Impossibly large amounts of uncertain data….


With an impossibly large number of people interpreting that data…


You get a massive, exponential increase in the amount of information in the system.


It isn’t that we “don’t know” enough about COVID-19 and the Coronavirus…


It’s that we know too much, and have no way of discerning what’s right.


Let’s return, briefly, to the idea of the common knowledge game that we addressed in “The Beauty Contest” and “Missionaries.”


How could anyone possibly comprehend such a mess of information?


We can’t. It’s impossible.


So…what do we do instead?


We turn to Missionaries.


Missionaries are the people we look to in order to know what other people know.


They’re the news channels, pundits, journalists…the sources, not of “truth,” but of “common knowledge.”


The problem with looking to Missionaries now, however, is that they’re wrong.


And I know they’re wrong, because they have to be wrong.


Remember:


Logistically, it’s nearly impossible to get a grip on what COVID-19 does. Science is certainly our best option.


But epistemologically, science only advances by undermining itself.
And today, that process is being exponentially multiplied across every possible news outlet, Twitter feed and YouTube channel…


Meaning that all anyone hears…


All anyone sees…


Is missionary after missionary contradicting themselves.


Science is working; don’t get me wrong.


Slowly, but surely, we are getting closer and closer to the truth.
But what we as a culture are perceiving (possibly for the first time in Western society)…


Is just how self-destructive scientific progress actually is.


And while I don’t think we can understand the absolute chaos of COVID-19 information…


I do think we can use game theory to predict how people will react to seeing large-scale falsification in progress for the first time.


I think people will simply conclude that no one knows anything, and that worrying about it is too cognitively-taxing.


For fun, I googled “beaches packed” just now.


Here’s the very first article that came up:


When the message gets too chaotic, too impenetrable? People stop listening.
To be clear, I absolutely do not blame anyone who has started tuning COVID-19 out at this point.


Yes, other countries had wildly different approaches, and different outcomes.


Yes, different messaging and different strategies could have prevented countless deaths here in the United States.


But that didn’t happen.


Now, what we’re left with is an impenetrable “fog of pandemic” made exponentially worse by our desire for more information.


In our attempt to understand the virus we have undermined any attempt to address it.


By putting science on display, we have guaranteed our loss of faith in it.


What’s left to do but go to the beach?

Missionaries


—–
How do we create Common Knowledge?
—–

We start this week with a riddle. It’s a famous one, so no Googling…give yourself the chance to try and think it through. 🙂

Here it is as written in Terence Tao’s Blog:

“There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colours.

Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces).

If a tribesperson does discover his or her own eye color, then their religion compels them to leave the island at noon the following day.

All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).

Note: for the purposes of this logic puzzle, “highly logical” means that any conclusion that can logically deduced from the information and observations available to an islander, will automatically be known to that islander.

Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).

One day, a blue-eyed foreign missionary visits the island and wins the complete trust of the tribe.

One evening, he addresses the entire tribe to thank them for their hospitality.

However, not knowing the customs, the missionary makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.

What effect, if anything, does this faux pas have on the tribe?

With this (admittedly strange) riddle we return to the Common Knowledge game.

You’ll recall that we introduced the idea of a Common Knowledge game in last week’s email…

(And if you missed it, I’ve gone ahead and posted it on the blog in order to make sure we don’t leave our new subscribers behind.)

…as what happens when many people are making second, third, and fourth order decisions.

The crowd acts not based on how each individual person thinks, but on what they think about what other people think.

The question then becomes:

How do we know what other people think?

To be clear, I don’t think we ever know what other people think…not really.

For our purposes we are most interested in figuring out how other people decide what they think other people think.

In other words: “How do we know what’s common knowledge?”

First, let’s define common knowledge.

In one sense, common knowledge is just stuff that pretty much everyone knows.

The earth orbits the sun.

The Statue of Liberty is in New York City.

But common knowledge is more than just what we know…it’s what we know other people know.

From Wikipedia:

“Common knowledge is a special kind of knowledge for a group of agents. There is common knowledge of p in a group of agents G when all the agents in G know p, they all know that they know p, they all know that they all know that they know p, and so on ad infinitum.”

It’s not just what you know – it’s what you know that I know you know I know.

How do we figure that out?

This brings us back to the island of the eye-color tribe.

Let’s start with the answer and work backwards (if you still want to take a stab at solving the riddle yourself, don’t read any further).

What effect does the missionary’s pronouncement have on the tribe?

All 100 tribe members with blue eyes leave the island at noon on the 100th day after the missionary’s statement.

Why?

To help work this out, imagine that there was only one person with blue eyes on the island. What would happen then?

The missionary would pronounce that they see a person with blue eyes.

The blue-eyed islander would immediately realize that the missionary must be referring to them; after all, they know that every other islander has brown eyes. Therefore, they would leave the island at noon the next day.

Now, let’s make things slightly more complicated, and imagine that there are two blue-eyed islanders – let’s call them Marty and Beth.

The missionary states that they see a person with blue eyes.

Marty thinks: “Wow! He just called out Beth as having blue eyes. That means Beth will leave the island tomorrow.”

Beth thinks: “Yup – he’s talking about Marty. Poor Marty! He’ll have to leave the island tomorrow.”

Tomorrow rolls around. Both Beth and Marty gather around with everyone else to watch the blue-eyed islander leave the island…

And no one leaves.

Beth and Marty stare at each other. The other islanders stand around awkwardly.

Beth thinks: “Wait a minute…Marty has blue eyes. He should know that he needs to leave, because he knows everyone else’s eye color, and can deduce that his eyes are blue.

But if he didn’t leave, that means that he thinks he doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes are brown, that means….”

Marty thinks: “Wait a minute…Beth has blue eyes. She should know that she needs to leave, because she knows everyone else’s eye color, and can deduce that her eyes are blue.

But if she didn’t leave, that means that she thinks she doesn’t have to, because someone else should have deduced that their eyes are blue. And since I know that everyone else’s eyes are brown, that means….”

Beth and Marty together: “MY EYES MUST BE BLUE!”

And so Beth and Marty leave the island together at noon the next day.

This process continues for each new islander we add to the “blue eyes” group. And so the generalized rule for the riddle is:

If N is the number of blue-eyed islanders, nothing happens until the Nth day, whereupon all blue-eyed islanders leave simultaneously.
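
Here’s a minimal Python sketch of that induction. The deduction in the comment is the islanders’ chain of reasoning, compressed into one rule:

```python
def blue_eyes(eye_colors):
    """Simulate the island after the missionary's announcement (day 0).

    eye_colors: list of "blue" / "brown", one entry per islander.
    Returns {day: number of islanders who leave at noon that day}.
    """
    n = len(eye_colors)
    on_island = [True] * n
    blue_remaining = sum(c == "blue" for c in eye_colors)
    departures = {}

    for day in range(1, n + 1):
        leavers = []
        for i in range(n):
            if not on_island[i]:
                continue
            # What islander i observes: every blue-eyed person still on the
            # island -- except, unknowingly, themselves.
            sees_blue = blue_remaining - (1 if eye_colors[i] == "blue" else 0)
            # Deduction: "if my eyes were NOT blue, the k blue-eyed people I can
            # see would all have left on day k. They haven't, so mine are blue."
            if sees_blue == day - 1:
                leavers.append(i)
        if leavers:
            departures[day] = len(leavers)
            for i in leavers:
                on_island[i] = False
            blue_remaining -= sum(eye_colors[i] == "blue" for i in leavers)
    return departures


# 100 blue-eyed and 900 brown-eyed islanders: nothing happens for 99 days,
# then every blue-eyed islander leaves together on day 100.
print(blue_eyes(["blue"] * 100 + ["brown"] * 900))   # -> {100: 100}
```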

There’s a critical element to this riddle that many people (namely, me) miss:

Everyone already knows everyone else’s eye color.

It isn’t that the islanders are learning anything truly new about other people. They aren’t. It has nothing to do with what people know.

What changes the situation on the island is the missionary’s public announcement.

It isn’t that people suddenly know something new – it’s that they know that other people know.

Common Knowledge is created by public statement, even when all the information was already available privately.

It isn’t about eye color; it’s about the missionary.

Once we hear the missionary’s message, it takes time for everyone to process. It takes time for people to make strategic decisions.

The more ambiguity in the message, the more time it takes to have an effect (for example, “Beth has blue eyes” is an unambiguous message that would have had an immediate effect. “I see someone with blue eyes” is ambiguous, and takes N days to have an effect.)

We ask the question at the beginning:

How do we know what other people think?

The answer: we create an understanding of what other people know by listening to the missionaries we believe everyone else listens to.

The message of the missionary has to be widely propagated and public. What are the channels that “everyone” watches? What are the sources that “everyone” follows?

Once the message is broadcast, there’s a time delay on the effect based on the amount of ambiguity in the message and the difficulty of the strategic decision-making that follows.

But.

Once everyone hears the message…

And everyone updates their beliefs about what other people believe…

Change can be both sweeping and immediate.

This is how massive changes can seemingly happen overnight. How “stable” regimes can be toppled, social movements can ignite, and stable equilibrium everywhere can be dramatically and irrevocably disrupted.

The Common Knowledge game teaches us some critical lessons:

1.) It isn’t the knowledge itself that matters, but what we believe other people know.

2.) You had better be aware of who the missionaries are, and what they’re saying.

Otherwise, it might soon be YOUR turn to leave the island.

False Positive


—-
How scared should I be?
—-

For the past few weeks we’ve been discussing risk and uncertainty.

Risk is a situation in which we know the odds, know the payoffs, and can plan accordingly.

Uncertainty is a situation where we don’t know the odds, or are unsure about the payoffs.

Nearly every difficult decision falls into one of these two categories.

Understanding the difference is critical to making wise decisions…

But we screw it up.

All the time.

In fact, we make this error so often that it has a special name:

The Illusion of Certainty.

The illusion of certainty comes in two flavors:

The zero-risk illusion (where risk is mistaken for certainty)…

And the calculable-risk illusion (where uncertainty is mistaken for risk).

These are the two primary ways our maps stop reflecting the territory…

The engine of our mistakes.

Let’s start with the zero-risk illusion.

We encounter the zero-risk illusion whenever we enter a risk situation (with calculable odds of winning or losing)…

But believe that we know for sure what will happen.

A simple example:

You come across a man on the street. He waves you down.

He holds a single playing card in his hand. He shows it to you – an Ace of Spades.

“I’m running a memory experiment,” he explains.

He turns the card face-down in his palm.

“I’ll give you $50,” he says, “if you can name the card in my hand.”

You ponder this.

He showed you the card; you can easily recall it was the Ace of Spades.

Winning the $50 seems like a sure thing.

You tell the man your guess.

He turns the card over, revealing….a Joker.

What happened?

Well, as it turns out, the man is a magician, and you’re on one of those TV shows where people embarrass the overly-credulous.

Instead of $50, your co-workers make fun of you for several weeks as your beet-red face is beamed across the country for all to see.

This was a situation of risk that was mistaken for certainty.

You didn’t know he was a magician, and so assumed the card in his palm would be the card he showed you.

You fell victim to the zero-risk illusion.

That might seem a bit far-fetched, though, so let’s examine another scenario where this illusion occurs.

You go for your annual check up and get the usual series of blood tests.

Your doctor enters the room carrying a clipboard. She looks very concerned. She stares at you and sighs.

“I’m sorry,” she says, “but you have Barrett’s Syndrome. It’s an incredibly rare condition characterized by having a gigantic brain and devastatingly high levels of attractiveness…

…There is no known cure.”


Is the room spinning? you think. Your skin feels flush.

“What’s the prognosis, doc?”

She looks you right in the eyes. She appears both empathetic and strong. She’s good at this, you think.

“The average life expectancy of someone with Barrett’s Syndrome is…
…8 months.”

Some of you may remember this scene from an earlier email (titled “8 Months to Live”) in which we discussed the importance of individual vs. group indexing.

For the moment, though, let’s forget that discussion.

What if you had asked a different question?

Let’s go in a new direction this time.

We rejoin our scene, already in progress:

“The average life expectancy of someone with Barrett’s Syndrome is…
….8 months.”

You pause. You furrow your brow.

“Not good, doc. Can I ask – how sure are we? How good is this test?”

The doctor nods, as if she understands why you would ask, but that look of sympathy-slash-pity hasn’t left her face.

“I’m sorry, I know you’re looking for some hope right now…but the test is extremely accurate. The test is nearly 100% accurate in detecting the disease when it’s present, and the false positive rate is only 5 in 100,000.”

“Hmmmm. Doesn’t sound good for me, I guess. Let me ask you, doc – exactly how many people have this syndrome? Maybe I can join a support group.”

“The disease is very rare. Only 0.01% of the population – about 10 people in every 100,000 – has Barrett’s Syndrome.”

She clearly expects you to be resigned to your fate.

Instead, you are….smiling?

How scared should you be?

We trust so much in technology that it can cause us to fall victim to the zero-risk illusion.

Because we believe medical science to be accurate, receiving a positive test result for something as devastating as Barrett’s Syndrome can cause extreme anxiety.

But let’s think about those numbers a bit.

Let’s start with a random selection of 100,000 people.

Based on what we know about Barrett’s Syndrome, how many people in this population should have the disease?

Based on that 0.01% number, we’d expect ten people to have Barrett’s Syndrome in a population of 100,000.

Because the test is very accurate in detecting the disease, we’d expect all those people to test positive.

Our false positive rate was 5 out of 100,000, which means that out of our group of 100,000 we should also expect 5 people to test positive that don’t have the disease.

That means that we have 10 real positives….and 5 false ones.

So if you test positive for Barrett’s Syndrome, you’ve got a 2-to-1 chance of having the disease.

Not great, but not certain, either.
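
Here’s the same arithmetic as a short Python sketch (the disease, the test, and every number are made up for the example):

```python
# Checking the Barrett's Syndrome arithmetic -- a made-up disease, made-up numbers.
population     = 100_000
prevalence     = 10 / 100_000    # 0.01% of people actually have the condition
false_pos_rate = 5 / 100_000     # 5 false positives per 100,000 people tested

true_positives  = population * prevalence      # 10 people (the test catches essentially all of them)
false_positives = population * false_pos_rate  # 5 people

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"P(disease | positive test) = {p_disease_given_positive:.0%}")   # ~67%, i.e. 2-to-1 odds
```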

This scenario, while hypothetical, plays out every day across the country.

Routine medical screenings with seemingly low false-positive rates produce far more wrong diagnoses than you might expect – simply because of the scale at which they’re administered.

In this situation, we have a risk – odds of about 2-to-1 – of having Barrett’s Syndrome. But that risk seems like certainty.

The other form of the illusion of certainty is the calculable-risk illusion…and it’s the one which feels most appropriate to our current global situation.

The calculable-risk illusion occurs when we fool ourselves into thinking that we know the odds.

We trick ourselves into believing we know more than we do, that we’re making a rational calculation…

When, in reality, no “rational calculation” is possible.

Donald Rumsfeld put this quite famously:

Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know.

We also know there are known unknowns; that is to say we know there are some things we do not know.

But there are also unknown unknowns—the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.

The calculable-risk illusion occurs when we mistake unknown unknowns for known unknowns.

This – to use the framework we’ve been developing over the past few weeks – is when we need to leave the world of statistical analysis and enter the world of game theory….

To stop playing the game and start playing the players.

Failure to do so can have extremely dire consequences.

My favorite example of this is called the “Turkey Problem,” and comes from Nassim Taleb’s Antifragile, which I’ll quote here:

A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.”

The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey.
So with the butcher surprising it, the turkey will have a revision of belief—right when its confidence in the statement that the butcher loves turkeys is maximal and “it is very quiet” and soothingly predictable in the life of the turkey.

“The absence of evidence is not evidence of absence.” The turkey predicts with “increasing certainty” that the farmer will continue to feed him, because he has no evidence it will ever be otherwise.

Because negative events happen less often does not mean they are less dangerous; in fact, it is usually the opposite. Don’t look at evidence, which is based on the past; look at potential. And don’t just look for “evidence this theory is wrong”; is there any evidence that it’s right?

The turkey is not wrong in saying that its life is quiet and peaceful; after all, all historical evidence tells him this is so.

His error was mistaking uncertainty for risk, believing he understood the odds.
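
Here’s a tiny sketch of the turkey’s “increasing statistical confidence.” The specific formula (Laplace’s rule of succession) is my assumption for illustration – Taleb doesn’t prescribe one:

```python
# P(fed again tomorrow | fed on every one of the first d days) = (d + 1) / (d + 2)
# -- Laplace's rule of succession, used here purely as an illustrative assumption.
for day in (1, 10, 100, 999):
    confidence = (day + 1) / (day + 2)
    print(f"Day {day:4d}: confidence the butcher loves turkeys = {confidence:.3f}")

# Day 1000 is Thanksgiving. The model's confidence peaked at ~0.999
# precisely when the turkey was in the most danger.
```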

If you’ve seen me rant on Twitter or in my various Live Streams about “epistemic humility…”

This is why.

It is not purely a moral concern:

Humility in the face of a complex universe is a means of survival.

Because:

There are known knowns…

And known unknowns…

But it’s in the Unknown Unknowns…

…that the universe hides its hatchets.

No Basis

——–
On what basis?
——–

Over the past few weeks we’ve built a system for understanding how we make decisions.

First, we needed to understand that people come to problems with different priors – good and bad.

Then, we needed to understand the importance of consulting the base rate.
Last week, we added a wrinkle:

The usefulness of the base rate depends a lot on whether it is group indexed or individually indexed.

Why spend so much time on probabilities?

Because we need to understand probabilities to estimate our level of risk.

After all, nearly every decision we make entails some form of risk…whether we know it or not.

This week, we introduce a big idea that I will reference many times throughout the rest of this series:

The difference between risk and uncertainty.

Risk is a situation where you have an idea of your potential costs and payoffs.

“Hmm. I’ve got a 50% chance of winning $100, and it costs $60 to play. Is this worth it?”

When you’re faced with risk, statistical analysis is your friend.

“Well, let’s see. 50% of $100 is $50, so that’s my average payoff. The cost is $60, so I would average out at a $10 loss. That’s not a good bet.”
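
In code, that back-of-the-envelope expected-value check looks like this (same numbers as above):

```python
# Expected value of the bet described above.
p_win  = 0.5
payout = 100     # dollars if you win
cost   = 60      # dollars to play

expected_value = p_win * payout - cost
print(f"Average result per play: ${expected_value:+.0f}")   # -$10: not a good bet
```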

Uncertainty is a situation where you don’t know the potential payoffs or costs.

“Hmmm. I’ve got some chance to win some money. I don’t know how much, or what my chances are. Is this worth it?”

Using statistical analysis in situations of uncertainty will almost always lead you astray. Instead, we need to turn to game theory (which we’ll discuss in a future email).

If I could leave you with a single takeaway, it would be this:

To act rationally, we need to understand whether we are experiencing risk or uncertainty.

This is much harder than it sounds.

To explain why, let’s borrow a bit from the world of investment…

And discuss derivatives.

A “derivative” is something that shares a relationship with something else you care about.

The thing you care about is called the “underlying.”

Sometimes it’s hard to affect the underlying. It can be easier to interact with the derivative instead.

I’ll pick an embarrassing personal issue as an example, because why not?
Let’s discuss body fat and attractiveness.

I was (and sometimes still am) insecure about how I look. I think this is a pretty common feeling. I didn’t think of myself as physically attractive, and I wanted to improve that.

My physical attractiveness is the underlying. The thing I cared about.

It’s hard to directly change your attractiveness. Your facial features, bone structure, facial symmetry, etc, are permanent, short of serious surgery.

So, instead of directly changing my attractiveness, I looked for a derivative, something that was easier to change.

The derivative I settled on was body fat percentage.

“The less body fat I have,” I reasoned, “the more attractive I will be. Body fat and attractiveness are related, so by changing the former I can improve the latter.”

(Of course, this sounds well-reasoned in this description. I’m leaving out all the self-loathing, etc., but I can assure you it was there).

The relationship between the derivative and the underlying is the basis.

In my head, the basis here was simple: as body fat goes down, attractiveness goes up.

When we express the basis in this way – as a formula that helps us to decide on a strategy – we are solving a problem via algorithm.

“If this, then that.”

X=Y.

Humans are hard-wired algorithmic problem solvers. Our super-power is the ability to notice the basis and use algorithms to predict the future.

We are pattern-seekers, always trying to understand what the basis is
(“I’ve noticed that the most attractive men have less body fat…”)

And once we think we know the basis, we tend to use it to try to predict the future…

(“If I lose body fat, I will become more attractive…”)

Or explain the present…

(“He is attractive because he has little body fat.”)

The amazing thing about this kind of judgement is that it’s often more accurate and useful than, for example, complex statistical regression or series analysis.

Simple rules of thumb have served us well for thousands of years.

But there is a danger hidden inside this way of thinking.

Let’s introduce one more concept:

Basis risk.

Basis risk is the damage that could occur if the relationship between the underlying and derivative isn’t what you thought…

…Or doesn’t perform as you expected.

Example:

You believe that the more water you drink (the derivative)…

The better you will feel (the underlying).

Thus, the basis is:

Drink more water = feel better.

So you drink 3 gallons of water a day from your tap.

But.

You didn’t realize that your tap water comes from a well located just off the grounds of a decommissioned nuclear power plant.

The water you’re drinking contains trace amounts of radiation that will, over time, cause you to grow 17 additional eyeballs.

In small amounts, the effect was unnoticeable…

At your current rate of 3 gallons a day the effect is…

EYE-CATCHING

(hold for applause)

Anyway.

Your problem was misunderstanding the basis.

It wasn’t:

Drink more water = feel better

It was:

Drink more water = feel worse.

The basis risk was severe health complications and an exorbitantly high optometrist bill.

We love to solve problems via algorithm, but if the relationship between derivative and underlying isn’t what we thought it was – or isn’t as tight as we thought it was…

Disaster follows.

Always.

It’s critical that we get the basis right. We must understand how changes to the derivative affect the underlying.

But this is much harder to do than it might seem.

For one, the world is complex.

Things that seem related often aren’t; things that ARE related don’t behave in the ways we expect.

Every part of the system affects every other part; the chain of causation can be difficult to pry apart.

But even when we DO work out the basis correctly, it can change over time.

Let’s return to my struggles with body image; specifically, the relationship between body fat and attractiveness.

Assume, for a moment, that you believe my presumptions to be true, and that less body fat really does make someone more attractive.

(By the way, there’s a huge amount of evidence that this isn’t true at all, as the excessive amount of internet drooling over this guy shows.)

Will that basis always be true?

After all, we’re not discussing laws of nature here. We’re discussing people – messy, complicated, and ever-evolving.

We don’t need to resort to hypotheticals to imagine a world in which body fat was considered attractive…

We can find examples of idealized bodies with non-zero body fat percentages in the ancient world:

Roman Statues showing classically “ideal” bodies with non-zero body fat percentages.

Even today, “curvy” bodies are attractive:

Some modern examples of “curvy” body types.

The basis between body fat and attractiveness is ambiguous, and has changed over time.

Whether it’s a “dad bod” on TV or a Roman statue, less body fat isn’t ALWAYS better for attractiveness.

If body-image-problems-Dan doesn’t update his algorithm…

He could end up dieting, stressing, struggling, even hurting his long-term health…

And actually decrease his overall attractiveness, the very thing he was trying to improve.

(Why am I speaking in the third-person now?)

This is the basis risk.

This is what happens when the relationship between derivative and underlying changes over time.

This is what happens when we drift from risk…

To uncertainty.

The algorithm stops working.

The formula says “X” when it should say “Y”…

And everyone suffers as a result.

All of us are INCREDIBLE at creating algorithms and TERRIBLE at updating them.

We tend to view updating algorithms as “changing our minds” or “being wrong…”
Rather than as acknowledging that the world is complex…

And that even if we were right yesterday, that doesn’t mean we’re right today.

The key to managing our basis risk is constantly monitoring how well the underlying and the derivative track with one another.

The moment these start to drift apart, we need to be able to admit that the correlation isn’t what we once thought it was….and to update our algorithms.
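
Here’s a toy sketch of that monitoring habit. The two series are invented; the point is that the proxy can track the thing you care about for a while and then quietly stop:

```python
import statistics

# Toy series -- all numbers invented. The point is the monitoring loop, not the data.
underlying = [10, 11, 12, 13, 15, 16, 14, 12, 11, 10]   # the thing we actually care about
derivative = [ 5,  6,  7,  8,  9, 10, 11, 12, 13, 14]   # the proxy we act on

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

early = correlation(underlying[:5], derivative[:5])   # basis holding: ~ +0.99
late  = correlation(underlying[5:], derivative[5:])   # basis broken:  ~ -0.99

print(f"early correlation: {early:+.2f}, late correlation: {late:+.2f}")
```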

Maybe that way we can actually have our cake…

And eat it, too.

8 Months To Live


——
How worried should I be that I have 8 months to live?
——

So far, we’ve discussed how we make decisions involving probability:

We form beliefs about how likely certain events are based on our own experiences.

– These beliefs are “Priors.”

– They act as a map, guiding us through uncertain territory.

– The base rate is the probability of an event occurring. To find it, we look at the average outcome of similar events in the past.

– Base rates are often far more useful for decision making than our priors.

– But because people love narratives – individuating data – we often ignore base rates. We say “it’s different this time.”

So:

Is decision-making just a matter of asking “what’s the base rate?”

No.

Because just as our priors can be misleading if they are not representative…

Base rates can be misleading as well.

Imagine this scenario:

You’re a 20 year old personal trainer. Let’s call you Jill.

(You remember Jill – our personal trainer slash administrative assistant from last week?)

This is your fifth trip to the doctor’s office this month.

You wait, nervously sipping a protein shake.

It’s been days of invasive tests, blood samples, medical forms. Will they finally be able to find what’s wrong with me?

The door opens. Your doctor enters.

You look up hopefully, but her face is grim. Stoic. Professional.

She sighs, sits down in front of you, and says:

“I’m sorry…but based on your test results, you most likely have Barrett’s Syndrome.

It’s a rare condition characterized by having a gigantic brain and devastatingly-high levels of attractiveness.

There is no known cure.”

You lower your protein shake. Is the room spinning? Your skin feels flush.

“What’s the prognosis, doc?”

She looks you right in the eyes.

She’s both empathetic and strong. Wow, she’s good at this, you think.

“The average life expectancy of someone with Barrett’s Syndrome is…

….8 months.”

The room goes dark. She has a kind voice.

Your last conscious perception is of your protein shake, falling to the floor and spilling everywhere.

Let’s ask an important question:

How worried should Jill be?

On the face of things, she should be pretty worried.

The base rate here is clear:

The average patient lives for 8 months after receiving this diagnosis.

But all averages are not created equal.

To help us understand Jill’s predicament, we need to bring in two important mental models:

Mean vs. median…

And Individual indexing vs. Group indexing.

Let’s start with mean and median.

There are different measures of “average,” or central tendency:

The mean, which is how most of us think of averages, is the sum of all the values divided by the number of values.

The median, another way of measuring “central tendency,” is the middle point.

If you line up 5 kids by height, the middle child will be shorter than two and taller than two. That’s the median.

When we hear “average life expectancy of 8 months,” our natural reaction is to extrapolate.

We think “the average is 8 months, so I am likely to have only 8 months to live.”

But what if that 8 months is the median?

That would mean that half of people would live longer than 8 months. It all depends on which average we’re talking about.

Does this sound far-fetched?

I’d agree with you, except this exact thing happened to evolutionary biologist Stephen Jay Gould.

From Gould’s wonderful essay, The Median Isn’t The Message:

“In July 1982, I learned that I was suffering from abdominal mesothelioma, a rare and serious cancer usually associated with exposure to asbestos.”

The literature couldn’t have been more brutally clear: mesothelioma is incurable, with a median mortality of only eight months after discovery. I sat stunned for about fifteen minutes, then smiled and said to myself: so that’s why they didn’t give me anything to read. Then my mind started to work again, thank goodness.

When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good.”


We tend to think of averages as “real” – as concrete things, out there in the universe.

But they aren’t.

Averages are abstractions – a way of thinking about the world. They aren’t determinants.

And while the average doesn’t truly exist except in our minds…

Variations around the average are all that exist.

It reminds me of these images of “the average” person that made their way around the internet a few years ago:

This person is a figment; they don’t exist. And we know that.

But it’s hard to shake the feeling that that “8 months” number means something for Jill.

After all, it isn’t based on nothing. So how do we use it?

Let’s come back to Gould. He’s been diagnosed with mesothelioma, which has a median survival time of 8 months.

If the median is 8 months, then half of mesothelioma patients must live longer than that.

But which half?

“I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation’s best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.”

The 8-month survival number is a group index.

It accounts for everyone and mashes their survival rates together.

But Jill isn’t everyone; she’s not the average.

What we need to know is:

What’s the survival rate for people like Jill?

That’s an individual index.

Jill’s healthy, she’s young, she’s fit.

And because of this, Jill is likely on the other side of the median.

Back to Gould:

“I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call “right skewed.” (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right.

In skewed distributions, variation to one side of the central tendency is more stretched out – left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned.

After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn’t much room for the distribution’s lower (or left) half – it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives.”
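
A quick numerical illustration of Gould’s point. The survival times below are invented for the sake of the example – the shape of the distribution is what matters:

```python
# Invented survival times (in months) with a long right tail, as Gould describes.
survival_months = sorted([2, 3, 4, 5, 6, 8, 8, 10, 14, 30, 60, 240])

n = len(survival_months)
median = (survival_months[n // 2 - 1] + survival_months[n // 2]) / 2   # even n: average the middle two
mean   = sum(survival_months) / n

print(f"median = {median} months")     # 8.0  -- the headline number
print(f"mean   = {mean:.1f} months")   # 32.5 -- dragged upward by the long right tail
```

An 8-month median is perfectly compatible with a population containing many long survivors.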

In fact, Gould ended up living for another twenty years – before eventually succumbing to a different disease.

While averages are useful, we always need to account for the ways in which our situation is not average…

In other words, we need to take into account our individuating data.

Here is where we try to bridge the gap between map and territory:

By understanding that our priors can be useful…

But only when used to further our understanding of the base rate.

Combining the narrative power of human thought…

With our ability to see patterns at a high level…

…is how good decisions are made.

More art than science.

Map Meets Territory


——
What is the territory?
——

In our last email, we discussed the fact that we rarely come to a decision without any data.

We have pre-existing beliefs about how likely or unlikely certain outcomes are.

These beliefs are called priors, and they influence our decision-making both before…

(“Based on how many Toms I know, what percentage of men do I think are named Tom?”)

…and after the fact…

(“Google says 95% of men are named Tom, but that can’t be right, because I haven’t met that many Toms.”)

Whether our priors reflect reality depends on how representative our experiences are.

Because of this, our priors can be more or less accurate, even when based on real experiences.

You can think of priors as quick approximations used to help make guesses about complicated problems.

In this way, they are much like maps – they help us get to where we want to go, but they are an imperfect reflection of reality (just as a map necessarily leaves out huge amounts of detail).

This is why:

“The map is not the territory.”

Alfred Korzybski

If priors are our map…

Then what is the territory, exactly?

Take a hypothetical woman – we’ll call her Jill.

You don’t know much about Jill…only that her friends call her “a bit wild, and very outgoing.”

I’ll ask you to make a guess about Jill…

Is it more likely that Jill is:

  • An administrative assistant?
  • Or a personal trainer?

Now that you’ve read last week’s email on Bad Priors, you may have a sense of how you make a guess like this…

You’ll consult your priors and compare the administrative assistants and personal trainers you’ve met to your image of Jill.

And herein lies a problem.

Human beings love narratives, and when presented with striking (but perhaps misleading) information we use those narratives to help us make decisions.

In this case, Jill’s outgoing nature seems to make her a perfect fit for personal training – she’ll like talking to clients and have lots of energy.

Meanwhile, her wild side seems to make her a poor choice for a quiet office setting.

This seems to make sense…

But we’ve been strung along by the narrative of who Jill is…

And we’ve ignored the base rate.

The base rate is the likelihood that some event will happen, based on the average outcome of similar events.

If 1 out of 100 people in your high school drop out, the base rate of dropping out at your school is 1%.

If 10 people each year in your town are killed by escaped laboratory mice driven by an endless thirst for revenge, and your town has 10,000 residents, then the base rate of being killed by MurderMice is 10/10,000 (or .1%) .

Let’s think about how this applies to Jill.

Google tells me there are about 374,000 personal trainers in the US – let’s assume that’s low and round up to 500,000.

Meanwhile, there are over 3 million administrative assistants in the US.
Even if we assume that Jill’s personality doubles her chance of becoming a trainer…

(something I’d be very unsure about)

…she’s still 3 times more likely to be a secretary.
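
As a rough, back-of-the-envelope sketch of that reasoning – the headcounts come from the paragraphs above, and the “doubling” factor is purely an assumption:

```python
admin_assistants  = 3_000_000   # rough US headcount from the text
personal_trainers = 500_000     # ~374,000, generously rounded up

prior_odds = admin_assistants / personal_trainers     # 6-to-1 in favor of "admin assistant"

personality_boost = 2                                 # assume "wild & outgoing" doubles the trainer odds
posterior_odds = prior_odds / personality_boost       # still 3-to-1 in favor of "admin assistant"

print(f"Odds Jill is an admin assistant rather than a trainer: about {posterior_odds:.0f} to 1")
```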

Humans love stories.

And because of that, we tend to put far too much weight on “individuating data”…

Characteristics we recognize, patterns we’ve seen in our own lives.

We consult our priors, notice patterns, construct narratives from those patterns, and then use those narratives to predict what will happen.

In doing so, we ignore the base rate – and, perhaps, an uncomfortable truth:

It is the average that is predictive, not the individual.

Knowing the average number of deaths on airplanes is more predictive for us than a friend telling us about their dramatic near-death experience…

The average outcome of investing in high-risk, high-volatility stocks is more instructive for us than the story of a neighbor who made his billions investing in a startup that breeds MurderMice.

If our priors are the map…

The base rate is the territory.

Wherever possible, we need to not simply consult our own priors…

…but learn to consult the base rate as well.

This is a simple principle in theory, but incredibly hard to apply in real life.

The pull of narrative is incredibly strong in us. We are trained to look for the particular, for the individuating, for the specific.

We think in stories, not in averages.

But averages help us make better decisions.

Asking “what happened before?” is just as valuable as – if not more valuable than – asking “what seems like it’s going to happen now?”

When our priors lead us away from understanding the base rate…

our map diverges from the territory…

and our decisions become more and more inaccurate.

And sooner or later, we look up from the map…

And find ourselves in uncharted territory.

The Beauty Contest


Since we’ve all had enough of being cooped up inside and self-quarantining…

Let’s imagine a simpler time.

You’re walking down an ocean boardwalk.

You can feel the sun on your neck, warming you all the way through.

The smells of salt air, cotton candy, roasting peanuts linger in the air.
The sounds of summer wash over you:
The gentle murmur of ocean waves…
The joyous cries of children playing…

The mechanical music of various sidewalk rides and games…
The calls of seagulls floating overhead.

You work your way through the crowds. You’re not headed anywhere in particular, just enjoying the scene.
As you walk you come across a large crowd gathered around an elevated stage. You stop to see what’s going on.
Three men stand on the stage next to each other. A carnival barker shouts through a megaphone:

“Ladies and gentlemen! This is your chance to be handsomely rewarded! Great prizes in store for those who are the best judge of beauty! How finely tuned is your romantic apparatus? Now is the time to find out!”

The barker gestures to the men, who smile and wave.

The three handsome contestants of the beauty contest.

“Each one of you gets a vote – which of these dashing young men is the handsomest?

Which of these rugged exemplars of American manhood is the most appealing?

Vote, and if your choice gets the most votes, a fabulous prize will be yours!”

Someone hands out slips of paper to the crowd; you’ve got a few moments to decide on your vote. You’ve got to admit…you like the sound of “fabulous prizes.” It’d be nice to win.

So…

Who do you vote for?
Let’s leave aside the obvious fact that it would be impossible to choose. Putting anyone in such a hopeless scenario would be cruel.

Underlying this situation is one of the most powerful mental models I’ve ever come across.

It’s an idea that – once you get it – changes how you view the world around you.

Rather than telling you, though…let’s see if we can reason it out. 🙂

How do we decide who to vote for?

The simplest answer is, “Vote for whomever you like the most.”

Are you attracted to the cool, mysterious vibe of contestant 1?

Vote for contestant 1.
Maybe you’re a huge pie fan, and so you’re drawn to the caring and lovable vibe of contestant 2.

Vote for contestant 2.

Or maybe it’s the raw, animal sexuality of the muscular contestant 3 that draws you in.

Vote for contestant 3.

This is first-order decision making. Just vote for your preference!

But is this the best way to vote, if our goal is to win the prize?

Probably not.

If we want to win, we need to make sure that we vote for the winning contestant.

That means we need to think not just about who we like, but how other people will vote.
How do you think other people will vote, on average?

We might say:

“Well, I prefer 1, because I like babies. But I think most people are attracted to hyper-masculine muscle-men, so most people will vote for 3…therefore I’ll vote for 3.”

Or we might think:

“While I prefer 3, I think that much raw machismo will simply be too over-stimulating for most. They’ll be so simultaneously titillated and intimidated that they will, in a kind of sexual panic, vote for the more soothing and comforting presence of 2.”
We’re now making decisions based on how we believe other people will vote.

This is second-order decision making:

Making decisions based on how we believe other people will vote.

Let me draw your attention to two important points here:
1. Second-order decision making requires us to have an opinion about what other people think.
We’re making guesses about what’s in other people’s heads. Hard to do.

2. Second-order decision making assumes that everyone else is making first-order decisions.

“They’re voting based on their preferences, while I’m voting based on what I think their preferences are.”

But is this the best way to vote, if we want to win the prize?
Probably not.
Think about this:

Do you believe you are the only person to have come up with second-order thinking?

Do you believe you’re the only one smart enough to realize you shouldn’t vote based on your preferences, but on what you believe about other people’s preferences?
Probably not.

If you’ve discovered second-order decision making, it’s probable that other people have as well.
And if they’ve discovered second-order decision making, that means they’re not deciding based on their preferences, but rather on what they believe the preferences of other people will be.
This brings us to third-order decision making:

Making decisions based on what we believe other people believe about what other people think.
So, it’s not:

“Who do I think is most attractive?”
And it’s not:

“Who do I think most people find most attractive?”

It’s:

“Who do most people think most people think is most attractive?”

There’s no reason to stop there, either.
We could keep going to fourth- and fifth-order decision making if we wanted.
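
If you want to see what those higher orders of reasoning look like with actual numbers, the standard toy example is Keynes’s “guess 2/3 of the average” game – a numeric cousin of our boardwalk contest, not part of the original story. Here’s a rough sketch of “level-k” thinking, assuming (purely for illustration) that a level-0 crowd guesses 50 on average:

```python
# A rough sketch of "level-k" reasoning in the "guess 2/3 of the average" game.
# Assumption: level-0 players guess at random between 0 and 100, averaging 50.

def best_guess(level: int, level0_average: float = 50.0) -> float:
    """What should I guess if I think everyone else reasons one level below me?"""
    guess = level0_average
    for _ in range(level):
        # Each extra layer of "what do they think others will do"
        # shrinks the target by another factor of 2/3.
        guess *= 2 / 3
    return guess

for k in range(6):
    print(f"level {k}: guess {best_guess(k):.1f}")

# level 0: 50.0  (just pick what you like)
# level 1: 33.3  (respond to what you think others will pick)
# level 2: 22.2  (respond to what you think others think others will pick)
# ...and if everyone iterates forever, the guesses collapse toward zero.
```

Each extra level of “what do they think I think they think” pushes the answer further down.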

This idea is nicely summarized in the “battle of wits” scene from The Princess Bride.

It’s this kind of thinking…

But spread out among a large group of people.

In game theory, this is called the Common Knowledge Game.

How do we decide best, when the best decision depends on how other people decide…

And everyone is making second- or third-order decisions?
How do we think about what other people think about what other people think?

Do I put the Iocaine powder in my glass, or yours?

The Common Knowledge Game, once you start to look for it, is everywhere…

And it’s critical to thinking clearly about situations like a pandemic.

Bad Priors

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

“The map is not the territory.” – Alfred Korzybski

Are you confused by the decisions people make?

I am.

Do you struggle to understand how, exactly, people could believe what they believe?

Me too.

In the next couple emails, I’m going to try and hash this out.

(Don’t worry – I haven’t moved on from productivity. Just in research mode.)

Let’s start with statistics.

Now, if your eyes rolled into the back of your head – I get it.

I, too, spent my life believing I was “bad at math” and hating everything mathematical.

As I’ve grown older, I’ve become interested in picking up the mathematical mode of thinking…

…if not the calculations, exactly.

When you try to learn math without actually doing any math problems, you get drawn to statistics.

Statistics – and statistical reasoning – are immediately applicable to life, so they’re fun!

(Trust me)

Let’s start today off with a statistical puzzle:

What percentage of American men are named Tom?

No googling, now.

It is unlikely you have any special insight into this problem…

(unless you work at the Census bureau…)

But I’d still like you to make a guess.

Go ahead – think of a number that sounds right to you.

Got it? OK – here’s the answer I got from Google:

A screenshot of Google that reads "99.78 percent of people with the first name Thomas are male."

(I know, I said no Googling, but it’s my newsletter and I’ll do what I want)

OK, so Google didn’t understand the question.

You may have got a cheap laugh out of that image, though.

Let me ask you another question:

How did you make your guess?

And how did you know the one in the image was wrong?

See, it’s rare for you to come to a question like this with no information whatsoever.

Every single moment of your life, you are amassing a database of information about the world.

You estimate statistical likelihoods based on what you’ve experienced.

When I asked you to guess, you considered how many Toms you know, or have heard of.

Then you did a rough calculation in your head – even if you were unaware of it.

That information you brought to the table – your beliefs about how many Toms there are – is a prior.

Your priors are all the information about the world that you bring to any decision making process…

What you believe before you see any specific evidence.

You consult that information before you make a prediction.

And of course, all decisions are just predictions about how things will turn out.

Your priors influence what you predict AND how you interpret evidence afterwards.

As Jordan Ellenberg points out in How Not To Be Wrong:

“If an experiment provided statistically significant evidence that a new tweak of an existing drug slowed the growth of certain kinds of cancer, you’d probably be pretty confident the new drug was actually effective.

But if you got the exact same results by putting patients inside a plastic replica of Stonehenge, would you grudgingly accept that the ancient formations were actually focusing vibrational earth energy on the body and stunning the tumors? You would not, because that’s nutty. You’d think Stonehenge probably got lucky.

You have different priors about those two theories, and as a result you interpret the evidence differently, despite it being numerically the same.”
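
For what it’s worth, Ellenberg’s point has a tidy arithmetic form. Here’s a minimal sketch in Python – the evidence numbers and priors are entirely made up – showing how the same “statistically significant” result lands on two very different priors:

```python
# Same evidence, different priors -> different conclusions.
# Numbers are invented purely to illustrate Ellenberg's point.

def posterior(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Suppose the experiment "works" 80% of the time if the treatment is real,
# and 5% of the time by luck if it isn't (a significant-looking fluke).
p_if_true, p_if_false = 0.80, 0.05

drug_prior = 0.30         # a tweaked drug working is plausible
stonehenge_prior = 0.001  # vibrational earth energy... not so much

print(f"{posterior(drug_prior, p_if_true, p_if_false):.2f}")        # ~0.87
print(f"{posterior(stonehenge_prior, p_if_true, p_if_false):.3f}")  # ~0.016
# Identical data, wildly different beliefs afterwards - because of the priors.
```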

Our priors are essential to how we understand the world.

So the real question is:

How good are my priors?

To answer that question, we counter with another…

(Have you noticed that’s a theme with these fucking emails? Jesus, Dan)

How representative of the world around me are my experiences?

An important thing to note about priors is that they rarely spring forth from nothing. They’re based on how we experience the world.

If 50% of the men you meet are Toms, you’d be likely to extrapolate something from that.

In fact, you’d be silly not to.

But if you live in Tomsville, state capital of West Tomsginia – a state where the name Tom is 10x as common as everywhere else – that prior knowledge of Toms can lead you astray when predicting how many Toms are in the world…

Even though it’s based on lived experience.

This is the rub: our priors can be both real and inaccurate.

Take our perception of crime.

We over-estimate how likely we are to fall victim to crime, despite national violent crime rates dropping for more than a decade.

Why is this? As Brian Christian and Tom Griffiths write in Algorithms to Live By:

…the representation of events in the media does not track their frequency in the world. As sociologist Barry Glassner notes, the murder rate in the United States declined by 20% over the course of the 1990s, yet during that time period the presence of gun violence on American news increased by 600%.

And this is from someone who has strongly defended the news media.

When you talk with friends, you tend to talk about what’s most interesting, what will make a cool or fun story.

We rarely talk about what’s representative, which leads to a skewed sense of what other people experience.

Or your Instagram feed – what gets deemed worthy of getting posted to the ‘gram?

It isn’t daily occurrences, it isn’t everyday moments – it’s what stands out, what’s special.

A non-stop scroll of everyone’s special moments can make those moments seem more common than they are.
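
Here’s a toy simulation of that effect – a feed that mostly surfaces out-of-the-ordinary moments. The probabilities are invented; this isn’t data about any real platform:

```python
import random

# Toy simulation: how a feed that over-samples dramatic moments skews perception.
# All probabilities are made up for illustration.

random.seed(0)

P_SPECIAL = 0.02          # how often a genuinely "special" moment actually happens
P_POST_IF_SPECIAL = 0.90  # special moments almost always get posted
P_POST_IF_ORDINARY = 0.01 # ordinary moments almost never do

moments = [random.random() < P_SPECIAL for _ in range(100_000)]
posted = [m for m in moments
          if random.random() < (P_POST_IF_SPECIAL if m else P_POST_IF_ORDINARY)]

true_rate = sum(moments) / len(moments)
feed_rate = sum(posted) / len(posted)

print(f"special moments in real life: {true_rate:.1%}")  # ~2%
print(f"special moments in the feed:  {feed_rate:.1%}")  # ~65%
# The feed isn't lying about any single moment - it's just not representative.
```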

Our personal reality is like glimpsing a forest through a keyhole:

True, in the sense that it depicts reality…

But misleading in the sense that it represents only a fraction of a fraction of the whole.

All this makes it very, very difficult to maintain accurate priors.

And that brings us back to our friend Korzybski, quoted at the beginning of this essay.

“The map is not the territory.”

Reality is far too vast for us to comprehend. It’s too much data.

But we can’t give up – we have to make our way through the world.

So we use our priors as a map.

Maps help us predict what’s next – help us find our way through the landscape.

But if our map is wrong – if our priors are wrong – we end up in the wrong place.

This, by the way, is how we get to broad empathy for people, no matter how wrong-headed they may be:

Their priors are wrong….

And so are ours.

It’s just a question of how wrong.

Next week, I’ll expand a bit on this map and territory stuff. The week after, we’ll talk about how you actually apply this…

– to make more money

– to be more successful

– to avoid complete disaster.

Just so you know this is actually going somewhere.

🙂

The Factory Floor

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.

Not going to lie – this week’s email comes out swinging a bit. Not 100% sure it works.

I debated sending it at all.

In the end, there’s nothing to do but trust the process and hope it lands.

Let me know your thoughts at the end. You can always reply right to this email; I read every one.

——
Is This The Factory Floor?
—–

I’d like to point out something that you likely take for granted:

The language of work has seeped into everyday life.

This occurred to me, of course, while I was reading a textbook on Lean Manufacturing and wondering how I could apply its lessons to my personal life…

(This is what you read in your spare time too, right? Of course it is.

My next project – a productivity course called “Against Productivity” – is about this sort of thing.

If I’m being honest, though, I’d be reading it anyway.)

Consider love, the least “businesslike” thing we do.

What’s the most common, most cliche advice on love?

The thing you’d hear in any marriage book, dating book, relationship book…

From any therapist, sexologist, or advice columnist?

“Relationships take work.”

Laura Kipnis, in her wonderful book Against Love: A Polemic (from which I have shamelessly stolen my title), underlines the point:

“We all know that Good Marriages Take Work: we’ve been well tutored in the catechism of labor-intensive intimacy.

Work, work, work: given all the heavy lifting required, what’s the difference between work and ‘after work’ again? Work/home, office/bedroom: are you ever not on the clock?

Good relationships may take work, but unfortunately, when it comes to love, trying is always trying too hard: work doesn’t work.”

Of course, relationships aren’t the only thing we’re taught “take work.”

If you want to be a good person, you probably believe that you need to be working out

Or working on yourself

Or “doing the work” in therapy.

With all this “work” going on, it’s not surprising that we’re all obsessed with productivity…

But it’s important to understand how new this concept is.

How To Lift Heavy Things

Taylorism (the “scientific management” system spearheaded by Frederick Winslow Taylor in the 1880s) is where our modern sense of “productivity” begins.

Taylor’s goal was getting the worker…

(“so stupid and phlegmatic that he more nearly resembles in his mental makeup the ox than any other type”, in Taylor’s words)

…on the factory floor to lift more and more heavy shit.

Anything that got in the way of this (say, a brief moment to enjoy the sunshine) was waste.

As Stanley Buder puts it in Capitalizing on Change: A Social History of American Business, “Taylor…substituted a mechanical work pace based on repetitive motions for the worker’s freedom to use his body and his tools as he chose.”

From this came the idea that it is an unqualified good to get ever more output from the same amount of input.

No matter what.

Enjoying the sunshine can go fuck itself – we have screws to tighten!

So, let me make a bold claim:

Productivity is dehumanizing.

But what do we really mean, when we say something’s dehumanizing?

We mean we are removing the biological – the human.

What defines the biological?

Redundancy, variation, diversity – in other words, randomness.

Capitalism abhors the random.

After all, economies of scale only emerge when every Big Mac is identical…

When the experience is guaranteed no matter what town, or state, or even country you’re in.

A Big Mac is a Big Mac is a Big Mac.

Unreliability means uncertainty and uncertainty means lost profits.

Businesses need reliability, because reliability means you can calculate a profit margin.

To achieve this, systems must be constructed – and then everything else subordinated to them.

Redundancy is eliminated.

Variation is reduced.

Diversity becomes monoculture.

As Taylor himself put it:

“In the past, man has been first; in the future, the system must be first.”

This is the logic of the factory floor: an unending conveyor belt attended to by automatons.

Pure, logical, productive, and efficient.

In other words, inhuman.

(This is the system’s greatest weakness…but that’s a discussion for another time.)

Now let’s move this unrelenting logic of the factory floor to “knowledge work” –
You know, the stuff that’s creative, requires originality, involves personal interaction….

The human stuff.

People are just so damn variable.

Our day to day performance changes depending on how we slept, whether we’ve had sex in the last week, whether we’ve gotten enough sunlight, whether our blood sugar is too high (or too low).

These squishy bits might help with survival, but they’re terrible at work.

If that’s the case, why the drive to be more productive?

Why the creeping dread that our inboxes are full, our schedules packed, our phantom phones buzzing in our pockets?

Excess Capacity Goes To The Bosses

The lie was that increased productivity at work would mean more free time at home.

This has been the promise of all technology since the wheel.

As recently as 2013, Ross Douthat wrote in the New York Times that we were experiencing “a kind of post-employment, in which people drop out of the work force and find ways to live, more or less permanently, without a steady job. So instead of spreading from the top down, leisure time — wanted or unwanted — is expanding from the bottom up.”

Wonderful. How’s all that awesome free time treating you, post-employment?

In fact, technology almost never “frees up” time for leisure. Excess capacity always goes to the bosses.

More time means higher standards and more demands on that time. After all – if you can respond at midnight to a work text, why shouldn’t you?

We saw this with household labor. The promise of the dishwasher, washing machine, and vacuum cleaner was that they would give household workers more free time.

Instead, standards of cleanliness rose with the excess capacity, and now mom is on her hands and knees cleaning the bathroom tiles with a toothbrush instead of reading Proust in the garden.

(Happy Mother’s Day, by the way!)

What this expansion of “work time” does is suffocate our “work-less” time.

“Work/Life balance” is a meme in our culture because one always overlaps with the other. There’s never a clean break.

So, rather than becoming more productive at work and finding ourselves with oodles of free time…

We find ourselves becoming more productive at work, and having less free time as a result.

And so, why not try what worked for the boss – and turn the tools of scientific management on ourselves?

And so Taylor waltzes into our homes, clipboard in hand.

Our desperate hope:

If we can become more productive in our vanishing “free” time…

We can cram more life into fewer hours.

Kanban Boards – popularized inside Toyota manufacturing plants – in our kitchens, helping keep track of chores.

Bullet journaling – “the analog method for the digital age that will help you track the past, order the present, and design your future” – to keep track of self-care.

Sex, intimacy, and date nights – scheduled across a shared Google Calendar.

We’ve bought into the belief that because business is effective, it must be good; 

And so the tools of business can help us live a decent life.

But they can’t.

Because business is about efficiency;

And efficiency is about process;

And efficient processes cut variety, chance, and waste;

And thus, efficient processes are dehumanizing.

And living a good life is all about being human, with all the messiness that entails.

——

You and I both know what the ultimate extension of this thought process looks like.

You and I have both seen the logic of the factory floor applied to living things.

Because a slaughterhouse is just another kind of factory.

No words

Just one pure moment of silence

Don’t worry.

You’ll never see it coming –

you’ll be far too busy.

—-

Because I don’t want to end it there…

And because I stated at the beginning that I’m working on a course on productivity…

What’s to do about all this?

I have a few thoughts:

1. Productivity needs to be, first and foremost, hedonistic.

By which I mean, it is designed to produce pleasure in your life. Fuck your boss, fuck your job, fuck your sidehustle –

The goal of productivity is pleasure.

Period.

2. Productivity needs to be humanizing.

Meaning:

We don’t build a system so that we can imprison ourselves in it.

We need to build a system that is human; one that allows for serendipity, for aesthetic joy, for the days when we just don’t feel like getting shit done.

It’s not about doing more. It’s about being effective in our human pursuits.

If you’re wondering what that looks like in practice…

Well, I’m working on it.

Stay tuned – I’ll tell you when it’s ready.

d

Sending Pens To Space

This post was originally an email sent to the Better Questions Email List. For more like it, please sign up – it’s free.


Am I Sending Pens To Space?


Some stories are just too good to be true.

This was my initial reaction upon reading about NASA’s issue with pens. In space.

Here’s a quick summary from Scientific American:

“During the height of the space race in the 1960s, legend has it, NASA scientists realized that pens could not function in space. They needed to figure out another way for the astronauts to write things down. So they spent years and millions of taxpayer dollars to develop a pen that could put ink to paper without gravity. But their crafty Soviet counterparts, so the story goes, simply handed their cosmonauts pencils.”

The version I read mentioned that NASA scientists eventually settled on some kind of nitrogen-powered mechanism that would literally force the ink downwards and onto the paper.

Everything about this story is perfect – and it seems perfect for illustrating a very specific kind of lesson.

Of course, the world is not so kind.

As Scientific American continues:

“Originally, NASA astronauts, like the Soviet cosmonauts, used pencils, according to NASA historians. In fact, NASA ordered 34 mechanical pencils from Houston’s Tycam Engineering Manufacturing, Inc., in 1965. They paid $4,382.50 or $128.89 per pencil. When these prices became public, there was an outcry and NASA scrambled to find something cheaper for the astronauts to use.”

Regardless of whether it’s complete bullshit or not – as, I would wager, most morality tales are – the lesson remains:

We love complex solutions to simple problems.

Why is that, exactly?

My guess is that there are a few forces at play – but here are the main two I can see:

  • Simplicity is more confusing than we expect.
  • The questions we ask shape the answers we come up with.

Let’s start with the second point, since this is…you know. The Better Questions email.

Let me paraphrase Nassim Taleb, who, in Antifragile, paraphrases his character Fat Tony:

“Every question has, within it, the seed of its own answer.”

And since humans are problem solving creatures, it’s very easy to jump to “finding the answer” before we ever “analyze the question.”

This makes us easily led, and easily misled.

It’s also one of the reasons I particularly enjoy the “Trolley Problem” memes.

The Trolley Problem is a famous ethical quandary, which Wikipedia summarizes:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  1. Do nothing and allow the trolley to kill the five people on the main track.
  2. Pull the lever, diverting the trolley onto the side track where it will kill one person.

Which is the more ethical option? Or, more simply: What is the right thing to do?”

While the formal trolley problem can seem a bit dour, the Trolley Problem MEME turns it on its head by rejecting the duality of the question and positing any number of humorous or absurd problems and outcomes.

The humor – for me, anyway – rests on the idea of subverting the question itself, rather than tying yourself in knots trying to come up with the “right” answer.

Similarly, you get very different outcomes if you ask “How do we make pens work in space?” versus “How do I most conveniently write in space?”

It’s important to take a moment and analyze the questions people ask you…and wonder what assumptions are embedded within.

Which brings us to the next force pushing us towards complexity:

The fear of simplicity.

It’s possible we lean on “complex” solutions because nothing scares us more than the complexity of the simple.

What happens if we do the “simple” thing and it doesn’t work?

The more time we spend planning, building, complicating…

The less time we spend throwing our ideas against the walls of reality and seeing what sticks.

The feedback we get is far too often, “Hey – you don’t even understand the basics.”

No point in making an elaborate diet plan involving limiting specific types of macros…

If you can’t simply measure what you eat.

Or control your food choices.

And so on.

The simple is more complex than we give it credit for…

And our retreat to complexity is based more often on fear than it is on sophistication.

So, ask yourself:

Am I sending pens to space?
