
Rationality: From AI to Zombies

  Preface

  You hold in your hands a compilation of two years of daily blog posts. In retrospect, I look back on that project and see a large number of things I did completely wrong. I’m fine with that. Looking back and not seeing a huge number of things I did wrong would mean that neither my writing nor my understanding had improved since 2009. Oops is the sound we make when we improve our beliefs and strategies; so to look back at a time and not see anything you did wrong means that you haven’t learned anything or changed your mind since then.

  It was a mistake that I didn’t write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote them with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.

  In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing, which was that I didn’t realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn’t realize that part was the priority; and regarding this I can only say “Oops” and “Duh.”

  Yes, sometimes those big issues really are big and really are important; but that doesn’t change the basic truth that to master skills you need to practice them and it’s harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

  A third huge mistake I made was to focus too much on rational belief, too little on rational action.

  The fourth-largest mistake I made was organizing the content I was presenting in the sequences poorly. In particular, I should have created a wiki much earlier, and made it easier to read the posts in sequence.

  That mistake at least is correctable. In the present work Rob Bensinger has reordered the posts and reorganized them as much as he can without trying to rewrite all the actual material (though he’s rewritten a bit of it).

  My fifth huge mistake was that I—as I saw it—tried to speak plainly about the stupidity of what appeared to me to be stupid ideas. I did try to avoid the fallacy known as Bulverism, which is where you open your discussion by talking about how stupid people are for believing something; I would always discuss the issue first, and only afterwards say, “And so this is stupid.” But in 2009 it was an open question in my mind whether it might be important to have some people around who expressed contempt for homeopathy. I thought, and still do think, that there is an unfortunate problem wherein treating ideas courteously is processed by many people on some level as “Nothing bad will happen to me if I say I believe this; I won’t lose status if I say I believe in homeopathy,” and that derisive laughter by comedians can help people wake up from the dream.

  Today I would write more courteously, I think. The discourtesy did serve a function, and I think there were people who were helped by reading it; but I now take more seriously the risk of building communities where the normal and expected reaction to low-status outsider views is open mockery and contempt.

  Despite my mistake, I am happy to say that my readership has so far been amazingly good about not using my rhetoric as an excuse to bully or belittle others. (I want to single out Scott Alexander in particular here, who is a nicer person than I am and an increasingly amazing writer on these topics, and may deserve part of the credit for making the culture of Less Wrong a healthy one.)

  To be able to look backwards and say that you’ve “failed” implies that you had goals. So what was it that I was trying to do?

  There is a certain valuable way of thinking that is not yet taught in schools in this present day. It is not taught systematically at all; it is just absorbed by people who grow up reading books like Surely You’re Joking, Mr. Feynman, or who have an unusually great teacher in high school.

  Most famously, this certain way of thinking has to do with science, and with the experimental method. The part of science where you go out and look at the universe instead of just making things up. The part where you say “Oops” and give up on a bad theory when the experiments don’t support it.

  But this certain way of thinking extends beyond that. It is deeper and more universal than a pair of goggles you put on when you enter a laboratory and take off when you leave. It applies to daily life, though this part is subtler and more difficult. But if you can’t say “Oops” and give up when it looks like something isn’t working, you have no choice but to keep shooting yourself in the foot. You have to keep reloading the shotgun and you have to keep pulling the trigger. You know people like this. And somewhere, someplace in your life you’d rather not think about, you are one of those people. It would be nice if there were a certain way of thinking that could help us stop doing that.

  In spite of how large my mistakes were, those two years of blog posting appeared to help a surprising number of people a surprising amount. It didn’t work reliably, but it worked sometimes.

  In modern society so little is taught of the skills of rational belief and decision-making, so little of the mathematics and sciences underlying them . . . that it turns out that just reading through a massive brain-dump full of problems in philosophy and science can, yes, be surprisingly good for you. Walking through all of that, from a dozen different angles, can sometimes convey a glimpse of the central rhythm.

  Because it is all, in the end, one thing. I talked about big important distant problems and neglected immediate life, but the laws governing them aren’t actually different. There are huge gaps in which parts I focused on, and I picked all the wrong examples; but it is all in the end one thing. I am proud to look back and say that, even after all the mistakes I made, and all the other times I said “Oops” . . .

  Even five years later, it still appears to me that this is better than nothing.

  —Eliezer Yudkowsky,

  February 2015

  Biases: An Introduction

  by Rob Bensinger

  It’s not a secret. For some reason, though, it rarely comes up in conversation, and few people are asking what we should do about it. It’s a pattern, hidden unseen behind all our triumphs and failures, unseen behind our eyes. What is it?

  Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls. Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.

  This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.

  On the other hand, suppose that the white balls are heavier, and sink to the bottom of the urn. Then your sample may be unrepresentative in a consistent direction.

  That sort of error is called “statistical bias.” When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
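
  To make the contrast concrete, here is a minimal simulation sketch in Python. The urn’s composition comes from the example above; the sampling weights that stand in for the sinking white balls are invented purely for illustration, and balls are drawn with replacement for simplicity.

```python
import random

URN = ["red"] * 30 + ["white"] * 70  # the urn from the example above

def average_estimate(white_weight, sample_size=10, trials=10_000):
    """Average estimate of the urn's red-ball count over many repeated samples.

    white_weight < 1 is a made-up mechanism standing in for "the heavier
    white balls sink out of easy reach"; 1.0 means fair sampling.
    """
    weights = [1.0 if ball == "red" else white_weight for ball in URN]
    total = 0.0
    for _ in range(trials):
        sample = random.choices(URN, weights=weights, k=sample_size)
        total += sample.count("red") / sample_size * len(URN)
    return total / trials

print(average_estimate(white_weight=1.0))  # fair draws: noisy, but about 30 on average
print(average_estimate(white_weight=0.5))  # biased draws: about 46, however many trials we run
```

  The first estimate wobbles from run to run but is centered on the true count of thirty; the second stays high no matter how many draws we make, which is the sense in which more data can fail to help a biased method.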

  If you’re used to holding knowledge and inquiry in high esteem, this is a scary prospect. If we want to be sure that learning more will help us, rather than making us worse off than we were before, we need to discover and correct for biases in our data.

  The idea of cognitive bias in psychology works in an analogous way. A cognitive bias is a systematic error in how we think, as opposed to a random error or one that’s merely caused by our ignorance. Whereas statistical bias skews a sample so that it less closely resembles a larger population, cognitive biases skew our beliefs so that they less accurately represent the facts, and they skew our decision-making so that it less reliably achieves our goals.

  Maybe you have an optimism bias, and you find out that the red balls can be used to treat a rare tropical disease besetting your brother. You may then overestimate how many red balls the urn contains because you wish the balls were mostly red. Here, your sample isn’t what’s biased. You’re what’s biased.

  Now that we’re talking about biased people, however, we have to be careful. Usually, when we call individuals or groups “biased,” we do it to chastise them for being unfair or partial. Cognitive bias is a different beast altogether. Cognitive biases are a basic part of how humans in general think, not the sort of defect we could blame on a terrible upbringing or a rotten personality.1

  A cognitive bias is a systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness). Like statistical biases, cognitive biases can distort our view of reality, they can’t always be fixed by just gathering more data, and their effects can add up over time. But when the miscalibrated measuring instrument you’re trying to fix is you, debiasing is a unique challenge.

  Still, this is an obvious place to start. For if you can’t trust your brain, how can you trust anything else?

  It would be useful to have a name for this project of overcoming cognitive bias, and of overcoming all species of error where our minds can come to undermine themselves.

  We could call this project whatever we’d like. For the moment, though, I suppose “rationality” is as good a name as any.

  Rational Feelings

  In a Hollywood movie, being “rational” usually means that you’re a stern, hyperintellectual stoic. Think Spock from Star Trek, who “rationally” suppresses his emotions, “rationally” refuses to rely on intuitions or impulses, and is easily dumbfounded and outmaneuvered upon encountering an erratic or “irrational” opponent.2

  There’s a completely different notion of “rationality” studied by mathematicians, psychologists, and social scientists. Roughly, it’s the idea of doing the best you can with what you’ve got. A rational person, no matter how out of their depth they are, forms the best beliefs they can with the evidence they’ve got. A rational person, no matter how terrible a situation they’re stuck in, makes the best choices they can to improve their odds of success.

  Real-world rationality isn’t about ignoring your emotions and intuitions. For a human, rationality often means becoming more self-aware about your feelings, so you can factor them into your decisions.

  Rationality can even be about knowing when not to overthink things. When selecting a poster to put on their wall, or predicting the outcome of a basketball game, experimental subjects have been found to perform worse if they carefully analyzed their reasons.3,4 There are some problems where conscious deliberation serves us better, and others where snap judgments serve us better.

  Psychologists who work on dual process theories distinguish the brain’s “System 1” processes (fast, implicit, associative, automatic cognition) from its “System 2” processes (slow, explicit, intellectual, controlled cognition).5 The stereotype is for rationalists to rely entirely on System 2, disregarding their feelings and impulses. Looking past the stereotype, someone who is actually being rational—actually achieving their goals, actually mitigating the harm from their cognitive biases—would rely heavily on System 1 habits and intuitions where they’re reliable.

  Unfortunately, System 1 on its own seems to be a terrible guide to “when should I trust System 1?” Our untrained intuitions don’t tell us when we ought to stop relying on them. Being biased and being unbiased feel the same.6

  On the other hand, as behavioral economist Dan Ariely notes: we’re predictably irrational. We screw up in the same ways, again and again, systematically.

  If we can’t use our gut to figure out when we’re succumbing to a cognitive bias, we may still be able to use the sciences of mind.

  The Many Faces of Bias

  To solve problems, our brains have evolved to employ cognitive heuristics—rough shortcuts that get the right answer often, but not all the time. Cognitive biases arise when the corners cut by these heuristics result in a relatively consistent and discrete mistake.

  The representativeness heuristic, for example, is our tendency to assess phenomena by how representative they seem of various categories. This can lead to biases like the conjunction fallacy. Tversky and Kahneman found that experimental subjects considered it less likely that a strong tennis player would “lose the first set” than that he would “lose the first set but win the match.”7 Making a comeback seems more typical of a strong player, so we overestimate the probability of this complicated-but-sensible-sounding narrative compared to the probability of a strictly simpler scenario.
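
  The underlying rule is simple even when our intuitions resist it: a conjunction can never be more probable than either of its parts. A toy calculation in Python, with invented numbers rather than anything from the study, shows why:

```python
# Invented probabilities for a strong tennis player; any choice obeys the same inequality.
p_lose_first_set = 0.2               # assumed chance of dropping the first set
p_win_match_given_lost_set = 0.7     # assumed chance of a comeback after that

p_lose_set_and_win_match = p_lose_first_set * p_win_match_given_lost_set

assert p_lose_set_and_win_match <= p_lose_first_set
print(p_lose_first_set, p_lose_set_and_win_match)  # 0.2 versus 0.14
```

  However vivid the comeback narrative, the conjunction inherits the improbability of every condition it contains.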

  The representativeness heuristic can also contribute to base rate neglect, where we ground our judgments in how intuitively “normal” a combination of attributes is, neglecting how common each attribute is in the population at large.8 Is it more likely that Steve is a shy librarian, or that he’s a shy salesperson? Most people answer this kind of question by thinking about whether “shy” matches their stereotypes of those professions. They fail to take into consideration how much more common salespeople are than librarians—seventy-five times as common, in the United States.9
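
  A rough Bayes’-rule sketch in Python shows how much work the base rate does. The 75-to-1 ratio of salespeople to librarians comes from the text above; the shyness rates are invented solely to give the stereotype a generous benefit of the doubt.

```python
librarians_per_salesperson = 1 / 75   # base rate from the text

p_shy_given_librarian = 0.6       # assumed: librarians are usually shy
p_shy_given_salesperson = 0.1     # assumed: salespeople rarely are

# Unnormalized weight of each hypothesis once we learn Steve is shy:
w_librarian = librarians_per_salesperson * p_shy_given_librarian
w_salesperson = 1.0 * p_shy_given_salesperson

p_librarian = w_librarian / (w_librarian + w_salesperson)
print(round(p_librarian, 3))  # ~0.074: a shy Steve is still probably a salesperson
```

  Even granting the stereotype a six-to-one edge in shyness, the base rate swamps it.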

  Other examples of biases include duration neglect (evaluating experiences without regard to how long they lasted), the sunk cost fallacy (feeling committed to things you’ve spent resources on in the past, when you should be cutting your losses and moving on), and confirmation bias (giving more weight to evidence that confirms what we already believe).10,11

  Knowing about a bias, however, is rarely enough to protect you from it. In a study of bias blindness, experimental subjects predicted that if they learned a painting was the work of a famous artist, they’d have a harder time neutrally assessing the quality of the painting. And, indeed, subjects who were told a painting’s author and were asked to evaluate its quality exhibited the very bias they had predicted, relative to a control group. When asked afterward, however, the very same subjects claimed that their assessments of the paintings had been objective and unaffected by the bias—in all groups!12,13

  We’re especially loath to think of our views as inaccurate compared to the views of others. Even when we correctly identify others’ biases, we have a special bias blind spot when it comes to our own flaws.14 We fail to detect any “biased-feeling thoughts” when we introspect, and so draw the conclusion that we must just be more objective than everyone else.15

  Studying biases can in fact make you more vulnerable to overconfidence and confirmation bias, as you come to see the influence of cognitive biases all around you—in everyone but yourself. And the bias blind spot, unlike many biases, is especially severe among people who are especially intelligent, thoughtful, and open-minded.16,17

  This is cause for concern.

  Still . . . it does seem like we should be able to do better. It’s known that we can reduce base rate neglect by thinking of probabilities as frequencies of objects or events. We can minimize duration neglect by directing more attention to duration and depicting it graphically.18 People vary in how strongly they exhibit different biases, so there should be a host of yet-unknown ways to influence how biased we are.

  If we want to improve, however, it’s not enough for us to pore over lists of cognitive biases. The approach to debiasing in Rationality: From AI to Zombies is to communicate a systematic understanding of why good reasoning works, and of how the brain falls short of it. To the extent this volume does its job, its approach can be compared to the one described in Serfas, who notes that “years of financially related work experience” didn’t affect people’s susceptibility to the sunk cost bias, whereas “the number of accounting courses attended” did help.

  As a consequence, it might be necessary to distinguish between experience and expertise, with expertise meaning “the development of a schematic principle that involves conceptual understanding of the problem,” which in turn enables the decision maker to recognize particular biases. However, using expertise as countermeasure requires more than just being familiar with the situational content or being an expert in a particular domain. It requires that one fully understand the underlying rationale of the respective bias, is able to spot it in the particular setting, and also has the appropriate tools at hand to counteract the bias.19

  The goal of this book is to lay the groundwork for creating rationality “expertise.” That means acquiring a deep understanding of the structure of a very general problem: human bias, self-deception, and the thousand paths by which sophisticated thought can defeat itself.

  A Word About This Text

  Rationality: From AI to Zombies began its life as a series of essays by Eliezer Yudkowsky, published between 2006 and 2009 on the economics blog Overcoming Bias and its spin-off community blog Less Wrong. I’ve worked with Yudkowsky for the last year at the Machine Intelligence Research Institute (MIRI), a nonprofit he founded in 2000 to study the theoretical requirements for smarter-than-human artificial intelligence (AI).

  Reading his blog posts got me interested in his work. He impressed me with his ability to concisely communicate insights it had taken me years of studying analytic philosophy to internalize. In seeking to reconcile science’s anarchic and skeptical spirit with a rigorous and systematic approach to inquiry, Yudkowsky tries not just to refute but to understand the many false steps and blind alleys bad philosophy (and bad lack-of-philosophy) can produce. My hope in helping organize these essays into a book is to make it easier to dive into them, and easier to appreciate them as a coherent whole.

  The resultant rationality primer is frequently personal and irreverent—drawing, for example, from Yudkowsky’s experiences with his Orthodox Jewish mother (a psychiatrist) and father (a physicist), and from conversations on chat rooms and mailing lists. Readers who are familiar with Yudkowsky from Harry Potter and the Methods of Rationality, his science-oriented take-off of J.K. Rowling’s Harry Potter books, will recognize the same irreverent iconoclasm, and many of the same core concepts.