Daniel Kahneman is a celebrated psychologist, economist, Nobel Prize winner, and author of the much-lauded Thinking, Fast and Slow and his latest, Noise: A Flaw in Human Judgment.
What We Discuss with Daniel Kahneman:
- Why we don’t always produce the same results when faced with the same facts on two different occasions.
- How noise — in this context, variability in judgments that should be identical — influences our choices.
- How the detrimental effects of noise in medicine, law, economic forecasting, forensic science, bail, child protection, strategy, performance reviews, and personnel selection can ruin (and even end) lives.
- How to tell the difference between noise and good old-fashioned bias.
- How we can reduce the role of noise and bias in our lives to make our best choices.
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Making decisions is one of the hardest things required of us as human beings. Add to this the elements of noise (variability in judgments that should be identical) and bias (judgment errors that are consistent), and it’s a wonder we ever make good decisions.
On this episode, we talk to famed psychologist, economist, Nobel Prize winner, and author of Thinking, Fast and Slow and Noise: A Flaw in Human Judgment Daniel Kahneman about how we can identify when noise and bias interfere in our decisions and reduce their role in the process so we can make our best possible choices. Listen, learn, and enjoy!
Please Scroll Down for Featured Resources and Transcript!
Please note that some of the links on this page (books, movies, music, etc.) lead to affiliate programs for which The Jordan Harbinger Show receives compensation. It’s just one of the ways we keep the lights on around here. Thank you for your support!
Sign up for Six-Minute Networking — our free networking and relationship development mini course — at jordanharbinger.com/course!
This Episode Is Sponsored By:
LifeLock gives you all-in-one protection for your identity, devices, and online privacy; there’s a victim every three seconds, so don’t become one of them. Save up to 25 percent off your first year of LifeLock at lifelock.com/jordan!
Blue Moon Belgian White is refreshing for the palate of the sophisticated beer aficionado without succumbing to snobbery. Not sure if it’s available in your area? get.bluemoonbeer.com allows you to find local stores that carry Blue Moon or have it delivered in under three hours!
Have you ever thought about the fact that where you choose to live directly affects the “you” you become? Apartments.com has the most listings, which means you have the most apartments, townhomes, condos, and houses to choose from. Change your apartment, change the world at Apartments.com here!
Simple Mobile was founded on the idea that there is a better way to do wireless. Unlimited, no-contract plans starting at $25. Just text BYOP to 611611 to see if your phone is compatible, and visit simplemobile.com to find out more!
The Good is a completely plant-based, non-GMO skincare product that has 96 percent of its users reporting healthier and more energetic-looking skin. Get 20 percent off your first purchase of The Good at calderalab.com/jordan, or use code JORDAN at checkout!
Miss the show we did with The 48 Laws of Power author Robert Greene? Catch up here with episode 117: What You Need to Know about the Laws of Human Nature!
Thanks, Daniel Kahneman!
Click here to let Jordan know about your number one takeaway from this episode!
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at firstname.lastname@example.org.
Resources from This Episode:
- Noise: A Flaw in Human Judgment by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein
- Thinking, Fast and Slow by Daniel Kahneman
- Daniel Kahneman | Princeton University
- Daniel Kahneman | The Nobel Prize
- Daniel Kahneman: The Riddle of Experience vs. Memory | TED 2010
- Post-Olympic Depression Haunts the Games’ Winners and Losers | The Atlantic
- Brian Keating | Losing the Nobel Prize | Jordan Harbinger
- Nobel Prize Winner Daniel Kahneman: Lessons from Hitler’s SS and the Danger In Trusting Your Gut | Forbes
- Noise: How to Overcome the High, Hidden Cost of Inconsistent Decision Making | HBR
- How to Unleash the Wisdom of Crowds | The Conversation
- Can ‘Decision Hygiene’ Help Fight ‘Noise’ Behind Bad Judgment in the Legal Profession? | Mass LOMAP
- How the Wisdom of Crowds, and of the Crowd Within, Are Affected by Expertise | Cognitive Research: Principles and Implications
- Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women | Reuters
- Terrorism Risk Coverage in the Post-9/11 Era: A Comparison of New Public-Private Partnerships in France, Germany, and the US | SpringerLink
- What Happens When Self-Driving Cars Kill People? | Forbes
- Algorithms Should’ve Made Courts More Fair. What Went Wrong? | Wired
- Kasparov vs. Deep Blue: The Match That Changed History | Chess.com
- Do I Really Have to Be Actively Open-Minded? | Psychology Today
- Superforecasting: The Art and Science of Prediction by Philip Tetlock and Dan Gardner
- The Peak-End Rule, Explained | The Decision Lab
- Annie Duke | How to Make Decisions Like a Poker Champ | Jordan Harbinger
- Herding, Social Influence, and Economic Decision-Making: Socio-Psychological and Neuroscientific Analyses | Philosophical Transactions of the Royal Society
- Does America Have a Problem With ‘Bullshit Receptivity’? | Psychology Today
- The Bias Snowball and the Bias Cascade Effects: Two Distinct Biases That May Impact Forensic Decision Making | Journal of Forensic Sciences
Daniel Kahneman | When Noise Destroys Our Best of Choices (Episode 518)
Jordan Harbinger: Coming up next on The Jordan Harbinger Show.
[00:00:03] Daniel Kahneman: It's not a matter of judgment. So in matters of judgment, some level of noise is always expected. Otherwise, it wouldn't be judgment. So the big thing here is that there's a lot more noise than people expect. So, you know, our motto in summarizing what we'd learned was: wherever there is judgment, there is noise, and there's a lot more of it than you think.
[00:00:32] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people. We have in-depth conversations with people at the top of their game, astronauts and entrepreneurs, spies and psychologists, and even the occasional journalist turned poker champion, war correspondent, or neuroscientist. Each episode turns our guests' wisdom into practical advice that you can use to build a deeper understanding of how the world works and become a better critical thinker.
[00:01:00] If you're new to the show, or you're looking for a handy way to tell your friends about it, we have episode starter packs. These are collections of your favorite episodes organized by popular topics. These will help new listeners get a taste of everything we do here on the show. Just visit jordanharbinger.com/start to get started or to help somebody else get started with us. Of course, I always appreciate it when you share the show; that's why I ask you to do it every time you listen.
[00:01:23] Today, Daniel Kahneman, Dr. Daniel Kahneman. He is like the Sigmund Freud of our generation, except he's not wrong about almost everything, so far anyway. In 2015, The Economist listed him as the seventh most influential economist in the world, which is amazing because he's not actually an economist. He won the Nobel Prize in economics as well, even though he's a psychologist. I can only assume that they just really, really want him to become an economist and count him as one of their own, and I can't blame them because the man is brilliant. He's also a pretty cool cat, as you'll hear today on the show. Today, we discuss decision-making and why us humans don't always produce the same results when faced with the same facts on two different occasions, which is kind of the entire point of science, right? Reproducing the exact same results. These differences in outcomes are called noise, and they're the subject of Dr. Kahneman's latest book, also called Noise. It seems that whoever titled the book had the same level of creativity as the person who named this podcast.
[00:02:18] Noise in most instances might seem minimal or trivial, but once you multiply that noise by every single decision you make during the day, and all the decisions from all of the other humans you're working with or around, you can end up with gigantic sets of problems. For example, noise comes into play when we're making decisions like who should go to prison and for how long, who needs surgery and what kind, what kind of medication to take and how much. You get the idea. People's lives are actually at stake here, so it behooves us as a species to try and figure out what causes noise and how to minimize it, which is what we'll explore here today. It's hard to discuss noise without talking about bias as well, and people recognize bias in others much better than they recognize it in themselves. For example, I have no bias whatsoever, but to all of you, it's really shocking. This is called the bias blind spot: acknowledging bias in others, but not in yourself. And we'll go into how we can shine a light on those blind spots so we can make better decisions. There's a whole lot in this one. Dr. Kahneman is next level. It is really an honor to have him here on the show.
[00:03:17] And if you're wondering how I manage to book all these amazing folks, Nobel Prize winners, authors, thinkers, performers, creators every week, it's because of the network. I'm teaching you how to build your network for free. Whether you're using it for personal reasons or business reasons, it'll help you. I promise you, it's been a game changer for me and for my business. The course is free; jordanharbinger.com/course is where you can find it. And most of the guests on the show subscribe to the course. They contribute to the course in some way. Come join us, you'll be in smart company where you belong. Now, let's get to it with Daniel Kahneman.
[00:03:51] When you won the Nobel Prize, I assume you were happy and satisfied, but as somebody who studies psychology, and in fact won a Nobel Prize for their efforts, I'm sure you're familiar with this concept of Olympic gold medalists who get depressed after winning and how the expectation of how they would feel never quite matches the reality of how they do feel. And I'm wondering if you had that same experience with the Nobel Prize. Like, were you standing there in real time and thinking, "This is great, but I'm probably going to be subject to that same phenomenon"?
[00:04:20] Daniel Kahneman: No, I did not have that worry. I think the gold medalist is a very different thing, and it was actually better than expected. So there were many things about getting the Nobel that were a big surprise, like finding out that other people who know me are delighted that I got it. So you're spreading joy all around. That was a big pleasure.
[00:04:43] Jordan Harbinger: Do you think it's different from winning a gold medal because those people are working for the medal and you were maybe just doing the work and the prize was ancillary to the—?
[00:04:52] Daniel Kahneman: That is certainly part of it. You keep competing with yourself in sports. That wasn't true for me. I don't think that's true for most scientists.
[00:05:03] Jordan Harbinger: Competing with yourself, do you mean, so when you look at science, are you thinking, "I'm just researching, I'm trying to uncover truth," whereas an athlete is looking at other people and also comparing themselves to other people and comparing themselves to themselves — that's not happening with science?
[00:05:17] Daniel Kahneman: Much less, I think. It is competitive, but the essence of it is not to win. So the essence of it is something else.
[00:05:25] Jordan Harbinger: That's true. I guess if you — well, maybe there's less coming in second place or third place in science.
[00:05:31] Daniel Kahneman: That's right.
[00:05:32] Jordan Harbinger: And also if you do, you can just keep going, whereas in the Olympics, your career might be over at that point because you're too old or something.
[00:05:38] Daniel Kahneman: Yeah. There are really people who work without thinking about their ranking. And occasionally, you know, you get a prize and that's wonderful, but this is not what you were working for. And you don't live with it for all that long. You forget, even for me.
[00:05:55] Jordan Harbinger: So, yeah, I suppose you don't wake up every morning and look at your Nobel Prize certificate or the statuette or whatever and think — you're looking at the work, not at the medal.
[00:06:04] Daniel Kahneman: Yeah. I don't think that people think that much about their life, but I certainly don't. And you know, it's been almost 20 years and, you know, it's really faded. It's not that I think about it often.
[00:06:17] Jordan Harbinger: I guess it's more people like me that talk about it versus you.
[00:06:22] Daniel Kahneman: I think that's true. I think, you know, for some reason, and I don't understand why, the prize is really much overrated. I mean, there are many prizes that are at least as important or more significant for a scientist, but this one is the one that the public knows about, and it becomes part of your identity when you get it. So other people think about you in that way, but of course I don't.
[00:06:46] Jordan Harbinger: That's interesting. I don't think you hear too many people say the Nobel Prize is overrated. I think that's a good clip right there. I've heard that the Nobel Prize can ruin your career. You mentioned this in a previous interview; you said, "Winning a Nobel Prize could actually be bad for you if you are a scientist." How can this be possible? Obviously, that didn't happen with you?
[00:07:06] Daniel Kahneman: Well, I mean, it can be bad for you if it happens too early. It can be bad for you if you get distracted. There are lots of temptations, lots of opportunities, lots of things, lots of calls on your time. And there are people whose careers don't progress very much after that. I was quite old when I got it, I was 68, so I wasn't in big danger, and it really didn't damage me in any way at all, I don't think.
[00:07:34] Jordan Harbinger: Yeah, I suppose, at that point, you can retire if you want to and just say, "All right, well—" but you didn't do that, obviously. I mean, you're doing a full book tour at this point. You have to love the work if, at this point in your life, you're still making the rounds on shows like mine. You know, it's got to be because you love the work and not because you're looking for recognition after winning a Nobel Prize.
[00:07:54] Daniel Kahneman: I think that's true. It's the thing that I most enjoy doing. So that's what I do.
[00:07:59] Jordan Harbinger: I know you've probably been telling this story a lot lately, but it seems to be something that interviewers love, myself included. I know you grew up in part in Nazi-occupied Paris, and I'm wondering how that experience may or may not have gotten you interested in the complexities of people in human nature.
[00:08:19] Daniel Kahneman: The story is that in 1941, there was a curfew for Jews, and the Jews had to wear a yellow star. And I was seven years old, and I was playing with a friend, and I forgot the time and I missed the curfew. So I put my sweater inside out and I walked home. As I was approaching home, alone on the street, there was a German soldier walking toward me. He was wearing the black uniform, which meant he was SS. I knew enough that that was the worst of the worst. And then, we approached each other. He beckoned me, picked me up. I was quite afraid that he would see inside my sweater, but he didn't, and he hugged me real tight. And then he put me down, took out his wallet, showed me a picture of a little boy, and gave me some money. And then, he went his way and I went home. And this was the lesson, you know, in the complexity of human nature. I mean, it's the irony. I think I could see how complicated it was and the irony of it all. That here is this man, and he's not all bad, and he has a boy who was very much like me, and at the same time, he probably would have killed me.
[00:09:38] Jordan Harbinger: It seems like that would make you, or make people, myself included... that's kind of a scary thought, because we'd like to think that we can evaluate people based on some impression or even as a collective group. But when you look at somebody who was in the SS, which is something you had to work to get into, and those were the people who ran the concentration camps and things like that, and orchestrated the extermination of millions of Jews that you and I are both related to, you know, that's even more scary because it makes them these complex characters, not just bad people who were sociopaths all around. This is a guy who loved his family and walked around and gave you a hug and money out of his own wallet. But if he'd seen the star, he would have — I mean, what happens if you miss curfew, even if you're seven years old, in Nazi-occupied Paris? Do you know?
[00:10:25] Daniel Kahneman: I don't know, but probably not all that much. But it wasn't yet the extermination phase; that started a bit later.
[00:10:34] Jordan Harbinger: Yeah. I suppose nothing good is probably the answer to that.
[00:10:37] Daniel Kahneman: Nothing good.
[00:10:37] Jordan Harbinger: Yeah. You spent the rest of the war moving around and hiding, and I know you'd mentioned you'd lived in like a converted chicken coop — did I understand that correctly?
[00:10:46] Daniel Kahneman: Yes. That's where I was living when we were liberated. We were in a converted chicken coop in a small village.
[00:10:53] Jordan Harbinger: Does every single interviewer try to read into the story and make it mean something about your current field of study even if it doesn't?
[00:11:00] Daniel Kahneman: I mean, you know, many people ask me whether my trials influenced my life, and I don't find it. I mean, I was in Europe during the war and during the Holocaust, but, you know, I survived. I was never really hungry. I was never tortured, you know, so I was very lucky, and I don't think it influenced my life a lot thereafter. And the complexity of life was something that I think I would have come to in any event, but certainly those experiences added to it.
[00:11:31] Jordan Harbinger: It just seems, I think, hard for people like me to believe that living in a chicken coop and moving around and losing some of your friends and family and parents and things like that during the war couldn't have influenced your life. I think that's why people keep, maybe, asking about this. Not just because it's sort of poetic in some way that this happened, but also, how could that not be influ— I mean, you lived in a chicken coop. Nobody does that, right? Like, that's not a normal thing, for kids to grow up in a chicken coop that's converted because you're being hunted. But I guess, were you young enough that that didn't affect you? Do you think that is why?
[00:12:07] Daniel Kahneman: And, you know, even the people who came out of the camps — I mean, the striking thing, you know, when you live in Israel: there were many survivors of camps with numbers tattooed on their arms who clearly, you know, suffered horrors, and they were living perfectly normal lives. People really overcome. And I had, relatively, not all that much to overcome.
[00:12:29] Jordan Harbinger: We've discussed your work a lot on this show in the past. You weren't there. But you know, I read Thinking, Fast and Slow, and we talked about system one and system two thinking. Today, I'd like to talk about noise, which is the subject of your new book, which I read and, of course, loved. Can you tell us briefly what you mean by noise? You have a shooting range analogy that I think is really apt.
[00:12:50] Daniel Kahneman: Well, I take a measurement analogy actually. Suppose you are trying to measure a line and you measure it with a very fine ruler and you measure it repeatedly. Now, the first thing that's going to happen is you're not going to get exactly the same number every time. So if the scale is fine enough, there's going to be variation. That variation is noise. You are measuring the same object. The measurements in principle should be identical, but they vary. That's noise. And it's different from bias. Bias is if you are consistently and fairly consistently overestimating the length of the line, that's a bias. If you're underestimating, that's the bias, but you could have no bias. You could on average be just accurate. And yet there's substantial variation. That variation is noise, and it's the same with respect to judgment.
[00:13:45] So when you have different judges looking at the same crime, or imagine different judges looking at the same defendant, in principle, we would want them to set the same sentence. And when they don't set the same sentence, it's a bit like the measurement of the line that doesn't come out the same way, except that in the case of sentences, the differences between judges are huge, and that we know. So there is a lot of noise in human judgment, and that's what the book is about.
[00:14:18] Jordan Harbinger: So to recap, there's a bathroom scale example, which I think is also from the book, to see the difference between bias and noise. If, on average, the readings the scale gives are always too high or always too low compared to, say, a scale you know is accurate, that scale is biased. But if you step on the scale once at eight o'clock in the morning and it gives you one weight, and then you step on it again at 8:01 and then 8:02 and you get different readings, that is noise, especially if some are above a hundred pounds and some are below a hundred pounds, but you know you weigh a hundred pounds. That's what's noisy. And it seems like a scale could be both biased and also noisy.
[00:14:58] Daniel Kahneman: Most scales are both biased and noisy. My bathroom scale is certainly noisy, and I suppose it's biased, because that's true of every scale. And judgments are, in general, both biased and noisy. And the motivation for the book was the realization that actually, in terms of overall contribution to error, noise may often be more important than bias. And that was something that was sort of new to me. It was a new thought, and it's certainly something that is widely neglected. It's not a common thought. People think of error as bias, but I now think that noise is at least as important as bias.
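The scale analogy can be made concrete with a small simulation. This sketch is not from the book; the scale's bias and noise levels are invented parameters, but it shows how the two kinds of error are measured: bias is the average error across readings, noise is the scatter around that average.

```python
import random

random.seed(42)

TRUE_WEIGHT = 100.0  # the person really weighs 100 lb
BIAS = 1.5           # assumed: this scale reads 1.5 lb heavy on average
NOISE_SD = 0.8       # assumed: reading-to-reading variability

# Weigh the same person on the same scale many times.
readings = [TRUE_WEIGHT + BIAS + random.gauss(0, NOISE_SD) for _ in range(10_000)]

errors = [r - TRUE_WEIGHT for r in readings]
mean_error = sum(errors) / len(errors)  # estimates the bias
variance = sum((e - mean_error) ** 2 for e in errors) / len(errors)
noise = variance ** 0.5                 # estimates the noise (std dev of readings)

print(f"bias  ~ {mean_error:.2f} lb")   # close to 1.5: the systematic error
print(f"noise ~ {noise:.2f} lb")        # close to 0.8: scatter around the biased average
```

Note that the two numbers are independent: you could set `BIAS = 0` and the readings would still disagree with each other, which is Kahneman's point that an unbiased instrument can still be noisy.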
[00:15:43] Jordan Harbinger: Many other studies — and this is what you were touching on before — demonstrate noise in professional judgments. You mentioned judges; radiologists, you mentioned in the book as well, disagree on their readings of images, and cardiologists on their surgery decisions. So this is not just, "Am I three pounds heavier than I was yesterday, and is that accurate or is it noise?" These are economic forecasts. These are fingerprint experts disagreeing about whether there's a match on a weapon, judges sentencing. And the differences aren't small, either. One of the examples you gave in the book was two sentences for, I think it was check fraud: one was 30 days in jail, and the other sentence was like 16 years. Exact same circumstances, exact same crime. These are life-changing, life-ruining decisions based on noise.
[00:16:37] Daniel Kahneman: In those cases, you're talking about a system like a justice system, and you would want the justice system to speak in one voice, regardless of which particular judge speaks for it. Every judge speaks for the system. In an insurance company, every underwriter speaks for the company when setting premiums. In the emergency room, every doctor speaks for the hospital. There are many situations, many systems, in which you expect people in the same role to be basically interchangeable, and you want them to be uniform in the judgments they give. And it's economically significant, it's socially significant, that actually people don't agree, and they agree much less than they expect to.
[00:17:21] Jordan Harbinger: Is there noise in every human judgment? Is this something specific to when we're looking at objects or trying to make a determination, or is it pretty much ubiquitous?
[00:17:32] Daniel Kahneman: Well, you know, when we say that something is a matter of judgment, we allow for disagreement; we expect disagreement. So if it's a calculation, then it has one answer and it's not a matter of judgment. So in matters of judgment, some level of noise is always expected. Otherwise, it wouldn't be judgment. So the big thing here is that there's a lot more noise than people expect. So, you know, our motto in summarizing what we'd learned was: wherever there is judgment, there is noise, and there's a lot more of it than you think.
[00:18:09] Jordan Harbinger: Which is a little terrifying, especially when we look at the causes of the noise. One particularly troubling example was that judges — and I remember studying this in law school — may be more lenient after they've eaten, and being hungry causes harsher sentencing. Or they're more lenient in cool weather, which you mentioned in the book as well, and when their football team wins the weekend before, they're more lenient. I mean, that is scary, because it's not like, oh, they looked at the background and they misjudged this person's ability to do harm to society. They're just hungry, or they're in a bad mood, or they were late and somebody cut them off in traffic, and now you're in jail for an extra decade. That's scary.
[00:18:50] Daniel Kahneman: I think it should be scary. And it's not only that, you know. You get one judge, and the defendant reminds that judge of his daughter, and he wouldn't remind another judge of his daughter, or the other judge may not like his daughter. So a lot of chance. And the thing that's actually very striking when you study noise is that it's easy to think, for example, that some judges are more severe than others, more lenient than others. It's easy to imagine that judges pass different sentences when they're in a good mood or in a bad mood. But the biggest source of noise is actually that when judges look at the same case, they really don't see it in the same way. Judges have taste. They have taste in crimes. They have taste in defendants. The tastes are as different from each other as our personalities are from each other. And that is very strange, because most of the time, you know, when I look at the world, I think I see it the way it is. So I think that you see it in the same way. But actually, you and I probably don't see the world in the same way.
[00:20:01] Jordan Harbinger: If we see judges handing out some sentences that are much more harsh than others, it might look like noise, or it might look like some other sort of bias. How can we tell if something is noise or if it's racism, for example, especially when it comes to sentencing?
[00:20:18] Daniel Kahneman: Well, you can never detect noise in a single decision. Noise is a statistical observation; it applies to a set of judgments that should not vary and yet they do. That's when we have noise. Sometimes, on some occasions, when it's particularly blatant, you can see bias in a single judgment, but you will never see noise in a single judgment. And that is part of the reason that noise is generally neglected, which was part of the reason for writing this book.
[00:20:54] Jordan Harbinger: You're listening to The Jordan Harbinger Show with our guest Daniel Kahneman. We'll be right back.
[00:20:59] This episode is sponsored in part by LifeLock. Getting your COVID-19 vaccine is something to celebrate, but think before posting a picture of your vaccine card on social media, it contains or may contain personal information like your name and your birth date that can be used by cybercriminals to steal your identity. Of course, it's important to understand how cybercrime and identity theft are affecting our lives. Because every day we put our information at risk on the Internet. In an instant, a cybercriminal can harm what's yours, your finances, your credit, your reputation. That's why we use LifeLock up in this piece. LifeLock helps detect a wide range of identity threats, like your social security number for sale on the dark web. If they detect your information has potentially been compromised, they'll send you an alert. And if you do get all jacked up over here and somebody steals your identity, you got access to a dedicated restoration specialist. In other words, they make all the phone calls and send all those letters and try and straighten it all out. So you don't have to take time off of work or even think about it.
[00:21:53] Jen Harbinger: No one can prevent all identity theft or monitor all transactions at all businesses, but you can keep what's yours with LifeLock by Norton. Join now and save up to 25 percent off your first year at lifelock.com/jordan. That's lifelock.com/jordan for 25 percent off.
[00:22:08] Jordan Harbinger: This episode is also sponsored by Blue Moon. Blue Moon is on a mission to bring some brightness to your life and break up the routine, which we can all use right about now. Blue Moon has a refreshing flavor. They throw a little Valencia orange peel in there for a subtle sweetness, hints of coriander — not that I would ever notice. Blue Moon Belgian White is a one-of-a-kind beer. I've been drinking it since college. I just had a friend over from Japan, and he and I kicked it during Memorial Day weekend. We were hanging out, drinking up some Blue Moon, maybe a little bit too much, but you know, it's been a while since I've socialized with other humans. Jen's newest hobby is baking pizzas in our backyard oven, so you can guess who the guinea pig is here. Blue Moon has been the perfect pairing to wash down all the pizza that I've been eating lately. Have I told you all that I've gained a little bit of weight? Unrelated. That's all right, I've been working out. So embrace the return to your favorite bar, restaurant, or backyard with a Blue Moon to break the routine and bring a little bit of brightness back to your life. Also, be sure to try Blue Moon's latest brew, Blue Moon Light Sky, a light and refreshing wheat beer brewed with real tangerine peel for a lighter, exceptional taste and only 95 calories and 3.6 grams of carbs per 12 ounces.
[00:23:11] Jen Harbinger: Reach for a Blue Moon when you're in need of some added brightness. Get Blue Moon and Light Sky delivered by visiting get.bluemoonbeer.com/jordan to see your delivery options. That's get.bluemoonbeer.com/jordan. Blue Moon: made brighter. Celebrate responsibly. Blue Moon Brewing Company, Golden, Colorado Ale.
[00:23:28] Jordan Harbinger: Now back to Daniel Kahneman on The Jordan Harbinger Show.
[00:23:33] In the book, you discuss the wisdom of crowds. And the example, I think, is guessing the weight of a cow or an ox or something along those lines. I suppose this is a famous example, and I vaguely remember being forced to learn about this in a statistics class in college. Can you take us through this? Because this is the wisdom of crowds: on the one hand, somehow the crowd gets more accurate, which I kind of didn't see coming as somebody who looks at groups of humans and thinks, "What are you all thinking?" Collectively, somehow, we're right when it comes to guessing simple things.
[00:24:03] Daniel Kahneman: Well, it turns out that suppose you have a set of judgments of the same object and you average them. Then the more judgments there are that you average, the less noise there is. And you know, there is a statistical function: if the judgments are independent of each other, then the noise goes down with the square root of the number of observations. It's completely predictable. And if you have enough observations, noise goes down to zero. In that classic experiment by Francis Galton, there were more than a thousand people, I think, and the average guess of the weight of the ox was about two pounds off. But even if there had been a bias, when you average a thousand judgments, noise is gone. So noise is a phenomenon of individual judgments, or of judgments of small groups of people, or judgments of groups of people that are not independent of each other. When you get independent judgments and average them, noise will go away.
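The square-root relationship Kahneman describes here is easy to check numerically. This is a rough sketch, not from the book; the "true" value and the spread of individual guesses are made-up parameters standing in for Galton's ox:

```python
import random

random.seed(0)

TRUE_VALUE = 1200.0  # assumed "true weight of the ox"
JUDGE_SD = 100.0     # assumed spread of individual, independent guesses

def crowd_error_sd(n_judges: int, trials: int = 2000) -> float:
    """Std dev of the crowd-average's error, estimated over many simulated crowds."""
    errs = []
    for _ in range(trials):
        guesses = [random.gauss(TRUE_VALUE, JUDGE_SD) for _ in range(n_judges)]
        errs.append(sum(guesses) / n_judges - TRUE_VALUE)
    mean = sum(errs) / len(errs)
    return (sum((e - mean) ** 2 for e in errs) / len(errs)) ** 0.5

for n in (1, 4, 16, 100):
    # Noise shrinks like JUDGE_SD / sqrt(n): roughly 100, 50, 25, 10.
    print(n, round(crowd_error_sd(n), 1))
```

Note that this only removes noise: if every judge shared the same systematic bias, averaging any number of them would leave that bias fully intact, which is exactly the caveat in the answer above.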
[00:25:09] Jordan Harbinger: So then as a layman, it seems like noise would always cancel itself out. Like if in some cases I'm guessing too high and in other cases too low, don't those errors average out? And is that why we tolerate it in the first place?
[00:25:21] Daniel Kahneman: Well—
[00:25:22] Jordan Harbinger: Not really.
[00:25:22] Daniel Kahneman: There's a big difference. If you're looking at the same line and measuring it repeatedly, then errors do cancel out. But if an underwriter sets the premium too high for one case and too low for another case, that underwriter has made two mistakes. They don't cancel out. And the same for doctors who over-treat or under-treat: over-treating one patient and under-treating another is okay on average, but you've made two mistakes. So errors cancel out in judgments of the same object. That's where noise disappears. In judgments of different objects, variability always causes error, and the errors don't cancel out at all. It's a very common misunderstanding, by the way.
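[Editor's aside: the underwriter example can be made concrete in a few lines, with hypothetical numbers. Two quotes that are wrong in opposite directions show zero bias on average, yet every case is mispriced.]

```python
# hypothetical premiums: what each case truly warranted vs. what was quoted
true_premiums = [1000.0, 1000.0]
quoted = [1050.0, 950.0]  # one set too high, one set too low

n = len(quoted)
bias = sum(q - t for q, t in zip(quoted, true_premiums)) / n
mean_abs_error = sum(abs(q - t) for q, t in zip(quoted, true_premiums)) / n

print(bias)            # 0.0  -> "unbiased on average"
print(mean_abs_error)  # 50.0 -> but both cases were mispriced
```

The average error is zero while the average mistake is large, which is exactly the misunderstanding Kahneman flags.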
[00:26:11] Jordan Harbinger: Oh, okay, so this makes sense with a medical example, right? If the average dosage given to patients is 10 milligrams, and that turns out to be fine, they get treated with that. But if I give somebody a hundred and somebody else one, and they both die, those errors don't cancel out, especially for those two patients.
[00:26:30] Daniel Kahneman: Exactly.
[00:26:31] Jordan Harbinger: Hospital-wide, it looks like it cancels out. But for those people, I just made, like you said, two very dangerous mistakes.
[00:26:38] Daniel Kahneman: That's right.
[00:26:39] Jordan Harbinger: Yeah.
[00:26:39] Daniel Kahneman: So you can have judgements or decisions that are unbiased on average, but if they are noisy, a lot of mistakes are being made.
[00:26:48] Jordan Harbinger: Yeah. Okay. So right, if I say something's going to take one week, but it takes two, and the next time I say we need three weeks, but we only end up needing one, I'm not correct on average. I'm just wrong a lot.
[00:26:58] Daniel Kahneman: Yeah. That's exactly it.
[00:27:00] Jordan Harbinger: Okay.
[00:27:00] Daniel Kahneman: And you know, we had another example that I think is useful: if you have a company doing hiring, and half of the people doing the hiring favor men and half favor women, then on average there is no bias, but a lot of errors are being made. That is, half of the people will miss out on talented men and half of them will miss out on talented women. That doesn't cancel out.
[00:27:26] Jordan Harbinger: Decision-making processes that reduce noise improve systems and recurring decisions, but they can also improve singular decisions. This might be a little complex for people who are jogging right now and listening to this. But if we create a decision-making process that reduces noise, it's great if we're making those decisions, let's say, 10 times a day. But what about big decisions, like getting married or taking a big job? Can we create systems that make those decisions less noisy as well?
[00:27:56] Daniel Kahneman: Well, yes. The logic of this is that we know of some procedures that, if followed when you make repeated judgments or decisions, will reduce noise and, by the way, also reduce bias. So we know procedures that improve repeated judgments, and we call them collectively decision hygiene. It's a very funny term, but we mean it deliberately: this is like washing your hands. It's a set of procedures that are almost guaranteed to improve judgment on the whole when you make many of them. But now think about a judgment that you make only once, a singular decision, something that could be very important, like getting married. You should apply the same hygiene to a single decision. There is no reason to think that singular decisions are different from repeated decisions. And if you have a procedure that improves repeated decisions, the same procedure will also improve a singular decision.
[00:29:02] Jordan Harbinger: So decision hygiene can sort of be explained with regular hygiene, right? Like if I'm washing my hands regularly, I don't look at my hand and go, "Oh, I'm pretty sure I just killed some staph. I probably killed some COVID-19 virus. I definitely killed various varieties of the common cold." I just know that my hands are clean, or cleaner, because I just killed a bunch of germs with soap and water. What does that look like in terms of decision-making? What's the hand-washing version of decision-making? Are there specific, practical tips we can use to make our decisions cleaner?
[00:29:38] Daniel Kahneman: Yeah. One of them, for example, is the recommendation that when you face a decision, you break the problem up into aspects, into features, and evaluate each feature independently of the others, not making a global judgment until you're all done, until you have the information about all the features. We call that delaying intuition. Delaying the global judgment will improve your judgment. We know that when you have multiple people involved in making a decision, you want them to reach their judgments and decisions independently of each other before they discuss them. Discussion reduces independence. There is a general principle, and the principle is that the elements of a judgment should be independent of each other. It's like when you have multiple witnesses to a crime: you don't want them to talk with each other before you examine them.
[00:30:37] Jordan Harbinger: Okay. Right. So I wouldn't want to say, "Hey, look at this diagram, this looks like—" and then say a bunch of conclusions. I would say, "What does this look like to you?" without giving any conclusions and have those people come to conclusions on their own and be pretty confident in those conclusions or as confident as they can be, and then only after that say, "So this to me looks like XYZ. What did you come up with?" And make sure that they've already maybe written these down. They're not going to be subject to me influencing them at all.
[00:31:07] And I think in the book, the example you gave was fingerprint experts. This is problematic because I always assumed that when crime scene fingerprints were lifted, they were nice and clean and looked like the ones I put on paper when I do an FBI security clearance for myself or something like that. But they don't; they're partial. And then someone might even tell the FBI fingerprint expert or the police fingerprint expert, "Hey, this is the person who matches the description of the suspect. These are the fingerprints of the person who was there at the time," which might make them go, "Well, this looks like a match to me. If it's a five-foot-ten white dude who was there at the time, then yeah, it's probably him." We would want to just give them the print in a vacuum and say, "Is this them?" "Wrong, these are Abraham Lincoln's fingerprints. You don't know what you're talking about." Like, we want to be able to do that, right?
[00:31:55] Daniel Kahneman: And that applies the principle of independence. That is, you want each judgment to be made independently of the others. When you are giving people information that can bias them, you are making them less effective, and they provide less useful information. It's a very general principle.
[00:32:15] Jordan Harbinger: If it's such a general principle, why do we seem to fail at this so often? I mean, I was horrified when I heard that fingerprinting experts are often given context for the fingerprint. That seems like the last thing we should be doing when looking for an impartial opinion or an expert opinion.
[00:32:31] Daniel Kahneman: Well, you know, suppose you applied the procedures that you were describing earlier, where a decision is to be made by a group of people, and each of them makes the decision independently, and only then do they start discussing. If you follow that procedure, in the first place, it's a lot of work, because people have to do their homework individually instead of hearing each other and reaching a consensual decision. In the second place, people will discover how much noise there is, because it will turn out that their opinions differ from each other. And people actually don't like discovering how much noise there is.
[00:33:11] There is a nice story that we heard, a true story about college admissions, where a psychologist was visiting a place where college admission decisions are being made. And he noticed that two people read each essay, but the first person who reads the essay leaves a grade that the other person reading the essay sees. And the psychologist who was there said, "Look, this is not the best procedure. The best procedure would be for the first person to read the essay and write his grade on the back of the essay, so the second person wouldn't see it." And the answer was, "We used to do it that way, but we stopped because there was so much disagreement." So people don't realize that discovering disagreement is actually a good thing. They feel that when you discover disagreement, this is a bad thing. That's a fundamental mistake. Discovering noise, realizing that there is noise, is good for the organization, although it's unpleasant.
[00:34:16] Jordan Harbinger: I see, this does make a little bit of sense, right? Because if you and I are a team grading a course that we teach together, and you're giving someone a D minus and I'm saying, "No, this is a B minus. I mean, they had some errors here—" then you and I have to figure out how to reconcile that noise. We can either average the grades, or we can say, "You're wrong and I'm right, and here's why." But if I just give them a B minus and you see it, you go, "Ah, Jordan liked it. I wasn't in love with the answers. I'll give them a C minus instead of a D minus because Jordan already liked it. He must've checked some other things. Maybe he's seeing something I'm not, and I have 48 more of these things." That makes our job easier. So the incentives are actually wrong, organizationally.
[00:34:56] Daniel Kahneman: Absolutely.
[00:34:57] Jordan Harbinger: Yeah. That—
[00:34:58] Daniel Kahneman: You put that very well.
[00:34:59] Jordan Harbinger: Thank you. Yeah. It seems like a bad idea, though, if it's life and death. Great if we're grading papers and somebody passes who shouldn't — eh, it happens. Hopefully they'll learn later. But if I'm putting someone in jail for 16 years and you would've given him 30 days, now we've got a real problem on our hands. And that's a real example.
[00:35:15] Daniel Kahneman: Absolutely.
[00:35:16] Jordan Harbinger: When I'm making decisions as an individual, you mentioned that we can try and make the same decision in different settings. So it's almost like we're making a crowd, we're getting the wisdom of crowds, but we're the only crowd, we're just giving ourselves a crowd of decisions. How would that work in practice? What does that look like?
[00:35:32] Daniel Kahneman: Well, it's called the crowd within, actually, and it's not very different from the idea of "sleep on it." That is, you make a judgment at one time and you say, "Let that judgment not be final. I'll revisit it. I'll ask myself that same question tomorrow." And quite possibly you'll get a somewhat different answer tomorrow. And by the way, the average of your first answer and your second answer is usually going to be more accurate than either one of them.
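[Editor's aside: the "crowd within" can also be simulated. This is a toy model with assumed numbers: each guess is the truth plus a stable personal bias plus fresh occasion noise. Averaging your first and second guesses leaves the bias alone but shrinks the noise.]

```python
import random
import statistics

random.seed(7)
TRUTH = 100.0

def one_guess():
    # one person's judgment: truth + a stable personal bias + occasion noise
    return TRUTH + 5.0 + random.gauss(0, 20)

def mean_abs_error(estimator, trials=5000):
    return statistics.mean(abs(estimator() - TRUTH) for _ in range(trials))

first_only = mean_abs_error(one_guess)
two_averaged = mean_abs_error(lambda: (one_guess() + one_guess()) / 2)
print(round(first_only, 1), round(two_averaged, 1))
```

The averaged pair lands reliably closer to the truth than a single guess, though the shared bias of 5 never washes out; only independent judges, not a second opinion from yourself, can remove that.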
[00:36:02] Jordan Harbinger: So sleeping on it is always a good practice, I guess—
[00:36:05] Daniel Kahneman: When you can, it's a good idea. And sleeping on it for several weeks, by the way, is a better idea, because the more you delay, the more independent the two judgments become, and the more independent they become, the less noise there is.
[00:36:19] Jordan Harbinger: I've kind of noticed a little bit of this in my own life. For example, if somebody says, "Hey, why don't you have this author on your show?" and I go, "I don't care about this," and I delete the email, I will often go back and check it later, because sometimes I'm hungry, I'm tired, it's the end of the day. I've found that I shouldn't evaluate potential guests for the show, or advertisers, or whatever, when I'm hungry or tired. I do it first thing in the morning now, because I've said no to a lot of great people. I would have said no to Mother Teresa because I hadn't eaten for seven hours, right? I wouldn't be interested at all. And I realize you can't make every decision like that, but when it comes to big decisions, it seems like the more contexts we can manufacture, maybe the better off we are, right?
[00:37:04] Daniel Kahneman: Well, I mean, you know, the advice that I give, which I follow myself, is: if you're talking to a doctor and considering surgery and the doctor looks very tired, don't take the surgery. Wait until you get a doctor who looks really rested, because when they are very tired, they make poor decisions. And that's noise, by the way, because doctors make different decisions in the morning and late in the afternoon.
[00:37:29] Jordan Harbinger: That's almost like getting a second opinion, but it's not just getting a second opinion. It's to get a second opinion in a different context. Don't just go to the same tired doctor in the same tired doctor's office, or talk to the guy in the same office at the same time of day, right? Try and make the context as different as possible. What about algorithms? I know you've mentioned this in other work. These have to be more consistent, right? They're able to be anyway.
[00:37:53] Daniel Kahneman: The main characteristic of algorithms is that they are noise-free. An algorithm can be just a simple rule. If you apply a simple rule, then when you present the same problem twice, you're going to get exactly the same answer. But when it's judgment and you present the same problem twice, you're going to get different answers. That gives algorithms, or rules, a basic advantage over human judgment: they are noise-free.
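[Editor's aside: the noise-free property is just determinism. A rule, however simple, returns the same answer for the same inputs, while a human judge, simulated here as the same rule plus random occasion noise, does not. The rule and the numbers are made up for illustration.]

```python
import random

random.seed(3)

def rule_premium(income, debt):
    # a toy underwriting rule: identical inputs always yield identical output
    return 500 + 0.10 * debt - 0.02 * income

def human_premium(income, debt):
    # the same rule plus occasion noise, standing in for human judgment
    return rule_premium(income, debt) + random.gauss(0, 30)

case = (60000, 20000)
print(rule_premium(*case) == rule_premium(*case))    # True: noise-free
print(human_premium(*case) == human_premium(*case))  # False: two calls draw different noise
```

Present the same case twice to the rule and the two premiums match exactly; present it twice to the noisy judge and they differ.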
[00:38:23] Jordan Harbinger: So humans are in a way inferior to statistical modeling, right? Because we're noisy and they're not.
[00:38:29] Daniel Kahneman: Well, that depends on the amount of information that the human has and that the algorithm has. In many situations, the humans have information that is very difficult to code, or there aren't enough data to train an algorithm. So there are many situations in which no algorithm can compete with human judgment. But when the situation is such that an algorithm can compete, that is, when the information is codable and there are enough data and you can develop a rule, then typically, I'm sad to say, algorithms are going to be more accurate than humans, in part because they are noise-free, and in many cases also because they are less biased.
[00:39:14] Jordan Harbinger: I've read that there can be bias in algorithms too, and I suppose that comes down to whoever's coding the algorithm, right? Like if it's a bunch of Asian and white and Indian dudes who live in the Bay Area, which is where I am and which is typically who's coding algorithms these days, there could be some bias in there that is more favorable to men than women, or to Americans than non-Americans. Right? That could happen?
[00:39:36] Daniel Kahneman: Oh, certainly. I mean, you know, there was a famous study, I think at Amazon, where they found that when they applied an algorithm to select people to hire, I think, or to consider them for promotion, the algorithm was really biased against women. And the reason it was biased against women was that the algorithm had been trained on people who had been successful at Amazon in the past, and these tended to be men. So it's easy for algorithms to be biased, but it's also possible to overcome the biases of algorithms. And then the advantage they keep is that they're noise-free.
[00:40:16] Jordan Harbinger: I don't envy whoever has to come up with the solution to that problem, because now you have to decide, is this algorithm hiring more men than women because it's biased or is it hiring more men than women because there's something that men have that women don't? That we don't really necessarily want to put in our company literature right now, or in our employee handbook, because it's an uncomfortable truth. That seems like a pretty thorny issue.
[00:40:39] Daniel Kahneman: It is very thorny, but you know, it shows up in a lot of contexts and the very definition of what is bias is extremely complicated. So there are no simple answers in that domain. But I think I would say this, that there probably is a bias against algorithms currently in society. Algorithms have a reputation that is worse than they deserve.
[00:41:06] Jordan Harbinger: Why do you think that that is? Is it because we haven't used them well, or is there another reason?
[00:41:12] Daniel Kahneman: I think there is a fairly simple reason: in general, we prefer natural things to artificial or man-made things. Think about a self-driving car. The idea of a self-driving car killing a person is horrible, much more so than the idea of a human driver killing a person. Similarly for vaccines: the idea that you give someone a vaccine and they die from the vaccine is horrible. And by not using the vaccine, you might be killing dozens or hundreds of people. But the one person that the vaccine kills weighs like a lot of people dying a natural death. So that asymmetry between artificial and natural explains, I think, at least a good part of our opposition to algorithms.
[00:42:04] Jordan Harbinger: This is The Jordan Harbinger Show with our guest Daniel Kahneman. We'll be right back.
[00:42:09] Apartments.com knows that we've been doing everything from home lately: working from home, exercising from home, schooling from home, breakfast, lunch, and dinner from home, listening to this podcast from home, wishing we were anywhere else on the planet from home. But with all of that extra time we've had inside our homes, we've gained a newfound appreciation for making sure our place is the right place for us. That's where Apartments.com comes in. Apartments.com has the most rental listings across apartments, houses, townhomes, and condos, as well as powerful search tools, so it's easy to find that special somewhere that offers exactly what you need. And thanks to its 3D virtual tours, you can now explore your potential new place from anywhere. That includes such exotic locales as your boudoir, your walk-in pantry, your alfresco dining area, even your guest powder room if you're feeling adventurous, just about anywhere with an Internet connection. So let your fingers enjoy a stroll across the nearest keyboard and visit Apartments.com to start your rental search today. Apartments.com, the most popular place to find a place.
[00:44:07] This episode is sponsored in part by Caldera + Lab. Guys, if you're anything like me, you don't have a skincare regimen. Jen's constantly nagging me to put on some lotion after I wash my face or shave. I'm guilty of not really putting too much effort into skincare. Now that I'm in my 40s, it's probably a good time to start. I'm guilty of occasionally dipping into Jen's skincare stash as well, until, that is, I found an easy-to-use product designed for men's skin called The Good by Caldera + Lab. It's a non-toxic, vegan (not that you should be eating it), multifunctional serum that you use at night before bed, when I remember to do it, of course. It's an easy one-step routine that leaves the skin a little bit more moisturized. I don't feel all crackly in the morning. And they've even gone the extra mile in sourcing: all ingredients are either organically farmed or actually wild-harvested by hand with a team of botanists in Jackson Hole, Wyoming. I don't know, they ski over and pick flowers. The serum is an oil, but it doesn't cause breakouts, and it doesn't go on greasy. Best of all, you can try it a hundred percent risk-free, and if you don't love it, they'll refund you in full.
[00:45:06] Jen Harbinger: Receive 20 percent off your first purchase of The Good at calderalab.com/jordan, or use code Jordan at checkout.
[00:45:13] Jordan Harbinger: Thank you so much for listening to this show. It means the world to me. This is a great episode, I know it already. You're enjoying the crap out of this one. I enjoyed recording it. To learn more and get links to all the discounts and deals from the sponsors, we put all those codes, all those websites, everything on one page. Just go to jordanharbinger.com/deals. That's where you'll find them all in one spot. Please do consider supporting those who support us. And don't forget, we have worksheets for many episodes of the show. If you want some of the drills and exercises and main takeaways talked about during the show, those are all in one easy place as well. That link is in the show notes at jordanharbinger.com/podcast. And now for the conclusion of our episode with Daniel Kahneman.
[00:45:55] We sort of see the inverse of this too, right? If I heard that somebody died in a car accident, of course, that's tragic. It's horrible. But it happens every day, probably thousands of times globally, or even thousands of times in the United States, I really don't know, but we don't spend trillions of dollars trying to prevent that. But if we have buildings blown up by terrorists, we are going to war. We are mobilizing the whole country, even though the odds of me getting blown up in a skyscraper are pretty much zero. We're throwing the entire country's military and defense resources at that problem. But if tomorrow 10,000 more people die in car accidents, it's just the way cars are, you know, it's the risk of living your life.
[00:46:35] Daniel Kahneman: Absolutely. I mean, we know that people's reactions to risks are not mathematically sensible. People dread certain things, like dying in a terrorist incident. There was a funny experiment some years ago where people were considering this situation: you go to Europe, where at the time there was a lot of terrorism, and you could buy insurance against dying during your trip in a terrorism incident, or you could buy insurance against dying on your trip for any reason. And people were willing to pay more to insure themselves against dying in a terrorist incident than against dying for any reason, which really doesn't make sense. But dying in a terrorism incident is more frightening than just dying, and that's one of the reasons we get these differences.
[00:47:28] Jordan Harbinger: I love the idea that we're going to have self-driving cars at some point. I know how many friends of mine have gotten injured or killed in car accidents. We all know many people who that's happened to. It's just so tragic, but self-driving cars, it sounds like they can't just be twice as good at preventing accidents. They have to be like a hundred times better. And even then the knuckleheads of the 22nd century are going to be the people who go like, "I have freedom, I'm driving my own car. I'm better than any machine." And we're going to have to go, "What are you talking about? You've been in three car accidents in your life. These cars drive millions of miles a week and there's like one or two accidents a month globally because of self-driving. And you think you can just drive around and be better than them. Like, you're insane." We're going to have to contend with that.
[00:48:16] Daniel Kahneman: This is going to happen. You're absolutely right. It's happening already. I mean, that type of bias against the algorithm, against rules, we see it a lot.
[00:48:25] Jordan Harbinger: As a student of law or a former student of law, current lawyer, it seems to me troubling in some way that I can't quite put my finger on that we would allow algorithms to decide who's guilty, who's not, who's going to jail for how long. We love the idea that we have human intervention in the form of a jury, but also in the form of a judge who can say, "Okay, you did this bad thing, but like, you know, I see contrition in you. You do seem like somebody who's not going to do this again. There were aggravating circumstances I can take into account." We can probably program that, but I don't really want to throw myself at the mercy of that algorithm somehow.
[00:49:00] Daniel Kahneman: I know. And that's an example of the bias that we're talking about. But there's been research in one situation, very detailed research, and this is for judges who grant parole or bail — not parole, but bail — and that's a decision that's made millions of times every year. And it turns out that an algorithm can clearly do better than judges. You know, there are two risks. One risk is keeping people in jail who would do nothing wrong. And the other is releasing people who commit crimes. And you can reduce both by using an algorithm, relative to the performance of judges. There are absolutely compelling data on that. And at the same time, implementing bail by algorithm is going to be quite difficult socially, for the reason that you mentioned earlier.
[00:49:50] Jordan Harbinger: This is the human in me, I guess, the human bias, where I go: but asylum judges, this is so important. College applications, this is so important. Doctors are more likely to prescribe opiates when they're tired, stressed, fatigued. And yet I'm still like, "Eeh, I don't really want to let a robot decide, even one that's never tired, never stressed, and has all of the perfect information all the time." I still resist that, and it just doesn't make any sense to me why.
[00:50:13] Daniel Kahneman: It's the same thing as when Kasparov was playing against Deep Blue in chess. We were all rooting for the human, and we root for humans against algorithms. We like the human. We identify with the human. So that creates a very large bias, in part because, as we were saying earlier, dying in an accident with a self-driving car is somehow more shocking, we all feel that, than somebody who just died in an accident. Similarly, if you had diagnosis by artificial intelligence, diagnostic errors would be considered much worse if they're the result of an AI than if they're produced by a human doctor.
[00:50:59] Jordan Harbinger: I suppose there are obvious ways to control for this, right? We have the algorithm diagnose, and we don't show the doctors. The doctors diagnose. Did it match? Great, okay, put a little check mark next to it. If it didn't match, we have to figure out who's wrong: train the algorithm, or train the doctors, or see where things went wrong. So it seems like a problem that we can obviously solve, and given more data, misdiagnoses will be virtually nonexistent at some point.
[00:51:24] Daniel Kahneman: You know, there's going to be a difficulty here. And the difficulty is that when you put the algorithm in competition with the doctor, and the algorithm and the doctor have essentially equivalent data, the algorithm will actually do better. And when there is a conflict, there's a lot of research showing that it's the algorithm that should have the last word, not the human, because the human is noisier. And yet, you know, this is a very unpopular position. I feel uncomfortable. You feel uncomfortable. Everyone feels uncomfortable about it. But we're going to have to face this in the coming decades. The role of algorithms is going to increase, no question, and those questions will be faced.
[00:52:08] Jordan Harbinger: It's funny to think, well, not funny at the time, of course, but it's funny for me to think now: three doctors say you must go under the knife and have surgery right away, and the algorithm says, "Definitely don't get surgery. You're more likely to pass away." Of course, I'm going to listen to the doctors. They're sitting there imploring me, and the robot is just printing out on some screen, "Don't have the surgery, you might die." I can't picture myself not listening to the humans, even when we all know that the algorithm should have the last word. It's just very, very counterintuitive. And this is probably how my dad feels when I go, "Hey, this car is self-driving. Look, I can let go of the steering wheel," and he's panicking in the vehicle. It's the same thing.
[00:52:44] Daniel Kahneman: There is a development: we're getting used to it. I mean, algorithms are playing an increasing role in our lives. They are recommending films. They're granting loans in banks. They're making many decisions, and their role will increase. We will get used to it. And eventually, I think, people will get used to self-driving cars. But self-driving cars, as you said, will have to be many hundreds of times safer than humans before they're acceptable.
[00:53:13] Jordan Harbinger: Sure. Yeah, I suppose the difference is I don't die in a flaming wreck. If I don't get a loan for my house, I can just go to another bank, right? If I spend an hour watching a terrible movie on Netflix before turning it off, I'm escaping more or less unscathed from that decision.
[00:53:28] Daniel Kahneman: Absolutely right.
[00:53:29] Jordan Harbinger: How is noise different from, let's say, simple variability? I'm thinking of evolution. If nature can select between animals of the same species with big, giant horns versus smaller horns, and the bigger horns win over time, how is that variability good, but in the other cases the variation, the noise, is bad?
[00:53:50] Daniel Kahneman: Well, I mean, you know, we define noise as variability that you don't want. So when you have many judges looking at similar cases, you want their judgments to be the same. In evolution, variability is the engine; variability is a wonderful thing, and diversity is a wonderful thing. But there are situations in which you don't want variability because it cannot be used for anything. When different doctors vary in their recommendations, there is no feedback mechanism that would produce better medicine out of that variability. So that's noise. That's not evolution. Noise, by definition, is variability that you don't want.
[00:54:37] Jordan Harbinger: Oh, okay. Good. Okay. That's a good distinction because it seems like in many ways we want options for, especially for nature to choose what the—
[00:54:45] Daniel Kahneman: Absolutely.
[00:54:46] Jordan Harbinger: —best survival is. Active open-minded thinking is a concept that comes up in the book. I'd love to talk about this because it seems like an extremely useful skill. What is it?
[00:54:56] Daniel Kahneman: Well, it is being willing to change your mind. Being actively open-minded means that you mostly enjoy changing your mind. You're looking for opportunities to learn. You're looking for opportunities to think differently. And there are people who are actively open-minded, and that turns out to be quite instrumental in those people making better judgments.
[00:55:21] Jordan Harbinger: How do we train ourselves to do this better?
[00:55:24] Daniel Kahneman: I'm not sure that individuals can do very much on their own. I mean, you know, you can intend to think better, but I can speak from experience. I've been doing that work for more than 50 years, and I really don't think that my judgment or decision-making has improved a lot. So I don't think that individuals can do a lot. They can do something. They can improve at the margin. Organizations can do a lot, because organizations can think slow; organizations have procedures, which they can enforce. And when we're talking about decision-making, our hope is primarily in organizations. If individuals can learn from the way organizations do things, and do similar things in their own lives, so much the better, but it's not going to be easy.
[00:56:15] Jordan Harbinger: Also in the book is the concept of superforecasters. I'd love to hear something about these people because they seem extremely rare, but also extremely valuable.
[00:56:24] Daniel Kahneman: Well, superforecaster is actually a term of art. Psychologist Phil Tetlock and his wife, Barbara Mellers, have studied forecasting tournaments, where many thousands of people participate in long-running exercises in which they're asked to make probability judgments about various events. Like, you know, will there be another war in Ukraine before the end of this year? What's your probability? And it turns out that some people are much better than others at this, and the people who are very good at it are called superforecasters. And in fact, although they're not specialists and don't have access to classified intelligence, their forecasts on strategic and economic matters tend to be, I hear, more accurate than those of specialists in intelligence, the CIA and others.
[00:57:20] Jordan Harbinger: How can that even be possible? That's so incredible that it's almost unbelievable.
[00:57:25] Daniel Kahneman: Well, there are two things. In the first place, those are talented people; they're selected to be talented. They are actively open-minded. They are numerate: they can think in numbers, and they enjoy following events and changing their minds. They don't commit themselves too early. They stay in doubt longer than other people, but only as long as it's justified, and they end up being better at this particular kind of probabilistic judgment: keeping multiple possibilities in mind and evaluating their relative likelihood. That's a special skill, and it's a very important skill.
[00:58:07] Jordan Harbinger: Do we think that that's trainable or it's just, some people tend to be good at having their brain float in a non-biased environment?
[00:58:14] Daniel Kahneman: With superforecasters, there is an element that's trainable. They can be trained to improve, and indeed they do, but you cannot take just anyone and make them into a superforecaster. There is an element of talent and personality that is not easily controlled.
[00:58:30] Jordan Harbinger: Well, in closing here, one of the first things I ever learned about your work, when I was probably a teenager, was the peak-end rule: the idea that we remember experiences by how they end, not necessarily how we felt during the experience. I've read that you studied this using people who were undergoing colonoscopies, which I have to say, did you just pick the most uncomfortable thing that you could think of in the moment and select it based on that?
[00:58:56] Daniel Kahneman: Well, I didn't think of it. I was collaborating with a physician, and he had the idea. This was many years ago, when colonoscopies were painful. Now, people are put to sleep before a colonoscopy, so they don't even know what we're talking about. At the time, it was a very painful procedure, and it turned out that how you evaluated that procedure depended a lot on how painful the last few moments of it were. And one thing that didn't matter was how long the colonoscopy was. So that's the peak-end rule. It's how bad it was at its worst and how bad it was at the end, but how long it was, for some crazy reason, people don't seem to attach much importance to that.
[00:59:41] Jordan Harbinger: So I'd rather have, in theory, a 45-minute colonoscopy that has a friendly, happy ending versus a 30-minute colonoscopy that's painful the whole way through.
[00:59:51] Daniel Kahneman: Absolutely. And there is evidence to support that, and it could be even more extreme than 45 to 30.
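The peak-end rule is often summarized as: remembered badness tracks the average of the worst moment and the final moment, while duration barely registers ("duration neglect"). The exact weighting is an empirical question, so treat the following as an illustrative sketch with made-up pain ratings, not Kahneman's fitted model:

```python
def peak_end_score(pain_samples):
    """Predict remembered pain as the average of the peak (worst)
    moment and the final moment. Duration is deliberately ignored,
    mirroring the "duration neglect" finding."""
    return (max(pain_samples) + pain_samples[-1]) / 2

# Hypothetical per-minute pain ratings (0 = none, 10 = worst):
short_harsh = [4, 6, 8, 8]           # 4 minutes, ends at its worst
long_gentle = [4, 6, 8, 8, 5, 2, 1]  # 7 minutes, tapers off gently

# The longer procedure delivers strictly more total pain, yet the rule
# predicts it is remembered as much less bad:
print(peak_end_score(short_harsh))   # 8.0
print(peak_end_score(long_gentle))   # 4.5
```

This is exactly Jordan's colonoscopy trade-off: tacking a gentler tail onto a longer procedure lowers the end term, so the remembered experience improves even though the total pain goes up.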
[00:59:58] Jordan Harbinger: Well, on that peak note in an effort to get you to remember this interview fondly, I want to thank you so much for your time and for joining us today. Rarely do I get to interview somebody of your stature and I really appreciate the opportunity.
[01:00:09] Daniel Kahneman: This was a pleasure. You're a very good interviewer.
[01:00:12] Jordan Harbinger: Ah, thank you.
[01:00:13] Daniel Kahneman: You really are.
[01:00:16] Jordan Harbinger: We've got a trailer for our interview with Robert Greene, one of the most acclaimed authors of our time. Robert's insight into human nature is second to none. And there's a reason that his books are banned in prisons yet widely read by both scholars and leaders alike. Coming right up.
[01:00:33] If we just sit in our inner tube with our hands behind our heads and crack open a six-pack of beer, the river of dark nature takes us toward that waterfall of the shadow.
[01:00:41] Robert Greene: Yeah. So when we're children, if we weren't educated, if we didn't have teachers or parents telling us to study, we'd be these monsters. We're all flawed. I believe we humans naturally feel envy; it's the chimpanzee in us. It's been shown that primates are very attuned to other animals in their clan, and they're constantly comparing themselves.
[01:01:06] Your dislike of that fellow artist or that other podcaster, I'm 99 percent sure that it comes from a place of envy. You are not a rational being. Rationality is something you earn. It's a struggle. It takes effort. It takes awareness. You have to go through steps. You have to see your biases. When you think you're being rational, you're not being rational at all. You go around, everything is
[01:01:30] personal. "Oh, why did he say that? Why is my mom telling me this?" And I'm telling you, it's not personal. That's the liberating fact: people are wrapped up in their own emotions, their own traumas. So you need to be aware that people have their own inner reality. People are not nearly as happy and successful as you think they are. Acknowledging that you have a dark side, that you have a shadow, that you're not as great a person as you think can actually be a very liberating feeling. And there are ways to take that shadow and that darkness and kind of turn it into something else.
[01:02:06] Jordan Harbinger: If you want to learn more about how to read others and even yourself, be sure to check out episode 117 of The Jordan Harbinger Show.
[01:02:16] Man, what a show. What a great guest. I've been trying to get him on for a freaking decade. I'm so glad it finally happened. He is a gem of a human. Now, humans often think causally, right? This happened because of X, Y happened because of Z, as opposed to statistically: the odds are N that this will happen. It's human instinct to do this, because statistics take careful consideration and training and math, and we didn't necessarily evolve to do that in our brains quickly, without thinking. Of course, these causal explanations cause all kinds of problems in themselves, because they aren't necessarily correct. They're kind of anecdotal. So the statistical method here is better. We've talked a little bit about this with Annie Duke, when we talked about thinking in bets and resulting in decision-making. If you haven't heard that episode, that is episode 40 of the show. You can find it by going to jordanharbinger.com/40. You can always do that with the episode numbers. It'll take you right there.
[01:03:10] Now, when we're told things are popular, the group sees it that way, right? Social proof. So if we want upvotes on social media, we get some early upvotes. I've informally tested this on Reddit and news websites, and the lesson here is that social influence causes noise because of a concept called herding: shifting our behavior based on what others do. We see that some people are waiting in line for something, so maybe we go wait in line for it too. It happens all the time here at restaurants in California; I don't know if you've seen it where you live. The same goes for upvotes and downvotes: you might upvote or downvote something before you even see it, or click the like button before you really process it, because so many people have already liked it. And juries must deal with this constantly, because jury deliberation is very heavy on social influence, and this can be for better or for worse. The problem is, they're making very important decisions. Sometimes we need to take the social pressure and the herding behavior out of decision-making.
[01:04:01] Other points from the book that we didn't get a chance to discuss today: one idea was the concept of respect-experts versus other types of experts. Sometimes we respect people, and they become experts for that reason alone. We can't trust certain experts just because they sound confident, or because they are a guest of honor, have a degree, or have high social status. Here's an example: a chess master may sound timid and have a hard time explaining what they think, while a political pundit may sound very convincing but be completely talking out of their ass, or merely relying on judgment, and that results in bad outcomes much of the time.
[01:04:40] Another concept I love was called bullsh*t receptivity. Now, I didn't have the guts to say "bullsh*t receptivity" in front of an 87-year-old Nobel Prize-winning economist/psychologist. But some people are more likely to accept things that they hear and be impressed even if the statement is vacuous. And I think we may have just summed up all of social media and about 90 percent of podcasts in one sentence here.
[01:06:06] The last concept that I really liked, and that I think you'll all enjoy, is called the bias cascade. This is a bias that leads to even more bias, and it happens all the time in our lives. But here's an example that could have a real disastrous outcome for someone. Let's say a fingerprint technician is told, "This fingerprint is from a person who matches the description of the suspect in a crime." That unfortunately leads to a much higher chance that the expert will say the print is a match, which then leads to an investigation that presumes that person is the suspect, and thus investigators find more circumstantial evidence that the person could be guilty. This is terrifying from a legal perspective. As a lawyer, I'm looking at this and going, what, they tell fingerprint experts that the print might be from the suspect, or that it matches the suspect? They should be working in a vacuum. But there are counterarguments, right? Fingerprint experts need to know what they're working on; it results in a higher level of seriousness for the job, and they find the job more rewarding when they think they're solving crimes. But as you can see, these bias cascades can be really, really problematic. There's a near certainty that there are people in jail right now for crimes they did not commit because of this exact type of situation. We already know that there are false convictions, and there's a likelihood that this type of noise and these bias cascades contribute to this exact problem, which, for me, as a human and as an attorney, is scary to say the least.
[01:06:38] Big thank you to Dr. Daniel Kahneman. I really enjoyed this one. The book title is Noise. Links to his book will be in the show notes. Please use our website links if you buy the book; it does help support the show. They work in other countries, they work for Audible, they should work anywhere. If you're having trouble with those links, please do let me know. Worksheets for the episode are in the show notes. Transcripts are in the show notes, and there's a video of this interview going up on our YouTube channel at jordanharbinger.com/youtube. We also have a brand-new clips channel with cuts that don't make it into the show and highlights from the interviews that you can't see anywhere else. It's a new channel; we need all the subs we can get. Go to jordanharbinger.com/clips and click that subscribe button for me. I'm @JordanHarbinger on both Twitter and Instagram, or you can hit me on LinkedIn.
[01:07:22] I'm teaching you how to connect with great people and manage relationships, using systems and tiny habits over at our Six-Minute Networking course. The course is free. I don't need your credit card, none of that. Go to jordanharbinger.com/course. I'm teaching you how to dig that well before you get thirsty. And most of the guests on the show, they subscribed to the course. They contribute to the course. Come join us, you'll be in smart company where you belong.
[01:07:43] This show is created in association with PodcastOne. My team is Jen Harbinger, Jase Sanderson, Robert Fogarty, Millie Ocampo, Ian Baird, Josh Ballard, and Gabriel Mizrahi. Remember, we rise by lifting others. The fee for this show is that you share it with friends when you find something useful or interesting. If you know somebody who's into science, into decision-making, studies bias, or just loves a good conversation, please do share this episode with them. I hope you find something great in every episode of this show. So please share the show with those you care about. In the meantime, do your best to apply what you hear on the show, so you can live what you listen, and leave everything and everyone better than you found them.