Stuart Ritchie (@StuartJRitchie) is a lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London and author of Intelligence: All That Matters and Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth.
What We Discuss with Stuart Ritchie:
- Why good, meaningful science too often gets pushed aside by the hype around bad science that makes for sensational headlines.
- How and why incentives in research are often skewed and lead to bad science — and even outright fraud.
- What happens when good scientists are hoodwinked by bad science and vouch for it as gospel because its pedigree seems legit.
- How can you spot bad science before you adapt your lifestyle to correspond to its dodgy, worthless, or perhaps even dangerous advice?
- What is the Open Science movement, and how might it help us reform these problems in research going forward?
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
What’s more likely: that a titillating-sounding study linking pornography and ESP proves that human beings possess an innate ability to predict the future, or that another, less flashy study trying to replicate these results and failing suggests the truth may still be “out there?” Those of us who want to trust science hope the latter is the study that makes it into the peer-reviewed journals taken seriously by the academic world. But those of us who have been paying attention to how hype affects news cycles suspect that the former is the study that actually gets submitted, published, passed around, and talked about. Sure, a retraction can always be issued for the bad science after the fact, but not everyone’s going to get the memo. And this, as Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth author Stuart Ritchie explains, is a big problem.
On this episode, Stuart joins us to dissect the reasons behind the way well-intentioned but relatively minor discoveries are distorted into modern miracles by the time they hit the press. It’s not so much that scientists have a desire to obscure the truth, but they do have to be visible enough to get the funding that makes their work possible. So it’s not science itself that happens to be the problem — or the people trying to further our understanding of that science — but the broken system around how science is financed and its findings monetized. Thankfully, Stuart has some ideas about how we might reform this system into something that serves us better. Listen, learn, and enjoy — for SCIENCE!
Please Scroll Down for Featured Resources and Transcript!
Sign up for Six-Minute Networking — our free networking and relationship development mini course — at jordanharbinger.com/course!
Microsoft Teams lets you bring everyone together in one space to collaborate, draw live, share, and build ideas with everyone on the same page, and it makes sure more of your team is seen and heard with up to 49 people on screen at once. Check out microsoft.com/teams for more info!
Miss our interview with Austin Meyer, the man who leads a valiant crusade against patent troll dirtbags? Catch up with episode 326: Austin Meyer | Slaying the Patent Scam Trolls here!
THANKS, STUART RITCHIE!
If you enjoyed this session with Stuart Ritchie, let him know by clicking on the link below and sending him a quick shout-out on Twitter:
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at firstname.lastname@example.org.
Resources from This Episode:
- Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie
- Intelligence: All That Matters by Stuart Ritchie
- Stuart Ritchie | Research Portal, King’s College, London
- Stuart Ritchie | Twitter
- Extrasensory Pornception: Doubts about a New Paranormal Claim | Scientific American
- What is Replication? | PLoS
- Are Eggs Good or Bad for You? New Research Rekindles the Debate | Stat
- What’s Next for Psychology’s Embattled Field of Social Priming | Nature
- Thinking, Fast and Slow by Daniel Kahneman
- When the Revolution Came for Amy Cuddy | The New York Times
- Study Finds Peanut Consumption in Infancy Prevents Peanut Allergy | National Institutes of Health
- Ending Medical Reversal: Improving Outcomes, Saving Lives by Vinayak K. Prasad and Adam S. Cifu
- ZDoggMD | Debunking Plandemic COVID-19 Pseudoscience | TJHS 354
- Science Media Centre
- IsItBullshit? | Reddit
- Rational_Skeptic | Reddit
- Skeptic | Reddit
- DeBunkThis | Reddit
- Paolo Macchiarini and Trachea Transplant Information Index | CIRCARE
- The Celebrity Surgeon Who Used Love, Money, and the Pope to Scam an NBC News Producer | Vanity Fair
- Texas Sharpshooter Fallacy | Investopedia
- We’re All ‘P-Hacking’ Now | Wired
- Meet Yoshitaka Fujii, the Most Prolific Fraudster in Modern Science | Vox
- What a Massive Database of Retracted Papers Reveals about Science Publishing’s ‘Death Penalty’ | Science
- Steven Novella | Twitter
- Acupuncture Doesn’t Work | Science-Based Medicine
- Eugenics | The Holocaust Encyclopedia
- Trofim Lysenko, the Soviet Era’s Deadliest Scientist, Is Regaining Popularity in Russia | The Atlantic
- Pseudoscience and COVID-19 — We’ve Had Enough Already | Nature
- Learning from a Retraction | The Lancet
- The Discredited Doctor Hailed by the Anti-Vaccine Movement | Nature
- How Outcome Switching is Corrupting Medical Research | Psychology Today
- Open Science Movement | United Nations Educational, Scientific and Cultural Organization
- Center for Open Science
- Retraction Watch
Transcript for Stuart Ritchie | The Science Fictions Undermining Facts (Episode 436)
Jordan Harbinger: This podcast is brought to you by Microsoft Teams. When there are more ways to be together, there are more ways to be a team.
[00:00:06] Coming up on The Jordan Harbinger Show.
[00:00:09] Stuart Ritchie: The journal itself, The Lancet, one of the world's top medical journals, ran an editorial that said, "Paolo Macchiarini is not guilty of misconduct." Just a few weeks later, all the stuff about the Pope and all that came out. And also, there was this documentary where they actually went and met some of the patients and saw the terrible condition they were in. After that came out, they all had to humiliatingly climb down and say, "Okay, we were wrong. Actually, this guy has been fraudulent all along." So in this case you had big medical institutions covering up — like they were on his side, they were on the side of a psychopathic fraudster whose work was leading to the deaths of his patients.
[00:00:48] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people. If you're new to the show, we have in-depth conversations with people at the top of their game. That means astronauts, entrepreneurs, spies, psychologists, even the occasional undercover FBI agent. Each show turns our guests' wisdom into practical advice that you can use to build a deeper understanding of how the world works and become a better critical thinker. If you listen to this show, chances are you're a big believer in science. You put a lot of trust in it. I do as well, but what happens when we lose some of the transparency that is the cornerstone of the scientific process? Well, today we'll discuss why hype in science can be bad for society, bad for medicine, and bad for those of us who rely on it — which is everyone. We'll also learn how and why incentives in science are often skewed and lead to bad science and even outright fraud.
[00:01:40] If you're wondering how I managed to book all the great guests for my show, these authors, these thinkers, even celebrities every single week, it's because of my network. I'm teaching you how to build your network for free over at jordanharbinger.com/course. And by the way, most of the guests on our show, they're in the course, they contribute to the course. Come join us, you'll be in smart company. Now, here's Stuart Ritchie.
[00:02:02] The book starts with a professor from the look of it, kind of trying to prove that we're all psychic when it comes to porn. Is that — did I understand —
[00:02:09] Stuart Ritchie: Yes.
[00:02:09] Jordan Harbinger: — that correctly? Okay.
[00:02:10] Stuart Ritchie: Basically, that is what that paper claimed, and it was published in a mainstream psychology journal.
[00:02:15] Jordan Harbinger: So what happens there? First of all, are we psychic when it comes to porn and if not, how did we get that far? How did that happen? How did that get published?
[00:02:22] Stuart Ritchie: It is kind of an absurd thing, and the absurdity is part of the story. This was a paper published by Daryl Bem, who's at Cornell University and is a top-rated social psychologist — well-known, well-respected. He published this paper where he got Cornell undergraduates to come in and sit in a computer cubicle — actually, do people in the US say cubicle? No, you know it. Sorry — in the UK, we say cubicle for both offices and toilets.
[00:02:45] Jordan Harbinger: Oh yeah, no, we only use it for offices. Although I strongly recommend that we start using it for toilets because coincidentally, there's a lot of overlap in those two activities.
[00:02:55] Stuart Ritchie: Yeah, I think so, I think so. Anyway, so they're looking at the computer screen and they're told, "There are two pictures of curtains on the screen. Behind one, there is a picture. Behind one, there's nothing. So you just have to click the one that there's a picture behind." And they say, "Well, I don't know which one." Then they just say, "Well, you know, just whichever one you feel." The experiment claimed to show that if you put a picture of something really boring, like a tree or a chair, behind one of the curtains, people get it 50/50 right — just what you would expect. They click the one that there's a picture behind 50 percent of the time or thereabouts. But if you put porn behind one of the curtains — so he claimed in this experiment — then they get it slightly above chance. They get it like 51.3 percent of the time or 53.1 percent — slightly above 50 percent.
[00:03:38] That was a statistically significant difference. And it apparently showed that the undergraduates could sense that there was porn going to be shown to them in the future. And there was an opposite version where they put a violent, unpleasant picture behind one of them, and people tended to avoid that one more often than chance — and they couldn't possibly have known that that was the one that was going to come up, except by psychic intuition.
[00:03:56] Jordan Harbinger: Okay. Except — since we know that that's not real, or at least it has yet to be proven, what happened here?
[00:04:02] Stuart Ritchie: Well, yeah. I mean, people would be breaking casinos if they could really sense the future in this way —
[00:04:07] Jordan Harbinger: Right.
[00:04:08] Stuart Ritchie: — but there are several things that occurred here. I mean, this was a paper which used all the standard ways that psychologists analyze their experiments. It was a standard experimental setup. They used standard statistical analysis. It was published in a mainstream psychology journal and so on, and yet it got these completely absurd results. And so what a lot of people concluded from this was that there must be some kind of problem with the standard way that psychologists analyze their data and do their research, which, you know, is part of what this book is about. What was also kind of revealing about this was that we ran the same experiment again, and we found no such results. We found no psychic results in our replication study.
[00:04:44] Jordan Harbinger: Surprise.
[00:04:45] Stuart Ritchie: Yeah, exactly. And we sent our replication study off to the same journal that published the original. And they said, sorry, "We're not interested in publishing replications." So they were interested in publishing the super flashy, psychic result —
[00:04:56] Jordan Harbinger: Fake psychic results, fake psychic porn results. Yes.
[00:05:00] Stuart Ritchie: — and not the one that said, "You know what, actually, probably people don't have a psychic ability to detect the future." I'm not claiming that this is a fraudulent result, right?
[00:05:07] Jordan Harbinger: Okay.
[00:05:08] Stuart Ritchie: It's not a case where someone has just made up data, but it's a case where the standard methods that we use in our studies have led us to results that we kind of all know are not true. Right? We kind of know that that's —
[00:05:19] Jordan Harbinger: Okay. So the guy who ran this — and I assume it was a guy because, let's be honest, who does this psychic study?
[00:05:25] Stuart Ritchie: It was indeed a guy.
[00:05:27] Jordan Harbinger: He also knew. And his team is like, "This is bullcrap, but we're going to get it published. It's kind of like that, right? They're not like, "Wow, we've discovered this groundbreaking psychic pornography effect."
[00:05:37] Stuart Ritchie: I actually think in the world of parapsychology, which is the kind of research he was doing, there are true believers. I think people believe that this stuff is real. I think people believe that we have a tiny, tiny tendency — it's not like you can go to a medium and get them to read your tea leaves or whatever it is, but that we have this kind of tiny little thing that evolution has built into us to search for erotic stimuli in the future, avoid violent stimuli in the future, and just generally sense stuff that's coming up in the future.
[00:06:04] I mean, the part of the experiment that we specifically replicated was just about word lists. Like we didn't do the same porn part.
[00:06:09] Jordan Harbinger: Sure, you didn't.
[00:06:10] Stuart Ritchie: I wouldn't even know where to look for porn.
[00:06:13] Jordan Harbinger: Oh yeah, of course.
[00:06:14] Stuart Ritchie: Yeah. Yeah. And so —
[00:06:15] Jordan Harbinger: I got a guide.
[00:06:16] Stuart Ritchie: Yeah, I think we can safely assume that he believes that this is true. There's a kind of tradition of parapsychology research in psychology, and it's been described as the jester in the court of science, because it's a really absurd thing, which looks really ridiculous and totally doesn't fit with physics. In physics, there are causes and then effects, not the other way around. And yet it illustrates that there's something maybe a bit funny with the way we do science. And so if you dig into the statistics of the paper, there are lots of ways he might have been able to find what looked like a real result when actually there wasn't one.
[00:06:49] And this is again what I got into in the book when I'm talking about the biases that scientists have. You can convince yourself that you've found something real when actually you haven't, and it's just noise — just statistical fluctuations in the data that you've collected. If you really believe something and you really want something to come out of your data, which a lot of scientists do — they want to find the next cure, the next big, exciting result, the next thing that'll change the world — then you can run the statistics in such a way that it comes out with really whatever you're looking for.
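[A toy simulation — mine, not from the episode or the book — makes this concrete. If a researcher flexibly runs enough analyses on pure noise (here, coin-flip "guesses" like the curtain task), roughly 5 percent of them will come out "statistically significant" at the conventional p < 0.05 threshold by chance alone. All numbers and names below are illustrative.]

```python
import math
import random

def p_value_two_sided(successes, n, p0=0.5):
    """Two-sided p-value for a proportion, via the normal approximation."""
    phat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (phat - p0) / se
    # Tail probability from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n_participants = 100  # each "analysis" looks at 100 pure coin-flip guesses
n_analyses = 1000     # a researcher flexibly trying 1,000 different analyses
alpha = 0.05

false_positives = 0
for _ in range(n_analyses):
    hits = sum(random.random() < 0.5 for _ in range(n_participants))  # chance only
    if p_value_two_sided(hits, n_participants) < alpha:
        false_positives += 1

# Roughly alpha of the noise-only analyses clear the significance bar
print(f"{false_positives} of {n_analyses} noise-only analyses were 'significant'")
```

[Report only the "significant" ones and you have a publishable-looking psychic effect built entirely from noise.]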
[00:07:16] Jordan Harbinger: Right. That makes sense. If you're a psychologist who studies the paranormal, there's not a whole lot to work with, because so far we've found there just isn't really anything there — at least no proof of it.
[00:07:26] Stuart Ritchie: Yeah.
[00:07:26] Jordan Harbinger: And you're looking for that and you're looking for that. And it just so happens that if you find something, you'll be world-famous overnight, and at the top of that field forever.
[00:07:33] Stuart Ritchie: Right.
[00:07:33] Jordan Harbinger: There's a strong incentive to just maybe slide something to the left when maybe it shouldn't have been slid to the left.
[00:07:39] Stuart Ritchie: Absolutely. Absolutely.
[00:07:40] Jordan Harbinger: Thereby getting some imaginary results.
[00:07:42] Stuart Ritchie: Yeah, exactly. And it's not even just a matter of doing the statistics. It's a matter of the way people write up their paper. You know, even if they haven't really found much, they can write the paper as if it's the most exciting result ever. And the nice thing about this story, and the reason it's so good, is that I had the experience of the failed replication and trying to get it published. And I think that illustrates a lot about the way science works — the role journals play — so we can talk about that.
[00:08:02] Jordan Harbinger: Yeah,
[00:08:02] Stuart Ritchie: But I think the paper itself illustrates so much about how, if we just let people carry on with the standard way they're doing science, they will find, on their face, patently absurd results — like that remote sensing of porn. And so I think it's for good reason that this was one of the first studies that kicked off what's been described as the replication crisis in psychology.
[00:08:24] Jordan Harbinger: Right. And we'll talk about that, right?
[00:08:26] Stuart Ritchie: Yeah.
[00:08:26] Jordan Harbinger: Because look, why the hell was that published? That's what everyone's thinking. How is it that that got into a journal? It's not one of those — well, let me back up: do most scientific studies just get published?
[00:08:37] Stuart Ritchie: Well, most scientific studies that get published — and this is the vast majority of them — have positive results. And that implies that not all scientific studies are getting published. When you run a scientific study, you're usually interested in testing a hypothesis. And then, once you've tested that with statistics, you're sending it off to a journal and getting it published. That's what you really want. It gets peer-reviewed. The peer reviewers check that you haven't screwed anything up in the analysis; they check you've written it up in a fair way and whatever, and then it gets published.
[00:09:03] But there are lots of ways that that can go wrong along the way. You yourself —
[00:09:07] Jordan Harbinger: Let me pause here though, because if I'm looking to test a hypothesis, shouldn't like half or more than half of my tests just not yield anything — like, "Oh, I guess I was wrong"? I mean, shouldn't that happen more often than being right?
[00:09:21] Stuart Ritchie: I think we could debate how many hypotheses we'd expect to go wrong. Scientists can make an educated guess — a hypothesis is usually an educated guess. So you might not expect it to be just 50/50. You might expect it to be slightly higher than that, because they're testing the next thing that comes from their theory and so on.
[00:09:34] Jordan Harbinger: Right. It's not just whole cloth — I'm not just pulling something out of thin air, flipping a coin, and calling it a new hypothesis.
[00:09:39] Stuart Ritchie: Yeah — "I think this will correlate with that," or, "This experiment will work like this." People are following a line of research. So you'd expect it to be maybe higher than 50 percent. But you wouldn't expect it to be over 90 percent, which is what it is in some fields.
[00:09:51] Jordan Harbinger: Right.
[00:09:51] Stuart Ritchie: In psychology and psychiatry, it has been shown that over 90 percent of studies find positive evidence for the hypothesis that they're testing. And that implies either that the guy at Cornell was right and psychologists are actually psychic, or that something has gone really wrong with the way that we publish science. And as we're saying, there are all these biases that push us towards only seeing these positive results.
[00:10:14] So this isn't a record of what has actually happened. The scientific literature, which is supposed to be a clear, accurate reflection of what scientists have been doing, does not accurately reflect what scientists have been doing. In fact, it shows this kind of rose-tinted view where all the results are positive, because, first of all, scientists don't publish studies that don't support their hypothesis.
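[That rose-tinting can be sketched in another toy simulation of mine — not from the book, and the numbers are illustrative only. Simulate thousands of studies of an effect whose true size is exactly zero, and "publish" only the ones that hit p < 0.05: the published record then reports a sizeable average effect where none exists.]

```python
import math
import random

random.seed(0)

def run_study(n=30):
    """One 'study' of an effect whose true size is exactly zero."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    z = mean / (1.0 / math.sqrt(n))  # z-score against the true null
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return mean, p

all_sizes, published_sizes = [], []
for _ in range(5000):
    effect, p = run_study()
    all_sizes.append(abs(effect))
    if p < 0.05:               # journals accept only the "positive" results
        published_sizes.append(abs(effect))

mean_all = sum(all_sizes) / len(all_sizes)
mean_published = sum(published_sizes) / len(published_sizes)
# The filtered literature exaggerates the effect several-fold
print(f"mean |effect| across all studies run: {mean_all:.3f}")
print(f"mean |effect| in the published record: {mean_published:.3f}")
```

[A reader of the "published" studies alone would conclude there's a robust effect, which is exactly the distortion being described.]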
[00:10:37] Jordan Harbinger: You tried that. Right? And they were like, "Nah."
[00:10:40] Stuart Ritchie: Yeah, exactly, precisely. So that's a clear example of where they said, "Hey, we'll publish the really exciting initial result, the one that shows that the psychic powers are real, but we're not interested in the one showing they're not — that's too boring."
[00:10:51] Jordan Harbinger: Is this why we see all this crap, like on CNBC, where it's like, "Scientist: Eating Red Peppers Can Cut Cancer Risk by Half"? And you're like, "What? Oh, probably not though." Right?
[00:11:04] Stuart Ritchie: I think there's a bit in the book where I specifically take nutrition research to task, because I think it's exactly that. You have these huge datasets where people collected data on dietary habits — people fill in every food they've eaten for the last two weeks, or what they can remember of it — and then their health outcomes, all sorts of different ones: mental health, physical health, whatever. If you dig around in that data for long enough, you'll find something. You can find red peppers and cancer if you want. You can find eating eggs, or drinking milk, and heart disease. You can find drinking red wine and heart disease — all sorts of stuff. And you can publish it.
[00:11:38] And that's why the literature on nutrition is so confusing. A huge amount of the findings are likely to be just statistical noise rather than real signals, which are what we really want. This is a real failure of science, because people want to know what they should and shouldn't eat.
[00:11:53] Jordan Harbinger: Yeah.
[00:11:53] Stuart Ritchie: You can see from the number of books on this that get sold that there's real interest. People want to know, and scientists should, at this point, have been able to provide reliable evidence on that, but we're still really not at the point where we can reliably tell people.
[00:12:04] Jordan Harbinger: Well, you must be popular among your fellow scientists for blowing up all their studies and results and making it harder for people to claim street cred by writing their bull crappy science articles.
[00:12:15] Stuart Ritchie: Since 2011 or so, when that psychic paper got published, this movement of people talking about the replication crisis has certainly caused a lot of upset. People are very upset, saying, "But I've made my whole career on publishing positive results and doing the statistics this way."
[00:12:30] Jordan Harbinger: And that's why you're part of the problem.
[00:12:33] Stuart Ritchie: Exactly, exactly. But you know, it must also be said that lots of scientists are like, "Oh God, yeah, we have a serious problem here." And there have been some positive steps in the right direction in terms of changing the way we publish stuff.
[00:12:44] Jordan Harbinger: So social science is something you spoke about a lot in the book. So priming was one of these. Can you sort of briefly explain priming? Is that — I'm going to do a bad job, so just go ahead and do it.
[00:12:54] Stuart Ritchie: Well, it was almost like a fad in social psychology for maybe a decade or more, where there were loads of these experiments about unconscious cues in the environment and how they can affect our behavior and our feelings. My favorite example of this — which I think might only be a footnote in the book because it has never actually been directly replicated, though I would put money on how that replication would go if someone came along and tried the study — is one where they got people to come into a room, students again, and do creativity tests.
[00:13:23] So it's like: how many uses can you come up with for a brick? I mean, that's not a very high-level way of assessing people's creativity, but it's probably the best we can do in the lab for a creativity test. They had a big cardboard box in the middle of the room, and they had people either sitting in the cardboard box while they were doing the creativity test or sitting outside the box while they were doing it. And what they found was that the people who were sitting outside the box — and thus thinking outside the box — got higher scores on the creativity test than those who were sitting in the box. And the idea here — this is completely absurd to me — was that priming that idiom of thinking outside the box had actually translated directly to people's creativity and given them a boost in how creative they were. And there's a whole host of studies like this. This wasn't just a tiny effect; this was like a huge boost to their creativity because they were sitting outside the box versus inside the box.
[00:14:22] That general type of research — where things in the world influence your behavior even though you're completely unconscious of them, where holding a cup of warm coffee makes you feel warmer to your friends compared to holding a cup of cold water, because warmth is activated in your mind and so you feel warmer in a metaphorical sense when you fill in a questionnaire about how much you like your friends and family — all that kind of stuff has not done well when people have tried to replicate it. It's not done well when people have looked into the stats, but it was a big thing in psychology.
[00:14:50] And it has made its way into loads of really popular books, including by really reliable and respected people like Daniel Kahneman. There's loads of this stuff in Thinking, Fast and Slow, which is ultra-popular and, overall, a really good book. I'm not criticizing the book — I think it's a great book — but he says in that book that there are these studies on priming, that unconscious influences can have big effects on our behavior, and, you know, these are published in scientific journals. This is a direct quote: he says, "You have no choice but to believe that these are true about you, about your behavior."
[00:15:19] Jordan Harbinger: Yeah. So that's scary, right? Because this is a brilliant scientist who wrote — I guess, could you call that a seminal work? Because everyone reads it and everyone talks about it. I don't know if that's —
[00:15:28] Stuart Ritchie: I mean, he's the only psychologist to have won a Nobel Prize — technically in economics, but he's a psychologist; he shared the Nobel Prize for that. So he's as big as it gets in psychology.
[00:15:38] Jordan Harbinger: Here's this guy saying, "Well, you know, even if I think that sounds fantastical, it was published and it's settled science, therefore it's settled science," or whatever. So it almost sounds like he's talking to himself like, "Jeez, that doesn't sound right — but you know what, who am I to say that this person who did all this research is wrong?" And it's like, you're a Nobel Prize-winning, multimillion-copy bestselling author, but we can't expect him to go, "You know, that doesn't sound right to me. Let me just run three to five brief studies, double-blinded and funded by some organization, just to make sure this isn't bullcrap."
[00:16:09] Stuart Ritchie: Right. And he totally would get the funding to do such a study.
[00:16:11] Jordan Harbinger: Sure.
[00:16:12] Stuart Ritchie: And, you know, to be fair to me as after, you know, several years of like these results being criticized and failing to replicate when other people tried to do them and to do the replication studies, he has come back and said, "Look, I was wrong about this. I shouldn't have been so certain." But, you know, if he — and by the way, his specialist subject is how we're irrational and how we make incorrect conclusions about stuff and how we're drawn into thinking in irrational ways — if even he can make the mistake, like what hope is there for anyone else. What hope is there for the rest of us when we're assessing science and understanding science. All these priming studies have become kind of, I think they've become less popular now in social psychology.
[00:16:44] And people have moved on to different stuff after realizing that it was kind of fun to come up with all these ideas — like warmth, the concept of warmth: "What could we do with that? Oh, let's do that." There was one where being higher up on an escalator made you feel more high and mighty, so you would rate yourself more arrogantly compared to your friends and stuff. I think that study was actually fraudulent — it was made up. But there was a whole spate of these studies, and they might be kind of fun to come up with, but they're actually quite scientifically flimsy when you look into them.
[00:17:11] Jordan Harbinger: One of the biggest ones — and this happened on our show, and I had to sort of eat a little bit of crow on this — I had a guest named Amy Cuddy, and she had that power posing thing. I know you're well aware of that, but for people that don't know, this was the idea that you could — it's so Tony Robbins when you think about it. Like, "You stand up and you raise your arms in the air, and you can either scream or stand in this very dominant body language." And it's like, "Look at what we found." And she did a TED Talk and it got like, I don't know, 80 million views or 28 million or whatever — umpteen million views.
[00:17:40] Stuart Ritchie: Yes. It's the second most-watched TED Talk of all time, I think.
[00:17:43] Jordan Harbinger: Right. And it turned out to just not be real.
[00:17:46] Stuart Ritchie: Well, it was based on one study of 42 people. And it turns out that that study — well, the lead author of that study was not Amy Cuddy. She was one of the, I think, three authors on the study.
[00:17:55] Jordan Harbinger: Sure.
[00:17:55] Stuart Ritchie: But the lead author of the study, Dana Carney, who's at Berkeley, I think — she came out and said, "I don't believe this anymore. If you look back at the way we did the statistics in the original study, we kind of dropped the odd participant out here and there and did it inconsistently. We ran the statistics in such a way that we could pick the results that were positive and kind of hushed up the ones that were negative." And again, this isn't accusing anyone of fraud — it's bias, right? It's bias towards finding exciting results.
[00:18:21] Just like, you know, the psychic study that we talked about, and the nutritional studies. Only in rare cases is this deliberate fraud; it's the whole way that the incentives are set up. And think about the incentive: you will have a New York Times bestselling book, you'll be able to go and give the second most-watched TED Talk of all time, if you publish this study and if you do the statistics in a certain way. So you can see how these incentives are pushing people towards getting the result rather than towards finding true facts about the world. That power posing study — there have been various attempts to replicate it, and I know Amy Cuddy still maintains that the power posing effect is true. And there's a huge debate over that, although the debate tends to be Amy Cuddy versus basically everybody else, who are kind of saying, "Look, this is not reliable."
[00:19:02] Jordan Harbinger: Yeah.
[00:19:03] Stuart Ritchie: Especially the claims in the study about your testosterone rising when you do the power pose — the hormonal stuff really hasn't held up at all. The subjective feeling of power possibly is true, but it may also be that, in the experiment, the comparison was between people slumped over versus people doing the power pose. And it might be more a negative effect of slumping rather than a positive effect of doing a power pose. So the whole thing might be a bad interpretation of the data in the first place. So it kind of has crumbled.
[00:19:29] Jordan Harbinger: Yeah, I mean, I understand. I don't know her well, so I can't be like, "Oh, it's all a bunch of crap so she can get speaking gigs." Like, I can't say that, but also it would be hard for me — I'll speak for myself. If I had a TED Talk and it had however many million views, and I became famous for that, and I was getting speaking gigs all over the world about that, and someone said, "Hey, you know, there's a problem with our data," I probably wouldn't be like, "We've got to tell everyone about this." I'd probably be like, "That's really inconvenient for me economically, professionally. I'm not sure I really want to dive into that rabbit hole right now, because I'm about to retire in five years off of this."
[00:20:00] Stuart Ritchie: Well, this is why I'm suggesting in the book that scientists should think more widely about what a conflict of interest is, right? So at the moment, if scientists are getting paid by a pharmaceutical company, or if they're a consultant for a pharmaceutical company and they're doing a drug trial, it's totally required — you must declare that. In the paper, you must write in the conflict of interest section, "I received money from AstraZeneca or Pfizer," or whatever the company is. But they don't have to say, "I have a conflict of interest in that I published a really popular book and did a really popular TED Talk on this topic. And it would be really," as you say, "really inconvenient for me if the results went a particular way. So I just want to declare that as a conflict of interest." Like, no one does that. And in fact, most scientists would think that was really weird if you said it, but I think people should reconsider.
[00:20:45] I mean, nutrition is another one. It's been suggested that nutritional researchers who are on a particular diet themselves, who are following one particular diet, and then do a study on that diet, should declare that in the paper, because they have a clear interest in proving themselves right. Like, "I haven't been wasting the last five years of my life."
[00:21:01] Jordan Harbinger: Yeah. Right. It turns out all this stuff I've been doing that definitely has sunk cost fallacy mixed into it turns out I was right the whole time. I just want everyone to know.
[00:21:09] Stuart Ritchie: Yeah. Yeah.
[00:21:09] Jordan Harbinger: That's a problem for, I think, medicine too, right? Because if doctors have to rely on what is low-quality science or low-quality evidence — because the alternative is no science or no evidence — then bad science, which is what we're talking about, makes this problem a lot worse. Right?
[00:21:26] Stuart Ritchie: Precisely. You've got to make a decision as a doctor — you've got to say to a patient, "We will treat you with this," or, "I think this is the best thing to do." And if you look through the scientific literature, often the only data is low-quality stuff from small studies. The studies might be statistically dodgy in the ways that we've described. And because of this publication bias idea, where people only publish positive results, you might be missing studies that were genuinely done but have never been published — that just haven't made it into the journals, because the journals weren't interested in their negative results. Scientists have really failed doctors in this case. And by extension, they've failed patients, because they haven't provided an actual, proper, reliable, accurate summary of what has been done scientifically on all of these drugs and treatments.
[00:22:07] And so, yeah, that's why you often see this phenomenon that's been described as a medical reversal, where a much bigger, higher-quality study comes along and totally flips the understanding, totally flips the evidence around on some medical treatment. I mention a few in the book — peanut allergy, for instance. For a long time, people were encouraged to keep their kids away from peanuts when they were babies, because that would be helpful in terms of stopping them from developing a peanut allergy. Then, I think around 2015, a really big, high-quality, long-term randomized controlled trial came along that actually properly tested this, and it turned out it was the absolute opposite. The best evidence was that if you expose young kids to eating peanuts, they will be less likely to develop an allergy in later life. And there are loads of treatments that have had that.
[00:22:48] There's a book called Ending Medical Reversal by Vinayak Prasad and Adam Cifu, which is well worth reading and which I reference in the book, about medical treatments where the advice flipped completely because the evidence was never really there in the first place. And yet that was all doctors had to use. So as I say, scientists have really failed doctors in this respect.
[00:23:04] Jordan Harbinger: Yeah. That's a huge problem because that affects all of us. It's not just science nerds going, "Oh, it turns out this thing is wrong." No — it's when you go to the doctor. And we see this problem now with people taking drugs that aren't really adequately studied because they're afraid of getting coronavirus or something like that. I mean, I'm literally getting emails from people that are like, "Trust me, the gargling with bleach works. I haven't gotten it yet." And I'm like, "Oh my God, you're literally ingesting poison." I get the occasional nasty letter from a crazy person — and I feel bad in a way, because these people are genuinely worried that I'm spreading disinformation, and I'm like, "No, you don't realize: what you saw in the Plandemic movie was fake and fraudulent," and we'll get into fraud in a second.
[00:23:47] Stuart Ritchie: Sure.
[00:23:48] Jordan Harbinger: That's a different thing.
[00:23:48] Stuart Ritchie: Yeah.
[00:23:49] Jordan Harbinger: But can you tell — are there any sort of basic guidelines where, if we're reading a science book or we're making a change in our life, say a diet, based on some scientific finding we've read about on CNBC or whatever we picked on earlier, how do we evaluate how strong the evidence is? How do we tell if it's bullcrap or not?
[00:24:07] Stuart Ritchie: It's really tricky, because different studies will have — you know, some of them will, for instance, be really new, and so it's really hard to tell if they've been properly evaluated. I have a little checklist in the book, in the appendix, of what you should look at in any study. So you can look at how big the sample was. You can look at whether you feel like the authors might have a bias in one direction or another — whether there's a reason for them to be saying what they're saying, whether it's political or, as we've just talked about, other conflicts of interest.
[00:24:33] One of the useful things, I think, is to look for news stories that talk to other scientists who weren't involved. So the whole thing about the psychic study was that me and my colleagues, as independent researchers, came along and tried to replicate it. So what do independent researchers think? It's all very well getting quotes from the scientists who did the study in the first place, but what do other people think? Because, of course, science is all about that — reviewing each other's work, this kind of social process of building up knowledge. So I think you can look for whether any independent scientists have commented on it. There are specific ways to do that.
[00:25:03] In news stories, there are also places like — in the UK, we have the Science Media Centre. Whenever any finding comes out that looks like it might be controversial, they ask a whole bunch of other scientists what they think, and they write a little review of the paper and say, "Well, this one had a kind of tiny sample," or, "This doesn't make sense," or, "They didn't control for this," or whatever, so you can get independent views. And actually, if you've got the URL of the paper, just put it into Twitter and see what people are saying about it. Scientists often spend a lot of their time critiquing other people's work on social media.
[00:25:33] Jordan Harbinger: Reddit will shred science. Even good science is not safe.
[00:25:36] Stuart Ritchie: There are amazing threads on Reddit on some scientific papers. PubPeer is another website where, if there's anything dodgy about a paper in terms of fraud or anything untoward, other scientists can anonymously comment on the paper there. So they can say, "That data doesn't look quite right. There's something a bit funny about this. Can anyone dig into this in a little bit more detail?" Of course, when you look at social media — Reddit and Twitter and so on — you will get the less educated comments and the less high-quality comments too, so you have to bear that in mind. But that's one of my main pieces of advice: especially if you can't access the paper itself, just look at what other people are saying about it and see if there's a general consensus that it looks really good and really solid.
[00:26:14] Or, as has been the case for many of the coronavirus papers — there's been a huge, massive flurry of research appearing on the coronavirus, and a lot of people discussing it. So you get these threads saying, "Here's why this paper is wrong: one, two, three, four, five." So I think just checking what other people are saying, rather than just relying on the results of that one paper and the way it's written, is probably a good idea.
[00:26:34] Jordan Harbinger: I always look on Reddit. There's actually a subreddit called IsItBullshit? And then there's also Rational_Skeptic, Skeptic, DeBunkThis, and you can post things in there and say, "Hey, this paper says this —"
[00:26:44] Stuart Ritchie: Right.
[00:26:45] Jordan Harbinger: "But what's up here?" And you'll find like the top 20 posters in there will absolutely annihilate pretty much anything you dumped in there.
[00:26:52] Stuart Ritchie: Yeah. Yeah. You know, I think that's extremely useful. I mean, what you're asking for there is basically post-publication peer review, right? So peer review tends to be done pre-publication, but we know it's inadequate. All of the papers I discuss in the book — the fraudulent ones, the biased ones, the ones with mistakes in them, the hyped-up ones — almost all of them, and there are dozens and dozens of them, have passed peer review. They've all got through the system that's supposed to be the ultimate quality check. And so we need to be much more open to — as you're talking about there — just asking other people, other experts, or people who just know the field, to dig into a paper and say, "Look, what are the pros and cons of this?" even though it's been through the peer review process. I think we make a huge mistake by saying, "This is peer-reviewed, therefore it's reliable and trustworthy in some way." That's just been shown to be totally inadequate.
[00:27:40] Jordan Harbinger: You're listening to The Jordan Harbinger Show with our guest Stuart Ritchie. We'll be right back.
[00:27:45] Now there are more ways to be a team with Microsoft Teams. Bring everyone together in a new virtual room, collaborate live, building ideas on the same page, and see more of your team on the screen at once. Learn more at microsoft.com/teams.
[00:28:00] And now back to Stuart Ritchie on The Jordan Harbinger Show.
[00:28:06] Let's get into fraud, because this is where — like, you think that's scary, that you can just start a diet that doesn't work, or do something else, and it's peer-reviewed and it turns out to be BS. That can happen by mistake. That can happen via bias. And that's what we see in a lot of the social psychology that we've had on the show. I mean, I've been doing the show for almost 14 years. There are people who come on, and then years later people are like, "Remember this person?" "Yeah, they lost their license," or, "That study turned out to be completely made up." And I'm like, "Oh, oops." And like the power posing thing — if Amy Cuddy were here, I wouldn't say, "You're a charlatan. How dare you?"
[00:28:38] Stuart Ritchie: No.
[00:28:39] Jordan Harbinger: I'd ask her what she thinks. And I'm sure that when she explains it, it sounds perfectly rational, especially to her about why the results are still legit.
[00:28:47] Stuart Ritchie: Yeah.
[00:28:48] Jordan Harbinger: But fraud is much more terrifying and often lethal. In the example you give in the book, there's this doctor who — did he invent trachea replacement? And then that just turned out to be a bad idea, and it killed a bunch of people. Take us through this. This was nightmare fuel.
[00:29:05] Stuart Ritchie: Yeah. Yeah. I started the fraud chapter with this because it's such an amazing story. So Paolo Macchiarini is his name. He was a surgeon at the Karolinska Institute, and the Karolinska Institute is the top university in Sweden, or one of the top universities in Sweden. It's where the Nobel prize for medicine and physiology is decided — it's where they call you up and say, "Stockholm calling, you've won a Nobel prize." It's a seriously respected, august institution, a really great place with a great medical school. He worked there. As you say, he was replacing people's windpipes, tracheas — well, it'd been tried a lot. There's a really difficult problem in science —
[00:29:44] Jordan Harbinger: You said tra-kee-ah? Have I been saying tra-kee-ah wrong my whole life by saying tray-kea?
[00:29:48] Stuart Ritchie: When I recorded the audiobook — I said, "Tray-kea," and the audiobook producer said to me, "Isn't it tra-kee-ah?" And I said, "No, no, I'm pretty certain it's tray-kea." And then we looked it up, and it's tra-kee-ah.
[00:30:01] Jordan Harbinger: Wow. Pretty much all of North America says this wrong. I've never heard anyone say that.
[00:30:07] Stuart Ritchie: It's possible that is a UK-US thing, actually, yeah.
[00:30:09] Jordan Harbinger: Wow.
[00:30:09] Stuart Ritchie: It was my UK producer, and we were in the UK when we recorded the audiobook.
[00:30:14] Jordan Harbinger: Yeah.
[00:30:15] Stuart Ritchie: Should we just say windpipe? I mean, I'm happy to—
[00:30:18] Jordan Harbinger: No, I'm happy for you to say it weird the rest of the show. I'm cool with that. I think — you already have this bizarre, sort of amazing, unusual accent, so a lot of people are going to write to me and say, "It sounded interesting, I just didn't understand any of it." I like it, so go for it.
[00:30:32] Stuart Ritchie: This is just a standard Scottish accent I've got, but—
[00:30:37] Jordan Harbinger: Do you want me to get you back on track?
[00:30:39] Stuart Ritchie: Yeah.
[00:30:40] Jordan Harbinger: Okay.
[00:30:40] Stuart Ritchie: He came up with this idea of taking stem cells from someone who'd had an injury.
[00:30:46] Jordan Harbinger: Okay.
[00:30:46] Stuart Ritchie: Maybe cancer, or for some reason their windpipe was blocked or damaged. And he had the idea of taking stem cells from them, then taking this artificial windpipe and seeding it with the stem cells from the transplant recipient, so that it wouldn't be rejected. That's the big thing in transplants — you transplant a skin graft to someone and it gets rejected because it doesn't work with their immune system.
[00:31:07] Jordan Harbinger: Right. The immune system attacks the new tissue and tries to kill it.
[00:31:10] Stuart Ritchie: Precisely. And that's the ultimate problem in any kind of transplantation. And it looked like in this case he had solved that — or he was at least one of the first people to have successful operations where you replaced a big section of someone's windpipe with this special stem-cell-seeded, electrospun windpipe. It was made of artificial materials — it wasn't from a donor, it was completely artificial, but it had the stem cells in it. And he published papers in some of the world's top medical journals — The Lancet, for instance, one of the most respected medical journals in the world. He published a couple of papers there, and several other journals accepted his papers, saying, "We've made a successful operation on a patient's windpipe. We've made this massive breakthrough."
[00:31:50] It turned out that even though it was being reported as a major success in the papers that he published — and again, these are peer-reviewed publications in the scientific literature — he was just lying about that. He had fabricated the details of the patients, and in fact, several of them had died. One of them had died before the paper even went to press, and they didn't stop to say, "Maybe we shouldn't publish this paper that says it was a really successful operation." He had a kind of second base of operations in Russia, where he was doing more of these transplants, which were failing really badly.
[00:32:19] There's a horrible story that I tell in the book of one of the patients saying, "I have pus coming out of my neck constantly. There's a big hole in my neck and it's just failed terribly." There was a little kid that he did this operation on in the US —
[00:32:31] Jordan Harbinger: Oh no.
[00:32:31] Stuart Ritchie: — who died very rapidly after it happened. And there was never any evidence that this worked, because he fabricated it all. He also fabricated data on the rats in the preliminary part of the experiment.
[00:32:42] Jordan Harbinger: Wow. This is just malignant narcissism, right? He just wanted scientific credit and he's like, "If you have to die for me to get some pats on the back, so be it."
[00:32:50] Stuart Ritchie: Right. And the fascinating thing about it was that he was a con man in other areas of his life too. There's an amazing story in—
[00:32:54] Jordan Harbinger: Surprise. surprise, right?
[00:32:55] Stuart Ritchie: Right. Exactly. So there's an amazing story in Vanity Fair, where he was like having this affair with a — I think like an NBC news producer and he said, "Oh, by the way, we're going to get married. The Pope is going to officiate our wedding. By the way, I'm the personal doctor to the Pope."
[00:33:10] Jordan Harbinger: First red flag right there.
[00:33:12] Stuart Ritchie: But he said, "I am the Pope's personal doctor, and also the Obamas are coming to our wedding, and Elton John's going to be doing music at our wedding," and all this kind of stuff. It turned out he was married to someone else. He had kids the whole time. Vanity Fair contacted the Vatican, and they said, "We've never heard of a doctor with this name." The Pope clearly has not got a doctor named Macchiarini. He'd just been making all this up. So he was a total con man, just a classic con man.
[00:33:35] Jordan Harbinger: Yeah. Sociopathic conman.
[00:33:37] Stuart Ritchie: But isn't it amazing that a con man like that managed to con one of the world's top medical institutions, and the world's top medical journals, and all the peer reviewers who reviewed his work? There were people — some of whom were on the Nobel prize committee — who were lobbying the university: "We've got to get this guy employed. We've got to bring him here. He's great. He's a really amazing surgeon. He's going to change the world." And it looked like he was changing the world. If you just read those scientific papers and took them at their word, he was absolutely revolutionary.
[00:34:04] Jordan Harbinger: It's like some Epstein stuff, right? Where everyone's vouching and you're just like, "What happened?" And is some of this just because normal people don't automatically assume that somebody making an extraordinary claim is a psychopath who murders children? Because I don't know if I would jump to that, right?
[00:34:17] Stuart Ritchie: One of the best explanations for it, I think is that scientists basically don't want to believe that other scientists are just making stuff up.
[00:34:24] And in some cases, that's because they themselves have co-authored a paper with that scientist. That's one of the saddest things about fraud: you can have five authors on a paper, and one of them is fraudulent — has made up data — and none of the others have any idea about it, but it taints all their careers because this one person has committed fraud. It happens all the time. It's a really, really sad thing that happens to people's careers. The investigations go on for years, and it really ruins your life for a long time because someone else has committed fraud.
[00:34:50] But yeah, I think scientists just trust each other. The whole system is based on trust. When you do a peer review of a paper, you're very rarely sent the raw data that goes along with it. In this case, it's not like the peer reviewers were sent the raw medical records of the patients as they were written. They weren't invited to visit the patients and check how they were doing, any of that. They took the claims that were written down in the papers at face value, and it turned out that they were fraudulent.
[00:35:16] Jordan Harbinger: Oh, so even if you're checking someone's work, you're already getting a version that has been manipulated, has shellacking over it, has possibly been cherry-picked.
[00:35:26] Stuart Ritchie: Yeah.
[00:35:26] Jordan Harbinger: It's already been pre-selected. I mean, I guess it makes sense, because otherwise you'd have to do the whole study again, kind of. But it also means that you just can't catch—
[00:35:33] Stuart Ritchie: Yeah.
[00:35:34] Jordan Harbinger: That explains why some of this fraud just seems so lazy.
[00:35:36] Stuart Ritchie: Right. Because they assume that no one is really going to go in and really check.
[00:35:39] Jordan Harbinger: Right.
[00:35:39] Stuart Ritchie: Because it's extremely rare that anyone actually sends out the raw data. I think in the last few years it's been increasing a little bit — more people have put their data online instead, and you can go and download it and do some checks, just make sure it looks okay and double-check that what they're reporting in the paper is not completely divorced from what's in the data. You can argue about how to analyze the data — obviously, that's a big part of science, saying, "Well, no, I think you should use this model." That's all fine, that's all totally legit. But it's whether the data actually exists in the first place — whether, when you look at those raw numbers, which often people do for the first time in fraud cases, they say, "Oh God, these raw numbers are just impossible. There's no way that this could have happened in reality. This is not what a real dataset looks like."
[00:36:18] Jordan Harbinger: One of the tells you wrote about in the book was that real numbers are noisy. So when the data is really clean — like all the numbers are even, or they're all within, like, X of each other —
[00:36:26] Stuart Ritchie: Yeah.
[00:36:26] Jordan Harbinger: It just doesn't make any sense, because real numbers are, as you say, noisy. It's dirty data. It's like, "Oh, there's one person who has a value that's way over there." "Well, okay, they did the thing wrong." "This other person didn't fill out the questionnaire. This other person lied." That's all in a real raw dataset — you just have people who are way off base. But if everybody fits neatly into the area you needed to reach the conclusion, you end up with all these different fallacies, one of which I think is called the Texas sharpshooter fallacy, where you just take all the data that shows what you wanted, draw a circle around it, and get rid of everything else. Or there's cherry-picking —
[00:37:01] Stuart Ritchie: Yeah.
[00:37:01] Jordan Harbinger: — and things like that that go on, where you just pick the samples of people that support what you want, and everybody else gets tossed out, and you're like, "Look, it works."
[00:37:09] Stuart Ritchie: It's really difficult, actually, to draw a line between fraud on one end — where people are, in some extreme cases, just opening up an Excel spreadsheet and typing in the numbers that they want — and what I call in the book p-hacking. There's a blurry line where that becomes analytic decisions. It's called p-hacking because you're trying to get your p-value, which is this really important statistic, below a certain level. And that's where the cherry-picking and all that you just described comes in.
[00:37:31] So the line between fraud and doing stuff with the statistics that you kind of know is going to give you the result that you want — or you kind of know is going to push your data in the right direction — is really hard to draw. But in some of the really extreme cases that I talk about in the chapter on fraud in the book, it's obviously made up.
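[Editor's note: the p-hacking Stuart describes is easy to demonstrate with a quick simulation. This sketch is our illustration, not anything from the book: if you try enough arbitrary analytic choices on data with no real effect, a "significant" result turns up far more often than the nominal five percent.]

```python
# A minimal sketch of p-hacking on pure noise. Every name here is
# illustrative; trying several fresh comparisons per "experiment" stands in
# for dropping the odd participant or slicing the data different ways.
import random
import statistics

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    se = (va / len(a) + vb / len(b)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

def hacked_experiment(rng, n=20, tries=10):
    """Both groups come from the SAME distribution (no real effect).
    We 'analyze' the data `tries` different ways and keep the best |t|."""
    best = 0.0
    for _ in range(tries):
        treat = [rng.gauss(0, 1) for _ in range(n)]
        control = [rng.gauss(0, 1) for _ in range(n)]
        best = max(best, abs(t_statistic(treat, control)))
    return best > 2.0  # roughly the p < 0.05 cutoff at this sample size

rng = random.Random(42)
rate = sum(hacked_experiment(rng) for _ in range(500)) / 500
print(f"'Significant' results from pure noise: {rate:.0%}")
```

With ten shots at significance per experiment, far more than five percent of these no-effect experiments come out "significant," which is the whole incentive problem in miniature.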
[00:37:48] Jordan Harbinger: Yeah.
[00:37:49] Stuart Ritchie: You look at the data. One of the worst fraudsters is the guy who has had more papers retracted than any other human being in history. So obviously—
[00:37:56] Jordan Harbinger: What an accolade, right? Like, "I am the most discredited scientist that's still breathing."
[00:38:01] Stuart Ritchie: It's like the anti-Nobel prize — the worst possible thing you can get in science. I think he's had 182 papers retracted. He's a Japanese anesthesiologist called Yoshitaka Fujii. There was an analysis done of his trials — these are all trials of new anesthesiology treatments — and the experimental group and the control group had basically identical variation around their means. It was almost like made-up data where he'd added one to every score or something to make it look different. So it was really obvious when you look at it: these trials never actually happened. These trials could not have been real — no real trial would have been so regimented and perfect looking, pristine looking like that.
[00:38:42] And yet — I think when people looked at his publication record, 182 of his trials were fake, and I think they said three of his publications were maybe based on actual, real stuff that he'd done.
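[Editor's note: the "too perfect" tell can be sketched in a few lines of code. This is our simplified illustration, not the actual published analysis of Fujii's papers: across many honest trials, the reported standard deviations themselves fluctuate, whereas fabricated numbers are often suspiciously uniform.]

```python
# In honest trials, the group SDs vary from study to study just by sampling
# noise; fabricated SDs tend to be nearly identical. A hypothetical check:
import random
import statistics

def honest_trial_sds(rng, n_trials=50, n_per_group=30):
    """Group SDs you'd actually observe across many real trials
    sampling the same population each time."""
    return [statistics.stdev([rng.gauss(0, 1) for _ in range(n_per_group)])
            for _ in range(n_trials)]

def sd_spread(sds):
    """Coefficient of variation of the reported SDs: how much they
    fluctuate relative to their average."""
    return statistics.stdev(sds) / statistics.mean(sds)

rng = random.Random(7)
real = honest_trial_sds(rng)
faked = [1.0 + rng.gauss(0, 0.005) for _ in range(50)]  # suspiciously uniform

print(f"real trials:  SD spread = {sd_spread(real):.3f}")
print(f"faked trials: SD spread = {sd_spread(faked):.3f}")
```

The real trials show an order of magnitude more spread in their SDs than the fabricated set, which is the kind of "these numbers are incredibly nice" signal described next.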
[00:38:55] Jordan Harbinger: Jeez.
[00:38:55] Stuart Ritchie: But you know, the first flags of this happened about a decade before he got fully busted for fraud. Someone published a paper whose title was something like, "The statistics reported by Fujii are incredibly nice." And they didn't mean it as a compliment — they meant the numbers looked too good to be true, far too perfect. That was discussed at the time. And then about a decade went by, with him still publishing fake studies, before someone else came along and said, "I have now found definitive evidence that this guy is making up the studies. These can't possibly be true."
[00:39:24] And that was the case, by the way, with Paolo Macchiarini, the windpipe surgeon. The allegations came out from whistleblowers — other doctors who were looking after the same patients. They said, "These patients are doing terribly, and this just doesn't fit with what he's written in these scientific papers that are being spread around the world." They went to the university and said, "We think something really bad is happening here." The university got someone independent to do an investigation — an investigation which, by the way, found that there was scientific fraud happening. And then the university said, "Actually, you know what, we've done our own investigation. We've completely discredited that independent one, and we totally believe that this guy is truthful."
[00:40:02] The journal itself, The Lancet, one of the world's top medical journals, ran an editorial that said, "Paolo Macchiarini is not guilty of misconduct." Then, just a few weeks later, all the stuff about the Pope came out, and there was this documentary where they actually went and met some of the patients and saw the terrible condition they were in. After that came out, they all had to humiliatingly climb down and say, "Okay, we were wrong. Actually, this guy has been fraudulent all along." So in this case you had big medical institutions covering up — they were on his side, on the side of a psychopathic fraudster whose work was leading to the deaths of his patients.
[00:40:37] Jordan Harbinger: There are examples in the book where people were Photoshopping the images in their dataset, right? Like they'll take one picture and they're like, "Look!" And then they do a mirror image of the same thing and they're like, "Here's another sample." It's crazy to me, some of the stuff you listed here.
[00:40:49] Stuart Ritchie: It's like microscope pictures of cells or whatever. The peer reviewers are obviously just not looking at every single one and paying enough attention. And as you say, people can duplicate — they can Photoshop things, touch things up, flip things around, recolor things to make it look more like the result they want is true. And it happens all the time. It's really, really common in biology research. There are researchers out there who specialize in trying to detect this, and they're trying to make AI algorithms that go through papers and say, "Wait a minute, this picture looks identical to this picture, so there's something wrong here." And it's the same principle as the data thing: this looks too good, too perfect. There's no way that in reality this cell would look absolutely identical in every way to that cell, so someone must've touched it up.
[00:41:33] Jordan Harbinger: Imagine what's going to happen when we have AI that can go over all the past scientific datasets and go, "Hey, there's a problem here." We're going to end up being like, "So it turns out that 30 percent of all the stuff we thought was human knowledge was just complete bullcrap made up by somebody who wanted to get tenure at a university."
[00:41:48] Stuart Ritchie: Genuinely a chilling thought. And the only reason that wouldn't happen for a lot of datasets is because scientists have hidden their datasets and they've got them in a drawer—
[00:41:55] Jordan Harbinger: Right, because—
[00:41:55] Stuart Ritchie: — they got them in a drawer somewhere.
[00:41:56] Jordan Harbinger: — we can't even get the info.
[00:41:57] Stuart Ritchie: Yeah, exactly.
[00:41:58] Jordan Harbinger: Ugh, freaking infuriating. So if top journals have fraud, then less prestigious journals probably — would you say they have even more fraud most likely?
[00:42:06] Stuart Ritchie: It's hard to know. It's like this question of — all the frauds I talk about in the book are frauds that have been caught, right? So they've screwed up in some way. They've made their data look too pristine or sometimes they've admitted it or whatever. Also, another reason that the fraudsters that published papers in the very top journals might get caught more often is because of the attention that that draws. It seems like a really dumb strategy to publish your fraudulent data in a really top journal because everyone's looking at it.
[00:42:27] Jordan Harbinger: Yeah.
[00:42:28] Stuart Ritchie: The world is looking at it. There are news stories about it. A huge breakthrough has happened in whatever field. Someone's going to look at it. And eventually, if enough people look at it, someone will say, "Just a second, that doesn't make sense," and then start digging into your fraud. As you suggested, maybe the best thing to do is to hide your fraudulent paper in a journal that's less prestigious, less well-read, less well known. And undoubtedly, that must happen a great deal. It's just that the scrutiny isn't there, so you just don't pick up on it. So there's this weird problem of selection bias, where it looks like all the fraud is in the top journals. Fraud might even be worse outside the top journals, but it's probably just not getting noticed as much in the less prestigious ones.
[00:43:04] Jordan Harbinger: I looked at retraction data — and these are rounded numbers, of course, speaking of perfect data — and it looks like around 40 percent of papers that are retracted are retracted because of a mistake, which happens, and is forgivable, and is something that just happens.
[00:43:19] Stuart Ritchie: It's actually good if you've made a mistake and you say, "Hands up, I made a mistake, please retract my paper." That's actually really respectable. There shouldn't be a stigma against it if you put something out there that's objectively wrong. You know, I have huge respect for scientists who say, "Oh my God, I made a mistake. I'm so sorry. Please retract this, and we'll do better next time." That's great.
[00:43:38] Jordan Harbinger: And then 20 percent looks like outright fraud, which was a much higher number than I really wanted to see, honestly.
[00:43:44] Stuart Ritchie: Yeah.
[00:43:44] Jordan Harbinger: That was a bit much.
[00:43:45] Stuart Ritchie: Yeah, it's really, really worrying — the fraud, as well as plagiarism and other kinds of problems, actual bad behavior on the part of scientists. We want to get to the point where there's not a big stigma and retractions are fine: you can retract things when you make mistakes, and only a minority of those retractions are for the real bad characters, the people who are actually making up the data.
[00:44:05] Jordan Harbinger: What scares me is that something like four in 10,000 studies are retracted. Which to me doesn't mean that there's not that much bad science. It just means that most people are not getting caught doing this.
[00:44:13] Stuart Ritchie: Yeah, absolutely. And there are undoubtedly fraudsters who are really clever about the way they fabricate their data, who make datasets that look convincing and normal and not too pristine — that look like a real dataset generated by actual events in the real world rather than by a human brain. And so they're never going to get caught, and they may even be more dangerous to our scientific knowledge than the ones who have been caught. And we'll never know.
[00:44:38] Jordan Harbinger: From your book, I saw that the biggest perpetrators of fraud are repeat offenders who make up a large share of retractions. You mentioned that Japanese anesthesiologist who has — I don't know, how many?
[00:44:47] Stuart Ritchie: It's like 182 retractions or something like that, a ridiculous number.
[00:44:50] Jordan Harbinger: That's a hell of a lot. I was going to say 18, but then I was like, am I off by an order of magnitude? How are guys like this still able to publish and work? How are scientists like this able to get funding, work in a lab, and put a paper out there? How is it not just, "Oh, more of this nonsense from this guy, I don't even want to open this email, delete"?
[00:45:10] Stuart Ritchie: That's often what happens after the fraud allegation is made. But the problem is that the fraud allegations get made and then sometimes years pass before any actual decisions are made by the universities. And, you know, you can understand why that is because universities —
[00:45:21] Jordan Harbinger: That's embarrassing.
[00:45:22] Stuart Ritchie: Right. They're embarrassed. They're on the side of their academics, usually against some random person from the internet who's emailing in. And also, you know, innocent until proven guilty — they do actually want to check. It does sometimes happen that people point out what they think is a fraudulent study, and it turns out it wasn't fraudulent; they've just misunderstood. So you can see why universities take their time. I wouldn't want my university, if someone accused me of scientific misconduct, to just say, "You're fired." I'd want them to do a proper investigation.
[00:45:49] Jordan Harbinger: I think once you're on your 117th fraudulently retracted paper, they might be like, "You know, we've given you a few strikes here, buddy."
[00:45:57] Stuart Ritchie: The problem is that it's often all retrospective. One fraudulent paper is discovered, and then people look through all the old publications and go, "Oh my God, this is really, really, really bad." It's like plagiarism — you find this with not just scientists, but other authors who plagiarize, bloggers who plagiarize stuff, journalists who plagiarize stuff. If you find one instance of plagiarism and look back through the person's work, you'll probably find more, because this is like a personality trait. It's something people can't help; it's just the way they are. And it's the case with fraudsters too: you find the first instance — the first discovery of fraud in someone's publication record — and when you look back at all the stuff they've been publishing, and often they're quite prolific, you'll often find a lot more. All I'm really saying there is that personality traits exist. Some people are antisocial in their personalities.
[00:46:42] Jordan Harbinger: Yeah.
[00:46:43] Stuart Ritchie: And that feeds into the way they do science too.
[00:46:45] Jordan Harbinger: I mean, that totally makes sense. If you take a thousand scientists, you're going to find, I would assume, a roughly equal percentage of people who are delusional or narcissistic, just like you would if you took a big group of lawyers or anybody.
[00:46:55] Stuart Ritchie: Absolutely, absolutely.
[00:46:56] Jordan Harbinger: Possibly more in the lawyer group — I'm saying this as an attorney. Maybe I've just run across a few more due to exposure. That could be my own bias.
[00:47:05] Stuart Ritchie: I don't want to make any assumptions there.
[00:47:07] Jordan Harbinger: Who's committing this fraud, though? I do see that there's more in India and China, and I thought that was interesting. Why do we think there's more fraud happening in India and China? I mean, there's plenty happening in the West as well, I just want to be clear, but why is more happening from that part of the world?
[00:47:26] Stuart Ritchie: It's hard to know. In the case of China, I have a quote from the doctor and writer Steven Novella, and he suggested that the totalitarian regime in China is not really conducive to doing science properly. Living under a totalitarian regime—
[00:47:39] Jordan Harbinger: I can see that.
[00:47:40] Stuart Ritchie: — is not the best atmosphere for a free exchange of ideas and criticism of people's ideas and so on. And he particularly points to this study of acupuncture trials where 100 percent of those trials showed that acupuncture works. Even if acupuncture really does work, you're going to find the odd trial that shows it doesn't, because—
[00:47:56] Jordan Harbinger: Right.
[00:47:56] Stuart Ritchie: — as we've talked about, numbers are noisy. Statistics are messy; sometimes you're going to undershoot the real effect of a treatment. So it's really unlikely that a hundred percent of trials that have ever been done in China on acupuncture did actually show a positive result. Something is going on there. And what he suggests is that acupuncture is part of traditional Chinese medicine, which is favored by the Chinese Communist Party because it was kind of codified by Chairman Mao. And so this might be something they want to support — in particular, they reward scientists who come up with support for this kind of traditional Chinese medicine idea. And that gives a huge incentive for people to make up their data.
[00:48:35] Jordan Harbinger: This is The Jordan Harbinger Show with our guest, Stuart Ritchie. We'll be right back.
[00:48:41] Now, there are more ways to be a team with Microsoft Teams. With together mode, you can bring everyone together in one space in the same virtual room. You can bring the power of true collaboration to your projects with a whiteboard, drawing, sharing, and building ideas in real-time, all on the same page. And with a large gallery view, you can see more of your team all at once with up to 49 people on screen all at the same time. You can even raise your hand virtually so everyone can be seen and heard. When there are more ways to be together, there are more ways to be a team. Learn more about all the newest Teams features at microsoft.com/teams.
[00:49:15] Before we get back to it, I wanted to thank you for listening to and supporting the show. It means the world to me. Supporting the advertisers, by the way, that's what keeps the lights on around here, so go check out the deals. We put them all on one page, go to jordanharbinger.com/deals. Everything's there with the codes and all that. Also, we have worksheets for every episode of the show in case you didn't know that. The link to those is in the show notes at jordanharbinger.com/podcast. And the sponsors are in there too. Now for the conclusion of our episode with Stuart Ritchie.
[00:49:47] This reminds me in a way of fascist and totalitarian regimes finding their way into science. And it reminds me of — this is obviously more sinister — the Nazis and eugenics, where they were like, "Hey, look, here's this scientific basis for why we hate people that aren't 'Aryan race' people." And the acupuncture thing is a little more benign, I would think—
[00:50:08] Stuart Ritchie: Yeah.
[00:50:08] Jordan Harbinger: — because it didn't result in a Holocaust or mass genocide, but it's not that different in that the messaging is, "Hey, we're the government. We control everything, including who pays you and whether you can have a job. So you'd better find that this thing that was invented in China and contributes to our national identity is real and not a bunch of fake stuff" — a bunch of old wives' tales and traditional garbage that we just still happen to be doing in the country.
[00:50:32] Stuart Ritchie: Precisely. I mean, the other historical example that — this, as you say, it happens in totalitarian regimes, fascist regimes, the other like classic historical example is Lysenkoism in Soviet Russia.
[00:50:43] Jordan Harbinger: What's this?
[00:50:44] Stuart Ritchie: Trofim Lysenko was a Soviet biologist who basically didn't believe in genetics as we normally know it and was a big fan of Lamarckian inheritance — that is, the idea that you can acquire a trait and then pass that trait on to your kids.
[00:50:59] Jordan Harbinger: Like if I'm reaching up for something and I get taller, then my kids are tall.
[00:51:04] Stuart Ritchie: That sort of thing. Exactly. It's like that giraffe—
[00:51:05] Jordan Harbinger: Okay.
[00:51:06] Stuart Ritchie: — neck story, or you get yourself really huge muscles from working out, weightlifting and stuff, and then that will somehow pass on to your kids. And so the standard rules of genetic inheritance were really strongly denied, and this Lamarckian or Lysenkoist view of inheritance was pushed, and scientists who pushed back against it were basically purged. Many historians think this contributed really substantially to the famines that struck both Soviet Russia and Maoist China, because they basically denied the biology of how to breed crops, and this had massive effects. The evil of eugenics, and its evil mirror image of Lysenkoism, have killed an awful lot of people over the years. And this is because totalitarian regimes have come in and trampled all over science and told scientists what to do.
[00:51:49] So it always freaks me out when I see politicians of any stripe trampling over science, or making a really strong scientific claim that doesn't seem to be in line with the evidence. I mean, you've seen all the stuff about hydroxychloroquine, and there have been all sorts of claims made about COVID-related stuff. But also when you see scientists who seem to have really strong political opinions about something or other, that freaks me out a little bit as well. Now, it's naive to say that scientists shouldn't have political opinions and so on.
[00:52:14] Jordan Harbinger: Sure.
[00:52:14] Stuart Ritchie: But I wonder if they should declare them sometimes as well. And we've talked about declaring your interests and I think there's a discussion to be had about whether scientists should say, like, "I am a member of the communist party, and this may have influenced my view." "I'm a member of the Republican party and I'm doing this trial on hydroxychloroquine and maybe this is going to influence my view because I kind of want to show that Donald Trump is right when he says the hydroxychloroquine is a good treatment for COVID-19."
[00:52:38] I actually think about one of the recent examples of a retraction related to COVID-19, which was again in The Lancet, and also in the New England Journal of Medicine, which is meant to be the top medical journal in the world. They had to retract this paper on hydroxychloroquine that said hydroxychloroquine was bad. I have a suspicion that those scientists overlooked what turned out to be massive problems with their data — the data may even have been made up by one of the co-authors, it's unclear. They overlooked those massive problems because, basically, they wanted to get a win over Donald Trump. They wanted to say—
[00:53:07] Jordan Harbinger: I think so, yeah, because I was going to say scientists are usually more liberal versus conservative.
[00:53:12] Stuart Ritchie: Right.
[00:53:12] Jordan Harbinger: So the odds of them being like, "I want to prove Donald Trump right." Actually, it's going to probably be the other way around where they're like, "You know what, we're going to embarrass this guy. I don't even want to look at this. I'm going to have a bias that I don't necessarily see." They're not necessarily deciding to do this, but they're like, "I want this to be wrong."
[00:53:25] Stuart Ritchie: Right. And there's evidence for some of that. I talk about this in the book when I talk about negligence, when scientists have made a mistake: there are these algorithms that are trawling through scientific papers to find errors in the statistics. And it turns out that errors are more likely to be in favor of the scientists' hypothesis, right?
[00:53:42] Jordan Harbinger: Yeah. Go figure.
[00:53:43] Stuart Ritchie: It implies that if you find a result that goes against your hypothesis, you're more likely to check it. "Oh, it turns out to be an error. Fair enough." But if you find a result that supports your view and is really great for your hypothesis, you're like, "Oh, too good to check. I'll just move on to the next thing." So this bias towards finding results in a particular direction stops us from being skeptical about our own research, which is the whole point of science, right? It's being skeptical about stuff — skeptical about our own research, about other people's research — and it really stops that process of skepticism in its tracks.
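An aside on those error-hunting algorithms: tools in this family (statcheck is a well-known one) recompute a paper's p-values from its reported test statistics and flag mismatches. Here's a minimal sketch for a two-tailed z-test; the function names are illustrative, and real tools also handle t, F, and chi-square statistics parsed out of paper text.

```python
import math

def p_from_z(z: float) -> float:
    """Two-tailed p-value for a z statistic, via the normal distribution.

    P(|Z| > z) = erfc(|z| / sqrt(2)) for a standard normal Z.
    """
    return math.erfc(abs(z) / math.sqrt(2))

def consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Check a paper's reported p-value against the recomputed one.

    Returns False (i.e. flags the paper) when the reported p-value
    doesn't match the test statistic within a rounding tolerance.
    """
    return abs(p_from_z(z) - reported_p) <= tol
```

For example, `consistent(1.96, 0.05)` passes, since z = 1.96 really does give a two-tailed p of about 0.05, while `consistent(1.96, 0.01)` would be flagged as an inconsistency worth checking.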
[00:54:13] Jordan Harbinger: How do fraud and/or just bad science and those consequences bleed out into industry and into things like drug research and medical treatment? I mean, we've seen some of this anti-vax bullcrap from the discredited doctor Andrew Wakefield — this is now in the zeitgeist, if you will. And people say, "Ah, there's science that says this, and it's being hidden by the mainstream media," or whatever boogeyman people are looking for. Does this actually affect drug research and medical treatment, or do we catch it before that happens?
[00:54:43] Stuart Ritchie: Sadly, it really does affect medical research. For instance, medical trials are required by law to be registered now. Since about the early 21st century, if you're doing a medical trial on human subjects, you have to register it — by which I mean you have to go onto clinicaltrials.gov if you're in the US and say, "We're going to do a trial of hydroxychloroquine for COVID-19. The outcome that we're checking is COVID-19 symptoms." Now suppose you find that the drug doesn't affect COVID-19 symptoms but does reduce people's headaches, say — you measured headaches and just happened to find an effect. It's really, really common for a process called outcome switching to occur, which is where you say, "Well, we were never really interested in COVID-19 symptoms to begin with. What we really wanted to look at was headaches." And so you write up the paper as if it was always about headaches, a new drug tested for headaches, and probably on page 55 of the paper you say, "Oh, we also looked at COVID-19 symptoms." I'm using that as an example — I'm not sure there's a study that has actually done that — but this outcome switching thing happens a lot. And it's just like the cherry-picking stuff we talked about before. Medical trials are often run by pharmaceutical companies, though it does happen in government-funded trials too. If you don't get the result you want, you just pretend you were looking for something else all along.
[00:56:04] Jordan Harbinger: The Texas sharpshooter fallacy. I explained it badly before.
[00:56:08] Stuart Ritchie: No, you explained it very well. Basically, you find a result because of the randomness of where the numbers fall. And this is the Texas sharpshooter who goes up to a barn and randomly shoots all over the side of it, then goes over, finds where there's a little cluster of bullet holes, and draws the target around that. You're drawing the target after you've made the shots. And that's what scientists are doing a lot of the time with this outcome switching thing. And it's really quite blatant, right? Because there's a record — they have registered it, they've written down their original prediction, their original hypothesis. And then they're just like, "You know, let's do something different. Let's just write the study up as if it was something else."
[00:56:42] And there have been analyses of this that really show the numbers: large proportions of medical trials have changed somewhere along the way and aren't looking at the outcome they originally planned to. And this is a recipe for finding false-positive results, where you think you've found something and there isn't actually anything there — it's just statistical noise. And this happens in industry. As you say, this is where the incentives of academia meet the incentives of industry, which is: we want to find results, we want to find exciting things that we can then market.
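The false-positive inflation from outcome switching is easy to demonstrate with a simulation. In this minimal sketch (all names illustrative), a drug that does nothing is tested against several outcomes, and whichever outcome happens to look best is the one "reported":

```python
import random

def best_p_value(n_outcomes: int, rng: random.Random) -> float:
    # Under the null hypothesis (the drug does nothing), each outcome's
    # p-value is uniformly distributed on [0, 1]. Outcome switching means
    # reporting whichever outcome happens to look most significant.
    return min(rng.random() for _ in range(n_outcomes))

def false_positive_rate(n_trials: int, n_outcomes: int,
                        alpha: float = 0.05, seed: int = 0) -> float:
    """Fraction of null trials that report a 'significant' result."""
    rng = random.Random(seed)
    hits = sum(best_p_value(n_outcomes, rng) < alpha
               for _ in range(n_trials))
    return hits / n_trials
```

With one pre-registered outcome, the rate stays near the nominal 5 percent; with ten outcomes to pick from, it climbs toward 1 − 0.95¹⁰ ≈ 40 percent — the Texas sharpshooter drawing the target around the bullet holes.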
[00:57:12] Jordan Harbinger: This is the most important part of the show here. I want to talk about how bad results don't mean we can just decide not to believe in science anymore, or choose what to believe without any evidence. Because I see something like a Plandemic debunking video, and the YouTube comments are terrifying, because nobody addresses the content of the video. It's just, "You're a big pharma shill," or, "No, look at this YouTuber" — who's like a guy in a garage — "he's got all the real evidence." And I'm like, "No." I'm disagreeing with them, so I'm wrong, and now you're just going to hurl an insult. It's crazy, the lack of critical thinking. But people will hear something like this and go, "See, science is also bad and has just as many errors as the guy who made some crap up in his living room this morning and put it on a blog."
[00:57:56] Stuart Ritchie: Yeah. There are a few responses to that. The first one is that all of the stuff I describe in the book — the fraud and the bias and all that — was discovered by other scientists. Eventually, even though it took longer than it should have. And I think there are real problems with the way science is set up — it's set up to find positive results and so on — but it's not a big conspiracy to dose the population by giving them vaccines and inserting microchips in them, as I believe one of the vaccine conspiracies is currently saying.
[00:58:23] Jordan Harbinger: Right.
[00:58:23] Stuart Ritchie: What I mean by perverse incentives and the way the system is set up is that we're incentivizing scientists to find exciting research results rather than true ones. That's the situation we're in. All of that stuff was discovered and pointed out by other scientists. The process of science has eventually worked here, even if it took longer than it really should have. The other response my book is arguing for is for us all to raise our standards for what we count as scientific evidence.
[00:58:49] At the moment, we've got this not very clear process of peer review — sometimes it works, sometimes it doesn't. You find a peer-reviewed study and you won't really know whether it's reliable or not, when you should know. That should be the kind of quality seal where you're like, "Yeah, I can kind of trust this." We've clearly shown that that's not the case. What I'm asking is for people to raise their standards — to be more skeptical of data, more skeptical of results, more skeptical of interpretations. And I think if you applied that sort of reasoning — really serious statistical analysis, pre-registering your analysis before you look at the results, all that sort of stuff — if you applied that to the average claim by a vaccine denier, or the average claim by a creationist, or any of these conspiracy theorist, anti-science, science-denier type people, their claims would crumble apart in a second. They're even worse.
[00:59:36] For instance, when was the last time you saw an anti-vaccinator retract a paper — even one in the phony journals that they've got — or criticize another anti-vaxxer? These people are clearly biased in one direction. They're not applying the principles of science that I advocate in this book: constant skepticism, sharing results with each other and with the world, and so on. Think about the people who deny the science we can rely on — vaccines that have been studied within an inch of their lives and in most cases are safe, and scientists should be open about the cases where they've been found not to be safe. Things like the MMR, which is the Andrew Wakefield stuff you mentioned, really don't cause autism; the evidence is very strongly against it causing autism, which was the original claim in the Wakefield paper. If you look at that data, it's really clear. But the fact that there are people who say MMR might cause autism, or are worried about MMR, means that we in science need to raise our standards even more. We need to say, "Look, we're nothing like what you claim. We're not hiding anything. We're being open with our research. We're being transparent. We're not doing what you're claiming we're doing" — which is making a kind of tacit agreement not to discuss the flaws.
[01:00:45] In fact, I had an email from a prominent chemistry professor from one of the University of California system universities, just the other day, who said, "You're doing damage to science by writing this book because—"
[01:00:56] Jordan Harbinger: I could see why they think that.
[01:00:58] Stuart Ritchie: Right, people are going to read your book, or read the op-eds you've written or whatever, and say, "Well, I deny science." And basically the implication of his email — and he said, "I agree with you about the problems. I agree with you about the empirical stuff that you talk about" — what he's claiming is that scientists should have all these discussions in the ivory tower and not mention them to the public, not talk about anything at all. And I think that's a recipe for disaster, and actually a recipe for feeding the conspiracy theories, because it is actually making a conspiracy. It's a conspiracy of silence. It's saying, "Let's not let the man in the street, the hoi polloi — let's not tell them about any of these problems. Let's just work it all out ourselves."
[01:01:35] I don't think that's adequate. I don't think that will work. Sunlight and transparency are the only things that will work here. That's part of this whole open science thing I advocate in the book: people should be able to read the papers, anyone should be able to look at your data in pretty much all cases, and anyone should be able to see the flaws in it. Transparency is what we're after here.
[01:01:55] Jordan Harbinger: Yeah. I mean, that's what's beautiful about science, right? We can crack this open, shine a light into every corner of every element of science, show the problems with it, and then get better. And other disciplines — or fake disciplines, fake science — can't and won't do that, because they can't stand up to any scrutiny. So they just go to their convention and sell their self-published books, and that's the end of it.
[01:02:15] Stuart Ritchie: You know, I talk about these norms of science — one of which is organized skepticism, and there's universalism and communality and all these really important ones. Disinterestedness is one of the most important aspects of science: we shouldn't come into science with an ideology, or with an interest in one particular finding or another, or any kind of political or other set of beliefs that could push us in one direction or the other.
[01:02:40] Charles Darwin described it as: scientists should have nothing but a heart of stone when it comes to the results. So if they do an experiment and it turns out their theory is completely wrong, they should write up that paper and publish it the same way they would if a similar experiment had shown them to be right. I used to spend a lot of time, back in 2005, 2006 — when this felt like it was a real problem, before we had real problems and—
[01:03:02] Jordan Harbinger: Even bigger problems.
[01:03:03] Stuart Ritchie: I used to spend a lot of time reading creationist forums and creationist stuff, because I found it really fascinating that here was evidence as good as evidence gets for any theory — the theory of evolution — and yet there were still these people denying it. And you could see that they often explicitly came in and said, "If it goes against the Bible, I'm not going to believe it, no matter what any scientific paper about evolution says." That's completely the opposite of what science should be about. And that's really what's happening in these cases. They don't have a Bible, maybe, but the anti-vaccinators are very strongly taken with the latest Wakefield book or the latest stuff written on the anti-vaccine sites. And this is clearly their ideology, their preconception, that they're bringing to it. That's anti-science — and not just anti the results that are published in scientific journals, but anti the whole scientific process and the whole way that science should work.
[01:03:51] Jordan Harbinger: In closing here, how do we incentivize properly? Because to hype up some fake results or some exaggerated results, that's how a lot of scientists get funding. That's how they get speaking gigs. That's how they end up getting tenure because they got a lot of social media followers. And a lot of people want to take their classes. How do we set up the incentives properly? Because they clearly are not proper right now.
[01:04:10] Stuart Ritchie: Universities are kind of aware that the way they hire scientists, and the way they promote scientists and give them tenure and such, is clearly not working. And a lot of universities are now changing the way they run their tenure committees, so they don't just ask, "How many papers do these scientists have on their CVs? We'll hire the one with more papers" — because that encourages scientists to just endlessly publish low-quality research. Journals are now actually changing the way they work too, so they don't just say, "We want your most exciting results," but say, "We'll publish replication studies."
[01:04:43] Funders can change too — you know, they don't want egg on their face when they've poured thousands or millions of dollars into a study that turns out not to replicate. So they can require that scientists not just publish a paper in the flashiest journal, but share the data with the world. They'll incentivize scientists to be good scientific citizens: create new software that checks for errors, create new ways to be transparent and share data, create new methods that really rigorously analyze data.
[01:05:10] And I think just talking about this stuff — just you and I talking about this now — is part of how we change the incentive system, because we've got good reason as scientists to look back at some of these real failures, the frauds, the biases, and feel shame about them. Just having this discussion about the replication crisis and the way the incentives in science are so badly wrong is, I think, half the battle of moving those incentives to a better place. And that's really why I wrote this book.
[01:05:33] Jordan Harbinger: Stuart Ritchie, thank you so much for coming on the show today. This was really interesting. I mean, it's a little niche — I was a little skeptical. "Oh my God, you're going to talk about the problems in science, whatever. How am I going to make this mainstream?" But you did a great job of making this palatable for an audience of geeks, if maybe not scientists.
[01:05:49] Stuart Ritchie: Yeah. Thank you so much. Great to be on. Thanks.
[01:05:53] Jordan Harbinger: I've got some thoughts on this episode as usual. But before I get into that, here's a preview of my conversation with Austin Meyer. He's a software developer who exposes patent trolls and how they shake down innocent victims using legal loopholes and abuse of the system.
[01:06:08] Austin Meyer: I was working at a trade show in Oshkosh, Wisconsin, where I was sitting there in a sweltering, hot aircraft hangar, showing X-Plane my flight simulator to a steady parade of sweaty pilots, wandering through the hangar to look at my various wares, and all of a sudden, the phone rings.
[01:06:24] "Hello. I noticed you've been sued for patent infringement. I'll be happy to represent you for a price."
[01:06:30] Austin Meyer: And I said, "No, I'm not going to settle with somebody I've never even heard of before for infringing on a supposed patent I've never heard of before."
[01:06:38] "Okay. Just remember your defense cost is going to run around three million dollars."
[01:06:41] Jordan Harbinger: Wow.
[01:06:43] Austin Meyer: The patent claims to own the idea of one computer checking another computer to see if the computer program is allowed to run. The patent we were sued on had, as I recall, 113 claims, and every claim was almost the same. In other words, one claim would say a computer accessing another computer to unlock software, and the next would say software unlocked by one computer accessing another computer. Notice it's just the same thing over and over, 113 times, phrased a little bit differently each time. And since it took us four years and two million dollars to overturn one of those sentences, they had the same thing written down 112 more times, so they could put us through this for the rest of our lives.
[01:07:25] Jordan Harbinger: For more with Austin Meyer, including the details of his own investigation into patent trolls and why none of us are safe, check out episode 326 of The Jordan Harbinger Show.
[01:07:37] I know I'm a geek. This topic was really interesting for me. There's a lot to be said about small-scale studies and exaggerated results. A lot of this genetic research — it's just so tempting to read big results into a small amount of data, especially when you're a new scientist and you can't afford large studies. If you're small, you're new, you're trying to make a name for yourself, so I get it. This hype in science is dangerous, though. It makes a dent in the public trust in science: hyped science flies, and then reputations and corrections just come limping after it. And it makes people think that you just can't believe anything these days. Fraud likewise wastes a ton of money. Grants are wasted, tax dollars are wasted, and others waste their own funding trying to replicate results that were fraudulent in the first place.
[01:08:21] And I wondered when I was doing this: why is it that whenever we hear of new scientific research, it's always an amazing discovery? I think it's just the news cycle. It seems like the majority, or at least a huge amount, of research and studies should actually conclude: well, there was nothing there. Our hypothesis was wrong, nothing was going on, the data was bad. There's just no meat in that burger. And maybe we're just not hearing about this because the failures aren't published, but that's part of the problem, right? And if you want to see some of the very worst, most outrageous scientific frauds, take a look at the website Retraction Watch. We'll have that linked on the website. There's a leaderboard there of the scientists who have the highest number of retracted papers in history, usually because they made up their results. I mean, we're talking triple-digit numbers of bogus, often fraudulent studies. It's just horrifying. Science is biased sometimes because scientists are human. We all have our own biases.
[01:09:12] What's the scientific finding that would most upset you if it turned out to be false? Think about that. Your favorite diet doesn't actually work. The studies in your favorite self-help book might not stand up to scrutiny. If you think about it this way, you can put yourself in the shoes of the scientists who might find that one of their own results is less than solid, and you might understand why some of them might resort to massaging the data a little bit. This is especially prevalent in nutrition. This gut biome bullcrap is kind of the new trend. There are other ways to manipulate data as well. You can fluff the resume. You can do what's called salami-slicing, which is like cherry-picking: you take little tiny bits that lead to a finding and ignore all of the other results, and you can, of course, cherry-pick which results go in which papers. This wastes the time of peer reviewers and it misleads the public. The good news is that even with all these problems in science today — and there were plenty more in the book — even when people have the incentive to hide their failings, the pursuit of truth is still paramount for most scientists. And that's why we can and should still rely on science.
[01:10:17] Big thank you to Stuart Ritchie. The book title is Science Fictions. Great title. Links to all that stuff will be in the show notes on our website. Please do use the website link if you buy the book — that helps support the show. Worksheets for this episode are in the show notes. Transcripts for this episode are in the show notes. There's a video of this interview on our YouTube channel at jordanharbinger.com/youtube. And I'm at @JordanHarbinger on both Twitter and Instagram. You can also hit me on LinkedIn.
[01:10:40] I'm teaching you how to connect with great people and manage relationships using systems and tiny habits over at our Six-Minute Networking course, which is free at jordanharbinger.com/course. Dig that well before you get thirsty. A lot of the guests on the show are in the course; they help contribute to the course. So come join us — you'll be in smart company where you belong.
[01:10:59] This show is created in association with PodcastOne. Also, of course, my amazing team, which includes Jen Harbinger, Jase Sanderson, Robert Fogarty, Ian Baird, Millie Ocampo, Josh Ballard, and Gabriel Mizrahi. Remember, we rise by lifting others. The fee for this show is that you share it with friends when you find something useful or interesting. You know, somebody who is interested in science or is a scientist, please share this episode with them. I do hope you find something great in every episode, so please do share the show with those you care about. In the meantime, do your best to apply what you hear on the show, so you can live what you listen, and we'll see you next time.
[01:11:35] Now, there are more ways to be a team with Microsoft Teams. Bring everyone together in one space with a new virtual room, collaborate live, drawing, sharing, and building ideas with everyone on the same page. And make sure more of your team is seen and heard with up to 49 people on screen at once. Learn more about all the newest Teams features at microsoft.com/teams.