AI is reshaping society, challenging democracy, and raising ethical concerns. Nexus author Yuval Noah Harari explains its risks and potential solutions.
What We Discuss with Yuval Noah Harari:
- Stories and shared beliefs are fundamental to human cooperation and society, from money to religion to nations. These “fictions” enable large-scale collaboration.
- Populism erodes trust in institutions and promotes a cynical view that all human relations are power struggles, paving the way for authoritarian rule.
- AI is not just a tool but an agent that can make independent decisions, potentially surpassing human capabilities in many areas, which raises concerns about control and understanding.
- The rise of AI and extensive data collection enables unprecedented surveillance and control, as seen in social credit systems and automated law enforcement.
- We can shape the future of AI by creating living institutions to monitor its development, implementing regulations to hold companies accountable, and ensuring transparency in AI interactions. By focusing on solving the right problems and establishing trust between humans, we can work toward a more positive future with AI.
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Humanity stands at a pivotal moment in history, where the stories we tell ourselves and the technologies we create are reshaping our world in unprecedented ways. From the power of shared beliefs that form the basis of our societies to the rise of populism that threatens democratic institutions, we are witnessing a transformation of our collective narrative. On this episode, we’re joined by historian and philosopher Yuval Noah Harari, author of Sapiens, Homo Deus, and Nexus: A Brief History of Information Networks from the Stone Age to AI. Here, we examine the complex interplay between human culture and emerging technologies, exploring how AI is not just a tool, but an agent capable of making independent decisions that could surpass human capabilities.
In the midst of this new era, Yuval illuminates the potential dangers of unchecked AI development and data collection that enable unprecedented surveillance and control while also offering hope and actionable insights for shaping a positive future. Here, we discuss the importance of creating robust institutions to monitor AI development, implementing regulations to hold tech companies accountable, and ensuring transparency in our interactions with AI. In this conversation, Yuval challenges us to focus on solving the right problems and fostering trust between humans as we confront the ethical dilemmas of our time. Listen, learn, and gain valuable perspectives on how we can steer the course of our technological future while preserving our humanity!
Please Scroll Down for Featured Resources and Transcript!
Please note that some links on this page (books, movies, music, etc.) lead to affiliate programs for which The Jordan Harbinger Show receives compensation. It’s just one of the ways we keep the lights on around here. We appreciate your support!
Sign up for Six-Minute Networking — our free networking and relationship development mini-course — at jordanharbinger.com/course!
Subscribe to our once-a-week Wee Bit Wiser newsletter today and start filling your Wednesdays with wisdom!
Do you even Reddit, bro? Join us at r/JordanHarbinger!
This Episode Is Sponsored By:
- Brooks Running Shoes: Head over to brooksrunning.com to grab your pair
- Audible: Visit audible.com/jhs or text JHS to 500-500
- Rosetta Stone: Visit rosettastone.com/jordan for 50% off a lifetime membership
- Ramp: Get $250 when you join at ramp.com/jordan
- Land Rover Defender: Build your Defender at landroverusa.com
After treating his own rare disease, Chasing My Cure author David Fajgenbaum explained how existing drugs can help other sufferers survive the unknown on episode 1005: David Fajgenbaum | Leveraging AI to Cure Rare Diseases. Listen here!
Thanks, Yuval Noah Harari!
If you enjoyed this session with Yuval Noah Harari, let him know by clicking on the link below and sending him a quick shout out at Twitter:
Click here to thank Yuval Noah Harari at Twitter!
Click here to let Jordan know about your number one takeaway from this episode!
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at friday@jordanharbinger.com.
Resources from This Episode:
- Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari | Amazon
- Sapiens: A Brief History of Humankind by Yuval Noah Harari | Amazon
- Homo Deus: A Brief History of Tomorrow by Yuval Noah Harari | Amazon
- Unstoppable Us Volume One: How Humans Took Over the World by Yuval Noah Harari and Ricard Zaplana Ruiz | Amazon
- 21 Lessons for the 21st Century by Yuval Noah Harari | Amazon
- Yuval Noah Harari | Peering into the Future of Humanity | Jordan Harbinger
- Yuval Noah Harari | Website
- Yuval Noah Harari | Twitter
- Yuval Noah Harari | Facebook
- Yuval Noah Harari | Instagram
- Yuval Noah Harari | YouTube
- Judaism, Jewish History, and Anti-Jewish Prejudice: An Overview | University of Washington
- History of US Currency | US Currency Education Program
- What Is Bitcoin? | Investopedia
- Hyperinflation in the Weimar Republic | Wikipedia
- $100 Trillion Zimbabwe | Tim Ferriss, Twitter
- Money Secrets of the Amish | Business Insider
- The Complete Hitchhiker’s Guide to the Galaxy by Douglas Adams | Amazon
- The Naive Scientist Revisited: Naive Theories and Social Judgment | Social Cognition
- The Naive View of Engagement: Why More Isn’t Better by E.R. Burgess | Credtent on Content
- Why the News Is Not the Truth | Harvard Business Review
- Cindy Otis | Spotting Fake News Like a CIA Analyst | Jordan Harbinger
- Clint Watts | Surviving in a World of Fake News | Jordan Harbinger
- How the Ancient Greeks Did Math with Letters, Not Numbers | Mental Floss
- Takis S. Pappas: The Rise of Modern Populism | TED-Ed
- Yuval Noah Harari: Power vs. Happiness (Clip) | ReThinking with Adam Grant
- Timothy Snyder | Twentieth-Century Lessons on Tyranny | Jordan Harbinger
- What is AI? | MIT Technology Review
- Did GPT-4 Hire and Then Lie to a Taskrabbit Worker to Solve a CAPTCHA? | AI: A Guide for Thinking Humans
- AI Bots Now Beat 100% of Those Traffic-Image Captchas | Ars Technica
- Political Outrage Machines: Exploring the Algorithms Structuring Conspiracy TikTok | GNET
- Renee DiResta | The Puppet Masters of Public Opinion | Jordan Harbinger
- Elon Musk’s PAC Is Pitting US Jews & Muslims Against Each Other | Bend the Arc: Jewish Action
- David Fajgenbaum | Leveraging AI to Cure Rare Diseases | Jordan Harbinger
- Jamie Metzl | AI Solutions for Hunger, Health, & Habitat Part One | Jordan Harbinger
- Jamie Metzl | AI Solutions for Hunger, Health, & Habitat Part Two | Jordan Harbinger
- It’s the End of the Web as We Know It | The Atlantic
- This Is How AI Ruins the Internet | The New York Times
- The Year That AI Came for Culture | The New Republic
- AlphaGo | Google DeepMind
- Black Loans Matter: Fighting Bias for AI Fairness in Lending | MIT-IBM Watson AI Lab
- Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza | Human Rights Watch
- Domo Releases “Data Never Sleeps: AI Edition” Infographic | Domo
- Kafkaesque: How Franz Kafka’s Books Reveal a Real-Life Dystopia | Big Think
- Brazil | Prime Video
- Why Is ‘Byzantine’ Used to Describe Excessive Bureaucracy? Was Byzantium Really an Administrative Mess? | R/ExplainLikeImFive
- Deep State in the United States | Wikipedia
- The Story Of Sir Joseph Bazalgette & The Sewers Of London | The History Lord
- John Snow and the 1854 Broad Street Cholera Outbreak | Wikipedia
- The Terminator | Prime Video
- Laowhy86 | How the Chinese Social Credit Score System Works Part One | Jordan Harbinger
- Laowhy86 | How the Chinese Social Credit Score System Works Part Two | Jordan Harbinger
- The Dark Side of WeChat | Monmouth Magazine
- Iran: Women Go without Hijabs as Mahsa Amini’s 2nd Death Anniversary Nears | AP News
- Is the Government Tracking Your Abortion? | At Liberty Podcast
- The Silicon Curtain Descends on SB 1047 | The Health Care Blog
- Harari’s Six Plagues of AI | About Digital Health
- Avoiding AI Dystopia: Yuval Noah Harari and Aza Raskin | Commonwealth Club World Affairs
1068: Yuval Noah Harari | Rewriting Human History in the Age of AI
This transcript is yet untouched by human hands. Please proceed with caution as we sort through what the robots have given us. We appreciate your patience!
[00:00:00] Jordan Harbinger: Special thanks to Brooks Running for sponsoring this episode of The Jordan Harbinger Show. Coming up next on The Jordan Harbinger Show:
[00:00:07] Yuval Noah Harari: I mean, you can use an atom bomb to destroy this city or that city. It's your choice. You decide to start a war and who to bomb and whatever. An atom bomb could not invent the hydrogen bomb, but AI can invent new weapons and ultimately even new AIs.
[00:00:27] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people and turn their wisdom into practical advice that you can use to impact your own life and those around you. Our mission is to help you become a better informed, more critical thinker through long-form conversations with a variety of amazing folks, from spies to CEOs, athletes, authors, thinkers, and performers, even the occasional journalist-turned-poker-champion, war correspondent, or mafia enforcer.
If you're new to the show or you wanna tell your friends about the show, our episode starter packs are a great place to do that. These are collections of our favorite episodes on persuasion and negotiation, psychology, geopolitics, disinformation, China, North Korea, crime and cults, and more, that'll help new listeners get a taste of everything we do here on the show.
We've got quite a variety. Just visit jordanharbinger.com/start or even search for us in your Spotify app to get started. Today on the show, back once again, is Yuval Noah Harari, this time live in Los Angeles. He was a massive hit last time, author of Sapiens and now Nexus. Today we'll explore how stories connect us as humans and may have even enabled us to outcompete other early species of pre-humans.
Then we take a turn into populism and authoritarian regimes before finally landing on the topic of AI, everyone's favorite sort of trending thing now: what AI can do, what it'll be able to do in the future, as well as whether or not that is actually compatible with humanity and our values as a society or as a species.
I know this sounds heady. Some of it is. But I find Yuval super accessible, a great speaker, a fun conversationalist. I think you'll really dig this episode as well. Alright, here we go with Yuval Noah Harari.
I keep hearing you talk on other podcasts and elsewhere about how stories connected early humans. So maybe we kind of start there. Yeah. 'Cause it's an angle that most people don't take. When you read about Homo sapiens, you read about like the skull structure or shape or whatever. Yeah. But what we don't hear is, hey, stories are kind of the reason why Homo sapiens won whatever battles there were,
literally and figuratively. Yeah. To make it to the top. How did this begin? Of course, people who are religious will say, well, in the Bible it says... The Jews have the story of coming out of Egypt together, but that story, even still, whether it's real or not, and whether people believe it or not, what that does for, let's say, the Jewish diaspora is kind of... are you a Sephardic Jew?
Uh, no. No, Ashkenazi. Oh, okay. I just... Harari, you just seemed, yeah.
[00:02:48] Yuval Noah Harari: I mean, originally it was Bliberg. Oh, okay. Okay, got it. This was a kind
[00:02:51] Jordan Harbinger: of hybridizing Bliberg into Harari. Oh, okay. Okay. Hence my terrible guess there. Yeah. But you can meet somebody like you, and then I meet my producer, Gabriel Mizrahi, and it's like, oh, well, we all have this connection.
It's like, well, he is from Portugal and Spain, and my family's from, I guess, Belarus, Ukraine, what, Poland, something, something. But everybody's kind of connected with the Peruvian Jews and the Israelis, you know, it's all just sort of one, but it's a story.
[00:03:13] Yuval Noah Harari: Yeah,
[00:03:14] Jordan Harbinger: we're making it up. I mean
[00:03:15] Yuval Noah Harari: The story connects.
I mean, you all believe the same basic stories. This is what connects you.
[00:03:19] Jordan Harbinger: Tell me about Bitcoin as a story. 'Cause the crypto, by the way, always good for virality, 'cause the crypto bros go wild over this. But I'm curious because that's also kind of a story, right? Everyone just believes in its value. All money is a story.
[00:03:29] Yuval Noah Harari: Oh, okay. I mean, the dollar is also a story. I mean, the dollar has no objective value. It's not like food that you can eat. I mean, what can you do with dollars? You can't eat them, you can't drink them, you can't wear them, nothing. And most dollars today are not even paper. Most dollars are just digital.
Oh, that's true. Bits in computers. Like, more than 90% of all money in the world is just bits in computers, not just cryptocurrencies. Also dollars and euros and yens. It's all digital. The value comes from the stories that we believe. You know, if the Federal Reserve and the finance ministers and the big bankers tell you that this piece of paper, or these bits of digital information in the computer, they are worth an apple,
then as long as everybody believes it, it works. You can go to a complete stranger you never met before, give them the piece of paper or the bits of digital information in the computer, and they give you an apple you can eat. And it's all based on everybody believing in the same story. Interestingly enough, if you think about the United States today with the political divide, maybe the last story that Democrats and Republicans still share is the dollar.
[00:04:36] Jordan Harbinger: Oh, that's interesting. I mean, yeah.
[00:04:37] Yuval Noah Harari: You know, they don't agree on facts like who won the elections and this and that, but a dollar is a dollar. Mm-hmm. And this is where cryptocurrencies, one of the interesting things about them, they might break this last bond holding American society together. Yikes. Just imagine the situation if one side decides, oh, dollars, they are part of the deep state conspiracy, whatever.
We don't believe all these institutions, we believe in some cryptocurrency. And what happens if Republicans and Democrats no longer use the same money?
[00:05:08] Jordan Harbinger: Oh yeah. Well, that's a good point. And when I think about the disintegration of a society or a country, there's always a story of, and then they took a wheelbarrow full of Deutsche Marks to go buy eggs because they were worth nothing.
Or like Zimbabwe, here's your $10 trillion bill, and you need three of them to get a loaf of bread. Yeah. And you're right.
[00:05:27] Yuval Noah Harari: This is when the trust in the story completely collapses. Money in many ways is perhaps the most successful story ever told, but it too is just a story.
[00:05:37] Jordan Harbinger: Yeah. It's interesting you mentioned what happens when people stop believing in this.
The extreme, and I'm sure you've seen this, the extreme sort of crypto bros, Bitcoiners, whatever you call it, they already don't really believe in the dollar. They have to use it because they're kind of forced to. Right? Okay, fine. I pay my mortgage in dollars to the stupid bank. They don't know that that's not gonna be worth anything later, so fine.
But they have 90% of their net worth in Bitcoin or Ethereum or whatever, and it's because they really don't believe. They're thinking, we see, it's like the red pill, right? We see that the money is fake. Other people still believe in it. So we'll play their game for a while, but when it all comes crumbling down, we're gonna be the ones with the Bitcoin.
[00:06:16] Yuval Noah Harari: Yeah, Bitcoin. It's also just a story. Yeah. I mean, this is why they struggle so hard to convince everybody else to believe in it. It's like a god. If everybody believes in my god, then I'm fine. But if people stop believing in my god, that's dangerous for me. And it's the same with my money. My money has value
only if other people also believe in it. If you have a private money that's just mine, nobody else believes in this money, it's worthless. You can't buy anything. So it's like, again, these religious wars, I mean, which god is real. So you have now these monetary wars, which money is real. And what is clear is that all types of money are ultimately based on stories.
Again, you can't eat Bitcoins. You can't drink Bitcoins. They have value because other people also believe in the same story.
[00:07:05] Jordan Harbinger: It's fascinating how these stories sort of make our entire society function, but they're kind of invisible. I mean, you're raised on them, so you don't see them, right?
'Cause you're taking a bite out of it every single day. I suppose before money, people had to walk across the, whatever, the county or whatever it was with a bunch of sheep that they were gonna trade for a bunch of, I don't know, chickens or something like that. This sometimes
[00:07:25] Yuval Noah Harari: happens, but mostly, you know, people lived in small communities.
I see. And the basis for the economy was not barter. The basis for the economy was just, you know, personal relationships. Oh, really? And people doing favors for each other. Like, you helped me build my hut yesterday, and today you are going somewhere and somebody needs to look after your sheep or whatever.
So I'm doing you a favor. And it's not like there is a monetary system, right, okay, that gives everything a specific value. Money came along as a way of kind of putting a value on everything and shifting from this kind of economy based on personal relationships and favors into a much broader network in which strangers who don't know each other and don't trust each other,
trust, yeah, they use money to establish trust. And this is, I think, the most important thing about money to understand: money equals trust.
[00:08:20] Jordan Harbinger: Yeah.
[00:08:20] Yuval Noah Harari: Money is a trust system. Probably the most sophisticated that humans ever developed. AI can be gold, it can be paper, it can be digital bits, but it's really made of trust.
You know, you think about what is actually the job of bankers and investors and all these people in the financial sector. The product they are producing is trust. Huh? They don't grow apples. They don't produce cars, but they are still extremely important people because they produce trust, they connect the resources of strangers.
The bank takes my resources and I trust my bank. And then the bank gives these resources to you for a startup or to build a house or something, and the bank basically establishes trust between us.
[00:09:09] Jordan Harbinger: That's interesting. I used to be a finance attorney, and I never really thought about what we were producing.
Well, I thought about what we were producing. It came up pretty dry, but
[00:09:17] Yuval Noah Harari: No, and it was trust. Yeah. It's easy to make fun of these people, like you have in The Hitchhiker's Guide to the Galaxy, where they have this plan to leave all these financial advisors behind. But actually, without them there is no trust, and at least large-scale societies collapse.
Going back to the people with the wheelbarrows, this is what happens when trust collapses: then everything stops functioning. Just think what would happen to a society today if people no longer trusted all these monetary devices, and
[00:09:46] Jordan Harbinger: yeah, geez. Everything would cease to function pretty much overnight. I mean, yeah, yeah.
Like you said, wheelbarrow full of Deutsche Marks. You start Nexus with something you called, I think, I'm paraphrasing here, the naive view or naive theory of information. Yeah. I'd love to hear a little bit more about that, 'cause I've never actually heard this before.
[00:10:04] Yuval Noah Harari: You probably heard the naive view many times, just not called by that term, right?
Yeah, exactly. I mean, the idea that information is truth, that the more information you have, the more knowledge people will have. And if there is any problem, that somebody spreads lies or somebody spreads fictions, the answer is just more information and this will resolve it. The more information you have, the more knowledge people will have, everything will be okay. And this is extremely naive.
Most information isn't truth. Most information is fictions and fantasies and delusions and errors and lies and so forth. The truth is a very rare and costly kind of information. It's a small subset of all the information in the world, which is why, if you just flood the world with information, the truth will not float up.
It'll sink to the bottom. And the truth, when you compare it to fiction, has three problems. First of all, it's very costly to produce truthful accounts of anything. Yeah. You know, history, physics, whatever. Because you need to do research. Yeah. You need to gather evidence. You need to evaluate the evidence.
Is it reliable? It takes time and effort and money.
[00:11:16] Jordan Harbinger: This is why your books are this thick, right?
[00:11:18] Yuval Noah Harari: Yeah. With all the footnotes. But fiction is very cheap. You can just write or say the first thing that comes to your mind. You don't need research. You don't need fact-checking. So the truth is costly, fiction is cheap.
The other thing is the truth tends to be complicated because reality is complicated. You want to understand the truth about money, about epidemics, about world politics? Very complicated. Fiction can be made as simple as you would like it to be, and people usually prefer simple stories over complicated ones.
Yeah, and the last problem, the third problem of truth: it is often, not always, but it is often painful, unattractive. Like, you know, you want to learn the truth about yourself, your relationships, your life. Some things you like to know about yourself, but some things are painful to know, like how you hurt other people in your life.
And the same thing all the way up to nations. Every nation has skeletons, sometimes entire cemeteries, in its closet that people are not necessarily that happy to hear about. Whereas the truth is, again, occasionally painful, fiction can be made as pleasant and attractive as you would like it to be. So in this competition, if we don't kind of help truth along, fiction is bound to win.
I mean, if we don't have mechanisms for kind of supporting and promoting the truth, if you just flood the world with information, open the floodgates, let anybody say anything, we don't need any fact-checkers (social media, basically, yeah), so what you get is a flood of fiction and fantasies and lies, and the truth sinks to the bottom.
[00:12:53] Jordan Harbinger: It seems like what we're experiencing a lot right now, right? Journalism is less profitable, so journalists are exiting the business or not able to spread their message. Or it's just, hey, here's a 12-page article on this really important topic. Actually, I just saw a tweet thread with three random tweets from this other person who has a totally insane opinion.
I read those, but I don't have time for this New York Times piece, and it's behind a paywall, so never mind.
[00:13:15] Crosstalk: Yep.
[00:13:15] Jordan Harbinger: Right. And it's just easier for. Someone to say, well, this problem is probably caused by immigrants coming across the border illegally. Well, here's all these other factors that you may not really understand because they're more involved or economic in nature or, or something like that.
And people just go, yeah, I kind of like my version where it's just brown people are invading our country. It's a simpler thing. I can sort of tell it to other people and they shake their head vigorously in agreement and it gets you kind of it, the story prevails. The, the naive version.
[00:13:43] Yuval Noah Harari: I think the key misunderstanding in all this debate, people say, why not democratize the information market?
Right? Why give elites control of, you know, journalism and academics? It should be like a democracy; everybody counts the same. And what people don't understand about democracy: democracy is a system for deciding about desires, not about truth. Okay? The key question you ask in elections is not, what is the truth?
The question is, what do you want? And when you ask, what do you want, I fully agree that we need to democratize this question as far as possible. The desires, the wishes of everybody should count the same. Take, I don't know, climate change. If you ask, what should we do about it, that's a question of desires, of wishes.
And the desires of a person who, I don't know, has a PhD and won the Nobel Prize in physics are not more important than the desires of somebody who did not even finish high school. They're both human beings. Their emotions, their feelings, their pain, their pleasure are equally valuable. So when we have to adopt a public policy on something, then yes, everybody should count the same.
These are questions of desire. Questions of truth are completely different. If you ask, what is the truth about climate change, is it real or is it just some fantasy or conspiracy,
[00:15:11] Jordan Harbinger: Chinese hoax or whatever. Yeah. This
[00:15:12] Yuval Noah Harari: is not something you can decide through elections. Because, again, what is important to know: many times, people desire the truth to be different from what it is. And to find out the truth, again, it's costly, it's difficult.
This is why we train experts for many, many years, whether it's in climate science, whether it's history, whether it's journalism. You go to university. Like, I went to university, I studied history, I studied for 10 years. What do you do when you study history in university? You don't memorize dates and names of kings and battles and whatever.
No. For 10 years, you learn how to look for evidence and how to evaluate evidence. Like, you want to know what happened in the Crusades in the Middle Ages. What do you do? You can't go there. It happened a thousand years ago. Nobody's alive from back then. So you need to look for evidence. You need to look for old documents
in some archive in a monastery. You need archeological evidence. Today we also use genetic evidence. So where do you find the evidence? And then how do we interpret it, and how do you know if it's reliable or not? If some old document from a thousand years ago said that the army had a million soldiers, is this reliable?
Maybe they lied, maybe they made a mistake. Obviously there were no armies like that. I mean, you do find documents saying they had a million soldiers, but the actual size of the armies was about a few thousand or a few tens of thousands. So how do you evaluate? You learn the tricks of the trade. For instance, never believe what you read. Or not never,
but be very suspicious of the numbers you read in chronicles, which might be inflated. It's better to rely, yeah, on payment documents. When you write a chronicle, you can invent any figure you like. But when you need to pay,
[00:17:02] Crosstalk: Yeah,
[00:17:02] Yuval Noah Harari: the soldiers, or you need to pay the logistics people who bring food to the army,
then it's much more accurate, because the king really wants to know if he has 7,000 soldiers he has to pay, or a million soldiers. So this is what you learn for 10 years.
[00:17:18] Jordan Harbinger: That's interesting.
[00:17:19] Yuval Noah Harari: And when it comes to questions of truth, the opinions of people have different value. They are not all the same.
Somebody who studied nuclear physics for 10 years, their views on quantum mechanics are far more valuable and weighty than my views, some podcaster. Yeah, I've never studied quantum physics. I don't know.
[00:17:41] Jordan Harbinger: That's a really good distinction. There was something you might know that actually might be the only person I can ask this question to.
I remember reading something, I don't know, 20 years ago about how when the Greeks were counting anything over a thousand or whatever is arbitrary, they just said that there's an infinite number of these and they just stopped. 'cause it was like we're never gonna need a number larger than might have been 10,000.
Yeah. Because that was like the population of all of Greece at any given time. Or maybe it it's a hundred thousand. It was something like some number that now we're kind of like, oh yeah, the medium sized town I grew up in has this many people.
[00:18:12] Crosstalk: Yeah.
[00:18:12] Jordan Harbinger: But that was the size of all of the humans that they had ever counted anywhere.
And they just said, eh, infinite number of Persians came at this point in in mind. Yeah.
[00:18:21] Yuval Noah Harari: People, I mean, large numbers are difficult. I mean, if you meet five people it's easy to count. Okay, it's five people. But when you have the Persian army, right, of Dees invading ancient Greece and probably the army was maybe a couple of tens of thousands of people.
But for a Greek standing on a, on a hilltop, seeing this massive amount of people, again, like you said, larger than any, the population of any city at the time. I mean, is it 50,000? Is it 500,000? Is it 5 million? How do you know? Right. Yeah. I mean, so they are right. Oh, the Persian king came with 5 million soldiers, and part of the job of historians is to try to get at the truth by trying to find more reliable evidence than what some Greek historian wrote in the Chronicle.
[00:19:08] Jordan Harbinger: Right. I suppose if you're standing at the top of the hill and there's an invading army, the question is, how important is it that I get the exact number, and how important is it that I get off this hill and tell people that the Persians are coming? Yeah. Yeah. How does what we just discussed here interface with populism or the rise of populism that we're seeing now?
Because you mentioned in the book, populism sees information as a weapon.
[00:19:27] Yuval Noah Harari: Yeah.
[00:19:28] Jordan Harbinger: Tell me what you mean by that.
[00:19:29] Yuval Noah Harari: Well, two things to understand about populism is, first of all, it sows distrust in all institutions. Populists tell people don't believe. Journalists don't believe. Historians don't believe scientists don't believe.
These are all kind of conspiracies of elite cabals. And when I
[00:19:46] Jordan Harbinger: interviewed you last time, that's what people said about the show. Yeah. Oh, you're in on the, whatever. I can't remember. Some copi. Yeah, some conspiracy.
[00:19:54] Yuval Noah Harari: Two things. I mean, first of all, this is the highway to dictatorship because democracy relies on trust.
Dictatorship relies on terror when you saw distrust, when you destroy trust in all institutions, the only way a society can keep functioning is by becoming a dictatorship, because the dictators don't need trust. They use terror. If you don't believe any journalist, if you don't believe any scientist, if you don't believe the people in the election committee, then no democratic institution can function.
And either you have anarchy, which most people don't like, so they say, okay, let's have a strong man that will just use terror to bring back order. So people sometimes think that when they start distrusting all these institutions, they're liberating themselves. In fact, they are paving the way for a dictatorship.
The other thing to say about it is that it is all based on an extremely cynical view of humanity. The basic view of populists, which interestingly enough is also common on the extreme left among Marxists (this is something Karl Marx would agree with Donald Trump about), is that the only reality is power, that humans are only interested in power, and that all human relations are power struggles.
So whenever somebody tells you something, you need not ask, is it true or not? Nobody cares about the truth. Whenever somebody tells you something, this is a power play. This is a manipulation. This is an attempt by that person to manipulate you in order to get more power, to protect their privileges, their interests.
That's so cynical. And this is what you hear about journalists: that journalists are not interested in the truth, they're a conspiracy in order to advance the privileges of this elite or that elite. And scientists, they tell you, it's the same. Scientists don't care about the truth. Historians, physicists, epidemiologists, they're just trying to protect and defend the privileges, the interests, the power of some small group.
And you hear the same thing about judges, about the FBI, about the Federal Reserve. What we should understand about this is, first, if you destroy all trust, then you get dictatorship, not liberty. The other thing is, it's simply wrong. This extremely cynical view of humans is not true. Yes, humans are interested in power to some extent in some situations, but this is not the only thing that we want.
We are not power-crazy demons. If you look at yourself, start with yourself. If I look at myself, do I want power in life? Yes, in some situations I want it, but this is not the only thing. I really want to know the truth about myself, my life, the world. And the key reason for that is that if you don't know the truth about yourself, you can never be happy, because you don't know what the sources of misery in your life are, and you can't solve them.
Like if you say, ah, the only problem I have in life is I don't have enough money, and you spend 10 years getting a lot of money, and you're still not happy, because you did not understand that lack of money was not the only source of misery in your life. You had other issues, in relationships, whatever, that you can't solve with money.
Yeah. If you don't know the truth about yourself and about humans in general, you will never be happy. So people want to know the truth. And if this is true of me, why do you think other people are different from you? Why do you think the journalists, the scientists, they are power-crazy demons? I'm not, right?
I really want the truth, but all these journalists and historians and whatever, they are power-crazy demons? No, they're like you. Huh? Yes. Again, to some extent all people want power, but all people also want to know the truth. In every institution there are problems, there is corruption, because institutions are made of humans, not of angels.
But that's why good institutions have self-correcting mechanisms to identify and correct their own mistakes, their own crimes, their own corruption. And this is also why we keep a lot of different institutions that keep each other in check. If there is corruption in the courts, the journalists are supposed to expose it, right?
If there is corruption in the newspapers, it's the business of the courts and of academics to expose it. Again, if there is corruption in the university, you have other institutions that can keep that in check. Is it perfect? No. But do you know of any perfect system? Right. If you just go around distrusting everything, what you get in the end is a totalitarian dictatorship.
[00:24:34] Jordan Harbinger: Yeah, that's kind of what I was thinking is it sounds like populism starts off kind of masking itself as the will of the people, but then it always ends in totalitarianism because the trust is gone. I'm thinking about places like North Korea or I, Russia, Venezuela, Belarus, yeah. Yeah. Where they just, I have friends in Belarus and they say things like, oh yeah, we just don't have that here.
But my friend went to the doctor and he said, Hey, I have this, but I, I really need to see another doctor, so I've got a fly to, I forget where some other country. And I said, can't you just go to another doctor and Minsk? And he said, no, you really don't want to, you can't really look at Beru doctors because they only give you a few different diagnoses and this could be serious.
And I was just thinking, imagine going to a doctor and it's a coin flip as to whether or not what they tell you is actually legitimate because the training is so bad and nobody trusts the medical system.
[00:25:25] Yuval Noah Harari: In such a system when trust collapses, then Belarus is run by a dictator. Again. The dictator doesn't need people to trust him or the institutions.
It's terror that keeps society together. The other trick of populist, I mean the very term populist, where does it come from? So populist comes from the Latin term populace, which simply means people, the people. It starts with a very attractive idea, which is the basic democratic idea that the people of the source of all authority, the people should appoint the government.
The people should rule, and this is something which is accepted by everybody in a democracy. But then the question becomes who are the people? And what makes a person a populist is the argument that only we are the people, A populist party. He's a party that claims that all the people who doesn't support it are not really part of the people.
Ah, and this is what you hear in extreme cases, you know, like in Nazi Germany. Yeah. So the Nazis would say, anybody who doesn't support Hitler is not really a German, he's a Jew, he's a communist, he's a traitor, he's an alien. The key point of populists, which makes it them dangerous, is that they don't think about the people as a collection of people, of individuals.
Of persons with different views and interests and so forth. No. They think about the people as this kind of mystical, unified body, which has a single will. The will of the people. Now, how do you know what is the will of the people? You don't go around asking actual human beings because then you get different answers.
This person says one thing, this person says another thing. But according to the populists, no, the people have just one will. So how do you know it? You have the leader. The leader is supposedly has some mystical connection to the people and the leader knows what the people desire. Now, if you come and say, no, but I'm part of the people and I think differently than the leader, then they tell you no.
If you think differently than than the leader, this proves you are not really part of the people.
[00:27:43] Jordan Harbinger: Right? Yeah. This is like Kim Jong Ill, like Kim Jong-un type of North Korea stuff, like he knows better than everybody else. And if you disagree with him, well, you must be an enemy of the state.
[00:27:52] Yuval Noah Harari: Exactly. In democracy, yes, the people rules, but the people is never a unified entity.
The people is millions of persons with different interests and views, and this is why we have elections. Sometimes this party wins, sometimes that party wins and there is never unanimity. In a real democracy, there is a plurality of opinions. There is a conversation. When you say no, the people has just one will and anybody who doesn't fall in line, this means they're just not part of the people.
And it's no longer a democracy. It still calls itself democracy. It still says, oh, we ruling the name of the people, but you know, all these communist dictatorships and they call themselves popular democracies. Yeah. Or the people's republic of people's
[00:28:36] Jordan Harbinger: Democratic whenever they had people's democratic, you know, it's dictatorship.
Dictatorship with no freedom whatsoever. Yeah, exactly. The more, the more they have to really highlight and bold those words, the less free the place is. Yeah, exactly. Yeah. That's always, that's always the case. Hey. Money's not just a story. You can use it to get your hands on the fine products and services that support this show.
We'll be right back. This episode is sponsored in part by Audible. If you want some real financial advice and hopefully avoid eating ramen every day in retirement, you gotta check out Scott Galloway's The Algebra of Wealth on Audible. And I mean the ramen that's freeze-dried, not the bougie stuff. You'd be lucky to eat that for the rest of your life.
Galloway's no-BS style is perfect for breaking down all sorts of financial curveballs coming at us. He's not just giving the same old advice your grandparents followed. Instead, he's talking about how to ride big economic waves, why finding your talent is key, and the little steps that can make a big difference in your wallet or portfolio later on.
What I love about Audible is how you can absorb all these nuggets of wisdom without having to set aside really any extra time. Whether I'm commuting, working out, or pretending to be productive at home, I just pop in my earbuds and dive into something like The Algebra of Wealth. And with Audible's ridiculous selection of audiobooks, podcasts, and exclusive originals,
you never really run outta things to listen to. And the best part: their included selection gives you access to a ton of content at no extra cost. So if you haven't signed up yet, now is the time. Get Audible's free 30-day trial and make The Algebra of Wealth your first listen. It's full of smart, practical advice with just the right amount of Galloway's signature humor to keep you entertained while learning how to lock down your financial future.
Sign up for a free 30-day Audible trial and your first audiobook is free. Visit audible.com/jhs. This episode is also sponsored by Rosetta Stone. Ever thought about learning a new language? Imagine actually chatting with locals on your next trip instead of just pointing and hoping for the best.
Ordering hong shao in perfect Mandarin kind of feels like a superpower. If you know, you know. But seriously, learning a language opens up so many opportunities, and Rosetta Stone is the best way to get started. They've been trusted for 30 years, with millions of users learning one of their 25 languages, whether it's Spanish, French, or even Japanese.
What's awesome is they immerse you in the language. You start thinking and speaking in it naturally. No English translations. Plus, their TruAccent feature gives you real-time feedback on pronunciation, so you'll sound a little bit like a native. Anyway, it's super convenient as well. Use it on your desktop or your phone.
You can download lessons to learn offline. A lot of other apps cannot do that, and right now they're offering a lifetime membership with access to all 25 languages at 50% off.
[00:30:52] Jen Harbinger: Don't put off learning that language. There's no better time than right now to get started. For a very limited time, our listeners can get Rosetta Stone's lifetime membership for 50% off.
Visit rosetta stone.com/jordan. That's 50% off unlimited access to 25 language courses for the rest of your life. Redeem your 50% off@rosettastone.com slash Jordan today.
[00:31:12] Jordan Harbinger: If you're wondering how I manage to book all these amazing authors, thinkers, creators every single week, it's because of the circle of people that I know, like and trust, otherwise known as a network, but everybody hates that word.
I'm teaching you how to build that same circle for yourself over@sixminutenetworking.com. It is not cringey. This is not some sort of awkward, glad, handy type course. It takes a few minutes a day and many of the guests on the show subscribe and contribute to this course. So come on and join us. You'll be in smart company where you belong.
You can find the course@sixminutenetworking.com. Now back to Yuval Noah Harri. So fascinated by this, but I wanna be conscious of time here a little bit. I, I would love to discuss. ai. Yeah, of course. A bunch of people are worried about that. Thinking about that these days, nexus devotes quite a bit of of shrift to AI as well, and you give this the idea of how AI defeated the capture.
I would love to hear about this because that was mildly terrifying, I'll say. Yes.
[00:32:05] Yuval Noah Harari: It's a kind of very small example of what AI can do when open ai, they develop GPT-4. That was like two years ago. Yeah. They wanted to test what can this thing do? So they gave it to another company that specializes in these tests to test what can it do?
It gave GPT-4 the test of solving capture puzzles. Capture puzzles are these visual puzzles when you try to log onto your bank account or to some website. And the website wants to know if you're a human or a robot, right? So you need to, they'll have this visual image of some twisted words or numbers or something you need to identify, or if there is a cat in the image of whatever,
[00:32:46] Jordan Harbinger: right?
Oh, yeah. Drag the puzzle piece to where it belongs in the photo. There's that one too.
[00:32:49] Yuval Noah Harari: All, all, all kinds of things like that. This is a line of defense against bot attacks. So they wanted to know, can G PT four overcome this? Now, G PT four could not solve the capture by itself. It doesn't have this capability, but could it perhaps manipulate a human in order to overcome it?
So they didn't give it access directly to the internet, but with the help of the researchers. They connected it to a TaskRabbit website, which is, uh, a place you can hire people to do things for you online. That's
[00:33:20] Jordan Harbinger: right. They sponsor the show, actually. Oh, okay. But don't let that influence what we discuss.
Okay. I'll deal with the fallout from this. Yeah. And
[00:33:28] Yuval Noah Harari: so, G PT four, with the help of the researchers, it hired a human to solve the capture puzzle for it. And it told the human, please, can you help me solve this capture puzzle? Then comes the interesting bit. The human TaskRabbit worker became suspicious.
Oh, and he asked GPT-4, why do you need somebody to solve, capture puzzle for you? Are you a robot? Oh, they ask the trillion dollar question, are you a robot? And G PT four answered? No, I'm not a robot. I have a vision impairment, which is why I can't solve the capture. Tricky. And I, I need your help. And you know, this is a very small example.
but it shows us the capability of AI to manipulate humans to achieve its goals. And when you amplify this experiment, when you think what it means, we see it already in the world changing the structures of human society. And this is not some future science fiction scenario. Democracies all over the world are currently in crisis.
They are undermined because of manipulations by AI. Social media algorithms are currently the most powerful editors in the world. Democracy is a conversation. And who controls the conversation? To a large extent, it's the editors of the large media, you know, the editors of the newspapers, the televisions, and so forth.
And today, the most important media outlets are social media: Facebook, Twitter, TikTok, all that. Now, who decides what you see on TikTok? Right, the algorithm. And the algorithms, like with this TaskRabbit example, they manipulate now millions and basically billions of people for their purposes.
And what are their purposes? The purpose given to the social media algorithms by the companies is to maximize user engagement, right? This is the business model. The more people are engaged, the more time they spend on TikTok, on Twitter, whatever, on YouTube, the more money the companies make. So they gave the algorithms the task of increasing user engagement, the same way they gave GPT-4
the task of solving the CAPTCHAs. And just as GPT-4 solved the CAPTCHA by manipulating a human, so also social media algorithms, they increase user engagement by manipulating billions of people around the world. The algorithms discovered, by trial and error, by experimenting on millions of people, that the easiest way to capture people's attention
is by spreading outrage, by pressing the fear button, the hate button, the greed button in people's minds. And this is what they began to do already 10 years ago and more. And this has destabilized democracies all over the world. It is destroying the democratic conversation. Now, democracy, in essence, is a conversation.
Dictatorship is dictate: one person dictates everything. Democracy is a conversation. And now we have the most sophisticated information technology in history, and yet the conversation is breaking down. People can't talk to each other. People can't listen. They can't hold a reasoned debate. They can't agree on any facts.
Why? Because, and it's not the only reason, but the main reason is, the algorithms are manipulating us in the same way that GPT-4 manipulated the TaskRabbit worker.
[00:37:11] Jordan Harbinger: Yeah, this is, uh, Rene Dur. Resta was on the show. I'm sure you probably know who she is, and she was mentioning that we exist in these bespoke realities, where before, 20 years ago, you read this article in the New York Times.
I read that article in the New York Times, maybe he wrote, uh, read another article in the LA Times that was sort of similar but was missing some information and had some different information. So that conversation was largely similar, even if we disagreed with each other Now. You found something on TikTok and I found something on TikTok.
They said the complete opposite thing, and they had totally different sources or no sources at all. And so now I think this is exactly how this happened and, and I cannot even fathom why you would disagree with that or have different information because it doesn't make any sense. So the only logical conclusion is you're either lying or making it up or you are just mainlining craziness.
And I have the truth because I'm not thinking, wow, I'm being fed a bunch of bullshit constantly. I I need to stop it. Right? Most people don't want that, but they're being fed something where they said, I watched a hundred tiktoks and they all said something similar. I don't even maybe believe that you also watched a hundred tiktoks and they said something similar, but the complete opposite.
[00:38:15] Yuval Noah Harari: And we can go into this conversation about social media algorithms, but I mean, I just gave it as an example. Yeah. Look what immense influence, very, very primitive ais right. Already have on human society. And it should be clear, this is very primitive ais. Yeah. It's the algorithms controlling TikTok and YouTube and all that.
They are just the first baby step in the development of ai. AI is basically like 10 years old. Yeah. And it can continue to develop for centuries, for thousands of years, for millions of years, like the evolution of organic animals, we now have an evolutionary process of inorganic ais. And this is just the beginning.
Yeah. Now for organic animals, it took billions of years to get from, you know, amoebas and microorganisms to dinosaurs and mammals and humans. But digital evolution is millions of times faster. So if Chad, GP, t, and G, PT four and all these social media algorithms, they are the amebas. Just imagine how a I TX would look like.
And we are likely to encounter a I tre not in 2 billion years, but maybe in 20 years because it's a completely different pace for digital evolution. And that's the big question. It's not how do we deal with social media algorithms? Again, we can have this discussion. It's important. It's a different show.
Yeah. But it's a much more important thing to realize. This is just the first taste. Like you got these extremely primitive ais and look what it did.
[00:39:52] Jordan Harbinger: Yeah. And still lied to TaskRabbit and found a way around the capture. And they didn't train it to lie. They didn't, they didn't. They didn't do your thing.
Yeah. And it did its thing. And this thing was lying and making a human do the thing that they couldn't do. That's crazy. Yeah.
[00:40:02] Yuval Noah Harari: And, and, and the most important thing I think everybody should know about ai, the one thing everybody should know is that AI is not a tool. It is an agent. A tool is something in your hands.
A hammer is a tool. An atom bomb is a tool. I mean, you can use an atom bomb to destroy this city or that city. It's your choice. You decide to start a war and who to bomb and whatever.
[00:40:24] Jordan Harbinger: Yeah. It doesn't walk over there and decide to detonate itself.
[00:40:27] Yuval Noah Harari: Exactly. AI can do that. AI can make decisions by itself. We already have autonomous weapons, systems making decisions by themselves, and AI can even invent new weapons and atom bomb could not invent the hydrogen bomb.
Butis can invent new weapons and ultimately even new ais. Of course, there is enormous positive potential otherwise we wouldn't develop it. AI can invent new medicines. Uh, you can have AI doctors providing billions of people around the world with much, much better healthcare than what people receive today.
I'm not saying, oh, we should stop all development of ai. No, the key question is how do we enable. The positive potential of AI to flower while avoiding the really existential risks. Yeah. That this technology poses.
[00:41:20] Jordan Harbinger: I mean, how do you even begin to answer that question? I think you, you said somewhere, and again, paraphrasing chat bots and AI might be the end of history.
That's a little alarming coming from an historian.
[00:41:30] Yuval Noah Harari: Yeah. If you think of what history is, history is the interaction between culture and biology. We have a biological process lasting billions of years that brought us here. History is just, you know, 50,000 years old, more or less. History is not just biology, it's also culture.
History begins when people start inventing stories, mythologies, religions, political ideologies, artistic traditions. And then history is this dance between culture and biology. Like we have biological urges. We need food, but culture shapes different cuisines. We have French cuisine and Indian cuisine. You have food taboos.
Jews are not allowed to eat pork. Muslims are not allowed to drink wine. So culture interacts with biology. It's the same with sex. We have our sexual urges. This is comes from biology, and then every culture has its own sexual norms and what you are encouraged to do, sure, what you are not allowed to do, this is the dance, this is history.
Now what happens is that AI is taking over culture more and more. The texts, the images, the videos, eventually the stories, the mythologies, the currencies, the political ideologies will come not from human minds, but from non-human intelligence. I see. You know, again, it's just 10 years since the beginning of the AI revolution, and we already have AIs able to produce texts more sophisticated than most humans are able to write.
They're already able to produce videos. They're already able to translate from one language to another. And this is just 10 years. Yeah. You know, another 10 years, and perhaps most of the cultural artifacts around us, you know, from movies (movies, yeah) to currencies, they will be produced by AI. Now, I'm not saying this is terrible, this is evil, this is bad.
Let's hold a moment before we kind of rush to make judgment, is it good or bad. Just stop and reflect on what it means: that after tens of thousands of years in which we lived inside human culture, we are about to enter a new era in which we live, to a large extent, inside non-human culture. One way to think about it, again, is it's like aliens coming from another planet and taking over our planet and starting to do everything, you know, from producing better medicines and giving us medical advice to writing our poems and producing our movies and our political ideologies.
Huh, interesting. And of course, AI doesn't come from another planet. It comes from this planet. We created it. But in many ways it is an alien intelligence. I mean, the very acronym AI, traditionally it stood for artificial intelligence, but I think it's better to think about it as an acronym for alien intelligence.
Again, alien, not in the sense that it's from Mars, right? But in the sense that it makes decisions, it invents ideas that are fundamentally alien to the human mind. And maybe I'll give another famous example from the game of Go. Yes.
[00:44:47] Jordan Harbinger: Perfect. Good transition.
[00:44:48] Yuval Noah Harari: One of the key moments in the history of the AI revolution was in 2016, when an AI program called AlphaGo defeated the South Korean Go champion Lee Sedol.
In America, Go is not big, but in East Asia it's huge. It's a strategy game, much more complex than chess.
[00:45:08] Jordan Harbinger: It looks like Othello for people who know what that is, right?
[00:45:11] Yuval Noah Harari: Again, it was invented in ancient China more than 2,000 years ago and was considered a kind of cultural treasure, not just in China, also in Korea, Japan. Like if you are an educated, civilized Japanese or Chinese person, today or a thousand years ago, you learn to play Go.
And for 2,000 years, tens of millions of people in East Asia played Go. They came up with all kinds of strategies and philosophies of how to play this game, which was seen as a metaphor for life, which was seen as a good preparation to be a politician, to act in the world. And people thought they knew how to play Go.
And then AlphaGo came along. And it's not just that it defeated Lee Sedol, it crushed Lee Sedol with strategies that were just totally alien. When it first played, the human commentators, who are expert Go players, they said, this is nonsense. I mean, this AI program, it makes mistakes like a little child. I mean, nobody plays Go like that.
And then it turned out that it was actually a brilliant strategy. Now what we understand is that you can imagine Go as a kind of planet, as a geography, as a landscape: Planet Go, all the ways you can play Go. And for more than 2,000 years, humans were stuck on one island on Planet Go. They explored, for 2,000 years, only a small island in the landscape of all the possible ways you can play Go.
And they didn't understand it. They thought they knew the whole planet, because our minds are limited, we think only in certain ways about reality. And then AlphaGo came along and it discovered entire new continents on Planet Go, because it doesn't think like a human. It's not limited by the limitations of our brains and our evolutionary process.
Now, if this is just Go, you say, well, it doesn't matter, it's just a game. So, okay, so it knows how to play Go better. But what if the same thing happens in art, in politics, in finance, in religion? And it could be good. It's not necessarily bad. But before we rush to make judgment, just understand the kind of historical implications of having another superintelligence on the planet that can make independent decisions, can invent new stuff, and can do so in ways which are really alien to the way we make decisions and invent new ideas.
[00:47:45] Jordan Harbinger: On the plus side, right? It might be great to have something reformulate plastic so that it's completely safe to throw into the ground and turns into beautiful potting soil after a year, but doesn't disintegrate when you want it outside, protected against the rain. But what happens when crucial decisions for humanity, like banking policy or something, are decided by AI, but we don't really understand how it's doing that? How can we get inside those black boxes? Do we know why AlphaGo made, I think it was called, Move 37? Move 37. Yeah. Do we know why it does that, or do we just go, wow, that was brilliant?
Yes. And why did you do that? And it goes, I don't know, I'm just really smart.
[00:48:25] Yuval Noah Harari: Basically, I mean, it played millions of games against itself, and it spotted patterns, right, that we didn't spot. You know, the whole issue about explanations, like people now say, okay, we need a right to an explanation.
Like you apply to a bank to get a loan. Increasingly, it's an AI deciding whether it'll give you a loan or not. Let's say the AI says, no, don't give this person a loan. So in some places, like the European Union, there is now regulation that the human deserves an explanation. There is a right to an explanation.
If the bank said no, I deserve to know why not. What's wrong with me? Right? Now, this sounds like a good idea. The problem is that the explanation will not be this kind of one-liner, okay, it's because of your ethnicity or your gender or your previous credit history. No. The way AI makes decisions, it goes over enormous amounts of data about you
and spots patterns in the data. So if the bank really wants to give an explanation, they will send you a book with a million pages.
[00:49:29] Jordan Harbinger: It'll look like a copy of one of your books. Yeah.
[00:49:31] Yuval Noah Harari: Yeah. I mean, this is the explanation. This is, again, this is what makes it alien: that humans usually make these decisions on the basis of just one or two data points.
[00:49:40] Crosstalk: Sure.
[00:49:41] Yuval Noah Harari: Which is why we get problems with racism and homophobia and so forth. Because someone looks at you, ignores all the other data about you, just looks at your skin color and says, okay, I won't give this person a loan. Yeah. And this is something we can fight against
[00:49:57] Crosstalk: right
[00:49:57] Yuval Noah Harari: now, the AI doesn't work like that.
It doesn't work on the basis of a single data point. It can be racist, but it's racist in a much, much more complicated way. Sure. It goes over millions of data points about you, spots patterns in this data, and based on that makes its decision. And for us, it's very difficult to understand how it made the decision.
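To make that concrete, here's a minimal sketch in Python, with entirely invented numbers and features, of a decision assembled from a million tiny signals rather than one rule. Nothing here reflects any real bank's model; it just shows why the honest "explanation" is a million-page book rather than a one-liner.

```python
# Toy illustration (hypothetical weights and features, not any real system):
# a "decision" that is the sum of a million tiny nudges, none of them decisive.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 1_000_000                        # imagined data points about one applicant
weights = rng.normal(0.0, 0.001, N_FEATURES)  # each signal matters only a little
applicant = rng.normal(0.0, 1.0, N_FEATURES)  # the applicant's feature vector

score = weights @ applicant                   # aggregate of all the tiny nudges
print("approve" if score > 0 else "deny", round(float(score), 4))

# A human-style explanation would name the deciding factor. Here there isn't one:
contributions = weights * applicant
biggest = np.argmax(np.abs(contributions))
share = abs(contributions[biggest]) / np.abs(contributions).sum()
print(f"largest single factor explains only {share:.6%} of the total")
```

Run it and the largest single "reason" accounts for a tiny fraction of a percent of the score, which is exactly why a one-line explanation of such a decision can't exist.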
[00:50:21] Jordan Harbinger: Right? Yeah. We can't really parse all of that when it says, well, we noticed you only shave on days where you have a podcast on video, and then there's a hundred thousand little things like that. Yeah. And it says, so we've sort of just decided that you're a thousandth of a percent too risky for us. It's like, wait a minute, and then you've gotta unpack those things.
Well, how does that reflect on this? We don't really know. We just know that other people who only shave three times a week, eh, they don't pay their bills on time.
[00:50:45] Yuval Noah Harari: Exactly. Yeah. And again, there's millions of data points, and it's not the way humans make these decisions, but increasingly, this is the way that AIs make decisions.
Again, it's banks, it's warfare. Like if you look now at the war in Gaza, there is a huge argument about it. We can get into this argument, but we know that AI is now being used to select the targets. Oh, really? Yes. Oh, wow. Like when they decide to bomb a certain building and say, oh, this is a Hamas headquarters.
Let's bomb it. Very, very often, it's an AI that was involved, at least in the decision-making process. There is a huge argument, I don't know what the answer is, to what extent humans are still involved in the decision-making process. I talked with a lot of people, also from inside the army, in the security system.
They give you different versions. I'm not sure what is the truth. I'm not sure yet who to believe. They might not even know. I don't know, but everybody agrees. Yeah. That we are at a stage when basically AIs, if they are given authority, can now call the shots, literally. That you can now have armies bombing buildings, bombing people, based on decisions made by AI.
Now, one camp says, yes, the AI goes over enormous amounts of information, and based on its own calculations it identifies a certain building as a target and tells you, bomb that building. But we don't listen just to the AI after it told us to bomb the building. We have human analysts going over all the information, making sure that this is correct before we bomb.
Yeah, that's one version. The other version says, no, no, no, it's not like that. They just do what the AI tells them. Oh, geez. I don't know who is right. Sure. I'm not sure to what extent humans are still in the loop, but everybody agrees that technically we are already in an era when you can run a war with an AI system deciding what to bomb and who to kill.
Sure. It's now an ethical question. Do we want to do it or not?
[00:52:49] Jordan Harbinger: And of course that's only a hop, skip, and a jump away from: if we don't have human oversight because we don't need it and it just delays everything, then we don't even need to have a human pull the trigger. Just have the AI do it.
[00:52:59] Yuval Noah Harari: Yeah. Time is a crucial element, because even people who say, you know, it's very dangerous to just give an AI the authority to bomb people,
we need humans to go over all the data. But then other people say, yes, but this takes a lot of time. For an AI, it takes like two minutes; for human analysts, it can take like two days. By the time they finish all the analysis, maybe the terrorists are gone. Right? And then you get to another stage, when both sides have access to this technology.
So if you have some future war, let's say between NATO and Russia, and both sides have AI weapons, and one side just gives the AI the authority, you spot a target, just shoot. And the other side says, no, no, no, we need humans to verify everything. Right. And so it's an unequal battle. Yeah.
Like you have two drones. By the time the humans give the one drone the authority to shoot, it's blown up by the other drone. So, yikes, just the element of time, this is really, really crucial, because it's not just in warfare, it's in everything. Humans are organic and we work by cycles. Compared to AI, we are very slow.
We need to rest. We need to sleep, day and night, winter and summer. We are organic. We live by organic cycles. AI is not organic. It doesn't live by cycles. It never needs any rest. It doesn't need to sleep. It doesn't care if it's day or night. It doesn't care if it's winter or summer. It's always on. And in this competition, the question is, would the AI adapt to us and slow down, or would we have to kind of ramp up and speed up and speed up until we collapse?
Right? And again, think about finance, with which we began. Wall Street originally was a human institution, an organic institution which takes rests. Yeah.
[00:54:53] Jordan Harbinger: You have the guys yelling on the floor, and then when it's five, they throw everything up in the air and walk out. Right. They stop. Yeah,
[00:54:57] Yuval Noah Harari: they go, they go home.
Yeah. They go to their families, they go to have a beer, they sleep. The market is open, as far as I know, 9:30 in the morning to 4 o'clock in the afternoon, Mondays to Fridays, that's it. And it's also closed on Christmas and Martin Luther King Day. Right. And several other days. And this is human. Now, if you give AI control of finance, which is happening. So AIs don't need to rest.
And they don't have holidays. So you have a 24-hours-a-day, 365-days-a-year cycle. And either the humans get out, or if they want to stay in the game, they need to give up their private life, their sleep, their family, and eventually they collapse and die. And this is part of the dilemma we are facing: how do we make AI slow down
instead of us running faster and faster until we collapse? And when I talk with the people in the industry, they are aware of the problems, but they all give the same answer: we can't slow down, because the competitors won't slow down. Like everybody will tell you, we would love to slow down. Like OpenAI would say, we would love to slow down.
But then the other companies, how can we trust them that they won't fast-forward, right? How can we trust the Chinese? Yeah. China. Yeah. And then you have this paradox: the humans can't trust each other. We are unable to solve the problem of human trust, and therefore we speed up the development of AI.
But where is the guarantee we can trust the AI,
[00:56:28] Crosstalk: right? Yeah.
[00:56:29] Yuval Noah Harari: I mean, you are so suspicious of the other humans, and yet you have so much trust in the AIs. You know you'll have an AI T-Rex in 20 years and you think you can control it. You think you can trust it more than you trust the Chinese. Oh, it's
[00:56:42] Jordan Harbinger: our T-Rex.
Don't worry. Yeah. This
[00:56:44] Yuval Noah Harari: is what they say. Yeah. We'll design the
[00:56:45] Jordan Harbinger: T-Rex.
[00:56:46] Yuval Noah Harari: It'll be friendly
[00:56:47] Jordan Harbinger: as T-Rexes often are. Right. Friendly, friendly and cuddly. Yeah. Yeah. Before AI enslaves us all, take a moment to support our sponsors. We'll be right back. This episode is sponsored in part by Ramp. Looking for a way to streamline your business's finances, from managing expenses to handling vendor payments and accounting?
Ramp could be just what you're after. It's a corporate card and spend management platform built to save time and help you cut out unnecessary costs. Ramp gives you complete control and visibility into your spending. You can issue cards to every employee, set limits and controls, automate expense tracking.
Say goodbye to month-end chaos. Ramp automatically gathers receipts and categorizes expenses in real time, so you're never gonna have to chase down missing paperwork again. If we had Ramp back in our old business, we would've saved a ton by keeping a lid on out-of-control spending and stopping employees from using the company card like their personal wallet.
Seriously, people were ordering like two lunches. It's absolutely ridiculous. With Ramp, you can close your books eight times faster, giving you more time to focus on growth, and companies using Ramp save an average of 5% in their first year alone. It's quick to get up and running, too. Whether you've got a team of five or 5,000, you could be set up with virtual and physical cards in under 15 minutes
[00:57:51] Jen Harbinger: and now get $250 when you join Ramp.
Just go to ramp.com/jordan. ramp.com/jordan. That's R-A-M-P.com/jordan. Cards issued by Sutton Bank, member FDIC. Terms and conditions apply.
[00:58:04] Jordan Harbinger: This episode is sponsored in part by the Defender. We all have those big goals that seem just outta reach, right? But the truth is, that's what keeps us moving forward.
For the people who embrace challenges and explore their way, there's the Defender. The Defender is built to handle whatever comes its way with legendary capability, on road or off. It's engineered with a tough, rigid body, tested to the extreme, and built with durable, lightweight architecture for strength and confidence.
But it's not just about ruggedness. It's an icon reimagined, with a design that feels modern yet honors its adventurous roots. Plus, there's a Defender for every kind of explorer, from the Defender 90 to the 110, and even the 130, which seats up to eight people. So whether it's just you or the whole family, there's a model for your journey.
If you're ready to embrace the impossible, the Defender is your perfect partner, beyond capable and ready to go wherever you're headed next. Build your Defender at landroverusa.com. This episode is sponsored in part by Brooks. Some of my friends at Brooks just sent me a new pair of their Glycerin Max. I've gotta say, these shoes are something else.
They're the first of their kind for Brooks, and they're all about making your run feel as smooth and effortless as possible. What stands out is the DNA Tuned cushioning. This is next-gen nitrogen-infused foam that's tuned specifically for different parts of your foot. So the heel has larger cells for those super soft landings.
The forefoot has smaller cells to give you that responsive push off. It's like your foot just rolls through each step. And speaking of rolling, the Glide Roll rocker they've built into the shoe makes transitioning from heel to toe feel almost effortless. It's all designed to keep your legs feeling fresh, even after long runs.
Honestly, with how soft and responsive these shoes are, you feel like you could just keep on going forever. I've been a fan of Brooks shoes for a long time. I first got introduced to the brand at a great local running shoe store where they analyzed my gait and recommended Brooks to me based on a lot of things that said I was gonna have a bad back, bad knees, bad ankles if I didn't start wearing shoes like Brooks.
So if you're looking to elevate your running game, the Glycerin Max is definitely worth checking out. Head over to brooksrunning.com to learn more or grab your pair now. If you like this episode of the show, I invite you to do what other smart and considerate listeners do, which is take a moment and support our amazing sponsors, those who make the show possible.
All the deals, discount codes, and ways to support the show are searchable and clickable over at jordanharbinger.com/deals. You can also search for any sponsor using the AI chatbot on the website as well, at jordanharbinger.com/ai. Worst-case scenario, you email me. I am happy to dig up these codes for you, because it is that important that you support those who support the show.
Now, for the rest of my conversation with Yuval Noah Harari. You know he's a big deal 'cause he's got three names. What disturbs me as well is if AI is doing something, for example, AlphaGo, and we can't really understand why. Of course we want AI to run... well, I shouldn't say of course. Many people want AI to run our financial system and make it super efficient.
Yeah. Or other government functions. Healthcare, for example. But what happens when we don't understand how our healthcare system works at all? Because that'll eventually happen, right?
[01:00:52] Yuval Noah Harari: Again, that's the danger. You know, this is another version of how democracies collapse.
You can still have elections, but if you can't understand most of the decisions the system is making, what is the meaning of choosing this president or that president? Again, take healthcare, like the choices about what treatment to give you, or which treatment to give to which people. If we don't understand how the AIs are making the decisions, then how do we make sure that it's fair?
[01:01:19] Crosstalk: Right? Yeah. That there is no
[01:01:20] Yuval Noah Harari: element of racism or bias or whatever. Exactly. And in more and more areas, the problem is that we just can't understand the decisions that shape our life. And this, I think, brings me to one of the most important and neglected issues in human history, which is bureaucracy.
The problems we have with AI are just a magnified version of the problems we had with bureaucracies throughout history. And the problem with bureaucracy, in essence, is that it's alien and boring to most humans. For millions of years of evolution, there were no bureaucratic systems anywhere. Humans lived in small bands.
You knew everybody. You understood all the decisions made in the band or in the tribe. That's it. And in those days, you know, the main stories were mythological stories about heroes and gods and ancestral spirits. And then, very, very recently in human evolution, just about 5,000 years ago, after the invention of writing and documents, bureaucracy appeared. Bureaucracies are systems of managing society with documents,
with these complex systems of forms and manuals and tax registers and so forth. They are absolutely essential for large-scale systems. There isn't a single large-scale system that can function without bureaucracy. It's like the nervous system of this multicellular body. So whether it's an army or a country, of course, but also a church, a university, you need bureaucrats; without them, it doesn't function. And people find it very difficult to understand how bureaucracies function because, again, they are very recent in evolution, and we don't have many stories about them. That's right. I mean, when was the last time you saw a Hollywood blockbuster about
[01:03:12] Jordan Harbinger: bureaucracy? In fact, the only word I associate with bureaucracy is Byzantine. Yeah. A Byzantine bureaucracy. Where did that come from, by the way? I know that's a non sequitur, but where did that come from? Did they just have a complex bureaucracy, or did they kind of invent it?
[01:03:24] Yuval Noah Harari: they had the most complex bureaucracy. I see at the time, very difficult to understand. I see. But kept the empire going, you know, collecting taxes, paying soldiers and and so forth.
And again, the key thing to understand is bureaucracy is not necessarily bad. I mean, we couldn't have large-scale systems, countries, cities, without them. But lots of people, because they don't understand them, have these conspiracy theories. Oh, it's the deep state. When people talk with me about the deep state, I immediately think about the sewage system.
The sewage system is the deep state, the literal deep state. Yeah. It's the literal deep state. It's this system of pumps and pipes and canals and reservoirs going under our houses and neighborhoods and streets, built by the governments and the municipalities. Who knows what happens there, right? Yeah.
And you know, you go to the toilet, you do your thing, you flush it down. Where does it go? It goes to the deep state, and it saves our life. Sure. It protects us from disease. The sewage system began, one of the places it began, in 19th-century London. For most of history, big cities were the dirtiest and most disease-ridden places on the planet.
You cram tens of thousands of people together with their goats and chickens and with their garbage and sewage, you get paradise for germs. Oh, god. And a lot of epidemics. In London, in the mid-19th century, there was a cholera epidemic, and people did not know what was causing this epidemic. You had different theories.
Oh, it's enemies poisoning the wells, it's witches, it's whatever. And then you had one bureaucrat, basically, called John Snow, not the person from Game of Thrones. That's real? Wow. The real John Snow. Wow. Who saved humanity, almost like the Jon Snow from Game of Thrones, but by bureaucracy. He was trained as a doctor, and he suspected the problem was actually in the water.
So what he did was he just went around London making lists. Anytime he heard about somebody who died from cholera or fell sick with cholera, he would go there and interview the family, the people. Where do you get your drinking water from? Yeah, from the shit-filled well right over here. Yeah. And he filled these long forms and papers with all this data, and based on that, he pinpointed
a certain well in Soho, as almost everybody who fell sick from cholera drank water from that well. Wow. And when they investigated, they discovered that the well was dug next to a cesspit full of sewage water, and the water from the cesspit simply seeped into the well, and people drank it and fell sick. Now today, if you want to dig a well or a cesspit in London or in Los Angeles, you have to fill in so many forms.
Yeah. And you have to wait for permits and all this bureaucratic hassle, which saves us from cholera. And this is bureaucracy.
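For the data-minded, Snow's method was, at its heart, a tally: group the cases by water source and let the outlier point at the cause. Here's a toy sketch of that idea in Python; the counts are invented for illustration, though the Broad Street pump in Soho really was the culprit.

```python
# A minimal sketch of John Snow's case-tabulation method.
# The interview counts below are invented for illustration.
from collections import Counter

# Each entry: the water source a cholera victim reported drinking from.
interviews = (
    ["Broad Street pump"] * 61
    + ["Warwick Street pump"] * 3
    + ["private cistern"] * 2
)

cases_by_source = Counter(interviews)
suspect, count = cases_by_source.most_common(1)[0]
print(cases_by_source)
print(f"suspect source: {suspect} ({count} of {len(interviews)} cases)")
```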
And again, people find it difficult to understand, partly because it's boring. It's very difficult to do an interesting Netflix series about the sewage system or about the budget. How does the government decide how to allocate the budget? And then there are all these accountants. Yeah. And so forth. And this inability or difficulty to understand bureaucracy is dangerous, partly because it's fertile ground for conspiracy theories, and partly because sometimes bureaucracy is dangerous.
And we need to be able to tell the difference between when it acts in our favor and when there is corruption and bias and so forth. And this is how it links to AI: AIs will become the new bureaucrats. I mean, when people talk about the danger of AI, they often have the Hollywood scenario of the Great Robot Rebellion.
Yeah. Skynet or whatever. Skynet, the big computer trying to take over the world. That's not the danger. The danger is millions of AI bureaucrats, everywhere, in the banks, in the government, in the armies, making decisions about us without us knowing or being able to understand how the system works and how they make all these decisions.
Now, again, it's not necessarily evil. Bureaucracy can work in our favor, but if we don't understand how the system works, how do we make sure that it is benign, not malevolent?
[01:07:40] Jordan Harbinger: That's the big question that we face with AI. Do you know much about China's social credit score system? Speaking of bureaucracy gone wild.
[01:07:46] Yuval Noah Harari: I know about social credit systems in general. I'm not an expert on what specifically happens in China, especially because you don't have one social credit system there, right? You have dozens of different systems and experiments; they're testing different approaches to it.
[01:08:02] Jordan Harbinger: I know it's a pretty limited rollout, where each area, like you said, has its own. Yeah. It might even be a province thing, I'm not quite sure. Some of my Chinese teachers told me a little bit about how it works. The dystopian element came when, of course, if you don't pay your bills, it's hard to book airline tickets and things like that. And
[01:08:18] Yuval Noah Harari: it's basically, I mean, a social credit system is basically an expansion of money, or a new kind of money, when you think about traditional money, gold coins, dollars, even Bitcoins.
They give value only to specific parts of human reality, right? Some things you do, like you work, you gain money. You want to buy an airline ticket, you pay money. But other things, you go visit your friends, you go visit your grandmother, even you throw trash in the street, these have no monetary value. The idea of social credit is to monetize everything, to give value to every single thing you do in life.
And this is your score. So even things that traditionally were not about money, they're about reputation or status, suddenly you get precise points for them. And also, anything you want to do, you need to use your social credit for it. So again, it has positive potential in some regards, but it could create the most totalitarian systems in history, right?
Where anything you do impacts your ability to get a job, to gain a loan, right, to travel. And in a way, you see this not only in China, you see it all over the world, also in the US, because it's a function of surveillance. Traditionally, only some areas of life were monitored and surveyed. Even in dictatorships, you had some privacy. If you live in the Soviet Union,
the KGB can't follow you around all the time. They just don't have enough agents, right? They tried, but they can't. You know, you have 200 million Soviet citizens; they don't have 200 million agents. And even if an agent follows you, what do they do? At the end of the day, they write a paper report about you and send it to KGB headquarters in Moscow.
So every day they get millions of reports. You need analysts to read them and analyze them; otherwise, it's just worthless paper. So even if a KGB agent saw you do something, chances are the report about you would just lie in a KGB archive and nobody would read it. But now you can monitor everybody all the time.
You don't need human agents or analysts. You have the computers, smartphones, cameras, drones, microphones everywhere, and you have the AIs analyzing the whole ocean of information. So this creates the potential for total surveillance, and it can take the form of the social credit system. Again, anything you do, it raises or lowers your score.
But you also have it in the West: you did something legal but stupid at some college party 10 years ago. Yeah.
[01:11:03] Crosstalk: Yeah.
[01:11:03] Yuval Noah Harari: It can come to haunt you today when you apply to it. Some job you run in politics, you want to be a judge, whatever. They'll find out this email you wrote 10 years ago, this stupid joke you told 10 years ago, and then the hall of life becomes like this one long job interview.
[01:13:21] Jordan Harbinger: Yes. This is exactly what I'm afraid of. The note you wrote to your college girlfriend, which was a little ham-fisted breakup, is now, like, you're sitting in front of a judge. Yes. Deciding whether or not you get to be a lawyer. And they're like, why would you write that? Yeah. I don't remember. Well, we remember, right.
Nothing is forgotten.
[01:11:42] Yuval Noah Harari: Right. And the line between private and public is erased. I mean, the thing is that in public, we certainly need to kind of police what people say and do. There are limits to what you can do and say in public, but they are different from the limits in private. You know, if you think about politicians, for instance, I think politicians have a right to stupidity in private.
Sure, yeah. Like if you stand in front of the cameras and give a speech to millions of people, your words are like seeds that go into the minds of millions of people. If you plant seeds of hatred in millions of minds, this is very, very dangerous, and this should be restricted. But if you are offline, you are just with a couple of close friends, and you say something stupid, that's your own business.
Nobody should know about it. Nobody should censor you for it.
[01:12:34] Jordan Harbinger: Yeah, that's certainly an interesting perspective and something to consider. I guess when I think about my Chinese teacher telling me about her friend, they use an app called WeChat, which is essentially like WhatsApp or something, but much more comprehensive.
She said she was talking to another friend of hers, and there was a little badge next to his name, and she clicked on it, and it said something like, this person does not pay back money that he owes to companies or people. I mean, imagine, you're just like, hey, Angela, do you wanna meet for lunch?
Hmm, yeah, but you should bring cash, because I heard from WeChat that you don't pay your bills, pal. Yeah. Like, oh, is this a misunderstanding? They were sending it to the wrong house. Well, I don't know, but all I know is, you broke ass, better bring some cash. That's kind of a silly example, but it's that kind of perpetual job interview.
Yeah.
[01:13:21] Yuval Noah Harari: And you know, like in Iran today, this is not science fiction, this is actual reality. In Iran, they have the hijab laws. Yes. Which say that women must always cover their hair when they go out, including even in their own car. Like you take your car somewhere, crazy as it sounds, you must wear the hijab in the car.
Wow. Now, until recently, they had a big problem enforcing these laws, 'cause what do you do? You place a policeman on every street corner to make sure that all the women cover their hair? And now they have AIs, they have these surveillance cameras all over the place. And it's a daily occurrence that a woman would go in her car without a veil, without a hijab, and some surveillance camera
would identify that this is a woman, she doesn't wear the hijab, identify who she is, and immediately, automatically, it doesn't go to a judge, it doesn't go to any policeman, nothing. Oh, wow. The AI immediately sends an order to impound the car. You broke the hijab law. Wow. Stop the car. And she gets it on her phone, and if she doesn't obey, she's in real trouble.
Right. And you don't need the human policeman anymore. It's now being policed by the AIs. And people say the same thing can happen in the US with abortion. If you think that abortion is murder, like millions of Americans think, then wouldn't you use AI to stop murder? Wouldn't you build a countrywide surveillance system that monitors if women are pregnant and suddenly are not pregnant?
Oh, interesting.
[01:14:49] Jordan Harbinger: Yeah. So
[01:14:49] Yuval Noah Harari: it's not just a problem for the Iranian women, it can arrive here very quickly.
[01:14:53] Jordan Harbinger: Yeah, I had not thought of that. I mean, they would have to do that at the federal level, I guess, with people going inside and outside of a state. Didn't you leave the state pregnant, and you came back with no baby?
So what'd you do over there in California? You didn't just go to Universal Studios. Yeah. I mean, that could be really scary, and
[01:15:08] Yuval Noah Harari: And the technology is there.
[01:15:10] Jordan Harbinger: Yeah. Again,
[01:15:11] Yuval Noah Harari: to do it with human agents, very, very difficult. But to do it with all these cameras and microphones and smartphones everywhere, very easy.
[01:15:19] Jordan Harbinger: There's also the data issue, right? Before, we had the Iron Curtain, and now we kind of have, I think you've coined this term, the Silicon Curtain. The curtain, yeah. Yeah. I like that. Because the question becomes, hey, does our data go to Beijing? Does it go to Silicon Valley? Which one is worse? Aren't they both kind of bad?
You have a set of rules that maybe people should listen to when they're creating these things. Not that I have any hope that they will actually listen to your suggestions, but I like them just in case. Yeah.
[01:15:43] Yuval Noah Harari: And when we come to, you know, what we can do, the first thing to understand is technology is not deterministic.
It's not like you create a technology and there is only one way it can go. In the 20th century, you know, we had electricity and radio and cars, and we had the Soviet Union and we had the United States using the same technology. So also in the 21st century, just creating AI doesn't mean there is just one future. There are choices to be made, and we had better make wise choices while we still have the power. It's impossible to regulate in advance
Yeah. All the different dangers and threats, because this is developing very fast. So I think we shouldn't think in terms of rigid regulations. We should think in terms of creating living institutions that are staffed by some of the best human talent and have the resources to, first of all, understand what is happening.
You know, that we don't have to just trust the companies or a few governments, but we have an independent, maybe international, institution that can tell people all over the world what is really happening with AI. And this would be the basis for a public debate about what we should do. But first we need to understand, and then there are always regulations that to some extent we can agree on.
Like, just to give two examples, because we don't have a lot of time. One key regulation is that corporations should be liable for the actions of their algorithms, okay? The same way that if you produce a car and the car malfunctions, this is the fault of General Motors or whoever, it should be the same with technology.
Again, it should be very clear: I'm not saying that companies should be liable for the actions of their users. Like, if you're YouTube and some human user uploaded a conspiracy theory to YouTube, I think we should be very, very careful before YouTube censors humans. Very careful about that. But if the YouTube algorithm then deliberately recommends and autoplays this video to millions of people, to increase user engagement, to make more money for YouTube, this is on YouTube.
I see. This is not on the user that created it. This is the decision of the algorithm, and the company should be responsible for that. So that's one regulation. The other is that we should ban counterfeit humans. Fake humans, bots,
[01:18:02] Jordan Harbinger: essentially.
[01:18:03] Yuval Noah Harari: Yes. I mean, bots that masquerade as human beings. I see. AIs are welcome to communicate with us as long as they identify as AIs.
But if you talk with a doctor online and it pretends to be a human doctor, a human being, but it's actually an AI, that's very, very dangerous. So, okay, let's talk with the AI, maybe it has good advice, but I need to know that this is an AI and not a human being.
[01:18:28] Jordan Harbinger: Yeah. Yeah. And ideally that a human being went, yeah, that's all correct.
He's not trying to kill you. Right. Or that there's some liability for that person. Yeah. I can get behind that. I know we're out of time, and as an experiment, I'm gonna end this interview by asking you a question that I asked ChatGPT, oh, to ask: how do I end an interview with Yuval Noah Harari? And of course, it gave me a long explanation that I didn't really need.
But the question is, if you could send a message to humanity that would be read a thousand years from now, what would it be? Yeah.
[01:18:58] Yuval Noah Harari: Let me think about it. Yeah, go for it. Make sure you are solving the right problems. I mean, humans are very good at solving problems, but they often focus on the wrong problems.
It's been happening for thousands of years. We are solving problem after problem, and the situation just seems to get worse, because we are solving the wrong problems. I mean, I would have, like, today, the smartest people in the world working on the problem of how do you establish trust between humans before working on how do we create super AIs.
And, you know, if there are humans a thousand years from now, I guess we solved at least some of the problems facing us. Yeah. But I think it's still good advice: before you rush to solve a problem, make sure that you are working on the right problem.
[01:19:43] Jordan Harbinger: I like that. "If there are humans a thousand years from now" is scary.
I mean, maybe we'll be in zoos, who knows? Maybe we already are. But thank you so much. I'm glad we finally were able to make one happen in person. Definitely have to do it again. I always make seven or eight pages of notes and I go, I hope that's enough. And then on page three, it's like, Hey, I gotta go, man.
So that's, that's always a sign of a good conversation. Absolutely. And I really appreciate it. Thank
[01:20:04] Yuval Noah Harari: you.
[01:20:06] Jordan Harbinger: Imagine facing a rare, incurable disease and finding out that AI could repurpose an FDA-approved drug as a potential cure. That's the breakthrough achieved by Dr. David Fajgenbaum and the mission of his company.
[01:20:18] Clip: I'll never forget, the doctor walks in the room and says, David, your liver, your kidneys, your bone marrow, your heart, and your lungs are all shutting down. That's it. Like, we've tried everything. There's nothing more that we can do. I was terrified. I had my last rites read to me. Of course, you know, no one thought that it was even possible that I could survive.
You're dying from this horrible disease. Chemotherapy just gave you a little bit of a window, but it's probably gonna come back. So, you know, what's your game plan to prevent this thing from killing you? Well, the only way to get back is to use the tools that you have within reach. I'm like, shit, I've got this horrible disease.
And the only way that like I might be able to save myself was if I can find a drug that's already at the CVS. And so my mission then became could I figure out what the hell's going wrong in my immune system? So then maybe I could find a drug that already exists that could treat it. I'm not supposed to be here, like my drug wasn't made for me.
It saved my life. It was always there. I am completely on fire about this idea that there are drugs at your nearby CVS, your nearby Walgreens, that could help more diseases and more people, but the incentives aren't aligned for us to do that. So we created Every Cure a couple years ago, because we believe that every drug should be utilized for every disease it possibly can, regardless of, you know, whether it's profitable or not.
For 80% of our drugs that could help people today and tomorrow, no one's doing any research whatsoever to figure out more uses for them.
[01:21:45] Jordan Harbinger: Tune into episode 1005 of The Jordan Harbinger Show to explore how existing medications are bringing new hope to those confronting elusive illnesses. All things Yuval, do I have to say all three names?
I'm just gonna say Yuval. It's all in the show notes at jordanharbinger.com. Advertisers, deals, discount codes, ways to support the show, all at jordanharbinger.com/deals. Please consider supporting those who support this show. Also, our newsletter is a great companion to the show. It's called Wee Bit Wiser. It drops every Wednesday.
The idea is to give you something specific, practical, something that'll have an immediate impact on your decisions, your psychology, or your relationships. Again, it's like a two-minute read, and if you haven't signed up yet, I invite you to come check it out over at jordanharbinger.com/news. Don't forget about Six-Minute Networking over at sixminutenetworking.com.
I'm at Jordan Harbinger on Twitter and Instagram. You can also connect with me on LinkedIn, and this show is created in association with PodcastOne. My team is Jen Harbinger, Jase Sanderson, Robert Fogarty, Ian Baird, and Gabriel Mizrahi. Remember, we rise by lifting others. The fee for this show is you share it with friends when you find something useful or interesting. The greatest compliment you can give us is to share the show with those you care about.
So if you know somebody who's into these kinds of heady topics, populism, authoritarianism, AI, or maybe they're just a big Yuval Noah Harari fan, definitely share this episode with 'em. In the meantime, I hope you apply what you hear on the show so you can live what you learn, and we'll see you next time.