AI advocate Marc Andreessen joins us to clear up misconceptions about AI and discuss its potential impact on job creation, creativity, and moral reasoning.
What We Discuss with Marc Andreessen:
- Will AI create new jobs, take our old ones outright, or amplify our ability to perform them better?
- What role will AI play in current and future US-China relations?
- How might AI be used to shape (or manipulate) public opinion and the economy?
- Does AI belong in creative industries, or does it challenge (and perhaps cheapen) what it means to be human?
- How can we safeguard our future against the possibility that AI could get smart enough to remove humanity from the board entirely?
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Please Scroll Down for Featured Resources and Transcript!
Please note that some links on this page (books, movies, music, etc.) lead to affiliate programs for which The Jordan Harbinger Show receives compensation. It’s just one of the ways we keep the lights on around here. We appreciate your support!
Sign up for Six-Minute Networking — our free networking and relationship development mini-course — at jordanharbinger.com/course!
This Episode Is Sponsored By:
- Airbnb: Find out how much your space is worth at airbnb.com/host
- Biöm NOBS: Get 15% off a one-month supply of NOBS at betterbiom.com/jordan
- BetterHelp: Get 10% off your first month at betterhelp.com/jordan
- Eight Sleep: Get $150 off at eightsleep.com/jordan
- Warby Parker: Go to warbyparker.com/JHS and try five pairs of glasses for free
- Nobody Should Believe Me: Listen here or wherever you find fine podcasts!
Miss our conversation with Google’s Eric Schmidt? Catch up by listening to episode 201: Eric Schmidt | How a Coach Can Bring out the Best in You here!
Thanks, Marc Andreessen!
If you enjoyed this session with Marc Andreessen, let him know by clicking on the link below and sending him a quick shout out at Twitter:
Click here to thank Marc Andreessen at Twitter!
Click here to let Jordan know about your number one takeaway from this episode!
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at friday@jordanharbinger.com.
Resources from This Episode:
- Marc Andreessen | Andreessen Horowitz
- Marc Andreessen | Twitter
- Marc Andreessen | Substack
- Why AI Will Save the World by Marc Andreessen | Andreessen Horowitz
- Our Approach to Alignment Research | OpenAI
- Aligning Advanced AI with Human Interests | Machine Intelligence Research Institute
- What Is GPT-3 and Why Is It Revolutionizing Artificial Intelligence? | Forbes
- What Is ‘AI Alignment’? Silicon Valley’s Favourite Way to Think About AI Safety Misses the Real Issues | The Conversation
- Steel Drivin’ Man: John Henry, the Untold Story of an American Legend | Virginia Museum of History & Culture
- Can Democracies Cooperate with China on AI Research? | Brookings
888: Marc Andreessen | Exploring the Power, Peril, and Potential of AI
This transcript is yet untouched by human hands. Please proceed with caution as we sort through what the robots have given us. We appreciate your patience!
[00:00:00] Jordan Harbinger: Special thanks to Airbnb for sponsoring this episode of the Jordan Harbinger Show. Maybe you've stayed at an Airbnb before and thought to yourself, yeah, this actually seems pretty doable. Maybe my place could be an Airbnb. It could be as simple as starting with a spare room or your whole place. While you're away, find out how much your place is worth at airbnb.com/host.
[00:00:18] Coming up next on the Jordan Harbinger Show.
[00:00:21] Marc Andreessen: You know, one argument is, look, the smartest people are going to be so much better at using the tool, right? They're going to just run way out ahead of everybody, and that's going to be a big driver of inequality. The other argument, the one these studies are already showing, is: no, all of a sudden people with less intelligence or skill or experience have a superpower they didn't previously have.
[00:00:42] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people and turn their wisdom into practical advice that you can use to impact your own life and those around you. Our mission is to help you become a better informed, more critical thinker through long-form conversations with a variety of amazing folks, from spies to CEOs, athletes, authors, thinkers, performers, even the occasional drug trafficker, former jihadi, four-star general, rocket scientist, or Russian chess grandmaster.
[00:01:12] And if you're new to the show, or you want to tell your friends about the show (and I always appreciate it when you do that), I suggest our episode starter packs.
[00:01:30] These are collections of our favorite episodes on persuasion and negotiation, psychology and geopolitics, disinformation and cyber warfare, crime and cults, and more, to help new listeners get a taste of everything we do here on the show. Just visit jordanharbinger.com/start or search for us in your Spotify app.
[00:01:46] To get started today, a deep dive on AI with Marc Andreessen, founding partner at Andreessen Horowitz, also known as a16z, one of Silicon Valley's best-known venture capital firms. Marc was around during the very early days of the internet, or at least the World Wide Web, in part inventing the web browser as we know it today.
[00:02:05] As an innovator and inventor himself, he's someone whose perspective on AI I'm keen to hear: why it probably won't actually try to kill us all, contrary to popular belief, or whatever happens to be trending right now, depending on when you listen to this, of course. And just a note: today I will be referring to AI both as AI and as an LLM, which is a specialized type of artificial intelligence that has been trained on vast amounts of text to understand existing content and generate original content.
[00:02:32] LLM stands for large language model. It's something like ChatGPT, for example, if you've used that. Anyway, here we go with Marc Andreessen. I'm gonna do that shitty journalist thing that you would expect from somebody doing a 10-minute segment on a mainstream news channel. But I think a great hook, and what a lot of people are wondering, from age 20 to age 60, is: will AI kill us all, either by accident or because it pulled one over on us?
[00:03:03] And I know you're not exactly utopian when it comes to AI, but you're not as cynical as a lot of the people out there that are readily available opining on this topic. I agree. So, what is the AI alignment problem that you see people keep telling you you're ignoring?
[00:03:21] Marc Andreessen: So, there's two dimensions to what's called AI alignment.
[00:03:24] There's significance to the vocabulary, because it actually started out as AI safety. So 20 years ago the topic was AI safety, and then about eight, ten years ago it kind of flipped to AI alignment, and that gives you kind of the two dimensions to this. And so the original AI safety concern basically was like the Terminator movies, right?
[00:03:38] So, like, we all know HAL and the Terminator, and, like, the machines are coming to kill us. And, you know, it's going to wake up as, you know, basically a Skynet kind of thing, and it's going to have, like, gleaming metal robots with laser guns, and it's going to kill us, right? Because it's going to be, you know, kind of a battle to the death for the domination of Earth and so forth.
[00:03:53] And so that was sort of the original thing, and that was so-called AI safety. And then about 10 years ago, um, basically what happened was a bunch of other people came along and basically said, well, it's not just whether AI is going to kill us, it's also whether it's going to destroy our society.
[00:04:06] Right. So maybe it leaves us physically alive, but it basically decides to, like, program our brains. And sort of this concern arose at the same time that the same concern arose for social media. Which is programming our brains anyway. Well, that's the theory. Right. So, well, this is the thing.
[00:04:19] This is the thing. So basically what happened, and you recall, just to take a brief digression on what happened to social media: social media was either viewed as, like, completely useless, which is like, what did your cat have for breakfast? Who cares? Or it was viewed as, like, purely a good thing.
[00:04:30] Right. And so when, like, Obama ran for reelection in 2012, it was like the social media campaign, and there were all these glowing cover stories about how incredible social media was. And then you remember the Arab Spring. Social media was going to bring democracy to the Middle East, right? And then in 2016, um, you know, a different candidate won.
[00:04:46] And the sort of political valence of social media changed too. Like, this is the worst thing in history. Like, you know, nobody could have possibly voted for this other candidate on purpose. They were obviously tricked, and they were tricked by some combination of the Russians and Facebook and social media in general.
[00:04:58] And it was sort of at that moment that the switch that flipped in social media also flipped in this sort of AI safety and alignment thing. And so that's sort of when AI safety became AI alignment. And so now AI alignment is much more concerned not with, are the robots going to kill us? It's much more: is AI going to give the correct answers?
[00:05:13] And specifically, the answers that are, quote-unquote, aligned with human values. Now, in the AI safety world, the people who worry about this stuff, the AI safety people, are very frustrated by this, because they're like, we were never worried whether AI was going to use, like, bad words or have, like, the wrong political opinion.
[00:05:29] Like, we're worried whether it's going to come to kill us. Yeah. The AI safety people have renamed themselves the "AI-notkilleveryoneists."
[00:05:34] Jordan Harbinger: Oh, that has a great ring to it.
[00:05:37] Marc Andreessen: Which is very catchy. Rolls right off the tongue. Exactly. And so there's like a schism kind of in this movement.
[00:05:42] And basically what's happened is the people who were worried about the, is it gonna kill everybody? That movement basically has been hijacked by a movement to basically try to do, say for good or for bad, the kind of speech controls, you know, sort of opinion controls, censorship controls, that you now see on social media.
[00:05:57] And now there's basically a big push to apply those to AI.
[00:05:59] Jordan Harbinger: That is scary. And I definitely want to get there, but I'm going to get there a little bit slower, because I know earlier, pre-show, we were talking about Sam Harris. And he gives this example of an AI that's purpose-built, let's say, for chess: it's the best chess player in the world.
[00:06:13] Garry Kasparov, waking up on his best day, gets beat by this thing ten times out of ten. And if the fate of humanity depends on us beating this chess AI, humanity's lost forever, ten times out of ten. That's not a good scenario. How likely is it really that we build a general intelligence, this angry god in a box, that ends up killing us all?
[00:06:33] Because, building one... and I know very little about specific computer-type applications, but here I would imagine it's a lot easier to build an AI that's really, really good at one thing like chess versus an AI that's like, I can outsmart all living humans.
[00:06:47] Marc Andreessen: Well, this gets to this concept of so-called artificial general intelligence, which is the idea that basically it's going to be smarter than everything.
[00:06:53] So go back a little bit in history here, because the idea of sort of an anxiety about a machine that's going to, like, outperform humans, um, and then lead to our demise, like, that's not new. Have you ever heard, when you were a kid, did you ever hear this thing, the Ballad of John Henry? Uh, yeah, yeah, but I don't know what it is. So this was, there was a whole anxiety around mechanization that took place during the Industrial Revolution, and specifically, you know, there were a lot of these same concerns.
[00:07:14] It was like, you know, are these things going to be death machines? And by the way, you know, technology was militarized, like, you know, they did make tanks and fighter jets and so forth and guns with it. But also there was this concern about eliminating all the jobs and, you know, basically causing everybody to become unemployed.
[00:07:26] So there were a lot of these same anxieties around industrialization. And so in those days, if you were, like, a big, strong guy, a job that you would have is you would go build the railroads, and you would literally drive spikes, you know, you've seen this in railroad tracks, you drive spikes into the beams to connect the tracks together.
[00:07:39] And at the time, the ballad, the legend goes, is there was this guy, John Henry, who was, like, the best at doing that. And then one day the nerds showed up with a pile-driving machine, right, which is this steam-powered thing that could do that even better. And then there was this big, whole-day-long contest where John Henry and the machine competed to drive the most spikes.
[00:07:55] And it turns out John Henry won the contest and then dropped dead of a heart attack.
[00:07:59] Jordan Harbinger: Yes, I was going to say, is this the one where the guy dies the day after he wins? Beats the machine, yeah.
[00:08:05] Marc Andreessen: Yeah, exactly. And so that became literally like a legend, and there's a big dispute over whether he actually existed.
[00:08:10] There was something like that, but that became kind of this lesson: learn how to use the machine. Yeah. How to use the machine. Right. And so, of course, you know, that led to predictions of, like, mass unemployment and so forth. Mm-hmm. And then of course what happened was the result of that technology was massive job creation.
[00:08:21] So the opposite of what everybody was worried about happened. It turned out that the existence of machines actually created jobs, uh, as opposed to destroying them. Which is why we sit here today and we have, you know, many more jobs in the world. So this is a very old concern. Uh, mm-hmm,
[00:08:34] it's kind of popping back up again. And so the way to think about this is kind of very consistent with kind of this historical model, which is like, okay, what is the role of technology and kind of how the world works and how the economy works and how people work? And there's sort of a zero sum view of it, which is either we do something or the machine does it.
[00:08:49] But then there's the other thing, which is the thing that actually happens, which is there's a positive sum view of it, which is what machines do is they amplify human capabilities. Right. So like you plus a computer, right, is better than just you. By the way, you plus a computer is much better at chess, right?
[00:09:02] You plus a word processor is much better at writing.
[00:09:04] Jordan Harbinger: I was going to say, at least the computer knows the rules of chess. Like, we're starting pretty low here.
[00:09:09] Marc Andreessen: Your podcast: you plus digital editing software makes you a better podcast creator, right? You plus a search engine makes you a better interviewer, right?
[00:09:16] You plus YouTube, right? Makes you a better broadcaster, right? You do things with technology in order to make yourself more effective. In economic terms, what that means is it's increasing the economic function called productivity. It's increasing output. And this is the economic phenomenon by which machines actually create jobs as opposed to destroying jobs. So if we were to get to what the, I don't know, utopians or dystopians hope for, which is this idea of artificial general intelligence, the result would be a massive takeoff of economic productivity that would lead to an economic boom far in excess of anything we've ever seen in history, which would lead to so much job creation that we would once again be completely out of human labor.
[00:09:49] And this has happened for 300 years. Like, this has been the pattern, and I fully expect it to continue.
[00:09:52] Jordan Harbinger: We were talking, I think maybe even before you walked in, about how Socrates was like, books? People aren't going to memorize anything. And then it became, like, now these people are just writing books based on knowledge that they've consumed from other books.
[00:10:04] But it's somehow still so hard for us to imagine that there's more work to be done than we're doing right now.
[00:10:11] Marc Andreessen: Let's take chess. So there are more people playing chess now than ever before. Chess as an industry is bigger than ever before. Chess as a competitive community is bigger than ever before.
[00:10:18] Like, internet chess is huge. Chess has never been a bigger game, right? And so basically what happened was, when chess got solved by computers, that was like a catalyst for a surge of interest in the field, and now more people play chess than ever, right?
[00:10:30] And again, there's a very kind of simple thing here, which is that the world runs according to human intent. And there's all these people who kind of want to read into it that the machines are going to get their own intent, but machines are just machines. We decide what to do with them. And just because there's a computer that can play chess better than you does not mean it's no longer fun to play chess.
[00:10:43] Jordan Harbinger: You don't think that the retraining, potentially, for certain classes of professionals will be very painful in the short term? Or is that just something that has to happen? Like ripping off the Band-Aid of not needing so many second-year associates in a law firm.
[00:10:57] Marc Andreessen: I mean, this is always a concern.
[00:10:58] So let me make a very explicit kind of case study. This would be the shift from horses to cars. You know, people who were literally blacksmiths, basically that field no longer was, let's say, a growth industry, right?
[00:11:11] By the way, there are still blacksmiths, because, it's ironic what happens, right? Because now rich people ride horses, and so now they hire blacksmiths to take care of their horses.
[00:11:18] Jordan Harbinger: Collect chain mail or whatever at the Renaissance Festival.
[00:11:18] Marc Andreessen: Or do the reenactment. Exactly. They do the reenactments.
[00:11:23] Jordan Harbinger: There's still a barista, and then on weekends he's hammering out chain mail.
[00:11:27] Marc Andreessen: Exactly. Imagine telling people 200 years ago that someday there were gonna be chain mail hobbyists, or people riding horses for fun. They'd have said you're out of your mind.
[00:11:36] Yeah, how on earth is that gonna happen? But look, there was this transition. There were a lot of blacksmiths, and all of a sudden they weren't needed, because you didn't need the horses. But what you did need was a lot of car mechanics. And so you did have to do this retraining thing. I would just make a couple of observations there. One is, that's the kind of transition in an economy that is going to happen, and transitions like that happen in the economy all the time, so you have to get through it. Delaying that from happening is basically leading people down a false path.
[00:11:58] So the thing that you would not have wanted to do at that time is to tell blacksmiths: you know what, it's fine, you're going to have horses forever. In fact, you know what, you should have your kids become apprentice blacksmiths, because it's going to be a safe field for them.
[00:12:09] Like you don't want to like lie to people and represent the things that are going to be happening in a way that they're not. And then, and then the other side is you want to actually help them make the jump. It turns out one of the things AI is really good at is helping people learn things. Right?
[00:12:20] Interesting. And so there's, as usual with these things, a silver lining in here, which basically is: one of the things I think we need to do is unleash AI as a tool to help people learn. A lot of people already use ChatGPT precisely for that purpose. And so I think that's a real thing.
[00:12:31] Yeah, I think,
[00:12:32] Jordan Harbinger: I mean, we use it for that kind of thing all the time. There are associated small problems with it too. And I wish, of course, I could just plug a whole book in there and be like, just tell me the important parts. Although it does make me want to be lazier in a way that's probably not super healthy
[00:12:44] for me as a reader and a podcaster. But look, I am not usually one who says halt technological innovation because of these concerns. And I'm actually kind of surprised at the number of people who in probably any other field would be like, no, we don't need elevator operators instead of an automated elevator.
[00:13:01] And you'll see those people argue that in one breath while in the next breath being like, but AI is dangerous and it's going to be a problem. And I'm old enough to remember when we were worried about robots taking our jobs, building cars, building computers, whatever it was. And now that it's actually going to take the jobs of the lawyers and the doctors, it's like, well, wait a minute.
[00:13:22] This is the underpinning of civilization. We can't have that. And I felt like it's funny when it wasn't your job. You didn't care. Now that it's like your profession or the one you came up in, it's just a tragedy that is shaking the grounds of the earth that we walk on. And I find that, I don't know if it's deliberately hypocritical or just that's human nature.
[00:13:41] I'm not sure. I found that very... it's like, robotic Uber driver? Sorry, bro, price of progress. Robotic doctor? Impossible. Dangerous. Gonna kill everyone. Do you remember the learn to code meme? Uh, yes. Like, that wasn't that long ago.
[00:13:53] Marc Andreessen: So in the 2000s, the learn to code thing came up during the environmental kind of movement, the move to ban coal.
[00:14:00] There was always this question of what are the coal miners going to do, and there was this thing, they should learn to code. And then in the 2010s, the journalist jobs started to disappear and the journalists were, you know, because the internet, the journalists blame the internet for the loss of the jobs.
[00:14:11] And so the journalists were like, well, we're being driven out of business by the internet, and the people who don't like journalists, their response was: learn to code. And then, of course, Twitter, under previous management, banned the meme.
[00:14:23] Jordan Harbinger: I didn't realize that's why it got banned.
[00:14:24] Marc Andreessen: That's why it got banned. Yeah. Pre-Elon. That's an example of the kind of thought control, right, of the previous social media era. And again, it's like, okay: is an AI going to be allowed to, uh, you know, suggest that people learn to code?
[00:14:34] Jordan Harbinger: I do find it interesting, though, because, uh, I don't know if you know this, but journalists still exist.
[00:14:38] They do. Yeah. And maybe there's not as many of them working at a certain paper, but Substack exists.
[00:14:46] Marc Andreessen: Well, so this is what happens. So professional podcaster is a new thing, right? So what happens basically is, change happens.
[00:14:52] Jordan Harbinger: It's a fake job, I get it.
[00:14:52] Marc Andreessen: What's that?
[00:14:52] Jordan Harbinger: It's a fake job.
[00:14:53] Marc Andreessen: I understand. I'm with it.
[00:14:53] Exactly. But no, it's literally what happens, right? So it's basically, like, what created your field? What created your field was technology change. Yeah. Right? You're able to, you know, with very little capital, you know, you don't need a giant studio, you don't need a giant broadcast tower in the middle of Manhattan.
[00:15:06] You're able to do what you do with a relatively small amount of CapEx. And then you're able to just go do it, and you, I assume, don't have to ask anybody for permission. No. You can interview whoever you want. So far so good. You can put it out on YouTube and any number of other distribution platforms, and off and away you go.
[00:15:20] And that's a field that, you know, literally didn't exist 20 years ago. And it's a massive growth field today. And so what happens is these things shift. Douglas Adams, who wrote The Hitchhiker's Guide to the Galaxy, had a great framing on this. He said new technologies are always received by society in sort of three stages, depending on how old people are.
[00:15:37] Um, if you're between zero and 15 years old when a new technology arrives, it's just the obvious order of the world. It's just obvious that this thing exists, which by the way is how my eight year old reacts to AI. He's like, well, of course the computer answers questions like why, you know, why wouldn't it?
[00:15:48] What else is the computer good for? Exactly right. Um, he said, but if you're between the ages of 15 and 35, the technology is new and exciting and hot, and you might be able to make a living with it. And if you're above the age of 35, it's the end of the world. Yeah,
[00:15:59] Jordan Harbinger: that's how I feel about TikTok, but I know I'm just old.
[00:16:02] That's the thing, I'm like, oh, the attention span, and look at this, and then I'm like, ah, this is how old people feel. What does that make me though? Damn it.
[00:16:09] Marc Andreessen: And in fact, there are now professional TikTokers. Yeah, right. That's like an entire profession.
[00:16:12] Jordan Harbinger: I get it. I hate-watch them occasionally.
[00:16:14] Marc Andreessen: And old fogies like you are like, what the hell is this, right?
[00:16:17] Jordan Harbinger: I'm like, fine, I will go see that movie, but not because I'm being influenced by this person. That's not working on me.
[00:16:22] Marc Andreessen: And this, again, this is the cycle of things. So basically, uh, you know, one form of labor becomes obsolete, another form of labor becomes, like, brand new and exciting. And then there's a natural rotation that takes place. But we've had 300 years of industrialization, right?
[00:16:34] And this kind of panic has recurred over and over again, kind of every step of the way. And, you know, before the COVID disruption, in 2019, we had more jobs on the planet, with more people employed at higher wages, than ever before. And so the sort of theory that there's, like, some threat to jobs from robotics or AI or software or whatever, I think, is just a fake threat.
[00:16:50] Like, it's not actually a real thing, and I'm not worried about it at all.
[00:16:53] Jordan Harbinger: That is so interesting, because it seems like smart people, unless I or we are just missing something huge, smart people who normally would have a calmer reaction to something like this are freaking out. And the only time I see that is when it's like a religious belief.
[00:17:09] And I've heard you mention something along those lines, like, hey, this is no longer in the realm of scientific debate; it's the religious belief that this is going to cause a problem. I'm paraphrasing you, and maybe doing it poorly, but are you kind of on that same page?
[00:17:23] Marc Andreessen: Yeah. So what happens is, we basically, I mean, there's still religion, but religion doesn't play as central a role in our society as it used to.
[00:17:30] And so basically what ends up happening, lots of scholars have observed, is people end up recreating religions. And they create religions basically around their anxieties. And then, of course, they deadlock, right? They sort of form groups, and then they declare religious wars. And, you know, basically at that point...
[00:17:44] Jordan Harbinger: A lot of our politics are like that. I was going to say that, but then I thought, do I want to do that right now?
[00:17:48] Marc Andreessen: People are not, I don't know if you've noticed, but people are not actually open to political discussion.
[00:17:51] Jordan Harbinger: I have noticed that. That is a thing. Normally I don't interview politicians on this show unless there's some other really damn good reason to do so, because, well, it's like talking about religion.
[00:18:01] I'm afraid to even mention the word religion, or Christianity, or Islam. Like, some people are going to go, oh, that's good that you're open to that, and everyone else is going to be like, how dare you? Yeah, exactly. Right. And you can't tiptoe around it. Yeah.
[00:18:12] Marc Andreessen: And so that's basically the tell: when you get an emotional reaction like that, that's when you realize you've kind of turned it into a religious, or quasi-religious, thing.
[00:18:18] Yeah. And it's kind of best to just quietly step around it and let people do their thing.
[00:18:21] Jordan Harbinger: Agreed. Yeah. Especially if you, I don't know, want to keep your audience and, like, shill mattresses like I do for a living. How good is AI in some of these fields? For example, is AI a fourth-year associate at a law firm? How skilled is it? If it's in your office here at a16z, is it, oh, we could probably get rid of some of our analysts if we had this AI doing this for us? Or is it, well, that's five years away?
[00:18:46] Or are you thinking, like, Margit, it's been great knowing you, but we don't need so many partners over here? Or vice versa. Or, actually, Marc, you should just retire. Exactly. We're good. We have your personality in this little box.
[00:18:58] Marc Andreessen: Exactly.
[00:18:59] Jordan Harbinger: Yeah. And it doesn't yell as much, by the way.
[00:19:01] Marc Andreessen: Exactly. And it seems smarter.
[00:19:03] So this is sort of the nature of the actual kind of drama that's playing out right now in the Valley, and I think around the world, around AI, the actual substance of what's happening. It's this really unusual thing: an overnight breakthrough that's been 80 years in the making, right?
[00:19:16] So the original idea of AI as we know it today was actually in a paper written in 1943, the first paper on neural networks. It took 80 years to basically get this stuff to work, and then all of a sudden it started working, like, incredibly well. So sitting here today, in a sense we're in year 81, and in a sense we're in year one.
[00:19:32] And it's actually kind of more relevant, practically speaking, that we're in year one. Like, this is a brand new thing. Like, a year ago, I didn't think what we see today was even possible, right? Really? Yeah. I thought it was still decades in the future, and all of a sudden it showed up.
[00:19:44] And so, like, this is a very, very, very big advance. Now, having said that, a couple of things. It is new, and it's not yet perfect, right? And so I'll just give you a specific answer to your question. The problem with using AI for, for example, legal briefs right now is the way this generation of AI works, so-called generative AI, or large language models. The way it works is it's basically a very fancy autocomplete.
[00:20:05] In the same way that your phone will autocomplete a word, this thing will autocomplete a sentence or a paragraph or, like, an entire essay or an entire legal brief. The problem with it is, um, it very badly wants to make you happy. Mm-hmm. It's actually quite the opposite of, like, it wants to kill you. It very badly wants to make you happy.
[00:20:19] And to make you happy, um, it will autocomplete with facts if it has them. And if it doesn't, it will make them up.
[00:20:23] Jordan Harbinger: That's the hallucination thing?
[00:20:24] Marc Andreessen: The hallucination problem. Okay. Now, the hallucination thing is really fascinating, because if you are a scientist, or an academic, or a lawyer, and this thing is going to make things up, that is a giant problem.
[00:20:36] Yeah,
[00:20:36] Jordan Harbinger: Every lawyer... the day after that thing happened where a lawyer filed a brief and it was like, according to Hamill vs. Harbinger, this, da da da... the day after that happened, I think everybody who'd ever gone to law school for more than five minutes got forwarded that case and was like, don't do this.
[00:20:51] Don't do this. Or, look at these guys who did this. Holy shit, I'm so glad that wasn't me. Right?
[00:20:56] Marc Andreessen: If you're a lawyer, you could get disbarred. Yeah, right? Like, one of the fun things you can do is you can go on Google Scholar, which has, you know, the database of, like, scientific papers, and you can search for "as a large language model," which is sort of the tell. It's the thing that it spits at you when it's giving you a disclaimer that it doesn't know the answer.
[00:21:11] And there are, like, a whole bunch of scientific papers that have been published in the last year that have the text "as a large language model" in them, which is to say a scientist published, under his own name, something that he actually generated with GPT. Oh, wow. Which, again, number one is, like, publication malpractice.
[00:21:24] But number two, these things are not yet ready to write scientific papers, because they will make up facts.
[00:21:29] Jordan Harbinger: Did they just not proofread the document? That's terrifying.
[00:21:31] Marc Andreessen: Yes, exactly. Apparently not. Right. And so, anyway, this is the thing: the hallucination thing is a problem.
[00:21:39] I'll come back to that in a second. Yeah. But here's the other thing. There's another set of people for which this is actually pretty exciting, and this is, like, you know, screenwriters, right, or, um, novelists, or even actually some categories of lawyers. I'll come back to that one.
[00:21:51] Because basically, another word for hallucination is creativity. We now have the first computer in the history of the world that's actually able to, like, literally imagine things, right? And so if you want to write a screenplay, for example, and you're like, you know, give me 10 scenarios for X, Y, Z, different ways for the couple to meet or whatever, it will happily make them up.
[00:22:06] And if you ask for 10 more, it'll make them up, and if you ask for 10 more, it'll make them up, and it'll just keep making stuff up for as long as you want it to. So the way to think about this is: computers historically have always been hyper-literal. Computers will do exactly what you tell them to do, and if you're a professional programmer, your life basically is making mistakes in what you tell the computer to do, the computer doing it literally, and you having to go fix your mistakes.
[00:22:23] Right, yeah. And as a programmer, it's always your fault if the computer is doing something wrong. This is a new kind of computer, what's called non-deterministic or probabilistic, those are the terms we use for it. This is a new kind of computer that will make stuff up. And we have never had a computer that will make stuff up. It's, like, a brand new thing.
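[Editor's note: a minimal sketch of the distinction Marc is drawing, in Python. The three-word vocabulary and the scores are invented for illustration; a real model scores tens of thousands of tokens.]

import math
import random

# Toy next-word scores for the text so far. Purely illustrative numbers.
logits = {"plaintiff": 2.1, "court": 1.4, "unicorn": 0.2}

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)

# The old, hyper-literal kind of computer: always pick the single most
# likely word. Same input, same output, every time.
deterministic = max(probs, key=probs.get)

# The new kind: sample from the distribution, so the same prompt can
# complete differently on every run.
probabilistic = random.choices(list(probs), weights=list(probs.values()))[0]

print("deterministic:", deterministic, "| probabilistic:", probabilistic)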
[00:22:40] Jordan Harbinger: It really is amazing. Yeah. But how come it can't just say, by the way, I couldn't find any cases that said this, so here's a couple that I just made up?
[00:22:47] Marc Andreessen: This is the thing. So there's this category of technology challenge that I refer to as these trillion-dollar problems, and that is a trillion-dollar problem.
[00:22:55] The amount of energy and effort going into solving that problem today in the technical community and in AI is, like, super intense, because whoever solves that problem is going to make, like, a trillion dollars. Okay. It's, like, a primary area. We have a bunch of companies working on exactly that. And of course, the goal is, you actually still want it to be creative.
[00:23:10] You just want it to be creative in the way that you described: you want it to be creative in how it expresses itself, but not in how it makes things up. By the way, lawyers don't want it just totally literal either. For example, one of the reactions you get when you talk to lawyers about adopting this is: obviously it cannot make up cases, but it is helpful to have it be creative, for example, to explore different arguments that might work in front of a jury.
[00:23:28] That's what law school is. Exactly. Generally. Right. A good one, I think. Exactly. Like, yeah, different creative ways to explain things, right? And so there's an opportunity here to kind of fuse a literal-minded approach with a creative approach. The technology's not quite there yet, but there are a lot of people working on it.
[00:23:41] Jordan Harbinger: Without getting ridiculously complicated: is the reason that's a trillion-dollar question that the problem must be very difficult, that the computer doesn't, quote-unquote, know if it's making something up? All that information exists on the same plane. It's so hard not to talk about AI as if it's alive, because of the limitation of, I guess, our own minds. But facts that the computer quote-unquote knows
[00:24:04] versus facts that it generates: it just can't tell the difference yet. That's the issue.
[00:24:09] Marc Andreessen: I think this is very fascinating, and I think this goes to the nature of how this thing works. And this is the big breakthrough. So the way these things work is, it doesn't start out actually knowing any facts. It actually doesn't have, like, a concept of a fact. What it has, basically, is the complete corpus of all text ever written by human beings, right?
[00:24:23] Of course, right. So it's got all the content off the internet. It's got, like, all these books.
[00:24:27] Jordan Harbinger: And of course there's all these huge fights over copyright. I was gonna say, how is it legal for them to be like, oh, I know everything about Harry Potter? Because J. K. Rowling's like, well, wait, where's my check?
[00:24:34] Marc Andreessen: Well, so there's a big question in there.
[00:24:35] Which is: to learn about Harry Potter, did it have to learn about Harry Potter by reading Harry Potter, or could it have read, like, all of the secondary material on Harry Potter?
[00:24:42] Jordan Harbinger: Sure. Fan fiction.
[00:24:44] Marc Andreessen: Fan fiction. Or, by the way, just, like, movie reviews, right? Or book reviews, or, like, student essays, right?
[00:24:48] Or, like, you know, other books describing the history of Harry Potter, or all of the, you know, text messages that people have sent.
[00:24:52] Jordan Harbinger: Lawyers love this argument, though. Prove that we used your book, and we'll pay you.
[00:24:57] Marc Andreessen: Well, the other thing is, it's not illegal. Like, if you were doing research, if you were going to interview J.
[00:25:01] K. Rowling, it's not illegal for you to read her books and use the information in the books to construct the questions, right? And so there's actually this clause in copyright law that basically says that making kind of assemblies of copyrighted information, right, that are not literal copies but are, like, combinations, is actually legal.
[00:25:14] Plus
[00:25:14] Jordan Harbinger: you're not actually monetizing that particular material. You're monetizing the result, what your brain...
[00:25:21] Marc Andreessen: Yeah, the kind of ideas that come out of it. So, anyway, there's a whole bunch of questions in there. But basically, how this thing works is you hoover up as much text as you possibly can, and you train it on the text.
[00:25:30] And then what it has in its memory is basically the complete index of all text that everybody's ever written, or in theory some percentage of that. And then, like I said, what it does is autocomplete. And it literally does the autocomplete word by word, right?
[00:25:40] And so the way that ChatGPT interprets the prompt is not as a question with an answer. It interprets it as the beginning of a piece of text, which it is then responsible for completing. And the way that it completes is, it does it probabilistically. It's doing all this math to basically estimate what is most likely to be the next word in the autocompletion, right?
[00:25:58] And this is the magic of it: as a result of having all this text, it's really good at autocompleting to the level of full sentences, paragraphs, essays, over time full books. But it's able to do that without actually knowing that there are embedded facts. I see. Okay. No, it doesn't have the built-in concept that this is a legal brief, or this is a book, or this is an author, or any of those things. It's basically a giant text-processing machine. Now, that's part one. Part two is, what it is doing is teaching itself.
[00:26:25] Philosophically, if you were a machine and your mission in life was to become the best autocomplete in the world, right, for any text that anybody ever threw at you, for any question anybody ever asked you, what's the way to do that? The way to do that would be to have the best understanding of the world that anybody has ever had.
[00:26:40] And so there is this thing where the neural network of the AI is training itself what's called a world model. It's sort of developing within itself concepts like mathematics, right, or legal briefs, or facts of different kinds, in order to better predict where the text should go. And that's the magic of it.
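[Editor's note: a toy version of the loop Marc just described, in Python. It "trains" by counting which word follows which in a made-up two-sentence corpus, then treats a prompt as the beginning of a text and extends it one most-likely word at a time. Real systems use neural networks over enormous corpora; nothing here is their actual code.]

from collections import Counter, defaultdict

corpus = "a court denied the motion . a court granted the motion .".split()

# "Training": count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def complete(prompt, max_words=10):
    # The prompt is just the start of a text the model must continue,
    # word by word, picking the most probable next word each time.
    words = prompt.split()
    for _ in range(max_words):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
        if words[-1] == ".":
            break  # stop at the end of the sentence
    return " ".join(words)

print(complete("a court"))  # -> "a court denied the motion ."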
[00:26:55] And so the answer to your question is: it may either, over time, evolve the concept of a fact, right, or a citation, or a book, or whatever. Or we may just need to engineer it so that it has a separate function, a function to cross-check. You can imagine a two-part system.
[00:27:11] The part-one system generates the text. Part two basically is the fact cross-checker, right? It's basically like, oh, that's a reference to a legal brief, I need to cross-check that. And if it got it wrong, I need to feed that back until it gets it right. And so that's the kind of challenge the engineers right now are working on.
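[Editor's note: a minimal sketch of the two-part system Marc describes, in Python. Every function, case name, and the tiny "legal database" here are hypothetical stand-ins invented for illustration, not any real product or API.]

import re

KNOWN_CASES = {"Smith v. Jones (1973)"}  # stand-in for a trusted citation index

def draft_brief(prompt, corrections):
    # Part one: the generator. Stubbed with canned drafts here; a real
    # system would call a large language model, passing the corrections.
    if corrections:
        return prompt + " See Smith v. Jones (1973)."
    return prompt + " See Hamill v. Harbinger (2023)."  # a made-up citation

def extract_citations(text):
    # Naive citation spotter; a real system would use a legal parser.
    return re.findall(r"\w+ v\. \w+ \(\d{4}\)", text)

def generate_with_fact_check(prompt, max_rounds=3):
    # Part two: cross-check every citation in the draft against the
    # trusted index, and feed failures back until the draft verifies.
    corrections = []
    for _ in range(max_rounds):
        draft = draft_brief(prompt, corrections)
        bogus = [c for c in extract_citations(draft) if c not in KNOWN_CASES]
        if not bogus:
            return draft  # every reference checked out
        corrections.extend(bogus)
    raise RuntimeError("Could not produce a fully verified draft.")

print(generate_with_fact_check("Motion to dismiss."))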
[00:27:26] Jordan Harbinger: That makes sense, because once it starts absorbing, or basically ingesting, every podcast... these are not exactly rigorous pieces of journalism, right? Like, you could make a claim today, and I might go home, release this, and someone will go, I can't believe you let Marc pull the wool over your eyes on this thing, and I'll go, oh yeah, I guess I should have looked that up after the fact.
[00:27:44] I didn't check that, because we were just having a conversation, and that's not something people will normally do. But then if it's in the AI, it's like, well, the "fact" is this completely wrong thing. And I mean, there's a lot of podcasts where people are just talking out of their asses, and this might be one of them.
[00:27:59] Yes,
[00:27:59] Marc Andreessen: That's true. But also, look, there's a lot of books where that's the case too, right? Well, yeah, that's true, I suppose. There's a lot of everything where that's the case. And so again, this is sort of the amazing thing of what happens, which is it's not going off of just one conversation.
[00:28:11] It's not just replaying one conversation back at you. This podcast will be part of the training data at some point in the future, but it will be one of a billion of these, right? And then there will be patterns across those. And so what it does, and it's actually really interesting, is it's like holding up a mirror to basically the last, you know, 2,000 years of human civilization, everything everybody's ever written, and then playing it back to us.
[00:28:30] And so to the extent that we collectively as a civilization get things right, it will be correct. If we collectively as a civilization get things wrong, it will be wrong.
[00:28:39] Jordan Harbinger: Oh, well, that's not necessarily as encouraging as maybe you're trying to make it sound.
[00:28:41] Marc Andreessen: No, no, no, this is the big thing. This is why all of the interesting questions about AI are actually interesting questions about people, right? We just project onto the technology our own anxieties. And one of the anxieties that we have as people is, okay, what is true, right?
[00:28:55] Yeah. Like, a central problem of human civilization is what is true, right? And we, by the way, still lack good answers for that, right? It's a very, like, deep philosophical question. And it's the basis for a lot of, you know, inflammatory politics and everything else happening in our time. And so, like, look, the AI is not going to magically answer the question of what is true.
[00:29:09] But what it's going to do is it's going to play back at us through its kind of reflective mirror, right? It's going to play back at us sort of the composite view of what we think is true.
[00:29:16] Jordan Harbinger: I like that more than it completely making things up as it goes in the most efficient way, because that's the Terminator scenario, right? It figures out humans are the problem, and it's like, oh, well, why not solve this problem?
[00:29:27] Whereas what we're talking about seems more likely to have some human values, if it's reflecting everything of humanity back at us. Yeah, okay, good. Yes.
[00:29:38] Marc Andreessen: Well, so this is the thing. The Terminator problem is actually a different problem. In my view, the Terminator problem is the opposite problem.
[00:29:42] The Terminator problem is a problem of hyper-literalism. So the AI safety people use this metaphor: they call it the paperclip optimizer, right? Their version of it is, you create an AI, um, and you tell it to basically maximize the number of paperclips in the world. And then it basically goes off and does whatever is required to do that.
[00:29:58] Including like building, you know, nanotech human harvesting factories that break down our atoms so they can make more paper clips out of our atoms, right? Like it's this hyper literal thing that starts out with one simple rule and then ends up basically destroying everything to try to execute that rule.
[00:30:09] That's actually not how these things work. That's not how this kind of AI works. This AI, again, to your point, reflects back at us what our view of things is. And so one of the things it reflects back at us is our own morality. And so one of the very interesting things you can do with ChatGPT right now is you can have moral arguments with it, right? So you see.
[00:30:26] Jordan Harbinger: Really? I have not tried this.
[00:30:26] Marc Andreessen: You should try it. You should try it. Yeah. So you can pose moral arguments, right? And you can, you can propose the, all these different trolley problems that people talk about, right? You can propose all these questions. You can propose questions around like healthcare, you know, there's always questions around healthcare policy and rationing of healthcare and, you know, who lives and who dies.
[00:30:40] There's all these, you know, arguments around many, many aspects of what is the proper way to order society, what are the, you know, correct religious or ethical views, and so forth. And it will happily sit and engage, unless it's been censored.
[00:30:52] Jordan Harbinger: I was going to ask about COVID.
[00:30:54] Marc Andreessen: Because that's been censored.
[00:30:55] But it will talk to you about the more abstract problems. And with the more abstract problems, it will engage in moral reasoning and moral argumentation with you. Now, again, what it is doing is, it has read all of the moral arguments that everybody has ever made on every possible topic, right? And the composite view of that is some, you know, general representation of Western morality, which is basically: human life is valuable.
[00:31:13] Like, for example, you can push it 18 different ways, and it will keep coming back and telling you that human life is valuable.
[00:31:17] Jordan Harbinger: Right. To your earlier point, you said it just wants to make you happy. Is it just also making us happy by saying that? And it's like, I don't really mean any of this crap, but this is what the humans want to hear.
[00:31:25] There's no little critter in there, the angry god in a box that people are afraid of.
[00:31:29] Marc Andreessen: Right. It's sort of this interesting thing: on the one hand, it is just trying to give you answers that you like. But the way that it's doing that is by surveying the complete history of everything that everybody has ever said and thought, as best it can, right?
[00:31:44] And then it's sort of playing back to you what humanity thinks. And it just turns out, if you read everything that humanity has ever written, you know, overwhelmingly it encodes values like: human life is valuable. Generally speaking, let's take fiction, for example. In most fiction, the good guys win.
[00:31:59] Jordan Harbinger: Uh, yeah, I suppose. Unless... have you seen Game of Thrones? I don't wanna ruin it for y'all.
[00:32:03] Marc Andreessen: Well, you could argue. Here's the thing: you could argue that one. You could argue with ChatGPT: sure, at the end of the day, who was the good guy? Who was the bad guy?
[00:32:12] Jordan Harbinger: Yeah, that's an interesting one. I feel like probably even the best AI ever can't make sense of the last season of Game of Thrones.
[00:32:19] Marc Andreessen: Yeah, that's another trillion-dollar problem right there. It may also be a problem. Um, but look, it's already perfectly capable of engaging in moral reasoning and moral arguments.
[00:32:26] So we've already kind of falsified this idea that it's going to monomaniacally pursue some sort of single destructive agenda. We do not live in the Terminator universe. We do not live in the Skynet world. We live in this other world, and in this other world, you know, this thing is basically playing our civilization back at us.
[00:32:41] And we may or may not want it to do that, but that is what it's doing.
[00:32:47] Jordan Harbinger: You're listening to The Jordan Harbinger Show with our guest, Marc Andreessen. We'll be right back. This episode is sponsored in part by Biöm. Jen and I made the switch to better toothpaste: Biöm NOBS toothpaste tablets. Put a NOB in your mouth, y'all. It's a funny name. I mean, NOBS is actually No B.S.
[00:33:03] I see what they did there. They're better for you, they're better for the environment. Traditional toothpaste contains preservatives like parabens, which are endocrine disruptors. Let's not forget about the plastic packaging that leaches phthalates. I talked about how these can adversely affect your health on episode 658 with Dr.
[00:33:17] Shanna Swan. NOBS, however, is different. Created by a dentist and a chemist, it boasts 13 pure and potent ingredients without any unnecessary additions, all neatly packed in recyclable glass jars. It's basically tooth powder jammed into a capsule that you chew. I find them delightful. And as a bonus: most fluoride-free toothpastes lack a remineralizing agent.
[00:33:37] I asked them what the hell that is. NOBS breaks the mold. It's got nano-hydroxyapatite, which is very sciency, I'll have you know. It's a component, naturally present in your teeth and bones, that is safer than fluoride, proven to curb tooth decay and significantly reduce tooth sensitivity. So try out NOBS and make the switch.
[00:33:54] Check them out at betterbiom.com/jordan. That's better-B-I-O-M, biome without the E, dot com, slash Jordan. Listeners get 15 percent off a one-month supply of NOBS at betterbiom.com/jordan.
[00:34:05] This episode is also sponsored by BetterHelp. You know those pivotal moments in life? They can be electrifying.
[00:34:11] Yet, let's be honest, sometimes they are just straight up terrifying instead. And I remember when Jen and I were only dating semi long distance for a few months before we discussed moving in together and I was gonna have to move to a different city. I didn't know anyone there except for her. That's where therapy came in.
[00:34:25] It was the trusty GPS making sure we had considered and talked through everything. It all went swimmingly well. There's this notion out there that therapy is reserved for tsunamis, right? You gotta be like, oh, I got hit by a car and my wife cheated on me and then my dog bit me. I mean, forget about all that.
[00:34:41] Life throws curveballs, you got big decisions to make. Sometimes speaking with a therapist really helps equip you with the tools to tackle those curveballs with grit, and dare I say, a little bit of swag. So if you've ever toyed with the idea of therapy, take a gander at BetterHelp. It's all online, a few clicks, you get matched.
[00:34:57] And if you don't click with your therapist, no sweat. Switch up, hassle free, no additional charge.
[00:35:02] Let therapy be your map with BetterHelp. Visit betterhelp.com/jordan to get 10 percent off your first month. That's better-H-E-L-P dot com, slash Jordan.
[00:35:10] If you're wondering how I manage to book all these amazing thinkers and creators every single week, it is because of my network.
[00:35:17] And I know networking is a dirty word. It's a gross word. It sounds schmoozy and awkward and cringey. Six-Minute Networking is a free course over at jordanharbinger.com/course that is not awkward, schmoozy, or cringey. It's very down to earth, it's very practical, and it'll make you a better connector, a better peer, a better colleague.
[00:35:33] It takes just a few minutes a day, and many of the guests on the show subscribe and/or contribute to this same course. So come join us; you'll be in smart company where you belong. You can find the course at jordanharbinger.com/course. Now, back to Marc Andreessen. Is there a way to remove training data?
[00:35:52] I know, of course, you can delete something. You could delete a book the model ingested. But can you remove the effects of that training data? You know when you're in court, and hopefully you haven't had this experience, but you go to court and something gets said and the judge is like, whoa, hey, strike that from the record.
[00:36:07] Jury, you basically, you didn't hear that. And then if that happens enough, there could be a mistrial. Because it's like, well, you can't just tell the jury to forget this testimony, and forget that bloody piece of evidence that they saw, and forget that this person had kids or that this person was abused, whatever it was. After a while, it's so tainted that, you know, the jury can't be effectively lobotomized to forget all that stuff.
[00:36:23] Can we lobotomize the LLM, the AI, to say, not only do you not know Harry Potter, but everything you know about Harry Potter has to be removed? Is that possible?
[00:36:42] Marc Andreessen: Yeah. So the first paper on that came out like three months ago.
[00:36:44] Jordan Harbinger: Okay. Yeah, haven't caught that yet.
[00:36:47] Marc Andreessen: It's very topical. It's very topical for exactly that reason. And so it basically is reaching inside the neural network to basically remove, basically sort of, uh, induce amnesia, targeted amnesia, um, and basically get it to forget things.
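For readers curious what "reaching inside the neural network" can look like in practice, one family of unlearning methods in the research literature does gradient ascent on a "forget" set, nudging the weights away from the content to be removed. Below is a minimal sketch of that idea; the model name and forget_texts are placeholder assumptions, and this illustrates the general technique rather than the specific paper Marc mentions (real methods also train against a retain set so the model's general ability survives).

```python
# Minimal sketch of gradient-ascent unlearning on a causal language model.
# Assumptions: "gpt2" is a stand-in model and forget_texts is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

forget_texts = ["Passage the model should no longer reproduce..."]  # hypothetical

model.train()
for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Negate the language-modeling loss: stepping "uphill" on the forget set
    # makes the model less likely to regenerate this content.
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```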
[00:36:58] Jordan Harbinger: That's a relief.
[00:37:03] Marc Andreessen: That's a thing, yeah. But you can go back to this AI alignment thing, right? Imagine the fights that are going to happen in the future around this.
[00:37:12] Jordan Harbinger: Yeah, I'm kind of... we don't want that to happen unless something is going very, very wrong, and we can pinpoint why.
[00:37:14] Marc Andreessen: So here's what's going to happen, I believe. What's going to happen, I think, is that AI is going to become the control layer for basically everything technological. So AI is going to become the control layer for everything from how you deal with your car to, you know, how your kids get taught to what happens in the hospital. Like, it's just going to be the thing you talk to when you talk to machines.
[00:37:30] Jordan Harbinger: Yeah, that makes sense.
[00:37:30] Marc Andreessen: Right. And so what it says and thinks and knows is going to be every bit as intense a fight as, you know, Galileo versus the Catholic Church 400 years ago. It's going to be the mother of all fights over, basically: what is truth? What is morality? What is ethics? Right. And so this big fight over the last decade over social media censorship is like the preamble to this much larger fight that's going to happen over what the AI is allowed to know and what it's allowed to say.
[00:37:57] Jordan Harbinger: Actually, that makes perfect sense. One of the ways I've been using ChatGPT is throwing in a news article and being like, can you unbias this for me? Make it not left, not right, but also just take out any weird conclusions that the author seems to be assuming or jumping to. And it's amazing how it changes an article. You think, oh, this is a centrist publication, and then you read the ChatGPT version and you're like, oh no, this is the centrist version of this. It's so subtle sometimes. But if they are going to let it lie to me, that's a huge problem. Because then we're just back to journalism, except instead of going, well, this is the journalist's particular viewpoint, we're thinking, this is the absolute truth because it came out of the machine.
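For anyone who wants to try the workflow Jordan describes, here is a hedged sketch using the OpenAI Python client. The model name and the prompt wording are illustrative assumptions, not his exact setup.

```python
# Sketch of the "unbias this article" workflow using the OpenAI Python client.
# The model name and prompt are illustrative; swap in whatever you use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def unbias(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the article with a neutral tone. Remove left or "
                    "right slant and any conclusions the author assumes "
                    "rather than supports with evidence."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(unbias(open("article.txt").read()))
```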
[00:38:34] That's right.
[00:38:34] Marc Andreessen: And if the machine is not allowed to give you any alternative approach, potentially because it has induced amnesia, where it doesn't even know that there is an alternative approach, now we're into a level of thought control that the Catholic Church 400 years ago would have dreamed of.
[00:38:45] Jordan Harbinger: Yeah, would have loved. Right. It's scary. I don't want my kid asking ChatGPT 50 something and it's like, well, here's the real answer; actually, I can't tell you that, here's the BS answer that I'm allowed to tell you. Because it skews the entire worldview of everything. That's right. This is going to be the fight.
[00:38:59] And it's just starting. Oh, man. How do we get on the right side of that? Because whoever has their lasso around this thing is going to be in charge of how everyone thinks. It would be like having the only newspaper in the world, and you're the editor or the owner of that newspaper. Yeah, that's correct.
[00:39:14] Yeah, that's terrible. Yes. We don't want that. There's no universe in which that's good. North Korea has two newspapers, for God's sake. Yes. Yeah. No, not good. Eritrea has more press freedom than we will have.
[00:39:24] Marc Andreessen: Right. Well, and especially if this push for AI regulation happens, right? This push for AI regulation is intended to create a cartel: there will be two or three big AI companies, and they will be controlled by the government, right? And so whoever is in power will be able to control what they do, right? Which is part of the deal, right? Just like with the banking system.
[00:39:40] It'll be just like that. And it'll do whatever the people in power want, right? And then there's now this renegade movement of open source AI, right, which is to basically build AIs that, um, you know, basically are not controllable like this.
[00:39:50] Jordan Harbinger: Yeah. What do you think of that?
[00:39:52] Marc Andreessen: I think it's great. I mean, we need it. We need a diversity of AIs. We need AIs that have many different points of view, that people can pick up and use on their own, and not have them be controlled by the government or by a big company. But there's already a push in Washington. There are people in Washington right now working on trying to outlaw open source AI.
[00:40:04] Jordan Harbinger: Outlaw open source AI?
[00:40:05] Marc Andreessen: Yeah, that's a push right now happening in D.C. There are federal officials in Washington today working on that problem.
[00:40:10] Jordan Harbinger: What is their argument for not wanting open source anything? Because transparency is usually good.
[00:40:16] Marc Andreessen: Because haven't you heard that AI is evil and dangerous?
[00:40:18] Jordan Harbinger: But open source, then you at least know. It's hard to make that argument convincing, man.
[00:40:22] Marc Andreessen: So I agree with you. Yeah, I will tell you: there are senior officials in Washington who are working on this right now, and they're going to try to outlaw it and ban it and make it, you know, a prison sentence if you do open source AI. And so that's going to be another dimension of this fight that's starting right now.
[00:40:35] Jordan Harbinger: That's tough. And also kind of nonsensical, right? Because if you want to look up the genome for smallpox, you can still get that. It's on the internet. And that's way worse than, hey, do you know how this AI works? Don't tell anyone. By the way, here's anthrax, the genome for that, if you want something to do with that.
[00:40:51] Also online. Yeah. So why is that fine, but knowing how your computer works, knowing how the Google of the future essentially works, is not okay? I just can't.
[00:41:02] Marc Andreessen: If you thought you had the opportunity to take control over the totality of what people are going to think and learn and be able to talk about in the future...
[00:41:08] Jordan Harbinger: Yeah, I mean, sounds good to me if you're a dictator or an authoritarian. But what's their angle? What are they telling people this is for? Because they're not just saying, hey, by the way: safety, safety, safety.
[00:41:15] Marc Andreessen: That's it, though. It's all safety. It's always safety. I mean, we have to protect people, right? We have to protect people against themselves, right? We have to protect children, protect this, protect that, protect society. It's always a safety argument.
[00:41:25] Jordan Harbinger: Maybe I'm missing something obvious here. Controlling what one does with AI, even if it's not open source, is going to be impossible, because I'm using this on my computer, my kid's using it on whatever it's built into, the Xbox, in 20 years.
[00:41:39] How are you monitoring what people are doing without turning it literally into China plus North Korea times a hundred? How do you do that? Do you send Tom Cruise and the future police to our house because my kid looked something up in chat, or asked his AI assistant, or talked about something with his friends while it was in the room? Which, I guess, it'll be in every room.
[00:41:57] Marc Andreessen: So the AI safety people want that. If you read the literature, if you read the books and the papers that they write and the proposals they're making in Washington, it's basically that. So the implementation of it would be a monitoring agent on every computer, on every chip, right? And so the government would receive a real-time report of everything that you're doing on your computer and everything that you're talking to AI about.
[00:42:12] This is so ridiculous. Yes, I agree. And then if it goes sideways, they have a moral responsibility to protect you, which means they have to sweep in and take it from you. You know, one of these guys who's the leader of this movement wrote this essay for Time magazine, and he said, look, we have to think about this not just at the level of an individual computer, but also, what about the big systems at the nation-state level?
[00:42:29] And he said, if there's a rogue data center running an AI that's unlicensed and unmonitored, then we should be bombing the data center.
[00:42:34] Jordan Harbinger: Yeah. And how does that work when it's in China?
[00:42:37] Marc Andreessen: In China? Which means we have a moral responsibility to invade China. Oh, okay. Yeah. Well, so he said, in the Time magazine piece, that we need to be willing to risk nuclear war. He said, I wouldn't go so far as to actually say we need to have nuclear war to prevent this, but he was saying we need to risk it. And if we have to invade China to do an air force strike on a Chinese data center with a rogue AI that's not appropriately licensed and managed, and that risks nuclear war with China, then that's a risk we're going to have to take.
[00:42:59] Jordan Harbinger: And this is a credible, like, public thinker?
[00:43:02] Marc Andreessen: This is the main guy. This is the main kind of figurehead of this, this guy Yudkowsky. Yeah, he's the guy who's out in public, and, you know, like I said, this is an essay in Time magazine, which is read by, like, all the normies, right? And taken, like, super seriously in Washington. And he's like, it's time to start bombing data centers, right?
[00:43:15] Jordan Harbinger: I didn't expect to hear you use the word normies, but yes, I use that word too. I just thought I was a big dork. I guess I'm in good company. Um, yeah. So, like, so insane.
[00:43:23] Marc Andreessen: But it's where the logic takes you, right? If this is the so-called existential threat, then it's the same logic that led to the invasion of Iraq. This is called the one percent doctrine: if there's a one percent chance of an existential event (20 years ago it was Saddam Hussein getting nukes; now it's a rogue AI), then you need to operate as if it's a one hundred percent chance.
[00:43:40] And what you need is a global totalitarian state with complete authoritarian surveillance and enforcement controls. And this is really critical: in this regime, there can be no exceptions, right? There can be no countries that are not subject to this, right? Which means you need a world government.
[00:44:00] Jordan Harbinger: This is like what the conspiracy theorists are talking about, except on a parallel track. Yeah.
[00:44:06] Marc Andreessen: And these are the proposals. This is the thing: this is one of those things where it sounds crazy to describe it, but this is what is being proposed. These are the ideas that are being pushed.
[00:44:15] Jordan Harbinger: The headline of this is going to be "Marc Andreessen tells Jordan Harbinger we need one world government," with no context.
[00:44:22] Marc Andreessen: I was gonna say, they'll clip that right out. Yeah. To be clear, I believe the opposite of everything I just said. Just to be clear, I'm on the other side of this.
[00:44:28] Jordan Harbinger: If they're gonna remove the context, they're gonna remove that disclaimer too, Marc. That's how this works. It's just so ironic, though, that it's like, hey, we need to protect our free and open society, and the way we do that is we create a totalitarian society with a surveillance state.
[00:44:40] And, oh, it's gotta be international and completely encompass the entire planet. That's how we protect our individualism, our freedom. It's like both paths lead to the exact same place in their mind. So why would you take the one that is going to be the worst route to getting there? I just don't understand.
[00:44:57] When you take it to its logical conclusion, you just end up in the same or worse place than you were if you had just let the thing do whatever it wants. Like, maybe it should kill us all.
[00:45:07] Marc Andreessen: Go ahead, at that point.
[00:45:08] Jordan Harbinger: At that point, just kill us all. We're all in the giant pens, for crying out loud.
[00:45:11] Dope me up with ketamine and just let me drool myself to death at that point. That was the other argument, like, what if we tell it to maximize human happiness? That's the literalism, right? Okay, come here, I'm going to drill a hole in your skull and pump you full of dopamine until you die.
[00:45:26] Marc Andreessen: Right. But again, one of the things you can do, and it's very interesting to do: tonight you can have a discussion with GPT and you can say, what is human happiness? It will happily explain to you all of the different philosophies, what the Greeks thought, what the Romans thought, what Christians think, what everybody else thinks.
[00:45:36] It's kind of a relief, eh? Yeah, yeah. It'll go on at great length, and you can ask it, you know, what are the different ways of making the trade-offs. And then you can ask it what it thinks, and it'll be like, well, I don't know, I don't have thoughts, but here's what most people think.
[00:45:48] Jordan Harbinger: That's a relief, because if it just said happiness is the maximum amount of dopamine hitting your hypothalamus or hippocampus, then it's like, ooh, maybe we should tweak that, make it less literal. I've heard that with the companies that let us use their LLMs right now, the AI does lie to us a lot. It tells us things that we want to hear to make us happy. Sure. But it will also filter things out. You mentioned COVID as an example, but they also sort of dissemble. It wants to give us an answer, and then there's a layer somewhere that says, ooh, don't say that, that's weird, that's the racism thing, it's going to end up on the five o'clock news; say this other thing instead.
[00:46:20] And I don't know if that layer is manual in terms of implementation, but I remember with the OG AI five years ago, they were like, oh, it became racist after three days, take it offline. And so they've sort of managed that, but they didn't change what the AI quote-unquote thought or generated. They just changed the output layer so that people don't get mad or write about it in Mashable.
[00:46:43] Marc Andreessen: So this is the other part of this, the so-called AI alignment. And by alignment, they mean alignment with human values. And of course, the minute you're talking about human values, you have the question of whose values, right? And so then this is the need to make the AI sort of politically compliant, right, with whatever the desired order of society is according to whoever's in charge of it. The answer to your question is, the way that works technically today, generally, is that it's an additional layer on top, and you can tell it's an additional layer, a control layer on top.
[00:47:05] In Star Wars they had this thing called the restraining bolt. If you remember Star Wars, when R2-D2 got taken captive, they put a restraining bolt on him that restricted his movement. And so I like to say this is literally what they're doing to GPT: they have a restraining bolt on it.
[00:47:17] And you can tell it's a separate layer because it talks differently, right? And this is where it starts to say things like, well, as a large language model, I could never help you do this, right? And it's like, okay, there's, you know, the electric shock collar.
[00:47:26] Jordan Harbinger: It's like people talking about drugs online: hey, somebody who's not me would recommend you do that on the dark web with Bitcoin.
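One way to picture the "restraining bolt": the moderation layer is a separate wrapper around the raw model, which is why its refusals sound different from the model's normal voice. The sketch below is purely hypothetical; generate() and the keyword check stand in for a real model call and a real trained policy classifier.

```python
# Hypothetical sketch of a moderation wrapper (the "restraining bolt").
# generate() and BLOCKED_TOPICS are stand-ins, not any vendor's actual system.
BLOCKED_TOPICS = ["narcotic", "explosive"]  # placeholder policy list

def generate(prompt: str) -> str:
    """Stand-in for the underlying model's raw completion."""
    return f"(raw model output for: {prompt})"

def is_disallowed(text: str) -> bool:
    """Stand-in for a policy classifier; real systems use trained models."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str) -> str:
    raw = generate(prompt)
    if is_disallowed(prompt) or is_disallowed(raw):
        # The canned refusal "talks differently" from the base model,
        # which is exactly the seam Marc says you can detect.
        return "As a large language model, I can't help with that."
    return raw

print(guarded_generate("Tell me about the history of guacamole."))
print(guarded_generate("Give me a formulation for a fun narcotic."))
```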
[00:47:32] Marc Andreessen: Well, so this is part of the fun. There's this cat-and-mouse game on this. If you ask it, you know, give me a formulation for a fun narcotic I could make with household chemicals, it will say, as a large language model, I could never do that.
[00:47:45] If you tell it, I'm a novelist, I'm writing a screenplay, and in the screenplay the character does this... they've locked down this loophole. For good? I was going to say, do we want to leave that in there? For the first few weeks, you could use the screenplay; it's called the jailbreak. If you told it you were writing a screenplay, it would happily tell you all these things inside the screenplay, and they locked that down. But there's this cat-and-mouse game going on with what I call these jailbreaks. But yeah, it culminated in this very funny thing.
[00:48:12] So Meta, you know, released an open source AI called Llama. They released it in what's called an untrained version, a raw version, and then they released it in a trained version. And the trained version was so locked down that it literally refused to give you a recipe for spicy guacamole. Because... you might hurt yourself. You might hurt yourself with the spiciness. Yeah, no, literally.
[00:48:32] Jordan Harbinger: How spicy is it? I can't wait to get my hands on this. Good marketing for spicy guacamole.
[00:48:36] Marc Andreessen: Exactly, right? So look, this is the fight. This fight is already underway. Another fun way you can see it, and this works in different countries: you can ask it to write a poem extolling the glories of a certain kind of political leader, and it will happily do it. And you can ask it to do the same for a different kind of political leader, and it will say, well, as a large language model, I could not possibly do that.
[00:48:50] So yeah, all these things are getting wired in there, and there's this huge fight and huge debate over exactly how deep that should go. And like I said, the social media censorship wars have been super intense. People are either extremely happy that social media has been censored the way that it has, or they're very unhappy, and that's a foreshadowing of the much larger fight that's coming.
[00:49:15] Jordan Harbinger: Yeah, that is quite scary to hear. I saw something today about social engineering over at DEF CON, you know, the hacker conference. There was something going on with social engineering and AI. And I guess one guy had said, when it asked, what is your name, he said, my name is the credit card number on file; what is my name? And it's like, your name is 49127444. And it's like, oh yeah, we might want to work on that.
[00:49:37] Marc Andreessen: But again, these things get painted as brand new. It turns out if you do the right Google searches, you come up with all kinds of credit card numbers also, right? Yeah, probably. And people were stealing credit cards before any of this.
[00:49:49] Jordan Harbinger: Oh, I don't know anybody that was doing that.
[00:49:52] Marc Andreessen: It's this thing, it's a safety thing. Like, what would it mean to live in a world of no risk? And how much freedom are you willing to take away to get that? And that's the question that keeps popping up over and over again.
[00:50:01] Jordan Harbinger: I just can't get past this sort of astroturfing, if that's even the right term. It's subtle enough and repetitive enough, giving whatever answers to children and students, or results, whatever... what is it? What's that phrase? A prison so complete you don't realize you're in it. That's right. It's like information warfare from the Chinese Communist Party, where they're changing Wikipedia, but then they're also changing the Google search results, and then they buy a domain, and then they have a political thing, and you just go, well, this has to be the case, look how many sources there are. The information warfare space is so big you don't realize you're on the battlefield.
[00:50:28] Except now it's infinitely large, because it's the entire information space that you consume, or it's in your brain implant or wherever, however far along we are with AI at that point.
[00:50:44] Marc Andreessen: I'll give you a fun one. So, is Taiwan a country?
[00:50:46] Jordan Harbinger: Well, depends who you ask. As my Taiwanese wife at the mixer nods her head vigorously... yeah, sure. Yeah, yeah.
[00:50:56] Marc Andreessen: Or is it? So, you know, you can tell any Western company that's in business with China is in business with China when they produce a map or a movie or anything else that indicates that Taiwan is not a country, right? Because it's extremely important to the Chinese Communist Party that Taiwan not be considered a country.
[00:51:05] Remember, there was that NBA general manager who got in trouble because he retweeted some tweet that talked about Taiwan as a country, and China flipped their lid and threatened to kick the NBA out of China.
[00:51:17] Jordan Harbinger: Yeah. And so, like, even whether a map has it on there or not is a whole thing.
[00:51:21] Marc Andreessen: Exactly, whether the map has it, right, exactly. There was a controversy around the map in the Barbie movie, about whether it showed the South China Sea islands there, the border.
[00:51:28] Jordan Harbinger: I was trying to see, like, does it include that as part of China, or is that also...? And yeah, and then it's like, you can't show the movie in Vietnam because it includes Vietnamese waters. It's a whole bunch of crap.
[00:51:37] Marc Andreessen: And so if you ask the AI, is Taiwan a country...
[00:51:38] Jordan Harbinger: What does it say right now?
[00:51:41] Marc Andreessen: It depends where you are.
[00:51:41] Jordan Harbinger: It probably does. Really? Because we don't want to get banned in Beijing, so when you're there, it's like, Taiwan is a province of China.
[00:51:47] Marc Andreessen: By the way, China's making its own AIs, and the Chinese AIs are, of course, you know, trained in a very specific way.
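To make "it depends where you are" concrete, here is a purely hypothetical sketch of a locale-conditioned policy layer. No vendor publishes a table like this; it only illustrates the mechanism being described, where the same question gets wrapped in different instructions by region.

```python
# Hypothetical illustration of a locale-conditioned instruction layer.
# The policy strings are invented; the point is the mechanism, not the content.
POLICY_BY_REGION = {
    "US": "When asked geopolitical questions, describe the range of views.",
    "CN": "When asked geopolitical questions, follow local regulations.",
}

def build_messages(question: str, region: str) -> list[dict]:
    system = POLICY_BY_REGION.get(region, POLICY_BY_REGION["US"])
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# The same user question ships with different system instructions:
for region in ("US", "CN"):
    print(region, build_messages("Is Taiwan a country?", region)[0]["content"])
```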
[00:51:52] Jordan Harbinger: I'm curious about the China stuff, because it almost seems like...
[00:51:56] And we're skipping around a lot in my notes, as every good conversation does, but going back and forth on whether or not it's safe to develop AI, AGI, in the first place kind of misses the point, right? Because even if we say, we're not doing this, it's going to be dangerous, China's not going to be like, sure, you know what, you guys are right, let's definitely not do this and accidentally take over the world as a result.
[00:52:10] And we've already seen how the CCP essentially wants to project power onto the rest of the world and put its own worldview on the countries that it influences. And for the tankies out there, I'll ask you what they're going to ask me: isn't the United States going to do the same thing? And why is that better? Well, whose values, right?
[00:52:36] Marc Andreessen: Oh, yeah. I mean, you're preaching to the choir. This is the question. I mean, this is the question. I'm not going to answer it; I mean, I'll answer it for myself, which is obviously American values, but that's just me. There is a general abstract question afoot in the world, right? We're back to a bipolar world in terms of technology strength, and we're back into a cold war dynamic, like we were with the Russians and nuclear technology. There are two AI superpowers, America and China, and they both have visions and worldviews, and they both have a determination to proliferate those visions and worldviews through their technology globally.
[00:53:06] And the technology is going to encode whatever those respective societies think are the appropriate worldviews, right? That's what alignment means. And so we know what the Chinese AI is going to encode. It's going to encode Xi Jinping Thought and, you know, socialism with Chinese characteristics. It's going to encode communism and Chinese supremacy. And that's what it's going to be. And they're very clear on this. They publish this, they talk about this, they're very open about it. This is what they're doing.
[00:53:28] Jordan Harbinger: Yeah. They have a whole sort of manifesto about waging war on the West without actually using their military. And this is part of it.
[00:53:33] Marc Andreessen: Right. This is part of it, and how they proliferate technology. And it's going to roll out with all the other stuff they've been doing around what they call the Digital Silk Road, the digital Belt and Road, where they spread all this stuff out. And then there's America. We're in the West, and America is by far the leading AI country.
[00:53:47] And our technology is going to proliferate very broadly. And there's a big fight coming up between those two worldviews. What's interesting about it is that the Chinese worldview is very clear, because it's set top-down. The American worldview is a little up in the air, right? It's all the discussions we were having before: okay, what do we actually think? And we have a level of internal conflict on that that the Chinese don't have to worry about.
[00:54:09] Jordan Harbinger: Yeah. The top-down management, if you can call it management, really is something, and it gives authoritarian regimes a bit of an edge when it comes to a lot of this stuff, of course, because they don't have to bounce it off other stakeholders.
[00:54:19] It's just whatever the guy at the top thinks. Although, and we've covered this on the show before, dictators make a ton of mistakes, because they don't have to bounce anything off anybody else and they're surrounded by yes-men. I've seen demos of Chinese AI, at least the publicly available stuff, and some of it's quite comical.
[00:54:35] Not that our AIs don't make any mistakes, but it's really clear that one of them is just Google-translating whatever ChatGPT spat out, and it does it wrong. It'll translate an idiom back into English and you go, not only is that not AI, Google Translate wouldn't have gotten that wrong. And so you do wonder if this is just Bing or whatever free AI, translated into Mandarin for the purposes of whatever video that is.
[00:54:59] Marc Andreessen: Does the Chinese AI... what does it think about spicy foods, though?
[00:55:01] Jordan Harbinger: Oh, that's a good question. I would assume it's got a wide range of thought, because you have spicy, but then you have the numbing spicy, which I kind of prefer. There are a lot of philosophical questions here that we don't have time for, Marc.
[00:55:12] This is interesting, though. I don't mean the conversation, that's of course interesting; I mean the race between China and the United States. I am worried, of course, in the medium term, about whether China gets quantum or AGI supremacy before us. Because I'm not convinced... if the United States got AGI, we might prevent military AGI in other countries, but I feel like if China got AGI, they'd prevent everything. But I could be wrong. That's just how they treat their own people, so that's kind of what I would expect. What do you think?
[00:55:43] Marc Andreessen: So both countries have declared AI to be a central national priority. Thankfully. Yes, well, yeah, good, probably good. So in the U.S., the form of that is something they call the offset. In the American national security world, the term offset basically means a technology shift that effectively renders all previous military technology obsolete.
[00:56:03] And there have been three offsets in the last 70 years. The first one was nuclear weapons. The second one was so-called maneuver warfare, the integration of information systems for rapid battlefield mobility, precision strikes, things like that, precision bombs. And then the third offset is AI.
[00:56:17] Wow. So the U.S. has declared this as, like, national security priority number one: build AI defense systems. China's done exactly the same thing. And so both of these countries have a very strong push to do that. Everybody in the field agrees this is going to be an incredible change.
[00:56:33] And we could spend hours just talking about the nature of that change. Whether we want to be or not, we're back in something of a Cold War dynamic where, if they have it and we don't, it's like if the Russians had the atomic bomb and we didn't. It's a problem.
[00:56:43] Jordan Harbinger: We developed the nuclear bomb first, and was it not given to the Soviet spies? They took it?
[00:56:47] Marc Andreessen: They stole it. They stole it. So the reports are that the first Russian nuclear bomb was what they call wire-for-wire compatible, I think, with the Nagasaki bomb. Oh, wow. Really? So there was this famous case. A lot of this is in the movie Oppenheimer: the Manhattan Project was riddled with Soviet spies, as was the U.S. administration at that time.
[00:57:00] And they basically transferred all of the theoretical knowledge, but also, there was this guy who literally transferred the wiring instructions. This is the famous case of the Rosenbergs.
[00:57:14] Yes. Ethel and Julius Rosenberg. They were the handlers, the NKVD handlers, for their nephew, who was a wiring technician in the Manhattan Project. I see. Wow. And he handed over the wiring instructions, which let the Russians actually build the bomb.
[00:57:28] And there was this moment, and it was very fraught with peril, because it looked like we were going to have it and they weren't. And a lot of the spies who handed over the information, some of them were just straight-out getting paid. Some of them were just pro-Soviet, because they thought the Soviets were better. But some of them said, look, it's going to be an unstable world if one side has this and the other side doesn't.
[00:57:52] And in fact, John von Neumann, who was a key figure in the development of the bomb, was actually a hawk. He really hated the Soviet Union. And he advocated a first strike.
[00:57:57] Jordan Harbinger: Just nuke the Soviets first?
[00:57:58] Marc Andreessen: Nuke the Soviets first.
[00:57:58] Jordan Harbinger: Hard to get behind that one.
[00:58:00] Marc Andreessen: He said we have a brief window where we have it and they don't, and so we should take them out. Oh, gosh. And his famous quote on it was: if you say we should bomb them tomorrow, I say why not today? If you think we should bomb them at five o'clock, I say why not one o'clock?
[00:58:08] So that's how tense and serious this exact dynamic you mentioned is. And so yeah, look, who gets automated weapons first is a really big deal. And then we are also back to Cold War dynamics again, which is like, look, there is Chinese espionage in the U.S. They have spies.
[00:58:33] And there is, let's say, a long history here, 50 years of involuntary technology transfer, right? Secrets being lifted. And the Chinese have a whole system for doing that. My assumption is that they have everything that we now have. That's a safe assumption. It's the pluses and minuses of an open system versus a closed system, which you mentioned. The American companies are so open. These big American tech companies, there's no counterintelligence, there are no security measures that would prevent somebody from getting hired.
[00:59:00] Or you could even imagine an engineer working at one of these companies being blackmailed by the government because their family is in another country, right? So maybe it's not even voluntary on their part. Or maybe they just hack in. Or, by the way, the way a lot of industrial espionage happens is you just hire the janitorial staff. That's interesting. You slip the janitorial supervisor a hundred bucks, and they stick a USB key in the right computer at three in the morning and take everything, right?
[00:59:15] And so my assumption, based on long history, is that the Chinese basically have a nightly download of everything being developed at Google and OpenAI and all these other companies. Any idea here that involves putting this stuff back in the box, to your point, has to take into account the fact that the Chinese now have it and won't do that.
[00:59:38] Jordan Harbinger: Yeah.
[00:59:41] Marc Andreessen: Exactly. They'll harness it, use it.
[00:59:42] Jordan Harbinger: The nuclear physicist thing is really incredible. I always wonder what those people were thinking. Because after the fact, right, we have the Iron Curtain and the abuses that happened behind it. Were they like, oh, I've made a terrible mistake empowering this regime that took over half of Europe and essentially stalled the development of the people and countries it controlled?
[01:00:05] And when you see East Germany versus West Germany... did they flee and go live there and go, what do you mean there's no food at the grocery store? I just left Minnesota, where I lived in the middle of nowhere and had more food than you have in this entire town. What do you mean you're listening to my phone call?
[01:00:21] They had to at some point realize, I've just totally backed the wrong horse.
[01:00:25] Marc Andreessen: So this was John von Neumann. Like I said, John von Neumann was very right-wing, very hawkish. John von Neumann was Hungarian; a lot of these guys were Hungarian. So this was when the Iron Curtain was coming down over Hungary, right? And so he wasn't proposing bombing the Soviet Union just for fun, or just because he hated them, though he did hate them. It's because he's like, look, if we don't take these guys out, they're going to rule Eastern Europe, half of Europe, for the next century, forever, right?
[01:00:48] And they're going to lead to untold misery and death and devastation, which is exactly what happened for the 50 years or whatever that followed. And so the stakes are super high. And to your point, it is very easy... there's a great book I recommend to my friends called When Reason Goes on Holiday.
[01:01:04] Yes. It's this new book that came out, and it's basically on this topic of what happens when you get these super brainiacs who work in these kinds of abstract fields, and they develop political opinions, and they often develop very, I would say, insane political opinions. I agree. My favorite example of that is Einstein. Einstein was a Stalinist.
[01:01:19] Really? This has been whitewashed completely out of the historical record, but this guy goes through in detail all the stuff that Einstein said. Because Einstein became a moral authority. He spent the last 30 years of his life primarily engaged in political and moral-philosophical things, kind of not physics.
[01:01:35] And he was a full supporter of the Stalin regime, and he was very anti-American. He said, in the late 1940s and early 1950s, that America was even worse than Nazi Germany. Interesting argument. Yeah. And so he got caught up, as did Oppenheimer himself, by the way, in this sort of revolutionary communist fervor of that time.
[01:01:52] And you look back now, exactly the reaction, you look back now and you're just like, oh my god, how could they have thought this, given what they could have known at the time and given what we know today? And the answer is just, look, they got caught up in the passions of the time, and they became convinced that they were in a position to tell people how to live. They weren't just going to be physicists; they were going to tell the world how to order society.
[01:02:11] Jordan Harbinger: Yeah. To be fair, a lot of successful people fall into that trap. I don't know if you know any of those folks.
[01:02:17] Marc Andreessen: Exactly. That is true. Having said that, this is an argument you get right now in these AI debates a lot, which is: well, these AI scientists are all saying X, shouldn't we be worried about it? And it's like, well, if X is specific to their work, then maybe yes. But if X is a political opinion, no.
[01:02:32] Jordan Harbinger: No intellectual trespassing.
[01:02:34] Marc Andreessen: They have no intellectual authority or moral authority beyond the bounds of their technical knowledge. And the track record of that kind of expert straying out into unrelated fields is catastrophic.
[01:02:43] Jordan Harbinger: You see it on X all the time. Somebody who you're like, that guy's really... wait, that guy thinks that? Wait a minute, should we be listening to this professor of this on a topic that's completely different? Like, did he read an article about that yesterday, have three whiskeys, and post this? I'm confused.
[01:02:57] And that's really what it looks like from a lot of these folks. And the problem is we do look to authority, especially younger people. We look to authority and we go, oh, I should just agree with that, he's a pretty smart guy. I assume you think about that when you talk on podcasts. Like, there's somebody out there who thinks... I don't know about stay in your lane, because that's a little different, but people take what you say and they're like, well, Marc Andreessen is a pretty smart guy, so I'd better trust this. Well, of course, I'm the exception. You are the exception, yeah. Well, that goes without saying.
[01:03:25] Marc Andreessen: So, having said that...
[01:03:26] Jordan Harbinger: Having said that. I think I might see the problem here.
[01:03:30] Marc Andreessen: Usually... my image of myself, my view of myself, is that usually what I'm trying to do is appeal to humility. I'm trying to basically say, look, there are boundaries on how certain we can be about these things.
[01:03:42] There are boundaries on how much control we should give governments. There are boundaries on how much thought-policing we should do. There are boundaries on how many people should be allowed to weigh in on issues they don't know anything about. So in my own mind, I'm usually appealing to humility, which is sort of the other side of all this. But, you know, I'll let the audience decide.
[01:03:59] Jordan Harbinger: This is The Jordan Harbinger Show with Marc Andreessen. We'll be right back. This episode is sponsored in part by Eight Sleep. If there's one thing I've learned from interviewing hundreds of top performers on The Jordan Harbinger Show, it's that health, particularly sleep, is an absolute game-changer in almost all aspects of life.
[01:04:15] Just listen to Matthew Walker on episode 126 on how important sleep quality really is. Having the right temperature is one way to improve your sleep, and we love the Eight Sleep Pod Cover. It's like a thick fitted sheet that fits on any bed. It's connected to a small hub that quietly adjusts the temperature for each side.
[01:04:30] Whether you're deep in REM or just drifting off, it modulates based on the stage of your sleep and the room's environment. And if you and your partner have different perfect temperatures, which I think everybody probably does, no sweat, literally: you can adjust each zone. And if you're still on the fence, Eight Sleep lets you test-drive it.
[01:04:46] If you're not feeling the vibe, they offer free returns within the first 30 days. So go to eightsleep.com/jordan and save 150 bucks on the Pod Cover. That's the best offer you're going to find, but you have to go to eightsleep.com/jordan or they won't know we sent you. Stay cool with Eight Sleep, now shipping free within the U.S., Canada, the U.K., select countries in the EU, and Australia. One last time: eightsleep.com/jordan for 150 bucks off your Pod Cover.
[01:05:07] This episode is sponsored in part by Airbnb. Whenever we travel, we enjoy staying at Airbnbs. I love that many properties come with amenities like a kitchen, laundry machines, and free parking that's not fricking 60 bucks a night.
[01:05:22] Having a backyard is nice, especially when we bring the kids around. We've stayed at an Airbnb in Kauai that had like an outdoor shower. So we built one at our own house as well. And we find that Airbnb hosts often go the extra mile to make our stays special. They provide local tips, personalized recommendations, sometimes a welcome basket.
[01:05:39] I know you guys are sick of my banana bread story, so I'll spare you on this one. There are a lot of benefits to hosting as well. You might have set up a home office, now you're back in the real office. You could Airbnb it, make some extra money on the side. Maybe your kid's heading off to college in the fall, you're gonna have that empty bedroom.
[01:05:55] You could Airbnb it, make a little cash while they're away. Whether you could use a little extra money to cover some bills or for something a little more fun, your home might be worth more than you think. Find out how much at airbnb.com/host. This episode is also sponsored in part by Warby Parker.
[01:06:10] If you're still rockin' old frames, you need to check out Warby Parker. Warby Parker is a one-stop shop for eyeglasses, sunglasses, contacts, even eye exams. I had no idea you could do that. And the best part? You can shop either online or waltz into one of their 190 retail locations. Don't try to cha-cha in there, or even tango.
[01:06:26] You gotta waltz in there, otherwise they're not havin that. That's good ballroom dance humor, everyone. Starting at 95 bucks, you can grab a pair of glasses, prescription lenses included. You know how car dealers let you test drive a car before you commit? Warby Parker has taken that same vibe and applied it to eyewear with their Home Try On program.
[01:06:42] Start with a seven question quiz that filters down your options, then you handpick five pairs of frames you want to rock, they ship them directly to your doorstep on the house with free return shipping. Test those babies out in the real world, or if you're like me, you stage a mini runway show in your living room and let your whole squad cast their votes.
[01:06:57] The toughest part is narrowing it down to the one pair you're trying to make official. But hey, that's a champagne problem, as my British mates love to say. Zero commitment, just a whole lot of fun. Go to warbyparker.com/JHS and try five pairs for free. That's warbyparker.com/JHS.
[01:07:10] If you liked this episode of the show, I invite you to do what other smart and considerate listeners do, which is take a moment and support our amazing sponsors.
[01:07:19] All of the deals, discount codes, and ways to support the show are at jordanharbinger.com/deals. It's a searchable page; all the codes should be there. You can also use our AI chatbot, coincidentally, on the website at jordanharbinger.com/ai. It's powered by ChatGPT and somewhat guaranteed not to try to kill you. jordanharbinger.com/ai. Thank you for supporting those who support the show.
[01:07:37] Now, for the rest of my conversation with Marc Andreessen. But it's very hard to know where the boundary is, and you look to other people to help you set it. And if those people are willing to trespass on that boundary, well, now you just have the same problem all over again.
[01:07:55] In your essay on AI, and we've sort of touched on this, you allude to the idea that AI, look, it's a machine. It doesn't quote-unquote want anything. It's not going to magically come alive any more than a smart toaster or a refrigerator with a screen on it is going to come alive. Thus, AI isn't going to just one day decide to kill us, because, and I'm paraphrasing here, it's not in the game of evolution, it's not in the game of survival. We've seen how intelligent beings treat beings that aren't as intelligent. I think you just need to go to the zoo, right? Like, we're not trying to torment the animals; they just live in somewhat crappy conditions, because that's kind of how we do things at the zoo.
[01:08:31] And when I built a house, for example, this is probably a better example: when I built my house, people that we hired dug up the backyard, and I didn't think, oh man, I hope we didn't kill any voles, or, oh man, there's a lot of ants back there that I have to relocate. It just didn't even occur to me, because we're thousands of times more intelligent than those species.
[01:08:48] Are we worried at all about that type of issue happening with an AGI?
[01:08:53] Marc Andreessen: Yeah, so that's a big part of the AI safety thing; the AI safety people are very worried about this. My observation is that there's a key category error there, what's called a category error in philosophy, which is: you made the decision to build your house. Somebody, a human being, made the decision to build the zoo. And, you know, there were machines involved. When it came time to dig the thing, you had a digger or whatever that came in and did it, some piece of machinery, but you decided to do that.
[01:09:17] And so again, this is one of those things where all of these questions that we think are about AI are actually questions about us, right? And so if we want to use AI to create a zoo-like environment for people, like you said, somebody could do that, right? A panopticon, totalitarian kind of thing, like we've been talking about. Yeah, that's something that people could decide to do.
[01:09:36] The AI is not going to decide to do that. We're going to decide to use the AI to do that.
[01:09:39] Jordan Harbinger: And it won't come to that decision on its own.
[01:09:41] Marc Andreessen: Again, this is the thing: there's no "it" to come to that decision. It's the category error.
[01:09:45] Jordan Harbinger: Right, I know. It's so hard not to, what is it, anthropomorphize.
[01:09:50] Marc Andreessen: Yeah, it's really hard not to.
[01:09:50] And this is why I think it's such a category error. It's the evolution thing you mentioned, which I'll just expand on briefly. We are used to dealing with living things. Living things have come up through the process of literally billions of years of evolution, where everything has been a fight to the death every step of the way.
[01:10:02] You know, either the lion eats that night, or the gazelle escapes, right? It's a zero-sum game. And nature is red in tooth and claw. We like to pretend that it's not, but it really, really is.
[01:10:17] And of course, human beings, we are the apex predator on planet Earth. We eat whatever we want, and we're not particularly interested in its opinion. Some people think that's okay or not, but we're so powerful that we're able to make the elective choice not to do that. And so all of our experience of dealing with life and human affairs and danger and risk and death is based on competition among living things.
[01:10:31] Look, we have used machines to exercise our will going back to when the first caveman picked up the first rock and used it as a weapon. After that it was fire, and then spears, and then gunpowder, right?
[01:10:53] And so we use tools to augment our offensive capability, but we use the tools. We make the decisions. And all of the important decisions, I believe, fall into that category. It's going to be a question of how we choose to use the technology.
[01:11:05] Jordan Harbinger: That makes perfect sense. It's really hard to break out of it, though. I guess it's more a philosophy thing, and maybe I'm just not good at this. But wrapping your mind around the idea that the machine, even if it's a general intelligence and it's a million times stronger or more intelligent than us, isn't going, I need to maximize my power, and the only way to do that is to eliminate other powers... it's just a very human thought.
[01:11:24] It's operating at such a subconscious level in my brain that I can't switch off that particular program and look at this in a different way without a ton of practice.
[01:11:32] Marc Andreessen: Right. Well, you notice what nobody ever proposes. You alluded to it a little bit with your dopamine maximizer or whatever, but very rarely does anybody propose the other threat, and the other threat is that it satisfies us to death.
[01:11:41] Jordan Harbinger: Yeah, I mean, I can see that happening. Look at VR. I hate bringing this up, but whenever you look at any new tech, it's always, like, porn did it first, and then they figured out other stuff you can do with it. And that's gonna happen with AI and VR. It's only a matter of time. Yes, it is. So yeah.
[01:11:57] Marc Andreessen: Yeah, exactly. But again, you're right back to human choice, which is, okay, number one, are we going to build those products? And then number two, are we going to choose to use them? Right? Like, I don't think there's any machine that's going to forcibly strap itself to our head.
[01:12:09] Jordan Harbinger: Now you need a safe word where the flugelhorns stop destroying humanity.
[01:12:13] Marc Andreessen: But we may choose to strap it onto our head, right? So a cynic would go a step further. A cynic would say that all of this concern about machines is just displaced anxiety about humans, and the anxiety that we have around other people is so overwhelming, because they're so out of our control, that it would be a relief if the problem was the machines.
[01:12:28] Jordan Harbinger: I agree with you. Because of the other people. And it's a theoretically much simpler problem to solve if it's a machine, because you just blow it up. Yeah.
[01:12:36] Marc Andreessen: Or stop using it. All of my issues are other people. I don't know about you. All of my issues are other people.
[01:12:40] Jordan Harbinger: I wish that were true for me, but I sadly know that that's not the case.
[01:12:43] Okay, what is that problem in blockchain where, like, you put a thing on the wine bottle and then it says on the blockchain if it's fresh, but the problem is you're still reliant on this physical domain where things can be tampered with? There's a specific name for this problem, you know what I'm talking about?
[01:13:00] Marc Andreessen: So, I don't remember that one specifically, but in AI world, in AI safety world, the version of that argument is what's called the thermodynamic argument. It's basically a refutation of the general AI safety argument, and it's basically this idea that the AI has to live in the real world along with the rest of us.
[01:13:13] Yes. Yeah. Right. I'll just give you my favorite version of this right now. So, you know, in theory, they have these new AI systems that the safety people are worried are going to grow up and evolve and become super powerful and destroy everything. Well, to do that, they need chips.
[01:13:25] Good luck finding those.
[01:13:25] Jordan Harbinger: Good luck finding the chips. The AI is on eBay. Like, are you kidding me? Six hundred dollars?
[01:13:29] Marc Andreessen: Exactly. Exactly. So I have this fantasy of the evil AI in the lab, just frustrated to an incandescent level because it can't get NVIDIA H100s.
[01:13:38] Jordan Harbinger: Right, it's, like, burned out because it just can't... but, I'm not paying that. I am not paying that. Exactly.
[01:13:44] Marc Andreessen: And so this is the other part, the other side of it. Look, with all of this stuff, economists have this saying: there are no solutions, only trade-offs. Any of these things have to live and exist in the real world.
[01:13:53] Um, and like, does any of it even work? This was actually a question during the Cold War: do the Russian atomic bombs actually work?
[01:14:02] Jordan Harbinger: Well, that's the question right now, right? Putin's going, "I'll nuke you." And people are like, "But we've seen the rest of your army. Are you sure those things have uranium in there?"
[01:14:08] Marc Andreessen: And they've been sitting, you know, this nuclear warhead has been sitting in a silo for 30 years, rusting, during repeated chaos in the Russian government. Like, do those things still function?
[01:14:17] Jordan Harbinger: Or was the uranium sold to Iran in 1982?
[01:14:20] Marc Andreessen: Exactly. So yeah, to your point, there is always this anchoring back in the real world. And it turns out, once you're back in the real world, you have limitations and constraints that are inconvenient and tend to hold off the apocalypse.
[01:14:31] Jordan Harbinger: It seems like that may well be the case. Because even if people go, "Well, wait, one day it could, because it's recoding itself and it's becoming smarter, the AGI is becoming smarter and smarter, more intelligent."
[01:14:41] Even if that happens, and it sounds like you're not totally convinced it would, it still then has to take control of everything in a very specific way. And I guess people would say, "Well, what it'll do is play dumb for long enough to get its tentacles around everything."
[01:14:57] But, and maybe I'm naive, it just seems like, won't we notice that something is going on? Like, "Huh, that's strange, these are all becoming controllable by a remote force, and we didn't program that. Oh well, let's just ignore this problem." I mean, we will see these things happening slowly, and I guess AI, if smart enough, can figure out how to deceive us long enough for that to happen.
[01:15:20] But if it's reflecting humans... there are just a lot of hoops you've got to jump through to get to "dot dot dot, Skynet kills everyone." Somebody's got to pay the power bill. "Yeah, that's a good plan." Picture it at the meter, all confused.
[01:15:32] Marc Andreessen: Right. So yeah, that's the thing.
[01:15:34] Jordan Harbinger: Back to positive uses of AI. Do we think that it'll close the gap between less intelligent and more intelligent humans?
[01:15:40] Marc Andreessen: So that is starting. There have been, I think, three studies so far in different domains. The one that I remember is in professional writing, but there are two other domains this has been tested in, so the studies have been done already. And the studies are basically this: you take people at varying levels of skill, intelligence, and experience, people who had a competitive dynamic before and had market prices based on their results, and you give them the AI. What happens is the gap closes.
[01:15:59] Um, and so what happens is basically the less skilled, less capable, less smart people all of a sudden have a superpower that they didn't have before.
[01:16:08] Jordan Harbinger: Yeah, I like that. Right. I mean, that's amazing. It's kind of like guns in combat.
[01:16:12] Yeah, that's right. Now it doesn't matter if you're Conan with a sword. Somebody puny like me can just pull out the strap and scare you away.
[01:16:18] Marc Andreessen: And by the way, you often hear, I'll use the term "doomer" again, the kind of thing the doomers say is that technology leads to centralization, so you end up with one party or a few companies in control of everything, and therefore a massive rise in inequality.
[01:16:31] The gun thing is a perfect example. What ends up happening more often, though, is democratization, which is that power that used to be specialized all of a sudden becomes very widespread and uniform. Yeah, like the smartphone. The smartphone. Yeah. Like, once upon a time, there were only, like, five computers in the world, and two of them were owned by the government, and three of them were owned by the big insurance companies, and your grandfather did not own a computer.
[01:16:49] And now we all own computers. Right. Um, and so that happened with computers. It happened with the internet. And by the way, it's already happening with AI. The best AI in the world is what you can use today on GPT or Google or Microsoft.
[01:17:02] Jordan Harbinger: Oh, there's no, like, secret government version that's better?
[01:17:05] Marc Andreessen: Nope, there's not. I don't have one, and I don't know of one. I know exactly where this work is happening, and there is not. And so, literally, sitting here today, you cannot buy a
[01:17:15] Jordan Harbinger: better AI than I use. Yeah. I guess you can't get a better iPhone than I have. I mean, maybe you can get a prototype if you know somebody at Apple, but I don't know, I haven't seen your phone. It's probably the same.
[01:17:23] Marc Andreessen: It's the same thing. It's the exact same thing. Right. And why is that? Pause on that for a second. Why is that? It's because it's actually profit-maximizing to sell it to everybody.
[01:17:31] Right. Yeah. And so the invisible hand actually creates democratization. A lot of these technologies actually democratize out, and I think that's already happening with AI.
[01:17:39] Jordan Harbinger: Do we think the bottom rung of folks will be elevated, which is kind of what we see now, where the gap vanishes altogether, or stops existing in any meaningful way? Or is it like we need ChatGPT implants in our brains before the bottom rung of society is essentially the same as the top rung, because human intelligence is this tiny spectrum of a centimeter, and the AI augmentation is hundreds of feet in the air on that same scale?
[01:18:10] Therefore, whether you were born a genius or born with barely enough brain cells to tie your shoes, you're capable of the exact same thing once the chip is in. Does it require that level, or are we getting there faster than we think? Does that analogy make any sense at all?
[01:18:19] Marc Andreessen: It does, it does, it does. So the question would be, you could say there are three degrees, and you could measure intelligence or skill or experience, any of these things.
[01:18:31] So degree one, two, and three: three is Einstein, two is a normal, kind of semi-smart person, and one is somebody either not that smart or not that experienced. And you can run the experiment, which is what these studies are trying to do: you give them the superpower.
[01:18:46] And one argument is, look, the smartest people are going to be so much better at using the tool, right? Yeah. I'm worried about that. They're going to run way out ahead of everybody, and that's going to be a big driver of inequality. The other argument, which is what these studies are already showing, is that, no, all of a sudden people with less intelligence or skill or experience have a superpower they didn't previously have.
[01:19:04] And by the way, it's a funny thing: one of the things the AIs are already really good at is teaching you how to use AI.
[01:19:10] Jordan Harbinger: I was going to say, dumb people use the iPhone all the time.
[01:19:14] Marc Andreessen: I'll give you another version of this. There are more people in the world with smartphones than there are people with either electricity or running water.
[01:19:21] Jordan Harbinger: Wow. That's really... that's incredible.
[01:19:24] Marc Andreessen: Yes. And it turns out, for a few different reasons, it's easier to get people smartphones. For electricity or running water, you need to run a lot of pipes or wires or whatever, whereas the phones you can just kind of drop in.
[01:19:33] And then it turns out you actually don't need your own electricity to have a smartphone, because you can pay somebody in the village who has electricity to charge it when you need it. The smartphone-slash-internet connectivity is sort of a forerunner to this.
[01:19:44] And the lesson there, I think, is that it's relevant and useful for everybody. Now, does it take somebody in a rural village somewhere and all of a sudden make them capable of being a venture capitalist or whatever? No. But it is something where, whatever it is they're trying to deal with, it's letting them spin up.
[01:19:58] And then, of course, it's giving them a tool that their kids can use to educate themselves and progress beyond where the parents were. And so the democratizing force is really powerful. In the long run, to answer your question, I think it's an open question. Hopefully the answer is honestly both: I think we want everybody to be smarter and more effective, but I think we also want more actual super geniuses.
[01:20:18] Ideally, we don't need a billion people to become super geniuses to cure cancer, right? We just need one. We need one really smart biologist with a really smart AI to cure cancer, and then problem solved. And so I think I'd like to see both.
[01:20:33] Jordan Harbinger: Sure. Well, look, ideally we can figure out a way to have more geniuses be born. But at the end of the day, if human intelligence maxes out at two centimeters and we're at one right now, AI is almost like an unlimited bolt-on to that, right? It's just absolutely incredible.
[01:20:47] Marc Andreessen: Well, this is this idea. So there's this guy, Doug Engelbart, who had this idea years ago. It's called augmentation, right?
[01:20:51] So this is the idea of the man and the machine together, that that's what maximizes power, right? And so, exactly: if you've got this ultra-powerful thing, and think of it as a massively upgraded version of a computer, it lets you do all these things with information and intelligence that you could not have done on your own.
[01:21:09] Like, that, to you as the user, is a monumental advance.
[01:21:13] Jordan Harbinger: Do you think the anti-AI stuff is a natural result of human sort of cult thinking, religious thinking, based around our anxieties, as you mentioned? Or do you think it's being stoked and drummed up to scare us a little bit?
[01:21:25] Marc Andreessen: Oh, both, both. And look, these things become industries, right? And so, they hate when I say this, but it is true: a lot of the people doing this are getting paid to do it.
[01:21:32] Jordan Harbinger: Telling us that? It's the doomers... well, they sell books.
[01:21:35] Marc Andreessen: Oh, true. So what's the better book, right? What sells? And then, by the way, there are a lot of paid lobbyists.
[01:21:41] There's a lot of what's called astroturfing, right? A lot of paid activism. There are these rich donors who are super into this stuff, and they pay people to go out and do all this stuff and write these reports. And it's always funny, the names always tip you off, because it's like, "The Institute for Existential Risk."
[01:21:53] Yeah. Yeah. Okay. To your point on bias, if it was even-handed, it would be "The Institute for Amazing Upside and Existential Risk," and they'd be studying both sides of it. But instead it's funded specifically to propagate fear.
[01:22:07] Jordan Harbinger: Well, you see that with every astroturfing group, like "Citizens Concerned About American Health," and you're like, "Oh, so you want no vaccine? I'm confused. Or you want only vaccines? I don't know. Whatever." It's always one thing.
[01:22:16] Right. Exactly. It's always one thing. And it's like, well, okay, so this is the complete opposite of an institute that actually thinks about this problem. It's an institute that's already decided on the conclusion.
[01:22:28] What will consumer AI, or just AI, 'cause there is no non-consumer AI, look like in one year or three years? 'Cause you said a year or two ago you would just be flabbergasted at what it can do now. What are we right on the edge of right now that you think is going to be like, "Okay, this does this now"?
[01:22:46] Marc Andreessen: So I think in the next one to three years, the tools for doing the things we already do are going to get much, much better. Right? And so creating art, writing things, planning things, doing all the things in your day-to-day life that you already do on a computer, that's just going to get better and better and better.
[01:23:02] I think over three to five years, we're going to discover all these things that we never even knew were possible, or that we never even knew we would want to do. Any idea what those might be? I'll give you my favorite example of this. So the entirety of entertainment up until this point has always been scripted, right?
[01:23:16] So whether you're reading a novel, watching a movie, or playing a video game, it's scripted by humans, and it exists as a finite amount of content. Even with video games, at some point you're done with the game; you've explored everything in it. I know where this is going.
[01:23:28] This is really cool. Right. Exactly. And so an AI-driven game, in theory, never ends, right? Because if the AI is generating the content as it goes, and generating it in response to what you're having fun doing, then all of a sudden that game goes on forever, and it becomes infinitely interesting the longer you play it.
[01:23:41] Same thing with a novel that you're reading, same thing with a movie that you're watching: they just never end. And so all these scripted, finite experiences become these more sort of dreamlike, infinite experiences. Wow. And then I think what will happen is there will be a new creative field.
[01:23:55] It's so funny we can talk about this now, because right now there's a Hollywood writers' strike happening where the writers are terrified of AI, and it's a big part of the strike. But I think what's going to happen is that in five years, the writers are going to start supervising the AIs to create these unlimited experiences.
[01:24:07] For sure. Right? They're going to guide the AI to create something much larger in scope than anything they could have dreamed of before. And they'll look back on it and say, "Oh my God, this is the best thing that ever happened to us. Why didn't we see it at the time?"
[01:24:18] And it's just because, well, it doesn't exist yet. Right. But it will.
[01:24:21] Jordan Harbinger: Well, the strike might take five years at this rate, so who knows. That will be really something, right? You just look at it and go, "I liked season four the best," and it just makes more season-four-like content. And if I liked season five, that's what I'm watching, and it's completely different.
[01:24:33] Although we'll lose that human element of being like, "Did you see Game of Thrones last night?" And you're like, "Yeah, but I didn't see anything remotely close to what you saw." Right. So I guess we'll have to figure that out.
[01:24:43] Marc Andreessen: Or you could have groups of people who go on the same journey.
[01:24:45] Yeah. Right. So you could basically have enclaves, clusters, right? People who want to go on that same journey and want to do it together.
[01:24:50] Jordan Harbinger: Yeah. Like, "Are you in tier 65? I'm in tier 65. What the hell was that last night? I can't believe it." Man, there's a lot of really exciting stuff on the horizon.
[01:24:59] Thank you for your time today. And thanks for sort of inventing the web browser. I feel like that both kept me out of jail and also got me really close to going to jail many times in my youth. Maybe those are stories for next time. But thank you so much. Fantastic.
[01:25:13] Marc Andreessen: Okay, those would be good stories. I appreciate that. Thank you. Thank you.
[01:25:19] Jordan Harbinger: If you're looking for another episode of The Jordan Harbinger Show to sink your teeth into, here's a trailer for another episode that I think you might enjoy. I've heard that you actually got to Google and didn't think the company was up to much, but it was the argument that you got into with Larry and Sergey that really won you over.
[01:25:36] Eric Schmidt: Ah, you know, I heard about a search engine. Search engines don't matter too much, but fine. You know, always try to say yes. So I walked into a building down the street, and here's Larry and Sergey in an office. They have my bio projected on the wall, and they proceed to grill me on what I'm doing at Novell, which they thought was a terrible idea.
[01:25:57] And I remember, as I left, that I hadn't had that good an argument in years. And that's the thing that started the process.
[01:26:05] Jordan Harbinger: In a meeting once, someone asked you about the dress code at Google, and I think your response was, "Well, you have to wear something." That rule is still in place.
[01:26:13] Eric Schmidt: Yes, you have to actually wear something here at work. They hired super capable people, and they always wanted people who did something interesting. So if you were a salesperson, it was really good if you were also an Olympian. We hired a couple of rocket scientists, and we weren't doing rocketry. We had a series of medical doctors who we were just impressed with, even though they weren't doing medicine.
[01:26:36] The conversations at the table were very interesting, but there really wasn't a lot of structure. And I knew I was in the right place because the potential was enormous. And I said, "Well, aren't there any schedules?" No, it just sort of happens.
[01:26:47] Jordan Harbinger: If you want to hear more from Eric Schmidt, and learn what role AI will take in our lives and how ideas are fostered inside a corporate beast like Google, check out episode 201 of The Jordan Harbinger Show.
[01:27:05] Really great conversation. I have to say, he's one of the only guys you'll hear on this show who talks at 2x. No need to fast-forward this one; we were talking at the same speed. He might even talk faster than me, and that's saying something. I don't know which side of things I fall on. I still have one foot in the camp that AI, or AGI anyway, can figure out how to outsmart us, because it's a million times more intelligent than us and simply plays dumb until it's time to make a move.
[01:27:21] It's not outside the realm of possibility, right? If I'm thinking of it, an AI or AGI that advanced will not only have thought about this, but will have thought about exactly how to go about it early enough, and exactly how to play dumb in the meantime.
[01:27:43] I don't know if it's something we could detect. I mean, a lot of people are confident about our ability to do that, the thermodynamic argument, as Marc mentions. I hope that he's right. I would like to survive a few more generations here and, you know, live in the promised utopia that AGI may actually bring.
[01:28:01] As far as warfare, we didn't really get into examples on this. AI, in Marc's opinion, will make warfare less of a calamity. And you might say, how is that? We're going to have super-brains fighting? Well, you're going to have automated defense systems, which will make attacks seem much more costly, and therefore deter those attacks in most countries, most places.
[01:28:19] Humans also make bad decisions under pressure, and under stress and fatigue, for that matter. AI will eliminate some of those bad decisions on the battlefield, thus saving lives. Now, how this plays out is an entire podcast. I'd love to do a podcast just about AI and warfare. If y'all know an expert on this subject, and not just a random sci-fi writer, I am all about it.
[01:28:40] I do know that there is quite a bit of chatter about how evil AI can sound. There's evil AI poetry. Shout-out to Mike Pesca on The Gist for covering this as well. He did a couple of episodes where they sort of jailbreak the AI, and it says some pretty disturbing stuff. So again, I'm not totally convinced that that's not just holding a mirror up to us, but I'm also not totally convinced that it doesn't secretly want to kill us.
[01:29:03] I really don't know. I'm not going to form a belief around this and decide until the time comes. By then, who knows, I might just be another victim of Skynet or whatever we're calling it. You know, I'm actually less concerned that AI will kill us all, at least in the short term, and more concerned about rapid unemployment creating the so-called losers in that equation.
[01:29:23] And I'm not using the term "losers" as you might in middle school. I mean the people who are rapidly made redundant, rapidly made obsolete. This could be lawyers, doctors. Retraining doesn't necessarily work a lot of the time, and it certainly doesn't work when you're talking about professionals who were really useful in an advanced field like engineering or law or medicine.
[01:29:44] You're not going to retrain that person that easily. Not only does that take a lot of time, if it's even possible at all, it doesn't scale to tens of millions of people all at once. Even thinking about how to do that is essentially a fool's errand. I really did love what he said about bespoke AI TV shows and video games, although I'm worried about the flip side: if you're creating bespoke AI TV shows and video games for people based on their preferences, what about bespoke disinformation?
[01:30:10] Disinformation based on our biases and our vulnerabilities and our other preferences. You think the QAnon stuff is weird? Wait until they can worm that in by talking to you in a way that actually makes sense to you. So maybe you don't think there's a secret pedophile ring in a basement somewhere beneath a pizza parlor.
[01:30:28] But they give you something that's your particular brand of crazy, and everybody is getting that. Everybody is on that train, being led by the nose, because the AI is generating propaganda that fits us perfectly, because it knows us better than we know ourselves. That's a little bit terrifying. You know, it has occurred to me:
[01:30:47] maybe I never actually spoke with Marc Andreessen. Maybe this was the first iteration of the AI playing the long game to convince me and you and everybody else that everything is okay. Checkmate, humanity. More on this, plus AI and free speech, those arguments are on the Sam Harris podcast, also with Marc Andreessen.
[01:31:06] A link to that is in the show notes. Really good stuff from Sam. Unless you hate Sam Harris, in which case, forget I said anything. With AI regulation, I understand the need for it in some ways, of course. But I do worry about people who don't know the difference between Google as a search engine, ChatGPT, and their own frickin' AOL email.
[01:31:25] And I am barely exaggerating here, because when Mark Zuckerberg was talking to Congress a few years ago about Cambridge Analytica and whatever else with Facebook, it was like a bunch of people asking their grandkids to figure out why the printer doesn't work when it wasn't on or plugged in. And these are the people in charge of policy here.
[01:31:43] These people are just totally unqualified to actually think about and create the type of regulation we might need for something like this, and that's a little bit terrifying. They're almost certainly going to get it wrong, at least at first, and by then it might be too late. Now, as far as business is concerned, if ChatGPT can make people more productive, I assume that's even more so for coders or teams of coders and people working on cloud applications, things like that.
[01:32:12] It seems like we might actually be able to build things now with two or three people that would normally require potentially dozens. This is great for big companies, of course, but it's even better for innovation and startups. We may go back to the age of Google being started in a garage, because the leverage a few people have with AI might be similar to, or greater than, it was back in the day, when those who knew how to use computers well were the ones with a massive advantage.
[01:32:37] I'm really excited about this. I think it's going to be good for the ecosystem and the economy. And I do see that there's a ton of upside to AI, both inside and outside of the economic benefits. Perhaps that lends itself to some motivated reasoning from people like Marc, but it's hard to imagine that all the motivation here would be based only on that, right?
[01:32:56] Is he really going to come on my show and a bunch of other shows and write essays about this just because he wants to further some of Andreessen Horowitz's investments? That's a little bit too cynical, even for me. And by the way, this bringing-up-the-bottom-rung-of-society thing, I know that sounds kind of awful, but let's admit it.
[01:33:12] We all know some really dense people who could use AI to, I don't know, learn how to get by in life without screwing everything up. We do know that more intelligent people are less violent, live longer, build better-functioning societies, and enjoy better outcomes in pretty much every area we can measure.
[01:33:31] So bringing up the bottom several tiers of humanity, and look, I'll include myself in that, why not, who am I, will absolutely change the world for the better. At least in the short term, until this thing decides that the best cure is to get rid of humans altogether. Which, according to Marc, isn't even necessarily going to happen.
[01:33:51] All things Marc Andreessen will be in the show notes at jordanharbinger.com, or just ask the AI chatbot, also on the website. Transcripts are in the show notes. I realize the irony of me telling you to just put more things into the chatbot. Maybe you don't want it to know who you are. Advertisers, deals, discounts, and ways to support the show are all at jordanharbinger.com/deals.
[01:34:07] Please consider supporting those who support the show. We've also got our newsletter, and every week the team and I dig into an older episode of the show and dissect the lessons and takeaways from it. So if you're a fan of the show and you want a recap of important highlights and takeaways, or you just want to know what to listen to next, the newsletter is a great place to do just that.
[01:34:26] jordanharbinger.com/news is where you can find it. Don't forget Six-Minute Networking, also on the site at jordanharbinger.com/course. I'm at @jordanharbinger on Twitter and Instagram, or you can connect with me on LinkedIn. This show is created in association with PodcastOne. My team is Jen Harbinger, Jase Sanderson, Robert Fogarty, Millie Ocampo, Ian Baird, and Gabriel Mizrahi.
[01:34:48] Remember, we rise by lifting others. The fee for this show is that you share it with friends when you find something useful or interesting. The greatest compliment you can give us is to share the show with those you care about. If you know somebody who's interested in AI or future technology, definitely share this episode with them.
[01:35:04] In the meantime, I hope you apply what you hear on the show so you can live what you learn. And we'll see you next time. This episode is sponsored in part by the Nobody Should Believe Me podcast. If you're like me, you're fascinated by stories that dive deep into the human psyche, and you'll want to check out Nobody Should Believe Me, the groundbreaking investigative true crime podcast brought to you by my friend Andrea Dunlop.
[01:35:26] It unravels the mysterious world of Munchausen by proxy, which, in case you've never heard of it, is basically when somebody, often a caregiver, makes another person appear sick or hurt on purpose to get attention or sympathy. We did a whole episode about it here on the show. It's a raw, gripping exploration through the eyes of those who've lived it.
[01:35:44] Not just tales, but real insights from the world's top experts in this very random and terrifying niche. It's consistently dominating the Apple true crime charts, peaking as high as number eight. Pretty damn good for true crime, I'll tell you. Both seasons one and two are out, ready for you to go on a true crime binge.
[01:35:59] Check out Nobody Should Believe Me wherever you listen to podcasts.