Kai-Fu Lee (@kaifulee) is an AI expert, CEO of Sinovation Ventures, former President of Google China, and co-author (with Chen Qiufan) of AI 2041: Ten Visions for Our Future.
What We Discuss with Kai-Fu Lee:
- How AI will magnify the effects of the energy revolution, materials revolution, and life science revolution currently under way.
- How can we keep the data that trains AI to operate free from human and cultural biases and other inaccuracies?
- The four waves of AI and where we are on the path to truly autonomous AI that frees humans to do more worthwhile work.
- How human beings can avoid displacement when all the repetitive, soul-crushing tasks are being done by robots, and what society must do to keep this from widening the gap in economic inequality.
- How AI might be used to optimize the educational experience and make it engaging for every child by tailoring it to their individual interests.
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
Last time he was on the show, AI expert and former Google China president Kai-Fu Lee gave us a glimpse into the current state of AI in China and what this means for the future of humanity. And if you’re wondering if this spelled out a future we should try to resist or welcome, you should give that episode a listen here.
This time around, Kai-Fu Lee rejoins us to discuss AI 2041: Ten Visions for Our Future, the book he co-wrote with award-winning science fiction writer Chen Qiufan to answer the question: how will artificial intelligence change our world in the next 20 years? Listen, learn, and enjoy!
Please Scroll Down for Featured Resources and Transcript!
Please note that some of the links on this page (books, movies, music, etc.) lead to affiliate programs for which The Jordan Harbinger Show receives compensation. It’s just one of the ways we keep the lights on around here. Thank you for your support!
Sign up for Six-Minute Networking — our free networking and relationship development mini course — at jordanharbinger.com/course!
This Episode Is Sponsored By:
- BrandCrowd: Get 60% off a premium logo pack
- Public Rec: Get 10% off with code HARBINGER
- BetterHelp: Get 10% off your first month at betterhelp.com/jordan
- ZipRecruiter: Learn more at ziprecruiter.com/jordan
- Progressive: Get a free online quote at progressive.com
Miss our episode with LeVar Burton, award-winning actor of Roots, Reading Rainbow, and Star Trek: The Next Generation fame? Catch up with episode 213: LeVar Burton | Storytelling the Enemies of Education Off here!
Thanks, Kai-Fu Lee!
If you enjoyed this session with Kai-Fu Lee, let him know by clicking on the link below and sending him a quick shout out at Twitter:
Click here to thank Kai-Fu Lee at Twitter!
Click here to let Jordan know about your number one takeaway from this episode!
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at friday@jordanharbinger.com.
Resources from This Episode:
- AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan | Amazon
- AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee | Amazon
- Kai-Fu Lee | What Every Human Being Should Know about AI Superpowers | Jordan Harbinger
- Sinovation Ventures
- Kai-Fu Lee | Medium
- Kai-Fu Lee | Twitter
- Kai-Fu Lee | Facebook
- Kai-Fu Lee | LinkedIn
- Kevin Kelly | 12 Technological Forces That Will Shape Our Future | Jordan Harbinger
- DNA Sequencing Fact Sheet | National Human Genome Research Institute
- GPT-3 Powers the Next Generation of Apps | OpenAI
- Foundation Models Risk Exacerbating ML’s Ethical Challenges | VentureBeat
- How Does TikTok’s Algorithm Know Me So Well? | Towards Data Science
- What Do We Do About the Biases in AI? | Harvard Business Review
- The Social Dilemma | Netflix
- China Is Still the World’s Factory — And It’s Designing the Future With AI by Kai-Fu Lee | Medium
- The Amazon Robotics Family: Kiva, Pegasus, Xanthus, and more… | All About Lean
- Inside China’s new robotic restaurant in Guangzhou | New China TV
- Hyperdrive Daily: China Ramps Up Its Autonomous Vehicle Development | Bloomberg
- Robotic Process Automation (RPA) In 5 Minutes | Simplilearn
- Automation Platform | UiPath
- Universal Basic Income (UBI) | Investopedia
- Tapping into the Drug Discovery Potential of AI | Nature
- Black Mirror | Netflix
- Virtual Reality vs. Augmented Reality vs. Mixed Reality | Intel
- How and Why Google Glass Failed | Investopedia
- AlphaGo | DeepMind
- Garry Kasparov | Deep Thinking for Disordered Times | Jordan Harbinger
567: Kai-Fu Lee | Ten Visions for Our Future with AI
[00:00:00] Jordan Harbinger: Coming up next on The Jordan Harbinger Show.
[00:00:03] Kai-Fu Lee: As an example, let's say we want to train a system that determines if someone has good credit or not, and suppose hypothetically, we have everything on the phone. Let's say we have a license to have the ability to feed that in. You would think a lot of the things are irrelevant. Does what app they use have anything to do with the credit? Does the battery level have anything to do with the credit? Does the person's address have anything to do with the credit? It turns out most of them are actually relevant when you think about it.
[00:00:34] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people. We have in-depth conversations with people at the top of their game, astronauts and entrepreneurs, spies, and psychologists, even the occasional national security adviser, war correspondent, or underworld figure. Each episode turns our guests' wisdom into practical advice that you can use to build a deeper understanding of how the world works and become a better critical thinker.
[00:01:01] If you're new to the show, or you're looking for a handy way to tell your friends about it, we now have episode starter packs. These are collections of your favorite episodes, organized by popular topics and it'll help new listeners get a taste of everything that we do here on the show. Just visit jordanharbinger.com/starts to get started or to help somebody else get started. Of course, I always appreciate it when you do that.
[00:01:21] Today on the show, there's a lot of talk about AI, artificial intelligence, these days, from whether it'll take all of our jobs and leave us all unemployed, or whether it'll murder all of us in some particularly brutal fashion. While watching experts and science fiction authors debate this endlessly online, I came across this book by Kai-Fu Lee, former president of Google China, discussing the rise of AI in China and the United States, the future of AI, and what it means for AI in the rest of the world. We'll learn just how close or how far we are from these different types of artificial intelligence, and how AI will begin to change the world and our position in it. We'll also discover why AI is as important as the industrial revolution, or as electricity itself, and yet will happen a lot faster, and what this means for us as mere humans. This is sort of an update to Kai-Fu's previous appearance here on the show. We'll link to that in the show notes as well.
[00:02:14] And if you're wondering how I managed to book all of these great authors, thinkers, and creators every single week, it's because of my network. And I'm teaching you how to build your network for free over at jordanharbinger.com/course. By the way, most of the guests on the show, they subscribe to the course and contribute to the course. Come join us, you'll be in smart company where you belong. Now, here's Kai-Fu Lee.
[00:02:36] Previously on the show, I had Kevin Kelly. That's episode 537, and he had said something along the lines of the AI revolution will be on the scale of the industrial revolution, but it'll be larger. It'll happen faster. It's basically the best thing since electricity, but even more impactful on our society. Would you agree with that?
[00:02:55] Kai-Fu Lee: Yes, I would. And coincidentally, we're seeing actually multiple revolutions. We're seeing an energy revolution. We're seeing a materials revolution. We're seeing a life science revolution. So when you fold all this together, this will actually be all the more magnified.
[00:03:11] Jordan Harbinger: So when you say that we're going to — and we'll get into this later in the show, but when you say we're seeing things like life sciences revolutions, I assume what you mean is AI stacked on top of each of these industries is going to be a game changer. For example, with life sciences, I can't crack open the human genome in a textbook and go, "Aha, there's something that I can use," but AI can look at that genome and say, "You know for people that have this, this drug might cure this thing that these people are always dealing with," and that might not happen in a century of human experimentation.
[00:03:44] Kai-Fu Lee: Exactly. Genetic sequencing is almost one gigabyte of data. Researchers don't know how to read most of it and doctors certainly don't know how to read it. It's really up to AI to figure out how we create precision medicine based on each person's individual characteristics in the genetic sequencing; maybe a different kind of treatment is needed. So just like AI can show you a different Facebook newsfeed than it shows me, AI can give you a different treatment than it gives me, and both are much more effective. So it's a perfect combination now that everything's going digital.
[00:04:19] Jordan Harbinger: And a lot of these industries are, for example, researchers putting things in a pile and saying, "Hey, I can't deal with this right now. AI will handle it later." There are a lot of professionals kind of saying like, "This problem is too big for me to tackle, but maybe in the future we'll have the computing technology." Or is it simply going to be almost like an invisible layer on top of everything that we do now?
[00:04:39] Kai-Fu Lee: You know, take genetic sequencing for example, people are saying, "There's no way I can read a gigabyte of data for every human."
[00:04:45] Jordan Harbinger: Right.
[00:04:45] Kai-Fu Lee: So we're just going to figure out the 1 percent that we can read; the other 99 percent will maybe contain less information, but maybe it doesn't. So yes, like you said, they are deferring it. And I think now, once we figure out how to get permission from people to collect this data, because genetic sequencing is extremely sensitive personal information and you can't anonymize it. So once we figure out how to get some people to donate their information, then AI can really party on that.
[00:05:17] Jordan Harbinger: What do you mean you can't anonymize it? Because it's a genome and it's very unique to you, there's just no way to make it so that they can't figure out who you are.
[00:05:24] Kai-Fu Lee: Exactly.
[00:05:24] Jordan Harbinger: Yeah. That makes sense.
[00:05:25] Kai-Fu Lee: Yeah, like with the hospital record, you can remove your name, remove your zip code, remove your phone number, address. More or less people can't figure out and reverse engineer you, but with genetic sequencing by definition, it is just you.
[00:05:37] Jordan Harbinger: So there's a privacy concern there, because at some point, the cat's going to be out of the bag, whether AI gets this information through some channel that we really don't want, or, for example, if there's just a company or an insurance company that decides to be a little bit less than ethical with it, or a nation state that says, "Eh, our citizens don't get privacy. We don't allow that." We could end up with a bunch of genetic data that is obviously super traceable. It's like a fingerprint, except you can never get rid of the finger, right? It's you forever, your genome.
[00:06:08] Kai-Fu Lee: Yeah, there could be, let's say, an important political figure whose genetic sequencing gets known, and then that person's mutation and inclination to get Alzheimer's or whatever disease becomes known. And if that gets spread, it's terrible for the person and the country. So as far as solutions, I think there are many solutions for non-genetic-sequencing problems, right? In terms of protecting, having our cake and eating it too, that's the goal: we protect our privacy and personal information, and still the AI is able to train. For most kinds of data there are ideas like privacy computing and federated learning that could work. There is anonymization that could work. But on the genetic sequencing, I think we have to be extremely careful. I think possibly privacy computing could still work. The idea of federated learning is a technology that keeps your data only in computers that you entrust and never beyond.
[00:07:03] So let's say you do your gene sequencing at Mass General Hospital. You obviously entrust Mass General. So now if we want to train an AI that does precision medicine for a particular illness, your data could be included in it by doing the training in Mass General and all the other hospitals, and then AI will aggregate the models from the hospitals, but never your personal data. So that kind of privacy computing technology could potentially one day allow us to have our cake and eat it too.
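To make the federated learning idea above concrete, here is a minimal sketch in Python. The two "hospital" datasets, the logistic-regression model, and all the numbers are hypothetical stand-ins; the point is only that each site trains on its own records and shares nothing but model weights, which a central server averages.

```python
# Minimal federated-averaging sketch: hypothetical hospital data never leaves
# its owner; only locally trained model weights are shared and averaged.
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=20):
    """A few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)     # gradient of the log loss
        w -= lr * grad
    return w

def federated_round(global_weights, sites):
    """Each site trains locally; the server averages the resulting weights."""
    local = [local_train(global_weights, X, y) for X, y in sites]
    return np.mean(local, axis=0)             # aggregate models, not data

rng = np.random.default_rng(0)
hospitals = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100))
             for _ in range(2)]               # two hypothetical hospitals
weights = np.zeros(5)
for _ in range(10):                           # ten communication rounds
    weights = federated_round(weights, hospitals)
print(weights)
```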
[00:07:35] Jordan Harbinger: Wow. For me, as somebody who looks at things that are supposed to be secure and goes, "No," that's really scary, because of the idea of a hospital that's operating on whatever budget and has like two IT security guys or zero.
[00:07:47] Kai-Fu Lee: Yeah.
[00:07:48] Jordan Harbinger: And they're like, "No, no, no, no, no, don't worry. We're going to keep tens of thousands/hundreds of thousands of people's genomes, totally secure from bad actors because we put like a password on the file—" it's just like, there's so many things that can go wrong with that but I realized that's what happens with every technology, right? Online banking. Now, we got hackers stealing money. It's kind of just the way it is. It's just a little scarier when it's, here's the exact thing that can kill these specific people or here's the thing that you're allergic to, or here's the disease that this person is going to get, you know, it just becomes so much more personal literally in every way.
[00:08:23] Kai-Fu Lee: Yeah. We need to boost security everywhere. I mean, people might think, "Hey, if I just stored my genetic sequencing on my phone, then I feel safe." But actually that's the easiest thing to hack into, right? Even worse than hospital IT.
[00:08:36] Jordan Harbinger: Yeah. I mean, I would definitely not trust my phone with anything that's that important, you know, even my credit card numbers where I have to put — I think I'm liable for like 50 bucks worth of fraud or zero-dollar worth of fraud, even then I'm like, "Eh, I don't know if I want that on my phone." Banking data is one percent as scary as having my genome online. With AI, we talked before about the key bottleneck in developing AI is the amount of data, right? And China, the United States, any major superpower has a huge amount of data coming in. And in AI 2041, you mentioned that some of the AI, the new stuff is programmed by — and paraphrasing here — ingesting 500 million pages of information and things like that. Is that just the whole Internet? Where's that information from? What is that? Is it Google books?
[00:09:26] Kai-Fu Lee: Yeah. For most applications, you actually want data that is closed-loop relevant to your business.
[00:09:33] Jordan Harbinger: Okay.
[00:09:33] Kai-Fu Lee: Yeah. So Facebook data would be no good to a hospital and vice versa. So by and large, you want to find what app you're going to run, collect the data relevant to that, and use it to optimize some metric. That's the normal application. Now, there is a new technology that is coming up. Some call it a foundation model. Others call it pre-training, a generic pre-training, followed by fine tuning. And what that means is, suppose we don't have any relevant data or very little relevant data, can we take the data from the whole world and train a general model that consumes and ingests everything? And then when you have a particular domain, that model can be fine tuned for the domain. So many of your audience may have read about GPT-3 and some of the new Google LaMDA and BERT and transformer models; those are in that class. And it's pretty amazing: a gigantic network trained on everything in the world.
[00:10:34] And to answer your question, yes, it is every text we can find anywhere. It can basically have seen everything. Then when you ask it to do something like write a limerick about Elon Musk in a Dr. Seuss style, it can do that, because it actually has a little concept grouping of Dr. Seuss and limerick and Elon Musk. So that ability is something that five years ago I did not think would work as well as it does. It in fact has no human-programmed concept of what Elon Musk or a limerick or Dr. Seuss is. And it's totally self-organizing with no supervision. And then, when you want to fine tune it to do something specific, whether it's to write poems or generate music or answer questions about technology or pretend you're talking like Albert Einstein, it can do all that, with mistakes. But I think the level of fidelity and quality is amazing. I think the mistakes will be reduced over time, and this can lead to many new AI applications.
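As a rough illustration of prompting a pretrained language model, here is a short Python sketch assuming the Hugging Face transformers library and the small public "gpt2" checkpoint, which is only a modest stand-in for the GPT-3-class systems Lee is describing; output quality will be far lower, but the mechanism is the same.

```python
# Prompting a pretrained language model: no task-specific training, just a
# prompt describing the style and subject we want (a small-scale stand-in
# for the GPT-3 behavior described above).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "A limerick about Elon Musk, written in the style of Dr. Seuss:\n"
result = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```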
[00:11:39] Jordan Harbinger: So if we have — and by the way, for people who don't know, GPT-3, transformers, some of these things, these are — what would we even call them? They're AI, they're not bots. That's too simple. What do we call them? Systems sounds too generic to me. It's like, this is the entity, this is the robot, right?
[00:11:56] Kai-Fu Lee: It's a large generic model trained on everything known to mankind. So it's kind of like when we were very young, just learning concepts of language. Maybe before elementary school, we read and watched TV and listened to people, and our brains formed certain connections, neurons firing, that allow us to gain a general understanding of language. Then based on that knowledge, when you take a class in arithmetic or chemistry or United States history, you can draw upon your general knowledge and then learn something. So that is very much akin to this foundation model, which is one name to call it, which includes a generic pre-training, which is like learning everything about the language, and then fine tuning, which is like making it work for a specific domain.
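A minimal sketch of that pre-train-then-fine-tune recipe, assuming the Hugging Face transformers and datasets libraries: a generic pretrained encoder gets a small classification head fine-tuned on a stand-in labeled dataset. The model and dataset choices here are illustrative, not anything specific from the book.

```python
# Generic pre-training + task fine-tuning: reuse a pretrained encoder and
# train only briefly on domain data (a stand-in sentiment dataset here).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)   # pretrained body, new task head

dataset = load_dataset("imdb")                 # stand-in for domain data
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")
dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset["train"].shuffle(seed=0).select(range(2000)),
)
trainer.train()                                # fine-tune for the domain
```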
[00:12:49] Jordan Harbinger: And this is different than what we have today, typically, in our computers, right? Which is like a rule-based system, an early AI, an expert system. This neural network, it's not "when there's this input, you've got to do this, and then there's that output." This is a neural network that has a different approach to problem solving, right? So the less human interference with what the AI is doing, the better the outcome, versus a computer like mine that I'm using now, where somebody had to tell it pretty much exactly what to do with everything that I'm putting in. Right?
[00:13:19] Kai-Fu Lee: Yeah. Yeah, exactly. It's very counter-intuitive. People would think that with everything that we humans know, we should program every detail.
[00:13:26] Jordan Harbinger: Right.
[00:13:26] Kai-Fu Lee: It turns out that our rules are very simple, very brittle. The rules whereby we make decisions are very brittle. The reason AI can beat us in so many tasks, from game-playing to reading radiology to diagnosing types of sickness, is because AI is able to consume so much data from so many permutations and draw its own mathematical conclusion. So when AI makes a decision or makes a prediction, it is doing so in a thousand-dimensional space and finding a particular way to divide up the yeses and nos, and humans can never do that and can never comprehend that.
[00:14:04] Now, humans still have to do a little bit of programming. One is tell the network what the goal is, right? So Facebook would tell the neural network, get people to click more, get people to read more.
[00:14:17] Jordan Harbinger: Right.
[00:14:17] Kai-Fu Lee: Amazon would say, show them good products that they're likely to buy, to maximize my revenue. So each company could program the AI in that sense. That's an objective function. It's a goal that we want to accomplish and have AI learn to optimize. And of course the human has to create the architecture of the network: how large it is, how it's connected, in what sequence to train it. It's a little bit like black magic.
[00:14:43] Jordan Harbinger: Yeah.
[00:14:43] Kai-Fu Lee: It's not programming in the rule sense, but the human involvement is still non-zero. It's just not as substantial, and certainly not as detail oriented, as people's intuitions might be.
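To show what "the human specifies the objective and the architecture" looks like in practice, here is a minimal PyTorch sketch with hypothetical engagement data: the engineer picks the network shape and the loss to optimize, and the training loop finds the weights.

```python
# The two human choices Lee mentions: the architecture and the objective.
# Everything else (the weights) is learned from data.
import torch
import torch.nn as nn

model = nn.Sequential(                     # architecture chosen by the engineer
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)
objective = nn.BCELoss()                   # objective: predict clicks well
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

features = torch.randn(256, 20)            # stand-in user/content features
clicked = torch.randint(0, 2, (256, 1)).float()
for _ in range(100):
    optimizer.zero_grad()
    loss = objective(model(features), clicked)
    loss.backward()                        # the network adjusts its own weights
    optimizer.step()
```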
[00:14:56] Jordan Harbinger: I think anybody who's gone on YouTube to watch one thing and then shaken themselves off three hours later has realized that they started watching a video on like World War II and now they're watching something distasteful. Let's leave it there. Like, they intuitively know what AI is, because you just go, "Why did I watch five additional or 25 additional videos?" It's because the AI was like, "This guy likes these kinds of things. Let's do that but one inch to the left." And then the AI just keeps doing that until I wake up and I realize I haven't eaten or showered and my day is ruined.
[00:15:28] Kai-Fu Lee: Right.
[00:15:28] Jordan Harbinger: With AI, won't AI also be programmed with bias and a lack of common sense if we're training it on bulk information from a wide swath of society that, let's say, represents the population at large, right? We're ingesting Twitter, we're ingesting Facebook, we're ingesting all the blogs of the world. Do we maybe not want to feed the AI everything? Because there's some ridiculously dumb, bad-thinking, unethical stuff and horrific things that we see on social media, let alone everywhere else. Do we want to keep that away from the AI, or is there a way for us to say, "Hey, by the way, this is low-quality information"?
[00:16:04] Kai-Fu Lee: It's a double-edged sword. On the one hand, we definitely want to filter out some things due to quality, due to misfit, and also due to misrepresentation. If you feed it content all from men, then it will completely miss the women's angle because of omission or because of the ratio. So I think balance is important. Quality is somewhat important, because when humans try to play God with AI and say, "Ah, you shouldn't have had this information, you shouldn't be given that," then you're removing data, and less data makes a less powerful AI system. So generally we want to balance the need to remove what we know is bad with the desire to have more data, so that it has more to train on. So you want to give AI the ability to make its own decisions.
[00:16:54] As an example, let's say we want to train a system that determines if someone has good credit or not. And suppose hypothetically, we have everything on the phone. Let's say we have a license to have the ability to feed that in. You would think a lot of things are irrelevant. Does what app they use have anything to do with the credit? Does the battery level have anything to do with the credit? Does the person's address have anything to do with the credit? It turns out most of them are actually relevant when you think about it.
[00:17:21] Jordan Harbinger: Is the battery level a real example? Because my theory would be that people whose battery levels are critically low all the time are just irresponsible and have crap credit. Is there a correlation?
[00:17:30] Kai-Fu Lee: There is a correlation, but not the way you stated. There's probably a 51 percent correlation, so it's slightly better than useless, and you could use it. And AI would be smart enough to know, "Okay, battery level has a tiny bit of correlation, so I'll consider it with everything else being equal." So actually you can throw more garbage at it; it will figure out what's irrelevant and what's highly relevant, rank them, and weight them accordingly. So I wouldn't throw out all the data just because I humanly think it's not useful.
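Here is a small synthetic-data sketch of that point, assuming scikit-learn: a regularized model learns a large weight for a strongly predictive feature (income) and a near-zero weight for a barely predictive one (battery level), so leaving weak signals in the data does little harm.

```python
# Weakly correlated features get tiny learned weights; strong ones dominate.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(size=n)                 # strongly predictive
battery = rng.normal(size=n)                # barely predictive
noise = rng.normal(size=n)                  # irrelevant
good_credit = (2.0 * income + 0.05 * battery
               + rng.normal(size=n) > 0).astype(int)

X = np.column_stack([income, battery, noise])
model = LogisticRegression().fit(X, good_credit)
for name, w in zip(["income", "battery", "noise"], model.coef_[0]):
    print(f"{name:8s} weight = {w:+.3f}")    # battery's weight is near zero
```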
[00:18:00] Jordan Harbinger: Right.
[00:18:00] Kai-Fu Lee: But you might want to throw out data that you think is really contaminating. It's a really negative sentiment or something like that.
[00:18:06] Jordan Harbinger: This is interesting because it's kind of like raising a teenager, right? So you're like, "Okay, I know I have to tell you about people that think that other races are bad or that the Holocaust didn't happen, but those people are idiots. You're going to see a lot of it, but I want you to just let it wash over you. Don't forget it, but don't pay any attention to it any more than you need to, because it's a bunch of crap," right? Like we kind of have to show AI that. So it's kind of like raising a kid.
[00:18:34] Kai-Fu Lee: Yeah, that's right. Yeah, you want some supervision, but not too much.
[00:18:37] Jordan Harbinger: Right. But also, like, we almost have to teach AI some sort of common — it's hard to say common sense, right? Because that doesn't really fit. But we have to teach almost ethics, but also how to weigh things, like maybe posts on social media that are spelled horribly or that are by people who only post hateful stuff are just weighted as near nothing. That seems like a very tricky thing to program. Do we let the AI decide what has weight, or do we program that in, at least?
[00:19:02] Kai-Fu Lee: Well, we program the outcome, right? So, you know, we know that for the big companies, their outcome is make more money, get more eyeballs.
[00:19:09] Jordan Harbinger: Yeah.
[00:19:09] Kai-Fu Lee: And that's what's caused some of the problems. So in the book, AI 2041, I talk about scenarios where companies have aligned interests with the user. Imagine if there were an app that would make you more knowledgeable or make you happier or make you wealthier, whatever good metrics you think there might be. And let's say we trained an AI on a lot of people to continue to expose you to content that would actually make you knowledgeable; then that AI would actually figure out not to show you fake news, because fake news doesn't make us more knowledgeable. Or if a lot of violent content is enticing for you to watch, but it makes you very angry and not happy, if AI detects that, it can choose not to show you that violent content. So I think knowing how to measure things that are long-term, definitely good for us, and then building apps on those long-term positive things, that's probably the ultimate way out of the current situation as described by the documentary, The Social Dilemma.
[00:20:14] Jordan Harbinger: That kind of thing — we don't want to make that worse than it already is, right? If we have this sort of — I don't want to say rudimentary, but more or less early stage AI, and it's already doing what let's say, Facebook is able to do to us as humans and as a country or as a global society, do we want to exacerbate that times a hundred or times a thousand or 10,000, which is kind of the direction where AI is headed, of course, right? We don't want to be defenseless against that kind of thing any more than we already are.
[00:20:43] Kai-Fu Lee: Right.
[00:20:43] Jordan Harbinger: Which brings up an idea from the book and one of your earlier writings: you have four waves of AI, and I'm paraphrasing here, but like there's Internet AI, right? Amazon knows what you want. Netflix tells me what I should watch next and is almost always wrong, although I still maybe watch some of those things. Or algorithms, right? Crummy articles about sports that you can tell are written by a bot. There's business AI, so business analytics and deep learning and supply chain and fraud detection and stuff like that. And then perception AI, which is computers looking at photos or listening to audio and labeling it and categorizing it and things like that. Where are we now with autonomous AI, where it's like machines that shape the world around us as opposed to merely understanding it? So, you know, self-driving cars, drone swarms that paint houses or install windows on skyscrapers. Where are we kind of in this? Is it a spectrum? Is it a timeline? Where are we with these?
[00:21:34] Kai-Fu Lee: We're actually making tremendous progress, and a lot of that is coming from China, because China is the factory of the world. And China has a strong incentive to automate the factories, because the blue-collar workers in China are making twice as much money as those in Vietnam and other lower-wage countries. So as a result, the only way China can continue to produce goods for the world is to automate. So there's a strong push to automate. And starting at the factory is the right place, because that's where you can afford to pay a million dollars for equipment that can automatically do something that maybe hundreds of people do today.
[00:22:13] And once it's perfected for the manufacturing environment, it can move into commercial. So robots for shopping malls and restaurants. And once that works, it can move to our homes, and robots can do our dishes and clean our homes and cook for us. So that is the progression, starting from manufacturing. And within manufacturing, we can break down various tasks that we might want robots to do, starting from visual inspection using computer vision. That's arguably the third wave, but still an important process in the factories. And then moving things around, which turns out to be a lot easier within the factory or warehouse.
[00:22:54] As people know, Amazon bought a company called Kiva. When you buy something, or you buy a bunch of things and there's a box coming to your home, the Amazon Kiva robot will move a shelf to a person, who will pick an item and put it into the box, then move another shelf to the person, who picks another item. That's the current workflow. So moving the shelves is the relatively easier thing. So anything having to do with forklifts or people walking around pushing things, that will be wiped out and done with robotics.
[00:23:25] Then after that, the picking. Increasingly, picking has been improving, and picking is simpler or more difficult depending on what industry you're in. For example, if you're always picking the same thing, like in a laboratory environment, a technician doing a COVID test, that's very easy, because you can just customize for that. Picking any arbitrary thing can be difficult, because an egg will break, right? So that requires a lot more work. Then there is the hand-eye coordination, things that require dexterity with very fine movements, you know, putting a screw in place, et cetera. That's the longer term.
[00:23:59] So in the factory, we're seeing right now, going from easy to hard, an increasing amount of repetitive, routine work being done by robots. So that's a lot of progress being made. And some of that technology is now making its way into non-manufacturing environments. So for example, many Chinese restaurants today have waiter bots. They're not humanoid. They are bots. When I go to some of these restaurants, I place an order on my phone, and then a tray comes out to me. Not a humanoid robot, but a tray. It rolls itself to me with the dishes I ordered. I take them off, and then it sees that I took them off. I finish eating and then I click to pay. No human contact in the entire process. So that's already functioning in a number of restaurants in China.
[00:24:51] And also in consumer, in my apartment, when I buy something on the Chinese Amazon equivalent or the Chinese delivery equivalent for fast food, a robot actually brings it from the reception up to my room. So that was originally put in place to minimize contact and spreading of COVID.
[00:25:12] Jordan Harbinger: Right.
[00:25:12] Kai-Fu Lee: But now it's standard. It's so convenient, because I can just go open the door in my pajamas. I don't have to worry about being embarrassed, because it's just a robot that sees me, not a human. So that automation is going rather quickly. And of course, autonomous vehicles. We've had some ups and downs, but my belief is that you want to launch in relatively simpler environments: forklifts, followed by airport luggage transportation, followed by trucks, followed by buses with fixed routes, followed by robotaxis.
[00:25:44] And that's kind of the rollout we see in China today. The simplest scenarios have been nailed; we're going to tougher scenarios. I think in the US, you know, Waymo and Tesla tend to go directly to the tough problems. Two different ways to solve the problem, both valid. And I think we're going to see relatively autonomous vehicles on the streets of the US and China and other countries. In the next five years, we're going to see a lot of them.
[00:26:11] Jordan Harbinger: I mean, we see them already in Silicon Valley. It's just that there's somebody behind the wheel pretending to drive so that they don't run anybody over, right? But you can't get in one and you can't call one on your phone. It's just being tested.
[00:26:21] Kai-Fu Lee: Yeah.
[00:26:24] Jordan Harbinger: You're listening to The Jordan Harbinger Show with our guest Kai-Fu Lee. We'll be right back.
[00:26:29] This episode is sponsored in part by Brand Crowd. Brand Crowd is an awesome logo maker tool that can help you make an amazing logo design online. Using high-quality hand-crafted designs, Brand Crowd takes your business name and industry and generates thousands of custom logos just for you in seconds. It's actually cool how this works. Go to brandcrowd.com/jordan. You can enter the name of your business or your own name. You can enter keywords you'd like to incorporate in the logo, like Jordan fitness, Jordan bear, or whatever. It's fun. Try it out for free. Just go give it a shot. Within seconds, Brand Crowd will spit out thousands of logos for you. You can browse all of them, change the font, change the color, and change the layout as much as you like; save the ones you like, and you can buy the design files right then and there. The whole thing is customizable. Kind of an impressive, cool little tool.
[00:27:15] Jen Harbinger: Check out brandcrowd.com/jordan, B-R-A-N-D-C-R-O-W-D.com/jordan, to learn more, play with the tool for free, and get 60 percent off Brand Crowd's premium logo pack.
[00:27:26] Jordan Harbinger: This episode is also sponsored by Public Rec. In the past, I would never be caught out in public wearing sweatpants. Even if they're comfy, I don't want to be seen with a saggy diaper butt, looking like a scrub who just walked out of the strip club. I've got a reputation to maintain. Thank goodness for Public Rec. They make leisure wear that looks good and feels great as well. My personal favorite right now is the all-day everyday pants, which I've been wearing pretty much throughout the pandemic every single day, all the time. So comfortable on long flights, you can wear them to bed. You can wear them out to nice dinners if you want to. Public Recs are always my go-to pants. By some magic, they don't wrinkle; they always look new. They also have zipper pockets, so you don't have your phone and wallet fly out when you sit down. I wear them so much I actually bought them in every single one of the nine colors to keep them always on rotation.
[00:28:10] Jen Harbinger: As the world's opening back up, make sure you've got clothes that are as flexible as your life is. Public Rec rarely discounts but right now they have an exclusive offer just for our listeners. Go to publicrec.com/harbinger to receive 10 percent off. That's Public Rec, R-E-C.com/harbinger, for 10 percent off.
[00:28:28] Jordan Harbinger: Thank you so much for listening to and supporting the show. Your support of our advertisers helps keep the lights on around here, frankly. So if you want those codes, those URLs, you don't have to write that stuff down. We put them all in one place for you: jordanharbinger.com/deals is where you can find them. Please do consider supporting those who support us. And don't forget, we have worksheets for many episodes. If you want some of the drills and exercises talked about during the show in one easy place, that link is in the show notes at jordanharbinger.com/podcast.
[00:28:58] Now back to Kai-Fu Lee.
[00:29:02] I mean, are you personally ready to get into a self-driving car and just be like, "All right"? It's going to be very hard for those of us that grew up driving, I think.
[00:29:09] Kai-Fu Lee: So as a technologist, I can recommend to your audience that it depends on the environment and constraints. You may or may not want to get into a fully autonomous vehicle just yet. So if it's a bus shuttling people at the airport, no problem, very simple scenario. If it's in a tourist spot, probably no problem. If it's a truck only on the highway, or actually cars on the highway, generally okay, safe for the people. If it's a robo bus with no humans, I would say it's okay, because it's also a fixed route, so it can get a lot of data on a small number of permutations. But if it's really a car that takes you anywhere, any time, any weather, with no safety driver in that car, I would say that's quite challenging and unsafe, at least for this year and maybe next year. I would wait and see the numbers on fatalities before I would completely delegate driving to autonomous vehicles.
[00:30:09] Jordan Harbinger: Well, it sounds like, going from what you mentioned before, where there's going to be autonomous AI automating jobs that hundreds of people used to do, especially in China, which is the world's factory, it seems like the inevitable result of that is going to be widespread unemployment. And the wealth gap between the people who control AI or own the factory and those who used to work there is going to be enormous, especially in countries where there's a lot of manufacturing or where there's a lot of hands-on jobs that are now automated. I mean, that's just going to be so many people.
[00:30:40] Kai-Fu Lee: Yes. But also white-collar jobs are not at all immune to this.
[00:30:45] Jordan Harbinger: Sure. Of course.
[00:30:46] Kai-Fu Lee: In the last two or three years, a technology called RPA, robotic process automation, has really taken off. A company called UiPath and other companies have gone public and done very well. What they do is replace white-collar routine work. The software bots sit on your computer and watch everything you do, and eventually one day it tells your boss, "Hey, I can do 70 percent of the job." And those parts of your tasks are routed to the AI, while 30 percent of the workforce remains to do the more difficult ones. And then the AI will continue to improve and chip away at it. These are routine tasks like telemarketing, customer service response, email response, managing email, marketing campaigns, expense reports, HR processing, and so on and so forth in the various admin areas of the work. So I think the AI replacement will be substantial in any routine work, white collar or blue collar, so it affects really all countries equally.
[00:31:43] Jordan Harbinger: Sure, okay. So do you think that countries are going to have time to adapt to this? Because it seems like progress is going faster than anybody assumed. So if we see widespread manufacturing job loss in, say, China, and we see widespread white-collar job loss in the Western economies, we're all kind of screwed, right?
[00:32:00] Kai-Fu Lee: If we're unprepared, yes, but there's a silver lining here.
[00:32:03] Jordan Harbinger: Okay.
[00:32:04] Kai-Fu Lee: Because AI will do the work, so it will generate a lot of wealth for the economies. So the question is, number one, how do we redistribute that wealth? Otherwise the inequality increases, and the tycoons make all the money while the jobs are gone.
[00:32:17] Jordan Harbinger: Yeah.
[00:32:18] Kai-Fu Lee: So something like universal basic income needs to be considered, but that's not enough, because people need to be re-skilled. People depend on the job not only for the money, but for self-satisfaction, actualization, contribution to the world, meaning of existence, pride.
[00:32:34] Jordan Harbinger: Yeah.
[00:32:35] Kai-Fu Lee: So people need to be retrained, and they can't arbitrarily pick a new job to be retrained for. Because, you could do what? A customer service job? That one's gone.
[00:32:44] Jordan Harbinger: Right.
[00:32:45] Kai-Fu Lee: You can be trained on graphic design, and that one's gone. You need help from people who know which jobs are likely to last longer, and then you need to get training. And the jobs that last longer, you'll have to train longer for. And you have to think hard, because any simple, routine, minimal-thinking job will be taken over by AI. So people who want to have pride in their work have to conscientiously get training and move into professions that are not so easy, or skilled jobs. Some of them require thinking. Some of them require, you know, thinking on your feet; some require creativity. But there are also a large number of jobs in the service industry, jobs that require a high degree of human connection and trust and warmth. While those jobs may be somewhat routine, those are hard for AI to do, because AI doesn't have feeling; it can only fake feeling. And when it fakes, it makes mistakes, and people don't like that. And even if it did a reasonable job faking feeling and trying to create connection, people don't want to be connected to a robot.
[00:33:51] Jordan Harbinger: Yeah.
[00:33:51] Kai-Fu Lee: People want to be connected to another person. So there will be job increases in the service sector. So some kind of coordinated plan by governments and companies, and awareness by the public, is needed to do that. And that's why in the book, AI 2041, I have several stories based on how the retraining could take place and how people can find satisfaction in jobs that may look nothing like jobs of today.
[00:34:18] Jordan Harbinger: I know we talked probably three, almost four years ago now, you had said the third world, the developing world, is going to be the hardest hit because of cheap labor, cheap exports. Now, you got all these laborers that are no longer needed. It could destabilize the whole country or the whole economy in those areas. Now, you think it could destabilize everything, everybody.
[00:34:40] Kai-Fu Lee: Yes. I still think the inequality is a major issue, because from a human race standpoint, when the top technologies and companies and countries can generate so much wealth and can produce goods at such a low cost, it would seem that we have a responsibility to the human race to eradicate poverty. But at the same time, dealing with the inequality that will be growing within countries and between countries, that problem isn't going to go away. I'm not sure what the global solution is, but I think we need to be aware of the problem, and whether we rely on some mechanism or just the goodwill of the people, philanthropy, to take care of that, something needs to be done. Otherwise the inequality will cause more social tension and unease and even conflicts.
[00:35:32] Jordan Harbinger: It just seems like AI is almost — it could become an inequality machine, right? Where it really hits the poorest countries the hardest. It essentially creates — pardon the phrasing here — but it essentially creates a useless class of people that can never generate enough economic value to support themselves. In addition to what you mentioned about having no purpose and doing a bunch of psychological damage that way. Like imagine knowing that you are completely a drain on your society, you're never going to be able to pay for anything because all you can do is skilled or semi-skilled labor, which now a robot or an AI can do a thousand times better than you ever could. And you're getting older, right? So a robot can do something instantly that you've spent your whole life mastering, that's not good for you psychologically. It's not good for you. It's not good for the country to have that happen to millions of people in a decade or a decade and a half, or even in two decades. But we're moving so much faster than that. So you're right. It seems like this problem — it seems to me like this could creep up on us and we can wake up one day and be like, "Oh, we didn't plan for this at all."
[00:36:33] Kai-Fu Lee: So part of writing the book is so that people are aware of it, so that governments and companies can think about it and start planning. But also, to your point about an increasing number of people finding it increasingly difficult to make an economic value contribution, I agree with that, but maybe we also need, at the same time, to shift economic value into social value. So someone who can no longer drive a truck because robo trucks have taken over, or someone who cannot be a customer service rep because chatbots are taking over, why can't they take on things that contribute to the society, maybe not that much economically, but that generate goodwill and warmth and connection? For example, elderly care, healthcare services, keeping the elderly and kids in foster homes company. Or for some people, homeschooling for the kids, right? Is homeschooling generating economic value? Probably not. But if a parent does a great job, can the child have a much happier and better future? Definitely. So how can we, as a society, encourage these kinds of new jobs that clearly add value, but not necessarily economic value? That might be the solution, because if we just try so hard to make every job add economic value while AI is chipping away at the jobs, that doesn't lead to a good outcome.
[00:37:59] Jordan Harbinger: Right, yeah, this is certainly a complex issue that's going to end up being politicized and probably botched, which is kind of horrifying, but there's only so much we can do, right? We've got to make people aware of it and then hope it doesn't slap us in the face.
[00:38:11] Going back to the idea of bias and bias mitigation, it seems like the quality of data going in would equal the quality of output coming out, more or less. That may or may not be true with AI; maybe AI ramps things up nicely. But I guess where I'm going with this is, let's say we train the AI on 1.4 billion Chinese people, because it's a Chinese company that happens to be developing whatever AI we're talking about right now. Could the model then become biased against, let's say, Indian people, right? Not because Chinese people don't like Indian people. That's not where I'm going with this. I mean, you have your conflicts right now, but what I mean is it's specialized for Chinese preferences, Chinese culture, Chinese ways of thought. Is it possible that there's going to be an AI cultural mismatch? Or are they ingesting so much information that all of that stuff comes out in the wash, so to speak?
[00:39:01] Kai-Fu Lee: So let's say a Chinese company wants to launch software globally. Then the company must gather data globally, from India, from the US, and so on. Otherwise, it won't work. A good example is TikTok, right? That is a product that is global. Actually, their version for China is very different from the version for the US; not only is the training data different, but usage habits are different. So I think companies that have global ambitions will need to train on global data. A similar situation: a large US company trained an AI to select people that they might want to interview, and because the training data contained mostly men, it became negative toward women applicants. So it's not just the country or racial basis, but rather there has to be good balance — if you want to provide fair AI, you need to make sure that the training data is balanced. Otherwise the bias will become inherent.
[00:39:59] So that, I think, can be done first by educating all the AI engineers that they have this responsibility: not just to make money, not just to get good results, but also to provide something that is fair. And there needs to be continued social media and other watchdog attention to misbehavior, so the companies know that this kind of training is important. I also think there can be tools that automatically scan every time you do an AI training and alert you that you have a data inadequacy problem, a data balance problem, and suggest that you should fix it. Just like, you know, compilers today report likely bugs and problems and warnings and memory leaks, these tools can also alert you to potential bias and fairness issues. So I think with some effort put into education and training and tools, most of the problems can go away, but undoubtedly, some will still remain.
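The kind of "compiler warning for training data" Lee imagines could be as simple as this hypothetical Python check, which scans a column of a dataset and flags any group whose share falls below a chosen threshold; the column name and threshold here are made up for illustration.

```python
# A toy data-balance linter: warn before training if a group is
# under-represented in the training set.
import pandas as pd

def check_balance(df, column, min_share=0.3):
    """Warn if any group's share of the data falls below min_share."""
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: '{group}' is only {share:.0%} of '{column}' "
                  f"(below {min_share:.0%}); consider rebalancing.")
    return shares

applicants = pd.DataFrame({"gender": ["m"] * 800 + ["f"] * 200})  # made-up data
check_balance(applicants, "gender")
```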
[00:40:55] Jordan Harbinger: It just seems to me — and look, I'm a layman, obviously, so I don't know squat about what I'm talking about when it comes to AI, but it seems like the AI process — well, let me phrase it as a question: is the AI process too complex to be made transparent, right? Like, if someone's debugging code, they go, "Ah, here's your problem. This is very clear. This needs to be rewritten in a way that's more flexible." But with AI, you're not looking at a bunch of code. You've got a neural network. You've got deep learning going on. It's not necessarily like somebody can go, "This is your problem right there." It's not mechanical like that. So do you think the AI process is so complex that it's going to be nearly impossible to diagnose these issues? It also seems like the more we regulate something like this, the less efficient and useful it might become, because we're essentially hamstringing it. And maybe that's a good thing in this case.
[00:41:44] Kai-Fu Lee: The answer is yes and no. Yes, in the sense that the reason AI is so good is because every decision it makes is a mathematical equation involving thousands of variables. Something we, humans, cannot comprehend. If we could comprehend it, we would do it. We don't need AI.
[00:42:00] Jordan Harbinger: Yeah, we wouldn't need AI.
[00:42:00] Kai-Fu Lee: Yeah, it's better than us precisely because it's too complex to explain fully. However, because we are relatively simple-minded beings.
[00:42:10] Jordan Harbinger: Guilty as charged.
[00:42:11] Kai-Fu Lee: We can't comprehend the fancy mathematical equations. Then AI can basically dumb down the answer for us, right? So let's say I went to a bank, applied for a loan, and got rejected. I asked why. The actual reason is a complex mathematical equation.
[00:42:27] Jordan Harbinger: You didn't charge your phone battery. That's why.
[00:42:30] Kai-Fu Lee: Right — there's no reason why the AI cannot analyze its decision and come up with the top five reasons and say, "It's because you didn't charge your phone battery," or, you know, seriously, your income isn't good enough, you haven't lived long enough at a particular address, or you're too new to the job, et cetera. Because ultimately it makes these decisions for many of the same reasons that we humans do, there's no reason it can't explain a good part of its actual decision-making in a way that humans can understand. So I think that will be good enough.
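One simple way to produce the "top reasons" Lee describes, sketched here with scikit-learn on synthetic loan data: rank each feature's contribution (weight times value) for the one application being explained. Real systems often use richer attribution tools, so treat this only as an illustration of the idea.

```python
# Explain one decision by ranking per-feature contributions of a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
names = ["income", "years_at_address", "years_at_job",
         "existing_debt", "battery_level"]
X = rng.normal(size=(2000, 5))
true_w = np.array([2.0, 1.0, 1.0, -2.0, 0.05])        # synthetic ground truth
y = (X @ true_w + rng.normal(size=2000) > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = X[0]                                       # one loan application
contrib = model.coef_[0] * applicant                   # per-feature contribution
for i in np.argsort(np.abs(contrib))[::-1][:3]:
    print(f"{names[i]}: {contrib[i]:+.2f}")            # top reasons, signed
```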
[00:43:07] And I think sometimes we as humans give ourselves too much credit. Do we really think we know why we made every decision? If you ask a driver, "Why did you make that stupid decision and run into the house?" they could give all kinds of reasons. The reasons may not even be true. They might not want to admit they had one drink too many. So, you know, at least AI will be as honest as we program it to be.
[00:43:30] Jordan Harbinger: Right.
[00:43:31] Kai-Fu Lee: And it will attempt to explain it. And I think it will explain itself no worse than, and probably better than, a human explanation. So I think this problem will be solved for sure.
[00:43:44] Jordan Harbinger: This is The Jordan Harbinger Show with our guest Kai-Fu Lee. We'll be right back.
[00:43:48] This episode is sponsored in part by BetterHelp online therapy. Many people think therapy is only an option when relationships become disconnected and marriages are on the brink of divorce. While therapy is important and vital for all of the above, it's also helpful for a lot of other reasons, like maintaining sanity. You don't have to wait until the walls are freaking falling down to get help. BetterHelp will assess your needs and match you with your own licensed professional therapist, with whom you can start communicating in under 48 hours. It's all done securely online. Check out BetterHelp's online testimonials, such as this one: "I have not ever been in therapy before, but I must say I've had a great experience so far. I feel understood and not judged. I'm challenged to step outside my comfort zone to experience things I didn't know I needed help with. And I'm grateful for the time I spent talking with my therapist." It's a real testimonial. I recommend therapy for everyone, even if you don't think you're losing it.
[00:44:37] Jen Harbinger: For 10 percent off your first month, visit betterhelp.com/jordan. That's better-H-E-L-P.com/jordan and join over 2 million people who've taken charge of their mental health with the help of an experienced professional.
[00:44:49] Jordan Harbinger: This episode is also sponsored by ZipRecruiter. There are some things in life I like to pick out myself, so I know I've got the one that's best for me. Like where to go on vacation, what kind of car to buy? Actually, Jen makes those decisions. Who am I kidding? What if you could do the same thing for hiring? Choose your ideal candidate before they even apply and that's where ZipRecruiter's Invite to Apply comes in. It gives you, as the hiring manager, the power to pick your favorites from the top candidates. And right now you can try it for free at ziprecruiter.com/jordan. So how does Invite to Apply work? Well, when you post a job on ZipRecruiter, they send you the most qualified people for your job. Then you can easily review the candidates and invite your top choices to apply for your job. ZipRecruiter's internal data shows that jobs where employers use Invite to Apply get on average two and a half times more candidates, which helps make for a faster hiring process.
[00:45:38] Jen Harbinger: See for yourself, just go to this exclusive web address, ziprecruiter.com/J-O-R-D-A-N to try ZipRecruiter for free. That's ziprecruiter.com/jordan. ZipRecruiter, the smartest way to hire.
[00:45:49] Jordan Harbinger: This episode is also sponsored in part by Progressive. Progressive helps you get a great rate on car insurance, even if it's not with them. They have this nifty comparison tool that puts rates side-by-side. You choose a rate and coverage that works for you. So let's say you're interested in lowering your rate on your car insurance, and who isn't? Visit progressive.com and get a quote with the coverage you want. You'll see Progressive's rate, and their tool will provide options from other companies, all lined up and easy to compare. So all you have to do is choose the rate and coverage that you like. Progressive gives you options so you can make the best choice for you. You could be looking forward to saving some money in the very near future. More money for a pair of noise-canceling headphones, maybe one of those Instant Pot type things, maybe a heated toilet seat. Those are always great for the wintertime. Whatever brings you or your butt cheeks joy. Get a quote today at progressive.com. It's just one small step you can do today that can make a big impact on your budget and your bum tomorrow.
[00:46:36] Jen Harbinger: Progressive Casualty Insurance Company and affiliates. Comparison rates not available in all states or situations. Prices vary based on how you buy.
[00:46:43] Jordan Harbinger: Now for the rest of my conversation with Kai-Fu Lee.
[00:46:48] Yeah, you're right. It's kind of like when you ask somebody why they bought something: "Oh, it was on sale." Well, that's not the reason you bought something. If you dig enough, you find out that they wanted it because they thought it would impress their neighbors and they're feeling insecure about it. You know, you just can't really dig down that many layers, because people aren't really aware of them.
[00:47:05] Kai-Fu Lee: Yeah.
[00:47:05] Jordan Harbinger: But the AI knows that it took the following 350 or 350,000 variables into consideration. And it might tell you the top 10 of those variables that comprise 80 percent of the decision. It can actually lay those out, because it's part of the equation. Whereas the human would, if you're lucky, give you one or two good reasons why they've done something, and most of the time it's BS, right? They don't know. We don't know.
[00:47:29] Kai-Fu Lee: Exactly, right.
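To make the "top 10 variables that cover 80 percent of the decision" idea concrete, here is a minimal sketch, assuming a generic tree-based classifier and scikit-learn's built-in feature importances. The dataset, model choice, and 80 percent threshold are illustrative assumptions, not anything specific from the conversation; real-world explainability work would more likely reach for tools such as SHAP.

```python
# Rough sketch: rank a model's input variables and report the smallest set
# that accounts for ~80% of its total importance. Everything here is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a model trained on hundreds of variables.
X, y = make_classification(n_samples=2000, n_features=350, n_informative=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Sort variables by importance, then keep the smallest prefix covering ~80% of the total.
order = np.argsort(model.feature_importances_)[::-1]
cumulative = np.cumsum(model.feature_importances_[order])
top = order[: np.searchsorted(cumulative, 0.80) + 1]

print(f"{len(top)} of {X.shape[1]} variables account for ~80% of the model's decision weight:")
for idx in top[:10]:
    print(f"  feature_{idx}: importance {model.feature_importances_[idx]:.3f}")
```

Because the ranking is literally part of how the model decides, it can be printed on demand; there is no equivalent readout for a person's after-the-fact rationalization.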
[00:47:30] Jordan Harbinger: I do worry that we won't be able to retrain workers fast enough to keep up with the developments of AI. Can we even predict which workers are going to be obsolete in a few years? Kind of, but maybe not really, right? Training just takes so long.
[00:47:44] Kai-Fu Lee: Yeah. We can sort of predict, maybe not exactly right, but roughly which ones will go first. For example, we can pretty accurately estimate that most automotive repair jobs will need to change, because cars are changing, not just with AI, but with electric vehicles, which run on simpler mechanical parts, more like a phone. And a job like plumber, that's not going to be replaced by AI anytime soon, because every building, every house is different. A plumber's job is actually a little bit like a detective's. You see the leak, but you have got to find out which part of the wall to knock open. So we can make some predictions.
[00:48:18] So how would you do the training, right? I think we can. The basic thing is that all the vocational schools really need to go through a revamp of their curriculum. Don't train that many traditional auto mechanics; train more plumbers, and train more people in robot repair. Similarly, if you go to medical school, go into medical research, which AI cannot do because human creativity is needed, but maybe have fewer students in radiology and pathology. These are areas where AI will become increasingly good. So we can do a better mapping and provide the training. There is still a very interesting additional issue, which is that AI will take over the routine jobs first, and routine jobs tend to be entry-level jobs, right? You do bookkeeping before you become a good accountant. If you're a journalist, you first write about quarterly reports before you can become a columnist. But if AI is taking over all the routine jobs at the entry level, how does someone ever become a senior accountant or a famous lawyer or a great journalist and columnist?
[00:49:22] One of the stories in the book, AI 2041, is that maybe we need to have made-up jobs to give people the impression or pretext that they are working, but actually they're gaining experience.
[00:49:35] Jordan Harbinger: Like a podcaster.
[00:49:37] Kai-Fu Lee: No, no. Podcasting is not that easily taken over.
[00:49:40] Jordan Harbinger: No, no. It's just a made-up job though.
[00:49:41] Kai-Fu Lee: Why is that a made-up job?
[00:49:43] Jordan Harbinger: I mean, it makes me feel like I'm working, but really let's be honest. How hard is this? Right? I read books and I talk to smart people.
[00:49:49] Kai-Fu Lee: No. When I say made-up job, I mean that you think you are doing something useful, meaningful, but actually you're not. So someone thinks that—
[00:49:58] Jordan Harbinger: I'm still thinking that I might've nailed it on this one, yeah, but go ahead.
[00:50:02] Kai-Fu Lee: All right. So here's an example: a new person who gets hired by The New York Times, who is not that experienced yet and needs a lot of practice, writes a bunch of quarterly reports, simple things, but those things never get published.
[00:50:16] Jordan Harbinger: Oh I see.
[00:50:17] Kai-Fu Lee: Or maybe they get modified by AI and then published, or they get published but AI could have done the job. But by doing many quarterly reports, they get to move on to annual reports. They get to do reports on industries. They then get to become columnists. We may need to have these jobs that are really practitioner jobs, where the work you do is meaningless to society but meaningful for your growth. So one of the stories in the book talks about a new approach called job reallocation. That is, when a company lays off a bunch of people, they get retrained, and they get assigned to the domains they want to go into. Let's say they want to be a journalist. Then they think they're working, but actually the output of their work is not being used anywhere; they are improving their skills until they're at a point where they can take a more senior job.
[00:51:08] Jordan Harbinger: Yeah. It's kind of freeing in a way, right? Because instead of keeping workers doing, let's be honest, mindless crap for years because you need it to get done, you can actually just give them enough work on the low end of the totem pole to get trained to do something more interesting. So instead of maybe having to pay your dues, so to speak, at the newspaper for five years or more, writing on the police blotter and writing about petty crimes and all this other stuff, you do it for a year until you can really throw something down that's deserving of the paper itself and gets published, right? So we might actually end up in more satisfying work earlier than we normally would.
[00:51:46] Kai-Fu Lee: Yes, I think so. And ultimately, I mean, the process will still be difficult and painful, but ultimately, let's say 20 or 30 years from now, when AI does all the routine jobs and we can be liberated from them, then we're free to do things that we love, things we're passionate about, things we're good at. That includes spending time with family, homeschooling our kids, and learning about poetry or sculpture. And I think our lives will be much more fulfilling and interesting if we can get over the hump that is ahead of us.
[00:52:17] Jordan Harbinger: Yeah. Look, there are a lot of exciting innovations, and you write about many of them in the book, one of which is AI transforming education. Imagine a one-on-one custom teacher for literally everyone, in any subject that you want, at any age. And it's basically free, because what's it going to cost for me to plug into GPT-20 or whatever we have in a few years and have it teach me a very specific but random skill that I want to learn? And it's just teaching me on my phone, right? And it doesn't need food. It doesn't need housing. It's infinitely patient with all my stupid questions and dad jokes, right? And every single person on earth, pretty much, can have this in their native language at any time.
[00:52:54] Kai-Fu Lee: Right. Right. And for younger children, this could be entertaining.
[00:52:58] Jordan Harbinger: Yeah.
[00:52:58] Kai-Fu Lee: For a kid that loves basketball, it can make learning feel like you're playing basketball. For a kid who likes a superhero, that kid can become the superhero and fight villains, and learn math in the process. And also, earlier we talked about AI introducing inequality, but in education, and perhaps in healthcare, AI can actually become equalizing by providing a decent quality of service to anyone, whether they're wealthy or not.
[00:53:30] Jordan Harbinger: Another use that's really exciting is drug discovery and repurposing. And I didn't really think about this, but it completely makes sense that we're already using drugs that are, quote-unquote, "safe" at certain doses or in certain use cases for humans. But we don't necessarily know everything that that drug can treat because nobody's thinking about every rare, random disease in every drug that's ever been tested safely on humans. So AI can sort of figure that out. In addition to helping find, let's say vaccines for novel viruses like we're dealing with now.
[00:54:01] Kai-Fu Lee: Absolutely. Today, one of the big problems is that pharmaceutical companies may spend two billion dollars to invent a new drug. They only go after relatively common sicknesses because those are—
[00:54:14] Jordan Harbinger: Huge markets.
[00:54:15] Kai-Fu Lee: Huge markets, right? If there's a disease where only a hundred thousand people in the world have it, they can't get their money back on a two-billion-dollar investment. If AI can analyze these pathogens and targets and come up with a small molecule or other solutions, working together with the scientists, it's a symbiotic process. AI is not replacing scientists, but AI can help a scientist invent 10 times as many drugs in a given period of time, because AI rules out certain permutations through its internal evaluation and prioritization. So the ultimate effect is the cost of discovering a drug may drop by 90 percent. Then many rare diseases will become treatable. And then many common diseases may have multiple treatments, each designed for a different type of person based on genetic sequence or race or gender or age or whatever gives the greatest efficacy. So I think we can definitely look forward to living longer and healthier, partly because of the new drug discovery, partly because of precision medicine, partly because we've got the new genetic sequencing. So, you know, we will probably live longer, maybe another 40 years, so I can still come on your podcast.
[00:55:29] Jordan Harbinger: I'm pretty sure people will be sick of me long before then. You mentioned equality before. It occurred to me that a lot of drug companies can't afford to solve problems or try to cure or treat diseases that, let's say, only occur in Sub-Saharan Africa, if it's an expensive cure, because the market, while it's big, is extremely poor. But if we can have AI say, "Hey, you know what? This is a really easy cure for this Ebola or some other type of virus that's even smaller and less scary," it can find it like that. And then it doesn't cost billions of dollars to discover and distribute. The marginal cost might even be negligible. It might even find a solution for this while looking for something else. And we end up being able to treat hundreds of thousands or millions of people in very, very poor countries where normally a drug company would go, "We're not going to invest in that. We're never going to see a return," like you mentioned with rare diseases. So there's a lot to be said for quality-of-life improvements when it comes to AI and drugs. And I didn't realize that finding a vaccine was almost like a very complex equation. You have to solve proteins somehow. Are you familiar with this process at all?
[00:56:36] Kai-Fu Lee: Yeah. There are multiple processes you have to go through. One is you can take the pathogen and fold the protein to figure out where the target is. The target is like a little pocket that the treatment can go into, and then you can hypothesize how the protein folds, where the target might be, and what to put in it that would counter the pathogen and treat the disease. Not all vaccines are invented that way, but this is one possible path. And invariably, all drug discovery is searching a nearly infinite space, all the ways of treating all the problems. AI can help eliminate unlikely paths and help select and prioritize more likely paths, so that scientists have a much higher likelihood of success in their process of conjecturing and experimenting.
[00:57:26] Then there's the other side: once a drug is conjectured and tried with early success, it needs to move into wet labs. It needs to move into actual trials. And that's a process where AI can help again, by having these little robot technicians that can do experiments 24/7, with no errors and no risk of contamination, which can further accelerate that part of the drug discovery. So I think the whole chain is something AI can fit into very nicely, and we're going to see many companies doing AI drug discovery become publicly listed. And traditional pharmaceutical companies will either be given a run for their money or will have to find a way to learn and embrace this new technology.
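A toy sketch of the "rule out unlikely paths, prioritize the likely ones" triage Lee describes: score a large pool of candidate molecules with a model and send only the top slice on to expensive wet-lab work. The scoring function, library size, and one-percent cutoff below are invented placeholders under that assumption, not a real discovery pipeline.

```python
# Toy illustration of AI-assisted triage in drug discovery: score candidates,
# discard the unlikely ones cheaply, and send only the most promising to the lab.
# predicted_binding_score is a made-up stand-in for a trained model's output.
import random

random.seed(0)

def predicted_binding_score(candidate_id: int) -> float:
    """Hypothetical model score in [0, 1] for how well a molecule fits the target pocket."""
    return random.random()

candidates = list(range(100_000))  # an in-silico library of candidate molecules
scored = [(c, predicted_binding_score(c)) for c in candidates]
scored.sort(key=lambda pair: pair[1], reverse=True)

# Keep, say, the top 1% for expensive lab validation; the other 99% are ruled out in software.
shortlist = scored[: len(scored) // 100]
print(f"Sending {len(shortlist)} of {len(candidates)} candidates to the wet lab.")
```

The economics follow from the triage: if the costly experimental work only happens on the shortlist, the per-drug cost can fall dramatically, which is the mechanism behind the 90 percent figure above.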
[00:58:12] Jordan Harbinger: You mentioned in the book, in AI 2041, that we're going to see a lot of games and other applications. I mean, there are a lot of applications in the book. It's a really good book, full of stories. And some of the stories resemble Black Mirror episodes, if you're familiar with that show. But I'm wondering, when we're talking about things like mixed reality, where we're looking at something and we can maybe see the score overlaid on the game or different sorts of layers on what we're actually interacting with in real life, it almost sounds like — and I'll ask for your prediction here. Are we going to see something like Google Glass again, where we have our goggles and we're looking around and we can see a warning that there's a car coming, or here's a restaurant that has your favorite food in stock right now, or a store that's having a sale? Like, are we going to maybe see that type of thing yet again?
[00:58:57] Kai-Fu Lee: Absolutely. I think that Google Glass was just way before its time. And also it was packaged poorly, so people thought it was a privacy issue. Those issues will be resolved in parallel. In order for such a pair of glasses to work right, so that you can see superimposed content on it, which could be fun, or could be for training, or could visualize new spaces, it requires a couple of things. One is it can't be too cumbersome. It can't be a huge headset. It can't be very heavy. It can't be tethered. So that's one set of technical problems that need to be solved.
[00:59:33] Secondly, the quality and fidelity have to be high. If it's going to put things in the world that I see right now, they'd better have the right lighting and the right shadows. And that's very hard to compute. So there are still technical problems there. And then there are interface issues. Suppose I see something. Can I use my fingers, not gloves or a trackpad or anything, but actually use my fingers to grasp that item and put it in a shopping basket or something? That's the good interface. So all of these are technology problems that need to be overcome before we get normal-looking, untethered glasses that can deliver a lifelike, vivid experience. And that's probably around five years away. So that's one set of ways to do augmented or mixed reality.
[01:00:19] In the other direction, we saw Mark Zuckerberg recently show himself having a conference with someone in an animated environment. That's more virtual reality. I think that is also going to develop toward more realism, less tethering, more convenience, and more targeted scenarios. I think all of this will probably first find roots in entertainment and games, because that's the situation where we can let our imagination run wild, where things don't have to be perfectly photorealistic, and 3D has a lot of value. But we do still have to solve the problems of a very simple wearable device and very high-quality rendering and display. And also, we can't get dizzy.
[01:01:04] Jordan Harbinger: Right, the motion sickness thing.
[01:01:05] Kai-Fu Lee: Yeah.
[01:01:05] Jordan Harbinger: Yeah. I mean, I won't say this is fine-tuning, because these are big problems, but our brains are very adaptable. And so the motion sickness thing may be kind of a problem that ends up starting to solve itself, in concert with better optics and things like that. But yeah, it really does seem like we are moving so much faster in that direction than we predicted. And since you and I talked three or four years ago, do you think we're moving faster than even you had originally predicted? Because your timelines were pretty tight back then as well.
[01:01:32] Kai-Fu Lee: Yeah, I think we've made probably a little bit more progress than I thought we would. You can always predict how existing technologies will extrapolate, but you can't predict new technologies. So this new huge-language-model, foundation-model pre-training was not something I knew about four years ago, but it has really taken off. And we'll continue to see breakthroughs like that. So in the book, AI 2041, I feel comfortable with all of my predictions, but I'm sure I missed a couple of big ones. So the future might be more powerful and surprising than the book portrays.
[01:02:09] Jordan Harbinger: In closing here, and I don't want to get too sort of cheesy or philosophical, but I'm going to go for it anyway: what have you learned about being human through your studies of AI?
[01:02:18] Kai-Fu Lee: Well, I learned that there are many things that AI cannot do, and maybe those are the real essence of being human. I started going into AI thinking that AI would create a replica of me or figure out how our brain works. That was the naive assumption of an engineering student 40 years ago. But in building AI that has worked quite well on many, many tasks, exceeding human performance and beating us, I realized that whatever AI ends up not being able to do for the long term, that is the essence of our being human. And those two things, if we summarize them, are really our creativity and capacity to learn, and our compassion and our ability to connect with and love each other.
[01:03:10] Jordan Harbinger: Kai-Fu Lee, thank you very much. Always a fascinating conversation. I'm glad you were able to join us today from — are you in Beijing? Actually, I should have asked probably at the top of the show.
[01:03:18] Kai-Fu Lee: Yeah. I'm in Beijing.
[01:03:19] Jordan Harbinger: Yeah.
[01:03:20] Kai-Fu Lee: Thanks for inviting me.
[01:03:21] Jordan Harbinger: Yeah, you got it, anytime. And look, we'll do it again for the next 40 years. I hope to talk to you again in a few years and see where these predictions have landed because like I said before, this stuff is moving so much faster than it sounded like from your earlier work.
[01:03:35] Kai-Fu Lee: Yeah.
[01:03:35] Jordan Harbinger: It's exciting and it's terrifying. And the lesson here, kids, is less lawyers, more plumbers.
[01:03:41] Kai-Fu Lee: Something like that. Okay, very good, thanks a lot, Jordan.
[01:03:44] Jordan Harbinger: Thank you.
[01:03:47] I've got some thoughts on this episode, but before I get into that, here's what you should check out next on The Jordan Harbinger Show.
[01:03:54] LeVar Burton: Roots really made me aware of the power of the medium of television. There was an America before Roots and there was an America after Roots and they weren't the same country.
[01:04:06] Jordan Harbinger: I'm wondering if the theme song was stuck in your head for the entire 21-year run of the show, or, or if you've had some breaks?
[01:04:13] LeVar Burton: It's still stuck in my head, Jordan.
[01:04:15] Jordan Harbinger: Yeah.
[01:04:15] LeVar Burton: It's still there.
[01:04:18] Jordan Harbinger: Reading Rainbow, for example, every kid watched that, whether they liked it or not, it just came on after cartoons if memory serves or Sesame Street.
[01:04:26] LeVar Burton: Or they rolled in the AV cart, you know, on Fridays and you watched in school.
[01:04:30] Jordan Harbinger: Oh, yeah, that's true. I think we did watch it in school early on, on like a reel-to-reel projector. If you want to feel extra old: I was a kid watching you, but I was watching on the reel-to-reel and you were on the reel. Close the windows, it's time to watch Reading Rainbow. Teacher has a hangover, which is a hundred percent what that was, in 20/20 hindsight.
[01:04:57] Back to Roots, why didn't you implode? You were 19. I mean, how come we're not seeing the headlines like LeVar Burton pleads not guilty, says we have to take his word for it. I mean, how come we don't see—?
[01:05:09] LeVar Burton: How long did you work on that?
[01:05:10] Jordan Harbinger: That came to me in the shower this morning.
[01:05:15] LeVar Burton: I'm just a storyteller and that's what I've discovered about myself. I'm a storyteller. I was born to story tell and I want to do it in as many ways as I can, acting, writing, producing, directing, podcasting. I'm fulfilling my purpose. I genuinely believe that, Jordan. I believe that. That we are all here for a reason. I believe that it's really important for us to discover and discern what that reason is. Right? And then pursue it with everything we've got.
[01:05:42] Jordan Harbinger: For more with the legendary LeVar Burton of Reading Rainbow and Star Trek fame, check out episode 213 of The Jordan Harbinger Show.
[01:05:52] Always interesting to talk about AI, especially with somebody who is as much an expert on this as Kai-Fu Lee. Now, we tend to overestimate technology in the short term and underestimate it in the long term. AI is no exception to this. The book AI 2041 is interesting because it's written as a series of explanations of AI and stories that illustrate the possibilities of the technology. So it's kind of like Black Mirror episodes, if you've seen that show, only, you know, a little more hopeful, a little less dark. If you're anything like me, you have all these sorts of kindergarten questions about AI as well. Will they replace us? What will we do? Will my computer be bossing me around? My phone already does.
[01:06:31] AI really has developed in the past five years, beating humans at cancer diagnosis, legal sentencing, and games of all sorts, from Dota 2 to Go. And computer vision is now better than human vision at identifying objects and people. So for those kindergarten questions, I hope this episode clears some of them up. But I'm also torn. Is this even more terrifying than it was before? And is this what's going to make all of us feel old, right? Where the natives who grew up with AI are the ones that adapt. This is possibly the technology that's going to make everyone my age just feel like we don't get it. And I'm totally ready for this. Or at least I'm totally ready to feel like that. I'm not necessarily ready for the technology.
[01:07:13] Data and storage are thousands of times cheaper than before. Food service, cooking, and deliveries are all going to be automated. A lot of this stuff already is, but imagine cooking with no people involved, from the ingredients to your belly, other than you shoveling it into your mouth, right? All automated. Kevin Kelly, who was on the show in episode 537, said that AI was as important and as much of a game changer as the invention and discovery of electronics. Now think about that for a second; this is going to revolutionize everything. We're still a ways away from all this, of course. Robotics isn't advancing as much as AI, which is kind of weird to think about. AI is great at thinking, but not necessarily great at moving around and fine motor tasks. We can download an algorithm anywhere; robots need to be manufactured, shipped, and maintained on-site. So AI will make workers more productive, but not necessarily obsolete right away, kind of like tractors were for farmhands. We still have people working on farms, and in fact, we probably will for the foreseeable future, to pick strawberries and other berries; it's just really, really hard to make a robot that can do that as well as a human.
[01:08:18] Unfortunately, when it comes to replacing jobs, the poorest countries will be hit the hardest. AI is an inequality machine. It may actually create, over time, a "useless class" of people. And I put that in air quotes because it's a little cruel, but it's kind of true. These are people that can never generate enough economic value to support themselves. Imagine the psychological and societal damage that comes from that. Robots can do things instantly that humans have spent our whole lives mastering. That is not going to be good for society at large, and we need to start thinking about what we do now. Yes, we can retrain people, but some people aren't going to be able to be retrained in time. They're going to be obsolete before they can even be retrained. And that's provided they have the raw material and the intellect to be retrained in the first place. So that's a real argument for universal basic income, or, well, some sort of solution needs to happen.
[01:09:12] People generally, aside from Elon Musk, think that we are still really far from robot overlords or even generalized AI. But man, AlphaGo was China's Sputnik moment when it came to AI. This is very different from how IBM's Deep Blue beat Garry Kasparov, also a guest on the show, at chess in the '90s; to be able to beat someone at Go is really, really a feat. And China is investing massively in artificial intelligence. That was the topic of my earlier interview with Kai-Fu Lee. It's a little scary. That should light a fire under our butts here in the Western world, for sure. You know, we get preoccupied with whether AI will even happen and what'll happen to our jobs when it does, but we aren't really thinking about China or other superpowers racing to get there before us, which is really the issue that we should be taking note of here.
[01:10:01] China is set to take the lion's share of the new value AI adds to global GDP, and that's seven trillion dollars or something like that. So we really need to focus on this. There needs to be political will. We need to get our smartest people in on this and working on this. I know we already are, but we really need to triple down on this ASAP if we are going to be competitive in the coming decades.
[01:10:24] Big thank you to Kai-Fu Lee. The book title is AI 2041. Links to all of his stuff, as usual, will be on the website in the show notes at jordanharbinger.com. Please use our website links if you buy the books from our guests; it always helps support the show. Worksheets for the episodes are in the show notes. Transcripts are in the show notes. There's a video of this interview going up on our YouTube channel at jordanharbinger.com/youtube. We've also got our clips channel, with cuts that don't make it to the show or highlights from the interviews that you can't see anywhere else. Those are at jordanharbinger.com/clips. I'm at @JordanHarbinger on both Twitter and Instagram, or you can hit me on LinkedIn.
[01:11:00] I'm teaching you how to connect with great people and manage relationships using systems and software and tiny habits over at our Six-Minute Networking course, that course is free. jordanharbinger.com/course is where you'll find it. Dig that well before you get thirsty. And remember that most of the guests you hear on the show, they subscribe to the course and contribute to the course. So come join us, you'll be in smart company.
[01:11:22] This show is created in association with PodcastOne. My team is Jen Harbinger, Jase Sanderson, Robert Fogarty, Millie Ocampo, Ian Baird, Josh Ballard, and Gabriel Mizrahi. Remember, we rise by lifting others. The fee for the show is that you share it with friends when you find something useful or interesting. If you know somebody who's into AI or futurism, I really would love it if you'd share this episode with them. I hope you find something great in every episode of the show. Please share the show with those you care about. In the meantime, do your best to apply what you hear on the show, so you can live what you listen, and we'll see you next time.