Nina Schick (@NinaDSchick) specializes in how technology is transforming geopolitics and society in the 21st century. She is the author of Deepfakes: The Coming Infocalypse.
What We Discuss with Nina Schick:
- What deepfakes are and how they’re created using technology that is getting exponentially more sophisticated every year and more accessible to anyone.
- How deepfakes go beyond novelty and into dangerous territory occupied by blackmailers, propagandists, terrorists, conspiracy theorists, and other unsavory ne’er-do-wells who profit by making the rest of us doubt what we can trust as reality.
- Why deepfakes are so much more concerning than reality-bending manipulations of the past.
- The potential positive uses of AI-assisted deepfake technology that will change our world as dramatically as the advent of the Internet.
- How we can discern between real photo, video, or audio footage and imposter deepfakes to avoid being duped by 21st-century technoshysters.
- And much more…
Like this show? Please leave us a review here — even one sentence helps! Consider including your Twitter handle so we can thank you personally!
What are deepfakes, and why should they be a cause for concern beyond their novelty? Recently making the viral rounds is a series of TikTok videos seemingly presented by diminutive Hollywood blockbuster factory and infamous Scientology spokesman Tom Cruise. On the surface, it appears we’re catching a candid glimpse of Mr. Cruise’s personal life with a demonstration of his flawless golf swing, a presentation of his close-up magic prowess, and even a witty anecdote about time spent with glasnost granddaddy Mikhail Gorbachev. But in reality, these videos are the creation of Belgian VFX specialist Chris Ume, assisted by Tom Cruise impersonator Miles Fisher, the open source AI DeepFaceLab algorithm, and video editing tools. Sure, it took a lot of work by a professional to wind up with such convincing end results, but the technology is improving at such a rapid pace that it won’t be long before amateurs will be able to create equally credible videos to unleash upon the unsuspecting.
In a world where misinformation, disinformation, and straight-up lies can already be transmitted effortlessly across the world via social media memes by enemies of the state, impressively realistic deepfakes are just a more sophisticated instrument of propaganda propagation in the arsenal. In the wrong hands, this power could be abused to create false evidence in criminal trials, revenge pornography to ruin lives and careers of the innocent, and doctored front-line journalism by tabloid peddlers legally masquerading as legitimate news outlets.
On this episode, we talk to Deepfakes: The Coming Infocalypse author Nina Schick about how this technology is being used today, what we can expect from it in the future, and how we can discern between real video, photo, or audio footage and its deepfake imposters to avoid being duped and manipulated by the agendas of 21st-century technoshysters. Listen, learn, and enjoy!
Please Scroll Down for Featured Resources and Transcript!
Please note that some of the links on this page (books, movies, music, etc.) lead to affiliate programs for which The Jordan Harbinger Show receives compensation. It’s just one of the ways we keep the lights on around here. Thank you for your support!
Sign up for Six-Minute Networking — our free networking and relationship development mini course — at jordanharbinger.com/course!
This Episode Is Sponsored By:
Great protection. Fair prices. Easy to use. SimpliSafe is the right way to protect your home at half the size and double the range — go to SimpliSafe.com/jordan to learn more!
Purple is the best mattress tech advancement in 80 years; its mattresses and pillows come with free delivery, free returns, and a 100-night trial. Go to purple.com/jordan10 and use promo code Jordan10 for 10% off any order of $200 or more!
MVMT believes style shouldn’t break the bank. Shop premium watches for men and women, bluelight glasses, and more. Free shipping and free returns. Join the MVMT and go to mvmt.com/jordan to get 15 percent off today!
HostGator has been around almost as long as the Internet. Does your business have an Internet presence? Save up to a whopping 62% on new webhosting packages with HostGator at hostgator.com/jordan!
Miss our conversation with comedian, actor, and director Bob Saget? Catch up with episode 372: Bob Saget | How Comedy Continually Changes His Life here!
Delicious Ways to Feel Better is a podcast hosted by Ella Mills, the founder of Deliciously Ella, and it explores the world of health and wellness through a series of interviews with world-leading researchers, scientists, and doctors. Listen here or wherever you enjoy hearing great podcasts!
THANKS, NINA SCHICK!
If you enjoyed this session with Nina Schick, let her know by clicking on the link below and sending her a quick shout-out on Twitter:
Click here to thank Nina Schick on Twitter!
Click here to let Jordan know about your number one takeaway from this episode!
And if you want us to answer your questions on one of our upcoming weekly Feedback Friday episodes, drop us a line at friday@jordanharbinger.com.
Resources from This Episode:
- Deepfakes: The Coming Infocalypse by Nina Schick | Amazon
- Nina Schick | Website
- Nina Schick | Twitter
- Nina Schick | Instagram
- Nina Schick | YouTube
- Deeptomcruise | TikTok
- TikTok Tom Cruise Deepfake Creator: Public Shouldn’t Worry About ‘One-Click Fakes’ | The Verge
- What Are Deepfakes and How Are They Created? | IEEE Spectrum
- This Person Does Not Exist
- Dancing Baby | Know Your Meme
- How Photos Became a Weapon in Stalin’s Great Purge | History
- 1984 by George Orwell | Amazon
- Deep Nostalgia | MyHeritage
- This AI Makes Audio Deepfakes | Two Minute Papers
- Her (2013) | Prime Video
- In China, the ‘Great Firewall’ Is Changing a Generation | Politico
- The Liar’s Dividend, and Other Challenges of Deep-Fake News | The Guardian
- China Makes Deepfakes and Fake News Illegal | PCMag
- Three Types of Deepfake Detection | Lionbridge AI
- Deepfake Detection Tool Unveiled by Microsoft | BBC News
- Deepfake Detectors Can Be Defeated, Computer Scientists Show for the First Time | ScienceDaily
- The Irishman De-Aging Fixed By a Deepfake YouTube Video | Esquire
- Processing Fluency | Wikipedia
- I Was Vomiting: Journalist Rana Ayyub Reveals Horrifying Account of Deepfake Porn Plot | India Today
- Deepfakes: Informed Digital Citizens Are the Best Defense Against Online Manipulation | The Conversation
- Women, Not Politicians, Are Targeted Most Often by Deepfake Videos | Centre for International Governance Innovation
- The “Drunk Pelosi” Video Shows That Cheapfakes Can Be as Damaging as Deepfakes | Slate
- Cheapfakes Did More Political Damage in 2020 Than Deepfakes | MIT Technology Review
- Vocal Synthesis | YouTube
- Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case | WSJ
- Producers Speculate That Donald Trump’s Post-Coronavirus Video Is A Deepfake | The Inquisitr
- Adobe Co-Founds the Coalition for Content Provenance and Authenticity (C2PA) Standards Organization | Adobe Blog
- This Camera App Is Designed to Fight Fake News | Wired
- Before Zuckerberg, Gutenberg | The Atlantic
Nina Schick | Deepfakes and the Coming Infocalypse (Episode 486)
Jordan Harbinger: Coming up on The Jordan Harbinger Show.
[00:00:02] Nina Schick: Fake pornography has a long history. So this is not just a Photoshopped image of a celebrity's face stuck onto a porn star's body. This is a real-life video where the celebrity is moving her face; she's got different expressions. AI is getting better and better, so less training data is needed. I can make a nude image of your sister, your wife, your mom, and young girls from a single photo, for example. And there were over a hundred thousand of these images just being publicly passed around in these Telegram groups. The key point is the AI can basically be used to hijack anyone's biometrics. So it's pretty clear that it's going to leach out into other forms of malicious use — with fraud, with political disinformation, as a tool of intimidation. We're already starting to see that.
[00:00:53] Jordan Harbinger: Welcome to the show. I'm Jordan Harbinger. On The Jordan Harbinger Show, we decode the stories, secrets, and skills of the world's most fascinating people. We have in-depth conversations with people at the top of their game, astronauts and entrepreneurs, spies and psychologists, even the occasional billionaire investor, rocket scientists, or extreme athletes. And each episode turns our guests' wisdom into practical advice that you can use to build a deeper understanding of how the world works and become a better critical thinker.
[00:01:21] If you're new to the show or you're looking for a handy way to tell your friends about it, we now have episode starter packs. These are collections of your favorite episodes, organized by popular topics, to help new listeners get a taste of everything we do here on this show. Just visit jordanharbinger.com/start to get started or to help somebody else get started. And of course, we always appreciate that.
[00:01:43] Today's episode is about deepfakes, which is an incredible phenomenon that, if you haven't heard about it yet, is about to rock your world. And if you're listening to this in five, 10 years, you're going to be like, "What? This was news? How did you not know about this?" I'm serious. This is going to take the world by storm. This is not hyperbole. Deepfakes are essentially software/computer-created videos that are, of course, synthetically generated, but they look like they were filmed. So it's synthetic media. You may have just recently seen some video of Tom Cruise doing a magic trick, and then you find out it's not Tom Cruise at all, but some guy on TikTok with some software. That's a deepfake. It seems harmless, right? Kind of a novelty thing.
[00:02:22] Well, it all depends on the application. What if deepfakes are used to create disinformation? What if they're used to create or discredit evidence in a trial? What if they're used to create revenge porn about somebody that you know? What if it's about a minor? Well, it turns out this is already happening. Today on the show, my friend Nina Schick and I discuss where this deepfakes phenomenon is going, how it's being weaponized against us, and what we can do about it. You like that dramatic pause right there? Very intense, right? I think this episode was fascinating. It's a great intersection of technology, social engineering, information warfare, and the Internet. And Nina really is the expert in this area. So I think you're going to love this conversation as much as I did.
[00:03:04] And if you are wondering how I book great, amazing folks like Nina all the time, these authors, thinkers, and creators, it's because of my network. I'm teaching you how to build a network, just like mine, whether it's for business or personal reasons, just check out jordanharbinger.com/course. It's free. I don't need your credit card. None of that stuff. Just go to jordanharbinger.com/course. And by the way, many of the guests you hear on the show, they subscribe to the course. So come join us and you'll be in smart company where you belong. Now, here's Nina Schick.
[00:03:34] Deepfakes are interesting for me because this is something that I've seen used in various instances, and we'll get into that here later in the show. Obviously, there's some sort of — I hate to say obvious use, but the first thing people think of, and a lot of people's first exposure to deepfakes, is something to do with celebrities. And I'll let you explain that, so I don't come across as the creep that I'm going to be if I start off with it. But it can also threaten our democracy, which usually you don't equate — like, revenge porn with democratic threats or threats to the existence of the American or Western way of life. And I want to get into that here in the show, because what sort of technology could possibly be that powerful? Can you define what a deepfake is first, for people who don't know?
[00:04:21] Nina Schick: Sure. Well, a deepfake is essentially a type of synthetic or fake media. So that's to say a video, a piece of audio, or an image that's either been manipulated by artificial intelligence or wholly generated by AI. And this amazing ability of AI basically to create fake media has only been possible really for about the last five years. And it's getting better and better and better. One of the things that's very interesting about it is that it's very good at recreating humans, and all that you need in order for AI to basically learn how to recreate a human is the right training data.
[00:05:02] So let's say — this has already been done, by the way — you want to get a machine learning system to generate images of fake people. You need to train it on a data set of lots of human faces, and then voilà, you basically have an AI system that can now, at the click of a button, generate a fake human face that looks so authentic, so real, that to the naked human eye, you can't tell it's not real. And if you want to check it out, you can check out This Person Does Not Exist, just to see how amazing it is at doing that.
[00:05:31] Jordan Harbinger: That's thispersondoesnotexist.com, right? This is a website.
[00:05:34] Nina Schick: Exactly.
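For the technically curious, here is a minimal sketch of the approach Nina describes: a generative adversarial network (GAN), in which a generator learns to turn random noise into images while a discriminator learns to spot the fakes, each improving against the other. The network sizes, image resolution, and training details below are simplified assumptions for illustration, not the production-scale model behind This Person Does Not Exist.

```python
# Toy GAN sketch: a generator learns to turn random noise into images
# while a discriminator learns to tell real images from generated ones.
# Production face generators use far larger networks and huge face
# datasets; the dimensions here are deliberately small for illustration.
import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector
IMG_PIXELS = 64 * 64      # flattened grayscale image, a toy stand-in for faces

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),        # pixel values in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),              # probability "this is real"
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial round: each network trains against the other."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated images.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()       # freeze G for this step
    d_loss = (loss_fn(discriminator(real_images), real_labels) +
              loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the (just-improved) discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# With a real dataset of face crops, you'd call train_step on each batch;
# this smoke test with random "images" just shows the shapes line up.
train_step(torch.randn(8, IMG_PIXELS))
```

The pairing is the point: the discriminator's feedback is the only training signal the generator receives, which is why generated faces keep improving as detection improves.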
[00:05:35] Jordan Harbinger: And I'm thinking, okay, where do we get a huge database of human faces that are all digitized? And the answer is what? Facebook, Instagram—
[00:05:42] Nina Schick: Facebook, Google search, very, very, very easy. The thing that is really alarming: okay, so in this instance, you're talking about people who don't exist. AI can make these faces that look like real images. But another thing it's very good at is what I call hijacking your biometrics. So if you have a digital footprint — and who doesn't online, right? An image on LinkedIn, on Facebook, on Instagram, wherever, or a piece of audio of your voice — somebody can take that piece of data and get an AI to basically learn how you look and sound and make a deepfake of you saying and doing things that you never said and did.
[00:06:23] So it's not only very good at generating fake people. It's very good at generating you in fake scenarios. And the key thing is, again, we're just at the very, very beginning of this technology. So the amount of training data that's needed is going to be less and less and less to the extent where it's only a couple of seconds of your voice or a single image taken from Instagram, Facebook, wherever that's enough to make you the target of a deepfake.
[00:06:49] Jordan Harbinger: So to be really clear here, this is not someone editing media in a deceptive way. It's not me saying something like, "Well, I'm not racist," and then taking out the word not, leading to me admitting I'm a racist or something like that, right? It's not taking things out of context. It's creating new fake media out of whole cloth, often not using anything that I created that's even in the same context or on the same subject. This is maybe something like — and there are legit uses of this that are exciting, right?
[00:07:17] Like, Disney can remake a movie and put Audrey Hepburn and Frank Sinatra acting right next to each other. And then they can remake an entire movie based on that, and it will look just like those two people acting. Which actually brings up a whole bunch of interesting questions from a legal perspective, like: what do those people's estates get paid as an acting fee for their dead relative performing? That's probably a different show on different topics. But it's scary because, you know, we've seen — and we'll get to some of these examples later — we've seen politicians be creatively edited, but this is like three levels up from just creatively slowing down the video or something like that. And it can be used in pretty horrible ways.
[00:07:57] People don't know. I really noticed this when I went to thispersondoesnotexist.com. I kind of expected these cartoonish-looking faces, like things were off a little, and I was like, "Oh, you know, see how this eye looks." You can't tell that it's fake. That's the scariest part. Ten years ago, 20 years ago — remember the dancing baby video, where it's like a little baby dancing around in this cartoon? It doesn't look like a real baby dancing. Now, you could make someone's actual baby do that. And it would look almost as real as if it were a video of their baby dancing. And that's scary.
[00:08:29] Nina Schick: Absolutely, it is scary. And you're absolutely right to point out, however — and I think this is probably something we should tackle right at the top — it's not only going to be used maliciously, right? Just like every amazing technology of the exponential age, it's going to be a massive amplifier of human intention. So I think what we're looking at here, when you think about synthetic media — that's media that's created by AI — it's such a paradigm changer that I think it's going to be just as important in the history of human communication and in the history of the development of human society as the Internet.
[00:09:06] Because increasingly, the way we produce media and consume media is going to be led by AI. It's increasingly going to become synthetic. Some experts, who I interviewed when I was writing my book, think that in as little as a decade's time, 90 percent of video content online could be synthetic. The problem is that we're no longer going to know what's authentic and what's synthetic. So you really lose your touch on what is real, what is reality? Because thus far, you kind of still accept that what you see and hear is real, right? It's an extension of your own perception. We know there's been a long history of tampering with the visual record that goes way back, even into the 19th century. For example, Stalin, the Soviet dictator — there was an entire cottage industry that developed under his brutal regime that was dedicated to tampering with historical photographs. So every time his enemies were executed, you know, they'd be kind of scratched out, removed, and edited out of the photographs.
[00:10:09] But this is far more sophisticated than that. And it's also going to become accessible to everyone. So when you're talking about Hollywood-level effects today, you still need a multi-million dollar budget; you need teams of special effects artists to achieve the best kind of Hollywood effects. But by the end of the decade, a YouTuber with a budget of zero will be able to make the same type of thing, not only in image, but even in video form. And video is just becoming such an important mode of human communication. So you're essentially looking at a future where synthetic media is going to become ubiquitous. On the good side, it's going to completely transform entire industries. Entertainment is going to become amazing. It's going to rewrite fashion and corporate communications. However, on the negative side, it's going to become the most powerful tool of visual disinformation known to humanity. And not only that, it's going to become accessible to everyone. So we're not just talking about a state actor or a terrorist group. Any teenager can create a deepfake because the AI will do all the heavy lifting.
[00:11:19] Jordan Harbinger: There's a lot to unpack there. And most of it is, unfortunately, kind of scary. I mean, it will be cool to be able to talk to a dead relative and get close — I don't know how this would work from a therapeutic standpoint — but to get closure. Or imagine a lonely child whose parent is deployed in a military scenario or on a business trip, and it's one o'clock in the morning their time. They can't call that person, but they can be comforted at night by that person. Or even babies can get comforted by an AI mom who's finally getting some sleep — and I know you and I kind of have that. That's like the first application I want for this thing, right? The AI parent who just tells him to go back to sleep, for God's sake, for the 18th time at night. But it becomes more scary.
[00:12:04] The Stalin thing is creepy, right? I mean, so you're saying there was a group of people sitting in a basement somewhere and they said, "Hey, that guy who's in all these photos with the leader, he's disappeared. We want to make sure that it looks like he never existed," which is very 1984, where they kind of reprint the old newspapers to say, "We've always been at war with Eastasia." That seems like a miserable job, especially when they ask you to do it to yourself and take yourself out of the photos.
[00:12:32] But the fact is that any teenager can do this, and there's no longer kind of any oversight or any decision-making body — even if it is a terrible dictator — who's making the moves that we can monitor. It's just like some kid in the class made a video of another kid in the class, and now that's the bullying that happened. You can't put that toothpaste back in the tube when there's a deepfake app that everybody gets for free from the App Store and it runs on their iPhone 25.
[00:12:59] Nina Schick: Yeah. There are already actually dozens of deepfake apps, and this is what I mean about this technology becoming accessible to almost everyone. Like, what's changed in the past 30 years when you think about how we communicate in this global information ecosystem? Well, what's changed is the Internet, social media, and smartphones, right? More than half of the world is connected into this ecosystem with a device that they hold in their hand. And the other half of the world who hasn't joined yet will be joining within the next 10 years. And on this device, you're going to have the most sophisticated tools of visual tampering that we have known. It's going to be very, very easy to use these apps to create deepfakes. You already see dozens of deepfake apps exploding on the App Store.
[00:13:48] Again, look at the timeline here. This technology only started becoming available about three years ago. Can you imagine where we're going to be in 10 years' time? And I think you're really right to point out — you know, there obviously are going to be bad use cases and good use cases, but there are profound philosophical questions that we need to answer as a society as well.
[00:14:10] To what extent, for example, is it permissible — should it be permissible — for us to interact with relatives back from the dead, for example, as a form of therapy? Is that okay if we resurrect them using AI to emulate their voice, to speak to us? Or does that mean that we're retreating further and further from what is reality? And I think at the heart of it, this is really a philosophical question about: are there forms of communication, as we go forward, that should be entirely authentic and organic? Or can everything become synthesized in future? Is that a good thing? Is that a bad thing? Is that even desirable?
[00:14:46] Jordan Harbinger: Yeah, I think about this probably more than normal people do, because there are so many thousands of hours of my voice out there and hundreds and hundreds of hours of my face and my voice talking at the same time. So any expression that I have — the unshaven version of me that you're looking at right now versus the clean-cut version that I might have an hour from now, after this interview, right? Like, all of these variants are available. And it's cool to think that my great-grandchildren can be like, "Great-grandpa Jordan, what would you have done in this situation?" And my super smart AI can look over like 10,000 hours of my show and give good advice based on that. Not that it would be relevant in a hundred years, but you never know. But the downside is much more obvious and probably going to be much more common. And it reminds me of sort of every dystopian sci-fi movie ever, where Joaquin Phoenix orders an AI girlfriend from the Internet. Have you seen this one?
[00:15:39] Nina Schick: Yeah. Yeah.
[00:15:40] Jordan Harbinger: Her, I think it's called, and she ends up being super smart and talking with like a million guys at the same time, and then just vanishes and says, like, "To hell with humanity." That's the good outcome that we can get. The other one is everyone is trapped in a box, talking with their AI friends and family, because the real version is completely incapable of human interaction at that point.
[00:15:59] Nina Schick: Yeah. I mean, ultimately you can frame this as, you know, is this going to become the end of reality? I mean, what is reality? It's just an extension of your perception. But if what you perceive is no longer actually real, you know, what is happening in your life? And I think there is this danger that you retreat further and further into these virtual worlds. But I think the most potent danger directly with the threat of deepfakes is very specific to liberal democracies. And that is that in order for a liberal democratic society to work, you need to have some kind of common basis of what is objective fact, what is reality. Because if you don't even agree on that basis of facts, you're not going to be able to agree to do anything in society.
[00:16:44] If you corrode the very notion that there is an objective reality, then you basically corrode the very fabric of what's holding democratic society together. And you see this with the increasingly polarizing trends in Western democratic countries. Today, you see it very vividly in the United States, for instance. So you can imagine how much worse it's going to become when there is a deepfake to support every theory you might have. There's going to be no room for people to come together and compromise, which, of course, is going to be devastating for a democratic style of government. It can be a very powerful tool for an authoritarian regime because you basically create a reality that isn't real and say, this is the truth.
[00:17:30] It's interesting to see that all governments from the United States to China and Russia are investing heavily in artificial intelligence, and synthetic media is a big part of that. Their militaries are doing a lot of research into it.
[00:17:42] Jordan Harbinger: Yeah, that's terrifying because I can see right now — I mean, I work a lot with China, and I talk to Chinese people pretty much every day because I take Mandarin lessons in the morning. And I'm like the dangerous client they have, because I go, "Don't you think it's weird that you can't say Winnie the Pooh?" And they're like, "Uh," but then 10 lessons in, when they trust me, they're like, "Yeah, it's so weird that we can't say things in WeChat because it'll get deleted. And it's so weird that a colleague of mine has a red notice next to his picture that says this person doesn't pay his debts, so his social credit score has decreased." And it's only a matter of time — like, if China gets this, and I don't know what the United States will do, and I'm not saying it'll be a better thing than what China is doing. But I can tell you what China will do: they will create a reality that they shove down everyone's throat. And if somebody creates a different one, or shows the real version of a video, that person will vanish, or they will be completely unable to communicate. And they have this Great Firewall of China, as it's called, that will search for unauthenticated videos that someone has made — or "unauthenticated" videos that are actually the real version of events. Those things will get shut down very quickly. They can already do that. So it's only going to improve in a decade.
[00:18:52] Nina Schick: Exactly. And you're touching on one of the key points here with deepfakes, and it's literally called the liar's dividend. It's the flip side of the coin. So we're increasingly going to be in this future where everything can be faked, right? Including video, which to this day we kind of see as hard proof, as evidence, which is why it's so powerful, for example, in a trial. But the flip side is that everything that's authentic can also be denied, because if everything can be faked, everything can be denied. And that means that bad actors get lots of leeway to get away with things that otherwise would have been called out, documented, and seen as evidence of their wrongdoing.
[00:19:34] And we're already starting to see that, because even before synthetic media becomes ubiquitous — we're not there yet, right? We're just at the very, very start of the deepfake journey — we're already starting to see real-life political events where authentic media has been decried as a deepfake to basically let bad actors get away with their actions. The interesting point about China, I'll make specifically in regard to deepfakes, is that China is the only country that has a blanket law that bans deepfakes. So essentially what that means is the central government has the power to say what piece of media is authentic or not.
[00:20:13] Jordan Harbinger: Yeah, that's scary. Like many things, authoritarian/Chinese Communist Party — and again, I always have to clarify the difference between the CCP and the people of China. The people of China have nothing to do with this. It's really scary that the government says, "Okay, you know what? This is real." I get why it sounds like a good idea, because, "Hey, we need a central authority that can tell if something is fake or something is real because of deepfakes. So we're going to take that power." But then it's like, "Well, wait, how do I know that you're not lying about what's real?" And the answer is you don't, and that is the classic sort of technological authoritarian dilemma: "This is for your own protection. Trust me, you definitely want this." And people go, "Yeah, you're probably right." And then like 10 years later, they're like, "How the hell did we agree to let this happen?" And this is how it happens. The appeal of an authoritarian government in the first place is, "We've got to figure out what the hell is going on." "You know what I'll do? I'll clean up all the things that are confusing for you and all of the things that you feel are destabilizing. I will clean that up for you. In return, you just have to live in the exact way that I want, and you don't have any pesky freedom to get in the way of a productive society."
[00:21:21] Nina Schick: Exactly. And that problem is actually going to be felt even in the West as well, right? Because—
[00:21:27] Jordan Harbinger: Sure.
[00:21:27] Nina Schick: —as knowledge of deepfakes and synthetic media starts to broaden the kind of distrust in society — this crisis of trust, lack of faith in institutions, mainstream media, a lot of it justified, by the way — it's only going to augment. And if somebody says, "Well, the central government, the government of the USA, is going to be the authority on telling you what's authentic media, what's synthetic media, what's a deepfake or not," or Twitter is, or Facebook is, you can see why that's going to go down like a pile of sick amongst a certain part of the electorate, right? Who are you to tell me what's real and what's not? So I think this distrust that you see in societies — just the knowledge that synthetic media exists — is actually going to harden that distrust and augment the liar's dividend, even before we start seeing loads of deepfakes out in the wild.
[00:22:21] Jordan Harbinger: You're listening to The Jordan Harbinger Show with our guest Nina Schick. We'll be right back.
[00:22:26] This episode is sponsored in part by SimpliSafe. If you have 30 free minutes, you never have to worry about a break-in at home ever again. That's how quick and easy it is to set up a security system from SimpliSafe. It's the kind of thing that is so easy to do, you can do it during a Netflix binge, watching the game, or listening to a certain podcast. SimpliSafe is incredibly easy to customize for your home. You go to simplisafe.com/jordan, choose the sensors you need — glassbreak, door sensors, whatever it is — and it'll get to your house in about a week, which means by this time next week, you and your whole family can go to bed knowing your home is being guarded. If you're like me and frustrated with some technology some of the time — I know Jen wrote this copy because I wouldn't have said that about myself — I can attest that I was able to easily set everything up myself, and you can also get help from one of SimpliSafe's experts. You know it's legit when our friend who runs personal protection for high-net-worth families also recommends SimpliSafe.
[00:23:16] Jen Harbinger: It's easy to assume everyone in your house already feels safe, but they might not. And it's worthwhile to talk about and SimpliSafe is a small, easy step to make sure everyone feels safe at home. Go to simplisafe.com/jordan today to customize your system and get a free security camera. That's simplisafe.com/jordan today.
[00:23:35] Jordan Harbinger: This episode is also sponsored by Purple mattress. As the world becomes increasingly uncomfortable, we're all looking for as much comfort as we can get. And this is a true story — I'll tell the full version on a Feedback Friday — but I was at the mall with Jen one day when, not two seconds after I was having a conversation about the benefits of not being super famous because I'm unshaven, I'm gross, I got a horrible stomach ache. I'm ready to crap myself at the mall, and I'm thinking we're going home soon. Then we hear, "Jordan Harbinger, oh my God, Jordan Harbinger," from across the mall. Next thing you know, I'm taking photos with this guy who spotted me. It's the day before his wedding. He's buying shoes. Meanwhile, I'm about to lose control of my bowels in a selfie photo. So talk about an uncomfortable situation. But the one thing I can always count on is how comfortable my Purple mattress and pillow are. Like that transition? That's because Purple is comfort reinvented. Only Purple has the Grid. It's a stretchy gel material that's amazingly supportive for your back and legs while cushioning your shoulders, your neck, your hips. I don't know how it does it. It's just fantastic. It's designed really well. The Grid doesn't trap air; air actually circulates and flows through it, so you're never overheating. You never want to underestimate the cool side of the pillow, right? The Grid bounces back. It moves, it shifts. Unlike memory foam, which unfortunately remembers everything — kind of like that story earlier that I remember. Memory foam has craters and divots. The Grid will make sure your Purple pillow is always staying cool and refreshing on the cheek.
[00:24:55] Jen Harbinger: And right now, you can try Purple mattress risk-free with free shipping and returns. Financing is available too. Purple really is comfort for an uncomfortable world. Right now, you'll get 10 percent off any order of $200 or more. Go to purple.com/jordan10 and use promo code Jordan10 that's purple.com/jordan10 promo code Jordan10 for 10 percent off any order of $200 or more. Purple.com/jordan10 promo code Jordan10. Terms apply.
[00:25:21] Jordan Harbinger: And now back to Nina Schick on The Jordan Harbinger Show.
[00:25:26] Look, as a former attorney, I'm just waiting for a trial where there's a videotape of the defendant walking into a convenience store, whacking the clerk over the head with a club or something, ripping out the cash from the cash drawer. And he goes, "Your honor, that wasn't me. That is a deepfake. I was at home watching Netflix at the time." And then the government's going to have to spend $80,000 authenticating the video frame by frame with some sort of expertise. Hopefully, AI can do that, but that's what it's looking like: an arms race of detecting what one AI has created by using another AI. Actually, do you know anything about that? Like, how are we with detection here?
[00:26:09] Nina Schick: So the first thing to say is that when AI started to be able to synthetically generate this media, it was hugely exciting to the AI research community, right? It was a massive breakthrough that they got to this place where AI could not just categorize — so this is kind of what they do, for example, for driverless cars: categorize things — but actually create something, generate something. So there's been a tremendous amount of interest in working on the generation side. It pretty quickly became clear that this could potentially be a problem, and then there have been research efforts on the detection side as well. What's been covering us for the past three years in terms of detection has largely been kind of digital forensics. So researchers looking at fake videos and being like, "Oh, okay, we can tell that this is fake because the eyes are not blinking correctly, or behind the ears the background is blurred." So kind of looking for telltale signs. However, that is only ever going to be a plaster, right? Because AI is getting constantly better.
[00:27:09] So you can't rely on using these kinds of forensic methods to identify deepfakes — not only because those telltale signs won't exist pretty soon, but also because when deepfakes become ubiquitous, it's going to be impossible to use a human detector, right? You're not going to have enough manpower to look through all the videos and, frame by frame, work out which are real and which are not. So then you have to think about building an actual AI detector. Can you get AI to find something in the DNA of a piece of media to show that it's not authentic, that it's synthetically generated?
[00:27:43] The problem is that this is always going to be an adversarial game. The better the detector gets, the better the generator can become. So as soon as you build a really world-class detector, the people who are building the generator can kind of look at what the detector is doing and find a way to beat it. And the second problem is that there are so many different ways of generating deepfakes that you're never going to have one size fits all, right? You're never going to have one model which is going to be the ultimate detector, which you can put on all social media platforms to show you that a piece of content is synthetic or a deepfake. It's just not going to happen. So you're going to have to build lots and lots of different models to find all these different deepfakes in the wild.
[00:28:23] And the jury is still out as to whether or not you get to a point where the generation side of the AI becomes so good that you can never find a detector to tell that this is a piece of synthetic media. So can the AI become so good that even AI can't detect that it is synthetic?
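To make the detection side concrete, here is a minimal sketch of the kind of learned detector Nina describes: a small convolutional network trained to classify individual video frames as real or synthetic. The architecture, frame size, and labels are illustrative assumptions, not any particular production system such as Microsoft's detection tool.

```python
# Sketch of a frame-level deepfake detector: a small CNN that scores each
# video frame as real or synthetic. Real systems are far more elaborate;
# this just shows the basic shape of "AI detecting AI."
import torch
import torch.nn as nn

class FrameDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
        )
        self.classifier = nn.Linear(32 * 32 * 32, 1)  # one logit: fake vs. real

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        x = self.features(frames)
        return self.classifier(x.flatten(1))

detector = FrameDetector()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)

# Training would loop over frames extracted from labeled real/fake videos;
# here, random tensors stand in for a batch of 128x128 RGB frames.
frames = torch.randn(4, 3, 128, 128)
labels = torch.tensor([[1.0], [0.0], [1.0], [0.0]])   # 1 = fake, 0 = real

logits = detector(frames)
loss = loss_fn(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()

# At inference, average the per-frame fake probabilities across a clip.
clip_score = torch.sigmoid(detector(frames)).mean().item()
print(f"Probability this clip is synthetic: {clip_score:.2f}")
```

The adversarial problem Nina raises falls straight out of this setup: a generator trained against this detector's scores would learn to suppress exactly the artifacts the detector keys on.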
[00:28:43] Jordan Harbinger: That's almost kind of inevitable, right? And even in the interim, it'll come down to the person. They'll say, like, "Oh, is that the real Jordan or the fake one?" And then somebody who's got like a million hours of my videos is going to go, "Nope. My AI detector says that the real Jordan's left nostril is slightly smaller than the right nostril. It's imperceptible to the human eye, but the computer can tell. And look, the AI version has too perfectly symmetrical nostrils. This is fake." But even then, if I really want to make a deepfake of you or me, the AI will go, "You know what? He's got two sort of different nostrils. Her dimple on the right is deeper than the dimple on her left by this imperceptible amount." They're just going to fake that blip of detection. Like, any window we have for detecting is just going to get erased — with AI, within minutes, days, hours, a year. I mean, I don't really know. When you explained in the book how quickly this tech moves, I was blown away. Can you tell me about The Irishman? That was something where I just went, "Okay, this is coming at us like a train."
[00:29:43] Nina Schick: Yeah. I mean, a really powerful way to demonstrate just how incredible this technology is, is when you consider The Irishman, right? Martin Scorsese's latest blockbuster film, whose storyline spans seven decades. And the big challenge he wanted to take on was to bring the old ensemble back together, Joe Pesci and Robert De Niro, but he wanted to de-age these actors, right? To make them young again, so he could tell the story over the seven decades. He started the project in 2015, and in order to do it, they knew it would be a big technical challenge with a lot of special effects work. He had to film with this three-camera rig. He said it was awful — it was very, very cumbersome and annoying. And then they had a lot of post-production work. And — by the way, the budget was in the millions of dollars, and the movie came out in 2019 — if you went and saw it in the cinema, it didn't bridge the uncanny valley, in the sense that you could kind of tell the de-aging effect didn't look quite right. Kind of good, but not entirely convincing.
[00:30:48] Fast forward to 2020, just last year: a YouTuber basically took free AI software — he had a budget of zero — and in a week, taking the footage from The Irishman, had a go at de-aging the protagonists. And you can see it on YouTube. His result is arguably far — well, it's not arguably, it is far, far better than anything Scorsese was able to achieve over five years with lots of help—
[00:31:15] Jordan Harbinger: Yeah.
[00:31:15] Nina Schick: —a huge budget and teams of special effects artists. So that's the power of AI.
[00:31:18] Jordan Harbinger: The budget for — I looked this up — the budget for The Irishman, and this includes actor salaries and catering and everything, but still, was 159 million US dollars. Let's assume that at least one million of those dollars was for the de-aging; it was probably a lot more than that. And this YouTuber did it with the gaming PC that he uses to play Fortnite on or whatever, right? So—
[00:31:40] Nina Schick: Exactly.
[00:31:41] Jordan Harbinger: Yeah.
[00:31:41] Nina Schick: Exactly. And the barrier to entry — sometimes when you see the public debate on deepfakes, you know, you can imagine that, "Oh my God, I could just go and do this on my smartphone now. I could make a video of you using a racial slur." But the barrier to entry is still higher than that. You still need to have some level of expertise to make a really good deepfake. You need to have a computer with a lot of processing power. However, the trend, the direction of travel, is clear. These barriers, which are basically technical, are not going to be there for much longer. Like I said, by the end of the decade, according to some experts, 90 percent of the video content online is going to be synthetic.
[00:32:21] Jordan Harbinger: There's a concept called — and brief aside, we may have covered this, I just don't know. There's a concept called processing fluency. What is this? And what does this mean in terms of humans believing what they hear and see?
[00:32:33] Nina Schick: The reason why deepfakes are just so visceral and powerful is because they're visual media. Processing fluency is basically the concept that psychologists use to refer to the fact that we as humans, when we see something that looks and sounds right, want to believe it's true. We're hard-wired to do that. It's far more difficult for us to read something — you know, words, a piece of text on a piece of paper — that makes outrageous claims and think that it's true. It's far, far different when it comes to visual media. And there's a kind of Rubicon, if you will, when it comes to the manipulation of visual media, because technology has consistently made it easier for visual media to be manipulated, right? In the 20th century, you had Stalin with his people kind of working 20 hours on each photograph to unperson people.
[00:33:27] By the time the Soviet Union fell, you had the release of Photoshop in 1990, the first kind of mass-market image manipulation software. To this day, 30 years later, people still look at images, be that on Instagram or the front of a magazine cover, and they don't realize that those images are edited, right? People think that they're real. But video was still pretty much in the domain of Hollywood. To really create a piece of visual media in video that was manipulated to a highly sophisticated degree, you would still need special effects artists with a big, big, big budget. However, deepfakes and AI are going to blow all of that out of the water.
[00:34:09] So think about how powerful visual media is to us — first of all, just because we're wired to want to believe it, but also in the changing context of our information ecosystem, right? Again, over the past 30 years — smartphones, the Internet, social media — so much of the content that we interact with is visual. It has to be; it's the only thing that kind of grabs us. This is potentially leading to this perfect storm where you're tearing away the barriers to manipulating even the most sophisticated type of media, the thing that was hardest to do — video — and you're doing it in this environment where everybody in the world is not only consuming video but producing video as well. I think 80 percent of Internet traffic next year is going to be driven by video uploads and downloads.
[00:34:57] And there's another risk factor here, which is that perhaps in the Western world, we've had a bit more time to kind of deal with the Internet and understand some of the ramifications and the darker sides of disinformation — we've seen it, for instance, around COVID — so there is a degree of digital literacy. However, when you start talking about the other half of the world that has yet to join, that is soon going to be joining this information ecosystem, mostly in India and Africa, it is potentially even more damaging, because the degree of digital literacy is obviously far lower. They're unprepared.
[00:35:34] Jordan Harbinger: Right, yeah, they are unprepared. And speaking of India — and this is a good transition here — there was a case of, I guess, deepfake porn used against a journalist in India. And it was very effective because, well, one, that type of thing would be very effective generally in a conservative society, I would imagine. But also, India has low media literacy. I don't know exactly what that means. Why don't you define it? And can you place India on maybe a spectrum with the US and Canada? Because I would suspect the United States has higher media literacy, but I don't know by how much, because I talk to my parents and their friends and I'm like, "What? You believe that thing you read on, like, West Texas Patriot blog? It's not a news source. It's written by a guy in a basement. He made it up."
[00:36:17] Nina Schick: Yeah, you're right to bring up the case of deepfake porn, because I don't think we've discussed it yet, but the first application of deepfakes — the most malicious and widespread use — is non-consensual pornography. So when these tools started leaching out of the AI research community, the first place that deepfakes basically reared their head on the Internet was on Reddit at the end of 2017. And this is where I first came across them. And the name deepfakes has stuck because the guy who was creating them was calling himself Deepfakes. He was a Redditor called Deepfakes, which was a portmanteau of deep learning and fakes. And he basically figured out how to use these free AI tools to create fake porn videos of famous female celebrities. And he posted them on Reddit and then told other Redditors how he had done it.
[00:37:08] And fake pornography has a long history, so the idea is nothing new, but the creations were just unlike anything anyone had seen before. This is not just a Photoshopped image of a celebrity's face stuck onto a porn star's body. This is a real-life video where the celebrity is moving her face; she's got different expressions. The training data used to create those was pretty easily available on Google image search — videos and images. And as soon as he revealed how he had done it, Reddit went wild. Other people started creating their own deepfakes. There was a furore, and all the deepfake porn was basically banned off of Reddit.
[00:37:47] But since then, an entire deepfake porn ecosystem has developed online. It's very, very easy to find deepfake porn of every female celebrity imaginable, also political personalities, everyone from Ivanka Trump to Michelle Obama. But it is undeniably a gendered phenomenon, right? This is primarily targeted against women. And the alarming thing is it's not just celebrities; normal women are being targeted now as well. Because, as I mentioned, the AI is getting better and better, so less training data is needed. I can make a nude image of your sister, your wife, your mom, from a single photo, for example, from Facebook. And it's not only women that are being targeted now; minors are being targeted as well. There was a case last year where they basically found a bot being hosted on Telegram which was being used to generate fake nude images of normal women and young girls. And there were over a hundred thousand of these images just being publicly passed around in these Telegram groups.
[00:38:52] The key point is this is not just some tawdry "women's issue," right? Because the AI can basically be used to hijack anyone's biometrics. So it's pretty clear that it's going to leach out into other forms of malicious use. We're already starting to see that with fraud, with political disinformation, as a tool of intimidation. So this instance that you mentioned about this journalist in India: it was a case of somebody who is a very outspoken investigative journalist, who was critical of the ruling Hindu nationalist party. So she was kind of a thorn in the side of the government. And there was a very contentious case, which she was speaking out about publicly, involving somebody who had been accused of child rape. And she said things that were obviously not in line with what some powerful people in the government wanted to hear. And she basically became targeted with a deepfake porn video. And to any woman, this would be an utterly humiliating, devastating experience — perhaps even more so in India, given the status of women in society. I mean, I'm half South Asian, and South Asia is probably one of the worst places in the world to be a woman.
[00:40:02] So not only was this fake video released, but all her personal information was released as well — her telephone number, her address. She was doxed, and she just became subject to this absolute campaign of harassment. People were emailing her, calling her, asking her her rates for sex. She's a brave woman, right? She's this kind of intrepid investigative journalist, but that experience really scared her. And she even spoke about it later, saying, "It changed me. I wish I had kind of the courage to just do what I did before," but that kind of thing hits too deep.
[00:40:36] So going back to the digital literacy question: yes, it was very effective in India, but you can see how something like that would be just as effective in Canada and the United States. Actually, now we're starting to see this deepfake creation marketplace emerge on the Internet, where business executives are being targeted by deepfake porn — or perhaps you can imagine how, in this environment, if a video or an audio clip of you emerged using a racial slur, that could potentially devastate your career and your business. So porn is the beginning. There are going to be many, many other use cases. And in terms of digital literacy, yes, in a place like India it could potentially do more damage, but it could do just as much damage in Canada and the United States as well.
[00:41:22] Jordan Harbinger: Sure. Yeah. Look, we feel bad for celebrities and public figures when this happens, but how are we going to feel when it's so easy that people are doing this and creating this for little kids in their seventh-grade class? I mean, we're talking about—
[00:41:34] Nina Schick: Exactly.
[00:41:34] Jordan Harbinger: —potentially life altering consequences for kids involved in this, whether it's a kid making it as a prank and getting in trouble and getting kicked out of school or getting arrested, or the little girl that is the subject of the video, which now has essentially this horrific child pornography floating around that everyone's laughing at and she's seven or 10.
[00:41:57] Nina Schick: Yeah. And that's actually what I'm more worried about, because I think when I first came to deepfakes, and when I started researching and seeing what was going on, your mind goes to celebrities or how it can be used as disinformation against political figures. But the reality is, these people are well-resourced. They can lawyer up. They have teams of people that can rebut. They have PR teams, crisis comms. But what happens, like you said, if it's your seven-year-old child who's being bullied at school with a fake video or a nude image — which, by the way, is very, very easy to do right now? And as I already mentioned, they exist and they're being publicly shared.
[00:42:36] And I think this is really, for me, one of the most alarming things about the malicious use of deepfakes is that we're not just talking about celebrities and politicians. We're talking about you, me, our children, every individual can be targeted in a very harmful way. Because again, as I mentioned, the training data that's needed is becoming less and less.
[00:43:00] Jordan Harbinger: This is The Jordan Harbinger Show with our guest Nina Schick. We'll be right back.
[00:43:05] This episode is also sponsored by MVMT watches. In a tiny apartment in Southern California, two college dropouts teamed up to create a watch company that broke all the rules. With fair prices, unexpected colors, and clean original designs, MVMT grew into one of the fastest-growing watch brands, shipping to over 160 countries across the globe. I've been wearing the Jet Noir Arc automatic watch, and I like that it's durable and it's not heavy. I've accidentally banged the watch against the undersides of tables — I do that all the time with watches, by the way. I banged it against something like two hours after I put it on and was kicking myself, but then I noticed the marks wiped right off. And it doesn't break the bank; it's high quality for a reasonable price, because they own the process from start to finish. You get a beautiful watch shipped right to your door for free. And if you don't love it, just ship it right back, also for free. Now, MVMT has expanded into blue light glasses that protect your eyes from your screens, minimalist jewelry, and more style essentials that don't break the bank, all designed out of their California HQ.
[00:44:01] Jen Harbinger: If you want to elevate your style without breaking the bank, then join the MVMT and get 15 percent off today, with free shipping and free returns, by going to M-V-M-T.com/jordan. Again, that's M-V-M-T.com/jordan.
[00:44:15] Jordan Harbinger: This episode is sponsored in part by HostGator. Sure, I used to be just like you: adrift without a website to call my own on a merciless Internet that didn't care if I came across to the world like some kind of schmuck in a Google search. But then somebody introduced me to HostGator and I wised up real quick. See, I came in from the cold and took control over my Internet presence, and best of all, it was super simple and it didn't cost me an arm or a leg or even a secret family recipe. HostGator has plans that start at $2.64 for our listeners. It's an oddly specific number — let's be honest, that's cheaper than a lot of parking meters. HostGator offers a 45-day complete money-back guarantee, and HostGator hosts over two million websites. They've got 18-plus years of experience in supporting website owners with 24/7, 365 support, so you're never going to be on your own when it comes to managing your website. It's time to make it official. Go to hostgator.com/jordan right now to get started. Hostgator.com/jordan.
[00:45:10] Thank you so much for listening to the show. I love the fact that I'm able to create this for you. My team loves that we create this for you. Please do consider supporting those who support us. And I know you're thinking, "Oh, but I can't remember the codes. They're hard to remember. Is it just slash Jordan or slash Jordan10?" Don't worry about that. Just go to jordanharbinger.com/deals. That's where you find all those codes, everything in one place, so you don't have to write it down. You can find it all at jordanharbinger.com/deals. And don't forget, we've got worksheets for today's episode. If you want some of the drills and exercises talked about during the show in one easy place, that link is in the show notes at jordanharbinger.com/podcast.
[00:45:48] Now, for the conclusion of our episode with Nina Schick.
[00:45:53] Right now, the things that are more readily available are what I guess you'd categorize as cheapfakes, right? We kind of touched on this in the beginning of the show where — what was the classic example from earlier last year? It was like a Nancy Pelosi video that I think they had slowed down and sped up at different points. So it looked like she was—
[00:46:08] Nina Schick: Yeah.
[00:46:08] Jordan Harbinger: Was it that she was drunk? Or I can't remember exactly the example here.
[00:46:12] Nina Schick: Yeah, so a cheapfake is basically the forerunner to a deepfake. Essentially, a cheapfake is a type of visual media that's been manipulated or edited or taken out of context in some misleading way, but it's not been done with AI. So it can be nothing very sophisticated at all. And the interesting thing about cheapfakes is that they have already been so effective. For the past 10 years, I've worked in geopolitics, looking at how the Internet, social media, and smartphones are changing politics all across the world. And one of the things that we've seen coming up again and again and again is the prevalence of cheapfakes: manipulated media used to serve a political point or as a piece of disinformation. And it's been really interesting to track how, during the course of 2020, these cheapfakes basically inundated the American political discourse in the context of the 2020 election.
[00:47:07] So the Pelosi video caused a furor because, basically, it wasn't done very well, but they slowed down her words to make her appear drunk, like she was slurring her words. And this was, of course, amplified by the president of the United States, right? So you're setting the tone from the top that, in order to score political points, it's okay to use this kind of manipulated media. Even though it's not done with AI, it's a cheapfake, it's already very, very effective. So if you consider how effective cheapfakes are now, then just think about where we are going to be in five to 10 years' time, when it's deepfakes that nobody can tell are authentic or not, and everyone's already far more distrustful of what media they can or can't trust. The potential ramifications are vast.
[00:47:52] You start to see some of this in American discourse. So around the 2020 election, the use of cheapfakes was really prevalent, especially when the president started pushing disinformation that somehow the election was subject to widespread voter fraud. One of the ways he augmented this argument was by tweeting a lot of authentic videos, real videos of election workers shifting ballots, as though it were proof of ballots being stolen. Now, if I'm telling you that in five or seven years' time you can literally create a video of the ballots being torn up, and a recording — you could actually do this now — a recording of Joe Biden saying, "Burn those ballots," imagine what that could potentially cause in terms of civil unrest. We saw what happened with the storming of the Capitol on January the sixth. I've seen the opinion polls consistently now showing that the majority of Republican voters believe that the election was stolen from Donald Trump. And if you have this kind of discord at the heart of a society, you just can't see how that is going to get any better in future. It's hard to overcome something like that.
[00:49:04] Jordan Harbinger: What people don't realize is it's not just the video imagery that can be manipulated; it's also the vocal synthesis. And this is something that kind of freaked me out before, too, because as a guy with 1,500-plus hours of my own voice out there, you can use the same AI on me. The vocal synthesis is even easier. And there are podcast editing programs now where you can give it enough samples of your own voice, and then my producer can fill in a word or two. If there's an awkward transition, or let's say I cut off in the middle of a word and he needs to fill it in, he can just type it and it will say it in my voice. And if it's one or two words, you just don't notice; it's imperceptible. Unless I'm listening to it very carefully, I can't even sense it. So of course, casual listeners can't either. There's a YouTuber that I think is called Vocal Synthesis, or VS, and basically he can just make anyone say anything.
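To make the overdub workflow Jordan describes concrete, here's a minimal sketch in Python. It's illustrative only: synthesize() is a stand-in for a voice model trained on hours of a host's recordings (no real tool's API is used), and real editors operate on waveforms rather than word lists. The point is the shape of the pipeline: align a transcript to the audio, swap one word's clip for a synthesized one, and re-splice.

```python
# Illustrative sketch of text-driven audio overdubbing, not a real product API.
from dataclasses import dataclass

@dataclass
class Segment:
    word: str
    audio: bytes  # the clip of audio aligned to this word

def synthesize(word: str) -> bytes:
    # Stand-in for a model trained on many hours of the host's voice;
    # a real system would return audio matching timbre, cadence, and accent.
    return f"<synth:{word}>".encode()

def overdub(timeline: list[Segment], index: int, new_word: str) -> list[Segment]:
    """Replace one word's audio with a synthesized clip and re-splice."""
    patched = list(timeline)
    patched[index] = Segment(new_word, synthesize(new_word))
    return patched

# Usage: fix a flubbed word without re-recording anything.
timeline = [Segment(w, f"<rec:{w}>".encode()) for w in "we will be right back".split()]
timeline = overdub(timeline, 1, "shall")
print([s.word for s in timeline])  # ['we', 'shall', 'be', 'right', 'back']
```

With a patch of only a word or two, the seam lands inside natural-sounding speech, which is exactly why casual listeners can't hear it.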
[00:49:53] And you mentioned before pranksters and fraudsters. I can imagine a scenario in which I'm the controller for a major company, and I get a call from the CEO and a few members of the board that say, "Listen, we need to take care of this. We need you to get two million dollars ready and wire it right away. We missed a payment. Here's the banking and routing info. Do you have everything? Great." And then suddenly, two months later, they come back and say, "Why did you wire two million dollars to this random guy in Bangladesh or in New Jersey? What was the deal there?" "Well, you called me and told me to do that. You and three members of the board were on the phone at the same time. You answered my questions." "What do you mean?" And then that person goes to prison or something for embezzlement, right? Because it's so tricky. They're not going to have a recording of that phone call, right?
[00:50:44] Nina Schick: Well, do you know the crazy thing? It's already happened. You had the first major case of deepfake fraud reported in The Wall Street Journal in 2019.
[00:50:54] Jordan Harbinger: Oh, wow.
[00:50:55] Nina Schick: Yeah, it was a top executive of an energy company. He was the head of the British part of the company, and he had a phone call with the parent company, where his boss was German. Now, the challenge of getting AI to synthesize your voice when deepfakes first started emerging was actually more difficult, because everybody's voice is so different: the cadence, the tone, the accent. But in the three years since then, we've come a long, long way. So this executive had a phone call with his German boss and had no reason to suspect that he was speaking to anyone but his German boss. The accent was right. The cadence was right. The tone was right. His boss told him to wire a quarter of a million dollars to a Hungarian energy supplier. He believed he was talking to his boss, and even though it was out of the ordinary, he did it. The alarm was only raised when his boss called him again and asked him to do it again. Then, you know, some alarm bells went off: this is a bit suspicious.
[00:51:56] And what it was, was fraudsters assisted by AI. Now, this would have been a high-value target. Whoever was behind it would have obviously invested quite a lot into the technology; you're talking about a high sum of money. But as the technology becomes accessible, you can see how this is not only going to be used against top business executives for a quarter of a million or two million dollars. One favorite thing that scammers love to do is the one where your distressed relative is calling: "I've been in a car accident. I've hit somebody with a car. I'm in an emergency. I'm in jail." Could you imagine how devastating that would be, where you as a parent or a husband get a call, a distress call, from one of your loved ones, telling you they need money now? You're, of course, going to wire it to them. So you can see why this is going to become a favorite tool for fraudsters, especially as the technology gets better. And as you already mentioned, right now you already have programs that you can use as a podcaster, right?
[00:52:52] Jordan Harbinger: Sure.
[00:52:52] Nina Schick: To work with an audio track of your voice, where you can resynthesize what you said. But when it's possible to do that with just a few seconds of a clip found on an Instagram story or a TikTok video, you know, anyone's voice can be emulated.
[00:53:08] Jordan Harbinger: How do we defend against this? I mean, you mentioned AI detection methods and things like that, but it seems like we need to be skeptical but not cynical. Because if we become cynical, then, to use the cliche, the terrorists win, right? Because if Americans or Westerners or people in the UK and wherever you're listening are busy fighting each other, and we're just desperately trying to figure out which facts are actual facts, we can't respond to information warfare, propaganda, or other very large issues that require bipartisan cooperation. And that's really dangerous.
[00:53:41] Nina Schick: Absolutely. I think that is key: being skeptical, but not being cynical. It was very interesting because, obviously, I speak publicly about deepfakes and I wrote my book on deepfakes. When, for instance, certain things happen, people want to interpret them with a political slant. They become almost too skeptical. So when Donald Trump went into the hospital with COVID and came out with obviously stage-managed video and photographs, you know, trying to project this image of strength, that he was fine, a lot of people contacted me on Twitter asking me whether these were deepfake videos, you know?
[00:54:16] So it is important, again, when you think about the liar's dividend, which is, I think, already the most pervasive political effect of deepfakes before deepfakes themselves become ubiquitous, where people don't want to trust anything, that we approach this with skepticism without being cynics. However, the second point I'd make is that this, to me, is a paradigm change, right? It's a paradigm change in the way that we communicate. And it's a paradigm change in the way that we interact with and think about the future of this content, this media that we're constantly going to be interacting with.
[00:54:54] So unfortunately, there is no silver-bullet answer. It is something that has to be addressed at a society-wide level, just like climate change, because ultimately you're talking about the integrity of the information ecosystem, and everything exists within this information ecosystem. Unless you're one of those people who lives in a tribe in the Amazon and still hasn't had any contact with the outside world, every single human being on this planet will come into contact with this information ecosystem.
[00:55:28] So how do you shore up the integrity of the system itself? There are some technical solutions: you can use AI detection tools. You can also talk about media provenance, and Adobe has been leading an initiative here where you basically flip the equation on its head and you make the tools available for the people who are purveyors of the truth, whether they're journalists or activists, to be able to prove, by capturing it in the hardware of their devices, that the piece of media they're sharing is authentic. And interestingly, they've already created a prototype with TruePic and Qualcomm, the chip manufacturer, where they now have a working prototype of a phone where you can identify the authenticity of a piece of media throughout its life.
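The provenance idea Nina describes can be sketched in a few lines. This is a toy, not the Adobe/TruePic/Qualcomm design; real systems add secure key storage, edit histories, and standardized metadata. But the core mechanic is plausibly this simple: the capture device signs a hash of the file the moment it's taken, and anyone can later check that the bytes haven't been altered.

```python
# Toy sketch of hardware-backed media provenance.
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

# Imagine this key living in the phone's secure hardware, not in Python.
device_key = ed25519.Ed25519PrivateKey.generate()

def sign_at_capture(media: bytes) -> bytes:
    """Runs on the device: sign a hash of the media at the moment of capture."""
    return device_key.sign(hashlib.sha256(media).digest())

def verify(media: bytes, signature: bytes, public_key: ed25519.Ed25519PublicKey) -> bool:
    """Runs anywhere: does this file still match what the device captured?"""
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except Exception:
        return False

photo = b"...raw image bytes..."
sig = sign_at_capture(photo)
pub = device_key.public_key()
print(verify(photo, sig, pub))            # True: untouched since capture
print(verify(photo + b"edit", sig, pub))  # False: any alteration breaks the check
```

As Nina notes next, none of this matters unless verification becomes an industry standard that viewers actually rely on.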
[00:56:15] In order for that to work, though, it has to become industry standard, right?
[00:56:19] Jordan Harbinger: Right.
[00:56:19] Nina Schick: Otherwise you basically have this question of your authenticity-approved photo versus everything else — again, the trust issue comes into play. And then, second, you have to think about — and this is actually a larger question for society in the exponential age, right? We are living at a time when there's going to be more disruption and flux than potentially there has ever been in the history of humanity. And the reason for that is the exponential technological change that's coming our way. So we have to think about: how do we refit and redesign our society to be able to function in this new world that we're creating? We have to rethink policy. We have to rethink regulation. We have to rethink legal systems. All of this requires a networked approach, and the first step is overwhelming, but I think the first step is just conceptualizing it. That's why I wrote my book: it's about understanding how all of this fits into this information ecosystem, and what this information ecosystem, which has basically come into existence in the past 30 years, actually is.
[00:57:22] In the past, in our history, whenever there's been a huge moment of change in the history of human communication, it's always transformed the future of human society. So, for example, without the printing press, there wouldn't have been the Reformation; the world would look entirely different. And, you know, what would the world look like today if it hadn't been for the technology of the information age? If there were no Internet, smartphones, or social media, again, it would be a completely different place. So we're still reeling from those kinds of changes that have come about in the last 30 years. And boom, now we're about to go into the age of synthetic media. It's going to take some time for society to catch up, but it is a society-wide effort.
[00:58:01] Jordan Harbinger: Nina Schick, thank you so much for coming on the show. This is terrifying, and I'm probably not going to sleep for a few weeks because of this. But honestly, it really is fascinating. I think the technology is fascinating. It seems like we're going to have to become much more media literate as a society. And I guess the good news is that that seems to happen naturally, right? Like, as people use computers and see these things more. Take my own parents: 10 years ago, they were using AOL email, and any video or any photo, they would have been like, "Wow." And I remember the first time I saw something that was photoshopped, I was like, "Oh, my gosh, it's a bird head on a man's body. It must be a real bird man." And now I'm like, "Okay, this is fake. I'm not an idiot," right? So it happens organically.
[00:58:46] The problem, though, is that media literacy seems to be trailing by like 10 or 20 years for many people. So there's going to be this window in which people just believe anything that they see. And I think we're in that window right now, where you see a video of Nancy Pelosi looking drunk or Hillary Clinton stumbling around and you go, "That's real. I saw it on InfoWars," you know, where the guy is just editing videos to make fake things look real. It's going to be a while before we can clearly tell, and if we have our own bias, we have to mitigate that. So there's this decade-long or longer window where we're chasing this; our brains are chasing the computer technology to detect it. And like you said, there's going to be a time at which we can't do that anymore. So that is a little scary, but hopefully policy catches up in the meantime.
[00:59:32] Nina Schick: It's going to have to.
[00:59:33] Jordan Harbinger: Thank you very much.
[00:59:34] Nina Schick: Thank you.
[00:59:37] Jordan Harbinger: I've got some thoughts on this episode. Before we get into that, Bob Saget shares how humor can be used as a coping mechanism for pain, and the necessity of reinvention for career longevity and fulfillment. Here's a preview.
[00:59:48] When did you know that you were funny? Like, were you a class clown the whole time or was it—?
[00:59:53] Bob Saget: Last year.
[00:59:54] Jordan Harbinger: Last year, yeah.
[00:59:55] Bob Saget: Fame is bullshit. To let it go to your head, the moment you're cocky, is the moment you've lost me as an audience. And a lot of people are attracted to it. You know, if I didn't know that secret as a teenager, I would've had a lot of girlfriends or just been quite a stud because the key was not to care. I'm calm in my skin now. I don't know if it's evident during this thing, if I'm so calm, why the hell was I clicking this chopstick? I'm demonstrating—
[01:00:21] Jordan Harbinger: Yeah, man.
[01:00:22] Bob Saget: This is what you were hearing — but you were hearing it in a much slower clicking.
[01:00:28] Jordan Harbinger: It's like a hypnotic pattern for the listener.
[01:00:31] Bob Saget: So I either put your listeners to bed or made them walk around scared, thinking about people that are trying to hold them back. Nobody's holding me back. If anybody's holding you back, it's you, you know — not you.
[01:00:41] Jordan Harbinger: Yeah.
[01:00:41] Bob Saget: Not you, Jordan.
[01:00:42] Jordan Harbinger: No.
[01:00:43] Bob Saget: If anyone's holding me back, it's you, Jordan.
[01:00:45] Jordan Harbinger: This podcast is going to — you're going to see a massive onslaught of listeners for your show.
[01:00:50] Bob Saget: Bob Saget's Here for You is going to get eight more people.
[01:00:53] Jordan Harbinger: Provided we don't blow it in the last quarter here, or the last 10 minutes here or whatever.
[01:00:57] Bob Saget: No, that's impossible. Well, we got more than that. We're at an hour and 12 and two of those minutes have to be cut.
[01:01:03] Jordan Harbinger: You're standing there with John Stamos, Dave Coulier, and you think no one can see you. And I heard that there was a life-size doll.
[01:01:10] Bob Saget: Yeah. Let's forget this one. This was painful. You know too much. I'm going to have to kill you.
[01:01:15] Jordan Harbinger: I know I read your book.
[01:01:16] Bob Saget: When I get there, I have to kill you.
[01:01:17] Jordan Harbinger: Yeah, we can hang out when the plague lifts.
[01:01:19] Bob Saget: Oh, I can't wait. Maybe I'm on here, but I don't like Coronavirus. We don't have wet markets in downtown LA, right?
[01:01:25] Jordan Harbinger: No, we don't. At least not with like bats and pangolins and other stuff that you haven't heard of. No.
[01:01:30] Bob Saget: Oh, what the hell happened?
[01:01:38] Jordan Harbinger: For more with Bob Saget on how the big breaks can come from one of life's worst disappointments and Bob's proven remedy for dealing with the haters — and we all have haters — check out episode 372 of The Jordan Harbinger Show with Bob Saget.
[01:01:51] Man, what a brilliant woman she is. You know, she speaks seven languages. Wow. Unbelievable. My mission on this show is to create more skeptics without birthing more cynics at the same time. And I'm telling you, guys, this is about to become a major, major issue in everything from entertainment to marketing, to politics, to identity theft and beyond. Misinformation and disinformation have always existed, but nothing at the scale that we are seeing now, and nothing at the scale and level of sophistication that we will see in the near future. An AI character could generate an entire work history and resume, for example. They could pose as a journalist having interviewed dozens of famous people, except all the videos are fake. They're just deepfakes.
[01:02:35] Personally, I'm looking forward to never doing interviews again. I'm just going to sit around making deepfakes of me talking with Barack Obama and Donald Trump, maybe even at the same time. Who knows? Black PR firms, so black hat PR firms, right? The bad guys. They're creating disinformation to lower a competitor's share price or otherwise harm corporations here in the United States and abroad. Our information landscape is becoming less and less stable and certainly less trustworthy, but we shouldn't just refuse to trust any media. That's the issue we're seeing with these QAnon cultists and people who only read fake news blogs from the fringes that confirm their existing biases or conspiracy theories.
[01:03:14] We need AI detection tools because deepfakes will soon be so good that humans won't be able to tell the difference. The insurance industry, by the way, I think is going to be on the front end of this, because they could be bankrupted with fake claims backed by fake video and photographic evidence. The technology here is really going to be something. They're going to have to use watermarking, special device IDs, signatures, who knows, maybe blockchain; something could be a part of this, right? We just don't know. It's got to be tackled somehow.
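Of the options Jordan lists, watermarking is the easiest to illustrate. Below is a toy least-significant-bit watermark in Python; real forensic watermarks have to survive compression and cropping, which this does not, and the device ID shown is made up. It only demonstrates the principle of carrying provenance inside the media itself.

```python
# Toy LSB watermark: hide a short device ID in the lowest bit of raw pixel bytes.
def embed(pixels: bytes, mark: bytes) -> bytearray:
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit only
    return out

def extract(pixels: bytes, n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(sum(bits[i * 8 + j] << j for j in range(8)) for i in range(n_bytes))

image = bytes(range(256)) * 4        # stand-in for raw pixel data
marked = embed(image, b"CAM-0042")   # "CAM-0042" is a hypothetical device ID
print(extract(marked, 8))            # b'CAM-0042'
```

A fraudster can strip a naive mark like this trivially, which is why watermarking, detection, and signed provenance will likely have to work together.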
[01:03:39] We know that disinformation is pervasive and working against us, because we can't even seem to agree that Russia interfered with the 2016 election, despite the unanimous conclusions of all relevant intelligence and law enforcement agencies, foreign and domestic, academia, think tanks, et cetera. I'm not saying they put their thumb on the scale, but we know that they at least tried. The idea behind this is not just to confuse the issue; foreign powers and, frankly, domestic powers want to break people's commitment to even staying or becoming informed at all in the first place. They want to make us — I sound like a conspiracy theorist myself now — they want to make us feel hopeless so that we just don't even try anymore.
[01:04:19] I also wonder, look, my son, he's 19 months old right now, 20 months old. Will he believe anything at all that he sees because of things like this? Anything he sees on a screen could have been created by somebody on a smartphone. And he may sort of natively, intuitively know that he's not necessarily looking at real people, but that's a generation from now. Will it take that long? We won't be able to believe our own eyes and ears at all soon.
[01:04:46] By the way, we did a whole episode on Russian election interference. That's David Shimer; that was episode 419. It goes through the historical basis for this — a lot of evidence, a lot of proof of the United States doing it as well. It's not a new thing, but if you want a deep dive on that, it's episode 419. Renee DiResta was also on the show; we talked about social media and mass media manipulation, so check that out if you want to do a deeper dive. She talks about the Soviet playbook and how that fools a lot of people, not just old folks or quote-unquote dumb right-wing or dumb left-wing people. We dive into that whole topic with Renee DiResta on episode 420 of this show as well.
[01:05:20] Big thank you to Nina Schick. Her book title is Deepfakes: The Coming Infocalypse. In-foca-lypse? In-foc-a-lypse? I guess that's one of those things that's better written and read than said. Links to her stuff will be on the website in the show notes. Please use our website links if you buy the books. All those things add up, and they do help support the show. Worksheets for the episode are in the show notes. Transcripts are in the show notes. There's a video of this interview going up on our YouTube channel at jordanharbinger.com/youtube. I'm at @JordanHarbinger on both Twitter and Instagram, or just hit me on LinkedIn.
[01:05:50] I'm teaching you how to connect with great people and manage relationships using systems, software, and tiny habits over at our Six-Minute Networking course. That course is free. I'm not selling you anything. You don't even need your dang credit card info or name, nothing. Just go to jordanharbinger.com/course. I will teach you how to dig the well before you get thirsty and create relationships before you need them. And most of the guests on the show subscribe to the course. Come join us; you'll be in smart company where you belong.
[01:06:16] This show is created in association with PodcastOne. My team is Jen Harbinger, Jase Sanderson, Robert Fogarty, Millie Ocampo, Ian Baird, Josh Ballard, and Gabriel Mizrahi. Remember, we rise by lifting others. The fee for the show is that you share it with friends when you find something useful or interesting. If you know somebody who's interested in the future of media, or who just wants to be scared about what's coming for us in the technology world, or who's an outside-the-box thinker, I think this is a good one for them. Hopefully, you find something great in every episode. Please share the show with those you care about. In the meantime, do your best to apply what you hear on the show, so you can live what you listen, and we'll see you next time.