The lunar landing and the power of 'deepfakes'
How the power of AI is undermining the ability to trust what we see and hear
Quick summary:
November’s theme takes a look at the intersection of technology and trust
The rapid growth in the power of AI has led to new forms of “synthetic media”
Manipulated media can entertain us, connect us with others, or cause us harm
Once easy to spot, new “deepfake” technology blows away old, clunky CGI effects
In a world of misinformation, how can we trust what we see and hear anymore?
As trust evaporates, so does the foundation that society and institutions rely on
Did the moon landing actually happen - or not?
Last month in Forestview, I examined the power of artificial intelligence and its implications for business and life. (If you missed any of my previous articles, you can find them in the archive.) This month, I am shifting focus to trust in society and how it is changing in the face of new technology. With the recent completion of the midterm elections in the United States, concerns about misinformation have been top of mind. This season, it has felt as if no week goes by on Twitter without a headline declaring that an unflattering video of a political candidate has been exposed as a fake. You might have even seen one or more of them in passing. Many of us feel we can spot these false narratives relatively easily. After all, altered media has been around for a while, and we have grown accustomed to Photoshop and CGI effects in movies. But you might not know how sophisticated these fabrications have become with deep learning and ever more powerful computers: “deepfakes,” for short.
Numerous examples of deepfakes can be found through a quick internet search (be aware - a lot of it involves fake celebrity porn). In his fantastic book Trust No One: Inside the World of Deepfakes, author Michael Grothaus chronicles how far the technology has come in just the last five or six years, with quality improving exponentially over time. One of the most compelling examples in the book is the short film In Event of Moon Disaster. A pair of creators at MIT teamed up with two AI startups to produce an alternative history of the famous 1969 lunar landing. The film draws on a speech written for President Nixon in case the mission failed, a speech that was never delivered, and intermixes actual archival footage with AI-based synthetic media that merges President Nixon’s face and voice onto an actor’s performance. Grothaus writes that, while he takes comfort in knowing the moon landing succeeded in real life, the alternate version is convincing to anyone who was not alive to witness the event firsthand more than five decades ago.
You can’t believe your eyes and ears anymore
Some deepfakes are fun projects, but in the wrong hands the technology could quickly cause harm. Grothaus points out that, back in 1969, the Soviet Union could have seized on the Moon Disaster video to undermine the credibility of the United States. In his words, “Propaganda has potentially never been so dangerous.” This is why the U.S. Department of Defense has been focused on the threat of deepfakes for several years now, and why China has severely cracked down on their creation. However, as Grothaus highlights, there is a difference between democratic governments working to cut down on disinformation and “fake news” on the one hand and autocratic regimes stifling dissent and speech on the other. A balance needs to be found, and countries across the globe are in the early stages of exploring how best to regulate this new technology.
We will soon live in a world where we will need to ask of everything, ‘is this real?’ because we will no longer be able to trust that the photos we see, the videos we watch, and the audio we hear are authentic representations of fact…
At some point in the near future, the majority of audio-visual content that we find online…will be synthetically altered in whole or in part by artificial intelligence.
- Michael Grothaus
The concern is not simply that deepfakes exist; it is that producing a credible one no longer takes lots of money and technical expertise, particularly at lower resolutions that look fine on a phone or laptop even if grainy on a large HD screen. Widespread access to this technology raises the level of concern about its potential nefarious uses. It is worth remembering that similar problems followed the advent of the printing press and the rise of printers and newspapers. When the tools of publication and distribution suddenly became more accessible, there was an explosion in both valuable content and “fake news”. Readers, initially under the impression that everything set in ink was true, learned over time to be discerning consumers.
Grothaus argues that digital media literacy is equally important in today’s world: we cannot rely on legislation alone to solve every problem deepfakes create, any more than legislation alone can stamp out misinformation and fake news. One technique for helping people evaluate new information is known as the SIFT method:
Stop
Investigate the source
Find more coverage
Trace back to the original context
As Grothaus points out, “we just happened to grow up in an era where, until now, video and audio weren’t malleable like other mediums…that era - that aberration - has now passed.” These mediums have changed, and so must our expectations.
What does a loss of trust mean for society?
The loss of trust goes beyond actual deepfakes. The mere possibility of deepfakes creates a problem known as the “liar’s dividend”: even actual events can be denied by those who wish to avoid the consequences of the truth. It will become increasingly difficult for ordinary people to determine what is real and what is fake: according to Grothaus, we will live in a world of unreality. Today, we increasingly rely on AI to detect deepfakes created by AI. There is a game of cat-and-mouse here: as detection AI gets better at catching deepfakes, the deepfake AI learns and gets better at evading detection. Over time, most of us will look to prominent media outlets and tech firms to tell us what is real and what is make-believe, because they will be the only entities with the resources devoted to helping us with this problem.
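To make that cat-and-mouse dynamic concrete, here is a minimal toy sketch of adversarial training in Python using PyTorch. Everything about it is an illustrative assumption - the tiny networks, the one-dimensional stand-in for “media,” the hyperparameters - and real deepfake systems are vastly larger and work on images, video, and audio. But the loop captures the essential idea: the detector trains to separate real from fake, then the generator trains to fool the newly improved detector, over and over.

```python
# Toy sketch of the adversarial cat-and-mouse loop (not a real deepfake system).
# Assumes PyTorch is installed; all shapes and names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_media(batch_size=64):
    # Stand-in for "authentic media": samples from a fixed distribution.
    return torch.randn(batch_size, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
detector  = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Detector's turn: learn to label real media 1 and fakes 0.
    real = real_media()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(detector(real), torch.ones(64, 1)) +
              loss_fn(detector(fake), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator's turn: learn to make the improved detector say "real."
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(detector(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Neither side ever “wins” outright: every improvement in detection becomes a training signal for the forger, which is exactly why the arms race is expected to continue.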
So what happens in a world where we can no longer be sure if what we see on our screens is true or false? If we can no longer believe what we see with our own eyes and hear with our own ears? Deepfakes could create a zero-trust society within the next decade, and that's something we all need to be prepared for.
- Michael Grothaus
If people give up on expending the energy to determine what is real and what is fake, a condition known as “reality apathy” sets in. When this occurs, social cohesion erodes because we no longer have the confidence to trust what we see and hear. And without that trust, many of the institutions that society depends upon begin to collapse: schools, courts, science, news media, and more.
Some solutions have the potential to help, including blockchain time-stamping to create a “digital fingerprint” that can authenticate videos. Another idea is provenance-based capture, where the recording device embeds a unique digital signature that would be impossible to alter. Scrubbing social media feeds might help those concerned that their likeness could appear in a deepfake video depicting them, for instance, committing a crime that never occurred. (Video contains roughly 30 still images per second, so even one 20-second selfie video gives a deepfake AI 600 photos to train on.) Ultimately, though, the best defense is awareness and education for each of us. We must fight our confirmation bias, our desire to seize upon a new video as evidence that our pre-existing beliefs were correct. The biggest concern is that, even if we can definitively show something is a deepfake, the truth may no longer matter to us.
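For a concrete sense of the “digital fingerprint” idea, here is a short Python sketch that computes a cryptographic hash of a video file. The file names are hypothetical, and the blockchain time-stamping step - publishing the digest to a tamper-evident ledger so its date can be proven later - is not shown. The point is simply that altering even a single frame changes the fingerprint completely, so a clip can be checked against the digest registered by its creator.

```python
# Minimal sketch of hash-based media fingerprinting (file names are hypothetical).
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest()

registered = fingerprint("press_conference_original.mp4")  # hypothetical original
suspect    = fingerprint("viral_clip.mp4")                 # hypothetical copy

# Any alteration to the video, however small, produces a different digest.
print("matches registered original" if suspect == registered
      else "altered or different source")
```

Provenance-based capture pushes the same idea down to the camera itself: the device signs the fingerprint with its own key at the moment of recording, so a video’s authenticity can be traced back to the hardware that captured it.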
Has a deepfake fooled you before? Have you inadvertently shared a video or audio clip that turned out to be a deepfake? Are you excited about the positive possibilities of deepfakes - such as bringing a loved one back to life on video - or worried about the negative consequences? Do you believe deepfakes affected the midterm elections, and how worried should we be about them in future elections? Who do you trust as a source for reliable information on whether something is a deepfake?