Positive Implications of Deepfake Technology in the Arts and Culture

Written by Emily White

Deepfakes, one of the hottest topics to burst onto the technology scene in the last five years, have received a lot of negative publicity because of the risks they can present in the hands of bad actors. However, when used in good faith, they have many positive and exciting uses within the arts and culture sector. This report will briefly explain what deepfakes are and how they work, as well as some of their weaknesses and negative implications. Additionally, three case studies will examine the positive ways in which deepfakes can be used in artistic and cultural institutions.

Deepfakes Decoded

The word “deepfake” is commonly understood to mean any manipulated video, but its actual definition is more specific. The name, which first emerged in 2017 as the Reddit username of an individual who used the technology to create adult film videos with celebrities’ faces pasted onto the actors’ bodies, is a portmanteau of “deep,” as in deep learning, and “fake,” as in false or fabricated. “Deepfake” specifically refers to synthetic videos, images, and audio recordings created through deep learning AI techniques. When done well, deepfakes can look and sound incredibly realistic, and in a world where video is treated as hard evidence, they blur the boundary between reality and artifice.

To understand deepfakes, one must first understand deep learning. Deep learning, according to Google Brain founder Andrew Ng, is a type of machine learning based on “brain simulations” consisting of “very large neural networks” trained on “huge amounts of data”; put simply, machine learning at great scale. The many layers of algorithms that make up these neural networks allow them to learn from their training data in a more nuanced way than shallower neural networks. This high level of nuance and detail is what allows them to be used for the sophisticated process of creating deepfakes.
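The “many layers” idea can be made concrete with a toy sketch. The code below (a minimal illustration, not any real deepfake system; all sizes and the random weights are arbitrary stand-ins for parameters learned from huge amounts of data) stacks several layers, each applying a learned transform followed by a nonlinearity, which is the basic shape of every deep network:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random weights stand in for weights a real model would learn from data.
    return rng.normal(scale=0.1, size=(n_in, n_out))

# Three stacked layers: this depth is what puts the "deep" in deep learning.
weights = [layer(64, 32), layer(32, 16), layer(16, 8)]

def forward(x, weights):
    for w in weights:
        x = np.maximum(0, x @ w)  # linear transform + ReLU nonlinearity
    return x

x = rng.normal(size=(1, 64))   # a stand-in 64-dimensional input "image"
out = forward(x, weights)
print(out.shape)               # the network's compact learned representation
```

Real deepfake models work the same way, just with millions of parameters, many more layers, and GPU-based training rather than random weights.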

There are two main methods of creating deepfakes. The first method, the encoder-decoder method, is commonly used to make the well-known face-swap type of deepfake. This method feeds the image and video data of two objects or individuals through an encoder algorithm that analyzes the similarities between the physical features of the two subjects and creates compressed images based on these similarities. These compressed images are then fed through decoder algorithms, one trained on each subject, that extrapolate the details of the subjects from their compressed data. To execute the face swap that has become the famous hallmark of deepfakes, one only needs to feed the encoded image data of one subject through the decoder trained on the other subject.
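The swap described above can be sketched in a few lines. In this illustration (matrices stand in for trained networks; the names `encode`, `decode`, and the image/code sizes are all assumptions for the sketch), a shared encoder compresses a face to a compact code, and each subject has their own decoder; routing subject A’s code through subject B’s decoder is the face swap:

```python
import numpy as np

rng = np.random.default_rng(1)
IMG, CODE = 256, 32  # flattened-image size and compressed-code size (arbitrary)

W_enc = rng.normal(scale=0.05, size=(IMG, CODE))    # shared encoder
W_dec_a = rng.normal(scale=0.05, size=(CODE, IMG))  # decoder "trained" on subject A
W_dec_b = rng.normal(scale=0.05, size=(CODE, IMG))  # decoder "trained" on subject B

def encode(image):
    return image @ W_enc          # compress to the shared feature code

def decode(code, W_dec):
    return code @ W_dec           # reconstruct a face from the code

face_a = rng.normal(size=(1, IMG))  # stand-in for one video frame of subject A

# Normal reconstruction: A's code through A's own decoder.
recon_a = decode(encode(face_a), W_dec_a)

# The face swap: A's code through B's decoder yields B's face
# carrying A's expression and pose.
swapped = decode(encode(face_a), W_dec_b)
print(recon_a.shape, swapped.shape)
```

In a trained system the encoder captures pose and expression while each decoder supplies its subject’s appearance, which is why the swapped output moves like A but looks like B.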

Figure 1: Diagram of encoder-decoder deepfake process by Author. Image from: Egor Zakharov on YouTube.


One example of this type of deepfake in the arts is a research project carried out by the Moscow Samsung AI Center in collaboration with the Skolkovo Institute of Science and Technology, which used the encoder-decoder method to animate famous artworks such as Leonardo da Vinci’s Mona Lisa and Ivan Kramskoy’s Portrait of an Unknown Woman. In the project, entitled “Few-Shot Adversarial Learning of Realistic Neural Talking Head Models,” the researchers pasted the artworks’ faces over video footage of people speaking, using facial landmarks such as eyes, nose bridges, and mouths as the shared features that the encoder algorithm compressed and the decoder algorithm extrapolated from. The results vary in accuracy and are most convincing when the artworks are matched with individuals whose facial features and face shape best resemble the portraits. Nevertheless, this is a successful integration of deepfakes into art, creating a new dimension of artworks that may be put to use in an audience engagement capacity in the future. Who wouldn’t line up to see the Mona Lisa talk?

The second method used to create deepfakes, the generative adversarial network, or GAN method, is used to create images of people and objects that don’t exist in real life. Like the encoder-decoder method, the GAN method relies on an exchange between two algorithms: the generator algorithm is trained to generate images from “random noise,” and the discriminator algorithm judges these images for realism and provides feedback to the generator algorithm, which learns from this feedback to create more realistic-looking images on the next try. Over many repetitions, the GAN method can produce extremely convincing images of complex objects such as faces that do not exist in reality.
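The generator-discriminator feedback loop can be demonstrated with a toy one-dimensional GAN. In this sketch (an illustration only: the target distribution, learning rate, and iteration count are arbitrary choices, and real image GANs use deep networks rather than these two-parameter models), the generator shifts random noise toward the real data, guided solely by the discriminator’s feedback:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0      # generator: fake = a * noise + b
w, c = 0.0, 0.0      # discriminator: D(x) = sigmoid(w * x + c)
lr = 0.01

for _ in range(3000):
    real = rng.normal(4.0, 0.5, size=16)      # samples of the "real" data
    z = rng.normal(size=16)
    fake = a * z + b                          # generator's current attempt

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    s_r, s_f = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * np.mean((1 - s_r) * real - s_f * fake)
    c += lr * np.mean((1 - s_r) - s_f)

    # Generator step: adjust a and b so D starts mistaking fakes for real.
    s_f = sigmoid(w * fake + c)
    a += lr * np.mean((1 - s_f) * w * z)
    b += lr * np.mean((1 - s_f) * w)

print(f"fake samples now center near b = {b:.2f}")
```

The same adversarial game, scaled up to convolutional networks and millions of images, is what lets GANs hallucinate photorealistic faces from pure noise.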

Figure 2: Diagram of GAN deepfake process. Source: Author.


An example of how GAN deepfakes can be used in the arts is Gen Studio, a collaboration between MIT, Microsoft, and the Metropolitan Museum of Art. In Gen Studio, users select images of objects from the Metropolitan Museum’s collection and “visualize the space between those pieces” by feeding those object images into the generator algorithm, which creates a somewhat realistic-looking image based on the discriminator algorithm’s feedback. This use of deepfake technology in conjunction with art objects for visual arts audience engagement is very promising, especially if the Met expands the selection of objects to include painting and sculpture in addition to decorative objects such as vases and armor. What if museum audiences could use this technology to create not just interesting teapots and wonky purses, but their own deepfake paintings and sculptures?

Drawbacks and Dangers

As with any relatively new technology, deepfakes still have weaknesses. First, deepfakes are not yet perfect: they often contain revealing flaws, such as strange blinking frequencies, inconsistencies in features such as hair or eyebrows, unrealistic skin texture, mismatches between voice and face, and lighting effects that break the laws of physics. Second, creating a good deepfake requires large amounts of data and computing power, so the average person is limited to making deepfakes with obvious flaws. Lastly, because deepfakes often look slightly unrealistic, they can produce an uncanny valley effect that viewers find off-putting.

In the wrong hands, deepfakes can wreak havoc. One of the most publicized negative uses is the practice of pasting celebrities’ likenesses onto the bodies of adult film actors to create deepfake celebrity pornography; audio deepfakes, meanwhile, have been used to commit fraud over the phone. In the age of fake news, deepfake technology can also be used to create unflattering or seemingly incriminating videos of political opponents, a danger that prompted California to outlaw political deepfakes during election season in 2019. The existence of deepfakes further creates an avenue of plausibility for claims that real video recordings are actually digitally doctored, undermining video’s value as a record of reality and as a form of evidence more believable than the easily altered photograph. While these negative implications are serious, there is a positive side to the deepfake coin. Through the GAN method, deepfake technology can create completely new images from noise, a valuable tool for making art; through the encoder-decoder method, it allows users to animate still images, produce videos of people and events that cannot be filmed in real life, and embody characters or other individuals in the name of art and creativity.

Case Study 1: Dalí Lives (The Salvador Dalí Museum, 2019)

Dalí Lives is The Salvador Dalí Museum’s digital reanimation of the deceased artist Salvador Dalí, made possible by deepfake technology. The interactive deepfake allows audiences to engage not only with Dalí’s art, but with the artist himself, who can answer questions, tell stories about his life, and even take selfies with visitors. Dalí Lives was accomplished through the encoder-decoder deepfake method: the museum trained encoder algorithms on still photos and digital footage of the artist, then used decoder algorithms trained on an actor with physical characteristics similar to Dalí’s in order to map the artist’s features and mannerisms onto the actor’s body. The convincing results can be viewed in the video below:

Figure 3: “Dalí Lives (via Artificial Intelligence).” Source: The Dalí Museum on YouTube.

This deepfake application, which has been on display in the Dalí Museum since 2019, could have further implications in the arts, both visual and performing. Animating deceased individuals through interactive video is hardly a new idea. However, using deepfakes to allow audiences to engage with artists who are deceased, or who are simply absent, lends a more realistic and interactive dimension to the technique. With further refinement, this technology could also be used to bring artworks such as painted and sculpted portraits to life, perhaps by combining the Dalí Museum’s techniques with those of the Moscow Samsung AI Center’s researchers. Vermeer’s Girl With A Pearl Earring could tell her life story, and presidential portraits could give history lessons. The audience engagement possibilities are immense.

Case Study 2: Wearing Gillian (Gillian Wearing, 2018)

Wearing Gillian is a 2018 short film by British artist Gillian Wearing, whose conceptual work focuses on themes of identity and the human experience. Much of Wearing’s work involves using makeup, costumes, and masks to embody famous figures such as Georgia O’Keeffe and Albrecht Dürer. Thus, it is fitting that she would reverse the process in Wearing Gillian by pasting her own face onto the bodies of actors who embody her in the short film. Wearing’s use of deepfakes challenges viewers’ perceptions of identity and reality as she discusses the alienation and discomfort she feels when “watching me being me.”

Figure 4: Wearing Gillian trailer. Source: Vimeo.

Wearing’s use of deepfakes in her short film has implications for photographers and filmmakers, who can use this technology to create digital masks for their subjects to “wear” while embodying a character or other figure. In an interview with Fast Company, Wearing discussed the similarities between the deepfake masks used in her video and the silicone masks and other practical effects she uses to mimic famous figures in her other work. From here, it is no stretch to imagine deepfakes replacing the costly CGI required to alter actors’ faces to look like their characters, something Hollywood is already beginning to take advantage of, as seen in the CGI likenesses of Grand Moff Tarkin and a young Princess Leia in Rogue One: A Star Wars Story. The technology Wearing used in her short film makes it possible to film or photograph anyone you have access to photos or footage of, even if they are unable to make it into the studio, an idea deepfakers online have already begun to explore.

Case Study 3: Sway: Magic Dance App (Humen, Inc., 2019)

Figure 5: Deepfake video of Hans Holbein the Younger’s Portrait of Henry VIII. Source: Author on Sway.

Sway: Magic Dance is a mobile app that allows users to create deepfake videos of themselves, their friends, their digital avatars, and their favorite animated characters dancing, skateboarding, and pulling other stunts they might not be able to achieve without some digital help. Users upload short, full-body video recordings of themselves (the images upon which the encoder algorithm is trained), and then choose from a range of viral TikTok dances and other preset actions to map their bodies onto (accomplished by the decoder algorithm). The end result is a somewhat wonky, full-body deepfake video of the user or their chosen character performing their chosen action. The irregularities in the videos come from the comparatively small amounts of data and computing power available to the algorithms; the short clips are no match for the huge data banks used in more sophisticated deepfake operations, and mobile phones hardly measure up to the computers used by professionals and dedicated hobbyists. Because of these flaws, and perhaps also due to privacy concerns, users seem to find it off-putting to upload their own likenesses to the app. A quick scroll through the Sway feed turns up video after video of anime characters dancing to Lil Nas X. Some enterprising users have taken it upon themselves to upload famous artworks as avatars, so I created two videos, one of myself and one of Hans Holbein the Younger’s Portrait of Henry VIII, each doing the “renegade” dance made famous on TikTok. See the results below.

Sway and similar dancing deepfake technology’s connection to the arts may not be immediately apparent, but the video of Holbein’s Portrait of Henry VIII shows that with deepfakes, it is possible to make an oil painting dance. Museums and theaters can harness this idea to engage younger or easily bored audiences by allowing them to play with artworks and characters (with the actors’ permission, or as digital mannequins wearing the costumes) to make their own potentially viral content, which can then be shared on the institution’s social media channels, drawing attention to the arts organization through its profile. A second potential use pertains to choreographers, filmmakers, and theater directors, especially in the socially distanced age of the pandemic. Choreographers may use Sway and similar technology to create deepfake backup dancers for demo videos when it is not safe to rehearse together in person. Directors may apply the same concept to demonstrate desired physicality or blocking to their actors. Filmmakers may use this technology to create deepfake extras and supporting actors during the pandemic, perhaps hiring actors to voice their parts and submit digital scans of their likenesses, which can then be pasted onto video footage of the filmmaker’s own body acting out the scenes. The possibilities are vast, and may extend beyond the pandemic as artists continue to explore ways to document their movements and ideas on the fly.

Resources

Cuseum. “3 Things You Need to Know About AI-Powered ‘Deep Fakes’ in Art & Culture.” Accessed April 14, 2021. https://cuseum.com/blog/2019/12/17/3-things-you-need-to-know-about-ai-powered-deep-fakes-in-art-amp-culture.

Beer, Jeff. “This New Deep Fake Video Is Both Advertising and a Piece of Art.” Fast Company, December 11, 2018. https://www.fastcompany.com/90279597/this-new-deep-fake-video-is-both-advertising-and-a-piece-of-art.

Brownlee, Jason. “What Is Deep Learning?” Machine Learning Mastery (blog), August 15, 2019. https://machinelearningmastery.com/what-is-deep-learning/.

The National. “Can Deepfakes Be Used for Good?,” March 3, 2020. https://www.thenationalnews.com/arts-culture/art/can-deepfakes-be-used-for-good-1.987522.

Cincinnati Art Museum. “Cincinnati Art Museum: Life: Gillian Wearing.” Accessed April 14, 2021. https://www.cincinnatiartmuseum.org/wearing.

Cornwell, Lauren. “Considering Biases in AI and the Role of the Arts.” AMT Lab @ CMU. Accessed April 14, 2021. https://amt-lab.org/blog/2019/3/a-meeting-of-the-minds-exploring-intersecting-issues-between-art-and-artificial-intelligence.

Salvador Dalí Museum. “Dalí Lives: Museum Brings Artist Back to Life with AI.” Accessed April 14, 2021. https://thedali.org/press-room/dali-lives-museum-brings-artists-back-to-life-with-ai/.

Salvador Dalí Museum. “Dalí Lives (via Artificial Intelligence).” Accessed April 14, 2021. https://thedali.org/exhibit/dali-lives/.

Dhillon, Sunny. “An Optimistic View of Deepfakes.” TechCrunch (blog). Accessed April 14, 2021. https://social.techcrunch.com/2019/07/04/an-optimistic-view-of-deepfakes/.

Dimick, Mikayla. “The Environment Surrounding Facial Recognition: Do the Benefits Outweigh Security Risks?” AMT Lab @ CMU. Accessed April 14, 2021. https://amt-lab.org/blog/2020/8/the-environment-surrounding-facial-recognition-do-the-benefits-outweigh-security-risks.

“Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.” YouTube video, 5:33. Posted by Egor Zakharov, May 21, 2019. https://www.youtube.com/watch?v=p1b5aiTrGzY.

“Gen Studio.” Accessed April 14, 2021. https://gen.studio/.

Hu, Luna. “Image Recognition Technology Use in Museums.” AMT Lab @ CMU. Accessed April 14, 2021. https://amt-lab.org/blog/2020/1/image-recognition-technology-in-museums.

Knight, Will. “The World’s Top Deepfake Artist Is Wrestling with the Monster He Created.” MIT Technology Review. Accessed April 14, 2021. https://www.technologyreview.com/2019/08/16/133686/the-worlds-top-deepfake-artist-is-wrestling-with-the-monster-he-created/.

Labonte, Rachel. “Rogue One’s CGI Tarkin & Leia Improved Via Deepfake.” ScreenRant, December 9, 2020. https://screenrant.com/rogue-one-cgi-tarkin-leia-video-deepfake-better/.

Lecher, Colin. “California Has Banned Political Deepfakes during Election Season.” The Verge, October 7, 2019. https://www.theverge.com/2019/10/7/20902884/california-deepfake-political-ban-election-2020.

Lyu, Lingxi. “A General Look on Artificial Intelligence Used in Museum Audience Engagement.” AMT Lab @ CMU. Accessed April 14, 2021. https://amt-lab.org/blog/2020/4/a-general-look-on-artificial-intelligence-used-in-museum-audience-engagement.

Mattout, Lucy. “Using AI Powered Art to Increase Social Equity.” AMT Lab @ CMU. Accessed April 14, 2021. https://amt-lab.org/blog/2019/2/artificial-intelligence-to-connect-with-our-communities.

Sample, Ian. “What Are Deepfakes – and How Can You Spot Them?” The Guardian, January 13, 2020. http://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them.

Somers, Meredith. “Deepfakes, Explained.” MIT Sloan. Accessed April 14, 2021. https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.

“Sway FAQs.” Accessed April 14, 2021. https://getsway.app/faq.html.

Team Glossi. “Deepfake: Art and Artifice.” Glossi Mag (blog), March 30, 2020. https://glossimag.com/deepfake-art-and-artifice/.

Waite, Tom. “Deepfake Technology Brings the Mona Lisa to Life and It’s Too Spooky.” Dazed, June 1, 2019. https://www.dazeddigital.com/art-photography/article/44678/1/deepfake-technology-brings-mona-lisa-to-life-too-spooky-russia-dali-einstein.

“‘Wearing Gillian’ Explores The Space Between Real And Fake, Art And Ad | Wieden+Kennedy.” Accessed April 14, 2021. https://www.wk.com/news/wearing-gillian-explores-the-lines-between-real-and-fake-art-and-ad/.

Yerebakan, Osman Can. “Gillian Wearing: Life.” The Brooklyn Rail, November 1, 2018. https://brooklynrail.org/2018/11/artseen/GILLIAN-WEARING-Life.