Caroline McGinnes might be one of the few people in the world who knows what it's like to lend someone a face. In a new HBO documentary, her eyes, nose, and mouth help cloak the identities of LGBTQ people in Chechnya, the predominantly Muslim republic in Russia. Essentially, McGinnes volunteered to become a deepfake, in a way few have seen before.
In Chechnya, LGBTQ people have faced significant persecution, including unlawful detentions, torture, and other forms of abuse. Because survivors can rarely reveal their own identities safely, the team behind the film Welcome to Chechnya turned to the same sort of technology typically seen in deepfake videos. They're using artificial intelligence to overlay the faces of volunteers onto those of survivors. This application of deepfake-like technology takes the place of more traditional ways of keeping sources anonymous, like having them sit in shadow or blurring their faces, and is almost unnoticeable. The tech also helps better display the emotions of the survivors.
While "deepfake" has become a shorthand term for a variety of technological techniques, it's generally understood as using artificial intelligence to alter video and audio to make it look like someone is saying or doing something they haven't actually said or done. The term comes from the name of a Reddit user who deployed machine learning to swap celebrities' faces into porn videos without their consent. But a broader industry has begun to promote similar forms of AI-assisted media manipulation, sometimes called "synthetic media," that aren't necessarily as nefarious.
Like that deepfake video of Barack Obama (or Mark Zuckerberg, or Kim Kardashian), the faces might not look quite right; deepfake videos tend to live in the uncanny valley. Welcome to Chechnya warns viewers that the technology is featured, and the face doubles can at times appear blurry, almost like watercolor. For the people whose faces appear in the film, the experience can be pretty surreal, according to McGinnes.
"They map out all the spacing on your face," she told Recode, "and they match everything: eyes, your jawline, everything."
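The film's team hasn't published its exact tooling, but the kind of facial mapping McGinnes describes is a standard first step in face-swapping pipelines, and open source libraries can do it. Here is a minimal sketch using Google's MediaPipe Face Mesh, offered only as an illustration of the general technique; the filename is a hypothetical placeholder.

```python
# A minimal sketch of facial landmark mapping using Google's open source
# MediaPipe library. This illustrates the general technique McGinnes
# describes; it is not the film's actual pipeline.
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

image = cv2.imread("volunteer.jpg")  # hypothetical placeholder filename

with mp.solutions.face_mesh.FaceMesh(
    static_image_mode=True,   # treat the input as a single photo, not video
    max_num_faces=1,
    refine_landmarks=True,    # adds extra detail around the eyes and lips
) as face_mesh:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # 478 (x, y, z) points covering the eyes, jawline, lips, and more:
    # "all the spacing on your face."
    print(f"Mapped {len(landmarks)} facial landmarks")
```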
Welcome to Chechnya, which debuts on June 30 on HBO and HBO Max, represents a rare positive example of deepfakes. With the help of deepfake technology, the film can shine a light on human rights abuses while minimizing the risk for the victims involved in the production.
Right now, deepfake technology is better known for harming, rather than helping, people. Women are by far the most likely to be hurt by it. One recent report from the research group Deeptrace found that almost all deepfakes found online appear in nonconsensual porn videos. Another major fear is that deepfakes can be used to impersonate political figures and push disinformation campaigns, exacerbating an already rampant problem of fake news on the internet.
Still, the use of deepfake-like technology to anonymize people may grow more popular, experts told Recode, complicating the debate over the ethics and regulation of this controversial application of artificial intelligence.
Deepfakes can provide a cloak of anonymity
The man behind Welcome to Chechnya's technology is visual effects expert Ryan Laney, who says the technology used in the film essentially "moves faces like marionette puppets." Put simply, the original facial movements of those featured in the documentary guide, with the help of deep learning, how the faces of the so-called doubles move.
"The eyebrows and eye shapes become something like brushstrokes," Laney explained. "So we took the content of subjects in the film and applied the style of the activists." (The film's team refers to the people who volunteered their faces as activists.)
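Laney's talk of taking the "content" of one face and applying the "style" of another echoes the vocabulary of neural style transfer, a well-known deep learning technique. As a rough, hypothetical illustration of that idea (not the film's actual code, which isn't public), here is a PyTorch sketch of a loss that preserves a subject's content, their expression and pose, while pushing the output toward an activist's style; the layer indices and style weight are common defaults, not values from the production.

```python
# A rough sketch of the "content vs. style" idea, in the spirit of
# Gatys et al.'s neural style transfer. Illustration only; the film's
# actual method is not public. Requires PyTorch and torchvision.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen pretrained feature extractor. Early layers capture "style"
# (texture, the brushstroke-like statistics Laney mentions); deeper
# layers capture "content" (structure, pose, expression).
features = vgg19(weights="DEFAULT").features.eval()
for p in features.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {1, 6, 11, 20}  # relu1_1 through relu4_1, a common choice
CONTENT_LAYERS = {22}          # relu4_2, a common choice

def extract(x):
    style, content = [], []
    for i, layer in enumerate(features):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i in CONTENT_LAYERS:
            content.append(x)
    return style, content

def gram(feat):
    # The Gram matrix summarizes feature correlations: the "style" statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def transfer_loss(output, subject_frame, activist_face):
    # Content loss keeps the subject's expression and pose;
    # style loss makes the rendered face resemble the activist.
    out_style, out_content = extract(output)
    with torch.no_grad():
        _, subj_content = extract(subject_frame)
        act_style, _ = extract(activist_face)
    content = sum(F.mse_loss(o, s) for o, s in zip(out_content, subj_content))
    style = sum(F.mse_loss(gram(o), gram(a)) for o, a in zip(out_style, act_style))
    return content + 1e4 * style  # the style weight is a tunable knob
```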
Volunteers had their faces filmed from a variety of angles, and their faces were then mapped onto the faces of people who appeared in Welcome to Chechnya.
Ultimately, he says, the idea was to create "a digital prosthetic where 100 percent of the motion, the emotion, and the essence of what the subject is doing is there." Essentially, the technique would subtly change a person's eye shape, but not the fact that they were blinking.
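That split between motion and identity mirrors the shared-encoder, per-identity-decoder design behind the best-known face-swap deepfakes: one encoder learns pose, expression, and lighting, and a separate decoder per person renders those onto a specific face. The PyTorch sketch below is a minimal, hypothetical illustration of that general architecture, not Laney's proprietary system.

```python
# A minimal, hypothetical sketch of the shared-encoder, dual-decoder
# face-swap architecture popularized by early deepfake tools. It is an
# illustration of the general technique, not Laney's system.
import torch
import torch.nn as nn

def down(cin, cout):  # halves spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 4, stride=2, padding=1),
                         nn.LeakyReLU(0.1))

def up(cin, cout):    # doubles spatial resolution
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                         nn.ReLU())

# Shared encoder: learns pose, expression, lighting (the "puppet strings").
encoder = nn.Sequential(down(3, 64), down(64, 128), down(128, 256))

def make_decoder():   # one decoder per identity: renders a specific face
    return nn.Sequential(up(256, 128), up(128, 64),
                         nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
                         nn.Sigmoid())

decoder_subject = make_decoder()   # trained to reconstruct the survivor
decoder_activist = make_decoder()  # trained to reconstruct the volunteer

# Training objective: each decoder reconstructs its own person from the
# shared latent code (random tensors stand in for real face crops here).
subject_batch = torch.rand(8, 3, 64, 64)
activist_batch = torch.rand(8, 3, 64, 64)
loss = (nn.functional.mse_loss(decoder_subject(encoder(subject_batch)), subject_batch)
        + nn.functional.mse_loss(decoder_activist(encoder(activist_batch)), activist_batch))

# The swap at inference time: encode the survivor's frame (keeping the
# blink and the expression) but decode with the activist's decoder
# (changing the face, e.g. the eye shape).
with torch.no_grad():
    swapped = decoder_activist(encoder(subject_batch))
```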
One other challenge was avoiding the uncanny valley, a term more often used to discuss how realistic robots should look. To address that question, the film brought in Thalia Wheatley, a social psychologist and neuroscientist at Dartmouth, and graduate student Chris Welker to test different approaches to the face cloaks and see how small pilot audiences responded. For instance, they tried out a cartoon-like adaptation of the survivors' faces that Wheatley likened to the animation in the movie Spider-Man: Into the Spider-Verse. Those performed the worst, the researchers said.
Another version involved masking the face but keeping the original person's eyes, which the researchers thought could help people connect with the subject. It didn't really work, either.
"We think that the answer is that you put one person's eyes in another person's face and the brain just can't resolve the incongruity, and it just feels unsettling and it kicks people out of the experience," Wheatley said. "But we don't know for sure."
The idea of using deepfakes to anonymize people is growing more popular
Laney says he's now working to democratize this new deepfake technology for good. Through a new company, Teus Media, he wants to turn his artificial intelligence into a "journalistic digital veil" for cloaking witnesses in danger, and he says he's already received interest. But Laney's not the only one pushing that approach.
Some startups, such as D-ID and Alethea AI, want people to use deepfake-like avatars to digitally cloak themselves. Researchers at the Norwegian University of Science and Technology and the University at Albany have also worked on similar technology.
Catalin Grigoras, the director of the University of Colorado Denver's National Center on Media Forensics, emphasizes that ethical questions surrounding synthetic media arise when it appears to take on aspects of reality. No one has an issue with fake faces generated in Hollywood films, he says, but issues emerge when they're used to create false news, for instance. As for Welcome to Chechnya, he considers the application of deepfake technology within reason.
"It's just a new movie that has this kind of visual effects," Grigoras said. "They are quite good, but still it is possible to detect them."
Sam Gregory, the program director of Witness, a human rights nonprofit that focuses on video and technology, says that activists he's spoken to find it one of the few compelling applications of the technology. Gregory points to women who have used virtual masks on Snapchat to share their experiences of sexual assault on video without revealing their identities.
"Over the last couple of years, it seemed like one of the most positive potential uses of deepfakes is to retain human features and the ability to convey emotion and preserve the essential humanity of people who faced horrible abuses," Gregory said.
Still, Gregory cautions that deepfakes used for anonymity are not without ethical questions. For instance, he wonders to what extent a face double should match the identity and characteristics, such as race, gender, and age, of the person whose identity they're obscuring. Gregory adds that while this technology might help activists, it could also be used to misrepresent and target them.
Expect more arguments for legitimate synthetic media
When asked about the questions surrounding deepfakes, Welcome to Chechnya's Laney says that his technology doesn't technically count, because deepfakes as a practice are inherently nonconsensual. To him, the artificial intelligence used in the film required both the agreement of those filmed to be anonymized and the consent of the activists who volunteered their faces. Laney also emphasized that no one is trying to trick the audience: Viewers know the technology is in place, and that it's being used to communicate the extent to which these people are in danger.
That echoes what a company called Synthesia has said. The startup, which sells similar synthetic media technology, has committed to not offering its deepfake-like technology for public use and has promised that it will never re-enact someone without their explicit consent, including politicians or celebrities for satirical purposes.
Currently, there is no federal law that explicitly regulates the production of deepfakes in the United States, though some states have expressed interest in regulating the technology, and some say existing laws may already apply. Meanwhile, social media companies like Facebook and Twitter have also attempted to create rules for moderating the use of the technology on their platforms.
But deepfakes are part of a developing industry. As the technology becomes more prominent, we should expect more people to argue for legitimate use cases, or, at the very least, applications that are not as terrifying as the deepfakes we're more familiar with. That will inevitably complicate how we choose to regulate them.