In A New Era Of Deepfakes, AI Makes Real News Anchors Report Fake Stories


“We are now interviewing the only survivor in the recent school shooting: TikToker Krishna Sahay,” CBS News anchor Anne-Marie Green appears to say at the top of the news segment. “That must have been absolutely traumatizing,” an interviewer in another video says to Sahay. “What was going through everyone’s head?”

Sahay pauses, before responding: “Bullets, duh!”

In another apparent news segment on the school shooting, this time from CNN, an interviewer asks Sahay: “How’d you live through that? You were reading a magazine or something during the shooting?”

“Reading?” Sahay replies. “I was emptying one!”

The TikTok and YouTube star is among a growing crop of social media users who have been enlisting generative AI and other software to produce seemingly real news segments from top anchors at major news outlets, from CBS Evening News’ Norah O’Donnell to journalists at CNN, the BBC and beyond.


Source: In A New Era Of Deepfakes, AI Makes Real News Anchors Report Fake Stories


Critics:

In cinema studies, deepfakes demonstrate how “the human face is emerging as a central object of ambivalence in the digital age”. Video artists have used deepfakes to “playfully rewrite film history by retrofitting canonical cinema with new star performers”. Film scholar Christopher Holliday analyses how switching out the gender and race of performers in familiar movie scenes destabilizes gender classifications and categories.

The idea of “queering” deepfakes is also taken up in Oliver M. Gingrich’s discussion of media artworks that use deepfakes to reframe gender, including British artist Jake Elwes’ Zizi: Queering the Dataset, an artwork that uses deepfakes of drag queens to intentionally play with gender. The aesthetic potential of deepfakes is also beginning to be explored.

Theatre historian John Fletcher notes that early demonstrations of deepfakes are presented as performances, and situates these in the context of theater, discussing “some of the more troubling paradigm shifts” that deepfakes represent as a performance genre.

Philosophers and media scholars have discussed the ethics of deepfakes especially in relation to pornography. Media scholar Emily van der Nagel draws upon research in photography studies on manipulated images to discuss verification systems that allow women to consent to uses of their images. Beyond pornography, deepfakes have been framed by philosophers as an “epistemic threat” to knowledge and thus to society.

There are several other suggestions for how to deal with the risks deepfakes give rise to, not only in pornography but also for corporations, politicians and others, of “exploitation, intimidation, and personal sabotage”, and there are several scholarly discussions of potential legal and regulatory responses in both legal studies and media studies.

In psychology and media studies, scholars discuss the effects of disinformation that uses deepfakes, and the social impact of deepfakes. While most English-language academic studies of deepfakes focus on Western anxieties about disinformation and pornography, digital anthropologist Gabriele de Seta has analyzed the Chinese reception of deepfakes, known as huanlian, which translates to “changing faces”.

The Chinese term does not contain the “fake” of the English deepfake, and de Seta argues that this cultural context may explain why the Chinese response has been more about practical regulatory responses to “fraud risks, image rights, economic profit, and ethical imbalances”.

An early landmark project was the Video Rewrite program, published in 1997, which modified existing video footage of a person speaking to depict that person mouthing the words contained in a different audio track. It was the first system to fully automate this kind of facial reanimation, and it did so using machine learning techniques to make connections between the sounds produced by a video’s subject and the shape of the subject’s face.
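The core idea behind Video Rewrite, learning a mapping from audio features to mouth shapes from paired footage and then applying it to a new audio track, can be illustrated with a toy sketch. This is not the Video Rewrite code: the feature vectors, dimensions, and the simple linear model below are all invented for illustration (real systems model phonemes and actual video frames).

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training footage": per-frame audio feature vectors paired with the
# mouth-shape parameters (e.g., lip openness and width) observed at the
# same frame. Both are invented toy features, not real measurements.
train_audio = rng.normal(size=(50, 4))
true_map = np.array([[0.5, 0.1],
                     [-0.2, 0.3],
                     [0.0, 0.4],
                     [0.1, -0.1]])
train_mouth = train_audio @ true_map + 0.01 * rng.normal(size=(50, 2))

def fit_audio_to_mouth(audio, mouth):
    """Least-squares map from audio features to mouth-shape parameters."""
    coef, *_ = np.linalg.lstsq(audio, mouth, rcond=None)
    return coef

def reanimate(coef, new_audio):
    """Predict mouth shapes frame by frame for an unseen audio track."""
    return new_audio @ coef

coef = fit_audio_to_mouth(train_audio, train_mouth)
new_track = rng.normal(size=(5, 4))   # audio the model never saw
mouth_shapes = reanimate(coef, new_track)
```

The learned map can then drive the rendering of mouth regions for each frame of the target video; the hard parts that this sketch omits are exactly the rendering and the phoneme modeling.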

Contemporary academic projects have focused on creating more realistic videos and on improving techniques. The “Synthesizing Obama” program, published in 2017, modifies video footage of former president Barack Obama to depict him mouthing the words contained in a separate audio track. The project lists as a main research contribution its photorealistic technique for synthesizing mouth shapes from audio.

The Face2Face program, published in 2016, modifies video footage of a person’s face to depict them mimicking the facial expressions of another person in real time. The project lists as a main research contribution the first method for re-enacting facial expressions in real time using a camera that does not capture depth, making it possible for the technique to be performed using common consumer cameras.

In August 2018, researchers at the University of California, Berkeley published a paper introducing a fake dancing app that can create the impression of masterful dancing ability using AI. This project expands the application of deepfakes to the entire body; previous works focused on the head or parts of the face. Researchers have also shown that deepfakes are expanding into other domains such as tampering with medical imagery.

In this work, it was shown how an attacker can automatically inject or remove lung cancer in a patient’s 3D CT scan. The result was so convincing that it fooled three radiologists and a state-of-the-art lung cancer detection AI. To demonstrate the threat, the authors successfully performed the attack on a hospital in a white-hat penetration test.

A survey of deepfakes, published in May 2020, provides a timeline of how the creation and detection of deepfakes have advanced over the last few years. The survey identifies that researchers have been focusing on resolving the following challenges of deepfake creation:

  • Generalization. High-quality deepfakes are often achieved by training on hours of footage of the target. This challenge is to minimize the amount of training data and the training time required to produce quality images, and to enable the execution of trained models on new identities (unseen during training).
  • Paired Training. Training a supervised model can produce high-quality results, but requires data pairing. This is the process of finding examples of inputs and their desired outputs for the model to learn from. Data pairing is laborious and impractical when training on multiple identities and facial behaviors. Some solutions include self-supervised training (using frames from the same video), the use of unpaired networks such as Cycle-GAN, or the manipulation of network embeddings.
  • Identity leakage. This is where the identity of the driver (i.e., the actor controlling the face in a reenactment) is partially transferred to the generated face. Some solutions proposed include attention mechanisms, few-shot learning, disentanglement, boundary conversions, and skip connections.
  • Occlusions. When part of the face is obstructed with a hand, hair, glasses, or any other item then artifacts can occur. A common occlusion is a closed mouth which hides the inside of the mouth and the teeth. Some solutions include image segmentation during training and in-painting.
  • Temporal coherence. In videos containing deepfakes, artifacts such as flickering and jitter can occur because the network has no context of the preceding frames. Some researchers provide this context or use novel temporal coherence losses to help improve realism. As the technology improves, the interference is diminishing.
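The temporal-coherence idea in the last bullet can be made concrete with a toy loss: penalize the difference between consecutive generated frames, so that flicker costs the network during training. A minimal sketch, assuming grayscale frames as plain arrays (the 8×8 “frames” below are invented placeholders, not real video):

```python
import numpy as np

def temporal_coherence_loss(frames):
    """Mean squared difference between consecutive frames.

    frames: array of shape (T, H, W), a sequence of T grayscale frames.
    A perfectly steady sequence scores 0; flicker and jitter raise it.
    """
    diffs = frames[1:] - frames[:-1]
    return float(np.mean(diffs ** 2))

steady = np.ones((4, 8, 8))           # no change from frame to frame
flicker = steady.copy()
flicker[1::2] = 0.0                   # alternate frames flip to black

steady_loss = temporal_coherence_loss(steady)
flicker_loss = temporal_coherence_loss(flicker)
```

In practice such a term would be added to the generator’s training objective alongside the per-frame reconstruction loss, trading a little per-frame sharpness for a smoother sequence.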

Overall, deepfakes are expected to have several implications in media and society, media production, media representations, media audiences, gender, law, and regulation, and politics. The term deepfakes originated around the end of 2017 from a Reddit user named “deepfakes”.

He, as well as others in the Reddit community r/deepfakes, shared deepfakes they created; many videos involved celebrities’ faces swapped onto the bodies of actresses in pornographic videos, while non-pornographic content included many videos with actor Nicolas Cage’s face swapped into various movies.

Other online communities remain, including Reddit communities that do not share pornography, such as r/SFWdeepfakes (short for “safe for work deepfakes”), in which community members share deepfakes depicting celebrities, politicians, and others in non-pornographic scenarios. Other online communities continue to share pornography on platforms that have not banned deepfake pornography. 

In January 2018, a proprietary desktop application called FakeApp was launched. This app allows users to easily create and share videos with their faces swapped with each other. As of 2019, FakeApp has been superseded by open-source alternatives such as Faceswap, the command line-based DeepFaceLab, and web-based apps such as DeepfakesWeb.com. Larger companies also began to use deepfakes.

Corporate training videos can be created using deepfaked avatars and their voices, for example by Synthesia, which uses deepfake technology with avatars to create personalized videos. The mobile app giant Momo created the application Zao, which allows users to superimpose their face on television and movie clips using a single picture. In 2019, the Japanese AI company DataGrid made a full-body deepfake that could create a person from scratch, which it intends to use for fashion and apparel.

As of 2020, audio deepfakes exist, as does AI software capable of detecting deepfakes and of cloning human voices after five seconds of listening time. Impressions, a mobile deepfake app, was launched in March 2020; it was the first app for creating celebrity deepfake videos from mobile phones. Deepfake technology can not only be used to fabricate messages and actions of others, but also to revive deceased individuals.

On 29 October 2020, Kim Kardashian posted a video of her late father Robert Kardashian, whose likeness in the video was created with deepfake technology. The hologram was created by the company Kaleida, which uses a combination of performance, motion tracking, SFX, VFX and deepfake technologies in its hologram creation.

In 2020, Joaquin Oliver, a victim of the Parkland shooting, was resurrected with deepfake technology. Oliver’s parents teamed up, on behalf of their nonprofit Change the Ref, with McCann Health to produce a deepfake video advocating for a gun-safety voting campaign.

In the message, Joaquin encourages viewers to vote. In 2022, Elvis Presley was resurrected on America’s Got Talent season 17 using deepfake technology. There have also been deepfake resurrections of pop-cultural and historical figures who were murdered, for example the Beatles member John Lennon, who was murdered in 1980…
