
Explosive news story from a deepfake “a matter of time”: Publishers gear up


Artificial intelligence is being used to create a new breed of highly realistic fake videos called deepfakes. They pose a significant threat to news publishers because the technology can be used to create videos of news events that never happened.

Several publishers, including The Wall Street Journal, the BBC and Reuters, are gearing up to combat this menace.

One new threat on the horizon is the rise of so-called “deepfakes” – a startlingly realistic new breed of AI-powered altered videos. These ‘deepfakes’ are created by taking source video, identifying patterns of movement within a subject’s face, and using AI to recreate those movements within the context of a target piece of video. As such, effectively a video can be created of any individual, saying anything, in any setting.

Hazel Baker, Global Head of User-Generated Content Newsgathering at Reuters

Reuters’ deepfake experiment

Baker, along with Reuters’ Global Head of Video Product Nick Cohen, recently created a fake video as part of a newsroom experiment. Their aim was to test how colleagues would respond to it. Insights gained from the experiment would inform the process for verifying the authenticity of user-generated videos at Reuters.

They filmed two interviewees separately, narrating the same script in English and French. The French audio was then transferred to the English-speaking interviewee’s video using deepfake technology. Video editors mapped the French interviewee’s mouth movements and used AI to recreate the whole lower portion of the English interviewee’s face.

Here are the videos:

English interviewee:

French interviewee:

Synthesized video:

Cohen and Baker then shared the synthesized video with their user-generated content team to get their responses. Those who were aware of the manipulation reported the following observations:

  1. The video appeared to slip in and out of sync with the audio.
  2. The shape of the mouth appeared unusual at times, particularly with sibilant sounds.
  3. The interviewee was unusually still. This, according to Baker, was deliberate: a still speaker with minimal facial expressions works best for such videos.

Those who were not aware that the video had been altered said they felt something was not right with it, but could not specify what. While the results of the experiment suggest that at present deepfakes are not sophisticated enough to pass informed human scrutiny, the technology is developing rapidly, and the day when they can may not be far off.

Evolving “faster than anyone’s ability to control it”

Speaking at the SXSW 2019 event in Austin, Texas, Cohen said, “The ability to spread such content is evolving much, much faster than anyone’s ability to control it. The results aren’t perfect by any means, but it’s a small leap of imagination to see how these techniques could be used, whether intentionally or accidentally, to manipulate.”

The BBC had run a similar experiment a few months earlier, creating a video using deepfake technology. The video shows broadcaster Matthew Amroliwala presenting the news in languages he does not speak. Amroliwala speaks only English, but in the video he can be seen speaking Spanish, Mandarin, and Hindi.

Jamie Angus, Director of the BBC World Service, told Campaign Asia-Pacific in an interview, “We see a lot of problems with our own news content being faked by other people and that poses a reputational challenge to us as international broadcasters. We’re asking ourselves how to detect it, how to automate the detection of it, how to team up with other international broadcasters to come up with a common set of standards that automate this kind of watching exercise.”

“All eyes watching out for disinformation”

Creating fake videos is an important step in countering the menace. It gives publishers first-hand experience of the technicalities involved in producing them, as well as their possibilities and limits.

The Reuters experiment revealed limitations that can be used to identify fake videos. But the technology is evolving, and to keep pace with it publishers may need in-house deepfake technology experts, or partnerships with organizations that work in this area.

Thankfully, we’ve not yet had an explosive news story resulting from a ‘deep fake’, but many in our industry fear it is simply a matter of time.

Nick Cohen, Global Head of Video Product at Reuters

The Wall Street Journal has set up an internal deepfakes task force called the WSJ Media Forensics Committee. The team consists of video, photo, visuals, research, platform, and news editors who have been trained in deepfake detection.

The publisher also hosts training seminars for its reporters and is developing newsroom guides for the purpose. Additionally, it is collaborating with academic institutions such as Cornell Tech to identify ways technology can be used to combat this problem.

“There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” says Natalia V. Osipova, Senior Video Journalist at The Wall Street Journal.

She also emphasizes the role of traditional reporting, suggesting journalists, “Reach out to the source and the subject directly, and use your editorial judgment.”
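Frame-by-frame inspection of the kind Osipova describes can be partially automated. The sketch below is a minimal, hypothetical illustration in plain Python, not any publisher’s actual tooling: it flags frames whose pixel-level change from the previous frame is an outlier, a crude stand-in for the unnatural shapes and added elements an editor would look for. Real footage would be decoded into pixel arrays with a video library; here, tiny synthetic grayscale frames and an arbitrary threshold stand in.

```python
# Hypothetical sketch: flag frames whose change from the previous frame
# is an outlier, as a crude aid to manual frame-by-frame review.
# Frames here are plain grayscale matrices (lists of pixel rows); real
# footage would be decoded into such arrays with a video library.

def frame_diff(a, b):
    """Mean absolute pixel difference between two equal-sized frames."""
    total = sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
    return total / (len(a) * len(a[0]))

def flag_anomalies(frames, factor=3.0):
    """Return indices of frames whose change versus the previous frame
    exceeds `factor` times the median change: candidates for review."""
    diffs = [frame_diff(frames[i - 1], frames[i]) for i in range(1, len(frames))]
    median = sorted(diffs)[len(diffs) // 2]
    return [i + 1 for i, d in enumerate(diffs) if median > 0 and d > factor * median]

# Tiny synthetic clip: slowly brightening 2x2 frames with one spliced-in frame.
frames = [[[10 + i, 10 + i], [10 + i, 10 + i]] for i in range(6)]
frames[3] = [[200, 200], [200, 200]]  # abrupt, out-of-place frame

print(flag_anomalies(frames))  # → [3, 4]: the splice and the return to normal
```

A flagged frame is only a candidate for closer inspection; legitimate cuts and camera motion trigger it too, which is why Osipova pairs such checks with editorial judgment and direct contact with sources.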

According to Christine Glancey, Deputy Standards Editor at the Journal, “Raising awareness in the newsroom about the latest technology is critical. We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.”