
Deepfakes can put the credibility of news organizations at stake


We are now entering an era in which the video footage we see with our own eyes can be manipulated in highly sophisticated ways. Fake news has a powerful new ally: Artificial Intelligence. The combination has the potential to create forged media that looks so authentic that even seasoned journalists may fall for it; such forgeries are called “deepfakes”.

This creates a serious challenge for news organizations everywhere that are already battling the menace of fake news. As a proactive step, the BBC recently launched Beyond Fake News, a project to fight back against disinformation. The Wall Street Journal, which has gone on record calling fake news a threat to journalism, has assembled an internal task force of photo, video, research and news editors who are being equipped with training and tools to identify deepfakes.

Deepfake video technology allows users to swap the faces of two people to create lifelike footage that appears to show a person doing everyday things such as speaking and moving, when in reality the footage is simulated.

The results can be highly persuasive, as you can see from the deepfake video of former President Obama below. And this is just one small instance of the capabilities of deepfake video technology.

Doctored videos, a precursor to “true deepfakes”

In the recent controversy over the White House suspending CNN reporter Jim Acosta’s press pass over alleged misconduct, video technology played a sinister role. The White House justified its unprecedented step by releasing a video that purports to show Acosta misbehaving with an intern.

The video, however, is suspected to have been doctored. According to Storyful, a social media intelligence agency that sources and verifies insights by analyzing digital content, “different frames appear in the video shared by the White House, that seem to exaggerate the actions of Acosta”.

Check out USA Today’s analysis below:

The controversy over the Acosta video is a pale version of what we’ll face if true deepfake video gets into the world’s information bloodstream.

Tom Kent, former Standards Editor, Associated Press

Deepfake technology is approaching a level of sophistication at which it can fool even an informed and suspicious viewer. For now, the forged videos don’t look completely natural, especially in their facial expressions and movements. They also have a lower resolution than the original footage.

But as the technology improves, the forgeries will become much harder to detect. When fake videos become as crisp and clear as the original, it will be very tough to establish legitimacy.

How news organizations are vulnerable

A comprehensive survey by the Video Advertising Bureau found that television news is regarded as the most trusted source of information.

But deepfakes can be used to deceive news organizations and undermine their credibility. Imagine the upheaval if a fake video showed soldiers committing crimes against civilians, or a politician making a damaging confession fabricated with AI-simulated audio.

When a publication unwittingly bases a news story around a fake video, the error could damage its reputation and trustworthiness. Journalists are vulnerable as individuals because people who want them discredited can potentially do so by creating deepfakes of them in compromising situations.

An MIT study investigating the diffusion of false content on Twitter found that false stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.

What’s more troubling is the likelihood that in the future anyone, even those with no technical know-how, will be able to use the technology to create deepfakes.

Preparing for the onslaught of deepfakes

According to Christine Glancey, a Deputy Editor on the Ethics and Standards team at WSJ, it’s critical to raise awareness about deepfakes in the newsroom. She says, “We don’t know where future deepfakes might surface so we want all eyes watching out for disinformation.”

Despite the present uncertainty, news organizations should consider multiple approaches to authenticate media. “There are technical ways to check if the footage has been altered, such as going through it frame by frame in a video editing program to look for any unnatural shapes and added elements, or doing a reverse image search,” said Natalia V. Osipova, a Senior Video Journalist at the Journal. The best option is often traditional reporting: “Reach out to the source and the subject directly, and use your editorial judgment.”
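One of the technical checks Osipova describes, going through footage frame by frame, can be partly automated. The sketch below is a minimal, hypothetical illustration in plain Python: it assumes frames have already been decoded into flat lists of grayscale pixel values (real footage would need a video-decoding library), and flags consecutive frames that are nearly identical, since duplicated frames can be a sign that a clip was altered to exaggerate motion. The function names and threshold are illustrative assumptions, not part of any newsroom tool.

```python
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two same-sized frames."""
    assert len(frame_a) == len(frame_b), "frames must have the same pixel count"
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def flag_duplicate_frames(frames, threshold=1.0):
    """Return indices of frames nearly identical to the preceding frame.

    A run of such frames in ordinary footage is suspicious: cameras rarely
    capture pixel-identical consecutive frames, but doctored clips sometimes
    repeat frames to change apparent timing.
    """
    return [
        i for i in range(1, len(frames))
        if mean_abs_diff(frames[i - 1], frames[i]) < threshold
    ]

# Toy example: four 4-pixel "frames"; frame 2 exactly repeats frame 1.
frames = [
    [10, 20, 30, 40],
    [12, 22, 33, 41],
    [12, 22, 33, 41],   # suspicious duplicate
    [50, 60, 70, 80],
]
print(flag_duplicate_frames(frames))  # → [2]
```

In practice an analyst would pair an automated pass like this with the manual and editorial checks described above; a flagged frame is a prompt for human inspection, not proof of tampering.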

What’s next?

A deluge of deepfakes is coming; that much is evident. Also evident is that deepfakes are getting better and better, and that increasingly sophisticated technology will be required to figure out what’s true and what’s not.

Regular people don’t have the time or the means to sift falsehood from truth through rigorous technical analysis. So it falls to our newspapers and universities to lead the battle against deepfakes, as these are the institutions whose mission is to make people aware of what is happening in the world, and what may happen. They have the training and the expertise, and in many cases the credibility, to do it right.