Over the past decade social media has provided a rich source of news for media companies. Yet the phenomenon of fake news, and in particular the emergence of deep fakes, has meant that organisations have had to adapt their approach.
Hazel Baker is Global Head of UGC Newsgathering at Reuters, and leads a team which sources user-generated content from across the globe. At DIS2020, she will be focusing on the process Reuters undertakes to ensure that its UGC is not just newsworthy, but accurate and reliable.
Here she talks about the challenge of deep fakes and how new technologies like 5G and blockchain will impact newsgathering.
What has been your career path so far? How did you land at Reuters?
Prior to joining Reuters, I spent a decade at Sky News, where I worked as a multimedia producer, a news editor and a digital editor. Throughout this time I developed a passion for user-generated content and social media newsgathering, as I saw what a profound impact it could have on storytelling. When the opportunity arose at Reuters to build on my experience in this area by growing a global UGC newsgathering team, it seemed like the perfect move.
Your role is Global Head of UGC Newsgathering. How does Reuters deploy user-generated content? And what type of content do you seek out?
At Reuters we understand the power of UGC in communicating news events to audiences, and we gather user-generated content from all regions of the world, in all languages, across a vast range of stories. Events that are witnessed by large numbers of people tend to bring the most UGC. We see a lot of content captured by residents during severe weather, for example. This year we’ve sourced eyewitness media from those affected directly by Cyclone Idai, Typhoon Hagibis, Hurricane Dorian and other meteorological events. It was a critical part of our coverage of these stories, bringing home the impact of watching your neighbourhood being destroyed.
UGC is also critical at times of major spot news events. We sourced the first pictures from residents showing the aftermath of the strikes on the Aramco oil facility in Saudi Arabia. We secured dramatic footage showing the shooting of a protester in Hong Kong. And we obtained alarming video shot by a passenger on board a stricken cruise ship that was rolling heavily after an engine failure.
At the other end of the news scale, we continually look for spontaneous moments of life that amuse, entertain and move viewers.
Authenticity and credibility are so important in gathering UGC. How do you ensure that what you choose to run is real and not fake?
We have a huge responsibility to make sure the materials we distribute are authentic and credible – and we take this responsibility extremely seriously. We have a rigorous workflow to verify UGC, which begins with an effort to find the original source of any content that interests us. Only by questioning the original source can we obtain unmodified source files and ask in-depth questions about what the author of the content saw and heard.
Simultaneously, we seek out corroborating imagery and reports, run reverse image searches, make geolocation checks and consult colleagues with regional or subject expertise relating to the story.
We essentially have to build up a case for verification. Once we have assembled enough pieces of evidence that a piece of content is authentic, and we have secured rights to distribute it, we run it – and tell our clients what we know about how it was obtained.
One of the key problems facing news organisations is the arrival of deep fakes. Can you explain how they work and what you are doing to counter them?
Deep fakes, otherwise known as synthetic media, apply the power of deep learning to computer-generated imagery (CGI) to produce realistic images and footage. We have seen a number of notable hoax videos that use CGI, but up until now it has taken time and great skill to produce convincing fakes. Deep fake technology has the potential to change this, allowing fakes to be made ever more quickly and easily as the technology develops.
You conducted an interesting experiment around deep fakes – can you explain what you did and what were its key findings?
As we began to seriously consider our preparedness at Reuters to deal with deep fakes, we realised we had very few examples to really study. We therefore decided to create our own. We wanted to learn what it takes to make a deep fake – and whether our current workflow could detect them. By working with a London-based AI start-up, we produced a piece of synthetic video that we were able to show our team and spark discussion.
Our key finding was that familiarity was crucial. Colleagues who knew in advance that the video was a deep fake were more readily able to detect red flags such as audiovisual sync issues and unnatural facial movements than those who viewed it unaware.
Are technologies being developed that can counter this? Possibly blockchain? Or is the answer going to be driven by humans?
Journalists are not the only group concerned by the prospect of deep fakes, and so there are a growing number of research projects underway with a view to developing a solution to detect synthetic media. Blockchain may have a role to play in proving the veracity of certain types of content, but it is unlikely to be a solution for much of the UGC that crosses our desk. It seems extremely likely that we will continue to need an approach that combines technology and human intelligence.
How do you think UGC will evolve in the future? What impact, if any, do you think 5G will have?
I think UGC will evolve as smartphone usage continues to grow and networks improve. I anticipate that the arrival of 5G may have a dramatic impact – as people may be able to send large media files from the scene of news events more easily. This may assist in the verification process. Livestreaming is also likely to improve in quality. I’m optimistic that faster networks will help us to tell even more stories through eyewitness media.