
How to counter AI-generated disinformation


Tech journalists can tell us how AI generates disinformation. We need specialist internet culture reporters to tell us why.

The Pope looks great in his big jacket, eh? The image of God’s earthly representative showing off his drip was all over the internet recently, shared with a mix of amusement and horror at what it represents.

The AI-generated image – one of many thousands currently being generated at scale – looks realistic enough to fool all but the most careful of observers, and that’s where the big issue arises.

Manipulated images are nothing new. We’ve had panics about airbrushing, Photoshop and deepfakes for years now. What is different is the sheer ease and sophistication with which almost anyone can use generative AI to create images, synthesised voices and even video, at scale. The opportunities for the deliberate spreading of disinformation are obvious, and there are many tech reporters busily explaining how AI goes about its creation.

Just as worrying, however, is the scale with which that creation is now possible: the balance of signal to noise is tipping wildly towards the latter. Public discourse is already flooded with vast amounts of misinformation, and a second deluge is on the way.

Newspapers obviously have a role to play here. They, and fact-checking sites like Snopes, will prove ever more vital in the age of AI-generated disinformation. But to counter it, they need to invest in new roles – not just building out tech desks, but hiring internet culture reporters.

Even if tech journalists help us understand how AI takes a prompt and creates audio of, say, Biden, Obama and Trump ranking the Dark Souls series, the public doesn’t necessarily know why it was made. In that specific case it’s simple – it’s funny, and that explanation suffices. But when the created image is something as semi-plausible as ‘the Pope wears a big coat’ or ‘Trump gets arrested’, there’s a need for the paper to uncover the motivation behind its creation and dissemination.

Sometimes that will be a serious investigation explaining why a state is flooding the internet with generated images designed to sow confusion about its activities. More often, knowing the internet, it will be something trivial like a generated image of two celebrities “caught” in a tryst. 

Despite the wildly varying seriousness of those cases, internet culture reporters will be required to explain them to a public that does not have the time to trawl subreddits and message boards to understand the context. And even when the created image is trivial, it will be incumbent upon newspapers to debunk it, building trust in their capabilities so that when something seismic happens the public knows where to turn to check whether photos and videos of the event are real.

Context at scale

Internet culture is, to put it mildly, wild. There are subreddits and communities so steeped in in-jokes and internal histories that they are mostly impenetrable to outsiders. I’m a member of r/TwoBestFriendsPlay, for instance, a still-extant subreddit dedicated to a YouTube channel that closed down four years ago. This image is perfectly understandable to me, but I’m willing to bet nobody reading this will be able to parse it. I’d also bet that you’re a member of online communities that are similarly opaque to newcomers.

And unfortunately, a lot of the motivation for someone to use AI to generate an image or video is going to be buried in communities like those. It might not be as insular as the TBFP example above, but we are going to need reporters who are used to parsing internet subcommunities to provide that context at speed.

It’s also important that those reporters stay abreast of the communities who are driving the tech forward. For the moment the r/StableDiffusion subreddit is mostly dedicated to, well, predictably, this…

[Embedded Reddit post: “The absolute state of SD twitter. People will start to have a very skewed view of AI generated content soon.” by u/Great_Ape_17 in r/StableDiffusion]

… but there are people on there who are pushing text-to-video forward, and who will be responsible for democratising the tech that enables anyone to create it. 

None of this is to say that publishers aren’t already doing some of this. As I mentioned above, there are many tech reporters grimly acknowledging that ChatGPT, Bard and generative AI will probably be their major beat for the next few years. The BBC has already recognised the threat, with its dedicated misinformation reporter Marianna Spring and a just-announced campaign for its BBC Bitesize service that aims to protect kids from this specific type of disinformation.

BBC Creative’s Executive Creative Director, Rasmus Smith Bech, says: “At the BBC we have a duty to protect kids from misinformation online. Don’t Learn Off Randoms explores the crazy world of randomness on the internet, from unicorn-believers to Flat Earthers, offering up the trustworthy services of Bitesize as an antidote to the wild west of the wider internet.”

In fact, Newsweek, Rolling Stone and many other titles already have internet culture correspondents. A lot of other titles aren’t just debunking; they’re documenting how they do it in the name of media literacy. We have had experts in navigating 4chan and other shitposting boards for years. That’s all great, effective and laudable.

But in the face of this new wave of disinformation, the industry needs many more reporters embedded in digital communities, reporting not just on how AI-generated art is created, but on why. As people look for answers, the context is just as important as the content.

Republished with kind permission of Media Voices, a weekly look at all the news and views from across the media world.