
Misinformation: “We can’t be seduced by the idea that it’s too complicated”

The digital space is all too familiar with the issues caused by misinformation and disinformation. So far, attention has mostly focused inwards, looking at the part advertising plays in funding unreliable content and the spend siphoned away from quality publishers. But while strong concerns around revenue are understandable, it's clear the problems reach much further.

Multiple events highlight a growing link between polarising, dubious media and real-world consequences. These include not only the US Capitol attack driven by claims of election fraud and social media-fuelled uncertainty about COVID vaccination, but also online prejudices that have influenced many hate crimes against minority communities, especially on the basis of belief.

All of which underscores the industry’s collective responsibility to tackle these issues. This is why we gathered varied leaders and specialists at our latest CRUNCH 4.4 webinar – moderated by Harriet Kingaby, Co-Chair at The Conscious Advertising Network – to discuss what’s making the tide of misinformation and disinformation so hard to stem, and how the content agenda can be reframed to help advertisers and audiences make informed choices.

Here are the top three takeaways:

1. Misleading tactics are getting smarter

Disinformation is a lucrative business: as pointed out by Clare Melford, Co-Founder and Executive Director of the Global Disinformation Index (GDI), ad spend flowing to misleading sites had climbed to $235 million a year by 2019. This means there are powerful financial (as well as ideological) incentives for providers to ensure content appears credible and achieves maximum reach.

Tactics aren’t always sophisticated. One popular method cited by Director of Faith Matters, Iman Atta, is simply posing as genuine news sites, a route she has observed among those promoting far-right stories to gain recognition as trusted investigative sources. But Atta has also noted more complex approaches. As recently as 2018, some sites were reproducing anti-Muslim articles with the same tag words and backlinks, successfully manipulating Google’s systems into awarding high search rankings for terms including ‘Muslim’ and ‘Islam’.

In addition to amplifying exposure, these activities make it harder for readers to tell the difference between objective and biased content, particularly when combined with methods such as repackaging existing news to change its emphasis and demonise certain communities. Rita Jabri-Markwell, Chief Adviser and Lawyer at the Australian Muslim Advocacy Network, feels the “diet put out by such information operations” is troubling on many fronts: it not only offers proof points for prejudices, but also allows bad actors to evade the measures and filters of online platforms by using dehumanising storytelling techniques rather than explicit hate speech.

2. We need more than just self-discipline

Awareness is growing that taking on advanced subversion techniques will require concerted effort. In fact, leading buy-side forces such as ISBA have been consulting with platforms and organisations like the AOP for several years to determine which practical steps are needed to limit the funding and spread of misleading content.

For ISBA’s Director of Media, Steve Chester, the conclusion is that self-regulation alone won’t be enough. While measures implemented by platforms are positive, he believes letting them “mark their own homework” isn’t feasible, arguing instead for a blend of internal and external guidelines. On the official side, ISBA’s long-running calls for independent standards have resulted in the UK’s draft Online Harms Bill, which Chester views as crucial due to its cross-platform remit and penalties: £18 million or 10% of global revenue for failing to remove harmful content. His enthusiasm is tempered, though, by the acknowledgment that the bill is “imperfect but the right start.”

Self-imposed governance will likely follow the lead of institutions such as the Global Alliance for Responsible Media (GARM), which has developed 11 categories advertisers can use to identify unsafe content and set their own risk thresholds. GDI’s solution, combining human review with artificial intelligence to assess risk, also presents possibilities for improving clarity around the sites that ads support. Questions remain, however, about how to strike the best balance for all. As summarised by Atta: “This is not about curtailing the right to report, it is about challenging inaccurate and divisive articles with no factual basis behind them”.

3. Push for collaboration across communities

For now, there is a tension to overcome between the urge to move faster and the desire to retain autonomy. Experts agree that mobilising defences after misinformation and disinformation have gained traction is not effective — what Marie Helly, Head of Beyond Fake News at the BBC, calls “trying to pull the rabbit back once it’s running”. But eagerness to address these issues also conflicts with resistance to being coerced into shared rules.

Where the general consensus does settle, however, is on the importance of coming together to maintain a high benchmark and make progress. Publishers have the means to produce quality, factual content and present a counter-narrative that consumers can engage with if they wish; as Helly asserts, “providing clear answers but not telling them what to think or who to trust.” Equally, advertisers can proactively strive to enhance their understanding of online content and carefully judge where they do, and don’t, want to focus their investment.

Critically, the panel speakers at CRUNCH 4.4 feel keeping the lines of communication open is essential. To provide fair representation, it’s vital to ensure the voices most affected by harmful content are included in the conversation. And to increase the chances of finding a workable resolution, it will be critical to continue talking openly and welcoming different perspectives.

The best call to action comes from Jabri-Markwell: “We can’t be seduced by the idea that it’s just too hard, complicated, or a slippery slope. Collaboration across communities, publishers, platforms, and advertisers will help us find credible paths forward. We welcome the opportunity to carry on this discussion.”

Well said.

Richard Reeves
Managing Director, AOP

The UK Association for Online Publishing (AOP) is an industry body representing digital publishing companies that create original, branded, quality content. AOP champions the interests of media owners from diverse backgrounds including newspaper and magazine publishing, TV and radio broadcasting, and pure online media.