
How AI tools already contribute to daily news: Insights from Semafor, BBC and others


A look at practical use cases from different media outlets

Artificial intelligence is changing how news is reported and delivered to audiences. One of the most powerful and widely discussed AI tools is ChatGPT, which gained immense popularity thanks to its ability to generate fluent, convincing natural language text.

As AI-driven technology continues to advance, it assumes an increasingly significant role in news media. We spoke with media managers and experts from Semafor, the BBC and other organisations to look at how different newsrooms already use ChatGPT and other AI tools in their work.

How ChatGPT benefits news organisations

ChatGPT and its underlying technology are already benefiting newsrooms in multiple ways, including:

  • Language generation: ChatGPT can generate natural language text in response to a prompt, which can be useful for producing news articles or summaries quickly and efficiently. This can be especially valuable for breaking news, where speed is essential, freeing reporters to focus on more analytical and investigative pieces.
  • Text classification: ChatGPT can classify text according to various categories, such as topic, sentiment, or tone. This can be useful for identifying trending topics or analysing public opinion on certain issues. By understanding what people are talking about, media and newsrooms can tailor their coverage to meet audience interests and needs.
  • Summarisation: ChatGPT can condense long texts into shorter versions, which can be useful for creating news briefs or bulletins. This feature can save time and resources while giving audiences a quick and easy way to stay informed (a minimal API sketch follows this list).
  • Translation: ChatGPT can quickly translate text from one language to another, which can be useful for newsrooms that cover international events or operate in multilingual regions. This possibility can help break down language barriers and make news more accessible to a global audience.
  • Personalisation: ChatGPT can be trained on specific datasets to provide personalised content recommendations or to tailor news stories to specific audiences – thus helping media and newsrooms to build stronger relationships with their audiences by delivering content that is relevant and engaging.
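
As a concrete illustration of the summarisation use case above, here is a minimal sketch of how a newsroom might call a ChatGPT-style model through the OpenAI Python SDK (version 1 or later). The model name, prompt wording and three-sentence limit are illustrative assumptions, not a recommendation; classification or translation would reuse the same request pattern with a different instruction.

```python
# Minimal sketch: summarising an article with the OpenAI Python SDK (v1+).
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarise(article_text: str) -> str:
    """Ask a ChatGPT-style model for a three-sentence news brief."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are a news editor. Summarise the article "
                           "below in three sentences for a news brief.",
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response.choices[0].message.content

# Classification or translation would swap only the system instruction,
# e.g. "Classify the topic of this article" or "Translate into French".
```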

Of course, the use of generative AI is fraught with challenges, including factual errors, bias and a lack of transparency around data privacy. The consensus is that generative AI is unlikely to fully take over content creation anytime soon.

Still, over the past six months newsrooms have come up with various practical use cases for ChatGPT and other generative AI tools that play to the technology’s strengths while remaining conscious of its limitations.

How newsrooms have been using ChatGPT and other AI tools

Multiple news outlets and publishers have experimented with or adopted various AI technologies, including GPT-based models, to enhance their news reporting, analysis, and delivery.

The Washington Post, for instance, has used the Heliograf language model to generate news articles and alerts for its readers. Heliograf can craft simple stories on topics such as high school football games, local elections, or Olympics results, freeing up reporters to cover more complex and in-depth stories.

Bloomberg has also developed its own AI-based news writing tool, called Cyborg, which can create simple breaking news stories and summarise and translate news.

AI can also analyse the emotional tone of news articles, a feature used by The New York Times in its tool Project Feels to suggest ways to make stories more engaging or relatable to readers. 

During a session at the 2023 International Journalism Festival in Perugia, Gina Chua, Semafor’s executive editor, argued that AI opens up new possibilities for reimagining news. She emphasised the potential to rebuild articles and transform the news experience in ways that go beyond simply automating existing practices. Indeed, the publisher’s “Witness” format, a series showcasing first-hand testimonies of people directly affected by significant global events, uses Stable Diffusion, an AI image-generation engine, to create visuals that accompany each account, letting viewers see the witness’s story illustrated as it unfolds.
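
Semafor has not published the details of its production pipeline, but as a rough sketch of the underlying technique, here is how text-to-image generation with Stable Diffusion typically looks using the open-source Hugging Face diffusers library. The checkpoint and prompt below are illustrative assumptions, not Semafor’s actual tooling.

```python
# Rough sketch of text-to-image generation with Stable Diffusion via the
# open-source Hugging Face diffusers library. Illustrative only; this is
# not Semafor's actual production pipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a GPU; use "cpu" (much slower) otherwise

# An editor-written prompt describing one moment of a witness's account.
prompt = "hand-drawn illustration of a crowded railway station at dusk"
image = pipe(prompt).images[0]
image.save("witness_scene.png")
```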

Joe Posner, Head of Video at Semafor, believes “there’s so much more possible in the world of non-fiction filmmaking when you use animation and motion design to their fullest potential – both in terms of showing real evidence, and in illustrating memories. I’m sure AI-driven tools will help with both! But one extremely exciting part of the open source, generative tools like Stable Diffusion and Runway and beyond is that they can make narrative, illustrative animation so much more possible within the budget and time constraints of a newsroom”, he told The Fix in an email interview.

A visually engaging piece built on somebody’s eyewitness account can be extremely moving, but such treatments used to be out of reach within a newsroom’s budget and time constraints. Posner’s hope is to use these tools to tell more human stories in more engaging ways.

“We’re still experimenting how that works – we did a few videos that were totally about the animation and character, where more recently we’ve been working it into a larger story”, Posner says.

Video making aside, freelance journalist Francesco Oggiano writes in his recent newsletter that while AI cannot effectively draft or write content on historical events, it can provide starting bullet points, lists of tools and links (e.g., “10 useful Chrome extensions for productivity”). AI is therefore “destined to have few practical everyday applications until it integrates with the platforms we already use” – in other words, until it fades into the background and we no longer notice it.

What about news organisations that have yet to routinely adopt these AI tools, like the BBC? To shed some light on this, we reached out to Laura Ellis, the BBC’s Head of Technology Forecasting. According to Ellis, the BBC’s approach to AI adoption is more cautious. While the broadcaster has incorporated synthetic voice technology into its weather product, it has not yet used generative AI for content that is broadcast or published online. For now, the BBC is primarily exploring different possibilities and testing the capabilities of AI tools, such as using them to summarise information and generate ideas.

The power of the small prototype

Ariane Bernard, lead of the INMA Smart Data Initiative, confirms that AI text generation is becoming more and more common; she can point to various news organisations that have been experimenting with generative AI for at least a couple of years. “More organisations are joining, which is natural considering that as any technology matures (becomes better, cheaper etc), it’s bound to be adopted by more organisations,” she told The Fix.

The news organisations that have recently been exploring the possibilities of generative AI are approaching it as discrete experiments to determine its potential benefits. Companies like Schibsted, BuzzFeed, Gannett and many others have been developing small-scale prototypes to assess the outcomes and implications for their respective organisations.

However, while some newsrooms openly disclose their use of AI, others may not be as transparent. CNET, for example, used generative AI to produce service-oriented articles without clearly indicating that they were automatically generated. As a natural consequence, it has become increasingly common on social media for readers to question the reliability of news sources, especially when it comes to AI-generated content.

Readers regularly flag suspected AI-written articles on social media. Credits to @lukemaynard.

In Bernard’s view, there is no need for excessive concern, because the situation is not necessarily about “hiding the evidence”. She explains that since we are dealing with a relatively new set of use cases that are still being explored on a small scale, many organisations have not yet fully established how they should disclose their use of AI tools. Formulating appropriate language to explain something quite complex to non-technical users requires time and careful consideration. “Just slapping a line that says ‘This was generated using AI’ doesn’t actually do that much”.

Bernard also notes that most organisations are not experimenting with AI-generated content going directly into production. CNET may have done so, but in general, Bernard is more familiar with organisations that incorporate a “human in the loop” approach. This means there is human oversight of the AI-generated content, either to discard content that is incorrect or to make necessary adjustments. The involvement of a human in approving and reviewing the AI-generated content adds another dimension to the disclosure process: “All these issues are on the minds of publishers as they explore this brave new world where AI-assisting technologies are coming to augment the work of humans.”
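
As a simplified sketch of the “human in the loop” pattern Bernard describes: the model drafts, but nothing reaches production until an editor explicitly approves or amends the text. Every function name below is a hypothetical placeholder; a real integration would hook into a newsroom’s own CMS.

```python
# Simplified sketch of a "human in the loop" gate: AI drafts, a human
# editor must approve or amend before anything is published.
# All function names are hypothetical placeholders.

def generate_draft(prompt: str) -> str:
    """Placeholder for a call to a generative model (see earlier sketch)."""
    raise NotImplementedError

def publish(text: str) -> None:
    """Placeholder for pushing approved copy into a CMS."""
    raise NotImplementedError

def review_and_publish(prompt: str) -> None:
    draft = generate_draft(prompt)
    print(draft)
    verdict = input("Approve draft? [y]es / [e]dit / [n]o: ").strip().lower()
    if verdict == "y":
        publish(draft)  # editor explicitly approved the AI draft
    elif verdict == "e":
        publish(input("Corrected text: "))  # editor amended the draft
    # anything else: the draft is discarded and never reaches production
```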

Anna Sofia Lippolis

This piece was originally published in The Fix and is re-published with permission.