Innovation in Media: Review, Reinvent, Restart

Magda Abu-Fadil
Jun 10, 2024

--

News media should adopt and adapt innovations drawn from the best generative artificial intelligence (AI) examples, choose appropriate business templates, pick suitable paywalls, focus on fact-checking, and cope with big tech's appetite for content to feed its learning models.

The advice comes from Juan Señor and Jayant Sriram, editors of the Innovation in News Media World Report 2024–25, who aim to help publishers make actionable transitions.

Innovation in News Media World Report 2024–25 (courtesy Innovation Media Consulting Group)

The report, an annual survey produced by Innovation Media Consulting Group for the World Association of News Publishers (WAN-IFRA), provides valuable insights into the tools now available, from purpose-built generative AI to older automation and analytical tools enhanced with it, that can help launch AI-powered newsrooms.

“In our many years tracking media innovation, no topic has generated as much research, surveys, best practices, and case studies in such a short period as the use of generative AI in newsrooms,” the authors said, noting that the elephant in the room remains the prospect that the technology will cheaply replace journalism jobs.

They cautioned that various surveys across industries showed a majority of employees are using generative AI without any organizational guidelines.

Publishers must unite to firmly maintain that AI will never substitute for reporting stories, a task that requires the conscience and intelligence machines simply cannot possess. Yet, for the multitude of other ways in which this technology can make the work of newsrooms more efficient and effective, there is already an abundance of information that can serve as a guide to getting started.

What are the other ways?

The list includes newsgathering applications (optical character recognition; speech-to-text and text extraction; trend detection and news discovery), news production (summarization; headline testing; copy editing and transcription; translation; image generation and article creation), news distribution, AI in broadcasting and TV channel generation, to name a few.
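As a simple illustration of the trend-detection idea mentioned above, and not any specific tool from the report, a newsroom script might compare keyword frequencies across two days of headlines to surface rising topics. Everything here (the sample headlines, the helper names, the threshold) is hypothetical:

```python
from collections import Counter
import re

# Minimal stopword list; a real newsroom tool would use a fuller one.
STOPWORDS = {"the", "a", "an", "in", "on", "of", "to", "and", "for", "as"}

def keyword_counts(headlines):
    """Count non-stopword tokens across a list of headlines."""
    words = []
    for h in headlines:
        words += [w for w in re.findall(r"[a-z']+", h.lower()) if w not in STOPWORDS]
    return Counter(words)

def trending(yesterday, today, min_rise=2):
    """Return words whose mention count rose by at least `min_rise` day over day."""
    before, after = keyword_counts(yesterday), keyword_counts(today)
    return {w: after[w] - before[w] for w in after if after[w] - before[w] >= min_rise}

# Hypothetical sample headlines
yesterday = ["Council debates budget", "Budget talks stall"]
today = ["Flood warning issued downtown", "Downtown flood closes schools",
         "Flood relief effort begins"]

print(trending(yesterday, today))  # → {'flood': 3, 'downtown': 2}
```

Real newsroom systems layer far more on top (entity recognition, source weighting, social signals), but the core pattern of comparing term frequencies over time is the same.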

AI-powered newsrooms (courtesy Innovation Media Consulting Group)

The report provides a handy gen AI toolbox from Bloomberg, The Washington Post, The Times of London, Reuters and Czech Radio.

It would have been helpful to include references or links to other news outlets using these particular tools, since the organizations mentioned in the chapter are primarily in Western countries, with the exception of India.

The report also presents detailed business models from which media may choose based on their needs and underlines the pros and cons of each.

Over the years at INNOVATION, we have always advocated that reader revenue should account for 40 percent of a successful digital business model. In order to build a vibrant business over that base, we recommend adopting or experimenting with at least three of the business models that we detail here in our flagship chapter. There have never been so many options!

Innovation business models (courtesy Innovation Media Consulting Group)

They range from content syndication to event organization, education, philanthropy, providing IT services, and more.

In a chapter headlined “Fake News: How to fight Disinformation and Earn Trust and Subscribers in the Process,” the report said 2024 “is shaping up to be a headline grabber, packed with global elections, ongoing conflicts, and the thrill of the Olympics. Amidst this flurry of events, the shadow of misinformation looms large, prompting governments worldwide to sound the alarm.”

Very true, but as a co-author of the UNESCO handbook Journalism, Fake News & Disinformation, I would caution against using the terms “fake news,” “disinformation,” and “misinformation” interchangeably.

This is because ‘news’ means verifiable information in the public interest, and information that does not meet these standards does not deserve the label of news. In this sense then, ‘fake news’ is an oxymoron which lends itself to undermining the credibility of information which does indeed meet the threshold of verifiability and public interest — i.e. real news.

The handbook defines disinformation as deliberate (often orchestrated) attempts to confuse or manipulate people through delivering dishonest information to them, combined with parallel and intersecting communications strategies. Misinformation generally means misleading information created or disseminated without manipulative or malicious intent.

So when AI is thrown in the mix, as featured in a chapter of the Innovation in News Media World Report, trouble arises, notably when the false information goes viral on social media.

Now more than ever, it has become clear that fact-checking and verification tools have to be integrated into news organisations and other platforms, emphasising transparency and addressing sophisticated methods of spreading false information.

AI and deepfakes (courtesy Innovation Media Consulting Group)

Tech ethicist Tristan Harris cautioned in The Economist that AI worsens ills caused by social media and the only cure is to impose change on AI firms’ incentives. He blamed a warped incentive structure within social media platforms, an invisible engine driving the psychological experience of billions of people.

Darker realities emerged. As social media tightened its grip on our everyday existence, we witnessed the steady shortening of attention spans, the outrage-ification of political discourse and big increases in loneliness and anxiety. Social-media platforms fostered polarisation, pushing online harms into offline spaces, with at times tragic, fatal results.

As it turns out, using engagement as an incentive can create profound dysfunction in society, culture and politics. Ten years of living with curation AI and its current incentives have been enough to rewire global information flows, break shared reality and fuel unprecedented mental-health crises in the young.

Hence the need for vigorous, non-stop fact-checking.

While the Innovation report listed various formats and organizations worldwide engaged in uncovering bogus information, it pointed to difficulties the Global South faces with fact-checking due to problematic internet access, political constraints and cultural factors.

The difficulties the Global South faces with fact-checking (courtesy Innovation Media Consulting Group)

But it acknowledged that the whole fact-checking system in the West and worldwide needs revamping.

The emergence of AI and “deepfakes” has complicated the fight against misinformation, signalling that our current methods may not suffice. However, this year offers a chance to rethink and refine strategies, aiming for solutions that reach wider audiences and explore new technologies and partnerships. Most crucially, fact-checking should move beyond merely “debunking” false information after it appears; formats should also include elements of “pre-bunking” and media literacy, which will be crucial to better managing misinformation.

To do so requires massive AI help to cope with the explosive volume of false content spewed by individuals, organizations, governments and media.

That is disconcerting, since artificial intelligence is a costly energy hog: Casey Crownhart wrote in the MIT Technology Review that, according to projections from the International Energy Agency, electricity consumption from data centers, AI, and cryptocurrency could reach double 2022 levels by 2026.

Towards a successful AI strategy (courtesy Innovation Media Consulting Group)

On a positive note, the Innovation report provides a seven-point AI strategy for success from University of Oregon journalism professor Damian Radcliffe that steers media in the right direction.


Magda Abu-Fadil

Magda Abu-Fadil is a veteran foreign correspondent/editor of international news organizations, former academic, media trainer, consultant, speaker and blogger.