At every turn we’re blasted with an avalanche of news about generative artificial intelligence (AI) and how ChatGPT (and its clones) are either the answer to, or the cause of, our problems, so we’d better hop aboard before the proverbial train leaves the station.
Let’s pick up on a major disadvantage of generative AI: it doesn’t recognize truth as a concept, because truth is not among its goals. CNN veteran Christiane Amanpour’s mantra of always being truthful, not neutral, immediately flies out that window.
OpenAI, the American AI research lab comprising the non-profit OpenAI Incorporated and the for-profit subsidiary OpenAI Limited Partnership, co-founded by Sam Altman, Peter Thiel and Elon Musk, among others, is all over the map.
It’s stressing out journalists worldwide at the prospect that their jobs are on the line and that their mission of digging for the truth is being made redundant.
“OpenAI is open enough to say that there is no source of truth in ChatGPT. Their limitations section covers it:
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
That assessment comes from the Innovation in Media 2023 World Report, a must-read handbook produced by the Innovation Media Consulting Group for FIPP, an almost century-old organization of media owners and content creators.
ChatGPT is proof that finding “truth” is a lot trickier than having enough data and the right algorithm. Despite its abilities, ChatGPT is unlikely to ever come close to human capabilities: its technical design, like that of similar tools, lacks fundamentals such as common sense and symbolic reasoning. Scholars who are authorities in this area describe it as being like a parrot; they say its responses to prompts resemble “pastiche” or “glorified cut and paste”. As Jenna Burrell writes in a piece for Poynter, “when you think of ChatGPT, don’t think of Shakespeare, think of autocomplete. Viewed in this light, ChatGPT doesn’t know anything at all.”
To this point, the report quotes Eric Ulken, product director at Gannett, speaking to the Nieman Lab, as saying generative AI can get more wrong, faster, with greater apparent certitude and less transparency, than any innovation in recent memory.
And yet artificial intelligence has been around for years, with news organizations such as The Associated Press, Reuters, Agence France-Presse, Bloomberg and Dow Jones using automated data-driven stories for financial reports, sports results and economic indicators during what the Innovation report termed AI’s first phase.
The second wave arrived when the emphasis shifted to augmenting reporting through machine learning and natural language processing to analyze large datasets and uncover trends, it said.
The third and current wave is generative AI, it added, powered by large language models capable of generating narrative text at scale, and offering applications to journalism that go beyond simple automated reports and data analysis.
“Now, we could ask a chatbot to write a longer, balanced article on a subject or an opinion piece from a particular standpoint. We could even ask it to do so in the style of a well-known writer or publication,” it said.
Sure enough, the Innovation report publishers prompted ChatGPT to answer the question:
“How should a chapter on AI start for the Innovation in Media World Report?” This is particularly important given the publication is read by executives and editors worldwide.
ChatGPT’s reply in under 20 seconds is telling:
“Artificial Intelligence (AI) has become a buzzword in the media industry in recent years, promising to revolutionise the way we create, distribute, and consume content. With the explosion of data and the increasing demand for personalised experiences, AI has emerged as a powerful tool for media companies to improve their workflows, engage their audiences, and drive business growth.
However, the impact of AI on the media industry is not just about technological advancements. It raises fundamental questions about the role of journalism, the ethics of data collection and analysis, and the relationship between humans and machines. As AI technologies continue to evolve, media organisations need to understand the opportunities and challenges they present, and to develop strategies to leverage them effectively.
In this chapter, we will explore the latest trends and innovations in AI for the media industry, and examine how media companies are leveraging AI to create new forms of content, enhance user experiences, and drive business growth. We will also examine the ethical and societal implications of AI in the media, and consider how media organisations can navigate this rapidly evolving landscape to stay ahead of the curve.”
No wonder media are investing in generative AI in their newsrooms to improve output, streamline workflows and create savings on many levels.
The Innovation report quotes Marcela Kunova, the editor at Journalism.co.uk, as pinpointing reliable and ethical ways in which journalists and newsrooms can capitalize daily on ChatGPT’s power to generate summaries of large texts and documents; generate questions and answers; provide quotes; generate headlines; translate articles into different languages; generate email subject lines and write emails; generate social media posts; provide context for articles; generate images; transcribe interviews; and assist with other multimedia tasks.
But BuzzFeed CEO Jonah Peretti, who reportedly told staffers in a memo that he didn’t plan to use AI to write journalistic articles, did exactly the opposite, publishing fully AI-generated pieces produced by non-editorial staff and creating quite a backlash at the digital outlet.
Where does that leave journalists, editors, multimedia producers and today’s fact-checkers? Journalists are constantly harangued about learning new skills and using new tools, but is the technology outpacing them into oblivion?
We’ve seen the emergence of fact-checkers as full-time staffers with major responsibilities in news organizations, a job title that didn’t exist a few years ago.
Even that role may become part of AI’s mainstreamed bailiwick, although the Innovation report said artificial intelligence was “notorious for producing incorrect information that needs to be vetted through human intervention.”
The report rightly pays close attention to Gen Z and how they consume media.
Personality and passions define Gen Z more than any socioeconomic categorization, it said, citing Rachel Richardson, former head of editorial at Snapchat and an editor at News UK.
Members of Gen Z tend to be very inclusive; they reject hierarchy, crave transparency, and want to talk openly about issues that affect them and others.
They are hyper-connected to the Internet and don’t tend to distinguish between online and offline worlds.
When it comes to content, the biggest difference between Gen Z and older generations is their inclination to create as well as to consume. They highly value and expect personalization. They love streaming and subtitles.
Lest anyone forget, media need income to operate, and one of the biggest challenges has been generating it in an ecosystem replete with innovation, dizzying change, shrinking budgets, layoffs and uncertainty.
Revenue sources can range from paid content, data mining, philanthropy, clubs, IT software and services, education, event organization, archives, brand licensing, in-house ad agencies, affiliate marketing and retail products to becoming think tanks and, of all things, betting agencies.
The report provides a very clear infographic of business models to suit every organization, one that should be studied carefully.