Explained | AI journalism: Can artificial intelligence replace journalists?


The origins of journalism as a practice and profession can be traced back centuries, from news sheets in ancient Rome to the various digital means and mediums through which news is now not only read but also written. Journalism has seen significant changes over the years, so what’s next? With Artificial Intelligence (AI) steadily becoming a part of daily life and threatening millions of jobs across the world, will journalists be next to go? The answer is: not anytime soon. Here’s why.

Is the use of AI in journalism new?

The advent of AI in journalism began earlier than one might think. The news agency Associated Press (AP) has been using AI to produce corporate earnings stories since 2014, and has also reportedly used the technology for sports recaps. In 2018, China’s Xinhua news agency created the world’s first AI-powered male news anchor, and earlier this year it debuted its first female AI news anchor.

To what extent is AI being used in journalism now?

While these AI-based tools have been around for quite a few years, concern has seemingly grown recently over the extent to which they are being used. However, most of the journalists who used ChatGPT to write their columns when it launched concluded that the bot was not good enough to take over their jobs just yet, AFP reported.

However, in late 2022, American technology news website CNET took things to the next level when it quietly published dozens of feature articles written entirely by an AI program it had deployed. It was not until January that the company confirmed the speculation, calling the exercise a mere experiment.

This came after the media outlet began receiving lengthy correction notices for some of these AI-generated stories. While the Associated Press has also used AI for its stories, what set CNET apart was the extent of its machine-generated content: the AP’s system is said to be simpler, essentially inserting new information into pre-formatted stories, whereas CNET produced feature-length articles.

However, it did not stop there. This year also witnessed the launch of NewsGPT, the world’s first news channel whose content is generated entirely by AI. The platform aims to revolutionise the way news is delivered by providing unbiased, fact-based news without any hidden agendas, and its CEO, Alan Levy, has called it a game changer.

According to reports, NewsGPT is powered by machine learning algorithms and natural language processing technology that scans relevant news sources across the world in real time to create stories which are supposedly accurate, up-to-date, and unbiased. It also claims to be free from the influence of advertisers, political affiliations, or personal opinions.

Use of AI in journalism sparks concerns

As the media ecosystem goes through major changes, concerns have been sparked, rightly so, about how and to what extent AI can be used for journalism. Across the several cases encountered so far and the gloomy outlook on the subject, disinformation seems to be a running theme.

In an interview with German media outlet DW, media columnist and ombudsperson Pamela Philipose pointed out that AI and its applications in the newsroom pose a very real threat, saying, “The more multi-layered problem is the potential AI has to create disinformation by design.”

Similarly, Brandi Geurkink, strategy and technology advisor at Reset, also spoke about how the rapid rise of AI globally has also sparked fears over its potential to spread disinformation and erode trust in mainstream media, reported AFP. 

United States-based research firm Graphika, in a report, claimed that China has allegedly been using AI-generated deepfakes in first-of-their-kind propaganda videos. It cited a fictitious news outlet called Wolf News, whose creation the report attributed to Chinese state-aligned personnel.

The “news broadcasters” of the supposed news channel were AI-generated. “This is the first time we’ve seen a state-aligned operation use AI-generated video footage of a fictitious person to create deceptive political content,” the Vice President of Intelligence at Graphika, Jack Stubbs, told AFP. 

Similar concerns about the ethics of using AI in newsrooms were raised when Kuwait-based media outlet Kuwait News introduced its first AI-powered virtual news presenter, ‘Fedha’. Notably, Kuwait ranked 158 out of 180 countries in the Reporters Without Borders 2022 World Press Freedom Index.

This comes as media professionals have raised concerns that growing dependence on algorithms and automation threatens, and even undermines, the credibility of journalism.

Meanwhile, CNET and sister publication Bankrate, which also used AI-generated content, reportedly disclosed the issues they faced with the accuracy of the bot-written stories, which could potentially give rise to misinformation. 

CNET was forced to take note of its errors after another news site noticed that the bot had made mistakes, some of them serious. Not to mention Google chatbot Bard’s recent mishaps, which cost the tech giant billions.

Misinformation, unlike disinformation, may or may not be deliberate. But given that accuracy is one of the fundamental elements of journalism, AI getting its basic factual data wrong on multiple occasions can threaten the trustworthiness of journalism in the long term.

While AI might have its flaws, some media workers have also hailed bots like ChatGPT as a “revolution” for the industry. “Artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it,” said publishing behemoth Axel Springer’s boss Mathias Doepfner to his staff, as quoted by AFP. 

He announced a restructuring that would see “significant reductions” in the production and proofreading of content, as the company pushes AI as a tool to support journalists. Notably, the practice of media organisations using automation for routine or repetitive work is not new.

Can AI replace journalists?

“AI is not stealing your job…Let that sink in,” wrote Mattia Peretti, the now former manager of the JournalismAI initiative at the London School of Economics and Political Science, in an analysis for the Global Investigative Journalism Network (GIJN).

He added that the “truth is that artificial intelligence is not nearly as intelligent as it would need to be to replace you (journalists)”. A report by Goldman Sachs, released in March, said AI could potentially replace the equivalent of around 300 million full-time jobs. But would journalism be one of them? Not yet.

AI can play various roles in supporting a journalist’s work, but it will be a long time before it changes newsroom roles, wrote Peretti, adding that by itself AI has neither the ambition nor the ability to steal jobs anytime soon.

Similarly, Alex Connock, author of “Media Management and Artificial Intelligence,” told AFP that the use of content-creation tools will see some people lose their jobs, but not in the realm of analytical or high-end reporting.

(With inputs from agencies)
