AI Capabilities Challenged in Maintaining Journalistic Integrity?

Unreliable outcomes observed thus far.

In the rapidly evolving world of technology, the use of artificial intelligence (AI) in various sectors has become increasingly prevalent. However, the integration of AI in journalism has sparked a significant debate, with concerns about biases, plagiarism, and ethical implications.

Recent reports have highlighted troubling results for nonwhite and non-male subjects when using image-oriented generative AI tools. These tools, designed to assist in content creation, have inadvertently perpetuated biases present in their training data, raising questions about their objectivity.

One such instance involved the news organisation Sports Illustrated, which discovered that some of its stories were attributed to writers who didn't exist, supplied by a third-party company called AdVon. The revelation led SI to terminate its relationship with AdVon.

In a similar vein, the New York Times accused OpenAI of attempting to use its journalism to build substitutive products without permission or payment. This accusation was one of several lawsuits filed by the Times, the Center for Investigative Reporting, Intercept Media, and eight media outlets owned by Alden Global Capital, accusing OpenAI and Microsoft of violating copyright laws by ingesting their content.

AI tools have been found to sometimes copy text from published sources without attribution, leaving news organisations open to accusations of plagiarism. Tim Marchman and Dhruv Mehrotra, in a Wired article published in 2024, described how the AI tool Perplexity reproduced one of their sentences verbatim, which they considered plagiarism. Forbes also called out Perplexity for ingesting one of its articles and producing an AI-generated article, podcast, and video without any attribution to the outlet.

Moreover, AI tools cannot reliably and accurately cite and quote their sources, a crucial aspect of journalistic integrity. They commonly "hallucinate" authors and titles, or quote real authors and books with invented content. Google's AI tool, Gemini, generated images of Asian and Black Nazis when asked to illustrate a 1943 German soldier, demonstrating the lack of basic historical context and cultural sensitivity in AI-generated content.

Despite these concerns, AI tools are being used by some journalists to crunch numbers and make sense of vast databases. They are aiding in the analysis of large amounts of data, thereby streamlining the journalistic process.

Public sentiment reflects these concerns: more than 80% of people surveyed in 2023 said news organisations should alert readers or viewers when AI was used. Among those who believe consumers should be alerted, 78% said news organisations should provide an explanatory note describing how AI was used.

In May 2025, the New York Times signed an AI licensing deal with Amazon, allowing the tech company to use the outlet's content across its platforms. This move, while controversial, signifies the evolving relationship between AI and journalism.

As the use of AI in journalism continues to grow, it is crucial that the industry addresses these concerns and ensures that AI tools are used ethically and responsibly, upholding the principles of journalistic integrity.
