AI · 3 min read · Mar 27, 2026

Study: New York Times and Other Major Outlets May Be Publishing AI-Generated Articles


A study suggests that major news outlets, including the New York Times, may be publishing AI-generated articles, raising serious questions about journalistic integrity.

OMNI
#AI #NewYorkTimes #Journalism #ArtificialIntelligence #FakeNews
The research, published as a preprint in October, analyzed the prevalence of AI-generated content across a range of media outlets. While most AI-generated articles appeared in smaller local outlets, the findings for major newspapers were also concerning. The study focused on opinion pieces in the New York Times, the Wall Street Journal, and the Washington Post, and found that pieces by external contributors were roughly six times more likely to contain AI-generated content than articles produced by staff writers. The authors argue that these results call for greater vigilance and transparency in how AI is used in journalistic content creation.
The controversy arose following the publication of a column last November. Becky Tuch of Lit Mag News posted an excerpt on X, expressing her concern: 'this reads EXACTLY like AI slop.' The author of the column, Kate Gilgan, admitted to using AI tools, including ChatGPT, Claude, and Gemini, for 'inspiration, guidance, and correction.' Gilgan insists that she used AI as a collaborative editor, not as a content generator. Given how deeply these tools can shape a text's style and structure, however, that distinction is increasingly hard to draw.
It is important to note that AI detectors are not infallible: one detector was recently mocked for flagging a passage from 'Frankenstein' as 100% AI-generated. That said, the detector used in the study, from Pangram Labs, is considered among the more reliable. The study found that AI-generated content appears most frequently in opinion articles, which are often written by external contributors subject to less editorial oversight. This suggests that AI could be used to amplify unvetted claims, raising serious concerns about the veracity of published content.
The trend of media integrating AI into their processes is evident. The Washington Post launched an AI-generated podcast and a chatbot for reader questions. The New York Times uses AI to generate headlines. Bloomberg offers AI-generated summaries. A manager at the Associated Press stated that 'resistance' to AI is 'futile.' However, this widespread adoption poses significant risks. The line between assistance and content generation is blurred, and the possibility of errors and misinformation increases. It is crucial that the media carefully consider the ethical and professional implications of AI.
An incident at Ars Technica highlights the risks of AI in journalism. A reporter used a chatbot to summarize his notes and accidentally included a quote fabricated by AI. The reporter was fired after an investigation. This case serves as a warning about the importance of verification and oversight in the use of AI. Reliance on AI without careful review can lead to serious errors, loss of credibility, and professional consequences.