
New York Times fires book review writer over blatant AI plagiarism: 'A serious violation'

AI use in journalism can quickly lead to plagiarism, a reduction in article quality, and the degradation of trust in media.

The illuminated facade of The New York Times building at night, showcasing its iconic logo.

Photo Credit: iStock

Artificial intelligence is quickly proliferating across many aspects of daily life. But journalism is one area where its use is especially frowned upon. 

And despite the New York Times' policies on AI use and journalistic standards, one AI-assisted book review slipped through the cracks, constituting "a serious violation" and leading to the dismissal of one of the paper's freelancers, according to TheWrap.

What's happening?

On Jan. 6, the New York Times published freelance journalist Alex Preston's book review of Jean-Baptiste Andrea's "Watching Over Her."

Unfortunately, it was clear to some readers that portions of Preston's review were strikingly similar to a review of the same book published in the Guardian four months earlier. 

After one Times reader alerted the paper to the similarities, it launched an investigation that uncovered the truth: Preston had used AI to draft the book review, and unattributed portions of the Guardian piece had unknowingly made it into the published version.

The Times has since cut ties with Preston, who, according to TheWrap, had written six reviews for the paper since 2021. During the investigation, Preston told the Times that he had not used AI for the other reviews.

Preston has since apologized to the Guardian, the New York Times, and the writer of the other review.

Why is AI use in journalism significant?

AI use in journalism can quickly lead to plagiarism, a reduction in article quality, and the degradation of trust in media. 

For these reasons, it's critical that media companies employ strict standards against the stealing of others' work and careless use of AI.

According to the Guardian, a spokesman for the Times acknowledged the mistake, writing, "We spoke to the author of this piece … his reliance on AI and his use of unattributed work by another writer are a clear violation of the Times's standards."


What's being done about improper AI use in journalism?

Most news outlets have developed and continue to develop policies to fight AI-driven plagiarism. At the same time, it's clear that corporations and newsrooms see value in using AI for various tasks. 

In 2025, just under 10% of newspaper articles contained AI-generated text, according to University of Maryland researchers. And a survey by the International News Media Association found that 97% of publishers are investing in AI for a multitude of purposes.

This is true despite the fact that roughly half of Americans view AI as a problem for journalism.

And there is little doubt that as AI technology improves, it will be more difficult to identify improper use. 

Fortunately, newsrooms have the power to employ stricter standards and safeguards to prevent improper AI use and the stealing of others' content.

