Running The Room

If Your Mother Says She Loves You…

Old MacDonald had a prompt
AI AI Oh!
And with that prompt he wrote a book
AI AI Oh!

A recent article in the New York Times raised a new concern for me about the risks of AI use: generating academic papers without legitimate research.

Sounds a tad esoteric, but the impact could be legitimately dangerous for our health and wellbeing. Imagine medical journals publishing works submitted by authors who used AI rather than doing actual research. We already struggle to get erroneous scientific and medical claims out of the mainstream; AI is scaling their volume dramatically. Before long we will have so much AI-generated content that it will be impossible to sort real data from material that is manufactured, false and published out of laziness or lack of discipline.

The article, “The Editor Got a Letter from ‘Dr. B.S.’ So Did a Lot of Other Editors” by Gina Kolata, published on Nov. 4, 2025, reports that a scientist, Dr. Carlos Chaccour, published an article on using ivermectin to control malaria infections. His work appeared in The New England Journal of Medicine. Within two days, the editors of the journal received a letter from a medical professional with the initials “B.S.” challenging Dr. Chaccour’s article. The editors sent the letter to Dr. Chaccour and asked him to write a reply. Dr. Chaccour noticed that the sources cited against his article were other articles he himself had written. In other words, the letter writer had used AI, and when the AI tool sought out sources on malaria and ivermectin, it pulled Dr. Chaccour’s work to question Dr. Chaccour’s work.

Publishing a letter in a journal that challenges a journal article is, in itself, a credential for an academic. This gives academics an incentive to publish as many letters as possible to gain greater standing in their community. Apparently, “Dr. B.S.” figured out a shortcut to advancing his career. After doing some research, Dr. Chaccour discovered that Dr. B.S. went from publishing no letters in 2024 to 84 letters on 58 topics in 2025.

https://www.researchsquare.com/article/rs-7992675/v1

Dr. Chaccour, digging into this issue, found that another author published nothing in 2023, then 234 letters in 2024 and 243 so far in 2025. And about 3,000 authors who published nothing before 2023 had at least three letters published since then. Editors at other journals also report the number of letters has increased dramatically in volume and speed.

Combined with OpenAI’s recent release of Sora, a tool that allows for the creation of AI video, what is “true” is already being buried in the mass production of manufactured data and realities. Here’s an extreme, and funny, example of how easily anyone can create a video that feels almost real.

The question now is: How do we know?

When presented with an article, book, picture or video/audio, how can we confirm whether what we are seeing is real or created with software? I wish I had an answer. And until we have one, our default response to just about anything we read, see or hear is going to be, “Is that real?”

I am reminded of the guidance on fact-checking from my former Northwestern University journalism professor, Zay Smith: “If your mother says she loves you, check it out.”

(Photo by Milada Vigerova)