Science detective Elisabeth Bik worries that the proliferation of AI-generated images and text in academic papers could undermine trust in science.
An infographic of a rat with a ridiculously large penis. An infographic showing a human leg with too many bones. An introductory paragraph that begins, "Sure, here's a potential introduction for your topic."
These are some of the most egregious examples of AI-generated content to have appeared in scientific journals recently, and they shed light on the wave of such text and images inundating the academic publishing industry.
Several experts whose research tracks the issue told AFP that the rise of AI has exacerbated existing problems in the multi-billion-dollar industry.
All the experts stressed that AI programs like ChatGPT could be useful tools for writing and translating papers, provided their output is thoroughly checked and their use is disclosed.
But that did not happen in several recent instances that somehow slipped through peer review.
Earlier this year, a graphic of a rat with impossibly large genitals, apparently generated by AI, was widely shared on social media.
The image appeared in a journal of the academic publisher Frontiers, which later retracted the study.
Last month, another study was retracted over an AI-generated graphic showing a leg with strange, multi-jointed bones resembling a hand.
While these examples involved images, it is the chatbot ChatGPT, released in November 2022, that is believed to have most changed how researchers around the world write up and publish their results.
One study published by Elsevier went viral in March because of its introduction, which apparently included a leftover ChatGPT response reading, "Sure, here's a potential introduction for your topic."
Such embarrassing cases are rare and unlikely to make it through the peer review process of the most prestigious academic journals, experts told AFP.
Trouble with paper mills
Spotting the use of AI isn't always easy, but one clue is that ChatGPT tends to favor certain words.
Andrew Gray, a librarian at University College London, combed through millions of research papers, looking for overuse of words like “meticulous,” “intricate” and “commendable.”
He determined that at least 60,000 research papers in 2023, more than one percent of the annual total, involved the use of AI.
“We're going to see a significant increase in 2024,” Gray told AFP.
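To make the idea concrete, a word-frequency screen of this kind can be sketched in a few lines of Python. The script below is purely illustrative and is not Gray's actual methodology: the "abstracts" folder, the marker-word list drawn from the examples above, and the flagging threshold are all assumptions made for the sake of the example.

```python
# Toy illustration only -- not Andrew Gray's actual methodology.
# Scans a hypothetical folder of plain-text abstracts and flags
# documents where ChatGPT-favoured words appear unusually often.
from pathlib import Path

# Words the article says ChatGPT tends to overuse.
MARKER_WORDS = {"meticulous", "intricate", "commendable"}

def marker_rate(text: str) -> float:
    """Return the fraction of tokens that are marker words."""
    tokens = [t.strip('.,;:"()').lower() for t in text.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in MARKER_WORDS)
    return hits / len(tokens)

# "abstracts/" is an assumed local folder of .txt files.
for path in sorted(Path("abstracts").glob("*.txt")):
    rate = marker_rate(path.read_text(encoding="utf-8"))
    if rate > 0.001:  # arbitrary threshold, for illustration only
        print(f"{path.name}: marker-word rate {rate:.4%}")
```

A real screen like Gray's would compare word frequencies against a pre-AI baseline corpus rather than use a fixed threshold, since words like "intricate" also occur naturally in human-written papers.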
Meanwhile, more than 13,000 papers were retracted last year, the most ever, according to the US-based group Retraction Watch.
Ivan Oransky, co-founder of Retraction Watch, told AFP that AI had enabled bad actors in scientific publishing and academia to “industrialise” the proliferation of “junk” papers.
These bad actors include so-called paper mills.
Elisabeth Bik, a Dutch researcher who detects scientific image manipulation, said these “scammers” sell authorship rights to researchers and churn out very poor quality, plagiarised or fabricated papers.
Bik told AFP that two percent of all studies are thought to be published by paper mills, but the advent of AI has opened the floodgates, and that proportion is "exploding".
The scale of the issue came into sharp focus after Wiley, a major academic publisher, acquired the struggling publisher Hindawi in 2021.
Since then, Wiley has retracted more than 11,300 papers connected to Hindawi special issues, a Wiley spokesperson told AFP.
Wiley is now introducing a "Paper Mill Detection Service" to catch AI misuse; the service is itself powered by AI.
"Vicious cycle"
Oransky stressed that the problem is not just with paper mills, but with a broader academic culture that pressures researchers to “publish or perish.”
“Publishers have built these systems that demand volume and have generated 30 to 40 percent profit margins and billions of dollars in profits,” he said.
The insatiable demand for papers puts pressure on academics who are ranked by the number of papers they publish, creating a “vicious cycle”, he said.
Many people use ChatGPT to save time, which is not necessarily a bad thing.
With nearly all research papers published in English, Bik says AI translation tools can be invaluable for researchers, herself included, whose first language is not English.
But there are also concerns that AI errors, inventions and unwitting plagiarism could further erode public trust in science.
Another example of AI misuse surfaced last week, when a researcher discovered that a version of one of his own studies, apparently rewritten by ChatGPT, had been published in an academic journal.
Samuel Payne, a professor of bioinformatics at Brigham Young University in the US, told AFP he was asked to peer review the study in March.
After realising it was “100% plagiarism” of his own work (though the text appeared to have been paraphrased by an AI program), he rejected the paper.
Payne said he was "shocked" to discover that the plagiarised paper had simply been published in Wiley's new journal, Proteomics.
It has not been retracted.
© 2024 AFP
Source: A Flood of “Junk”: How AI Will Change Scientific Publishing (August 10, 2024) Retrieved August 10, 2024 from https://phys.org/news/2024-08-junk-ai-scientific-publishing.html