In mid-2019, I was reading an article in Cosmos, one of Australia's leading scientific publications, that featured a photograph of a man lying on an operating table, covered in bags of McCain's frozen fries and hash browns.
Scientists have discovered that rapidly cooling the body could improve the survival rate of patients who suffer a heart attack. This man was one of those patients, and so Frozen Meal Fresco was born. The accompanying report was written by Paul Biegler, a bioethicist at Monash University, who visited the trauma ward at Melbourne's Alfred Hospital to learn more about the method and to see if humans might be able to hibernate in the distant future.
This is the kind of story I remember when I start to panic about AI infiltrating the news – after all, it can't visit The Alfred Hospital, and isn't giving interviews, at least not yet.
But AI-generated articles are already being written, and their recent appearance in the media suggests a worrying development. Last week, it emerged that Cosmos staff and contributors said they had not been consulted about the development of commentary articles purportedly written by generative artificial intelligence. The articles cover topics such as “What are black holes?” and “What are carbon sinks?”, and at least one contained inaccuracies. They were written by OpenAI's GPT-4 and then fact-checked against Cosmos' archive of roughly 15,000 articles.
Details of the publication's use of AI were published by the ABC on August 8. In the article, CSIRO Publishing, an independent division of the CSIRO and Cosmos' current publisher, said the AI-generated articles were an “experimental project” to evaluate the “potential usefulness (and risks)” of using models like GPT-4 to “assist science communication experts in drafting science commentary articles.” Two former editors said Cosmos' editorial staff were not informed of the proposed custom AI service. This comes just four months after Cosmos fired five of its eight staff members.
The ABC also wrote that Cosmos contributors were unaware of the intention to deploy the AI model and were not informed that their work would be used as part of the fact-checking process. CSIRO Publishing denied that the AI service was trained on contributors' articles; a spokesperson said the experiment used a pre-trained GPT-4 model from OpenAI.
But a lack of internal transparency and consultation left journalists and contributors feeling betrayed and angered. The experiment has now been paused, according to multiple sources, but CSIRO Publishing did not respond to a request for comment.
This controversy carries a dizzying sense of déjà vu. CNET, the highly regarded US technology website where I was science editor until August 2023, published dozens of articles generated by a custom AI engine in late 2022. In total, CNET's robot writers produced 77 articles, and an investigation by a rival publication found that more than half of them contained inaccuracies.
The backlash was swift and damning. One report said the internet was “horrified” by CNET's use of AI. The Washington Post called the experiment a “journalistic disaster.” Trust in the publication was shattered almost overnight, leaving the organisation's journalists feeling betrayed and angry.
The Cosmos episode bears striking similarities: once again, journalists spoke out; the backlash was swift (“absolutely awful,” wrote ABC Big Ideas presenter Natasha Mitchell); and the organisation's response was much the same: pause the rollout, call it an experiment.
But this time, AI is being used to present facts backed by scientific research, a troubling shift with potentially dire consequences. At a time when trust in both scientific expertise and the media is declining (the latter more rapidly than the former), deploying AI experiments without transparency is ignorant at best and dangerous at worst.
Science can reduce uncertainty, but it can't erase it. Effective science journalism involves helping readers understand that uncertainty, a practice studies have shown increases trust in the scientific process. Unfortunately, generative AI remains a predictive text tool that can undermine this process and produce confident-sounding bullshit.
That doesn’t mean generative AI has no place in newsrooms or should be banned. It’s already being used as an idea generator, to provide quick feedback on manuscripts and to help craft headlines. And with the right oversight, it could be crucial for small publishers like Cosmos to maintain a steady flow of content in an ever-more content-hungry internet age.
But if AI is to be deployed in this way, major problems remain to be solved. Confident-sounding falsehoods are just the beginning. Copyright disputes over the creative works used to train these models have made their way into court, and there are serious sustainability issues as well: AI's energy and water usage, while difficult to calculate precisely, is enormous.
But the bigger barrier is audiences: the University of Canberra's Digital News Report 2024 found that just 17% of Australians would be happy with news produced “predominantly by AI”, and only 25% of respondents said they would be happy with AI being used specifically for science and technology reporting.
If your audience doesn’t want to read AI-generated content, then who is it made for?
The Cosmos controversy has brought that question into sharp focus. It's the first question that must be answered when deploying AI, and it should be answered transparently. Both editorial staff and readers need to know why a media outlet is starting to use generative AI and where it will do so. There can be no secrecy or subterfuge. We've seen time and time again that secrecy is how trust is destroyed.
But if you're like me, you'll reach the end of this article wanting to know more about the man whose life was saved by McCain's frozen food, and there's a moral in that: the best stories stay with you.
So far, AI-generated articles have not had staying power.