Column Earlier this year, I was fired and replaced by a robot, and the managers who made the decision didn't tell me, or anyone else affected by the change.
The job in question grew out of an enjoyable and rewarding relationship with Cosmos magazine, the Australian equivalent of New Scientist, for which I wrote occasional features and a column that appeared online every three weeks.
Everyone seemed happy with the arrangement—editors, readers, and myself—and I was convinced we had found a groove that would last for years to come.
That didn't happen: in February, just a few days after I submitted my column, I and all of Cosmos' other freelancers received an email informing us the magazine would no longer be accepting submissions.
Enterprises that benefit both science and the public rarely pay their own way, and Cosmos was no exception. From what I heard, the magazine had been sustained by philanthropic funding; when that funding ended, Cosmos found itself in difficulty.
Accepting the economic realities of our time, I lamented the loss of a great outlet for science writing and moved on.
But it turned out that wasn't the whole story. Six months later, on August 8, a friend texted me a news story from the Australian Broadcasting Corporation. To summarize (courtesy of the ABC):
Cosmos, it emerged, had been using generative AI to produce articles for its website, funded by a grant from a non-profit that runs Australia's most prestigious journalism awards. The sudden disappearance of my job writing articles for that same website now made sense.
But that's not even the half of it: the AI had likely been "fed" my articles by way of the Common Crawl, a giant archive of nearly all the content published on the web.
I wasn't just fired and replaced by a robot; the robot had, in effect, been trained to imitate me.
The ABC story goes on to report that Cosmos' editor-in-chief knew nothing of the plan. It was all done in secret, which suggests how the idea would have been received had it been shared with the staff who worked with the magazine's freelancers. In its apology for the incident, Cosmos lamented the lack of communication before the AI-written articles were published.
What an understatement!
Editors know that readers want words written by humans (like these). AI-generated content may be fine for summaries, but its bland, middling prose just doesn't feel human. It's useful in a pinch, but nobody is particularly happy with it.
Generative AI gives us ever more of what marketers want to show us and ever less of what people actually want to read, yet Cosmos decided to add to the garbage that already fills every marketing channel across the web.
To its credit, Cosmos labeled its AI-generated articles as such, making it more transparent than publications that operate in the shadows, where a single human being oversees the output of a giant content farm.
The technology exists to watermark AI-generated content in a way readers could easily detect, but the idea has already been shot down by OpenAI CEO Sam Altman, who reportedly reckons watermarking threatens at least 30 percent of ChatGPT's business: organizations don't want to admit they're spamming us with shoddy machine-generated content.
Absent that kind of detection, we need something like a chain of provenance: a record of the journey these words take from my keyboard to your eyes, through writing, editing, and publication. With that kind of transparency, the human element can shine through.
There's never been anything like the human touch, and now that an alternative exists, that touch has become the most rewarding thing you can offer your readers. That alone should be reason enough to make it happen. ®