Movie star and refugee advocate Cate Blanchett stands at a podium addressing the European Parliament. “The future is now,” she says authoritatively. So far, so normal. But then she asks: “But where are the sex robots?”
The footage is from an actual speech Blanchett gave in 2023, but the rest is fictional.
Her voice was generated by Australian artist Xanthe Dobie using text-to-speech platform PlayHT for Dobie's 2024 video work, Future Sex/Love Sounds, which imagines a feminist utopia populated by sex robots and voiced by celebrity clones.
Much has been written about the world-changing potential of generative AI, from large language models (LLMs) such as OpenAI's GPT-4 to image generators such as Midjourney. These models are trained on vast amounts of data to generate everything from academic papers, fake news and “revenge porn” to music, images and software code.
While supporters praise the technology for accelerating scientific research and eliminating routine administrative tasks, it also confronts a wide range of workers, from accountants, lawyers and teachers to graphic designers, actors, writers and musicians, with an existential threat.
As the debate rages, artists like Dobie are using these very tools to explore the possibilities and precarity of technology itself.
“It's not the power of the technology that's bad, it's the way flawed, stupid, evil people use it” – Xanthe Dobie, artist
“The technology itself is spreading at a faster rate than the law can keep up with, which creates ethical grey areas,” says Dobie, who uses celebrity internet culture to explore questions of technology and power.
“We see replicas of celebrities all the time, but data on us, the little people of the world, is collected at exactly the same rate. It's not the power of the technology itself that's bad, it's the way flawed, stupid, evil people use it.”
Choreographer Alisdair McIndoe is another artist working at the intersection of technology and art. His new work, Plagiary, premieres this week at Melbourne's Now or Never festival before a season at the Sydney Opera House, and uses custom algorithms to generate choreography that the dancers receive for the first time each night.
Although the AI-generated instructions are specific, each dancer is able to interpret them in their own way, making the resulting performance more like a human-machine collaboration.
“A common question I get from dancers early on is, 'You turn your left elbow repeatedly, you go to the back corner and you're asked to imagine you're a newborn cow. Are you still turning your left elbow at that point?'” McIndoe says. “It quickly becomes a really interesting discussion of what is meaning, interpretation and truth.”
Dancers respond to AI-generated instructions in Alisdair McIndoe's “Plagiary” at Now or Never Festival. Photo: Now or Never
Not all artists are fans of the technology: In January 2023, Nick Cave posted a scathing critique of a ChatGPT-generated song that imitated his work, calling it “bullshit” and a “grotesque mockery of humanity.”
“Songs come from suffering,” he wrote, “which means they're predicated on the complex, internal human struggle of creation. And as far as I know, algorithms don't have emotions.”
Painter Sam Leach doesn't agree with Cave's idea that “creative genius” is an exclusively human trait, but he encounters this kind of “total rejection of technology and everything related to it” frequently.
“I've never been particularly interested in anything to do with purity of soul. I see my practice as a way of studying and understanding the world around me. I just don't believe I can draw a line between myself and the rest of the world and define myself as a unique individual.”
Leach sees AI as a valuable artistic tool that allows him to address and interpret a wide range of creative artifacts. He has customized a series of open-source models trained on his own paintings, reference photographs, and historical artworks to produce dozens of works, some of which are surrealistic oil paintings, such as a portrait of a polar bear standing on a bunch of chrome bananas.
“Fruit Preservation (2023)” by Sam Leach. Photo: Albert Zimmermann/Sam Leach
He justifies his use of source material by pointing out that he spends hours “editing” the software's suggestions with a paintbrush to refine them. He also uses an art-critic chatbot to interrogate his ideas.
For Leach, the biggest concern about AI isn't the technology itself or how it's being used, but who owns it: “A very small number of giant companies own the biggest models with incredible power.”
One of the most common concerns about AI is copyright. This is an especially complicated issue for people working in the arts, whose intellectual property is being used to train multimillion-dollar models, often without their consent or compensation. Last year, for example, it was revealed that 18,000 Australian books had been used in the Books3 dataset without permission or payment. Booker Prize-winning author Richard Flanagan described this as “the biggest act of copyright theft in history.”
And last week, Australian music rights organisation APRA AMCOS released the results of a survey which found that 82% of its members are concerned that AI will reduce their ability to make a living from music.
In the European Union, the Artificial Intelligence Act came into force on August 1 to mitigate such risks. In Australia, meanwhile, eight voluntary AI ethics principles have existed since 2019, but there are still no laws specifically regulating AI technology.
This legal vacuum has prompted some artists to create their own custom frameworks and models to protect their work and culture. Rowan Savage, a Kombumerri sound artist who performs as Salvage, has collaborated with musician Alexis Weaver to develop an AI model called Koup Music, a tool that transforms field recordings, including of his own voice, made on Country into digital sound. He will present the process at the Now or Never festival.
Savage's abstract dance music sounds like a dense flock of computerized birds, animal-code hybrid lifeforms that are haunting, alien, and familiar all at once.
“When people think of Indigenous Australians, they sometimes associate us with the natural world. There's a kind of infantilisation there, and we can use technology to counter that,” Savage says. “We often think there's a strict separation between what we call nature and what we call technology. I don't think there is. I want to break that down and let the natural world influence the technological world.”
Savage designed Koup Music to give him full control over the data used to train it, so that it wouldn't appropriate other artists' work without their consent. In turn, the model prevents his recordings from being absorbed into the larger networks on which tools like Koup are built. Savage sees the recordings as the property of the community.
“I'm OK with making recordings on my Country for personal use, but I wouldn't necessarily put them out there (for someone or something else to use),” Savage says. “I'm reluctant to do that (without speaking to key members of the community). Aboriginal people have always had a sense of communality, so there's no individual ownership of sources in the way Anglo people understand it.”
For Savage, AI holds great creative potential, but also poses “so many dangers”: “My concern as an artist is, how do we use AI ethically but still allow it to actually do all sorts of exciting things?”