The patriotic image shows megastar Taylor Swift dressed as Uncle Sam, falsely suggesting she supports Republican presidential candidate Donald Trump.
“Taylor wants you to vote for Donald Trump,” the image, which appears to have been generated by artificial intelligence, says.
Trump shared the image with his 7.6 million followers on his social network, Truth Social, over the weekend, along with other images depicting support from Swift fans, amplifying the false claim.
Deception has long played a key role in politics, but the rise of artificial intelligence tools that can quickly generate fake images and videos by simply entering a phrase has added a new complication to a common social media problem. Called deepfakes, these digitally altered images and videos can make someone appear to say or do something they did not.
As the battle between former President Trump and Democratic candidate Kamala Harris intensifies, disinformation experts are sounding the alarm about the risks of generative AI.
“I'm worried that this is going to explode as we get closer to the election,” said Emilio Ferrara, a computer science professor at the University of Southern California's Viterbi School of Engineering. “It's going to get even worse than it is now.”
Platforms such as Facebook and X have rules against doctored images, audio and video, but they have struggled to enforce those policies as AI-generated content floods the internet. Facing accusations of censoring political speech, they have focused on labeling and fact-checking content rather than removing posts. They also carve out exceptions, such as satire, that allow people to create and share fake images online.
“We've had all the problems in the past. We've had 10 years of dealing with all the myths and disagreements and general stupidity,” said Hany Farid, a professor at the University of California, Berkeley who specializes in misinformation and digital forensics. “Now we're seeing that accelerated even more with generative AI, and we've become really, really partisan.”
Amid growing interest in OpenAI, the developer of the popular generative AI tool ChatGPT, tech companies are encouraging people to use new AI tools that can generate text, images, and videos.
Farid, who analyzed the Swift images shared by Trump, said they appeared to be a mix of both real and fake images, a “nasty” way to spread misleading content.
People share fake images for a variety of reasons, whether to spread them widely on social media or simply to troll others, and visual imagery is a powerful form of propaganda, distorting people's views on politics, including the legitimacy of the 2024 presidential election, he said.
On X, images that appear to be AI-generated show Swift hugging Trump, holding his hand and singing a duet with him while the Republican strums his guitar. Social media users have used other methods to falsely claim that Swift has endorsed Trump.
X labeled a video that falsely claimed Swift supported Trump as “manipulated media.” The video, posted in February, used footage of Swift at the 2024 Grammy Awards and made her appear to hold a sign that read, “Trump Won. The Democrats Rigged!”
Political campaigns are preparing for how AI will affect elections.
Harris' campaign is putting together a cross-functional team to “prepare for the potential impact of AI in this election, including the threat of malicious deepfakes,” spokeswoman Mia Ellenberg said in a statement, adding that the campaign is only allowing AI to be used for “productivity tools” such as data analysis.
The Trump campaign did not respond to a request for comment.
One challenge in curbing fake or doctored videos is that the federal laws governing social media don't specifically address deepfakes. Section 230 of the 1996 Communications Decency Act shields social media companies from liability for content their users post unless the companies help develop or control it.
But for years, tech companies have come under fire for the content that appears on their platforms, and many social media companies have instituted content moderation guidelines to address the issue, including banning hate speech.
“It's a real tightrope walk for social media companies and online operators,” said Joanna Rosen Forster, a partner at law firm Crowell & Moring.
Lawmakers are working to address the issue by proposing bills that would require social media companies to remove unauthorized deepfakes.
Gov. Gavin Newsom said in July he would support legislation that would make it illegal to use AI to alter people's voices in election ads. His comments were in response to billionaire Elon Musk, who owns X, sharing a video of him using AI to replicate Harris' voice. Musk, a Trump supporter, later clarified that the video he shared was a parody.
The Screen Actors Guild-American Federation of Television and Radio Artists is among the groups pushing for legislation to combat deepfakes.
Duncan Crabtree-Ireland, national executive director and chief negotiator for SAG-AFTRA, said social media companies aren't doing enough to address the issue.
“Misinformation or outright lies spread by deepfakes can never be undone,” Crabtree-Ireland said. “Especially since elections are often decided by close margins and by complex, arcane systems like the Electoral College, lies spread by deepfakes can have devastating real-world effects.”
Crabtree-Ireland has experienced this issue firsthand: Last year, he was the subject of a deepfake video that went viral on Instagram during a contract ratification drive. The video, which featured false images of Crabtree-Ireland urging union members to vote against the contract he had negotiated, was viewed tens of thousands of times. The video was captioned “deepfake,” but he received dozens of messages from union members asking about it.
He said it took several days for Instagram to remove the deepfake video.
“I felt it was very abusive,” Crabtree-Ireland said. “My voice and my face should not be stolen to make claims that I don't agree with.”
With Harris and Trump in such a close race, it's not surprising that both candidates are turning to celebrities to appeal to voters. Harris' campaign has embraced pop star Charli XCX's description of Harris as "brat" and used popular songs such as Beyoncé's "Freedom" and Chappell Roan's "Femininomenon" to promote the Democratic presidential candidate, who is Black and South Asian American. Musicians Kid Rock, Jason Aldean and Ye, formerly known as Kanye West, have voiced their support for Trump, who was the target of an assassination attempt in July.
Swift, who has been the target of deepfakes before, has not publicly endorsed a candidate in the 2024 presidential race but has criticized Trump in the past. In the 2020 documentary "Miss Americana," Swift said in a tearful conversation with her parents and team that she regretted not speaking out against Trump in the 2016 election, and slammed Marsha Blackburn, the Tennessee Republican who was then running for U.S. Senate, as "Trump in a wig."
Swift's spokeswoman, Tree Payne, did not respond to a request for comment.
AI-powered chatbots from platforms like Meta, X, and OpenAI allow people to easily create fictional images. News outlets have found that X's AI chatbot Grok can generate election fraud images, but other chatbots are more limited.
Meta AI's chatbot declined to create an image of Swift supporting Trump.
“We are not allowed to generate images that may spread misinformation or give the impression that a public figure endorses a particular political candidate,” Meta AI's chatbot responded.
Meta and TikTok pointed to their efforts to label AI-generated content and partner with fact-checkers. TikTok, for example, said it does not allow AI-generated videos that falsely depict public figures endorsing political positions. X did not respond to a request for comment.
When asked how Truth Social moderates AI-generated content, the platform's parent company, Trump Media and Technology Group, accused journalists of “demanding more censorship.” Truth Social's community guidelines include rules prohibiting scam and spam posts, but do not specify how it will handle AI-generated content.
As social media platforms face the threat of regulation and lawsuits, some misinformation experts question whether social networks have any incentive to adequately moderate misleading content.
Because social networks make most of their revenue from advertising, keeping users on the platform longer is "good for business," Farid said.
“It's the absolute most conspiratorial, hateful, obscene, angry content that captures people's attention,” he said. “That's just the nature of who we are as people.”
It's a harsh reality that even Swifties can't shake.
Staff writer Michaël Wood contributed to this report.