No, Taylor Swift has not endorsed Donald Trump. Yes, the huge crowds at Kamala Harris's rallies were real.
The very fact that these things need saying shows we're in the midst of an ongoing AI election nightmare, with examples of AI-generated disinformation around the 2024 election piling up fast.
Last week, Donald Trump falsely claimed that photos of large crowds at Kamala Harris rallies were generated by AI, and two days ago, Trump shared several images on Truth Social implying that Taylor Swift was endorsing him, some of which were clearly generated by AI. There was also news that Iranian groups were using OpenAI's ChatGPT to generate divisive content related to the US elections, and that Elon Musk's Grok AI model on X was spitting out false information about voting.
Following the widespread falsehoods of the 2016 and 2020 elections, the chaos generative AI would unleash this election season was predicted well in advance: Last December, Nathan Lambert, a machine learning researcher at the Allen Institute for AI, told me that AI would wreak "major chaos" on the 2024 election.
It certainly seems that way to me. With Kamala Harris poised to become the Democratic nominee, I've been surprised to see a flurry of discussion over whether the crowds at the Democratic National Convention and at Trump rallies are real or AI-generated. As The Washington Post reported yesterday, many of the AI fakes aren't necessarily meant to fool anyone. Rather, they can be potent memes meant simply to provoke, humiliate, or elicit a cheap laugh that pleases a candidate's supporters.
Either way, it feels like an insidious march toward mass self-doubt about what's real and what isn't. I find myself starting to question everything I see, assuming it's all AI-generated or frantically scanning photos for clues.
But things could get even worse. What about real-time, live deepfake video? A tool called Deep-Live-Cam has been trending on X over the past two weeks. Using a single image of Elon Musk, for example, a developer was able to swap their own face for Musk's and broadcast a convincing live video as the billionaire founder of Tesla and SpaceX. Combined with one of the easy-to-use AI voice clones now available, this kind of technology could take deepfakes to a whole new level.
"I've seen a lot of deepfake technology, and this is kind of scary," said Ariel Herbert-Voss, founder of RunSybil and OpenAI's first security researcher, adding that the deepfake camera output is "light invariant." That means that even as the lighting in a scene shifts, the AI-generated image "retains its signature," which makes it "harder to detect in the moment," he told Fortune.
Nor can we expect much help from the platforms where these images and videos are shared. Social media companies have "drastically scaled back" their election integrity teams, according to a panel hosted yesterday in Chicago by the University of Southern California's Annenberg School for Communication and Journalism. As a result, the panel warned, we will see a proliferation of AI-generated media, or deepfakes, before and after the 2024 election.
“It's only August, what's going to happen in December?” said Adam Powell III, executive director of the USC Election Cybersecurity Initiative.
With federal AI regulation on hold until after the election, it seems there’s little we can do except wait and hope we wake up from this AI election nightmare.
Sharon Goldman
[email protected]
AI in the News
Another AI copyright lawsuit was filed this week. According to Reuters, three authors filed suit yesterday in California federal court against AI model developer Anthropic, claiming the company used their books, along with hundreds of thousands of others, to train its AI chatbot Claude. The plaintiffs, authors and journalists Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, say that Anthropic "used pirated versions of their works and others to teach Claude to respond to human instructions." This follows other lawsuits filed by authors against generative AI companies, including one filed in December by 11 nonfiction authors against OpenAI and Microsoft, and another filed in September 2023 by more than a dozen authors, including John Grisham, against OpenAI.
Chip giant AMD acquires ZT Systems to compete with Nvidia. Yesterday, AMD announced that it had signed a definitive agreement to acquire ZT Systems, which provides AI infrastructure to major technology companies, for $4.9 billion. According to Axios, this “shows how far rival Nvidia is ahead of the pack in AI technology infrastructure.” Artificial intelligence requires more than just chips: it also needs the right software and networking. AMD's bid for ZT Systems is “a kind of admission that it was weak in this area.”
Controversy over California's AI bill. California Governor Gavin Newsom has yet to publicly take a stance on the state's flagship AI bill, SB-1047, but that hasn't stopped other lawmakers from speaking out. Rep. Nancy Pelosi called the bill "ill-informed," adding that the state should instead pass model legislation that gives "small entrepreneurs and academic institutions, not big tech companies, the advantage," TechCrunch reports. State Senator Scott Wiener, who introduced the bill, said in a statement that while he has "great respect" for Pelosi, he "respectfully and strongly disagrees with her comments."
LVMH CEO Bernard Arnault's family office has been on the hunt for AI startups. According to CNBC, LVMH founder and CEO Bernard Arnault has made a series of artificial intelligence investments this year through his family office, Aglaé Ventures. The largest such round this year, according to family office database Fintrx, went to H (formerly known as Holistic AI), a French startup working toward "full general artificial intelligence." Fintrx said the office's funding rounds in AI companies have totaled more than $300 million.
Fortune on AI
Alphabet's robotaxi service hits new milestone as ridership doubles in just a few months — Jessica Mathews
TSMC's first European factory will boost EU's semiconductor ambitions, but Intel's big decision is yet to be made – David Meyer
These boom-and-bust tech cycles show that even when AI investments decline, they bounce back quickly — Jeff Grabow (Commentary)
The number of Fortune 500 companies flagging AI risks has increased by 473.5% — Jason Marr
Women are using ChatGPT to catch men who lie about their height on dating apps — Sydney Lake
AI Calendar
August 28: NVIDIA earnings report
September 10-11: AI Conference, San Francisco
September 10-12: AI Hardware and AI Edge Summit, San Jose, CA
September 17-19: Dreamforce, San Francisco
September 25-26: Meta Connect, Menlo Park, CA
October 22-23: TedAI, San Francisco
October 28-30: Voice & AI, Arlington, VA
November 19-22: Microsoft Ignite, Chicago, IL
December 2-6: AWS re:Invent, Las Vegas, NV
December 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, BC
December 9-10: Fortune Brainstorm AI San Francisco (Register here)
Focus on AI research
AI weather forecasting breakthrough during hurricane season. Nvidia has announced new research that uses AI to predict extreme weather and improve short-term forecasts. Nvidia claims its new generative AI model, StormCast, better simulates extreme weather events at the kilometer scale. According to Axios, AI weather and climate models from Nvidia, Microsoft, Google, and other researchers have so far shown progress using AI and machine learning to create medium-range global forecasts that match or surpass traditional physics-based models run on supercomputers. Beyond more accurate forecasts, the new model could help scientists take global climate change projections and apply them more accurately at local scales. "We believe we are at a moment when AI can compete with physics in storm-scale forecasting," Mike Pritchard, a co-author of the study and a climate scientist at Nvidia, told Axios.
Brain Food
Is taking an anti-generative AI stance good for business? Popular iPad design app Procreate was in the spotlight on X yesterday after it posted a video in which CEO James Cuda said he "really hates generative AI" and vowed never to add generative AI features to its products. The video, which has been viewed more than 8.5 million times, clearly hit a nerve, especially with artists who have protested the training of AI models on copyrighted images. But it also raises the question of whether taking an anti-generative AI stance is simply good for business in some cases. After all, Google pulled an Olympics ad featuring an AI writing a little girl's fan letter to her favorite Olympian after it received significant backlash online. Apple, too, faced backlash earlier this year over an ad in which creative tools were crushed by a giant hydraulic press and replaced with an iPad Pro.