The city of San Francisco filed a major lawsuit on Thursday against 18 websites and apps that generate unauthorized deepfake nudes of unsuspecting victims.
The complaint, which was made public without naming the defendant services, targets the “proliferation of websites and apps offering to 'undress' or 'nudify' women and girls.” It alleges that these sites were visited more than 200 million times in the first six months of 2024.
“This investigation has reached the darkest corners of the internet, and I am truly terrified for the women and girls who have had to endure this exploitation,” San Francisco City Attorney David Chiu said in announcing the lawsuit. “Generative AI holds great potential, but as with all new technology, there are unintended consequences and bad actors who will seek to exploit it for their own purposes.”
“This is not innovation, this is sexual abuse,” Chiu added.
While celebrities like Taylor Swift are frequent targets of such deepfakes, he pointed to a recent case in the news involving a California middle school student.
“These images, which are nearly indistinguishable from real photographs, are used to intimidate, bully, threaten and humiliate women and girls,” the city's release said.
The rapid spread of so-called “non-consensual intimate images” (NCII) has led governments and organizations around the world to make efforts to curb the practice.
“Victims face significant obstacles and have little to no recourse to remove the images once they have been distributed,” the lawsuit states. “Victims suffer severe psychological, emotional, financial and reputational harm, as well as a loss of control and autonomy over their bodies and images.”
What's even more troubling, Chiu said, is that some sites “allow users to create child pornography.”
The use of AI to generate child sexual abuse material (CSAM) is especially harmful because it severely hinders efforts to identify and protect real victims. The Internet Watch Foundation, which tracks the issue, said known pedophile rings have already adopted the technology and that AI-generated CSAM could “overwhelm” the internet.
A Louisiana law that explicitly bans AI-created CSAM went into effect this month.
Big tech companies have promised to prioritize child safety when developing AI, but Stanford researchers have found that such images are already present in the datasets used to train AI models.
The lawsuit seeks to require the services to pay $2,500 for each violation and to cease operations, and it also demands that domain name registrars, web hosts and payment processors stop providing services to organizations that create deepfakes.