Lawmakers and activists are calling for federal legislation to criminalize AI-generated pornography, which they say is being used to ruin the lives of its victims, many of whom are women and girls.
“In the absence of clear laws at the federal and state levels, when victims go to police, they are often told there is nothing they can do,” Andrea Powell, executive director of the advocacy group Image-Based Sexual Violence Initiative, said at a recent online roundtable on the issue hosted by the nonprofit National Organization for Women (NOW).
“These people then go on to face threats of sexual violence and harassment offline, and unfortunately, we find that some [victims] do not survive,” added Powell, who called AI deepfake nude apps a “virtual gun” for men and boys.
The term “deepfake” was coined in late 2017 by a Reddit user who used open-source face-swapping technology built on Google's (GOOGL) machine-learning tools to create pornographic videos. Since ChatGPT brought generative artificial intelligence to the mainstream, AI-generated sexually explicit content has proliferated. Tech companies are racing to develop better AI photo and video tools, and some people are misusing them. According to Powell, a Google search turns up 9,000 websites hosting explicit deepfake content. And between 2022 and 2023, online deepfake sexual content increased by more than 400%.
“We're starting to see 11- and 12-year-old girls being afraid to use the internet,” she said.
Deepfake regulations vary by state. Ten states currently have laws on the books, six of which impose criminal penalties. Additional deepfake bills are pending in Florida, Virginia, California and Ohio, and San Francisco filed a landmark lawsuit this week against 16 deepfake pornography websites.
But advocates say that inconsistency among state laws creates problems, that federal regulation is long overdue, and that platforms, not just individuals, should be held liable for non-consensual deepfakes.
Some federal policymakers are working on this. Rep. Joe Morelle (D-NY) introduced the Preventing Deepfakes of Intimate Images Act in 2023, which would criminalize the non-consensual distribution of deepfakes. Shortly after Taylor Swift's deepfake nudes caused an internet uproar, lawmakers introduced the DEFIANCE Act, which would strengthen victims' right to sue in civil court. And a bipartisan bill called the Intimate Privacy Protection Act would hold tech companies accountable if they fail to address deepfake nudes on their platforms.
Meanwhile, victims and advocates are taking matters into their own hands. Breeze Liu was working as a venture capitalist when she was targeted by deepfake sexual abuse in 2020. She went on to develop an app called Alecto AI to help victims track and remove deepfake content that uses their likeness online.
Reflecting on her experience as a victim of deepfake abuse, Liu said in an online meeting with supporters: “It was such a horrible experience that I thought it would be better if I were dead.”
“We have struggled with the violation of our image online for far too long,” she added, “and I founded this company in the hope that one day we can all, and our future generations, take for granted that no one will lose their life to online violence.”
In addition to building Alecto AI, Liu has advocated for federal policy changes to criminalize non-consensual AI deepfake pornography, including Rep. Morelle's bill, but the Preventing Deepfakes of Intimate Images Act has not advanced since it was introduced last year.
Notably, some tech companies have already taken steps to address the issue: Google updated its policies on July 31 to mitigate non-consensual deepfake content. Others are facing pressure: Meta's (META) Oversight Board said in late July that the company needs to do more to address explicit AI-generated content on its platforms.