“Photoshop has been around for 35 years” is a common response to concerns about generative AI, and if you ended up here, it's probably because someone made that claim in a comment thread or on social media.
There are countless reasons to be concerned about how AI image editing and generation tools will affect our trust in photography, and how that trust (or lack of it) can be used to manipulate us. We know this is bad, and it's already happening. So to save everyone time and energy, and to spare our fingers from typing the same responses over and over, we're compiling the most common arguments and our rebuttals to them in this article.
Ultimately, sharing this will make things much more efficient, just like AI. Isn't that great?
Argument: “Photoshop already allows you to manipulate images like this.”
It's easy to make this argument if you've never manually edited a photo in an app like Adobe Photoshop, but it's a frustratingly simplistic comparison. Suppose a bad actor wants to manipulate an image to make it look like someone has a drug problem. They'd need to do a few things:
1. Have access to desktop software (which can be expensive). Mobile editing apps exist, but they aren't much good for anything beyond minor tweaks like skin smoothing and color adjustments, so this work requires a computer: a pricey investment just to mess with people on the internet. While some desktop editing apps are free (Gimp, Photopea, etc.), most professional-level tools cost money. Adobe's Creative Cloud apps are some of the most popular, but their subscriptions ($263.88 a year for Photoshop alone) are notoriously difficult to cancel.
2. Find the right photos of drug paraphernalia. Even if you have some on hand, you can't just paste in any old image and expect it to look good. You have to account for the lighting and perspective of the photos you're adding so they all match up: the reflections on the bottles should hit them from the same angle, for example, and an object shot at eye level will look obviously fake if inserted into an image taken from a more oblique angle.
3. Understand and use a variety of complex editing tools. Each insert has to be cut away from its original background and blended seamlessly into its new environment, which might mean adjusting color balance, tones, and exposure levels, smoothing edges, and adding new shadows and reflections. It takes both time and experience to make the result look even passable, let alone natural.
Photoshop has some genuinely handy AI tools that make this easier, like automatic object selection and background removal. But even with those, it still takes a meaningful amount of time and energy to work through a single image. By contrast, here's what The Verge editor Chris Welch did to achieve the same result using the Google Pixel 9's “Reimagine” feature:
Open the Google Photos app on your phone. Tap an area of the photo, then tell it to add a “syringe filled with red liquid” and a “thin line of crushed chalk,” plus some wine and a rubber tube.
The Google Pixel 9's “Reimagine” tool is smart enough to take into account angles and rug textures. Image: Chris Welch
That's it. A similarly simple process exists on Samsung's latest smartphones. The skill and time barriers haven't just been reduced; they're gone. Google's tool is also scarily good at blending generated material into your image: lighting, shadows, opacity, and even focus are all taken into account. Photoshop has a built-in AI image generator of its own, but its results are often far less convincing than what this free app on a Google phone spits out.
Image manipulation and other methods of fakery have been around for nearly 200 years, almost as long as photography itself (19th-century spirit photos and the Cottingley Fairies are good examples). But making those changes took skill and time, so we never learned to inspect every photo we see. For most of photography's history, manipulation was rare and unexpected. Smartphone AI, by contrast, lets any fool mass-produce manipulated images at a frequency and scale we've never experienced before. It's clear why that's alarming.
Argument: “People will adapt to this being the new normal.”
Just because some of us are good at spotting when an image is fake doesn't mean everyone is. Not everyone lurks around tech forums (we love you, lurkers), so the telltale signs of AI that are obvious to us can be easily missed by those who don't know what to look for, if those signs are present at all. AI is rapidly getting better at generating natural-looking images that don't have seven fingers or Cronenberg-esque distortions.
Deepfakes may have been easy to spot when they only occasionally made their way into your feed, but the scale of their production has changed dramatically in the past two years alone. They're now so incredibly easy to create that they're ubiquitous. We're moving dangerously close to a world where we have to be on guard against being fooled by every single image put in front of us.
And when everything could be fake, it's much harder to prove something is real. That doubt is easy to exploit, allowing people like former President Donald Trump to falsely claim that Kamala Harris manipulated attendance figures at her rallies.
Argument: “Photoshop was another big barrier-lowering technology, but things ended up working out.”
AI is certainly much easier to use than Photoshop, but Photoshop was itself a technological revolution that forced people to confront a whole new world of fakery. And Photoshop and other pre-AI editing tools did create societal problems that persist to this day and still cause real harm. The ability to digitally retouch photos for magazines and billboards fostered impossible beauty standards for both men and women, with the latter disproportionately affected. In 2003, for example, a then-27-year-old Kate Winslet was slimmed down without her knowledge for the cover of GQ, with the British magazine's editor, Dylan Jones, justifying the alteration by saying she looked “no different from any other cover star.”
That kind of editing was widespread and largely undisclosed, even as early blogs like Jezebel made headlines by publishing the unretouched photos behind fashion magazines' celebrity covers. (France has even passed a law requiring disclosure of retouching.) And with the arrival of easy-to-use tools like Facetune on burgeoning social media platforms, editing became even more insidious.
One 2020 study found that 71% of Instagram users edit their selfies with Facetune before posting them, and another found that media images harm women's and girls' body image regardless of whether the images carry labels disclosing that they've been digitally altered. There's a direct pipeline from social media to real-life plastic surgery, sometimes in pursuit of physically impossible results. And men aren't exempt: social media has a real, measurable effect on boys and their self-image, too.
Impossible beauty standards aren't the only problem, either. Staged and edited photos can mislead viewers, undermine trust in photojournalism, and even reinforce racist narratives, as in 1994, when a Time magazine cover photo illustration darkened O.J. Simpson's arrest photo.
Generative AI image editing doesn't just amplify these problems by lowering the barrier even further; it sometimes does so with no explicit instruction at all. AI tools and apps have been accused of enlarging women's breasts or putting them in more revealing clothing without being told to do so. Beyond viewers not being able to trust that what they're seeing is real, photographers can no longer trust their own tools.
Argument: “There should be laws to protect us.”
First, writing good speech laws (and these would, frankly, almost certainly be speech laws) is very difficult. Regulating how people can create and publish edited images requires distinguishing between uses that are overwhelmingly harmful and uses that many people consider valuable, such as art, commentary, or parody. Lawmakers and regulators would also have to reckon with existing protections for free speech and access to information, including the First Amendment to the U.S. Constitution.
Second, big tech companies seem to have rushed full speed into the AI era without giving regulation much thought at all: governments around the world are still struggling to enact laws that could rein in those abusing generative AI (including the companies building it), and efforts to develop systems that can reliably distinguish real photos from doctored ones have been slow and so far woefully inadequate.
Meanwhile, simple AI tools are already being used to try to sway voters, generate images of undressed children, and create grotesque deepfakes of celebrities like Taylor Swift. And that's just the past year; the technology will only keep improving.
In an ideal world, proper guardrails would have been in place before free, foolproof tools that can add bombs, car crashes, and other nasty elements to photos in seconds landed in our pockets. Perhaps we're already too far gone. Optimism and willful ignorance won't solve this problem, and at this point it's not clear what will, or whether it can be solved at all.