Eighteen months ago, I was at a party in San Francisco celebrating generative AI as the next industrial revolution. The atmosphere was not entirely lighthearted. "AI is going to destroy our way of life," one party-goer said. "We are like farmers tending our crops, unaware of the machines that are going to consume us all."
It's fair to say that generative AI hasn't gobbled up much yet. Accountants, designers, software engineers, filmmakers, translators and all the other professions we were told faced disaster are still being hired. Elections haven't been disrupted. The world keeps turning. Those early warnings are starting to sound like a weird form of marketing.
Silicon Valley tends to be associated with optimism. The indomitable sense that the world is on an upward trajectory is one of the tech industry's most endearing qualities. When aspirational plans go awry (such as Elon Musk's insistence that a crewed spacecraft would fly to Mars by 2024), the world can be forgiving. There's a recognition that optimistic ambition is a good thing.
But California isn't just nurturing optimism: some in the tech industry are feeling fearful.
At the most extreme end are the survivalists who worry about societal collapse. For some, this means buying up land in New Zealand or stockpiling water. For others, it can be a business strategy. The software and consultancy group Palantir is known for using its quarterly earnings calls to inform investors of the potential destruction of the planet. The existential musings add to the company's appeal. Palantir is publicly traded and has been around for more than 20 years, yet it is still described as an "enigmatic" company.
Fear-mongering about tech products isn't necessarily unhelpful: claims that social media is addictive and violates privacy may unnerve users, but they haven't alienated advertisers.
Take Facebook: its shares fell in 2018 after it was revealed that Cambridge Analytica had harvested user data and used it in experiments that supposedly swayed election outcomes. Not only did the stock recover within a year, but the company's market capitalization has since doubled. Being seen as powerful enough to influence world politics made the platform sound more impressive, even if it wasn't true. (There is still little evidence that "psychographic" profiling sways voters.)
In AI, worriers have found a place to dump all their fears. Last year, OpenAI co-founder Sam Altman joined a group of scientists and other executives in signing a statement saying that the risk of AI-induced annihilation should be treated as a global priority. Other tech leaders have called for a six-month pause on research, citing "profound risks to society and humanity." Goldman Sachs has estimated that the technology could automate the equivalent of 300 million full-time jobs.
Many of these concerns are undoubtedly real. But one side effect is to inflate awe of the technology and set it up for disappointment. When OpenAI released Sora, its AI video-generation app, one critic called it "one step closer to the end of reality itself." Never mind that filmmakers who have used it find it less than inspiring.
As with all marketing, big claims tend to fall apart when people actually try the product. And as generative AI becomes more accessible through gadgets, Google Docs and multimedia platforms, questions are growing about whether it is all hype.
Some of the earliest consumer products, such as Humane's $699 clip-on AI Pin, have been poorly received: the technology news site The Verge reports that more of Humane's Pins have been returned than sold over the past three months.
Meta's Ray-Ban AI sunglasses have received better reviews. The sunglasses can tell you what you're looking at by taking a picture and identifying what's in it. The feature is neat, but it's not perfect. When I tried them out, I found the built-in speakers more useful. Other staffers in our San Francisco bureau seemed to feel the same way: they tried them on, dutifully asked the glasses to identify what they were looking at, then handed them back to me.
Perhaps one day the glasses will translate road signs, give directions and help the visually impaired. But commercial applications of the new technology won't come soon; we're still in the early stages of testing ideas. The challenge is reconciling this with the message that the technology is already scary. If we hadn't been repeatedly told that AI could kill us all, we might wait more patiently for AI's killer apps.