OPINION Companies love to use familiar phrases in unfamiliar ways. "We respect your privacy" is the digital equivalent of being mugged for your phone while being paid compliments. And "partnership"? That usually means someone with more money buying the compliance of someone with more credibility.
There is a more accurate technological use of the word "partner": the kind you find in a toxic relationship. Windows is that partner. It keeps doing things you've told it to stop doing. It promises to change, and doesn't. It always wants control. Want proof? Recall is back.
Readers may remember that Recall was a big part of Microsoft's AI/ML strategy for Windows 11. It builds a searchable timeline of desktop activity by continually taking snapshots of work in progress and feeding them into an AI analytics engine. Microsoft couldn't say how this universal automated snooping squared with corporate privacy and data protection policies, because it didn't. After wave upon wave of criticism, Recall was recalled for unspecified fixes.
Now Recall is back, and it is still unclear what has been fixed. Microsoft really wants us to have Recall, even though nobody asked for it. If it were genuinely wanted, there would be ways to deliver it without all this AI/ML centralization nonsense. Building a giant database of work done across multiple apps and services, even one stored locally, is a bad idea: it is an irresistibly attractive target, and it simply isn't worth it. In that sense, Recall is a microcosm of how misapplied AI/ML risks bringing on a new AI winter.
AI winters come around periodically, much like the seasonal kind, and their mechanics are as undeniable as climate change. A technology is declared to be AI in embryo, needing only the warm, fluffy hen's bottom of massive investment to hatch into a miraculous giant robot god. The egg never lives up to the billing, and the resulting stink puts everyone off AI for a decade or two until the memory fades.
So where does this leave us? The good questions remain: is it useful, is it valuable, can it sustain an industry? Targeted machine learning that exploits the vast capabilities of modern hardware is doing good work in medicine, science, and engineering. These are fields that nurture niche technologies of remarkable ingenuity and power, but they do not generalize. Going from vertical to horizontal, to something that delivers the greatest benefit to the most people, is a very different proposition.
It's not at all clear that consumers want the AI/ML bait being dangled in front of them. Google is betting big here: the Pixel 9 launch was pitched more around Gemini and the phone as a platform for AI/ML apps than around regular flagship smartphone features. Reviewers have yet to find a single feature that justifies the shift in focus. Much the same goes for the hundreds of very clever yet very forgettable online AI/ML services. Not only is there no business model, there's little sign anyone will use them.
That may be just as well. Ask Google "How much has Google invested in AI?" and the same AI now built into its search engine replies: "In April 2024, Google CEO Demis Hassabis said Google will spend more than $100 billion." That, dear reader, is a direct copy and paste, and it will be news to Google's actual CEO, Sundar Pichai. Google's flagship AI, built into Google's flagship product, doesn't know who Google's CEO is, and the company has arranged for that answer to be the first line of the first result on the screen.
This is not good. This is far from good.
Would spending billions of dollars actually help? Probably not. Microsoft is already spending nearly $19 billion a quarter on AI/ML infrastructure, yet it recently had to publicly remind people that its AI isn't entirely trustworthy. ChatGPT, the flag-bearer for general-purpose AI, already seems to have saturated its market.
Even Copilot, Microsoft's flagship Windows AI/ML tool, is getting the cold shoulder in the market that trusts the company most: enterprise computing. COO after COO is saying "not now," because the purported benefits don't outweigh the risk of wrecking data governance.
With no revenue model, the initial excitement fading, and the mood shifting from cold to hostile, the reasoning behind Recall's resurrection perfectly illustrates the only way forward through the gathering storm: the bet that AI/ML is too smart to fail and must become a huge mass market, and the bet that the massive investment required to make that miracle happen will crowd out everyone else.
Technology doesn't work that way. VR/AR is another example: outside niche areas, the experience isn't worth the hassle. Self-driving cars have stalled because the hard parts are far beyond what money can solve; there's no point in getting 80 percent of the way there when the remaining 20 percent can kill you. AI/ML has no clear path to adequate reliability, no sensible business model, and no way to safely feed its massive, indiscriminate appetite for data.
This is not even an AI bubble. Bubbles happen when lots of people pile into something that is fundamentally unsustainable. Not many people are piling into general-purpose AI/ML; a few are spending a very great deal of money. If you're not turning your smartphone into an AI/ML platform, what else are you going to do? If you're not turning your productivity platform into an AI/ML platform, what else are you going to do?
If the economic payoff of widespread AI/ML is not to be a mirage, it must become real soon. Winter is coming. For once in our industry, that phrase means exactly what it says. ®