I still don't use generative AI regularly, even as Gemini has taken over seemingly every Google product lately. And while AI is more important than ever on the Pixel 9 series, I'm frustrated by the same fundamental flaws in generative AI that the Pixel 9 can't escape.
This issue of the 9to5Google Weekender is part of our relaunched 9to5Google newsletter, breaking down the biggest news from Google along with commentary and other fun facts. Sign up here to get it delivered early to your inbox.
One thing that's been an issue for me with generative AI from the start, especially when it's used to “power” search, is that the technology is often confidently wrong. You might get 10 correct responses, but if the next one spits out incorrect information as absolute fact, I don't think the whole thing is worth it.
This is a problem that Google’s generative AI, at least, has yet to solve.
Confident misinformation seems to be a roadblock for Google’s AI ambitions more than for any other company’s. When I think about this topic, I can’t help but think of “glue pizza.”
With the Pixel 9 series, Google is pushing AI to the forefront, but after just a week of using it, I've come across this false sense of confidence so many times that it's reminded me why I rarely use AI tools.
One example from this week: I had an afternoon meeting, and by the time I was heading to it (around 3pm), the restaurant I'd planned on was closed. So, on the way there, I asked Gemini for other options that were actually open near the original location. The specific prompt was, “Can you tell me any other restaurants that are open near (insert restaurant name)?”
Gemini instead spat out the exact place I had just mentioned in the prompt.
I tried to correct the AI by asking which options were actually open, and it repeated the same places. I told it again that the places it listed were closed, and, lo and behold, it repeated the same thing.
On the other hand, opening Google Maps and doing a 20-second search gave me the answer I needed with much less stress.
The whole point of generative AI in an assistant is to get better answers, ones that can be reasoned about rather than just looked up, but Gemini still fails at this too often. In this example, it's not that Gemini didn't have the information available to it – in fact, it directly surfaced the location's opening hours – it just couldn't use that information.
Gemini had all the information it needed at 2:45pm but chose to ignore it.
I found the same to be true with Pixel Screenshots.
One thing that excited me about the app is that you can take a screenshot of an online order and ask the AI to find an estimated arrival or order date. In practice, though, it's pretty underwhelming. For example, I asked it to find the date I bought a shirt (I ordered it on August 15th and had taken a screenshot of the checkout page), but it instead pulled up a totally unrelated Amazon return and claimed I had ordered the shirt a few weeks later, on September 9th. Further attempts after adding more screenshots to the app didn't improve the situation. At one point it said I ordered the shirt on August 19th, the day I asked the question, while in the screenshot below, the app says I ordered it on August 9th, just a few days before I started using the phone – a date that isn't listed in any of the screenshots I've taken.
However, this one is a bit easier to forgive. The Pixel Screenshots app only works off the information in the screenshots you give it. It doesn't have tracking numbers available, and it can't read your emails, so the information it can draw on is limited. Ultimately, that's fine by me – if it had simply said it couldn't find an answer, I'd understand, because this isn't a particularly easy task.
I still don't know where this date came from
The problem is that it delivers an answer with the same confidence whether that answer is correct or a complete fabrication.
This is why something like Gemini Live doesn't appeal to me. Google's conversational AI assistant is undoubtedly impressive in how it talks and responds to what you say. But that doesn't stop it from spewing out completely false information as absolute fact. Andrew Romero, who tried out the Pixel 9 this week, ran into an issue with Gemini Live where the AI insisted he was planning a trip to Florida, despite him repeatedly telling it that he wasn't. This is clearly an AI hallucination, but it really reinforces the lack of trust I have in these products.
Until these issues are resolved, I think there's a big hole in Google's vision for AI in the Pixel 9 series that a little disclaimer simply can't fix. The practicality and whether anyone wants these features is up for debate, but they're here, and I just want to be able to have some confidence in them.
This week's top stories
Pixel 9 review
Following its announcement earlier this month, our first reviews of the Google Pixel 9 series are here. Stay tuned for our camera-focused review of the Pixel 9 Pro XL, our review of the Pixel 9 Pro Fold, and our reviews of the Pixel Watch 3 and Buds Pro 2, all coming soon.
MORE TOP STORIES
From the rest of 9to5
9to5Mac: AirPods Max 2 Coming Soon: What to Expect
9to5Toys: Amazon officially announces massive Prime Big Deal Days event coming in October
Electrek: Tesla Semi spotted in Europe, but why?
Follow Ben: Twitter/X, Threads, Instagram
FTC: We use automated affiliate links that generate revenue. Learn more.