I asked Gemini to “reimagine” the background of this Pixel 9 group shot (originally on beige paper) as a “sci-fi lunar landscape,” then used “Auto Frame” to zoom out from the initially tight shot. Might that explain why we see another moon on this lunar surface?
Credit: Kevin Purdy / Gemini AI
Google has made its AI assistant, Gemini, the centerpiece of its pitch to reviewers and the general public; Gemini, the company says, is what will differentiate Pixel phones from other Android phones. In fact, Google's keynote didn't get to the Pixel's hardware until 24 minutes in, after a few failed live AI demos.
I've been using the Pixel 9 Pro as my everyday phone for about a week. Aside from the physical design, the Pixel 9 has very few new features that aren't tied to Gemini in some way. So in this review, I'll look at how Gemini works on the Pixel 9, which is Google's primary platform for it at the moment. Some of the Pixel 9's AI-powered features may make it to other Android phones in future Android releases, but that's not a certainty; AI is one of the things Google uses to set the Pixel apart, along with free trials, custom Google-designed chips, and OS integration.
I've written separate reviews for the three flagship Pixel 9 devices. But it's odd to think of the Pixel 9 as a hardware-only product. Simply put, the phone itself is a great evolution of the Pixel series, perhaps the best version Google has ever made, and it's priced to reflect that. If you love your Pixel phone and are eager to upgrade, and plan to ignore Gemini in particular and the AI features in general, this may be all you need to know.
But if you buy a Pixel 9 Pro, Pro XL, or Pro Fold (coming later), with Pro prices starting at $1,000, you get a free year of Gemini Advanced ($240 per year thereafter), and Gemini is suggested in every Google-made corner of your device. So let's talk about Gemini as your phone's task assistant, image editor, and screenshot librarian. During my week with the Pixel 9 Pro, I used Gemini as much as I felt was reasonable.
I am quite new to general-purpose AI chatbots and prompt-based image generation, and I had never used an “advanced” model like Gemini Live before, so someone with more experience and enthusiasm could likely get more out of Google's Gemini tools than I did. I'll cover Google's approach to on-device AI and its energy impact in a separate post.
Gemini, in general: Like a very fast blogger working for you
While testing the Pixel 9 Pro, I had access to both the most advanced version of Gemini, the “Advanced” model itself (with a one-year free trial included for Pixel 9 Pro buyers), and its conversational voice mode, “Gemini Live.” Were they helpful?
It's as if you hired a blogger who, at the touch of a button, works much faster and with far less frustration than a human. This blogger is a competent, if not stylish, writer who can quickly research and compile facts and advice. But this blogger is easily distracted, and not someone you can inherently trust with important decisions, at least not without further research into the sources they cite.
I know this all too well. During my time at Lifehacker, I was a fast-writing blogger who posted six articles a day. In the late 2000s, I was in my mid-to-late 20s and simply lacked the knowledge and experience necessary to confidently write about any subject under the broad headings of “technology,” “productivity,” and “little things that, if we just thought about them, might improve our lives.”
But I was certainly able to search, read, triangulate, and come up with reasonable summaries and suggestions drawn from several sites and blogs. Depending on how you look at it, I was either an astute general-assignment writer, a talented bluffer, or both.