Google is putting even more Gemini AI in Search
It’s the start of Google’s yearly I/O developer event today. But Google doesn’t seem all that interested in telling developers what they’ll be able to build — it’s more interested in telling regular users all the cool stuff that the company’s Gemini AI can do right now, and will be able to do in the future. Presenters really wanted us to know we’re in the “Gemini era.”
What does that mean for regular users? More or less, it’s all about smarter searching that pulls more, and deeper, information from more websites and organizes it in smart ways. It all starts with “AI Overviews,” the new, fancy version of the Rich Results you currently see before (and sometimes to the right of) standard text results. AI Overviews will soon graduate from the walled-off Search Labs area to standard users in the US, with Google hoping to expand them to “over a billion people by the end of the year.”
These auto-generated results, created from indexed and crawled websites, will include the familiar “people also ask” queries, shopping results (which, of course, earn Google some ad revenue), and answers to more complex questions phrased in natural language. The example on the Google Search blog, also given live on stage, was “find the best yoga or pilates studios in Boston and show me details on their intro offers, and walking time from Beacon Hill [Boston].” The AI-generated results include the local studios in card form with a map showing their locations relative to the smartphone user. Pretty standard stuff.
It’s worth pointing out that Google claims people are “visiting a greater diversity of websites for help with more complex questions” with this tool. But how that actually translates into traffic to said websites went unexplained. AI-powered general results, with their source context hard to dig into or purposefully obscured, are a concern for the sustainability of the very websites Google Search is built upon.
A much more impressive demo involved capturing live video with Google Lens, asking aloud about what was on screen, and being delivered relevant results. The presenter took a video of an analog record player, asked why “this” wasn’t staying in place, and was given step-by-step troubleshooting tips for fixing the tonearm of that exact model of turntable. That’s more of the “magic” feeling Google was hoping for throughout its presentation… though you’ll only get to try it out in Search Labs, sometime “soon” in the US.
Author: Michael Crider, Staff Writer
Michael is a former graphic designer who’s been building and tweaking desktop computers for longer than he cares to admit. His interests include folk music, football, science fiction, and salsa verde, in no particular order.