Google Lens may soon listen to audio prompts
Google is testing audio prompts in Lens, letting users refine their search queries without tapping the screen.
Search is Google’s primary business, and the company recently admitted that Chrome exists for the sole purpose of making Search accessible. Tools like Google Lens and the Assistant are in the same boat, serving as vehicles to enhance Search; one focuses on visual cues, while the other listens to questions and commands. Now, it appears that Google is testing audio prompts for Lens, specifically to refine queries the way MultiSearch does.
Google Lens works with all our favorite Android phones, and all you have to do is point your camera at the subject and tap the shutter button to get an answer. Lens can help you translate text in real time, copy printed text to your device’s clipboard, do your homework, and find prices for items you want to buy. Google recently realized that Lens is only a starting point for a user’s search experience, and users often go to the Google search bar or a new browser tab to conduct follow-up searches with a more detailed query in mind. This gave rise to MultiSearch and MultiSearch in Lens last year.
MultiSearch gives you the option to refine your search query right in Lens. This means you can start by snapping a photo of a dress you like, and then launch MultiSearch to find the other colors it’s available in. While this saves you the hassle of opening a new browser tab and searching, it still requires taking a picture of the item, tapping again to open MultiSearch, and typing your query manually.
AssembleDebug, a popular feature spotter and tinkerer behind the GAppsLeaks channel on Telegram, recently spotted Google quietly testing a new way to streamline Lens even further. With some hidden flags enabled in the app, the new feature lets you point your device at an item and speak your search refinements out loud.
The new workflow removes all screen taps, making Lens much easier to use, even when you’re looking for specific information from Search. Less interaction with the screen saves time and also helps with accessibility. That said, AssembleDebug doesn’t mention whether Google is using the Assistant’s Voice Match technology behind the scenes, but we suspect it would be useful for filtering out crosstalk and noise while you speak.
Since this new feature in Lens is still hidden behind an in-app flag, it’s clear that testing is underway. However, Google hasn’t said anything about such a big change to Lens yet. Until we see an official announcement or public beta testing, we don’t have high hopes of it reaching the average Lens user in the near future.