
Everything you need to know about the new Google Lens feature

If you can't capture what you're looking for with just a photo, Google Lens now lets you shoot a video and even use your voice to ask a question about what you see.

The feature, which will display an AI overview and search results based on the content of the video and your question, is rolling out in Search Labs on Android and iOS today.

Google first previewed the use of video for search at its I/O conference in May. For example, Google says that someone curious about the fish they see in an aquarium can hold their phone up to the exhibit, open the Google Lens app, and then hold down the shutter button. Once Lens starts recording, they can ask, “Why are they swimming together?” Google Lens then uses the Gemini AI model to provide the answer. 

Speaking about the technology behind the feature, Rajan Patel, Google's VP of engineering, told The Verge that Google captures video "as a series of image frames and then applies the same computer vision techniques" previously used in Lens, but Google is taking it a step further by passing the information to a "custom" Gemini model that was developed to "understand multiple frames in a sequence and then provide an answer on the web."
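The pipeline Patel describes — treating a video as a series of image frames, then handing a sequence of frames plus a question to a multimodal model — can be sketched in a few lines. This is purely illustrative: the function names (`sample_frames`, `build_request`), the frame budget, and the request shape are assumptions, not Google's actual API.

```python
# Hypothetical sketch of the described pipeline: a video is treated as a
# sequence of image frames, a subset is sampled, and the frames plus the
# user's question are packaged for a multimodal model. All names here are
# illustrative, not Google's implementation.

def sample_frames(frames, max_frames=8):
    """Uniformly sample up to max_frames frames from the full sequence."""
    if len(frames) <= max_frames:
        return list(frames)
    step = len(frames) / max_frames
    return [frames[int(i * step)] for i in range(max_frames)]

def build_request(frames, question):
    """Bundle the sampled frames with the spoken question."""
    return {
        "frames": sample_frames(frames),
        "question": question,
    }

# Example: a 30 fps, 3-second clip yields 90 frames; we keep 8 of them.
video = [f"frame_{i}" for i in range(90)]
req = build_request(video, "Why are they swimming together?")
print(len(req["frames"]))  # 8
```

Uniform sampling is just one plausible strategy; a production system would likely pick frames adaptively (e.g., by scene change) before passing them to the model.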

Google Lens is also updating its image search feature with the ability to ask a question using your voice. To try it out, point the camera at a subject, hold down the shutter button, and then ask your question. Prior to this change, you could only type your question into Lens after taking a photo. Voice questions are rolling out globally on Android and iOS, but are only available in English for now.
