Google announced the new Pixel Screenshots app at its Made by Google event. The app launches alongside the Pixel 9, Pixel 9 Pro, and Pixel 9 Pro Fold, and uses Gemini AI to help you locate specific images.
After you give the app access to your photos, the AI will not only ingest files it thinks are screenshots, but it will also start identifying what’s inside each image.
On the home page, you’ll see a row at the top called “Collections,” with a series of pre-organized snaps like “Gift Ideas,” “Shoes,” or “Places to Visit,” which you can organize yourself or have the system suggest.
Below that row is a grid of all your recent snaps, and at the bottom is a search bar with a plus icon next to it. Tapping that icon will either launch the camera or import a photo from your album.
This is useful for photos you’ve taken of real-world signs that contain information you want Gemini AI to help you remember.
Tapping a screenshot in the app expands the image and reveals a title, summary, and buttons based on its contents, all generated by AI. If you’re looking at a photo of an Instagram post from a music festival about upcoming dates, for example, the title might say “Lollapalooza headline acts,” with buttons to add specific events from that photo to your calendar.
And if you pull up a photo of a restaurant’s website, the app might provide shortcuts to call the business or navigate to its address via Maps.
You can also use the app from the home page, where you can either type in the search bar or tap the microphone icon in it and ask Google things like “What’s Sam’s WiFi password?” or “How much do I owe Sherlyn?” The app will search your gallery and not only show you photos with potentially relevant information, but it will also try to answer your question at the top.