Google adds new AI-powered features for photo editing and image generation
At Made by Google 2024 on Tuesday, the company launched its new Pixel 9 series of phones. Beyond shipping with Gemini as the default assistant, the devices come with a host of AI-powered features.
The company is adding features for photo editing, plus new apps for storing and searching through screenshots on-device and an AI-powered studio for image generation.
Add Me lets the person taking a group photo appear in it. The feature uses a combination of AR and several machine-learning models: after the first photo is taken, the photographer trades places with someone in the group, the phone guides the second shooter to line up the same framing, and AI models align the two frames to produce a single photo that includes everyone.
The company launched the Magic Editor feature last year with the Pixel 8 and 8 Pro, which included a Magic Eraser function for removing unwanted objects or people. With the Pixel 9 series, Google is adding two new features to Magic Editor.
The first, called auto framing, recomposes an image to keep the people or objects in the photo in focus; Google says Magic Editor will generate a few options for users to choose from, and auto framing can also use generative AI to expand the photo. With the second feature, people can type in the kind of background they want to see in the photo and AI will generate it for them.
New screenshots and studio apps
Google is adding new screenshots and Pixel Studio apps to the new Pixel 9 series of phones. The screenshot app stores screenshots taken on the device and lets people search the information in them, such as the Wi-Fi details of a vacation home.
Notably, Google Photos also has a search feature that lets people look for information such as their license plate or passport number. However, the new screenshot app works locally.
The company is also adding a new Pixel Studio app for creating AI-generated images on the device. Google notes that the app uses both an on-device diffusion model and Google's cloud-based models. Users can type in any prompt to generate an image and then use the in-app option to change its style. Google says Pixel Studio cannot generate human faces yet, presumably because of Gemini's slip-ups with historical accuracy earlier this year, but didn't say whether there are any other guardrails against generating potentially harmful images.