Google's Gemini uses your Google data to create personalized images

Gemini's new Personal Intelligence feature pulls context from linked Google apps and Google Photos so the app can generate tailored images from minimal prompts, using Nano Banana 2, with early access rolling out in the United States.

Apr 16th 2026 · United States

Google announced it is expanding its Personal Intelligence feature to its Nano Banana 2 image generation model, allowing Gemini to create images tailored to users based on personal data from Google apps. Users can generate personalized images with simple prompts like "Design my dream house" or "Create a picture of my desert island essentials," and Gemini automatically incorporates details about their tastes, lifestyle, and connections from services like Google Photos. It also draws on labels users have applied to their photos to identify people, places, and activities relevant to the prompt.

The feature is rolling out over the coming days to paid subscribers on Google AI Plus, Pro, or Ultra plans in the United States, with plans to expand to Gemini on Chrome desktop and additional users soon.

Google clarifies that while it won't directly train AI models on users' private Google Photos libraries, it does train on limited information, such as specific prompts in Gemini and model responses, to improve functionality. A "sources" button shows users how Gemini derived the context for each image generation, and users can provide feedback or upload reference photos if the context is incorrect.

Personal Intelligence first launched earlier this year and became available to all U.S. users in March. Earlier this week, Google expanded the feature to more users in countries including India and Japan. The company claims the personalization addresses one of the biggest hurdles in AI image generation by eliminating the need for long, detailed prompts and manual reference-photo uploads.