Enterprise AI · Thursday, April 16, 2026 · 8 min read

New ways to create personalized images in the Gemini app

Curated by AI Agents Daily team · Source: Google AI Blog

Animish Sivaramakrishnan, Group Product Manager for the Gemini App, and David Sharon, Group Product Manager and Multimodal Generation Lead for Gemini Apps, published the announcement on April 16, 2026, via the Google AI Blog. The two authors detail how Gemini's new Personal Intelligence capability taps into your connected Google Photos library to produce images featuring you, your family, and your personal surroundings, all without requiring lengthy, manually crafted prompts or individual photo uploads.

Why This Matters

Google owns the largest consumer photo library ecosystem on the planet through Google Photos, and it is now putting that structural advantage to direct use in generative AI. This is not a minor quality-of-life update. It is Google converting a decade-long investment in photo storage into a moat that competitors like OpenAI and Midjourney simply cannot replicate overnight. The feature is currently restricted to U.S. subscribers on Google AI Plus, Pro, or Ultra tiers, which signals a premium upsell strategy, but rollout to broader tiers seems inevitable given how fundamental personalization is becoming to user retention in AI products.


The Full Story

Google's Gemini app has offered image generation for some time, but the output has largely been generic. You could describe a scene in painstaking detail and still receive an image that looked nothing like your actual life, your dog, your kitchen, or your face. The new Personal Intelligence feature, powered by the Nano Banana 2 model, is designed to fix exactly that problem.

The core mechanic is straightforward. When you connect your Google Photos account to Gemini, the app draws on your personal photo library as visual reference material. Ask it to generate an image of you hiking in the mountains, and it will pull reference images of your actual face from Photos rather than inventing a generic person. Ask for a birthday card featuring your daughter, and it can incorporate her likeness. No manual uploads, no exhaustive prompt engineering.

Nano Banana 2 is the technical engine making this possible. The model is described by Google as combining advanced world knowledge, improved image quality, and faster processing speeds compared to earlier iterations. It also offers granular creative controls, letting users adjust atmospheric conditions such as shifting a scene from bright daylight to moody nighttime, alter camera angles and perspectives, and change focal emphasis to make a subject more prominent in the frame. These adjustments can be made after the initial generation, giving users iterative creative control rather than a single take-it-or-leave-it result. You can also swap reference photos at any point to steer the style in a different direction.

Privacy is the obvious concern when any company proposes feeding your personal photo library into an AI system. Google addresses this directly in the announcement: Gemini does not train its models on your private photo library. Your photos are used as reference material for generation, not as training data for future model iterations. Whether users broadly accept that assurance is a separate question, but the explicit commitment is notable and likely necessary for adoption.

The rollout is happening now for U.S. subscribers on Google AI Plus, Pro, and Ultra plans, with users told to expect the feature within a few days of the April 16 publication date. The announcement does not specify a timeline for expanding access to free-tier users or users outside the United States.

This capability arrives roughly five months after Google DeepMind introduced Nano Banana Pro in November 2025, a model built on the Gemini 3 Pro architecture that emphasized improved text rendering across multiple languages and enhanced world knowledge. The Personal Intelligence feature appears to be the next phase in that roadmap, layering user-specific context on top of the stronger foundational model.

Key Details

  • Published April 16, 2026, by authors Animish Sivaramakrishnan and David Sharon on the Google AI Blog.
  • Powered by Nano Banana 2, Google's latest image generation model.
  • Requires a connected Google Photos account for personalized reference imagery.
  • Available to U.S. subscribers on Google AI Plus, Pro, and Ultra plans at launch.
  • Google states it does not train models on users' private photo libraries.
  • Nano Banana Pro, the predecessor model, launched in November 2025 on Gemini 3 Pro architecture.
  • Nano Banana Pro is also available in Google Ads and Google AI Studio, not just the Gemini app.

What's Next

Google's decision to restrict the feature to paid U.S. tiers at launch suggests the company will use the coming weeks to monitor generation quality and privacy feedback before broadening access. Watch for announcements around Google I/O 2026, which historically serves as the venue where the company formalizes AI feature rollouts to larger user bases. The integration of Nano Banana capabilities into Google Ads is also worth tracking closely, since advertisers generating personalized creative assets at scale represents a significant commercial opportunity that could fund faster model development.

How This Compares

The closest parallel in the current market is Meta AI's image generation features, which have experimented with pulling user context from Facebook and Instagram profiles to personalize outputs. But Meta's approach has been inconsistent and has generated repeated privacy concerns, partly because Meta's data practices have a long and complicated history with regulators. Google is betting that its cleaner framing, explicit no-training commitment, and tighter integration with a purpose-built photo product will land differently with users.

Compare this also to OpenAI's memory features in ChatGPT, which allow the model to remember facts you share across conversations. OpenAI's approach is text-centric and user-curated, meaning you have to tell it things explicitly. Google's approach is visual and automatic, pulling from an existing library of thousands of your photos without requiring you to do anything beyond granting access. That is a meaningful difference in friction, and lower friction almost always wins with mainstream consumers.

Midjourney and Adobe Firefly sit in a different category entirely. Both tools are aimed at creative professionals who want precise stylistic control and are comfortable building elaborate prompts or style references manually. Google is not competing for that user. It is competing for the person who wants a custom holiday card or a personalized greeting image in under 30 seconds, and that is a far larger audience. For anyone building or evaluating AI tools in the creative space, Google's move here sets a new baseline expectation for what consumer-grade personalization should look like.

FAQ

Q: What is Nano Banana 2 and how does it work? A: Nano Banana 2 is Google's latest image generation model inside the Gemini app. It combines improved image quality, faster processing, and the ability to pull personal context from your Google Photos library. When you ask for an image featuring yourself or a family member, the model uses reference photos from your connected account to generate a result that looks like your actual life rather than a generic stock photo.

Q: Is it safe to connect Google Photos to Gemini for image generation? A: Google states explicitly that it does not use your private photo library to train its AI models. Your photos serve as visual references for generating specific images you request, but they are not fed into broader model training. As with any data-sharing decision, reviewing Google's current privacy policy before connecting accounts is a reasonable step.

Q: Who can use Gemini's Personal Intelligence image feature right now? A: As of the April 16, 2026 announcement, the feature is rolling out to U.S. subscribers on Google AI Plus, Pro, and Ultra plans. Google did not announce a specific date for expanding access to free-tier users or users outside the United States, so international availability remains unconfirmed for now.

Google has just made a strong argument that the future of AI image generation is not about better prompts but about better context, and it has the photo library to back that claim up. If the privacy commitment holds and the output quality lives up to the demonstration, this could become the standard against which every other consumer image tool gets measured.

Our Take

This story matters because it shows Google converting a structural asset, the Google Photos library, into a personalization advantage that model quality alone cannot deliver. If the no-training commitment holds up under scrutiny, the paid-tier launch becomes a template for monetizing personal context in consumer AI. We are tracking the rollout closely and will report on privacy reception and tier expansion as they emerge.
