AI Primer

AI Studio adds edit mode and Nano Banana image assets

Google added a redesigned edit mode to AI Studio Build with component selection, on-canvas annotation, and Nano Banana-generated image assets. The update makes AI Studio a more interactive app editor, so try it for iterative app tweaks instead of one-shot generation.


TL;DR

You can open Build directly and skim Google's March launch post. The screenshots also show AI Studio inching toward import-heavy app editing: Wes Roth's screenshot surfaces GitHub and Figma entries in the input menu, and TestingCatalog's image shows the new component picker treating an image as just another editable app part, which is the useful little detail in this rollout.

Edit mode

Google's main change is that Build now exposes editing as a direct manipulation flow instead of a chat-first one.

The shipped actions are simple:

  • select a component and issue a targeted edit
  • annotate directly on the canvas with a pen
  • select an image asset and replace it
  • upload content while swapping assets

That last item matters because it shows the feature arriving in stages. DynamicWebPaige described the new upper-right edit tool as an easier version of the older lower-left Annotate workflow, so the May 5 rollout looks more like a cleaned-up editor surface than a brand-new interaction model.

Nano Banana assets

Nano Banana is now inside the Build loop, not parked as a separate image generation step.

GoogleAIStudio said the integration can automatically create custom image assets as an app generates, while OfficialLoganK's thread adds that selected assets can be swapped using Nano Banana generations or uploaded content. In practice, that turns image generation into a local edit operation on a specific screen element.

TestingCatalog's screenshot makes the workflow concrete: the image block in the preview gets its own edit affordance, and the prompt bar shows both the `main` and `img` components selected. That is a more structured editing model than asking the model to regenerate the whole page because one hero image is wrong.

Build context

This update lands on top of Google's March push to turn AI Studio into a full-stack app builder.

In Google's March announcement, the company positioned AI Studio Build around the Antigravity coding agent, Firebase integrations, and prompt-to-app generation. The full-stack docs add the harder technical detail: Build projects can include a Node.js server-side component for secrets, external APIs, npm packages, and real-time features.
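To make the server-side piece concrete, here is a minimal sketch of what a Build-style Node.js backend component could look like. Everything here is an assumption for illustration, not Google's actual scaffold: the `WEATHER_API_KEY` secret name, the `/api/weather` route, and the external API are hypothetical. The point is the pattern the docs describe, a server-side layer that holds secrets and talks to external APIs so they never reach the browser.

```typescript
import http from "node:http";

// Hypothetical secret name; in a real Build project this would live in
// server-side configuration, never in client-delivered code.
const API_KEY = process.env.WEATHER_API_KEY ?? "demo-key";

// Build auth headers for an outbound call to a hypothetical external API.
function authHeaders(key: string): Record<string, string> {
  return { Authorization: `Bearer ${key}`, Accept: "application/json" };
}

// Minimal proxy pattern: the client app calls /api/weather, and the server
// attaches the secret before forwarding, keeping the key off the client.
function createApiServer(): http.Server {
  return http.createServer((req, res) => {
    if (req.url?.startsWith("/api/weather")) {
      // In a real app this is where you would fetch the external API,
      // e.g. fetch("https://weather.example/v1", { headers: authHeaders(API_KEY) })
      res.writeHead(200, { "Content-Type": "application/json" });
      res.end(JSON.stringify({ ok: true }));
    } else {
      res.writeHead(404);
      res.end();
    }
  });
}
```

The proxy shape is the design choice worth noticing: because the secret only ever appears in `authHeaders` on the server, swapping in npm packages or real-time features later does not change where the key lives.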

That context explains why visual editing matters here. AI Studio is not just spitting out static React mockups; Google is trying to keep you inside one surface while you generate code, wire backend pieces, and now patch UI details without dropping back to a blank prompt.

Import hooks

One more breadcrumb showed up alongside the edit rollout: import options.

Wes Roth posted a screenshot of the Build input menu showing Import from GitHub and a Figma entry next to Drive, file upload, and skills upload. Google has not publicly documented those imports in the sources above, so for now this sits in the "visible in UI" bucket rather than the "announced feature" bucket.

It still fits the direction of travel. The March launch framed AI Studio as a place to turn prompts into production-ready apps, and import buttons for repos and design files would be the obvious next step for a tool that is already moving from one-shot generation toward iterative editing.
