Summit attendees posted a preview of Firefly generating 3D objects from text, and creators also showed a Boards-based short-film pipeline built in Firefly. Try the workflow if you want one setup for asset generation, background removal, scene layout, and reference-driven animation.

Adobe had a keynote slot at the Runway AI Summit, and the most interesting Firefly material coming out of it was half preview, half workflow proof. You can watch the attendee clip of Text to 3D, read Adobe's own writeup on Firefly's expanding model stack, and cross-check the production side with the official docs for Firefly Boards and 3D scene reference tools.
The summit-floor clip is short, but it is specific. Firefly appears to generate clean 3D objects directly from text, not just image variations, and the examples in the video look like asset candidates for product viz, motion design, or scene blocking.
Adobe has not published a matching launch post for text-to-3D yet. What it has published is a broader Firefly roadmap around image, video, partner models, and custom models, so for now the 3D tool reads as a preview spotted in public rather than a formally documented release (Adobe Firefly update).
Koldo's thread is more useful than the teaser because it shows an end-to-end pipeline. Every asset in the short film (astronaut, alien waiter, diner exterior, diner interior) was generated in Firefly, cleaned up inside Firefly, then assembled in Firefly Boards before animation (process breakdown).
The workflow breaks into four steps: asset generation, background removal, scene layout in Firefly Boards, and reference-driven animation.
That last step matters because it turns a pile of separate generations into a single staged scene. The final clip keeps the astronaut, diner, and waiter coherent because the composition work happened before motion (final animation).
Adobe's official materials already describe the surrounding pieces. Firefly Boards is documented as a mood-boarding and ideation surface inside Firefly, and the current Firefly help pages also list video generation, image generation from partner models, object composites, and 3D-scene-based reference workflows (About Firefly Boards, What's new in Firefly).
That makes the summit story pretty concrete even without an Adobe text-to-3D announcement. One part is already documented, Boards as the place where scenes get assembled. The other part is the fresh reveal, text-to-3D as a likely next input into the same stack, alongside the 3D scene reference tools Adobe already exposes for image generation (3D scene reference docs).
Text to 3D in @AdobeFirefly was just previewed at @runwayml AI Summit in NYC.
Built a short animated film entirely inside Adobe Firefly from blank canvas to final video. Characters, environments, compositing and video generation. One pipeline. Here's how 👇 #AdobeFireflyAmbassadors #Ad #HowToAdobeFirefly
This is the step that makes everything work. Inside Firefly Boards, the Artboard function lets you place multiple generated images into a single composed scene: characters, props, environment, all in context. You feed that artboard to Sora 2 or Veo 3.1 as a reference.