
Adobe Firefly supports Boards and Kling 3.0 in five-location concept workflow

An Adobe Firefly ambassador published a five-location workflow that starts in Boards, transforms landmark shots with Edit Image, and animates them in Kling 3.0. The breakdown shows how moodboards, partner models, and location prompts can drive a consistent multi-scene concept series; use it as a template for similar pipelines.


TL;DR

You can trace the official surface area in Adobe's Boards overview, the partner models matrix, and the Generate soundtrack guide. The weirdly useful part is how little of the workflow depends on one model. AllaAisling's opener swaps models by stage, and egeberkina's campaign thread uses the same pattern for brand work instead of environment concepts.

Firefly Boards

Adobe's About Firefly Boards page positions Boards as a place to explore early ideas, build moodboards, develop storyboards, and create pitch materials. The What's new in Firefly page also places partner-image models, partner-video models, collaboration, linked documents, and presets under the same Boards workflow.

That matches the role Boards plays in the landmark thread. It is not the image generator in the middle of the pipeline; it is the place where the visual language gets locked before generation starts.

AllaAisling repeats that setup across all five scenes.

The landmark recipe

The thread is basically a template for multi-scene concept work. Each location uses the same sequence, with the creative differences pushed into the moodboard, the before prompt, and the after prompt.

The recipe is consistent across the series:

  1. Pick a real landmark and pin it with coordinates.
  2. Generate a present-day establishing shot with lighting baked in.
  3. Run a year-2100 transformation while keeping the landmark recognizable.
  4. Add a short animation prompt that only describes motion.

That last split matters. In AllaAisling's Rome example, the image prompt handles the submerged Colosseum, while the animation prompt only adds rising water, wave motion, reflected light, and a passing fish. In AllaAisling's London example, the still establishes the ruined tower, while the motion layer is just rain, falling stone, splashes, and a flickering light.
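The still/motion split can be sketched as a pair of prompts per scene. This is a minimal illustration, not AllaAisling's actual prompt text; the wording paraphrases the thread, and the helper name is hypothetical:

```python
# Hypothetical sketch: each scene keeps its image prompt and its
# animation prompt separate, so the motion layer never restates scene facts.
ROME = {
    "image_prompt": (
        "The Colosseum in Rome, year 2100, partially submerged, "
        "golden-hour lighting, wide establishing shot"
    ),
    "animation_prompt": (
        "rising water, gentle wave motion, reflected light, a fish passing"
    ),
}

def motion_only(scene: dict) -> bool:
    """Sanity check: the animation prompt should describe motion,
    not re-describe the landmark (illustrative word list)."""
    scene_nouns = {"colosseum", "tower", "landmark", "rome", "london"}
    words = set(scene["animation_prompt"].lower().split())
    return not scene_nouns.intersection(words)
```

Keeping the two prompts in separate fields makes the discipline enforceable: if a scene noun leaks into the motion layer, the check fails.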

Prompt structure

The strongest craft choice in the thread is the separation between scene facts and future-world changes.

Each setup starts with a plain location render, then mutates it into its year-2100 version.

That structure is why the series feels coherent even though the moods swing from hopeful to catastrophic. The continuity comes from geography and camera conditions, not from forcing one aesthetic across every city.
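The two-stage structure can be sketched as a pair of prompt builders. The function names, coordinates, and wording here are illustrative assumptions, not taken from the thread:

```python
# Hypothetical sketch of the two-stage prompt structure: a plain
# present-day render first, then a future-world transformation layered
# on top while the geography and camera conditions stay fixed.

def establishing_prompt(landmark: str, coords: str, lighting: str) -> str:
    """Stage 1: pin the real location and bake the lighting into the shot."""
    return f"{landmark} at {coords}, present day, {lighting}, establishing shot"

def transform_prompt(base: str, future_change: str) -> str:
    """Stage 2: mutate the scene while keeping the landmark recognizable."""
    return f"{base}, year 2100, {future_change}, landmark still recognizable"

base = establishing_prompt(
    "Tower of London", "51.5081 N, 0.0759 W", "overcast dusk light"
)
final = transform_prompt(base, "ruined tower in heavy rain")
```

Because the transformation only appends to the establishing prompt, the scene facts survive unchanged into every future-world variant, which is the continuity the thread relies on.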

A second Firefly campaign

The other useful evidence in this pool is that the same stack already shows up in a completely different format: brand campaign building.

In that thread, egeberkina starts with a long GPT Image 2 prompt for a fictional sportswear identity system, then uses the resulting visual identity board as a reference image for product and scene generation (egeberkina's identity-board prompt and reference-image prompt). After that, the stills move into Kling 3.0 inside Firefly and get stitched together in the Firefly Video Editor (egeberkina's video-edit step).

The structure is nearly identical to the Europe-in-2100 project:

  • board first
  • still generation second
  • motion third
  • final assembly inside Firefly

The difference is the anchor. AllaAisling anchors on geography and landmarks. egeberkina anchors on a visual identity system.

Access and model mix

Adobe's partner models matrix makes the stack explicit. GPT Image 2 is listed for Firefly text-to-image, Prompt to Edit Image, and Boards. Kling 3.0 is listed for text-to-video, image-to-video, Firefly video editor, and Boards. Adobe's Boards help page says anyone with an Adobe account can access Boards, but premium features and partner models require a qualifying subscription and generative credits.

The last step in the campaign thread also maps to an official Firefly feature Adobe is actively documenting. Its Generate soundtrack for an uploaded video guide says Firefly can inspect an uploaded clip, draft a prompt from its vibe, style, purpose, energy, and tempo, then let the user refine that prompt before composing the music. That is exactly the kind of finishing pass egeberkina's soundtrack step adds after the visuals are already cut.

One more clue about where Adobe wants this to go sits outside the ambassador demos. In Adobe's Claude connector announcement, the company describes Claude orchestrating multi-step workflows across Firefly, Photoshop, Illustrator, Premiere, Lightroom, Express, InDesign, and Stock. The landmark thread looks small next to that pitch, but it already shows the same idea in miniature: one project, several tools, one consistent visual system.
