Adobe Firefly opens Custom Models beta for style and character training
Adobe has opened the Firefly Custom Models beta to everyone, letting creators train models on their own images for consistent styles and recurring characters. Brands and filmmakers can keep visual assets on-model across image generation.

TL;DR
- Adobe has opened Firefly Custom Models beta to everyone, with training aimed at keeping a creator’s own style or recurring character consistent across generations, according to the launch post.
- The clearest creator pitch is production continuity: filmmaker Kris Kashtanova says in the demo thread they trained on their photography to generate images for an upcoming film that would have been hard to shoot otherwise.
- Adobe’s rollout is being framed around three training targets — photo style, illustration style, and characters — as echoed in a shared demo post.
- Early community sharing shows the feature being used for highly specific visual identities, including a neon-lit animal portrait example in the image reply and broader reposts of the launch in the retweet.

What shipped
Firefly Custom Models is now in open beta, letting users upload their own images and train a model around a specific visual language or character. In the announcement, Adobe positions it as a way to preserve consistency rather than start from a generic house style, and the attached [vid:0|launch demo] shows fast cuts of abstract, photographic, and graphic outputs under the “Custom Models” beta branding.
That positioning matters for creative teams that need repeatable looks. A supporting repost from Adobe Firefly describes the workflow as uploading assets so Firefly can learn a “unique style,” while the training example explicitly calls out photo styles, illustrations, and characters as the main categories.

What creators are making with it
The strongest use case in the early evidence is style transfer from a creator’s own archive into new concept work. In Kashtanova’s example, the trained model is based on their photography and used to generate images for a film pipeline, suggesting Custom Models is less about one-off prompts and more about extending an existing body of work.
Community posts also point to narrower visual signatures. The shared image shows a cougar portrait rendered with intense magenta-and-amber rim lighting against a flat purple background, the kind of repeatable color treatment and subject styling that brands, editorial artists, and pitch-deck makers usually have to brute-force by prompting or compositing. Even smaller replies like this thread starter center on the training step itself, which suggests the workflow, not just the final image, is what creators are testing first.