Luma adds translation, lip sync, and scene replacement in Agents
Luma posted new Agents workflows for translating videos with lip sync and localization, and for dropping a reference subject into new environments with matched blending and lighting. The additions matter because they move Luma from generation-only output into post-production localization and scene editing.

TL;DR
- LumaLabsAI's translation post shows Luma now pitching video translation, lip sync, and localization in Agents as a single workflow, not a stack of separate post tools.
- LumaLabsAI's scene-change post adds another editing lane: dropping a reference subject into a new environment with matched blending and lighting.
- Luma's own Creating At Scale guide already described post-production steps like translate, subtitle, voiceover, and "translate + lip sync," which makes this week's clips look less like one-off demos and more like product packaging around an existing workflow.
- DreamLabLA's WITNESS post and board walkthrough show the other half of the pitch: identity-consistent longform projects built on boards, timelines, and frame-by-frame planning.
You can see Luma formalizing the pitch across its own app landing page, its Agents overview, and a more specific Localization at Scale guide. The docs also spell out adjacent features, including video generation and editing, voiceover, and lip sync, while Text & Natural Language frames translation as scripts, captions, and voiceovers plus cultural adaptation.
Translation and localization
Luma's clip pitches a simple handoff: upload a video, pick languages, and let Agents handle translation, lip sync, and localization. The company frames that as market-by-market reuse of one source asset, with no reshoots and no separate production pipeline.
That lines up closely with Luma's own docs. In Creating At Scale, the company says one finished video can be used to translate the script, generate a new voiceover, lip sync the character, and swap on-screen text. Its Text & Natural Language guide adds that localization is supposed to cover scripts, captions, and voiceovers while maintaining tone and adapting content culturally.
Scene replacement
The second workflow is narrower and more useful than the ad copy makes it sound. Luma is not showing a net-new character generator here. It is showing subject preservation plus environment replacement, with the blend sold on lighting continuity.
That, again, matches the broader product framing. The Luma Agent page lists video generation and editing, style transformation, scene extension, captions, and reframing in the same toolset, while the app page describes Agents as systems that plan, generate, iterate, and refine across the whole project rather than outputting a single asset.
Identity-led projects
DreamLabLA's WITNESS demo shows the creative upside of that workflow stack better than the product posts do. According to DreamLabLA's description, Keith Paciello used his own face as an identity anchor across 28 fictional films, with the same underlying subject moving through decades, genres, and emotional registers.
That is also where Luma's "shared context" pitch gets concrete. The fox samurai making-of clip from LumaLabsAI's behind-the-scenes breakdown reduces the process to three durable levers:
- Character
- Motion
- Cinematic style
Luma's Welcome to Luma Agents article describes the same idea at the product level: shared context and memory carry across image, video, audio, and voice, so assets can evolve without restarting work.
Boards and frame maps
The most revealing detail is not a finished video. It is the board. DreamLabLA's walkthrough shows WITNESS mapped frame by frame inside a Luma board, which makes the product feel closer to a planning surface with generation attached than a prompt box with extras.
That emphasis runs through Luma's own materials. The FAQ says Agents are meant to run projects from brief to final delivery, and the app page says teams can organize initiatives on a shareable board where agents generate, iterate, and evolve assets in place. Even mrjonfinger's reaction post fixates on the identity exploration, which is really a board and continuity story as much as a generation story.