A GitHub repo and demo post package prompt handling, scene timing, and local generation into a free ComfyUI movie pipeline. Try it if you want a local movie workflow, and compare it with nearby multi-ControlNet, crop-and-stitch inpainting, and ACE-Step 1.5 XL posts.

You can watch the 9-minute demo video, browse the KupkaProd Cinema Pipeline repo, inspect the JLC ControlNet Composition workflow JSON, and jump straight to the ACE-Step 1.5 XL collection. The interesting bit is not one giant platform launch. It is how much of the local movie stack is being assembled in public from repos, nodes, and forum posts.
The core launch is simple: a free, local, ComfyUI-based movie pipeline agent, shared as an open GitHub repo by the creator behind the Reddit announcement. The post says you enter a prompt, set a desired scene time, and let the pipeline run.
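The post does not document the pipeline's internals, so as a rough illustration of what a "prompt plus desired scene time" interface implies, here is a hypothetical planning step. Everything in it (the `plan_scenes` name, the clip length, the job dict shape) is an assumption, not the KupkaProd Cinema Pipeline's actual API: a scene duration has to be split into per-clip generation jobs of a length the video model can handle.

```python
import math

def plan_scenes(prompt: str, scene_seconds: float, fps: int = 24,
                clip_seconds: float = 4.0):
    """Hypothetical sketch: turn 'prompt + desired scene time' into
    per-clip generation jobs (NOT the actual repo's interface)."""
    n_clips = math.ceil(scene_seconds / clip_seconds)
    jobs = []
    for i in range(n_clips):
        # The last clip may be shorter than the fixed clip length.
        remaining = scene_seconds - i * clip_seconds
        duration = min(clip_seconds, remaining)
        jobs.append({
            "prompt": prompt,
            "clip_index": i,
            "frames": round(duration * fps),  # frames requested from the video model
        })
    return jobs
```

For a 10-second scene at the assumed 4-second clip length, this yields three jobs of 96, 96, and 48 frames, which the pipeline would then generate and concatenate.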
That pitch lands because it collapses a messy workflow into two creator-native controls, text intent and timing. The linked repository, KupkaProd Cinema Pipeline, is the actual product here, while the Reddit post serves as the demo reel and distribution layer.
The clearest adjacent technical upgrade came from jessidollPix's writeup, which replaces recursive ControlNet chaining with parallel aggregation. Instead of nesting A(B(C(x))), the new node evaluates each ControlNet independently, combines them with weights, and passes the sampler one equivalent ControlNet object.
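The difference between the two shapes can be sketched in a few lines. This is an illustrative toy, not the jlc-comfyui-nodes implementation: `apply_controlnet` is a stand-in for one ControlNet forward pass, residuals are plain floats instead of per-block tensors, and the function names are assumptions.

```python
def apply_controlnet(hint, strength=1.0):
    """Stand-in for a single ControlNet forward pass: returns per-block
    residuals (real implementations return tensors, one per UNet block)."""
    return [strength * h for h in hint]

def aggregate_parallel(hints_and_weights):
    """Evaluate each ControlNet independently, scale by its weight, and
    sum into one equivalent residual set handed to the sampler, instead
    of nesting A(B(C(x))) recursively."""
    combined = None
    for hint, weight in hints_and_weights:
        residuals = apply_controlnet(hint)
        scaled = [weight * r for r in residuals]
        combined = scaled if combined is None else [
            a + b for a, b in zip(combined, scaled)
        ]
    return combined
```

Because each ControlNet is evaluated once, independently, the per-control weights act linearly on the combined residual, which is what makes weighted composition cheap compared with re-evaluating a nested chain.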
The posted setup used FLUX.1-dev-ControlNet-Union-PRO with OpenPose, HED, and Depth at 1024×1536 on an RTX 4090 laptop; the writeup lists the claimed results for that configuration.
The node is linked at jlc-comfyui-nodes, with a separate downloadable workflow JSON.
A separate ComfyUI thread shows where local movie pipelines still depend on community glue. The question was not about generating whole scenes, but about the boring part that decides whether edits hold up: accurate person masking and inpainting on small regions.
The first reply in that thread pointed to a grab bag of detector options, including SAM3, SAM2, BiRefNet, and YOLO, then named ComfyUI-Inpaint-CropAndStitch as the straightforward answer for mask-only style inpainting. That is a useful contrast with KupkaProd's high-level movie-agent framing. The cinematic wrapper is getting nicer, but the underlying edit stack is still modular and very node-driven.
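The crop-and-stitch idea itself is simple, and a minimal sketch makes the small-region advantage clear. This is not the ComfyUI-Inpaint-CropAndStitch node's code; the function names and the `pad` parameter are assumptions, and `inpaint_fn` stands in for whatever diffusion inpainting call runs on the crop.

```python
import numpy as np

def mask_bbox(mask, pad):
    """Padded bounding box around all True pixels of an HxW boolean mask."""
    ys, xs = np.nonzero(mask)
    h, w = mask.shape
    return (max(ys.min() - pad, 0), min(ys.max() + 1 + pad, h),
            max(xs.min() - pad, 0), min(xs.max() + 1 + pad, w))

def crop_inpaint_stitch(image, mask, inpaint_fn, pad=16):
    """Crop a padded box around the mask, inpaint only that region at
    full resolution, then stitch the masked pixels back into the image."""
    y0, y1, x0, x1 = mask_bbox(mask, pad)
    crop = image[y0:y1, x0:x1].copy()
    crop_mask = mask[y0:y1, x0:x1]
    inpainted = inpaint_fn(crop, crop_mask)  # e.g. a diffusion inpaint call
    out = image.copy()
    # Only masked pixels are replaced, so the rest of the frame is untouched.
    out[y0:y1, x0:x1][crop_mask] = inpainted[crop_mask]
    return out
```

The point of cropping first is that a small face or hand region gets the model's full working resolution instead of a few dozen pixels of a full frame, which is exactly the "accurate masking on small regions" problem the thread was about.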
One more signal from the same day: a Stable Diffusion post flagged that ACE-Step 1.5 XL was already live on Hugging Face, then immediately wished for a ComfyUI-ready format. That reaction says a lot about where this audience lives.
The linked ACE-Step 1.5 XL collection adds a fresh model-side variable to the local video workflow conversation. On the same day creators were passing around a full movie agent, they were also scanning for faster ControlNet composition and new model drops they could slot into the stack.