New local-stack threads covered ACE-Step 1.5XL, survey-driven Ultimate Upscaler tuning, and ComfyUI mask and cinema-pipeline experiments. Creators are still hand-tuning checkpoint access, detail fidelity, and automation, so local workflows remain very manual.

You can browse the ACE-Step 1.5XL collection, inspect the Ultimate Upscaler survey and its live ELO viewer, grab the CropAndStitch node, and read through the KupkaProd Cinema Pipeline. The threads are small, but together they make the state of local image tooling pretty clear: model access, mask quality, upscale settings, and automation glue are still getting tuned by hand.
The ACE-Step post was barely more than a link drop, but that is the point. The model collection existed on Hugging Face before the surrounding workflow ergonomics caught up, and the first reaction in the thread was a request for ComfyUI format support rather than a long breakdown.
That is a familiar local-stack pattern. Checkpoints and collections appear fast, then node packs, workflow templates, and practical recipes arrive later.
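Until the node packs catch up, the raw weights are still one call away. A minimal sketch with huggingface_hub, assuming a placeholder repo id (check the actual collection page for the real one):

```python
# Minimal sketch: pull the raw checkpoint files locally before any node
# pack supports them. The repo id is a PLACEHOLDER -- swap in the real
# ACE-Step 1.5XL id from the Hugging Face collection page.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ACE-Step/ACE-Step-1.5XL",           # hypothetical repo id
    allow_patterns=["*.safetensors", "*.json"],  # skip files you don't need
)
print(f"Checkpoint downloaded to: {local_dir}")
```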
The most useful thread of the batch came from a student turning Ultimate Stable Diffusion Upscaler tuning into a live research project. The survey compares outputs across three categories: fidelity to the original image, prettiness, and detail quality.
The tested variables are explicit in the post: denoise, ControlNet strength, and step count. Instead of posting a single recommended preset, the project exposes a pairwise voting loop and a live ranking board, which is a better fit for an argument creators keep having anyway, namely whether "better" upscale settings preserve the source or beautify it away.
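For a sense of how the ranking side settles, the standard Elo update per pairwise vote is only a few lines. A minimal sketch with conventional placeholder constants, since the survey's actual K-factor and baseline rating are not stated in the thread:

```python
# Standard Elo update for one pairwise vote. K=32 and a 1000 baseline
# are conventional placeholders; the survey's real constants are unknown.
K = 32

def expected(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def vote(r_winner: float, r_loser: float) -> tuple[float, float]:
    """Apply one vote: the winner gains exactly what the loser sheds."""
    gain = K * (1.0 - expected(r_winner, r_loser))
    return r_winner + gain, r_loser - gain

# e.g. one preset beating another when both sit at the baseline:
a, b = vote(1000.0, 1000.0)  # -> (1016.0, 984.0)
```

Pairwise voting plus Elo sidesteps the preset argument neatly: nobody has to agree on a rubric up front, and the board simply accumulates which settings win head to head.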
The ComfyUI thread asked for two very specific comforts from other UIs: automatic subject masking, and inpainting that touches only the masked region.
The thread's top visible reply answered both halves, suggesting a detector grab bag, SAM3 or SAM2, BiRefNet, and YOLO, for the first, then pointing to ComfyUI-Inpaint-CropAndStitch for the second. Even in 2026, local editing still looks like assembling the right detector and patching node behavior together.
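For readers who have not used the node, the underlying pattern is worth spelling out. This is a generic numpy sketch of crop-and-stitch inpainting, not the node's actual code; the `inpaint` argument stands in for whatever model call gets wired up:

```python
# Generic sketch of the crop-and-stitch pattern: crop a padded box
# around the mask, inpaint only that crop, then paste the result back.
import numpy as np

def crop_and_stitch(image: np.ndarray, mask: np.ndarray, inpaint, pad: int = 32):
    ys, xs = np.nonzero(mask)                       # pixels to edit
    if ys.size == 0:
        return image                                # empty mask: nothing to do
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, image.shape[1])
    crop = inpaint(image[y0:y1, x0:x1], mask[y0:y1, x0:x1])  # model sees only the crop
    out = image.copy()
    region = mask[y0:y1, x0:x1].astype(bool)
    out[y0:y1, x0:x1][region] = crop[region]        # stitch only masked pixels back
    return out
```

The payoff is resolution: the model spends its full working resolution on the crop rather than the whole frame, which is why masked-region edits come back sharper.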
At the far end of the same ecosystem, the KupkaProd post packaged ComfyUI as a movie-pipeline agent instead of a node graph. The claim was broad: enter a prompt plus a desired scene time and let it run, with the linked GitHub repo positioned as a free local stack for longer-form generation.
The notable detail is not just "local video"; it is compression. The post said the nine-minute video in the thread was made with fewer than 40 words, which puts prompt-to-runtime orchestration, not raw prompting, at the center of the pitch.
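To make the orchestration claim concrete, here is a hedged sketch of the shape such a pipeline takes, not KupkaProd's code: a short prompt fans out into a shot list covering the target runtime, with both backends stubbed.

```python
# Hedged sketch of prompt-to-runtime orchestration, NOT KupkaProd's code.
# In a real stack the two stubs would call an LLM and a local video model.
SHOT_SECONDS = 5

def expand_prompt(prompt: str, shot: int, total: int) -> str:
    # Stub: a real pipeline would ask an LLM to write shot-level direction.
    return f"{prompt} -- shot {shot + 1} of {total}"

def render_shot(brief: str, seconds: int) -> str:
    # Stub: a real pipeline would invoke a local video model here.
    return f"<clip {seconds}s: {brief}>"

def make_movie(prompt: str, runtime_s: int) -> list[str]:
    """Fan one short prompt out into a shot list covering the runtime."""
    n_shots = max(runtime_s // SHOT_SECONDS, 1)
    return [
        render_shot(expand_prompt(prompt, i, n_shots), SHOT_SECONDS)
        for i in range(n_shots)
    ]

# A 9-minute target from one short prompt: 540s / 5s = 108 shots.
clips = make_movie("rainy neo-noir chase through a night market", 9 * 60)
```

Under that framing, the fewer-than-40-words number stops being surprising: the words pick a direction, and the orchestration layer does the multiplication.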