Higgsfield claims a 23-minute sci-fi pilot made in 4 days with Seedance 2.0
Higgsfield said a team made a 23-minute sci-fi pilot in four days, and a public breakdown detailed moodboards, Blender blocking, Claude prompts, and XML edit handoff. The pipeline matters because it handles multi-director planning, voice consistency, and post.

TL;DR
- Higgsfield's Episode 1 page and YouTube upload put Hell Grind on official channels, while AIwithSynthia's post and CharaspowerAI's repost spread the headline claim that a roughly 23-minute sci-fi pilot was generated with Seedance 2.0 in four days.
- The useful part is not the runtime boast. In PJaccetturo's thread, the team described a production stack built around a vibe-coded planning app, generated character sheets, master location boards, Blender blocking, Claude prompt conversion, and XML handoff into DaVinci Resolve.
- Higgsfield's own Seedance 2.0 page claims "one prompt, multiple shots, native audio, full control," and PJaccetturo's breakdown shows a much messier real pipeline behind a finished episode: many prompts, many passes, and thousands of clips.
- The strongest workflow reveal in PJaccetturo's breakdown is the pre-vis layer. The team blocked scenes in Blender with colored proxy shapes, then used Claude to turn those spatial maps into Seedance prompts.
- Post did real work here. According to PJaccetturo's breakdown, five directors generated sequences on separate machines, sent XMLs to a lead editor, and finished with grain, halation, and glow in Resolve instead of heavy VFX.
You can watch the episode on Higgsfield's official page, skim its Seedance 2.0 product claims, and browse the company's Original Series hub, which pitches the project inside a broader AI-native streaming platform. The most concrete production details still come from PJaccetturo's public thread, including the Blender maps, the prompt screenshots, and the Resolve timeline.
Hell Grind
Higgsfield did not just post a teaser. Its official episode page and YouTube upload present Hell Grind as a full episode, with YouTube listing a 22:32 runtime and the Higgsfield page placing it inside the company's Original Series lineup.
That makes the story less about a flashy demo clip and more about whether the production system holds together across a half-hour TV shape. Higgsfield's Original Series page says filmmakers can get picked to produce full series with the company, so Hell Grind looks like a proof point for a bigger platform pitch, not just a one-off short.
CollabHub
The pipeline started before generation. In his moodboarding post, PJaccetturo says the team used a vibe-coded internal app called CollabHub to keep four directors aligned on character designs, locations, and visual references.
The pre-production stack breaks into a few clear pieces:
- Moodboards: Higgsfield Soul Cinema generated early reference images, including a "post-cyberpunk daytime" look that the team could not source from real photography, according to PJaccetturo's moodboarding post.
- Color system: the same post says the film's look locked around a red-and-white palette early.
- Script treatment: the directors reportedly spent about a week iterating on script and treatment before the four-day sprint, per PJaccetturo's moodboarding post.
- Character sheets: PJaccetturo's moodboarding post says each character got front, back, and close-up views, plus props like skateboards and alternate emotional states for consistency in Seedance 2.0 (a rough manifest sketch follows this list).
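Nothing like this manifest is published in the thread; it's just one minimal way to keep that kind of reference set organized. A sketch with invented names, paths, and fields:

```python
# Hypothetical character-sheet manifest for tracking the reference images a team
# might generate per character. All names, paths, and fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class CharacterSheet:
    name: str
    views: dict[str, str] = field(default_factory=dict)    # view -> image path
    props: list[str] = field(default_factory=list)         # e.g. "skateboard"
    states: dict[str, str] = field(default_factory=dict)   # emotion -> image path
    voice_line: str = ""  # one-sentence voice description reused in every prompt

lead = CharacterSheet(
    name="lead",
    views={"front": "refs/lead_front.png", "back": "refs/lead_back.png",
           "close_up": "refs/lead_closeup.png"},
    props=["skateboard"],
    states={"calm": "refs/lead_calm.png", "angry": "refs/lead_angry.png"},
    voice_line="25-year-old American accent",
)
print(lead.views["close_up"], lead.voice_line)
```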
The interesting bit is how much of the "AI film" work still looks like old-school production design, just with generated references instead of a Pinterest board and a concept department.
Blender blocking
The best reveal in the thread is the bridge between storyboards and generation. PJaccetturo's breakdown says the team built low-poly location models in Blender, dropped in colored stand-ins for characters, and used those maps to control who stood where before prompting.
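The thread shows screenshots rather than scene files, so any script here is a guess at the method, but proxy blocking maps directly onto Blender's Python API. A minimal sketch that drops colored stand-in cubes into a set (names, positions, and colors are placeholders, not the team's):

```python
# Run inside Blender's scripting tab: drop colored proxy cubes into a blocked-out
# set so each character's position can be read off the map later.
# Character names, positions, and colors are illustrative, not from the thread.
import bpy

PROXIES = {
    # name: (location xyz, RGBA color)
    "character_a": ((0.0, 0.0, 0.0), (0.9, 0.1, 0.1, 1.0)),   # red stand-in
    "character_b": ((2.0, 1.0, 0.0), (0.9, 0.9, 0.9, 1.0)),   # white stand-in
}

for name, (location, color) in PROXIES.items():
    bpy.ops.mesh.primitive_cube_add(size=0.5, location=location)
    obj = bpy.context.active_object
    obj.name = name
    mat = bpy.data.materials.new(name=f"{name}_mat")
    mat.diffuse_color = color  # viewport color is enough for a blocking map
    obj.data.materials.append(mat)
```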
That section also adds a second layer: Claude. According to PJaccetturo's breakdown, the team uploaded those Blender maps to Claude so it could turn spatial layouts into detailed prompts for Seedance 2.0.
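The breakdown shows results rather than the exact instruction sent to Claude, so treat the wording as a placeholder. A minimal sketch of the idea with Anthropic's Python SDK, passing a screenshot of the blocking map as an image (the model name and file path are assumptions):

```python
# Send a Blender blocking screenshot to Claude and ask for a video-generation
# prompt. The instruction wording, model name, and file path are illustrative.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("blocking_map_museum.png", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; any vision-capable Claude model works
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64", "media_type": "image/png", "data": image_b64}},
            {"type": "text",
             "text": "This is a top-down blocking map: the red cube is character A, "
                     "the white cube is character B. Write a single video-generation "
                     "prompt describing the environment, where each character stands, "
                     "and a camera position consistent with this layout."},
        ],
    }],
)
print(message.content[0].text)
```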
In a later reply about the workflow, PJaccetturo singled out the Blender blocking and the Claude prompts as the most eye-opening parts of the breakdown. That tracks. Higgsfield's Seedance 2.0 page sells "consistent characters, automatic camera cuts, and narrative flow" from a single prompt, but the production recipe shown here adds a separate planning layer to keep those shots coherent.
Generation sprint
The four-day sprint still involved brute force. PJaccetturo's breakdown says five directors worked full-time, generated thousands of clips, and assembled scenes out of 15-second segments.
The production mechanics are easier to scan as a list:
- Master location: one museum image became five camera angles via Nano Banana Pro, then expanded into pristine, destroyed, and dark variants, according to PJaccetturo's breakdown.
- Context-first prompting: the first prompt established environment and blocking, then later prompts moved into close-ups for dialogue, per PJaccetturo's breakdown (sketched after this list).
- Hit rate: the directors reportedly kept the best 3 or 4 cuts out of every 20 generations, according to PJaccetturo's breakdown.
- Dialogue pacing: pauses were added on purpose because crowded prompts made performances speed up and turn "slop-like," per PJaccetturo's breakdown.
- Voice consistency: each prompt repeated a one-sentence voice description such as "Deep Japanese voice" or "25-year-old American accent," according to PJaccetturo's breakdown.
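The literal prompts aren't published, but the constraints in that list are concrete enough to sketch. A minimal Python scaffold, assuming hypothetical helpers that front-load environment and blocking, then repeat a one-sentence voice description in every close-up (character names, locations, and dialogue are invented for illustration):

```python
# Hypothetical prompt scaffolding for a Seedance-style pass: environment and
# blocking first, then close-up dialogue prompts that repeat a one-sentence
# voice description per character. All wording below is illustrative.

VOICES = {
    "mentor": "Deep Japanese voice",          # examples quoted in the breakdown
    "lead": "25-year-old American accent",
}

def establishing_prompt(location: str, blocking: str) -> str:
    """First prompt in a scene: lock the environment and who stands where."""
    return f"{location}. {blocking}. Wide shot, characters hold their positions."

def dialogue_prompt(character: str, line: str, pause_after: bool = True) -> str:
    """Later prompts: close-ups, with deliberate pauses to keep pacing from rushing."""
    pause = " The character pauses briefly after speaking." if pause_after else ""
    return (f"Close-up on {character}. {VOICES[character]}. "
            f'The character says: "{line}".{pause}')

print(establishing_prompt(
    "Ruined museum hall, red-and-white palette, daylight through broken skylights",
    "The mentor stands by the far column; the lead crouches near the entrance"))
print(dialogue_prompt("mentor", "We don't have much time."))
```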
That is Christmas-come-early material for AI filmmaking nerds because it turns an abstract claim about "consistent characters" into a usable stack of constraints, retries, and shot planning.
XML handoff
The last useful reveal is that the team did not merge everything inside one magical generation UI. PJaccetturo's breakdown says each director built assigned sequences on their own machine, then sent XML files to a lead editor managing the master timeline.
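The thread doesn't name the XML flavor the directors exchanged, so the format here is an assumption. A minimal sketch that lists the clips in an FCP7-style (xmeml) sequence export before it gets merged into the master timeline, with a placeholder file name:

```python
# List the clips in a director's exported sequence XML before merging it into
# the master timeline. Assumes an FCP7-style xmeml export; element names follow
# that format and may differ if the team exchanged another XML flavor.
import xml.etree.ElementTree as ET

def list_clips(xml_path: str) -> list[tuple[str, int, int]]:
    """Return (clip name, timeline start frame, timeline end frame) per clipitem."""
    root = ET.parse(xml_path).getroot()
    clips = []
    for clipitem in root.iter("clipitem"):
        name = clipitem.findtext("name", default="unnamed")
        start = int(clipitem.findtext("start", default="-1"))
        end = int(clipitem.findtext("end", default="-1"))
        clips.append((name, start, end))
    return clips

for name, start, end in list_clips("director_03_sequence.xml"):
    print(f"{name}: frames {start}-{end}")
```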
The finish pass also stayed surprisingly conventional:
- Lead editor model: one "Director of Editing" managed the master file, while other directors cut locally and handed over XML, per PJaccetturo's breakdown.
- Resolve finish: the final assembly happened in DaVinci Resolve, according to PJaccetturo's breakdown (see the sketch after this list).
- Look dev in post: grain, halation, and glow did more for the final image than heavy VFX, per PJaccetturo's breakdown.
- Official product context: Higgsfield's homepage separately pitches Seedance 2.0 in 1080p and a Cinema Studio with collaborative elements and per-shot camera control, which lines up with the direction of this workflow even if the thread shows a more manual version in practice.
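The breakdown doesn't say whether any of this was scripted, but Resolve ships a documented Python scripting API that can pull a handed-off XML into the master project. A minimal sketch, assuming the DaVinciResolveScript module is importable and using a placeholder path:

```python
# Import a director's handed-off XML into the current Resolve project as a new
# timeline. Assumes Resolve is running with scripting enabled and that the
# DaVinciResolveScript module is on PYTHONPATH; the file path is a placeholder.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
media_pool = project.GetMediaPool()

timeline = media_pool.ImportTimelineFromFile("handoffs/director_03_sequence.xml")
if timeline:
    print("Imported timeline:", timeline.GetName())
else:
    print("Import failed; check the XML format and media paths.")
```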
For creative teams, that edit handoff may be the most grounded part of the whole story. The AI-native pieces are real, but the pipeline still closes like a distributed post house.