Fresh stories
Google reportedly tests Gemini Omni video editing with chat remix and templates
Multiple posts preview a Google video model called Gemini Omni with remix, templates, and chat editing, plus demos that keep chalkboard math readable. The clips are still unofficial, but creators are watching the text-fidelity claim closely.


Seedance 2 supports Midjourney, GPT Image 2, and Agent One pipeline workflows
Creators shared repeatable pipelines pairing Seedance 2 with Midjourney, GPT Image 2, Nano Banana, custom editors, and Agent One for shorts, UGC, and story clips. The examples focus on shot planning, asset prep, and post steps, so creators can build finished outputs instead of one-off generations.

Codex supports 16-scene HTML landing pages with season and timezone logic
Creators shared a Codex and GPT Image 2 workflow that outputs static HTML landing pages whose scenes shift by season and local time. The setup gives humans a cleaner format to review, tweak, and navigate than Markdown when agents generate multi-scene pages.

Luma Agents adds Kling Omni to moodboard-to-ad workflows
Luma Agents added Kling Omni as a generation option and paired the integration with demos that carry a reference moodboard through to finished ad visuals. The update gives creators another video model inside Luma's existing campaign workflow.
Briefs for May 11
Top stories this week
Seedance 2.0 supports low-detail storyboard pipelines in Firefly, BeatBandit, and Leonardo
Creators documented low-detail storyboard pipelines for Seedance 2.0 across Firefly, BeatBandit, Leonardo, and InVideo. The guidance improves multi-shot continuity, but long generations still show errors in cuts and character consistency.


Claude Code adds HTML artifacts and publish-to-link workflows in creator demos
A viral creator workflow swaps Markdown for Claude-generated HTML files, then publishes them as live links for review and iteration. Users say the format is easier to scan and share for one-pagers, slides, and handoff docs, though the practice is entirely community-led.

Claude Design opens research preview with Canva, PPTX, and HTML exports
Anthropic says Claude Design can generate slides, prototypes, one-pagers, and other visual assets with exports to Canva, PDF, PPTX, and HTML. The preview also supports org-scoped sharing and one-step handoff into Claude Code, so teams can test the export flow now.

Higgsfield adds Ad Reference via MCP for top-performing video ad remixes
Higgsfield says Ad Reference MCP lets agents ingest winning video ads and generate new variants around the same patterns. The launch lands alongside Luma campaign builders and creator reports of Claude-and-Seedance phone-demo pipelines, pointing to repeatable ad iteration systems rather than one-off prompts.

Seedance 2.0 adds ComfyUI video extension for broadcast-shot workflows
Creators shared repeatable Seedance 2.0 workflows for ComfyUI clip extension, GPT Image 2 shot planning, and fake-broadcast or iPhone footage. The examples push Seedance beyond isolated shorts into longer, more controllable production pipelines.

Skills Spotlight: top by stars
comfyui
Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST/WebSocket API for execution.
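The "parameter injection" step this skill describes can be sketched against ComfyUI's HTTP API, where an API-format workflow is a JSON map of node IDs to inputs and a run is queued with a POST to `/prompt`. A minimal Python sketch, assuming a default local server on port 8188; the node ID and workflow fragment are illustrative, not from the skill itself:

```python
import json
import urllib.request
import uuid

COMFY_URL = "http://127.0.0.1:8188"  # assumed default local ComfyUI address

def inject_params(workflow: dict, node_id: str, updates: dict) -> dict:
    """Return a deep copy of an API-format workflow with one node's inputs patched."""
    patched = json.loads(json.dumps(workflow))  # cheap deep copy via JSON round-trip
    patched[node_id]["inputs"].update(updates)
    return patched

def queue_prompt(workflow: dict) -> bytes:
    """Queue a workflow by POSTing it to ComfyUI's /prompt endpoint."""
    body = json.dumps({"prompt": workflow, "client_id": str(uuid.uuid4())}).encode()
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt", data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Illustrative API-format fragment: node "3" is a KSampler whose seed
# gets re-rolled per run without touching the rest of the graph.
workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20}}}
patched = inject_params(workflow, "3", {"seed": 12345})
```

The original workflow dict stays untouched, so the same template can be re-injected with different values across a batch of runs.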
hyperframes
Create HTML-based video compositions, animated title cards, social overlays, captioned talking-head videos, audio-reactive visuals, and shader transitions using HyperFrames. HTML is the source of truth for video. Use when the user wants a rendered MP4/WebM from an HTML composition, wants to animate text/logos/charts over media, needs captions synced to audio, wants TTS narration, or wants to convert a website into a video.
kanban-orchestrator
Decomposition playbook + specialist-roster conventions + anti-temptation rules for an orchestrator profile routing work through Kanban. The "don't do the work yourself" rule and the basic lifecycle are auto-injected into every kanban worker's system prompt; this skill is the deeper playbook when you're specifically playing the orchestrator role.
Workflows you can try today
Codex
11th May: Codex supports 16-scene HTML landing pages with season and timezone logic
Creators shared a Codex and GPT Image 2 workflow that outputs static HTML landing pages whose scenes shift by season and local time. The setup gives humans a cleaner format to review, tweak, and navigate than Markdown when agents generate multi-scene pages.
Seedance
11th May: Seedance 2 supports Midjourney, GPT Image 2, and Agent One pipeline workflows
Creators shared repeatable pipelines pairing Seedance 2 with Midjourney, GPT Image 2, Nano Banana, custom editors, and Agent One for shorts, UGC, and story clips. The examples focus on shot planning, asset prep, and post steps, so creators can build finished outputs instead of one-off generations.
Seedance
10th May: Seedance 2.0 supports low-detail storyboard pipelines in Firefly, BeatBandit, and Leonardo
Creators documented low-detail storyboard pipelines for Seedance 2.0 across Firefly, BeatBandit, Leonardo, and InVideo. The guidance improves multi-shot continuity, but long generations still show errors in cuts and character consistency.
Seedance
9th May: InVideo Agent One tests Seedance 2.0 storyboard guidance
Creator tests show InVideo Agent One generating storyboards that Seedance 2.0 then uses as clip guidance, with similar production-sheet planning also appearing in GPT Image 2 workflows. It matters because scene beats and camera moves get defined before rendering, which can improve continuity across multi-tool video pipelines.





