Fresh stories
Google reportedly tests Gemini Omni video editing with chat remix and templates
Multiple posts preview a Google video model called Gemini Omni with remix, templates, and chat editing, plus demos that keep chalkboard math readable. The clips are still unofficial, but creators are watching the text-fidelity claim closely.

Prmptbio adds sref import and Smart Poster exports
A creator partner demo shows Prmptbio turning uploaded Midjourney style references into auto-labeled profile pages and poster exports in Grid, Bento, and Detailed layouts. The tool packages style references into shareable assets without rebuilding showcase pages by hand.

Codex supports 16-scene HTML landing pages with season and timezone logic
Creators shared a Codex and GPT Image 2 workflow that outputs static HTML landing pages whose scenes shift by season and local time. The setup gives humans a cleaner format to review, tweak, and navigate than Markdown when agents generate multi-scene pages.

Luma Agents adds Kling Omni to moodboard-to-ad workflows
Luma Agents added Kling Omni as a generation option and paired the integration with demos that carry a reference moodboard through to finished ad visuals. The update gives creators another video model inside Luma's existing campaign workflow.
Briefs for May 11
Top stories this week
GPT Image 2 supports 9-panel storyboards and 10-page brand books
Creators used GPT Image 2 for storyboard sheets, brand books, posters, and campaign visuals across Firefly, Paper, Codex, and Leonardo. The shift turns it into a preproduction tool, but tests still report inconsistent guideline adherence without extra context.

Seedance 2.0 supports low-detail storyboard pipelines in Firefly, BeatBandit, and Leonardo
Creators documented low-detail storyboard pipelines for Seedance 2.0 across Firefly, BeatBandit, Leonardo, and InVideo. The guidance improves multi-shot continuity, but long generations still show cut and character errors.

Claude Code adds HTML artifacts and publish-to-link workflows in creator demos
A viral creator workflow swaps markdown for Claude-generated HTML files, then publishes them as live links for review and iteration. Users say the format is easier to scan and share for one-pagers, slides, and handoff docs, though the practice is entirely community-led.

Claude Design opens research preview with Canva, PPTX, and HTML exports
Anthropic says Claude Design can generate slides, prototypes, one-pagers, and other visual assets with exports to Canva, PDF, PPTX, and HTML. The preview also supports org-scoped sharing and one-step handoff into Claude Code, so teams can test the export flow now.

Higgsfield adds Ad Reference via MCP for top-performing video ad remixes
Higgsfield says Ad Reference MCP lets agents ingest winning video ads and generate new variants around the same patterns. The launch lands alongside Luma campaign builders and creator reports of Claude-and-Seedance phone-demo pipelines, pointing to repeatable ad iteration systems rather than one-off prompts.

Skills Spotlight: top by stars
comfyui
Generate images, video, and audio with ComfyUI — install, launch, manage nodes/models, run workflows with parameter injection. Uses the official comfy-cli for lifecycle and direct REST/WebSocket API for execution.
hyperframes
Create HTML-based video compositions, animated title cards, social overlays, captioned talking-head videos, audio-reactive visuals, and shader transitions using HyperFrames. HTML is the source of truth for video. Use when the user wants a rendered MP4/WebM from an HTML composition, wants to animate text/logos/charts over media, needs captions synced to audio, wants TTS narration, or wants to convert a website into a video.
kanban-orchestrator
Decomposition playbook + specialist-roster conventions + anti-temptation rules for an orchestrator profile routing work through Kanban. The "don't do the work yourself" rule and the basic lifecycle are auto-injected into every kanban worker's system prompt; this skill is the deeper playbook when you're specifically playing the orchestrator role.
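The comfyui skill above drives ComfyUI through the server's REST API for execution. As a rough illustration of what "direct REST API for execution" looks like, here is a minimal sketch that queues a workflow graph against a locally running ComfyUI instance. The default address, the `/prompt` endpoint shape, and the `prompt_id` response field reflect ComfyUI's standard local server; the `queue_workflow` helper name and `client_id` value are illustrative, not part of the skill.

```python
import json
import urllib.request

# Default address of a locally launched ComfyUI server (assumption:
# yours may run on a different host/port).
COMFY_URL = "http://127.0.0.1:8188"


def build_payload(workflow: dict, client_id: str) -> dict:
    """Wrap an API-format workflow graph in the envelope /prompt expects."""
    return {"prompt": workflow, "client_id": client_id}


def queue_workflow(workflow: dict, client_id: str = "digest-demo") -> str:
    """POST the workflow to /prompt and return the prompt_id for later
    polling via /history/<prompt_id> or the /ws websocket."""
    req = urllib.request.Request(
        f"{COMFY_URL}/prompt",
        data=json.dumps(build_payload(workflow, client_id)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]
```

Progress and outputs are then read back over the `/ws` websocket or by polling `/history`, which is how parameter-injection wrappers like this skill track a run to completion.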
Workflows you can try today
Codex
11th May
Codex supports 16-scene HTML landing pages with season and timezone logic
Creators shared a Codex and GPT Image 2 workflow that outputs static HTML landing pages whose scenes shift by season and local time. The setup gives humans a cleaner format to review, tweak, and navigate than Markdown when agents generate multi-scene pages.
Seedance
11th May
Seedance 2 supports Midjourney, GPT Image 2, and Agent One pipeline workflows
Creators shared repeatable pipelines pairing Seedance 2 with Midjourney, GPT Image 2, Nano Banana, custom editors, and Agent One for shorts, UGC, and story clips. The examples focus on shot planning, asset prep, and post steps, so creators can build finished outputs instead of one-off generations.
AI Tool
10th May
GPT Image 2 supports 9-panel storyboards and 10-page brand books
Creators used GPT Image 2 for storyboard sheets, brand books, posters, and campaign visuals across Firefly, Paper, Codex, and Leonardo. The shift turns it into a preproduction tool, but tests still report inconsistent guideline adherence without extra context.
Seedance
10th May
Seedance 2.0 supports low-detail storyboard pipelines in Firefly, BeatBandit, and Leonardo
Creators documented low-detail storyboard pipelines for Seedance 2.0 across Firefly, BeatBandit, Leonardo, and InVideo. The guidance improves multi-shot continuity, but long generations still show cut and character errors.