AI Primer
release

LTX-2.3 ships production API with native vertical video and stronger image-to-video

LTX-2.3 opened a production API with upgrades to detail, audio, image-to-video motion, prompt following, and native vertical output. Use it to ship open video in real workflows, whether you run it locally or in the cloud for lip-synced shorts.


TL;DR

  • LTX-2.3 has opened a production API, moving its open multimodal video model from local and self-hosted setups into a cloud workflow that can plug into products and content pipelines, according to the launch thread and the API breakdown.
  • The release centers on five creator-facing upgrades: a rebuilt VAE for sharper detail, cleaner audio training, stronger image-to-video motion, native vertical output, and better prompt following, as described in the feature list and the prompt update.
  • A creator demo from TechHalla's walkthrough shows the model being used for a short character-led piece in under two hours for $9.39, with stills, lip-synced audio-to-video shots, and extra filler scenes.
  • For short-form creators, the native portrait support in the vertical video post matters as much as the API itself: LTX is positioning vertical, audio-aware video generation as a production format rather than a cropped afterthought.

What shipped

The main change is access. As the launch thread frames it, LTX-2.3 is now available as a production API, so teams no longer need a local GPU or self-hosted setup to use the open video model in real workflows. The linked model page describes 4K output, synchronized audio-visual generation, portrait video up to 1080×1920, and clips up to 20 seconds.
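The stated limits above (portrait up to 1080×1920, clips up to 20 seconds) are easy to check before sending a job. A minimal sketch, assuming nothing about the real API surface — the function and limit names here are illustrative, not LTX's:

```python
# Illustrative pre-flight check against the capabilities the model page
# lists for LTX-2.3: portrait video up to 1080x1920, clips up to 20 s.
# These names are assumptions for the sketch, not part of any real SDK.

MAX_PORTRAIT_WIDTH = 1080
MAX_PORTRAIT_HEIGHT = 1920
MAX_CLIP_SECONDS = 20

def validate_request(width: int, height: int, seconds: float) -> list[str]:
    """Return a list of problems; an empty list means the request is in-spec."""
    problems = []
    is_portrait = height > width
    if is_portrait and (width > MAX_PORTRAIT_WIDTH or height > MAX_PORTRAIT_HEIGHT):
        problems.append("portrait exceeds 1080x1920")
    if seconds > MAX_CLIP_SECONDS:
        problems.append("clip longer than 20 s")
    return problems

print(validate_request(1080, 1920, 12))  # → []
```

Validating locally before a paid render is cheap insurance; the limits come straight from the model page, but the check itself is just a sketch.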

Under that API launch is a meaningful model refresh. Hasan's breakdown says LTX rebuilt its VAE on less-compressed data for cleaner textures and edges, filtered training data to reduce noisy, artifact-heavy audio, and improved image-to-video so motion feels less like a slideshow and more like actual scene movement. The prompt-following update also matters for directors and editors: the prompt post says camera angle, motion direction, and composition now stick more reliably.

Native vertical output may be the most immediately useful creative change. Instead of reframing landscape generations for Shorts, Reels, and TikTok, the portrait demo says LTX-2.3 handles portrait video natively.

A workable short-form pipeline

The most concrete workflow in the evidence comes from TechHalla's demo, which says a finished video cost $9.39 and took under two hours. The setup steps in the thread start with generated stills, made with Nano Banana 2 inside LTX from a reference photo, then turn those stills into animated shots.

The second step is audio-to-video. The workflow post says you attach audio to a still, use prompting to control camera movement and character motion, and rely on built-in lip sync for dialogue shots. For pickup shots, the same thread describes generating filler scenes from just a still and a prompt, using fast passes for testing and Pro renders for finals.

That makes LTX-2.3 look less like a pure text-to-video toy and more like a shot assembly tool: stills for visual continuity, audio-to-video for speaking beats, and extra generations to bridge cuts.

Who this is for

LTX's clearest target is not hobby prompting. In the API breakdown, Hasan points to companies embedding AI video in products, model-aggregation platforms, builders of verticalized tools, and teams automating content pipelines. That fits the release itself: the big story is not one flashy sample, but an open model that now ships in API form with vertical video, audio, and stronger controllability.

For creators, that combination is the point. Local use is still part of the pitch, but the new API means the same model can sit behind desktop experimentation and production deployment without changing tools or rebuilding the workflow from scratch.
