AI Primer
release

Lightning V3.1 releases 10-second voice cloning with 44.1kHz output and sub-100ms latency

Smallest says Lightning V3.1 can clone a voice from about 10 seconds of audio with 44.1kHz output, sub-100ms latency and 50-plus languages on Waves. Test it for multilingual narration and dubbing, but get explicit permission before cloning any voice.


TL;DR

  • Smallest's launch thread says Lightning V3.1 on Waves can clone a voice from roughly 10 seconds of audio, with 44.1kHz output, sub-100ms latency, and support for 50-plus languages.
  • A separate speed demo frames the main workflow simply: upload a short clip, then generate a cloned voice fast enough for near-real-time use.
  • The broader tech breakdown positions this less as a novelty voice toy and more as a creator tool for multilingual narration, podcast reads, and quick turnaround voiceover work.
  • Smallest's own product post says the feature is free to try on Waves right now.

What shipped

The release centers on a narrow but useful promise: dramatically less source audio. According to the speed demo, Lightning V3.1 needs about 10 seconds to build a clone, versus the much longer samples many voice tools have historically required. The same launch thread claims 44.1kHz output and sub-100ms latency, which points to cleaner export quality and faster preview loops for creators working on voice-led content.
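To put those headline numbers in perspective, here is a quick back-of-the-envelope sketch in plain Python. It has nothing to do with Smallest's implementation; it just shows how little data a 10-second reference clip at 44.1kHz actually is, and how much output audio a sub-100ms latency window covers:

```python
# Illustrative arithmetic only; not Smallest's implementation.
SAMPLE_RATE_HZ = 44_100    # claimed output sample rate
CLIP_SECONDS = 10          # claimed minimum reference audio
LATENCY_BUDGET_S = 0.100   # the "sub-100ms" latency claim

# A 10-second mono reference clip at 44.1kHz, 16-bit PCM:
samples = SAMPLE_RATE_HZ * CLIP_SECONDS
clip_bytes = samples * 2   # 16-bit = 2 bytes per sample
print(f"Reference clip: {samples:,} samples (~{clip_bytes / 1_048_576:.1f} MiB)")

# Output audio that fits inside one latency window:
window_samples = int(SAMPLE_RATE_HZ * LATENCY_BUDGET_S)
print(f"Latency window covers {window_samples:,} samples of output")
```

In other words, the reference material is under a megabyte of raw audio, which is part of why a near-real-time preview loop is plausible at all.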

Smallest's tech breakdown says the model runs on Waves and supports 50-plus languages. The Waves entry point is already live, and the company says the tool is free to test.

What creators can actually do with it

The clearest creator angle is voice reuse without another recording session. In the use-case demo, the examples focus on narrating reels and tutorials in your own voice, generating podcast intros and ad reads, and using one English sample to speak other languages including Spanish, Hindi, French, and Japanese. That makes the release more interesting for dubbing and rapid social edits than for one-off novelty clones.
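As a concrete illustration of that one-clip, many-languages loop, the sketch below is purely hypothetical: the function names, endpoints, and request shapes are placeholders invented for this article, not Smallest's documented Waves API.

```python
# Hypothetical dubbing-loop sketch. Every name and request shape here
# is a placeholder, NOT Smallest's real API.

def build_clone_request(sample_path: str) -> dict:
    """Describe an upload of ~10s of reference audio (placeholder shape)."""
    return {"endpoint": "/clone", "audio_file": sample_path}

def build_tts_request(voice_id: str, text: str, language: str) -> dict:
    """Describe a generation call in the cloned voice (placeholder shape)."""
    return {"endpoint": "/tts", "voice_id": voice_id,
            "text": text, "language": language}

# One English reference clip, then narration in several languages:
clone = build_clone_request("my_voice_10s.wav")
voice_id = "voice_123"  # would come back from the (hypothetical) clone call
for lang in ["es", "hi", "fr", "ja"]:
    req = build_tts_request(voice_id, "Welcome to the show.", lang)
    print(req["language"], req["endpoint"])
```

The design point the demos emphasize is that the clone step happens once, while the generation step repeats per language or per script revision.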

Quality is the open question in every cloning launch, and the side-by-side demo is the main evidence here. The post claims most listeners cannot easily tell the real and cloned voices apart, though that is still the company's own showcase rather than an independent test. Even so, the combination of short input, multilingual output, and fast generation is the real production shift.
