TRIBE v2 powers 100-ad-variant pre-screens before paid tests
TRIBE v2 is being used to score up to 100 short-form ad variants before paid testing, using attention, motion, emotion, memory, and scene cues. It predicts brain response instead of CTR, and the CC-BY-NC license limits agency use.

TL;DR
- Meta released TRIBE v2 as an open-weight brain-response model for video, audio, and text, and shannholmberg's source roundup matched the official Meta announcement, model page, and code repo.
- The creative use case spreading through X is a pre-screen loop where, as shannholmberg's workflow post puts it, teams generate 100 short-form variants, score them on attention, motion, emotion, memory, and scene cues, then send only a handful into paid tests.
- The strongest caveat came from shannholmberg's caveat post, which says TRIBE v2 predicts brain response rather than CTR or conversion, while the official Hugging Face page says outputs are for an "average" subject on a roughly 20,000-vertex cortical mesh.
- Early creator experiments from AmirMushich's comparison post and AmirMushich's TikTok test are comparing predicted curves against real YouTube and TikTok retention, but AmirMushich's follow-up reply also says more real tests are needed before making bigger claims.
- Commercial use is not wide open: shannholmberg's license post says the release is CC-BY-NC, and the official GitHub repo lists the same license.
You can try Meta's demo, browse the open weights, and inspect a fast-moving creator wrapper in this GitHub app. The interesting bit is not just that Meta published a neuroscience model. It is that creators immediately turned it into a rough-cut filter for ads, Shorts, and TikToks.
What Meta shipped
Meta's launch post describes TRIBE v2 as a predictive model of human brain responses to sight, sound, and language. The release package included a paper, code, model weights, and an interactive demo.
The official model page fills in the technical shape: TRIBE v2 combines LLaMA 3.2 for text, V-JEPA2 for video, and Wav2Vec-BERT for audio, then maps those multimodal representations onto the cortical surface.
Meta says the model was trained on data from more than 700 healthy volunteers, while the launch post also frames it as a 70x resolution jump over similar models. On the inference side, the Hugging Face page says predictions target an average subject and live on the fsaverage5 cortical mesh, about 20,000 vertices.
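To make the shape of that pipeline concrete, here is a minimal conceptual sketch: three encoder outputs are fused and read out onto one value per vertex of the fsaverage5 mesh. The dimensions, function name, and weight matrix are assumptions for illustration, not Meta's actual TRIBE v2 code or interfaces.

```python
import numpy as np

# Conceptual sketch only: dimensions and names are illustrative assumptions,
# not the real TRIBE v2 implementation.
N_VERTICES = 20484                           # fsaverage5: 10,242 vertices per hemisphere
D_TEXT, D_VIDEO, D_AUDIO = 3072, 1024, 1024  # assumed encoder output widths

rng = np.random.default_rng(0)

def predict_brain_response(text_emb, video_emb, audio_emb, w):
    """Map concatenated multimodal features to one predicted value per cortical vertex.

    `w` stands in for the learned fusion/readout weights; the real model is far
    more elaborate (temporal context, attention over modalities, subject handling).
    """
    fused = np.concatenate([text_emb, video_emb, audio_emb])
    return w @ fused  # shape: (N_VERTICES,)

# Placeholder inputs standing in for LLaMA 3.2, V-JEPA2, and Wav2Vec-BERT features.
text_emb = rng.standard_normal(D_TEXT)
video_emb = rng.standard_normal(D_VIDEO)
audio_emb = rng.standard_normal(D_AUDIO)
w = rng.standard_normal((N_VERTICES, D_TEXT + D_VIDEO + D_AUDIO)) * 1e-3

response = predict_brain_response(text_emb, video_emb, audio_emb, w)
print(response.shape)  # (20484,): one predicted response per mesh vertex
```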
The 100-variant workflow
The clearest creator workflow came from shannholmberg's workflow post. It turns TRIBE v2 into a triage layer before paid distribution; a minimal sketch of that loop follows the list.
- Generate a large batch of short-form ad variants.
- Run each cut through TRIBE v2.
- Score the outputs on attention, faces, motion, emotion, memory, and scene.
- Drop the weakest set before buying traffic.
- Push the finalists into normal A/B testing.
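Here is that triage loop as a short sketch. It assumes each variant already has per-cue scores in a 0-to-1 range; the cue weights, score fields, and helper names are made up for illustration and are not part of the TRIBE v2 tooling.

```python
import random
from typing import Dict, List

# Assumed weighting across the cues named in the workflow (illustrative only).
CUE_WEIGHTS = {
    "attention": 0.30, "faces": 0.10, "motion": 0.15,
    "emotion": 0.20, "memory": 0.15, "scene": 0.10,
}

def composite_score(cue_scores: Dict[str, float]) -> float:
    """Collapse per-cue scores (assumed 0-1) into a single ranking number."""
    return sum(CUE_WEIGHTS[c] * cue_scores.get(c, 0.0) for c in CUE_WEIGHTS)

def triage(variants: List[dict], keep: int = 5) -> List[dict]:
    """Rank every variant by composite score and keep only the finalists."""
    ranked = sorted(variants, key=lambda v: composite_score(v["scores"]), reverse=True)
    return ranked[:keep]

# Example: 100 variants with already-computed cue scores (random placeholders here).
random.seed(0)
variants = [
    {"id": f"cut_{i:03d}", "scores": {c: random.random() for c in CUE_WEIGHTS}}
    for i in range(100)
]
finalists = triage(variants, keep=5)
print([v["id"] for v in finalists])  # the handful that would go into paid A/B tests
```

The design choice that matters is the ranking step: the model never decides what ships, it only decides which cuts are worth paying to test.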
That pitch lands because the economics changed. As shannholmberg's cost comparison notes, older neuromarketing studies needed scanners, paid subjects, and weeks of work, while TRIBE v2 can be run in software at near-zero marginal cost per additional variant.
The same idea is already being wrapped in creator tooling. According to this community GitHub app, one local interface built around the official model can review one upload deeply, compare two to four versions side by side, visualize response-over-time curves and 3D brain activity, then export JSON and PDF reports.
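For a sense of what those exports enable, here is a guess at a side-by-side comparison report and how a team might read it back. The field names and numbers are assumptions, not the app's actual schema.

```python
import json

# Hypothetical comparison export; field names are illustrative, not the app's schema.
report = {
    "variants": [
        {"file": "cut_a.mp4",
         "cues": {"attention": 0.72, "faces": 0.61, "motion": 0.55,
                  "emotion": 0.68, "memory": 0.49, "scene": 0.58},
         "response_over_time": [0.80, 0.74, 0.69, 0.66]},  # one value per time window
        {"file": "cut_b.mp4",
         "cues": {"attention": 0.64, "faces": 0.70, "motion": 0.62,
                  "emotion": 0.51, "memory": 0.57, "scene": 0.60},
         "response_over_time": [0.71, 0.70, 0.72, 0.65]},
    ]
}

# Pick the variant with the stronger average response curve.
best = max(report["variants"],
           key=lambda v: sum(v["response_over_time"]) / len(v["response_over_time"]))
print(json.dumps({"winner": best["file"]}, indent=2))
```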
Early correlation tests
The first wave of user testing is messy, but it is concrete. AmirMushich's comparison post says creators are already uploading multiple videos to compare predicted performance, and AmirMushich's YouTube follow-up shows one early check against real YouTube results.
AmirMushich's TikTok test goes further with a 2.4 million-view TikTok, claiming high predicted activity in visual and recognition regions during the opening seconds. In a separate example, youraipulse's YouTube metrics post paired a TRIBE readout with YouTube Studio numbers showing 130 percent average percentage viewed and 81.5 percent retention.
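The check these posts describe boils down to lining up a predicted response curve against a platform retention curve and measuring agreement. A small sketch of that comparison follows; the curves below are placeholders, not real TRIBE v2 output or real YouTube or TikTok data.

```python
import numpy as np

def resample(curve: np.ndarray, n: int) -> np.ndarray:
    """Linearly resample a curve to n points so the two series line up."""
    x_old = np.linspace(0.0, 1.0, len(curve))
    x_new = np.linspace(0.0, 1.0, n)
    return np.interp(x_new, x_old, curve)

def curve_correlation(predicted, retention) -> float:
    """Pearson correlation between a predicted response curve and audience retention."""
    n = min(len(predicted), len(retention))
    p = resample(np.asarray(predicted, dtype=float), n)
    r = resample(np.asarray(retention, dtype=float), n)
    return float(np.corrcoef(p, r)[0, 1])

# Placeholder curves: a predicted "attention" curve and a retention curve,
# both sampled once per second over a 30-second cut.
t = np.arange(30)
predicted = np.exp(-t / 20) + 0.05 * np.random.default_rng(1).standard_normal(30)
retention = np.exp(-t / 25)

print(round(curve_correlation(predicted, retention), 2))
```

A high correlation on a few videos is suggestive, not proof; it says nothing yet about CTR or conversion.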
None of that closes the loop on business outcomes. AmirMushich's cautionary reply explicitly says more real tests are needed, which is a better read than the more breathless "predict virality" framing in youraipulse's virality claim.
Agency limits
The biggest practical brake is licensing. shannholmberg's license post says the release is CC-BY-NC, which lines up with the official GitHub repository and model page.
That matters because the creative use case spreading fastest is commercial. shannholmberg's license post says testing your own creative fits the current release, but agencies doing paid client work need a separate Meta agreement.
A second brake is what the model actually predicts. As shannholmberg's caveat post puts it, TRIBE v2 is a directional pre-screen for brain response, not a replacement for CTR models, conversion models, or paid experiments.