youraipulse opens Meta TRIBE v2 video analyzer with cut comparison and engagement graphs

A free local tool built on Meta TRIBE v2 now scores uploaded videos with predicted response graphs, edit suggestions and multi-cut comparison. That matters because creators can test alternate edits before publishing, though the release is still framed as a community prototype rather than an official Meta editing product.


TL;DR

  • AmirMushich's launch post introduced a free local video analyzer built on Meta TRIBE v2 that outputs a predicted response curve, brain-area visualizations, and edit suggestions.
  • According to AmirMushich's feature rundown, the app supports side-by-side comparison for up to four cuts, so creators can A/B test versions before posting.
  • The underlying Meta TRIBE v2 announcement describes the research model as a multimodal predictor of brain responses to sight and sound, while AmirMushich's repo link frames this release as a non-commercial community prototype, not a Meta product.
  • The repo README summary says the tool can export JSON and PDF reports, uses local Whisper transcription hints, and can optionally call Ollama for rewrite help.

You can read Meta's research post, inspect the official TRIBE v2 code, and download the community wrapper from AmirMushich's GitHub repo. The interesting bit is not just the brain-model flex but the editing UI: AmirMushich's screenshots show an overlaid response graph for multiple cuts, while youraipulse's comparison screenshot turns those curves into a ranked winner list with concrete weak spots like curve start and hold.

What shipped

The app wraps Meta's TRIBE v2 into a creator-facing workflow. In Meta's own release, TRIBE v2 is pitched as a research model that predicts brain responses to complex stimuli; in AmirMushich's repo post, the shipping layer is a free local interface for uploaded videos.

The interface centers on three outputs that creators can actually edit against:

  • a predicted response-over-time graph
  • a cortical brain map with highlighted active regions
  • recommendation cards tied to timestamps and weak sections
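If the README's JSON export carries these same three outputs, a per-cut report might look roughly like the sketch below. The field names (response_curve, zone_scores, recommendations) are assumptions for illustration, not the repo's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a per-cut report covering the three outputs above.
# Field names are illustrative guesses, not the schema of the repo's JSON export.

@dataclass
class Recommendation:
    timestamp_s: float   # where the weak section starts
    issue: str           # e.g. "flat opening", "hold drops after the midpoint"
    suggestion: str      # plain-language edit suggestion shown on the card

@dataclass
class CutReport:
    video_path: str
    response_curve: list[float]       # predicted response per time step
    zone_scores: dict[str, float]     # brain zone -> activation level
    recommendations: list[Recommendation] = field(default_factory=list)

    def weak_sections(self) -> list[Recommendation]:
        """Recommendations ordered by where they occur in the cut."""
        return sorted(self.recommendations, key=lambda r: r.timestamp_s)
```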

Compare mode

The best part is the multi-cut view. AmirMushich's feature rundown says the tool compares two to four uploads side by side, and youraipulse's comparison summary shows the output as overlaid curves, per-cut scores, and notes about where one version beats another.

That turns the model into a rough pre-publish testing rig instead of a single vanity score. In AmirMushich's screenshot thread, one cut leads by 21 points on hold and pace, while weaker versions get flagged for the same recurring gaps.
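As a rough sketch of what that ranking implies computationally: score each cut's predicted curve, split it into an opening stretch and a back-half "hold" segment, and sort. The split points and the 0-100 scaling below are invented for illustration; the tool's actual scoring formula is not spelled out in the posts.

```python
import numpy as np

def score_cut(curve: np.ndarray) -> dict[str, float]:
    """Summarize one predicted response curve on a 0-100 scale."""
    n = len(curve)
    return {
        "overall": float(curve.mean() * 100),
        "start": float(curve[: max(1, n // 5)].mean() * 100),  # opening stretch
        "hold": float(curve[n // 2:].mean() * 100),            # back-half retention
    }

def rank_cuts(curves: dict[str, np.ndarray]) -> list[tuple[str, dict[str, float]]]:
    """Rank two to four cuts by overall score, best first."""
    scored = {name: score_cut(c) for name, c in curves.items()}
    return sorted(scored.items(), key=lambda kv: kv[1]["overall"], reverse=True)

# Two toy curves: one that builds over the cut, one that fades.
rng = np.random.default_rng(0)
cuts = {
    "cut_a": np.clip(np.linspace(0.45, 0.80, 120) + rng.normal(0, 0.03, 120), 0, 1),
    "cut_b": np.clip(np.linspace(0.70, 0.50, 120) + rng.normal(0, 0.03, 120), 0, 1),
}
for name, scores in rank_cuts(cuts):
    print(name, {k: round(v, 1) for k, v in scores.items()})
```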

The editing workflow

The screenshots make the intended workflow unusually explicit. The app tells users to keep the video on the left, read the curve and brain map in the center, then use the zone bars on the right as the explanation layer.

The five named zones visible in the interface screenshot are:

  1. Frontal zone
  2. Action zone
  3. Attention zone
  4. Speech and recognition zone
  5. Visual zone
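To make the "explanation layer" role concrete, one plausible way to produce those five bars is to collapse the model's fine-grained regional activations into the coarse zones the screenshot names. The region-to-zone grouping below is a guess based on standard cortical naming, not the app's actual mapping.

```python
# Invented region-to-zone grouping, for illustration only.
ZONES = {
    "Frontal zone": ["prefrontal", "orbitofrontal"],
    "Action zone": ["motor", "premotor"],
    "Attention zone": ["parietal"],
    "Speech and recognition zone": ["temporal", "auditory"],
    "Visual zone": ["occipital"],
}

def zone_bars(region_scores: dict[str, float]) -> dict[str, float]:
    """Average fine-grained region activations into the five displayed zones."""
    bars = {}
    for zone, regions in ZONES.items():
        hits = [region_scores[r] for r in regions if r in region_scores]
        bars[zone] = sum(hits) / len(hits) if hits else 0.0
    return bars

# Example activations on a 0-1 scale.
print(zone_bars({"prefrontal": 0.62, "motor": 0.48, "parietal": 0.71,
                 "auditory": 0.55, "occipital": 0.83}))
```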

Local stack and license

The repo matters because it answers the obvious caveat. The README summary says this is a local web app built around Meta's official TRIBE v2 inference path, with JSON and PDF exports, Whisper-based transcript timing hints, and optional Ollama support for local copy rewriting.
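As a minimal sketch of that local helper path, a wrapper could run openai-whisper for segment timings and then hit a local Ollama server for line rewrites. The prompt wording, model name, and function names below are assumptions; only the general Whisper and Ollama interfaces are real.

```python
import requests
import whisper  # openai-whisper

def transcript_hints(video_path: str) -> list[dict]:
    """Local Whisper pass: per-segment timing hints for the uploaded cut."""
    model = whisper.load_model("base")
    result = model.transcribe(video_path)
    return [
        {"start": seg["start"], "end": seg["end"], "text": seg["text"].strip()}
        for seg in result["segments"]
    ]

def rewrite_line(line: str, model: str = "llama3") -> str:
    """Optional Ollama call: ask a local model for a tighter rewrite of one line."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model,
              "prompt": f"Rewrite this video line so it hooks faster: {line}",
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    hints = transcript_hints("my_cut.mp4")
    print(hints[:3])
    print(rewrite_line(hints[0]["text"]))
```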

The official TRIBE v2 repo and model card describe the base model as a CC BY-NC 4.0 research release for video, audio, and text brain-response prediction. AmirMushich's caveat post repeats the same boundary: non-commercial, community-built, and not affiliated with Meta.
