Hyperbrowser launches HyperPlex to run parallel browser agents across models
Hyperbrowser released HyperPlex, an open-source research agent that splits a goal into subtasks, runs browser workers in parallel, and returns cited reports. Teams building deep-research products can study the repo for orchestration, live browsing, and report synthesis patterns.

TL;DR
- Hyperbrowser launched HyperPlex as an open-source research agent that takes a goal, schedules work, and returns a cited report after running browser-based research in the background, according to the launch thread.
- The core workflow is parallel by design: HyperPlex "spawns browser agents in parallel," reads live web pages, and can run with multiple models rather than a single-provider stack, as described in Hyperbrowser's demo and echoed by a practitioner summary.
- For engineers, the release is as much a reference implementation as a product announcement: Hyperbrowser linked the code and positioned it on top of its browser API infrastructure in the repo post and the API post.
What shipped
HyperPlex is an open-source "research agent that works while you're away," as Hyperbrowser's launch thread puts it, built to take a user goal, break the work into browser-driven research steps, and deliver a cited output. The launch video HyperPlex overview shows the system fanning out from a single prompt into multiple browser tasks before compiling a final report.
Hyperbrowser also published the code in the HyperPlex repo, which makes this more relevant to engineers than a typical agent teaser. The implementation sits alongside Hyperbrowser's broader browser API pitch, which frames the company as providing "cloud browsers for AI agents via API" rather than only shipping an end-user app.
How the architecture looks from the launch materials
The technical pattern is a multi-agent research pipeline. Hyperbrowser's product description says you can "let it run with multiple models" and that it will "spawn browser agents in parallel," while the practitioner walkthrough adds detail: HyperPlex breaks a goal into subtasks, dispatches sub-agents across Anthropic, OpenAI, and Gemini models, scrapes the web through Hyperbrowser, and returns a cited answer in real time. That cross-model orchestration detail comes from the external summary, not the primary launch post.
The practical takeaway is the combination of three pieces engineers usually have to wire together themselves: browser execution, parallel task decomposition, and report synthesis with citations. Hyperbrowser's own supporting thread keeps the scope simple — "Give it a goal. Schedule it" — but the launch materials suggest HyperPlex is meant as a template for deep-research systems that need live-page access and asynchronous completion, not just a chat wrapper.
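The fan-out/fan-in shape described above (decompose a goal, run browser workers in parallel, synthesize a cited report) can be sketched in a few lines of asyncio. This is a hypothetical illustration of the pattern only, not HyperPlex's actual code: `decompose`, `browse`, and `synthesize` are stand-in names, and the browser step is simulated rather than calling any real Hyperbrowser API.

```python
import asyncio

def decompose(goal: str) -> list[str]:
    # In a real system an LLM planner would produce these subtasks;
    # here they are hard-coded for illustration.
    return [f"{goal}: background", f"{goal}: recent news", f"{goal}: criticisms"]

async def browse(subtask: str) -> dict:
    # Stand-in for a cloud-browser worker that researches one angle.
    await asyncio.sleep(0.01)  # simulate network latency
    return {
        "finding": f"notes on {subtask}",
        "citation": f"https://example.com/{subtask.replace(' ', '-')}",
    }

def synthesize(results: list[dict]) -> str:
    # Fan-in: merge worker findings into one report with inline citations.
    lines = [f"- {r['finding']} [{r['citation']}]" for r in results]
    return "Report:\n" + "\n".join(lines)

async def run(goal: str) -> str:
    subtasks = decompose(goal)
    # Fan-out: all browser workers run concurrently.
    results = await asyncio.gather(*(browse(t) for t in subtasks))
    return synthesize(list(results))

report = asyncio.run(run("agent frameworks"))
print(report)
```

The real system would swap the simulated `browse` for a cloud browser session and route each subtask to a different model provider, but the orchestration skeleton (plan, gather, synthesize) is the part the repo is most useful for studying.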