AI Primer

Sigma launches private AI browser with local OpenClaw, Gemma 4, and Qwen support

Sigma added a private AI browser mode that runs OpenClaw with local models such as Gemma 4, Qwen, and Nemotron on-device. That matters because browser automation and page context can stay local instead of being routed through a hosted agent service.


TL;DR

  • Sigma said its new browser mode runs a private OpenClaw agent locally, with support for Gemma 4, Qwen 3.5, and Nemotron 3, according to testingcatalog's launch post.
  • The core pitch is that browser context and task execution stay on-device instead of being sent to a hosted agent backend, as testingcatalog and rohanpaul_ai's summary both describe.
  • Sigma is framing the feature as open source and cloud-free, with kimmonismus's post calling out "No cloud" and testingcatalog pointing to Sigma's site.
  • The browser agent is positioned to manage tabs and navigate pages directly, while rohanpaul_ai argues that putting the model inside Chromium turns the browser into an action surface rather than a chat box.
  • The first public access point in the evidence is a macOS test flow that testingcatalog's follow-up links from the launch thread.

You can browse Sigma's site, open the macOS test link from testingcatalog's thread follow-up, and read rohanpaul_ai's breakdown for the clearest articulation of why a browser-native agent changes the product shape. testingcatalog's main post also spells out the model lineup, which is a more concrete launch detail than most "AI browser" announcements manage.

Local agent runtime

Sigma's most concrete claim is simple: OpenClaw runs inside the browser with local models, and browser data stays on the machine. The launch post names Gemma 4, Qwen 3.5, and Nemotron 3 as supported options, which puts this closer to a local inference wrapper around browser actions than to the usual remote copilot tab.

The privacy framing is blunt. Sigma and early commentary describe the product as a no-cloud, open-source browser AI, with the model, page context, and task steps all kept on-device.
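None of the evidence includes OpenClaw's actual configuration or API, so as a purely hypothetical sketch, the "no cloud" claim cashes out as something like this: the inference request for a page task targets a local endpoint rather than a hosted service. Every name here (the model strings, the localhost port, the request shape) is an assumption for illustration, not Sigma's real interface.

```python
# Hypothetical sketch only: names and request shape are assumptions,
# not OpenClaw's real API. The point is that the target is localhost,
# so page context never leaves the machine.

LOCAL_MODELS = {"gemma-4", "qwen-3.5", "nemotron-3"}  # the lineup named in the launch post


def build_inference_request(model: str, page_context: str, task: str) -> dict:
    """Build a completion request aimed at a local inference server."""
    if model not in LOCAL_MODELS:
        raise ValueError(f"unsupported local model: {model}")
    return {
        # Local endpoint, not a hosted agent backend -- the core of the pitch.
        "url": "http://127.0.0.1:8080/v1/completions",
        "json": {
            "model": model,
            "prompt": f"Page:\n{page_context}\n\nTask: {task}",
        },
    }


if __name__ == "__main__":
    req = build_inference_request("gemma-4", "<html>...</html>", "summarize this page")
    print(req["url"])  # http://127.0.0.1:8080/v1/completions
```

The contrast with a typical hosted copilot is that the URL, and therefore the page context, stays on the loopback interface.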

Browser actions

The evidence points to a narrow but useful first capability set:

  • Managing browser tabs directly.
  • Navigating pages on the user's behalf.

That is the interesting product move here. Sigma is not just adding a sidebar chat. It is placing the agent in the one surface that already contains session state, forms, searches, and logged-in context.

OpenClaw in Chromium

Rohan Paul's thread supplies the clearest architecture description in the evidence: a local LLM sits inside a Chromium browser, reads the live page, understands intent, and acts on the web directly. That framing explains why Sigma is emphasizing locality so heavily. A browser agent has access to messy real task state that standalone assistants usually have to reconstruct through pasted context.
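That perceive, decide, act loop can be sketched in miniature. To be clear, this is a toy model of the architecture the thread describes, not OpenClaw's code: the `Page` type, the keyword-matching `decide` step, and the single-step `act` function are all invented here for illustration, with a real system presumably using the LLM itself for the decide step.

```python
# Toy sketch of a browser-native agent loop: read the live page, infer
# intent, act on the web directly. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Page:
    """Stand-in for live browser state the agent can read."""
    url: str
    text: str
    links: dict = field(default_factory=dict)  # link label -> target URL


def decide(intent: str, page: Page) -> dict:
    """Map a user intent plus live page state to a browser action.

    A real agent would ask the local model; a keyword match keeps the
    sketch self-contained.
    """
    for label, target in page.links.items():
        if label.lower() in intent.lower():
            return {"action": "navigate", "url": target}
    return {"action": "read", "url": page.url}


def act(page: Page, intent: str) -> Page:
    """One perceive -> decide -> act step of the loop."""
    step = decide(intent, page)
    if step["action"] == "navigate":
        # The agent moves the browser itself instead of replying in a chat box.
        return Page(url=step["url"], text="")
    return page


if __name__ == "__main__":
    page = Page("https://example.com", "Welcome",
                {"pricing": "https://example.com/pricing"})
    new_page = act(page, "open the pricing page")
    print(new_page.url)  # https://example.com/pricing
```

The design point the sketch illustrates is the one Paul makes: because the agent sits inside the browser, `Page` already carries real session state, so no context needs to be pasted into a separate assistant.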

macOS test access

The launch thread points to a macOS test entry point, which suggests Sigma is shipping access as a live preview rather than only a concept demo. The evidence does not include a broader platform matrix, but it does make macOS the one concrete availability detail attached to day-one access.

For a low-key launch, that is the part worth bookmarking: Sigma is pairing a privacy-first browser pitch with an actual test link, not just a manifesto about where browser agents should go.
