AI Primer

OpenCode adds zero-retention for Go providers as operators report 3-4 GB idle sessions

OpenCode says all Go models now run under zero-data-retention agreements and that hosted requests use the same upstream providers as direct access. That tightens the privacy boundary for hosted coding agents, but operators still need to watch RAM use, rapid updates, and plan economics.


TL;DR

  • OpenCode says every model on its $10 per month Go plan now runs under zero-data-retention agreements, and that provider-side data is not used for training (OpenCode zero-retention announcement).
  • The official Go docs already described Go providers as zero-retention and non-training, but the new post makes that policy explicit across every provider serving the plan (OpenCode zero-retention announcement; OpenCode Go docs).
  • After users claimed OpenCode Go might be routing through a weaker or different hosted path, OpenCode co-founder @thdxr said the service uses the same upstream providers customers would hit directly (thdxr on same providers).
  • The harder problem is unit economics: @thdxr says the $10 tier only works if OpenCode can negotiate enough inference margin to turn roughly $10 of revenue into something like $40 of model spend, and a higher tier would scale the downside when those estimates are wrong (thdxr on plan economics).
  • Practitioners in the main Hacker News thread describe a different bottleneck: RAM and process overhead. One fresh comment says a couple of idle sessions can quietly consume 3 to 4 GB combined (fresh HN discussion), while the official docs still pitch OpenCode as a privacy-first, multi-surface agent for terminal, desktop, and IDE use (HN launch page; OpenCode homepage).

You can read the new Go docs, check the pricing page, and compare that polished privacy story with a very engineer-coded HN thread about remote `opencode serve` workflows, plugin IPC hacks, and a context plugin that adds prune and retrieve commands.

Zero-retention becomes the hosted pitch

OpenCode has always sold the open source product as privacy-first, with enterprise docs saying code and context data do not need to leave customer infrastructure when teams use their own provider or gateway. Go is the hosted wrinkle, because OpenCode sits between the user and the model provider there.

The current Go documentation says the hosted plan is in beta, runs models in the US, EU, and Singapore, and that providers follow a zero-retention policy and do not use customer data for training. The new announcement matters because it turns that from a docs line into a provider-wide claim, explicitly tied to signed zero-data-retention agreements.

What the $10 plan actually includes

Go is a subscription layer on top of OpenCode Zen, not a separate agent. The docs say one workspace member subscribes, pastes an API key into `/connect`, and then gets a rotating set of hosted open models through the normal provider interface.

The current Go docs list six models: GLM-5, Kimi K2.5, MiMo-V2-Pro, MiMo-V2-Omni, MiniMax M2.5, and MiniMax M2.7. They also publish budget-based limits instead of raw request caps:

  • 5 hour limit: $12 of usage
  • Weekly limit: $30 of usage
  • Monthly limit: $60 of usage
  • Overflow path: optional Zen balance top-up after limits are hit
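The three caps behave like nested rolling budget windows, where the tightest window at any moment is the binding one. A minimal sketch of how a client might compute remaining headroom against those published limits (the function and event shape are hypothetical, not OpenCode's implementation):

```python
from datetime import datetime, timedelta

# Budget windows and caps from the Go docs; the checking logic is a sketch.
LIMITS = [
    (timedelta(hours=5), 12.00),   # 5-hour limit: $12
    (timedelta(weeks=1), 30.00),   # weekly limit: $30
    (timedelta(days=30), 60.00),   # monthly limit: $60
]

def headroom(events, now):
    """events: list of (timestamp, dollars spent). Returns the tightest
    remaining budget across all windows; <= 0 means a limit is hit."""
    remaining = []
    for window, cap in LIMITS:
        spent = sum(cost for ts, cost in events if now - ts <= window)
        remaining.append(cap - spent)
    return min(remaining)

now = datetime(2026, 3, 1, 12, 0)
events = [(now - timedelta(hours=1), 8.00),   # counts in all three windows
          (now - timedelta(days=2), 15.00)]   # outside the 5-hour window
print(round(headroom(events, now), 2))
```

With this shape, a heavy burst trips the 5-hour cap long before the weekly or monthly caps matter, which is exactly the "predictability" the tiered limits are selling.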

That structure explains @thdxr's thread about why a $50 tier is not trivial. The company is selling predictability on top of volatile inference costs, not just reselling tokens with a markup.
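The margin math behind that claim is simple but unforgiving. Back-of-the-envelope arithmetic using the figures from the thread (the variable names are illustrative, and the numbers are @thdxr's rough estimates, not disclosed financials):

```python
revenue = 10.00           # monthly Go subscription price
list_price_usage = 40.00  # rough model spend per user at provider list prices

# Break-even requires paying at most revenue / list_price_usage of list
# price, i.e. a negotiated discount of at least:
required_discount = 1 - revenue / list_price_usage
print(f"{required_discount:.0%}")
```

A $50 tier with proportionally heavier usage needs the same discount but multiplies the absolute shortfall whenever the usage estimate is wrong, which is the downside-scaling point in the thread.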

Same upstreams, plus an operator trust gap

One complaint about cheap hosted model plans is that the aggregator may be silently serving a different backend than the model you would get by going direct. OpenCode addressed that head-on.

@thdxr said Go uses the exact same providers users would access directly, while adding that providers constantly tweak behavior and sometimes ship bugs. That does not remove the trust gap, but it narrows the technical claim to something concrete: OpenCode is positioning Go as packaging, routing, and negotiated access, not as a shadow model tier with different weights or hidden degradation.

Memory leaks and idle overhead are still the loudest field reports

The liveliest practitioner feedback is not about retention policy; it is about runtime behavior. HN commenters describe OpenCode as powerful and flexible, but also resource-hungry, especially when multiple sessions or background processes pile up.
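The 3-4 GB idle claim is easy to check on your own machine by summing resident memory across matching processes. The helper below is a generic sketch (not an OpenCode tool) that shells out to `ps` on Linux or macOS:

```python
import subprocess

def total_rss_mb(name):
    """Sum resident set size (MB) across processes whose command name
    contains `name`, using BSD/procps-style `ps` output."""
    out = subprocess.run(
        ["ps", "-axo", "rss=,comm="], capture_output=True, text=True, check=True
    ).stdout
    total_kb = 0
    for line in out.splitlines():
        parts = line.split(None, 1)  # RSS in KB, then the command name
        if len(parts) == 2 and parts[0].isdigit() and name in parts[1]:
            total_kb += int(parts[0])
    return total_kb / 1024

# e.g. total_rss_mb("opencode") after leaving a couple of sessions idle
print(total_rss_mb("opencode"))
```

Measuring before and after opening a second idle session is enough to reproduce, or refute, the combined-overhead numbers in the thread.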

The Hacker News thread, "Fresh discussion on OpenCode – Open source AI coding agent," has drawn 1.3k upvotes and 624 comments.

The public bug trail backs that up. A March 24 GitHub issue reported huge memory consumption on Debian in v1.1.51 and was closed as completed on April 2. Another issue from March 22 described orphaned LSP processes continuing to run after OpenCode exits, causing severe memory leaks on macOS. That leaves OpenCode with a familiar coding-agent split screen: the hosted privacy boundary just got sharper, while the local operational story is still being debugged in public.
