OpenCode said all Go models now run under zero-retention agreements, clarified that its hosted routes use the same providers customers reach when going direct, and explained why higher subscription tiers are risky to price. The clarification matters for users weighing telemetry, proxying, and how local the web UI really is, so teams should verify their data path.

The concrete product change is narrow but important: OpenCode says it has signed zero-data-retention agreements "with all providers for Go," so every model on the hosted Go plan now follows that policy. That is stronger than a generic privacy claim because it describes provider-side retention terms, not just OpenCode's own handling.
Posted by rbanffy
OpenCode is an open-source AI coding agent that helps developers write code via terminal, IDE, or desktop app. Features include LSP support, multi-session capabilities, shareable session links, integration with GitHub Copilot and ChatGPT Plus/Pro, compatibility with 75+ LLM providers including local models, and a privacy-first design that stores no code or context. The project claims 120,000 GitHub stars, 800 contributors, 10,000+ commits, and 5M monthly users, and offers free models or Zen for optimized coding models, with a desktop beta available for macOS, Windows, and Linux.
The change also eases a tension already visible in OpenCode's launch materials. Its product page describes the tool as "privacy-first" and says it stores no code or context, while the Hacker News discussion around the launch showed engineers using it across terminal, IDE, desktop, and local-model setups. Zero retention for Go makes the hosted subscription more consistent with that positioning, even though it does not by itself answer every question about the full request path.
OpenCode addressed one of the louder user suspicions directly: according to thdxr's post, the $10 Go plan uses "the exact same providers you use when you go direct." The caveat was operational rather than architectural — providers "are constantly tweaking things" and sometimes introduce small bugs — but the claim is that OpenCode is not serving a cheaper or specially degraded model variant behind the subscription.
OpenCode looks interesting as a terminal/IDE/desktop coding harness with LSP support, multi-backend sessions, local-model compatibility, and plugin extensibility. The thread’s main practical signal is how people are running it: remotely via WebUI/Tailscale, inside Emacs, and inside sandboxes. The cautionary signal is operational: telemetry, proxying to app.opencode.ai, rapid release churn, and resource usage are recurring concerns.
That clarification matters because the community discussion was not only about model quality. In the Hacker News thread, users highlighted practical upside in opencode serve, multi-backend WebUI control, LSP integration, and custom IPC plugins, but others warned that telemetry still reaches OpenCode's servers and that the WebUI proxies via OpenCode infrastructure rather than running fully local. OpenCode's pricing explanation adds another piece: the company says subscriptions depend on negotiated inference economics, where a $10 plan might target something like "$40 of inference," and larger tiers are risky because any pricing error scales with them. Put together, the new message is clearer on provider retention and routing parity, while the older deployment questions around telemetry and proxying remain separate concerns.
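The scaling argument behind the pricing explanation can be illustrated with back-of-envelope arithmetic. The 4x inference-to-price ratio comes from the post's "$10 plan might target $40 of inference" example; the 15% cost misestimate and the higher tier prices below are hypothetical numbers chosen purely for illustration.

```python
# Sketch of why a pricing error scales with subscription tier size.
# TARGET_MULTIPLE follows the "$10 plan -> ~$40 of inference" example
# from the post; MISESTIMATE and the tier prices are hypothetical.

TARGET_MULTIPLE = 4       # dollars of inference targeted per subscription dollar
MISESTIMATE = 0.15        # hypothetical error in predicted inference cost


def exposure(price, misestimate=MISESTIMATE):
    """Extra cost per user if inference was underestimated by `misestimate`."""
    return price * TARGET_MULTIPLE * misestimate


for price in (10, 50, 200):   # subscription tiers in dollars/month
    inference = price * TARGET_MULTIPLE
    print(f"${price} tier -> ~${inference} of inference, "
          f"~${exposure(price):.0f}/user at risk")
```

The same relative error costs $6 per user on a $10 tier but $120 per user on a hypothetical $200 tier, which is the sense in which "any pricing error scales" with larger plans.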
"We've signed Zero Data Retention agreements with all providers for Go. All models now follow a zero-retention policy; your data is not used for training."

"We see conspiracy theories claiming models on OpenCode Go ($10 plan) are served differently. We're using the exact same providers you use when you go direct. All providers are constantly tweaking things and sometimes there are bugs, but it's pretty minor."