LangChain launched Deep Agents Deploy in beta as a production path for open, model-agnostic agent harnesses configured with AGENTS.md, skills, and mcp.json. Deployments run on LangSmith and can expose MCP, A2A, and Agent Protocol interfaces while teams choose models and sandbox providers.

The harness is configured with AGENTS.md, /skills, and mcp.json, while the official docs add deepagents.toml as the deployment config entrypoint. You can read the launch post, skim the deploy docs, and compare the whole pitch to Claude Managed Agents right in LangChain's own copy. The useful bits are concrete: a single deepagents deploy command, 30+ server endpoints, self-hostable LangSmith Deployments, and one easy-to-miss constraint in the docs: deployed mcp.json only supports HTTP and SSE transports, not stdio.
LangChain is packaging agent deployment around the same artifacts many teams already use during local agent development. The blog post describes AGENTS.md as the session-start instruction set, skills as markdown and scripts for specialized knowledge and actions, and mcp.json as the tool registry.
The docs add one more file that matters in practice: deepagents.toml. That file carries the agent name, model choice, and optional sandbox config, while skills and MCP servers are auto-discovered from the project layout instead of being declared manually.
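To make that concrete, here is a minimal sketch of what deepagents.toml might look like. The docs only say the file carries the agent name, model choice, and optional sandbox config, so the key names and values below are illustrative assumptions, not the documented schema:

```toml
# Hypothetical deepagents.toml sketch; field names and values are
# assumptions, not the documented schema. Skills and MCP servers are
# auto-discovered from the project layout, so nothing is declared here.
name = "support-agent"
model = "anthropic:claude-sonnet-4-5"

[sandbox]
# Optional sandbox config; provider name is an illustrative value.
provider = "daytona"
```

The notable design choice is what the file does not contain: skills/ and mcp.json entries are picked up from the project layout rather than being re-declared in the config.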
The core implementation detail is that deepagents deploy bundles the agent into a LangSmith Deployment server. In LangChain's telling, that server is horizontally scalable and ships with more than 30 endpoints, including MCP, A2A, Agent Protocol, human-in-the-loop controls, and memory APIs.
That makes the release less about another agent framework and more about the production wrapper around one. The blog's checklist is blunt: deploy orchestration and memory, spin up per-session sandboxes, then expose interfaces for tool calling, multi-agent use, UI clients, and memory access. The command is supposed to collapse that stack into one step.
LangChain keeps returning to the same thesis: the important control points are harness, model, memory, and sandbox, not just raw hosting. The blog post explicitly positions Deep Agents Deploy against Claude Managed Agents on openness, self-hosting, and multi-provider support.
That framing shows up in the reaction too. Vtrivedy10's follow-up post reduces the product to four nouns: open harness, model choice, open memory, and open protocols, which is basically the same architecture diagram in sentence form.
One of the more useful details lives in the docs, not the launch thread. deepagents init scaffolds deepagents.toml, AGENTS.md, .env, mcp.json, and a skills/ directory, then deepagents dev runs locally before deepagents deploy pushes the bundle to LangSmith.
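Put together, the local-to-deployed loop described in the docs looks roughly like this; the three commands are the ones the docs name, while the project name and ordering details are assumptions:

```shell
# Scaffold a project: per the docs, this creates deepagents.toml,
# AGENTS.md, .env, mcp.json, and a skills/ directory.
deepagents init my-agent
cd my-agent

# Iterate locally against the same config files.
deepagents dev

# Bundle the agent and push it to a LangSmith Deployment.
deepagents deploy
```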
The docs also spell out two deployment details. First, mcp.json in deployed environments only supports http and sse transports; stdio MCP servers are rejected at bundle time, because there is no local process to spawn. Second, sandbox scope is configurable: thread, the default, gives one sandbox per conversation, while assistant shares one sandbox across all conversations for the same assistant. That last bit matters because it changes whether filesystem state persists across conversations, which is a real behavioral choice, not a packaging detail.
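A deployable mcp.json therefore has to stick to remote transports. A hedged sketch, assuming the common MCP client config shape (server names, URLs, and the exact key spelling are invented for illustration):

```json
{
  "mcpServers": {
    "docs-search": {
      "transport": "http",
      "url": "https://example.com/mcp"
    },
    "events": {
      "transport": "sse",
      "url": "https://example.com/mcp/sse"
    }
  }
}
```

A stdio-style entry, one that names a local command to spawn instead of a URL, is exactly the shape that would be rejected at bundle time, since the deployed server has no local process to launch.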