LangChain launches SmithDB, LangSmith Engine, and Sandboxes at Interrupt
LangChain unveiled SmithDB, LangSmith Engine, Managed Deep Agents, and GA sandboxes at Interrupt. The stack gives agent teams a purpose-built trace database, autonomous failure triage, and managed execution environments for production workflows.

TL;DR
- LangChain's launch post bundled seven releases at Interrupt, but the practical center of gravity was a tighter loop around agent operations: SmithDB for trace storage, LangSmith Engine for automatic triage, Managed Deep Agents for hosted deployment, and LangSmith Sandboxes GA for code execution.
- According to hwchase17's Engine launch thread, LangSmith Engine sits on top of traces, identifies issues in the background, and suggests concrete follow-up work such as code changes or new evaluators.
- LangChain's SmithDB announcement positioned SmithDB as a purpose-built distributed database for agent observability, while Hacubu's post claimed an order-of-magnitude speedup across the board.
- LangChain's Managed Deep Agents post reduced a production deep agent to three managed pieces (harness, context, and code execution), while LangChain's Sandboxes GA thread added snapshots, cheap forks, service URLs, a CLI, and auth proxy callbacks.
You can jump from the Interrupt launch hub into the SmithDB post, the LangSmith Engine post, the Managed Deep Agents post, and the Sandboxes GA post. One stray detail from andrewlamb1111's repost is that SmithDB is based on Apache DataFusion. Another from the Deep Agents CLI docs link is that LangChain was already pushing model swapping in the CLI before this broader packaging landed.
Release bundle
Interrupt looked less like a single product launch than a packaging pass over LangChain's agent stack. LangChain's launch post named seven items at once: LangSmith Engine, SmithDB, Sandboxes, Managed Deep Agents, LLM Gateway, Context Hub, and Deep Agents 0.6.
Two of those posts got the strongest follow-through in the evidence. hwchase17's recap narrowed the day's favorites to SmithDB and LangSmith Engine, which is probably the right read, because those are the pieces that change how teams store traces and turn them into fixes.
LangSmith Engine
LangSmith Engine is LangChain's attempt to make observability act on its own data. According to hwchase17's launch thread, it runs on top of traces, identifies issues automatically, and suggests action items such as code changes and new evaluators.
A slide in hwchase17's thread breaks the product into three steps:
- Automatic detection and prioritization
- Trace-backed diagnosis
- Concrete resolution actions
That is a more specific claim than the generic "agent for improving your agents" framing in the launch copy. LangChain's product thread says the goal is to spend less time triaging, ship fixes faster, and catch regressions earlier, while a later reaction repost described the value as clustering failures and proposing targeted fixes.
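The "clustering failures and proposing targeted fixes" framing can be made concrete with a toy triage pass. This is a hedged conceptual sketch, not LangSmith Engine's actual implementation: the `signature` normalization, the trace dict shape, and the `triage` function are all hypothetical, chosen only to show what grouping failures by a stable error signature looks like.

```python
import re
from collections import defaultdict

def signature(error_message: str) -> str:
    """Normalize an error message into a cluster key by stripping
    volatile details like numbers and hex addresses (hypothetical heuristic)."""
    sig = re.sub(r"0x[0-9a-f]+", "<addr>", error_message.lower())
    sig = re.sub(r"\d+", "<n>", sig)
    return sig

def triage(traces: list[dict]) -> list[tuple[str, int, list[str]]]:
    """Cluster failed traces by error signature and rank clusters
    by frequency, largest first."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for t in traces:
        if t.get("status") == "error":
            clusters[signature(t["error"])].append(t["trace_id"])
    ranked = sorted(clusters.items(), key=lambda kv: -len(kv[1]))
    return [(sig, len(ids), ids) for sig, ids in ranked]

# Two timeouts collapse into one cluster despite differing details.
traces = [
    {"trace_id": "t1", "status": "error", "error": "Timeout after 30s calling tool 42"},
    {"trace_id": "t2", "status": "error", "error": "Timeout after 45s calling tool 7"},
    {"trace_id": "t3", "status": "ok"},
    {"trace_id": "t4", "status": "error", "error": "KeyError: 'user_id'"},
]
for sig, count, ids in triage(traces):
    print(count, sig, ids)
```

The point of the sketch is the shape of the output: a ranked list of failure clusters is what an engine would hand to a diagnosis step, rather than a flat stream of individual errors.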
SmithDB
SmithDB is the storage layer underneath the rest of this push. LangChain's announcement says agent traces have outgrown general-purpose databases and describes SmithDB as a purpose-built distributed database for agent observability.
The strongest concrete claim in the evidence came from Hacubu's post, which called SmithDB an order of magnitude faster across the board. andrewlamb1111's repost adds one architectural clue: SmithDB is based on Apache DataFusion, which suggests LangChain is leaning on a query engine already familiar in analytical data systems.
That product framing matches the workload LangChain keeps describing in tweets about traces: nested, long-running, and growing fast. A repost quoting Jake Broekhuizen called out "industry-leading performance across every key observability workload," but the public evidence here is still short on raw benchmark tables.
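To make "nested and long-running" concrete, here is a minimal sketch of the trace shape in question. The `Span` type is hypothetical, not SmithDB's schema; it just shows why a single agent run is a tree of steps rather than a flat row, which is the shape general-purpose row stores handle awkwardly.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """One step in an agent trace; children make the trace nested
    (hypothetical model, not SmithDB's actual schema)."""
    name: str
    duration_ms: float
    children: list["Span"] = field(default_factory=list)

    def total_spans(self) -> int:
        """Count this span plus everything nested under it."""
        return 1 + sum(c.total_spans() for c in self.children)

# One agent run: a root span with nested LLM and tool calls.
run = Span("agent_run", 5200.0, [
    Span("llm_call", 1800.0),
    Span("tool_call", 2900.0, [
        Span("sandbox_exec", 2500.0),
    ]),
])
print(run.total_spans())  # toy run has 4 spans; production runs hold far more
```

A long-running agent multiplies this tree by every loop iteration, which is the growth curve the announcement says outpaced general-purpose databases.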
Managed Deep Agents
Managed Deep Agents packages the deployment side. LangChain's launch post says the managed offering covers three things: harness, context, and code execution, and can be deployed with one line of code.
That matters because LangChain is selling less of a model wrapper and more of an agent runtime bundle. The official post is linked at Introducing Managed Deep Agents, and the product language in LangChain's tweet makes clear what is being abstracted away:
- Harness
- Context
- Code execution
The missing piece in that short list is execution infrastructure, which is where sandboxes show up.
Sandboxes GA
LangSmith Sandboxes moved to general availability as the execution substrate for the rest of the stack. LangChain's GA post describes them as secure, scalable environments for agent code execution, integrated with Deep Agents SDK and the LangSmith platform.
The GA thread is more useful than the headline announcement because it actually enumerates the release:
- Snapshots and cheap forks
- Blueprints
- Pause when inactive
- Service URLs
- Sandbox CLI
- Creator-private by default
- Auth proxy with custom callbacks
Those are the mechanics an engineer would want to know first, especially snapshots, cheap forks, and pause-when-inactive, because they point to iterative debugging and cost control rather than just generic hosted execution.
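Snapshots and cheap forks are worth a sketch because the pattern behind them, copy-on-write, explains why forking can be cheap at all. This is a hedged toy model, not the Sandboxes API: `ToySandbox`, `snapshot`, and `fork` are invented names, and the real system operates on filesystems and processes rather than a dict.

```python
class ToySandbox:
    """Copy-on-write sketch: forks share a frozen base snapshot and
    keep their own writes in a private overlay, so forking copies
    nothing up front. Hypothetical model, not the Sandboxes API."""

    def __init__(self, base=None):
        self._base = base or {}   # shared snapshot, treated as immutable
        self._overlay = {}        # this sandbox's private writes

    def write(self, key, value):
        self._overlay[key] = value

    def read(self, key):
        # Private writes shadow the shared snapshot.
        if key in self._overlay:
            return self._overlay[key]
        return self._base[key]

    def snapshot(self):
        """Freeze the current state into a new shared base."""
        return {**self._base, **self._overlay}

    def fork(self):
        """Cheap fork: a new sandbox layered over the same snapshot."""
        return ToySandbox(base=self.snapshot())

sb = ToySandbox()
sb.write("config.json", '{"model": "a"}')
fork = sb.fork()
fork.write("config.json", '{"model": "b"}')   # fork diverges
print(sb.read("config.json"), fork.read("config.json"))
```

The design choice this illustrates is why snapshots plus forks suit iterative debugging: an agent can branch a failing environment, try a fix in the fork, and throw it away, without paying for a full copy of the original.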
Deep Agents context
The day before Interrupt, LangChain's DeltaChannel thread filled in why all this packaging showed up now. Deep Agents already had durable execution, with every step checkpointed for observability, fault tolerance, and human-in-the-loop, but longer-running agents were making full-state checkpointing harder to scale.
LangChain's answer was DeltaChannel, described there as the mechanism for scaling storage as runs get longer and context grows. Paired with Caspar's repost about offering both CLI and MCP and the Deep Agents CLI docs link showing model swapping in the CLI, the Interrupt launches read like consolidation: better trace storage, an automated triage agent on top, hosted execution underneath, and a developer surface that was already drifting toward production use before the conference announcements hit.
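The scaling problem DeltaChannel addresses, full-state checkpoints growing with every step, can be sketched with a toy delta scheme. This is an assumption-laden illustration of the general technique, not DeltaChannel's actual mechanism: the `diff` and `replay` helpers are invented, and the sketch ignores key deletion entirely.

```python
import json

def diff(old: dict, new: dict) -> dict:
    """Keys whose values changed or were added since the last
    checkpoint (toy delta; ignores deletions)."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def replay(deltas: list[dict]) -> dict:
    """Reconstruct the latest state by applying deltas in order."""
    state: dict = {}
    for d in deltas:
        state.update(d)
    return state

# Full-state checkpointing stores the whole state at every step;
# delta checkpointing stores only what changed since the last one.
states = [
    {"messages": 1, "scratchpad": "a"},
    {"messages": 2, "scratchpad": "a"},
    {"messages": 3, "scratchpad": "ab"},
]
deltas, prev = [], {}
for s in states:
    deltas.append(diff(prev, s))
    prev = s

full_bytes = sum(len(json.dumps(s)) for s in states)
delta_bytes = sum(len(json.dumps(d)) for d in deltas)
print(replay(deltas) == states[-1], delta_bytes < full_bytes)
```

Even in this three-step toy, the deltas are smaller than the full checkpoints, and the gap widens as runs get longer and most of the state stops changing between steps, which matches the scaling story in the DeltaChannel thread.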