Cognition launches Devin for Terminal with `/handoff` cloud sessions and frontier-model switching
Cognition launched Devin for Terminal, a local CLI agent that can hand active sessions to the cloud with `/handoff` and switch across frontier models. It gives teams a hybrid local/remote workflow without forcing them into a separate cloud IDE from the start.

TL;DR
- Cognition shipped Devin for Terminal as a local coding agent that runs in your shell, and Cognition's launch thread says it keeps full access to your codebase, tools, and environment.
- The sharpest feature is /handoff, which dabit3's demo and Cognition's demo post show moving an active terminal session into the cloud so work continues after the laptop closes.
- Model choice is part of the pitch: according to dabit3's overview, users can switch across Claude, GPT, SWE, GLM, Kimi, and more, while Cognition's thread specifically names Opus 4.7, GPT-5.5, and SWE-1.6.
- Cognition also says it built a custom terminal rendering library in Rust, and DevinAI's VT100 photo plus Cognition's thread turn that into a very literal demo on a real VT-100 terminal.
You can jump straight to the Devin for Terminal page, watch dabit3's handoff demo bounce a session into the cloud, and see DevinAI's VT100 post turning a launch into retro terminal fan service.
Devin for Terminal
The launch frames Devin for Terminal as a shell-native version of Devin, not a separate browser workspace. In Cognition's thread, the company describes it as "everything we learned building Devin, now as a local agent," which keeps the starting point inside the developer's existing terminal.
That local-first angle is the whole shape of the product. Cognition's thread says the agent gets full access to the repo, tools, and environment already on the machine.
/handoff
The cloud story is built around a single command. dabit3's demo and Cognition's handoff post both show /handoff as the mechanism for sending an in-progress session from the local CLI to Devin's cloud runtime.
Cognition's wording is blunt: start locally, then "when your work outgrows your laptop, hand it off to the cloud," per the handoff post. The pitch is a hybrid workflow: local when you want control, cloud when you want your machine back.
Model switching
Model selection is exposed at the terminal layer instead of being fixed to Devin's own model stack. dabit3's launch post lists Claude, GPT, SWE, GLM, and Kimi, while Cognition's thread highlights Opus 4.7, GPT-5.5, and SWE-1.6.
That makes the product read less like one hosted agent and more like a shell UI that can sit on top of several frontier models. The evidence here is still launch-level, but the named model list is already broader than the usual single-model copilot framing.
Rust renderer and VT-100
One implementation detail made the launch more interesting than a standard CLI ship. Cognition's thread says the team wrote a custom terminal rendering library in Rust to make the interface "fast and snappy."
The same thread says they got it running on a real VT-100, and DevinAI's photo shows the payoff. For one day at least, the cleanest proof that the terminal still matters was a coding agent booting on 1970s hardware.
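Cognition hasn't published its renderer, but running on a real VT-100 implies sticking to the escape sequences that hardware understands. As a purely illustrative sketch (not Cognition's code), a renderer targeting a VT-100 ultimately boils down to emitting byte sequences like these:

```rust
// Illustrative only, not Cognition's library: a VT-100 is driven by ANSI
// escape sequences, so any renderer that targets it emits bytes like these.

/// CUP (cursor position): ESC [ row ; col H, with 1-based coordinates.
fn move_cursor(row: u16, col: u16) -> String {
    format!("\x1b[{};{}H", row, col)
}

/// ED (erase in display) with parameter 2: clear the whole screen.
fn clear_screen() -> String {
    "\x1b[2J".to_string()
}

fn main() {
    // Clear the screen, then draw a prompt at the top-left corner.
    print!("{}{}devin> ", clear_screen(), move_cursor(1, 1));
}
```

The point of a custom Rust layer on top of sequences like this is batching and speed: composing a frame as one string and writing it in a single syscall is what makes an interface feel "fast and snappy" even over a slow serial link.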