Developers launch Agent FM, Mate, and ntm for multi-session Claude Code and Codex control
Independent developers shipped new control-plane tools for long-running coding agents, including Agent FM audio monitoring, Mate phone-first remote control, and ntm for provider-agnostic multi-agent workflows. It matters because teams running many Claude Code and Codex sessions still need better visibility, handoff, and checkpointing than a single built-in session list provides.

TL;DR
- Anthropic shipped claudeai's agent view announcement as a research preview in Claude Code, with claude agents giving one terminal list for running, blocked, and finished sessions; claudeai's availability post says it is on all paid plans.
- Independent builders moved faster around the edges: Gold-Juice-6798's Agent FM post turns Claude Code and Codex sessions into live audio narration, while matiizen's Mate post pitches a phone-first control layer with approvals, IDE, terminal, previews, and file access.
- Provider-agnostic orchestration is part of the same pattern: doodlestein's ntm post describes an open source tool used for larger mixed Claude Code and Codex setups, and doodlestein's follow-up frames it as a building block for skills-driven multi-agent workflows.
- OpenAI appears to be heading the same direction: WesRoth's screenshot and btibor91's Android beta find both point to Codex mobile and remote monitoring features inside ChatGPT, while mattlam_'s prototype showed how quickly /goal turns into ad hoc remote control.
- The missing piece is still session memory and review, not just a prettier process list: a Reddit discussion in r/AI_Agents surfaced context-window blindness and cross-session knowledge loss, while EntireHQ's transcript export post and MelkeyDev's agent-tail.nvim launch both treat agent work as something to archive, diff, and hand off.
You can read Anthropic's blog post and the agent view docs, browse the Agent FM repo, check out Mate, and inspect ntm. The weirdly consistent reveal is that everybody is reinventing the same layer: Entire's Dispatch 0x000D adds public session sharing and transcript export, and agent-tail.nvim turns Claude Code edits into a Neovim review queue.
Agent view
Anthropic's native answer is narrow and useful. The launch post calls agent view a research preview, and a follow-up from claudeai says claude agents can dispatch multiple sessions without keeping a terminal tab open for each one.
The 2.1.139 changelog that ClaudeCodeLog summarized pairs agent view with /goal, a plugin cost view, and subagent request headers. That makes this less a pure UI ship and more Claude Code admitting that multi-session work is now normal.
What the built-in control plane covers:
- one list of sessions, running, blocked, or done, per the changelog summary
- inline replies to unblock work, per claudeai's thread
- resume across repos from a higher-level directory, per trq212's usage note
- all paid plans, per claudeai's availability post
What it does not solve is the bigger operator problem. The Reddit thread in r/AI_Agents reads like an incident log for power users: ps aux, tmux grids, webhook hacks, forgotten sessions, and no clean view of when context compaction is about to eat a run.
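That compaction complaint can be made concrete. Here is a minimal sketch of the watchdog those users are describing, under stated assumptions: a 200k-token window and a crude characters-per-token heuristic, neither of which comes from Claude Code itself.

```python
# Hypothetical context-budget watchdog. The 200k window and the
# 4-chars-per-token heuristic are rough assumptions for illustration,
# not real Claude Code internals.
def context_usage(transcript_chars: int, window_tokens: int = 200_000) -> float:
    """Estimate the fraction of the context window consumed."""
    est_tokens = transcript_chars / 4  # crude chars-per-token heuristic
    return est_tokens / window_tokens

def should_warn(transcript_chars: int, threshold: float = 0.8) -> bool:
    """Flag a session before compaction is likely to kick in."""
    return context_usage(transcript_chars) >= threshold

print(should_warn(200_000))  # 50k est. tokens of a 200k window → False
print(should_warn(700_000))  # 175k est. tokens, 0.875 of the window → True
```

Even something this crude would answer the thread's core complaint: a session list that shows how close each run is to losing its context.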
Agent FM
Agent FM takes the most literal swing at the visibility problem. Instead of another dashboard, its launch post gives each agent a live radio station, plus a Global Mix across all active agents.
The repo pitch and the Reddit write-up line up on the mechanics: it runs locally on Mac, reads Claude Code and Codex session activity, filters noisy events into higher-signal updates, then streams narration through a bring-your-own-key voice stack. The project is open source at GitHub.
The interesting part is the event model. The parallel Reddit post in r/ClaudeCode lists the updates Agent FM tries to surface as a scan-friendly operator layer:
- what the agent is doing
- what changed
- when it hits an error
- when tests fail
- when it needs attention
- when it makes a visible assumption or decision
- when it seems blocked
That sounds gimmicky until you line it up with the multi-agent Reddit thread. People are already treating agent sessions like background services. Christmas came early for coding-agent nerds, apparently in AM radio format.
Mate and Codex mobile
Mate is a broader bet. matiizen's post describes it as a native, local-first control layer that exposes agents, IDE, terminal, previews, approvals, and files across desktop and phone, with support for Claude Code, Codex, and Copilot adapters.
The feature list is much closer to remote workbench software than a session list:
- phone-first approvals, replies, and follow-ups
- spawning new tasks from the phone, with attachments and voice input
- push notifications when agents finish or need input
- full IDE and file tree access
- terminals and app previews
- multi-device control, including Android, iOS TestFlight, and even Meta Quest, per the launch post
That is also where the OpenAI leak points. WesRoth's screenshot, koltregaskes' similar screenshot, and btibor91's Android beta report all describe Codex mobile as a ChatGPT-based remote surface for threads, projects, notifications, and live desktop-backed work.
There is already a hacked-together preview of the pattern. mattlam_'s demo built a /remote-control flow in Codex with /goal, a tiny local server, and a phone web app, and a Codex release screenshot referenced codex remote-control as a headless app-server entrypoint.
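That pattern is small enough to sketch. Here is a hypothetical version of the tiny-local-server half, with invented endpoints and payloads rather than anything from mattlam_'s actual demo: a phone browser on the same network polls status with GET and posts new goals with POST.

```python
# Hypothetical "tiny local server" for phone-based agent control.
# Endpoints and payload fields are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"status": "idle", "goal": None}

class RemoteControl(BaseHTTPRequestHandler):
    def do_GET(self):
        # Phone polls this for current agent status.
        body = json.dumps(STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Phone posts {"goal": "..."} to kick off new work.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        STATE["goal"] = payload.get("goal")
        STATE["status"] = "running"
        self.send_response(202)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), RemoteControl)  # port 0: pick a free port
# server.serve_forever()  # uncomment to serve; left off so the sketch is importable
```

The fragility is obvious, which is the point: everything Mate and Codex mobile add (auth, push notifications, approvals) is what separates this weekend hack from a product.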
Provider-agnostic control planes
Anthropic's built-in view is Claude-only. The independent control-plane projects are converging on a different claim, that the orchestration layer should outlive any one vendor's CLI.
That is explicit in doodlestein's ntm post, which pitches ntm as provider-agnostic, battle-tested, and suitable for larger mixed fleets. The follow-up gets more concrete: ntm is modular, composable, and can be embedded into skills to run setups like five Claude Codes and five Codexes at once. The project lives at GitHub.
The same thread shows why the category keeps splintering into adjacent tools instead of one winner-take-all app:
- native session list inside Claude Code, per Anthropic
- mobile-first remote workspace, per Mate
- ambient audio monitoring, per Agent FM
- orchestration substrate for mixed providers, per ntm
That split maps to the real complaints in the Reddit discussion. Visibility, approvals, context retention, and cross-agent coordination are related problems, but not the same one.
Transcripts and review queues
The sharpest new signal is that session artifacts are becoming first-class outputs. EntireHQ's release thread added public sharing for private-repo sessions, deep links into specific transcript messages, Markdown and JSONL transcript downloads, and an interactive recap TUI for browsing recent agent work. The company collected the release in Dispatch 0x000D.
Those features are less about running agents than about making their work legible after the fact. EntireHQ's sharing post makes individual sessions publicly inspectable with badges, its transcript export post turns sessions into portable files, and its recap TUI post adds a cross-session summary view.
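JSONL export is what makes that kind of after-the-fact tooling cheap to build. A minimal sketch of a recap over an exported transcript, assuming an invented one-event-per-line schema rather than Entire's real one:

```python
# Hypothetical recap over a JSONL transcript: one JSON event per line,
# folded into per-role counts. The "role" field is an assumption, not
# Entire's actual export schema.
import json
from collections import Counter

def recap(jsonl_text: str) -> dict:
    """Summarize a JSONL transcript into counts per event role."""
    counts = Counter()
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        event = json.loads(line)
        counts[event.get("role", "unknown")] += 1
    return dict(counts)

sample = "\n".join([
    json.dumps({"role": "user", "text": "fix the failing test"}),
    json.dumps({"role": "assistant", "text": "patching src/app.py"}),
    json.dumps({"role": "tool", "text": "pytest: 1 passed"}),
])
print(recap(sample))  # → {'user': 1, 'assistant': 1, 'tool': 1}
```

Once transcripts are plain files, diffing, grepping, and handing off agent work stops requiring the vendor's UI at all.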
The Neovim crowd is pulling the same direction. agent-tail.nvim auto-captures Claude Code edits, opens a dedicated review tab with per-event diffs, and can push changed files straight into quickfix. No daemon, no watcher, no git required, per the GitHub repo.
That matters because the hardest quote in the evidence pool is still the r/AI_Agents thread: when session seven compacts, whatever architectural insight it found can just disappear. Transcript export, review queues, and recap TUIs are all attempts to make agent work survive long enough to be reused.