Agents that click, type, browse, and operate software directly.
Expect wraps browser QA for Claude Code, Codex, or Cursor into a CLI that records bug videos and feeds failures back into a fix loop. It gives coding agents a tighter UI validation cycle without requiring a custom browser harness.
Firecrawl’s new /interact endpoint lets agents click, fill, scroll, and keep live browser sessions right after /scrape. It shortens the path from page extraction to web automation, but Playwright remains the better fit when you need deterministic full-session control.
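A hedged sketch of what an /interact request body could look like, modeled on the action schema Firecrawl already uses for scrape-time actions. The endpoint contract, the `sessionId` field, and the exact action names here are assumptions, not documented behavior:

```python
# Hypothetical /interact payload builder. The session field and action
# schema are assumptions modeled on Firecrawl's existing scrape "actions"
# format; check the official API reference before relying on them.

def build_interact_payload(session_id: str) -> dict:
    """Click a button, fill a field, and scroll in a live session
    presumed to have been opened by a prior /scrape call."""
    return {
        "sessionId": session_id,  # assumed: returned by /scrape
        "actions": [
            {"type": "click", "selector": "#login"},
            {"type": "write", "selector": "#email", "text": "dev@example.com"},
            {"type": "scroll", "direction": "down"},
        ],
    }

payload = build_interact_payload("sess_123")
print(len(payload["actions"]))  # 3
```

The point of the shape: extraction and interaction share one session object, so an agent can scrape, decide, and act without re-opening the page.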
Claude can now drive macOS apps, browser tabs, the keyboard, and the mouse from Claude Cowork and Claude Code, with permission prompts when it needs direct screen access. That makes legacy desktop workflows automatable, and Anthropic is pairing the push with more background-task support for longer agent loops.
Agent Computer launched cloud desktops that boot in under half a second and expose persistent disks, shared credentials, SSH access, and ACP control for agents. It gives coding agents a faster place to run tools and reuse auth, but teams still need to design safe session and credential boundaries.
A solo developer wired Claude into emulators and simulators to inspect 25 Capacitor screens daily and file bugs across web, Android, and iOS. The writeup is a solid template for unattended QA, but it also shows where iOS tooling and agent reliability still break down.

OpenClaw 3.13 now connects to a real Chrome 146 session over MCP so agents can drive your signed-in browser instead of a separate bot context. Update if captchas or auth state were blocking your web automation flows.
Hermes Agent v0.3.0 added a first-class plugin system, live browser attach via CDP, real-time streaming, and VS Code, Zed, and JetBrains integration through ACP. Update if you want shareable skills, browser control, and a more stable long-running agent setup.
OpenAI shipped GPT-5.4 mini to ChatGPT, Codex, and the API, and GPT-5.4 nano to the API, with 400K context, lower prices, and stronger coding and computer-use scores. Route subagents and high-volume tasks to the smaller tiers to cut spend without giving up much capability.
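Routing to the smaller tiers can be as simple as a policy function in front of your client. The tier names come from the announcement, but the thresholds, the role heuristic, and the assumption of a full-size sibling tier are all arbitrary choices you would tune against your own evals:

```python
# Minimal routing sketch. Thresholds and the role heuristic are made-up
# defaults, not recommendations from OpenAI; tune against your own evals.

def pick_model(role: str, est_tokens: int) -> str:
    """Send subagents and small tasks to cheaper tiers."""
    if role == "subagent" or est_tokens < 2_000:
        return "gpt-5.4-nano"   # API-only, cheapest tier
    if est_tokens < 50_000:
        return "gpt-5.4-mini"   # 400K context, lower price
    return "gpt-5.4"            # assumed full-size tier for long, hard tasks

print(pick_model("subagent", 10_000))  # gpt-5.4-nano
```

The win is that subagent fan-out, which dominates token volume in most agent loops, lands on the cheapest tier by default.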
Perplexity shipped an enterprise version of Comet with admin controls, silent deployment via MDM, telemetry, audit logs, and CrowdStrike Falcon integration. Test it if your team wants browser-native agents without giving up endpoint management and security review.
H Company launched Holotron-12B, an open multimodal model for computer-use agents built on a hybrid SSM-attention stack that targets KV-cache bottlenecks. Benchmark it if you need high-concurrency browser agents and want better throughput without giving up web-task accuracy.
Manus moved from a cloud sandbox onto local machines with My Computer, a desktop app that can organize files, run commands, and build apps on macOS and Windows. Use it if you want agent workflows over private local data and hardware instead of a remote browser sandbox.
Perplexity expanded Computer to Android and added control of a local Comet browser session, including logged-in sites, from the agent. Try it if you want one agent workflow across mobile and browser surfaces without per-site connectors or custom MCP glue.
Chrome DevTools MCP now lets agents attach to an existing signed-in browser session, and companion tools added one-command auto-connect flows. Use it to debug and automate in the tabs you already use instead of setting up separate logins or headless sessions.
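The attach flow has two halves: start Chrome with its real `--remote-debugging-port` flag (e.g. `chrome --remote-debugging-port=9222`), then register the MCP server against that endpoint. A config entry might look like the fragment below; the package name and `--browser-url` flag are assumptions from memory, so check the server's README for the exact spelling:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": [
        "chrome-devtools-mcp@latest",
        "--browser-url", "http://127.0.0.1:9222"
      ]
    }
  }
}
```

Pointing at a running browser instead of letting the server launch its own is what preserves your logged-in tabs.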
Markov AI released Computer Use Large on Hugging Face with 48,478 screen recordings spanning about 12,300 hours across six professional apps. Use it to train and evaluate GUI agents on real software workflows with a large CC-BY dataset.
Perplexity brought Computer to iOS with cross-device sync so multi-step cloud tasks can keep running after you leave the screen. Try it if you want to start agent workflows from a phone instead of a desktop-only session.
The OpenClaw-RL paper proposes training agents continuously from normal interactions by turning user corrections, logs, and next-state feedback into rewards and word-level supervision. Watch it if you build persistent agents and want adaptation to come from live deployment traces instead of offline labeling.
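One way to picture the corrections-to-rewards idea is a word-level diff between what the agent produced and what the user corrected it to. This is an illustrative reduction under my own assumptions (+1 for kept words, -1 for replaced or deleted ones), not the paper's actual reward scheme:

```python
import difflib

# Illustrative sketch only: diff agent output against the user's correction
# and emit word-level rewards. The +1/-1 scheme is an assumption, not the
# OpenClaw-RL paper's algorithm.

def word_rewards(agent_out: str, corrected: str) -> list[tuple[str, int]]:
    a_words = agent_out.split()
    sm = difflib.SequenceMatcher(a=a_words, b=corrected.split())
    rewards = []
    for op, a0, a1, _b0, _b1 in sm.get_opcodes():
        for w in a_words[a0:a1]:
            rewards.append((w, 1 if op == "equal" else -1))
    return rewards

r = word_rewards("open the setings tab", "open the settings tab")
# [("open", 1), ("the", 1), ("setings", -1), ("tab", 1)]
```

The appeal is that every ordinary correction a user makes becomes free supervision, no offline labeling pass required.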
OpenAI published runtime details for the Responses API computer environment, including shell loops, capped output, automatic compaction, proxied outbound traffic, and reusable skills folders. Use it as a reference architecture for hosted agents that need state, safety controls, and tool execution patterns.
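The "capped output" pattern generalizes beyond OpenAI's runtime: truncate tool output head-and-tail so long logs never flood the context window. The 10 KB cap and marker text below are my assumptions, not OpenAI's actual values:

```python
# Sketch of capped tool output: keep the head and tail of a long log and
# note how much was dropped. Cap size and marker text are assumptions.

def cap_output(text: str, limit: int = 10_000) -> str:
    if len(text) <= limit:
        return text
    half = limit // 2
    omitted = len(text) - 2 * half
    return text[:half] + f"\n...[{omitted} bytes omitted]...\n" + text[-half:]

log = "x" * 25_000
capped = cap_output(log)
```

Keeping both ends matters: error summaries tend to live at the tail of a log while the command context lives at the head.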