Using strong specs to make coding agents more reliable.
Anthropic is testing a new /init flow that interviews users and configures CLAUDE.md, hooks, and skills in new or existing repos. Try it in a sandbox repo, then watch for differences in skills behavior between the chat and web surfaces.
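Anthropic hasn't published the exact output of the interview flow; as a hedged sketch, a CLAUDE.md produced by this kind of setup might record build commands, conventions, and hook expectations (all section names and commands below are illustrative assumptions, not the actual generated file):

```markdown
# CLAUDE.md — illustrative sketch of an /init-generated config

## Build and test
- Install dependencies: `npm install`
- Run the test suite before proposing any change: `npm test`

## Conventions
- TypeScript strict mode; avoid `any` in new code.
- Keep each change small enough to review in one sitting.

## Hooks
- A pre-commit hook runs `npm run lint`; fix lint errors rather than bypassing the hook.
```

The value of the interview step is that these facts come from the maintainer rather than being inferred from the repo, so the agent starts with authoritative project context.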
ACE open-sources a platform that turns AGENTS.md instructions into evolving playbooks backed by execution history, with hosted and self-hosted options. It is a notable response to prompt drift and prompt extraction, because procedures become revisable operating docs instead of static prompts.
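ACE's concrete playbook format isn't reproduced here; as a hedged sketch, an AGENTS.md procedure that evolves with execution history might accumulate annotated revisions like the following (the procedure, step wording, and revision note are all hypothetical):

```markdown
## Playbook: run release checks (illustrative)

1. Run `make test` and record the exit code.
2. If a test fails, retry it once before flagging a regression.
   <!-- revised after run history showed intermittent network flakiness -->
3. Block the release if coverage falls below the previous tagged release.
```

The point of the pattern is that step 2's caveat lives in the operating doc itself, versioned and revisable, instead of drifting inside an ever-growing static prompt.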
Google rolled out a redesigned Stitch workspace that accepts text, code, PRDs, and images on a spatial canvas, then generates prototypes and portable DESIGN.md files. Teams testing AI-native UI workflows can use it to try a tighter design-to-code loop in the live product.
Geoffrey Huntley published a four-loop Ralph workflow for porting codebases by turning tests and source into cited specs before implementation. Try it when you need AI help translating a mature codebase across languages without losing behavioral coverage.
OpenAI detailed how repo-local skills, AGENTS.md, and GitHub Actions now drive repeatable verification, release, and pull request workflows across its Agents SDK repositories. Maintainers can copy the pattern to reduce prompt sprawl and keep agent behavior closer to the codebase.
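OpenAI's actual workflow files aren't shown in the post summarized here; a minimal sketch of the Actions side of the pattern, assuming a Python repo and standard marketplace actions (workflow name, install command, and test runner are assumptions):

```yaml
# .github/workflows/verify.yml — hypothetical verification workflow
# in the spirit of the pattern described; commands are assumptions.
name: verify
on:
  pull_request:
  push:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -e ".[dev]"
      - run: pytest
```

Keeping the same checks in CI and in repo-local skills means the agent and the pipeline verify changes the same way, which is what keeps agent behavior anchored to the codebase rather than to prompt text.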