Merge introduced Gateway, a routing layer that sends requests across providers through one API while tracking spend and project-level usage. The product centralizes model routing and budget controls that many teams currently hand-roll as provider-specific glue code.

Gateway is positioned as an abstraction layer in front of existing model vendors rather than a new model API. In the launch post, TestingCatalog says Merge routes traffic across OpenAI, Anthropic, Google, and more through one endpoint, while the follow-up frames the target user as teams already "burning thousands a month" and maintaining their own provider-specific orchestration.
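Merge has not published its endpoint or model-naming scheme, so the snippet below is only a sketch of how an OpenAI-compatible gateway of this kind is typically wired up: the base URL `https://gateway.merge.dev/v1`, the key placeholder, and the provider-prefixed model names are all assumptions, not documented values.

```python
# Hypothetical sketch: pointing the standard OpenAI client at a
# single gateway endpoint. The base URL and model identifiers are
# assumptions; Merge has not published its actual scheme.
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.merge.dev/v1",  # assumed gateway endpoint
    api_key="MERGE_GATEWAY_KEY",              # one key instead of one per provider
)

# The same call shape reaches different vendors; only the model
# string changes, not the client integration.
for model in ["openai/gpt-4o", "anthropic/claude-sonnet-4", "google/gemini-2.0-flash"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize our Q3 error budget."}],
    )
    print(model, "->", resp.choices[0].message.content[:80])
```

The point of the pattern is that swapping vendors changes a string, not the application code, which is the pitch the "one endpoint" framing makes.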
The more detailed launch writeup says provider switching happens through configuration instead of application code changes. It also describes budget controls at the account, team, or customer level; alerts intended to fire before invoices arrive; and a unified dashboard for latency, error rates, cost per call, and model performance. The same writeup says Gateway supports Python and TypeScript SDKs and is compatible with existing OpenAI, Anthropic, and LangChain clients, which suggests Merge is trying to slot into current inference stacks with minimal code churn.
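The configuration schema is likewise unpublished, so the following is a hypothetical sketch of what "switching providers through configuration" could mean in practice: a routing policy with a fallback chain and per-team budget caps lives in data, and the code that picks a model never names a vendor. Every key and helper here (ROUTING_POLICY, pick_model) is illustrative, not part of any Merge SDK.

```python
# Hypothetical routing policy: none of these keys come from Merge's
# documentation; they illustrate the configuration-over-code idea.
ROUTING_POLICY = {
    "default_model": "openai/gpt-4o",
    "fallbacks": [                    # tried in order on errors or timeouts
        "anthropic/claude-sonnet-4",
        "google/gemini-2.0-flash",
    ],
    "budgets": {
        "team:search":   {"monthly_usd": 2_000, "alert_at": 0.8},  # alert at 80% of cap
        "customer:acme": {"monthly_usd": 500,   "alert_at": 0.9},
    },
}

def pick_model(policy: dict, failed: set[str]) -> str:
    """Return the first configured model that has not already failed.

    Swapping or reordering providers means editing ROUTING_POLICY,
    not this function, which is the property the launch writeup
    attributes to Gateway.
    """
    for model in [policy["default_model"], *policy["fallbacks"]]:
        if model not in failed:
            return model
    raise RuntimeError("all configured providers exhausted")

print(pick_model(ROUTING_POLICY, failed={"openai/gpt-4o"}))
# -> anthropic/claude-sonnet-4
```

Under a shape like this, the pre-invoice alerts the writeup describes would be thresholds on the same policy object rather than logic scattered through application code.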