Migrate from Anthropic direct
Same Messages API, two-line change. Add governance and 100+ other models for free.
Synapse Garden speaks the Anthropic Messages API natively. Migrating from a direct Anthropic integration is a two-line client change plus a model-name prefix — and once you're on our base URL you can also reach the same Claude models through the AI SDK's `streamText` without rewriting your prompts.
Diff
```diff
 import Anthropic from "@anthropic-ai/sdk"

 const client = new Anthropic({
-  apiKey: process.env.ANTHROPIC_API_KEY,
+  apiKey: process.env.MG_KEY,
+  baseURL: "https://synapse.garden/api",
 })

 const message = await client.messages.create({
-  model: "claude-opus-4-6",
+  model: "anthropic/claude-opus-4.6",
   max_tokens: 1024,
   messages: [{ role: "user", content: "..." }],
 })
```

Note: the Anthropic base URL has no `/v1` suffix (Anthropic's SDK adds it internally). For OpenAI-compatible calls, use `https://synapse.garden/api/v1`.
What changes
| Aspect | Anthropic direct | Synapse Garden |
|---|---|---|
| Base URL | https://api.anthropic.com | https://synapse.garden/api |
| API key | sk-ant-... | mg_live_... |
| Model name | claude-opus-4-6 | anthropic/claude-opus-4.6 |
| Streaming | Same | Same |
| Tool use | Same | Same |
| Extended thinking | Same | Same |
| Vision | Same | Same |
| Cache control | Manual markers | Same, or `caching: 'auto'` for hands-free |
| Beta headers | Same | Most pass through |
What you gain
- Per-project keys — `mg_live_*` keys scoped per project, audit-logged, revocable in 5 seconds.
- Auto-caching — set `caching: 'auto'` and stop thinking about `cache_control` markers.
- Spend caps — hard ceiling per project. Returns 402 when exceeded.
- Model allowlists — lock production to specific Claude variants.
- Cross-provider routing — same code calls `anthropic/claude-opus-4.6`, `openai/gpt-5.4`, or `google/gemini-3.1-pro-preview`. The Anthropic SDK works for all of them via our compat layer.
- Provider failover — if Anthropic direct is overloaded, requests automatically fail over to Claude on Bedrock or Vertex (configurable per request).
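Provider failover amounts to trying an ordered list of upstreams until one answers. A minimal client-side sketch of the same idea (the provider stubs here are illustrative; the gateway does this server-side):

```typescript
// Try each upstream in order, falling back when one fails
// (e.g. an "overloaded" error from the direct Anthropic API).
type Provider = () => Promise<string>

async function withFailover(providers: Provider[]): Promise<string> {
  let lastError: unknown
  for (const call of providers) {
    try {
      return await call()
    } catch (err) {
      lastError = err // overloaded or unavailable: try the next upstream
    }
  }
  throw lastError
}

// Stubbed upstreams: direct Anthropic is overloaded, Bedrock answers.
const direct = async (): Promise<string> => { throw new Error("overloaded") }
const bedrock = async (): Promise<string> => "claude-via-bedrock"

withFailover([direct, bedrock]).then((r) => console.log(r)) // logs "claude-via-bedrock"
```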
What stays the same
- The Anthropic SDK contract — every method, every option, every error code.
- Your existing tool definitions, prompts, system messages.
- Vision content blocks (URL or base64).
- Streaming events (`message_start`, `content_block_delta`, `message_stop`, etc.).
- Token counting (`input_tokens`, `output_tokens`, `cache_*` counters).
Migration steps
Sign up + create a key
synapse.garden/signup → workspace → Keys → New API key.
Mirror your env var
Add `MG_KEY=mg_live_...` alongside `ANTHROPIC_API_KEY`. Don't remove the old key yet.
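During the mirroring window, a hypothetical helper can prefer the new key and fall back to the old one, so deploys stay safe in either direction:

```typescript
// Hypothetical cut-over helper: prefer MG_KEY, fall back to the
// direct Anthropic key until the migration is complete.
function resolveApiKey(env: Record<string, string | undefined>): string {
  const key = env.MG_KEY ?? env.ANTHROPIC_API_KEY
  if (!key) throw new Error("Set MG_KEY (or ANTHROPIC_API_KEY)")
  return key
}

// Usage: new Anthropic({ apiKey: resolveApiKey(process.env), baseURL: "https://synapse.garden/api" })
```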
Migrate one route, verify usage flows
Pick a low-traffic route, swap the client config and the model prefix. Open Dashboard → Usage → Recent requests and confirm the calls show up.
Add `caching: 'auto'`
For routes with stable system prompts or large corpora, add `providerOptions.gateway.caching: 'auto'` and watch your bill drop.
Roll out the rest, set caps + allowlists
One service at a time. After each cut, configure project-level governance.
Decommission the Anthropic key
After clean operation, revoke your direct Anthropic key.
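One thing worth wiring up before you lean on spend caps: the gateway returns HTTP 402 once a project's cap is exceeded. A sketch of triaging that failure in application code (the error shape and labels are hypothetical):

```typescript
// Hypothetical error triage: a 402 means the project's hard spend cap
// was hit, so retrying is pointless until the cap is raised.
type ApiError = { status: number; message: string }

function triage(err: ApiError): "alert-owner" | "retry-later" {
  return err.status === 402 ? "alert-owner" : "retry-later"
}

triage({ status: 402, message: "spend cap exceeded" }) // -> "alert-owner"
triage({ status: 429, message: "rate limited" })       // -> "retry-later"
```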
Auto-caching
The single biggest improvement over direct Anthropic for most apps:
```typescript
client.messages.create({
  model: "anthropic/claude-sonnet-4.6",
  max_tokens: 1024,
  system: "You are a legal research assistant with access to a 50K-token corpus...",
  messages: [{ role: "user", content: "What does the law say about ...?" }],
  // ▼ this is the new line
  // @ts-expect-error - providerOptions is a gateway extension
  providerOptions: {
    gateway: { caching: "auto" },
  },
})
```

We add the `cache_control` marker on the right boundary so your stable prefix gets cached. Repeat calls with the same system prompt are 90% cheaper.
See Caching for the full breakdown.
Tool use
Anthropic's tool format is supported as-is:
```typescript
client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 4096,
  tools: [
    {
      name: "search_database",
      description: "Search the customer database",
      input_schema: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"],
      },
    },
  ],
  messages: [{ role: "user", content: "Find customer John Smith." }],
})
```

You can also use the AI SDK for a cleaner DX (Zod schemas instead of hand-written JSON Schema):
```typescript
import { generateText, tool } from "ai"
import { z } from "zod"

await generateText({
  model: "anthropic/claude-opus-4.6",
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
  prompt: "Find customer John Smith.",
  tools: {
    searchDatabase: tool({
      description: "Search the customer database",
      parameters: z.object({ query: z.string() }),
      execute: async ({ query }) => searchDb(query),
    }),
  },
})
```

Extended thinking
Pass `thinkingBudget` (USD per request) to control reasoning depth:

```typescript
client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 8192,
  // @ts-expect-error - providerOptions is a gateway extension
  providerOptions: {
    anthropic: { thinkingBudget: 0.05 },
  },
  messages: [{ role: "user", content: "Solve this proof: ..." }],
})
```

See Reasoning for the full guide.
1M context
Synapse Garden auto-enables the 1M context window on Claude Opus 4.6 / 4.7 and Sonnet 4.6 / 4.5 / 4. No flag needed. Pricing for >200K-token prompts uses the long-context tier — visible on the model detail page.
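A back-of-envelope check of which tier a call lands in, using the 200K boundary above (the tier labels are illustrative; see the model detail page for actual rates):

```typescript
// Prompts over 200K tokens are billed at the long-context tier.
function pricingTier(promptTokens: number): "standard" | "long-context" {
  return promptTokens > 200_000 ? "long-context" : "standard"
}

pricingTier(150_000) // -> "standard"
pricingTier(800_000) // -> "long-context"
```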
Common gotchas
- Model names use dots, not dashes: `anthropic/claude-opus-4.6`. Anthropic's direct API accepts both `claude-opus-4-6` and `claude-opus-4.6`; we normalized on dots.
- Base URL has no `/v1`. The Anthropic SDK appends it automatically.
- Cache control markers are still pass-through. If you've already manually placed `cache_control: { type: 'ephemeral' }` markers, they keep working. `caching: 'auto'` is purely additive.
- Stop sequences — supported.
- Tool choice — supported (`auto`, `any`, `tool`).
- Multi-turn caching — `caching: 'auto'` adds markers at the boundary of the system prompt; for explicit per-message caching, use manual markers.
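If many call sites still use Anthropic's dashed IDs, a small hypothetical helper can do the rename in one place (it assumes the ID ends in a dashed major-minor version, which holds for the Claude names above):

```typescript
// Hypothetical migration shim: "claude-opus-4-6" -> "anthropic/claude-opus-4.6".
function toGatewayModel(anthropicId: string): string {
  const dotted = anthropicId.replace(/-(\d+)-(\d+)$/, "-$1.$2")
  return `anthropic/${dotted}`
}

toGatewayModel("claude-opus-4-6")   // -> "anthropic/claude-opus-4.6"
toGatewayModel("claude-sonnet-4-6") // -> "anthropic/claude-sonnet-4.6"
```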
Beyond Anthropic
Once you're on Synapse Garden, you can experiment with non-Anthropic models without changing your SDK — keep using the Anthropic SDK for OpenAI / Gemini / Llama via our compat layer:
```typescript
const client = new Anthropic({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api",
})

// All of these work with the Anthropic SDK shape:
client.messages.create({ model: "anthropic/claude-opus-4.6", ... })
client.messages.create({ model: "openai/gpt-5.4", ... })
client.messages.create({ model: "google/gemini-3.1-pro-preview", ... })
client.messages.create({ model: "meta/llama-4-405b", ... })
```

For more advanced multi-provider DX, the AI SDK gives you a single `generateText` API across all of them — see SDK guide.