Migration

Migrate from Anthropic direct

Same Messages API, two-line change. Add governance and 100+ other models for free.

FIG. 00 · MIGRATE FROM ANTHROPIC DIRECT · BASE_URL SWAP

Synapse Garden speaks the Anthropic Messages API natively. Migrating from a direct Anthropic integration is two lines — and once you're on our base URL you can also reach the same Claude models through the AI SDK's streamText without rewriting your prompts.

FIG. 01 · TWO-LINE CUTOVER · SCHEMATIC
Swap `apiKey` to `mg_live_*` and `baseURL` to `https://synapse.garden/api` (no `/v1` for the Anthropic SDK). Prefix model names with `anthropic/`. Tool definitions, vision blocks, extended thinking, and streaming events all pass through unchanged.

Diff

  import Anthropic from "@anthropic-ai/sdk"

  const client = new Anthropic({
-   apiKey: process.env.ANTHROPIC_API_KEY,
+   apiKey: process.env.MG_KEY,
+   baseURL: "https://synapse.garden/api",
  })

  const message = await client.messages.create({
-   model: "claude-opus-4-6",
+   model: "anthropic/claude-opus-4.6",
    max_tokens: 1024,
    messages: [{ role: "user", content: "..." }],
  })

Note: the Anthropic base URL has no /v1 suffix (Anthropic's SDK adds it internally). For OpenAI-compatible calls, use https://synapse.garden/api/v1.

What changes

| Aspect | Anthropic direct | Synapse Garden |
| --- | --- | --- |
| Base URL | `https://api.anthropic.com` | `https://synapse.garden/api` |
| API key | `sk-ant-...` | `mg_live_...` |
| Model name | `claude-opus-4-6` | `anthropic/claude-opus-4.6` |
| Streaming | Same | Same |
| Tool use | Same | Same |
| Extended thinking | Same | Same |
| Vision | Same | Same |
| Cache control | Manual markers | Same (manual markers pass through), or `caching: 'auto'` for hands-free |
| Beta headers | Supported | Most pass through |

What you gain

  • Per-project keys — mg_live_* keys scoped per project, audit-logged, revocable in 5 seconds.
  • Auto-caching — set caching: 'auto' and stop thinking about cache_control markers.
  • Spend caps — hard ceiling per project. Returns 402 when exceeded.
  • Model allowlists — lock production to specific Claude variants.
  • Cross-provider routing — same code calls anthropic/claude-opus-4.6 or openai/gpt-5.4 or google/gemini-3.1-pro-preview. The Anthropic SDK works for all of them via our compat layer.
  • Provider failover — if Anthropic direct is overloaded, requests automatically fail over to Claude on Bedrock or Vertex (configurable per request).
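
When a spend cap trips, the 402 surfaces through the SDK like any other HTTP error, so you can branch on the status code. A minimal sketch (the helper name is ours, and the error-shape check is deliberately loose so it works with any SDK error object that carries a numeric `status`):

```typescript
// True when an error carries the 402 status this guide uses for spend caps.
function isSpendCapExceeded(err: unknown): boolean {
  return (
    typeof err === "object" &&
    err !== null &&
    (err as { status?: unknown }).status === 402
  )
}
```

In practice you would wrap `client.messages.create(...)` in a try/catch and alert, queue, or degrade gracefully when this returns true.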

What stays the same

  • The Anthropic SDK contract — every method, every option, every error code.
  • Your existing tool definitions, prompts, system messages.
  • Vision content blocks (URL or base64).
  • Streaming events (message_start, content_block_delta, message_stop, etc.).
  • Token counting (input_tokens, output_tokens, cache_* counters).
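
Because the streaming contract is unchanged, any event-handling code you already have keeps working. As a sketch, a reducer that folds `content_block_delta` events into the final text (event shapes abbreviated to just the fields used here; real events carry more, such as block indexes and usage):

```typescript
// Minimal shape of the Anthropic-style streaming events this helper reads.
type StreamEvent =
  | { type: "content_block_delta"; delta: { type: "text_delta"; text: string } }
  | { type: "message_start" }
  | { type: "message_stop" }

// Fold a sequence of streaming events into the assistant's text output.
function accumulateText(events: StreamEvent[]): string {
  let out = ""
  for (const ev of events) {
    if (ev.type === "content_block_delta" && ev.delta.type === "text_delta") {
      out += ev.delta.text
    }
  }
  return out
}
```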

Migration steps

01

Sign up + create a key

synapse.garden/signup → workspace → Keys → New API key.

02

Mirror your env var

Add MG_KEY=mg_live_... alongside ANTHROPIC_API_KEY. Don't remove the old key yet.
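
During the overlap window you can branch on whichever key is present, so a rollback is just unsetting MG_KEY. A sketch (the helper name is ours, not part of any SDK):

```typescript
// Prefer the gateway when MG_KEY is set; otherwise fall back to direct Anthropic.
function clientConfig(
  env: Record<string, string | undefined>,
): { apiKey?: string; baseURL?: string } {
  if (env.MG_KEY) {
    return { apiKey: env.MG_KEY, baseURL: "https://synapse.garden/api" }
  }
  // baseURL omitted: the Anthropic SDK defaults to api.anthropic.com.
  return { apiKey: env.ANTHROPIC_API_KEY }
}
```

Then construct the client as `new Anthropic(clientConfig(process.env))`.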

03

Migrate one route, verify usage flows

Pick a low-traffic route, swap the client config and the model prefix. Open Dashboard → Usage → Recent requests and confirm the route's traffic is flowing through the gateway.
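
If your model IDs live in config rather than code, a one-line rename helper keeps the cutover mechanical. A sketch (the helper is ours; it only rewrites a trailing `-major-minor` version into the dotted gateway form described in the gotchas below):

```typescript
// "claude-opus-4-6" -> "anthropic/claude-opus-4.6"
function toGatewayModel(name: string): string {
  return "anthropic/" + name.replace(/-(\d+)-(\d+)$/, "-$1.$2")
}
```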

04

Add `caching: 'auto'`

For routes with stable system prompts or large corpora, add providerOptions.gateway.caching: 'auto' and watch your bill drop.

05

Roll out the rest, set caps + allowlists

One service at a time. After each cut, configure project-level governance.

06

Decommission the Anthropic key

Once everything has been running cleanly through the gateway, revoke your direct Anthropic key.

Auto-caching

The single biggest improvement over direct Anthropic for most apps:

client.messages.create({
  model: "anthropic/claude-sonnet-4.6",
  max_tokens: 1024,
  system: "You are a legal research assistant with access to a 50K-token corpus...",
  messages: [{ role: "user", content: "What does the law say about ...?" }],
  // ▼ this is the new line
  // @ts-expect-error - providerOptions is a gateway extension
  providerOptions: {
    gateway: { caching: "auto" },
  },
})

We add the cache_control marker on the right boundary so your stable prefix gets cached. Repeat calls with the same system prompt are 90% cheaper.
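
To confirm the cache is actually being hit, inspect the usage counters on each response. A sketch of a hit-rate summary (field names follow Anthropic's Messages API usage object; the helper and the denominator convention are ours):

```typescript
// Anthropic-style usage counters: input_tokens excludes cached tokens,
// which are reported separately as cache_read / cache_creation.
interface Usage {
  input_tokens: number
  cache_creation_input_tokens?: number
  cache_read_input_tokens?: number
}

// Fraction of the prompt that was served from cache on this request.
function cacheHitRatio(u: Usage): number {
  const read = u.cache_read_input_tokens ?? 0
  const total = u.input_tokens + read + (u.cache_creation_input_tokens ?? 0)
  return total === 0 ? 0 : read / total
}
```

On the first call after a prompt change you should see creation tokens and a ratio near zero; on repeat calls the ratio should climb toward one.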

See Caching for the full breakdown.

Tool use

Anthropic's tool format is supported as-is:

client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 4096,
  tools: [
    {
      name: "search_database",
      description: "Search the customer database",
      input_schema: {
        type: "object",
        properties: { query: { type: "string" } },
        required: ["query"],
      },
    },
  ],
  messages: [{ role: "user", content: "Find customer John Smith." }],
})
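
The tool-call loop is likewise unchanged: when the response contains `tool_use` blocks, you run the tools and send back a `user` turn of `tool_result` blocks. A sketch of that plumbing (block shapes follow Anthropic's Messages API; `runTool` is a stand-in for your own dispatcher):

```typescript
interface ToolUseBlock {
  type: "tool_use"
  id: string
  name: string
  input: unknown
}

type ContentBlock = ToolUseBlock | { type: "text"; text: string }

// Build the user turn that answers every tool_use block in an assistant message.
function toolResultTurn(
  content: ContentBlock[],
  runTool: (name: string, input: unknown) => string,
) {
  const results = content
    .filter((b): b is ToolUseBlock => b.type === "tool_use")
    .map((b) => ({
      type: "tool_result" as const,
      tool_use_id: b.id,
      content: runTool(b.name, b.input),
    }))
  return { role: "user" as const, content: results }
}
```

Append the returned turn to `messages` and call `client.messages.create` again to let the model continue.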

You can also use the AI SDK for a cleaner DX (Zod schemas instead of hand-written JSON Schema):

import { generateText, tool } from "ai"
import { createGateway } from "@ai-sdk/gateway"
import { z } from "zod"

const gateway = createGateway({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api/v1",
})

await generateText({
  model: gateway("anthropic/claude-opus-4.6"),
  prompt: "Find customer John Smith.",
  tools: {
    searchDatabase: tool({
      description: "Search the customer database",
      inputSchema: z.object({ query: z.string() }),
      execute: async ({ query }) => searchDb(query),
    }),
  },
})

Extended thinking

Pass thinkingBudget (USD per request) to control reasoning depth:

client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 8192,
  // @ts-expect-error
  providerOptions: {
    anthropic: { thinkingBudget: 0.05 },
  },
  messages: [{ role: "user", content: "Solve this proof: ..." }],
})

See Reasoning for the full guide.

1M context

Synapse Garden auto-enables the 1M context window on Claude Opus 4.6 / 4.7 and Sonnet 4.6 / 4.5 / 4. No flag needed. Pricing for >200K-token prompts uses the long-context tier — visible on the model detail page.

Common gotchas

  • Model names use dots, not dashes. anthropic/claude-opus-4.6 — Anthropic's direct API accepts both claude-opus-4-6 and claude-opus-4.6; we normalized on dots.
  • Base URL has no /v1. The Anthropic SDK appends it automatically.
  • Cache control markers are still pass-through. If you've already manually placed cache_control: { type: 'ephemeral' } markers, they keep working. caching: 'auto' is purely additive.
  • Stop sequences — supported.
  • Tool choice — supported (auto, any, tool).
  • Multi-turn caching — caching: 'auto' adds markers at the boundary of the system prompt; for explicit per-message caching, use manual markers.
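
For reference, an explicitly placed marker looks like this; it passes through verbatim whether or not `caching: 'auto'` is set (request shape per Anthropic's Messages API; the corpus text is a placeholder):

```typescript
// A manually placed ephemeral cache marker on the system block.
const request = {
  model: "anthropic/claude-opus-4.6",
  max_tokens: 1024,
  system: [
    {
      type: "text",
      text: "You are a legal research assistant. <large stable corpus here>",
      cache_control: { type: "ephemeral" as const },
    },
  ],
  messages: [{ role: "user" as const, content: "Summarize section 4." }],
}
```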

Beyond Anthropic

Once you're on Synapse Garden, you can experiment with non-Anthropic models without changing your SDK — keep using the Anthropic SDK for OpenAI / Gemini / Llama via our compat layer:

const client = new Anthropic({
  apiKey: process.env.MG_KEY,
  baseURL: "https://synapse.garden/api",
})

// All of these work with the Anthropic SDK shape:
client.messages.create({ model: "anthropic/claude-opus-4.6", ... })
client.messages.create({ model: "openai/gpt-5.4", ... })
client.messages.create({ model: "google/gemini-3.1-pro-preview", ... })
client.messages.create({ model: "meta/llama-4-405b", ... })

For more advanced multi-provider DX, the AI SDK gives you a single generateText API across all of them — see SDK guide.