Migration

Migrate from OpenAI direct

Two-line change. Same SDK, same models, all the governance you didn't have before.

FIG. 00 · MIGRATE FROM OPENAI DIRECT · BASE_URL SWAP

Migrating from a direct OpenAI integration to Synapse Garden takes two lines of code. Your existing OpenAI SDK calls continue to work — you change the base URL and the API key, and prefix model names with openai/. The same swap also unlocks the AI SDK's streamText and friends across every other model in the catalog.

FIG. 01 · TWO-LINE CUTOVER · SCHEMATIC
Repoint your existing OpenAI client at `https://synapse.garden/api/v1` with an `mg_live_*` key, and prefix model ids with `openai/`. Tools, JSON mode, vision, and embeddings keep their exact shape — and other providers become reachable from the same code.

Diff

  import OpenAI from "openai"

  const client = new OpenAI({
-   apiKey: process.env.OPENAI_API_KEY,
+   apiKey: process.env.MG_KEY,
+   baseURL: "https://synapse.garden/api/v1",
  })

  const res = await client.chat.completions.create({
-   model: "gpt-5.4",
+   model: "openai/gpt-5.4",
    messages: [{ role: "user", content: "..." }],
  })

That's it. Streaming, tool use, vision, JSON mode, embeddings, image generation — all work unchanged.

What changes

| Aspect | OpenAI direct | Synapse Garden |
| --- | --- | --- |
| Base URL | https://api.openai.com/v1 | https://synapse.garden/api/v1 |
| API key | sk-... | mg_live_... |
| Model name | gpt-5.4 | openai/gpt-5.4 |
| Streaming | Same | Same |
| Tool use | Same | Same |
| JSON mode | Same | Same |
| Vision | Same | Same |
| Embeddings | Same | Same |
| Image generation | Same | Same |
| Errors | OpenAI codes | Same codes + our governance codes (MODEL_NOT_ALLOWED, BUDGET_EXCEEDED) |
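
The governance codes are the only new error surface. One way a caller might branch on them (the classifier and its shapes are our sketch, not part of the SDK):

```typescript
// Hypothetical classifier for gateway errors. MODEL_NOT_ALLOWED,
// BUDGET_EXCEEDED, and the 402 status are the governance additions
// described above; anything else is a pass-through OpenAI error code.
type GatewayError =
  | { kind: "model_not_allowed" }
  | { kind: "budget_exceeded" }
  | { kind: "upstream"; code: string | null };

export function classifyError(
  status: number,
  code: string | null,
): GatewayError {
  if (code === "MODEL_NOT_ALLOWED") return { kind: "model_not_allowed" };
  if (status === 402 || code === "BUDGET_EXCEEDED") {
    return { kind: "budget_exceeded" };
  }
  return { kind: "upstream", code };
}
```

Existing retry logic keyed on OpenAI's error codes keeps working; only these two codes need new handling.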

What you gain

  • Per-project keys — issue mg_live_* keys per environment / per service / per developer. Revoke without touching production.
  • Spend caps — hard ceiling per project. Returns 402 when exceeded.
  • Model allowlists — restrict which models a project can call.
  • Audit log — every key action logged for 90+ days.
  • One key for every model — same SDK, swap to anthropic/claude-opus-4.6 or google/gemini-3.1-pro-preview without re-integrating.
  • Provider routing — automatic fallback across providers when one is overloaded.
  • Single bill — pay one place for every model.

What stays the same

  • The OpenAI SDK contract — every method, every option, every error code shape.
  • Your existing tool definitions, JSON schemas, prompts, and chains.
  • Streaming behavior (same SSE format).
  • Token counting (same usage shape).
  • Pricing per token (we charge the published list rate; see /pricing for plans).
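
Because the usage shape is unchanged, any cost accounting you built against OpenAI's fields keeps working. A small sketch (the rate arguments are placeholders for your own per-token rates, not our published prices):

```typescript
// Stock OpenAI usage fields; nothing gateway-specific here.
type Usage = {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
};

// Hypothetical cost helper -- supply per-token rates from /models.
export function cost(u: Usage, inRate: number, outRate: number): number {
  return u.prompt_tokens * inRate + u.completion_tokens * outRate;
}
```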

Multi-step migration plan

If you have a large codebase, do the cutover in steps:

01

Sign up + create a key

synapse.garden/signup → create workspace → create a key in Keys → New API key. Use a name like migration-test.

02

Mirror your env var

Add MG_KEY=mg_live_... next to your existing OPENAI_API_KEY. Don't remove the old one yet.

03

Migrate one route or service first

Pick a low-traffic route. Change its OpenAI client to use the Synapse Garden base URL and MG_KEY. Prefix model names with openai/. Deploy.
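
One way to keep the cutover reversible per route is a small options switch; the flag pattern below is a suggestion, not a product feature:

```typescript
// Hypothetical per-route switch: build client options for either the
// direct OpenAI endpoint or the Synapse Garden gateway.
type ClientOptions = { apiKey: string | undefined; baseURL?: string };

export function clientOptions(useGateway: boolean): ClientOptions {
  return useGateway
    ? { apiKey: process.env.MG_KEY, baseURL: "https://synapse.garden/api/v1" }
    : { apiKey: process.env.OPENAI_API_KEY }; // SDK default: api.openai.com
}
```

Pass the result straight to `new OpenAI(clientOptions(flag))`; rolling a route back is then a one-line revert.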

04

Verify in the dashboard

Open Dashboard → Usage and confirm requests are flowing. Check token counts match what you expect.

05

Roll out to the rest

Once comfortable, migrate the remaining routes / services one at a time. Each is isolated by its API key, so a problem in one doesn't affect others.

06

Set governance rules

  • Create separate projects for production / staging / dev.
  • Set spend caps per project.
  • Configure model allowlists for production (lock to openai/gpt-5.4, openai/gpt-5.4-mini).
  • Invite teammates with appropriate roles.

07

Decommission the OpenAI key

After ~7 days of clean traffic on Synapse Garden, revoke your direct OpenAI key. You won't need it again.

Common gotchas

  • Model names must include the openai/ prefix. gpt-5.4 will return 404; openai/gpt-5.4 works.
  • Embedding model names also need the prefix: openai/text-embedding-3-large, not text-embedding-3-large.
  • response_format JSON mode — both json_object and json_schema strict: true work transparently.
  • Tool definitions — exact same shape. No changes needed.
  • Vision URLs — public HTTPS URLs work; for private images, base64-encode like before.
  • Organization header — OpenAI's OpenAI-Organization header is silently ignored (we use our own org/project model). Drop it.
  • Beta headers — most beta features (predicted outputs, structured outputs strict mode) pass through. If something doesn't, it's almost certainly because the upstream model in our catalog hasn't been updated yet — check the model detail page for capability flags.
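
The two prefix gotchas can be handled once at the call site. A hypothetical guard (not part of the SDK):

```typescript
// Ensure a model id carries a provider prefix; bare OpenAI names
// (which would 404 on the gateway) get "openai/" prepended, while
// already-qualified ids like "anthropic/claude-opus-4.6" pass through.
export function qualifyModel(id: string): string {
  return id.includes("/") ? id : `openai/${id}`;
}
```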

Cost comparison

You pay the same list rate per token as you did calling OpenAI directly. Plan subscriptions and included tokens sit on top of that; see /pricing.

If your traffic was already small (<1M tokens / month), the free tier covers it.

If you had OpenAI volume discounts negotiated directly, those don't transfer; check your direct rate vs. our published rates on /models. For enterprise volumes, email us about custom pricing.

Beyond OpenAI

The biggest reason to migrate is flexibility. Once you're on Synapse Garden:

  • Try Anthropic with one line: change the model to anthropic/claude-opus-4.6. The SDK doesn't change (we speak both wire formats).
  • Try Google: google/gemini-3.1-pro-preview.
  • Try open weights: meta/llama-4-405b.
  • A/B test models with provider routing — set fallback chains, sort by cost or latency.
  • Add governance: per-project allowlists, spend caps, audit trails — without redeploying.
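
Provider routing handles fallback server-side; if you would rather control it in code, a chain like this works with any OpenAI-compatible call (the helper is our sketch only):

```typescript
// Hypothetical client-side fallback: try each model id in order until
// one call succeeds, e.g. ["openai/gpt-5.4", "anthropic/claude-opus-4.6"].
export async function withFallback<T>(
  models: string[],
  call: (model: string) => Promise<T>,
): Promise<T> {
  let lastError: unknown = new Error("empty model list");
  for (const model of models) {
    try {
      return await call(model);
    } catch (err) {
      lastError = err; // overloaded or disallowed -- try the next model
    }
  }
  throw lastError;
}
```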

You went from one provider to all of them, with strict governance, in two lines of code.