# Migrate from OpenAI direct
Two-line change. Same SDK, same models, all the governance you didn't have before.
Migrating from a direct OpenAI integration to Synapse Garden takes two lines of code. Your existing OpenAI SDK calls continue to work — you change the base URL and the API key, and prefix model names with `openai/`. The same swap also unlocks the AI SDK's streamText and friends across every other model in the catalog.
## Diff

```diff
  import OpenAI from "openai"

  const client = new OpenAI({
-   apiKey: process.env.OPENAI_API_KEY,
+   apiKey: process.env.MG_KEY,
+   baseURL: "https://synapse.garden/api/v1",
  })

  const res = await client.chat.completions.create({
-   model: "gpt-5.4",
+   model: "openai/gpt-5.4",
    messages: [{ role: "user", content: "..." }],
  })
```

That's it. Streaming, tool use, vision, JSON mode, embeddings, image generation — all work unchanged.
## What changes
| Aspect | OpenAI direct | Synapse Garden |
|---|---|---|
| Base URL | `https://api.openai.com/v1` | `https://synapse.garden/api/v1` |
| API key | `sk-...` | `mg_live_...` |
| Model name | `gpt-5.4` | `openai/gpt-5.4` |
| Streaming | Same | Same |
| Tool use | Same | Same |
| JSON mode | Same | Same |
| Vision | Same | Same |
| Embeddings | Same | Same |
| Image generation | Same | Same |
| Errors | OpenAI codes | Same codes + our governance codes (`MODEL_NOT_ALLOWED`, `BUDGET_EXCEEDED`) |
## What you gain
- Per-project keys — issue `mg_live_*` keys per environment / per service / per developer. Revoke without touching production.
- Spend caps — hard ceiling per project. Returns 402 when exceeded.
- Model allowlists — restrict which models a project can call.
- Audit log — every key action logged for 90+ days.
- One key for every model — same SDK; swap to `anthropic/claude-opus-4.6` or `google/gemini-3.1-pro-preview` without re-integrating.
- Provider routing — automatic fallback across providers when one is overloaded.
- Single bill — pay one place for every model.
## What stays the same
- The OpenAI SDK contract — every method, every option, every error code shape.
- Your existing tool definitions, JSON schemas, prompts, and chains.
- Streaming behavior (same SSE format).
- Token counting (same `usage` shape).
- Pricing per token (we charge the published list rate; see /pricing for plans).
## Multi-step migration plan
If you have a large codebase, do the cutover in steps:
### 1. Sign up + create a key

synapse.garden/signup → create workspace → create a key in Keys → New API key. Use a name like `migration-test`.
### 2. Mirror your env var

Add `MG_KEY=mg_live_...` next to your existing `OPENAI_API_KEY`. Don't remove the old one yet.
### 3. Migrate one route or service first

Pick a low-traffic route. Change its OpenAI client to use the Synapse Garden base URL and `MG_KEY`. Prefix model names with `openai/`. Deploy.
### 4. Verify in the dashboard
Open Dashboard → Usage and confirm requests are flowing. Check token counts match what you expect.
### 5. Roll out to the rest
Once comfortable, migrate the remaining routes / services one at a time. Each is isolated by its API key, so a problem in one doesn't affect others.
### 6. Set governance rules
- Create separate projects for production / staging / dev.
- Set spend caps per project.
- Configure model allowlists for production (lock to `openai/gpt-5.4`, `openai/gpt-5.4-mini`).
- Invite teammates with appropriate roles.
### 7. Decommission the OpenAI key
After ~7 days of clean traffic on Synapse Garden, revoke your direct OpenAI key. You won't need it again.
## Common gotchas
- Model names must include the `openai/` prefix. `gpt-5.4` will return 404; `openai/gpt-5.4` works.
- Embedding model names also need the prefix: `openai/text-embedding-3-large`, not `text-embedding-3-large`.
- `response_format` JSON mode — both `json_object` and `json_schema` with `strict: true` work transparently.
- Tool definitions — exact same shape. No changes needed.
- Vision URLs — public HTTPS URLs work; for private images, base64-encode like before.
- Organization header — OpenAI's `OpenAI-Organization` header is silently ignored (we use our own org/project model). Drop it.
- Beta headers — most beta features (predicted outputs, structured outputs strict mode) pass through. If something doesn't, it's almost certainly because the upstream model in our catalog hasn't been updated yet — check the model detail page for capability flags.
## Cost comparison
You pay the same list rate per token as you did calling OpenAI directly. Plans and included tokens sit on top of that — see /pricing.
If your traffic was already small (<1M tokens / month), the free tier covers it.
If you had OpenAI volume discounts negotiated directly, those don't transfer; check your direct rate vs. our published rates on /models. For enterprise volumes, email us about custom pricing.
## Beyond OpenAI
The biggest reason to migrate is flexibility. Once you're on Synapse Garden:
- Try Anthropic with one line: change the model to `anthropic/claude-opus-4.6`. The SDK doesn't change (we speak both wire formats).
- Try Google: `google/gemini-3.1-pro-preview`.
- Try open weights: `meta/llama-4-405b`.
- A/B test models with provider routing — set fallback chains, sort by cost or latency.
- Add governance: per-project allowlists, spend caps, audit trails — without redeploying.
You went from one provider to all of them, with strict governance, in two lines of code.