Setup flow · Claude Code
From a fresh repo to a streaming chat call in under a minute, driven entirely by your coding agent. No tab switching, no copying URLs.
This is a runbook your coding agent follows verbatim. Open Claude Code in a fresh Next.js app, paste the prompt at the bottom of step 3, and the agent does the rest — picks a model, mints a key, writes the wiring, and proves the loop with a streaming call. The AI SDK's streamText is the default client; the MCP server is what gives the agent live state.
Install the MCP server
Once per machine. The CLI registers synapse-garden as a stdio MCP server that runs over npx — no global install, the package is fetched on first use and cached.
claude mcp add synapse-garden -- npx -y @synapse-garden/mcp

Confirm it registered:
claude mcp list
# synapse-garden  npx -y @synapse-garden/mcp  (stdio)

If you're on Cursor, Cline, or Claude Desktop, the equivalent block goes in .mcp.json — see the MCP server install docs. The rest of this flow is identical regardless of client.
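For hand-written configs, the stdio entry mirrors what claude mcp add registers. A sketch of the .mcp.json block — the surrounding shape follows the common mcpServers convention, so check your client's docs for any differences:

```json
{
  "mcpServers": {
    "synapse-garden": {
      "command": "npx",
      "args": ["-y", "@synapse-garden/mcp"]
    }
  }
}
```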
Mint a personal access token
The MCP server authenticates with a mg_pat_* token whose scopes you pick via checkboxes in the dashboard. Open /app/linear-prod/agent-tokens, click New token, name it claude-code-laptop, and grant the smallest set this flow needs:
- keys:write — to mint an mg_live_* proxy key for the project
- models:read — to read mg://catalog
- logs:read — to verify the smoke test landed
Copy the cleartext (mg_pat_…) — it is shown once. Export it in the shell that runs Claude Code:
export MG_PAT=mg_pat_8f3c2a91d4e76b50aa1c2f9e

The token is sha256-hashed at rest the moment you click create. The dashboard cannot show it again; if you lose it, revoke and re-issue.
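Hashed-at-rest means verification recomputes the digest from whatever token the client presents — the server never needs the cleartext again. A minimal sketch of the pattern; function names are illustrative, not Synapse Garden's actual server code:

```typescript
import { createHash } from "node:crypto"

// Digest computed once at creation; the cleartext is shown once, then discarded.
function hashToken(cleartext: string): string {
  return createHash("sha256").update(cleartext).digest("hex")
}

// Each MCP request recomputes and compares. A lost cleartext is unrecoverable
// by design, which is why the only remedy is revoke and re-issue.
function verifyToken(presented: string, storedDigest: string): boolean {
  return hashToken(presented) === storedDigest
}
```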
Hand the work to the agent
In the project root (web-server/, a fresh create-next-app), open Claude Code and paste:
Wire Synapse Garden into this app. Pick a cheap model for chat, mint an
mg_live_* key scoped to a new project called web-server, write .env.local,
edit app/api/chat/route.ts to stream a hello world with the AI SDK, and
run the smoke test.
You do not type URLs. You do not paste model ids. The agent fans out from here.
Watch the agent work
The agent emits these tool calls in order. The shapes below are real tool_use blocks the MCP server will receive — useful when you're debugging your own skill compositions.
First, the catalog read — no token cost beyond the resource fetch:
{
"type": "tool_use",
"name": "read_resource",
"input": { "uri": "mg://catalog" }
}

Then a recommendation, narrowed to the cheap-and-fast slice:
{
"type": "tool_use",
"name": "recommend_model",
"input": {
"task": "Streaming chat for a Next.js demo route. Latency-sensitive. Cost matters more than reasoning depth.",
"constraints": { "max_input_price_per_mtok_usd": 0.40, "must_support": ["streaming"] },
"limit": 1
}
}

The server returns openai/gpt-4o-mini as the top pick with the rationale and a price comparison against two near-neighbors. The agent commits to it.
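The constraint narrowing amounts to a filter-and-sort over the catalog. A minimal sketch of that slice, assuming catalog entries carry a per-MTok input price and a feature list — field names here are illustrative, not the real mg://catalog schema:

```typescript
type CatalogModel = {
  id: string
  input_price_per_mtok_usd: number
  supports: string[]
}

// Cheapest-first shortlist under a price ceiling and required features.
function shortlist(
  models: CatalogModel[],
  maxInputPricePerMtokUsd: number,
  mustSupport: string[],
  limit: number,
): CatalogModel[] {
  return models
    .filter((m) => m.input_price_per_mtok_usd <= maxInputPricePerMtokUsd)
    .filter((m) => mustSupport.every((f) => m.supports.includes(f)))
    .sort((a, b) => a.input_price_per_mtok_usd - b.input_price_per_mtok_usd)
    .slice(0, limit)
}
```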
Now the mint — this is a write tool, so the first call comes back asking for confirmation:
{
"type": "tool_use",
"name": "create_api_key",
"input": {
"project": { "create_if_missing": true, "slug": "web-server" },
"name": "web-server · local dev",
"scopes": ["chat", "messages"],
"environment": "live",
"budget": { "monthly_usd": 25 }
}
}

Response (truncated):
{
"status": "confirmation_required",
"summary": "Create live key 'web-server · local dev' in project 'web-server' (new). Scopes: chat, messages. Monthly cap: $25.",
"diff": { "projects.created": ["web-server"], "keys.created": 1 }
}

Claude Code surfaces that summary inline; you click approve; the agent re-issues with confirm: true and gets back the cleartext exactly once:
{
"key": {
"id": "key_01HZX9R2K8M3N7QY",
"prefix": "mg_live_",
"cleartext": "mg_live_b14d72e9c3a8ff05a26d1f73c829ab44",
"project": "web-server",
"scopes": ["chat", "messages"],
"monthly_cap_usd": 25
}
}

The agent writes the cleartext to .env.local and never repeats it back to you in chat. It then edits app/api/chat/route.ts:
import { streamText } from "ai"
import { createOpenAI } from "@ai-sdk/openai"

// OpenAI-compatible provider pointed at the Synapse Garden proxy
const garden = createOpenAI({
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
})

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = streamText({
    model: garden("openai/gpt-4o-mini"),
    messages,
  })
  return result.toTextStreamResponse()
}

Smoke test, run by the agent, not you:
curl -N http://localhost:3000/api/chat \
-H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"say hi in 6 words"}]}'

It reads back the streaming bytes, confirms a 200, and tails errors as a sanity check:
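"Reads back the streaming bytes" means consuming the response body chunk by chunk. A runnable sketch of that loop — the stream here is constructed locally so it runs without the dev server; against the real route you would pass res.body from a fetch to /api/chat:

```typescript
// Accumulate a streamed body into a string, decoding as bytes arrive.
async function readAll(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let text = ""
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    text += decoder.decode(value, { stream: true })
  }
  return text + decoder.decode() // flush any trailing multi-byte sequence
}

// Stand-in for the route's streaming response body.
const demo = new ReadableStream<Uint8Array>({
  start(controller) {
    const enc = new TextEncoder()
    for (const chunk of ["Hi ", "from ", "six ", "short ", "streamed ", "words"]) {
      controller.enqueue(enc.encode(chunk))
    }
    controller.close()
  },
})
```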
{
"type": "tool_use",
"name": "tail_errors",
"input": { "since": "5m", "project": "web-server" }
}

A clean run returns { "errors": [], "checked_window_seconds": 312 }.
Verify in the dashboard
Open /app/linear-prod/keys — you'll see one new mg_live_* row, prefix-only, scoped chat, messages, project web-server, monthly cap $25. Open /app/linear-prod/audit and you'll see two events the MCP server filed on your behalf:
- mcp.project_created — web-server, by priya@yourco.com via claude-code (PAT mg_pat_8f3c…)
- mcp.key_created — key_01HZX9R2K8M3N7QY, scopes chat, messages, cap $25/mo
That's the audit trail. Every mutation tool the agent runs lands here with the user, the org, the tool name, the args hash, and the client identifier the MCP transport reported. Silent spend is not possible — it's what the confirm: true step buys you.
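The args hash is worth a note: it lets the audit row prove what was requested without storing secrets from the request. A sketch of one way to compute a stable hash over tool args — hypothetical, since these docs don't specify the server's canonicalization:

```typescript
import { createHash } from "node:crypto"

// Key-order-independent JSON canonicalization, so the same args always
// produce the same digest regardless of how the client ordered fields.
function canonicalize(v: unknown): string {
  if (v === null || typeof v !== "object") return JSON.stringify(v)
  if (Array.isArray(v)) return "[" + v.map(canonicalize).join(",") + "]"
  const o = v as Record<string, unknown>
  const body = Object.keys(o)
    .sort()
    .map((k) => JSON.stringify(k) + ":" + canonicalize(o[k]))
    .join(",")
  return "{" + body + "}"
}

function argsHash(args: Record<string, unknown>): string {
  return createHash("sha256").update(canonicalize(args)).digest("hex")
}
```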
What just happened
The agent never typed a URL, never opened a browser tab, and never asked you for a model name. The MCP server gave it three things the docs site cannot: live catalog state, a scoped credential surface (the PAT), and a confirmation gate on writes. The skill pack — covered next — gave it the AI SDK shape it pasted into your route. Together they collapse the "read docs, copy snippet, paste key, fix typo" loop into one prompt.
If something looked off mid-flow:
- Confirmation summary didn't match what you wanted. Reject in Claude Code; the agent will narrow scopes or change the cap and ask again. There is no partial mutation — confirm: true is the only path that writes.
- recommend_model picked something more expensive than you expected. Add max_input_price_per_mtok_usd to the constraints in your prompt; the agent passes it through.
- Smoke test came back 4xx. The agent will call tail_errors itself; you can also call mg://errors/recent from the next prompt to read the same window.
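The no-partial-mutation guarantee is a two-phase pattern: a write tool describes its mutation first and only mutates on an explicit confirm: true. A minimal sketch of that gate — hypothetical types and handler, not the server's actual code:

```typescript
type MintRequest = {
  name: string
  scopes: string[]
  budget: { monthly_usd: number }
  confirm?: boolean
}

type MintResponse =
  | { status: "confirmation_required"; summary: string }
  | { status: "created"; keyPrefix: string }

function handleCreateApiKey(req: MintRequest): MintResponse {
  if (!req.confirm) {
    // Phase 1: report what would happen. Nothing is written yet.
    return {
      status: "confirmation_required",
      summary:
        `Create key '${req.name}'. Scopes: ${req.scopes.join(", ")}. ` +
        `Monthly cap: $${req.budget.monthly_usd}.`,
    }
  }
  // Phase 2: confirm: true is the only path that writes.
  return { status: "created", keyPrefix: "mg_live_" }
}
```

Rejecting in the client simply means phase 2 is never reached, which is why a rejected mint leaves no project or key behind.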
Next
Skill pack
A curated bundle of Anthropic Agent Skills that teach your coding agent how to use Synapse Garden idiomatically. AI SDK recipes, migration playbooks, debugging.
Migration flow · Cursor
Two-line repoint from `api.openai.com` to your Synapse Garden deployment, driven by Cursor's agent over MCP. Tools, JSON mode, vision, and embeddings keep their exact shape.