
Setup flow · Claude Code

From a fresh repo to a streaming chat call in under a minute, driven entirely by your coding agent. No tab switching, no copying URLs.

FIG. 00 · SETUP FLOW · 60 SEC TO 200 OK

This is a runbook your coding agent follows verbatim. Open Claude Code in a fresh Next.js app, paste the prompt at the bottom of step 3, and the agent does the rest — picks a model, mints a key, writes the wiring, and proves the loop with a streaming call. The AI SDK's streamText is the default client; the MCP server is what gives the agent live state.

FIG. 01 · AGENT TIMELINE · SCHEMATIC
The agent reads `mg://catalog`, asks for a recommendation, mints a scoped key behind a confirmation, writes `.env.local` and one route file, runs the smoke test, and reads back `mg://errors/recent` to confirm a clean run. Total wall time on a healthy connection: 38–62 seconds.
01

Install the MCP server

Once per machine. The CLI registers synapse-garden as a stdio MCP server that runs over npx — no global install, the package is fetched on first use and cached.

claude mcp add synapse-garden -- npx -y @synapse-garden/mcp

Confirm it registered:

claude mcp list
# synapse-garden  npx -y @synapse-garden/mcp  (stdio)

If you're on Cursor, Cline, or Claude Desktop, the equivalent block goes in .mcp.json — see the MCP server install docs. The rest of this flow is identical regardless of client.
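For reference, the equivalent `.mcp.json` entry is a short block. This is a sketch; the exact schema varies slightly by client, so check your client's install docs:

```json
{
  "mcpServers": {
    "synapse-garden": {
      "command": "npx",
      "args": ["-y", "@synapse-garden/mcp"]
    }
  }
}
```

The `MG_PAT` export from step 2 still applies: most clients inherit the shell environment, and several also accept a per-server `env` map in this file if you prefer to scope the variable to one server.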

02

Mint a personal access token

The MCP server authenticates with an mg_pat_* token whose scopes map one-to-one to dashboard checkboxes. Open /app/linear-prod/agent-tokens, click New token, name it claude-code-laptop, and grant the smallest set this flow needs:

  • keys:write — to mint an mg_live_* proxy key for the project
  • models:read — to read mg://catalog
  • logs:read — to verify the smoke test landed

Copy the cleartext (mg_pat_…) — it is shown once. Export it in the shell that runs Claude Code:

export MG_PAT=mg_pat_8f3c2a91d4e76b50aa1c2f9e

The token is SHA-256-hashed at rest the moment you click create. The dashboard cannot show it again; if you lose it, revoke and re-issue.
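The hash-at-rest pattern described above can be sketched in a few lines. This is an illustration of the general technique (digest the cleartext, store only the digest, compare digests on each request), not Synapse Garden's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Hash a PAT for storage. Only the digest is persisted; the
// cleartext exists only in the response that minted the token.
function hashToken(cleartext: string): string {
  return createHash("sha256").update(cleartext).digest("hex");
}

// On each MCP request, hash the presented token and compare digests.
function tokenMatches(presented: string, storedDigest: string): boolean {
  return hashToken(presented) === storedDigest;
}
```

A production check would compare with `crypto.timingSafeEqual` rather than `===` to avoid leaking timing information, but the storage model is the same: lose the cleartext and there is nothing to recover.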

03

Hand the work to the agent

In the project root (web-server/, a fresh create-next-app), open Claude Code and paste:

Wire Synapse Garden into this app. Pick a cheap model for chat, mint an mg_live_* key scoped to a new project called web-server, write .env.local, edit app/api/chat/route.ts to stream a hello world with the AI SDK, and run the smoke test.

You do not type URLs. You do not paste model ids. The agent fans out from here.

04

Watch the agent work

The agent emits these tool calls in order. The shapes below are real tool_use blocks the MCP server will receive — useful when you're debugging your own skill compositions.

First, the catalog read — no token cost beyond the resource fetch:

{
  "type": "tool_use",
  "name": "read_resource",
  "input": { "uri": "mg://catalog" }
}

Then a recommendation, narrowed to the cheap-and-fast slice:

{
  "type": "tool_use",
  "name": "recommend_model",
  "input": {
    "task": "Streaming chat for a Next.js demo route. Latency-sensitive. Cost matters more than reasoning depth.",
    "constraints": { "max_input_price_per_mtok_usd": 0.40, "must_support": ["streaming"] },
    "limit": 1
  }
}
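Server-side, that constraints object plausibly reduces to a filter-and-sort over the catalog. A hypothetical sketch, assuming catalog entries carry an `input_price_per_mtok_usd` and a `supports` list (both field names are assumptions, not the real catalog schema):

```typescript
interface CatalogModel {
  id: string;
  input_price_per_mtok_usd: number; // assumed field name
  supports: string[];               // assumed, e.g. ["streaming", "tools"]
}

interface Constraints {
  max_input_price_per_mtok_usd?: number;
  must_support?: string[];
}

// Filter out models that violate any constraint, then rank the
// survivors cheapest-first and keep the top `limit`.
function recommend(catalog: CatalogModel[], c: Constraints, limit = 1): CatalogModel[] {
  return catalog
    .filter(m =>
      (c.max_input_price_per_mtok_usd === undefined ||
        m.input_price_per_mtok_usd <= c.max_input_price_per_mtok_usd) &&
      (c.must_support ?? []).every(f => m.supports.includes(f)))
    .sort((a, b) => a.input_price_per_mtok_usd - b.input_price_per_mtok_usd)
    .slice(0, limit);
}
```

The real tool also weighs latency and returns a rationale, but the constraint semantics are the useful part: anything you put in `constraints` is a hard filter, not a soft preference.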

The server returns openai/gpt-4o-mini as the top pick with the rationale and a price comparison against two near-neighbors. The agent commits to it.

Now the mint — this is a write tool, so the first call comes back asking for confirmation:

{
  "type": "tool_use",
  "name": "create_api_key",
  "input": {
    "project": { "create_if_missing": true, "slug": "web-server" },
    "name": "web-server · local dev",
    "scopes": ["chat", "messages"],
    "environment": "live",
    "budget": { "monthly_usd": 25 }
  }
}

Response (truncated):

{
  "status": "confirmation_required",
  "summary": "Create live key 'web-server · local dev' in project 'web-server' (new). Scopes: chat, messages. Monthly cap: $25.",
  "diff": { "projects.created": ["web-server"], "keys.created": 1 }
}

Claude Code surfaces that summary inline; you click approve; the agent re-issues with confirm: true and gets back the cleartext exactly once:

{
  "key": {
    "id": "key_01HZX9R2K8M3N7QY",
    "prefix": "mg_live_",
    "cleartext": "mg_live_b14d72e9c3a8ff05a26d1f73c829ab44",
    "project": "web-server",
    "scopes": ["chat", "messages"],
    "monthly_cap_usd": 25
  }
}
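Between the mint and the route edit, the cleartext has to land in `.env.local`. A sketch of that step, assuming append-or-replace semantics on the `MG_KEY` line (the helper name is hypothetical; `MG_KEY` is the variable the route reads):

```typescript
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Write or replace the MG_KEY line without disturbing other entries.
function writeEnvKey(path: string, cleartext: string): void {
  const lines = existsSync(path)
    ? readFileSync(path, "utf8").split("\n").filter(l => l !== "" && !l.startsWith("MG_KEY="))
    : [];
  lines.push(`MG_KEY=${cleartext}`);
  writeFileSync(path, lines.join("\n") + "\n");
}
```

Replace-not-append matters: rerunning the flow after a key rotation should leave exactly one `MG_KEY` line, not a stack of stale ones.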

The agent writes the cleartext to .env.local and never repeats it back to you in chat. It then edits app/api/chat/route.ts:

import { streamText } from "ai"
import { createOpenAI } from "@ai-sdk/openai"

// Point an OpenAI-compatible provider at the Synapse Garden proxy;
// streamText takes a model instance, not a bare baseURL/apiKey.
const garden = createOpenAI({
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
})

export async function POST(req: Request) {
  const { messages } = await req.json()
  const result = streamText({
    model: garden("openai/gpt-4o-mini"),
    messages,
  })
  return result.toTextStreamResponse()
}

Smoke test, run by the agent, not you:

curl -N http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"say hi in 6 words"}]}'

It reads back the streaming bytes, confirms a 200, and tails errors as a sanity check:

{
  "type": "tool_use",
  "name": "tail_errors",
  "input": { "since": "5m", "project": "web-server" }
}

A clean run returns { "errors": [], "checked_window_seconds": 312 }.
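The agent's pass/fail decision on that response is mechanical. A minimal sketch, with the extra guard that the checked window actually covered the smoke test (the type and helper names are illustrative):

```typescript
interface TailErrorsResult {
  errors: unknown[];
  checked_window_seconds: number;
}

// A run is clean when nothing landed in the window and the window
// is wide enough to have contained the smoke test.
function isCleanRun(r: TailErrorsResult, minWindowSeconds = 60): boolean {
  return r.errors.length === 0 && r.checked_window_seconds >= minWindowSeconds;
}
```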

05

Verify in the dashboard

Open /app/linear-prod/keys — you'll see one new mg_live_* row, prefix-only, scoped chat, messages, project web-server, monthly cap $25. Open /app/linear-prod/audit and you'll see two events the MCP server filed on your behalf:

  • mcp.project_created · web-server, by priya@yourco.com via claude-code (PAT mg_pat_8f3c…)
  • mcp.key_created · key_01HZX9R2K8M3N7QY, scopes chat, messages, cap $25/mo

That's the audit trail. Every mutation tool the agent runs lands here with the user, the org, the tool name, the args hash, and the client identifier the MCP transport reported. Silent spend is not possible — it's what the confirm: true step buys you.
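As a type, an audit record like the ones above might look like this. The field names are illustrative, not the real schema; the point is the args hash, which lets you prove what a tool was called with without persisting sensitive arguments:

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape for one audit event.
interface AuditEvent {
  event: string;        // e.g. "mcp.key_created"
  user: string;         // e.g. "priya@yourco.com"
  org: string;
  tool: string;         // the mutation tool the agent ran
  args_sha256: string;  // digest of the args, not the args themselves
  client: string;       // transport-reported client, e.g. "claude-code"
}

// Deterministically digest tool args for the audit row.
function hashArgs(args: unknown): string {
  return createHash("sha256").update(JSON.stringify(args)).digest("hex");
}
```

Storing a digest instead of raw args keeps key material and prompts out of the audit table while still letting you verify, later, that a given call matched a given record.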

What just happened

The agent never typed a URL, never opened a browser tab, and never asked you for a model name. The MCP server gave it three things the docs site cannot: live catalog state, a scoped credential surface (the PAT), and a confirmation gate on writes. The skill pack (see the skill pack docs) gave it the AI SDK shape it pasted into your route. Together they collapse the "read docs, copy snippet, paste key, fix typo" loop into one prompt.

If something looked off mid-flow:

  • Confirmation summary didn't match what you wanted. Reject in Claude Code; the agent will narrow scopes or change the cap and ask again. There is no partial mutation — confirm: true is the only path that writes.
  • recommend_model picked something more expensive than you expected. Add max_input_price_per_mtok_usd to the constraints in your prompt; the agent passes it through.
  • Smoke test came back 4xx. The agent will call tail_errors itself; you can also call mg://errors/recent from the next prompt to read the same window.
