# Structured output
Generate JSON that matches a schema. Type-safe results with the AI SDK, JSON mode for raw fetch.
Most production LLM use cases want JSON, not prose. With the AI SDK, use `generateObject` to get a typed object back from a Zod schema. Synapse Garden supports three flavors of structured output:
- **AI SDK** — `generateObject`/`streamObject`: give it a Zod schema, get a typed object back.
- **OpenAI** — `response_format`: JSON mode and JSON schema mode for raw API users.
- **Anthropic** — tool-use trick: define a single tool with the desired schema and force it.
## With the AI SDK (recommended)
The AI SDK validates the model's output against your Zod schema and returns a typed object. If validation fails, it retries up to two times with the parser error fed back to the model.
```ts
import { generateObject } from "ai"
import { z } from "zod"

const Recipe = z.object({
  name: z.string(),
  servings: z.number().int().positive(),
  ingredients: z.array(
    z.object({
      name: z.string(),
      quantity: z.string(),
    }),
  ),
  steps: z.array(z.string()),
})

const { object } = await generateObject({
  model: "openai/gpt-5.4",
  baseURL: "https://synapse.garden/api/v1",
  apiKey: process.env.MG_KEY,
  schema: Recipe,
  prompt: "Give me a simple weeknight pasta recipe.",
})

console.log(object.name) // string
console.log(object.servings) // number
for (const ing of object.ingredients) console.log(`  ${ing.quantity} ${ing.name}`)
```

`object` is fully typed against `z.infer<typeof Recipe>`. No casts, no parsing.
### Streaming structured output

`streamObject` emits partial objects as the model produces them. Useful for showing progressive UI:
```ts
import { streamObject } from "ai"

const result = streamObject({
  model: "openai/gpt-5.4",
  schema: Recipe,
  prompt: "Give me a recipe.",
})

for await (const partial of result.partialObjectStream) {
  // `partial` is `Partial<z.infer<typeof Recipe>>` — fields fill in as they arrive
  renderProgress(partial)
}

const final = await result.object // fully typed final result
```

## OpenAI-style JSON mode
For non-AI-SDK callers, the OpenAI Chat Completions API surface accepts `response_format`:

### `json_object` — guarantees valid JSON
```ts
const res = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [
    {
      role: "system",
      content:
        "You are a recipe API. Reply with a JSON object: {name, servings, ingredients, steps}.",
    },
    { role: "user", content: "Weeknight pasta." },
  ],
  response_format: { type: "json_object" },
})

const recipe = JSON.parse(res.choices[0].message.content)
```

`json_object` mode means the response is valid JSON — but the schema is up to your prompt. Always include an explicit shape in the system message.
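Because `json_object` mode only guarantees syntax, it's worth guarding the shape yourself before trusting the parsed value. A minimal sketch — the field names come from the system prompt above; `parseRecipe` is a hypothetical helper, not part of any SDK:

```ts
// json_object mode guarantees valid JSON, not that the fields you asked
// for are present or well-typed — so check before using the result.
interface ParsedRecipe {
  name: string
  servings: number
  ingredients: unknown[]
  steps: string[]
}

function parseRecipe(raw: string): ParsedRecipe | null {
  let data: unknown
  try {
    data = JSON.parse(raw)
  } catch {
    return null // shouldn't happen in json_object mode, but stay defensive
  }
  if (typeof data !== "object" || data === null) return null
  const r = data as Record<string, unknown>
  if (typeof r.name !== "string" || typeof r.servings !== "number") return null
  if (!Array.isArray(r.ingredients) || !Array.isArray(r.steps)) return null
  return {
    name: r.name,
    servings: r.servings,
    ingredients: r.ingredients,
    steps: r.steps as string[], // element types left to a full validator (or Zod)
  }
}
```

This is exactly the boilerplate that `json_schema` strict mode (next section) or the AI SDK's Zod validation removes.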
### `json_schema` — guarantees a specific schema
```ts
const res = await client.chat.completions.create({
  model: "openai/gpt-5.4",
  messages: [{ role: "user", content: "Weeknight pasta." }],
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Recipe",
      strict: true,
      schema: {
        type: "object",
        required: ["name", "servings", "ingredients", "steps"],
        additionalProperties: false,
        properties: {
          name: { type: "string" },
          servings: { type: "integer", minimum: 1 },
          ingredients: {
            type: "array",
            items: {
              type: "object",
              required: ["name", "quantity"],
              additionalProperties: false,
              properties: {
                name: { type: "string" },
                quantity: { type: "string" },
              },
            },
          },
          steps: { type: "array", items: { type: "string" } },
        },
      },
    },
  },
})
```

`strict: true` makes the upstream provider guarantee schema conformance — no parser retries needed. Supported on OpenAI gpt-5*, Google Gemini 2.5+, and DeepSeek v3.
You don't have to write JSON Schemas by hand. Use `zod-to-json-schema` to convert your Zod definitions, or just use the AI SDK's `generateObject`, which does it for you.
## Anthropic via tool use

Anthropic Claude doesn't have a `response_format` field, but you can simulate strict JSON output by defining a single tool with the desired schema and forcing it:
```ts
const message = await client.messages.create({
  model: "anthropic/claude-opus-4.6",
  max_tokens: 1024,
  tool_choice: { type: "tool", name: "extract_recipe" },
  tools: [
    {
      name: "extract_recipe",
      description: "Extract a structured recipe.",
      input_schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          servings: { type: "integer" },
          ingredients: { type: "array", items: { type: "string" } },
          steps: { type: "array", items: { type: "string" } },
        },
        required: ["name", "servings", "ingredients", "steps"],
      },
    },
  ],
  messages: [{ role: "user", content: "Weeknight pasta." }],
})

const recipe = (message.content[0] as any).input
```

The AI SDK's `generateObject` handles this transparently — you don't have to think about provider-specific tricks.
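If you do call Anthropic directly, a safer alternative to casting `content[0]` is to look up the `tool_use` block by name — models can emit a text block before the tool call. A sketch assuming the Messages API content-block shapes (`extractToolInput` is a hypothetical helper):

```ts
// Minimal content-block shapes from an Anthropic Messages API response.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown }

function extractToolInput(content: ContentBlock[], toolName: string): unknown {
  // Find the forced tool call rather than assuming it's first in the array.
  for (const block of content) {
    if (block.type === "tool_use" && block.name === toolName) return block.input
  }
  return undefined
}
```

Usage: `const recipe = extractToolInput(message.content, "extract_recipe")`.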
## Validation, retries, and partial output

When you use `generateObject`:

- The AI SDK validates with Zod after the model finishes.
- On parse failure, it retries up to 2 times with the error message appended.
- If all retries fail, `object` is `null` and `error` is populated.
```ts
const { object, error, finishReason } = await generateObject({
  model: "…",
  schema,
  prompt: "…",
  experimental_repairText: async ({ text, error }) => {
    // last-ditch repair logic — tell the model exactly what's wrong
    return await tryRepair(text, error)
  },
})
```

For mission-critical schemas, validate again on receipt:

```ts
const validated = Recipe.safeParse(object)
if (!validated.success) await sendToErrorBucket(validated.error)
```

## Best practices
- **Keep schemas small.** Each level of nesting costs tokens both as schema description and as model output. Flatten where you can.
- **Use enums liberally.** `z.enum(["pending", "active", "expired"])` is much cheaper and more reliable than `z.string()` with a description telling the model to pick from a set.
- **Prefer `json_schema` with `strict: true` on supported models.** Saves you the parse-retry loop entirely.
- **Always include an example in the prompt for non-strict modes.** One example beats two paragraphs of description.
- **Cache the schema description.** When you call repeatedly with the same schema, prefix caching kicks in on supported providers (see Caching).
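The enum advice pairs well with a plain type guard on receipt — no library needed. A sketch with hypothetical names, mirroring the `z.enum` example above:

```ts
// Closed set of statuses, equivalent to z.enum(["pending", "active", "expired"]).
const STATUSES = ["pending", "active", "expired"] as const
type Status = (typeof STATUSES)[number]

// Type guard: narrows unknown model output to the Status union.
function isStatus(value: unknown): value is Status {
  return typeof value === "string" && (STATUSES as readonly string[]).includes(value)
}
```

Because the set is closed, repair is also trivial: on a near-miss you can lowercase/trim and retry the check instead of re-prompting the model.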
## What models support what

| Model | AI SDK `generateObject` | OpenAI `json_schema` strict | OpenAI `json_object` | Anthropic tool trick |
|---|---|---|---|---|
| OpenAI gpt-5* | ✓ | ✓ | ✓ | n/a |
| Anthropic Claude 4.6 / 4.7 | ✓ | n/a | n/a | ✓ |
| Google Gemini 2.5+ | ✓ | ✓ | ✓ | n/a |
| DeepSeek v3 / r2 | ✓ | ✓ | ✓ | n/a |
| Mistral Large 3 | ✓ | n/a | ✓ | n/a |
| Llama 4 70B+ | ✓ (best-effort) | n/a | ✓ (best-effort) | n/a |
When in doubt, route through `generateObject` and let the AI SDK negotiate. It picks the most reliable mode per model.