Models & providers
Browse 100+ models — language, vision, audio, embedding, image, video. Pricing is honest and synced nightly.
How to specify a model
Always use the creator/model-slug format:
model: "openai/gpt-5.4"
model: "anthropic/claude-opus-4.6"
model: "google/gemini-3.1-pro-preview"
model: "meta/llama-4-405b"
model: "bfl/flux-2-flex" // image
model: "google/veo-3.1-generate-001" // video

A model id maps 1:1 to its (creator, slug) pair. The full live catalog is at /models — filter by modality, search by capability. With the AI SDK, pass the same id to streamText and swap models without touching the rest of the call.
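Because every id is exactly one (creator, slug) pair, it can be split mechanically. A minimal sketch — the parseModelId helper is hypothetical, for illustration only, not part of any SDK:

```javascript
// Split a catalog model id into its (creator, slug) pair.
// Hypothetical helper — not part of the API surface.
function parseModelId(id) {
  const sep = id.indexOf("/")
  if (sep === -1) throw new Error(`expected creator/model-slug, got "${id}"`)
  return { creator: id.slice(0, sep), slug: id.slice(sep + 1) }
}

console.log(parseModelId("openai/gpt-5.4"))
// → { creator: "openai", slug: "gpt-5.4" }
```

Slugs may themselves contain dots and dashes, so split on the first `/` only.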
Modalities
Every model has one or more modalities. Filter or branch on them:
| Modality | What it means | Example |
|---|---|---|
| Text | Plain text in / text out | openai/gpt-5.4 |
| Vision | Accepts image input alongside text | openai/gpt-5.4, google/gemini-3.1-pro-preview |
| Audio | Accepts audio input | google/gemini-2.5-pro (multimodal) |
| Embedding | Returns vectors | openai/text-embedding-3-large |
| Reranking | Scores docs against a query | cohere/rerank-english-v3.0 |
| Image generation | Generates images from prompts | bfl/flux-2-flex, google/imagen-4.0-generate-001 |
| Video generation | Generates video from prompts | google/veo-3.1-generate-001, klingai/kling-v2.6-i2v |
For example, openai/gpt-5.4 is text + vision. google/gemini-3-pro-image is text + vision + image generation. The catalog page lists every supported modality per model.
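Branching on modality can happen client-side once the catalog is fetched. A sketch assuming each entry exposes an `architecture.input_modalities`-style array like the endpoints response elsewhere on this page (treating that field as present on every catalog entry is an assumption):

```javascript
// Pick vision-capable models from a fetched catalog.
// The input_modalities field shape is assumed to mirror the
// endpoints response on this page; sample data is inlined.
const models = [
  { id: "openai/gpt-5.4", input_modalities: ["text", "image"] },
  { id: "openai/text-embedding-3-large", input_modalities: ["text"] },
]

const vision = models.filter((m) => m.input_modalities.includes("image"))
console.log(vision.map((m) => m.id)) // → ["openai/gpt-5.4"]
```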
Catalog freshness
The catalog is synced nightly from the routing layer. New models appear within 24 hours of upstream availability. Pricing is the live rate — the number you see on /models is the number you pay.
const res = await fetch("https://synapse.garden/api/v1/models", {
headers: { Authorization: `Bearer ${process.env.MG_KEY}` },
})
const { data: models } = await res.json()
// Filter by type
const text = models.filter((m) => m.type === "language")
const image = models.filter((m) => m.type === "image")
const video = models.filter((m) => m.type === "video")

const res = await fetch(
"https://synapse.garden/api/v1/models/openai/gpt-5.4/endpoints",
{ headers: { Authorization: `Bearer ${process.env.MG_KEY}` } },
)
const { data } = await res.json()
console.log(data.architecture.input_modalities) // ["text", "image"]
console.log(data.endpoints[0].pricing.prompt) // "0.0000025"

Featured providers
- OpenAI: GPT family + reasoning
- xAI: Grok — multi-agent, real-time
- Alibaba: Qwen — multilingual frontier
- MiniMax: M-series — fast and cheap
- Z.AI: GLM — open weights frontier
- Anthropic: Claude — long context, agentic
- ByteDance: Seedance + Seedream — generative media
- Xiaomi: MiMo — long-context Chinese frontier
- DeepSeek: Cost-leader reasoning
- KlingAI: Cinematic text-to-video
- Recraft: Vector + raster image generation
- Arcee AI: Compact reasoning
- Inception: Mercury — diffusion-based LLM
- Kwaipilot: KAT — coding specialist
- Moonshot AI: Kimi — long-context generalist
- NVIDIA: Nemotron — open weights
Try a model
Switch between models to compare their personality, speed, and price. Uses a docs-only sandbox key.
Pricing math
Prices on the catalog page are the list price you pay, with our flat margin already included — there is no separate "markup" line on your bill.
For the full math (passthrough + flat 10% DX premium), see /legal/pricing-disclosure.
Choosing a model
Quick heuristics:
- Frontier reasoning — openai/gpt-5.5-pro, anthropic/claude-opus-4.6 with extended thinking
- Production workhorse — openai/gpt-5.4, anthropic/claude-sonnet-4.6
- High-volume cheap — openai/gpt-5.4-mini, google/gemini-2.5-flash
- Edge / latency-sensitive — openai/gpt-5.4-nano, cerebras/*
- Long context — google/gemini-3.1-pro-preview (1M+), anthropic/claude-opus-4.6 (1M variant)
- Image generation — google/gemini-3-pro-image, bfl/flux-2-flex
- Video generation — google/veo-3.1-generate-001, klingai/kling-v2.6-i2v
When in doubt, run the playground above with the same prompt across two or three models and compare.
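The heuristics above can be encoded as a plain lookup table for routing defaults. This map and helper are illustrative only — tune the choices for your own traffic:

```javascript
// Default model per workload, following the heuristics above.
// The DEFAULTS table is an example, not a recommendation API.
const DEFAULTS = {
  "frontier-reasoning": "openai/gpt-5.5-pro",
  "production": "anthropic/claude-sonnet-4.6",
  "high-volume": "openai/gpt-5.4-mini",
  "long-context": "google/gemini-3.1-pro-preview",
  "image": "bfl/flux-2-flex",
  "video": "google/veo-3.1-generate-001",
}

function pickModel(workload) {
  // Fall back to the production workhorse for unknown workloads.
  return DEFAULTS[workload] ?? DEFAULTS["production"]
}

console.log(pickModel("long-context")) // → "google/gemini-3.1-pro-preview"
```

Because the id is just a string, swapping the table entries is the only change needed to re-route a workload.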