How do I call glm-4.5-air from my code?
Use the OpenAI or Anthropic SDK and point `baseURL` at https://synapse.garden/api/v1. Set `model: 'zai/glm-4.5-air'` and supply your Synapse Garden API key. Beyond the base URL, model ID, and key, no code changes are needed.
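If you'd rather not pull in an SDK at all, the same OpenAI-compatible endpoint can be called with plain `fetch`. A minimal sketch of the raw HTTP request (the header and field names follow the standard OpenAI chat-completions wire format; `MG_KEY` is a placeholder env var name):

```typescript
// Base URL of the Synapse Garden gateway.
const BASE_URL = 'https://synapse.garden/api/v1'

// Build a request against the OpenAI-compatible /chat/completions endpoint.
function buildRequest(prompt: string) {
  return {
    url: `${BASE_URL}/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${process.env.MG_KEY ?? ''}`,
      },
      body: JSON.stringify({
        model: 'zai/glm-4.5-air',
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  }
}

// To send: const req = buildRequest('Why is the sky blue?')
//          const res = await fetch(req.url, req.init)
```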
Model ID: `zai/glm-4.5-air`

GLM-4.5 and GLM-4.5-Air are our latest flagship models, purpose-built as foundational models for agent-oriented applications. Both use a Mixture-of-Experts (MoE) architecture: GLM-4.5 has 355B total parameters with 32B active per forward pass, while GLM-4.5-Air adopts a more streamlined design with 106B total parameters and 12B active parameters.
```typescript
// Drop-in OpenAI-compatible client
import { generateText } from 'ai'
import { createOpenAI } from '@ai-sdk/openai'

// Route the AI SDK through Synapse Garden's OpenAI-compatible endpoint.
const synapse = createOpenAI({
  baseURL: 'https://synapse.garden/api/v1',
  apiKey: process.env.MG_KEY,
})

const { text } = await generateText({
  model: synapse('zai/glm-4.5-air'),
  prompt: 'Why is the sky blue?',
})
```
| Rate | Price (USD per 1M tokens) |
|---|---|
| Input | $0.22 |
| Output | $1.21 |
| Cache read | $0.033 |
Input: $0.22 per million tokens. Output: $1.21 per million tokens. Cache reads: $0.033 per million tokens. The free tier includes one million tokens per month at no cost.
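As a sanity check on the rates above, here is a small cost estimator with the per-million-token prices hardcoded from the pricing table. It assumes cached input tokens are billed at the cache-read rate in place of the normal input rate; treat it as a sketch, not a billing reference:

```typescript
// Per-million-token rates (USD), taken from the pricing table.
const RATES = { input: 0.22, output: 1.21, cacheRead: 0.033 }

// Estimate the USD cost of one request.
// cachedTokens is the portion of inputTokens served from the prompt cache.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  cachedTokens = 0,
): number {
  const freshInput = inputTokens - cachedTokens
  return (
    (freshInput * RATES.input +
      outputTokens * RATES.output +
      cachedTokens * RATES.cacheRead) /
    1_000_000
  )
}
```

For example, a request with 10K input tokens and 2K output tokens costs roughly (10,000 × 0.22 + 2,000 × 1.21) / 1,000,000 ≈ $0.0046.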
glm-4.5-air supports a context window of 128K tokens, with a maximum output of 96K tokens.
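A small guard can validate requests against these limits before sending. A sketch, assuming the 128K context window covers prompt plus output, that "128K" means 128,000 tokens, and that you already have a token count for the prompt from your tokenizer:

```typescript
// Limits for zai/glm-4.5-air (assumption: 128K = 128,000 tokens,
// and the window covers prompt + generated output combined).
const CONTEXT_WINDOW = 128_000
const MAX_OUTPUT = 96_000

// Clamp the requested output budget so prompt + output fits the window
// and never exceeds the model's output cap.
function clampMaxTokens(promptTokens: number, requestedOutput: number): number {
  if (promptTokens >= CONTEXT_WINDOW) {
    throw new Error(
      `prompt (${promptTokens} tokens) exceeds the ${CONTEXT_WINDOW}-token context window`,
    )
  }
  const headroom = CONTEXT_WINDOW - promptTokens
  return Math.min(requestedOutput, headroom, MAX_OUTPUT)
}
```

With a 100K-token prompt, only 28K tokens of headroom remain, so a 96K-token output request gets clamped to 28K.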
No. Synapse Garden is the single API surface: one key gives you OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, Cohere, and more, with unified billing, rate limits, and audit logs.
Sign up, create a key, and drop our base URL into your existing client. The free tier includes one million tokens per month, no credit card required.