How do I call grok-3-mini-fast from my code?
Use the OpenAI or Anthropic SDK and point `baseURL` at `https://synapse.garden/api/v1`. Set `model: 'xai/grok-3-mini-fast'` and supply your Synapse Garden API key. No code changes are needed beyond the base URL.
xai/grok-3-mini-fast is xAI's lightweight model that thinks before responding. It is a good fit for simple or logic-based tasks that do not require deep domain knowledge, and its raw thinking traces are accessible. The fast variant is served on faster infrastructure, delivering significantly quicker responses than the standard model at a higher cost per output token.
```ts
// Drop-in OpenAI-compatible client
import { generateText } from 'ai'

const { text } = await generateText({
  model: 'xai/grok-3-mini-fast',
  baseURL: 'https://synapse.garden/api/v1',
  apiKey: process.env.MG_KEY,
  prompt: 'Why is the sky blue?',
})
```
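If you would rather not use an SDK, the same call can be sketched as a raw HTTP request. This assumes Synapse Garden exposes the standard OpenAI-compatible `/chat/completions` path under its base URL, which follows from the "OpenAI SDK compatible" claim above but is not confirmed by this page:

```typescript
// Hedged sketch: build a request for an assumed OpenAI-compatible
// /chat/completions endpoint at the Synapse Garden base URL.
const BASE_URL = 'https://synapse.garden/api/v1';

function buildChatRequest(prompt: string, apiKey: string) {
  return {
    url: `${BASE_URL}/chat/completions`,
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      // Standard OpenAI chat payload: model ID plus a messages array.
      body: JSON.stringify({
        model: 'xai/grok-3-mini-fast',
        messages: [{ role: 'user', content: prompt }],
      }),
    },
  };
}

// Usage:
//   const { url, init } = buildChatRequest('Why is the sky blue?', process.env.MG_KEY!);
//   const res = await fetch(url, init);
//   const data = await res.json();
```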
| Rate | Price per 1M tokens (USD) |
|---|---|
| Input | $0.66 |
| Output | $4.40 |
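To estimate spend at the rates above, a minimal cost calculator (rates copied from the table; token counts are whatever your tokenizer reports):

```typescript
// Cost estimator for grok-3-mini-fast at the listed per-million-token rates.
const INPUT_USD_PER_M = 0.66; // $ per 1M input tokens
const OUTPUT_USD_PER_M = 4.4; // $ per 1M output tokens

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_USD_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_USD_PER_M
  );
}

// e.g. 100K input + 10K output tokens:
// 0.1 * $0.66 + 0.01 * $4.40 = $0.11
console.log(estimateCostUSD(100_000, 10_000).toFixed(3)); // "0.110"
```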
Input: $0.66 per million tokens. Output: $4.40 per million tokens. The free tier includes one million tokens every month at no cost.
grok-3-mini-fast supports a context window of 131.1K tokens, with a maximum output of 131.1K tokens.
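To avoid requests that exceed the 131.1K-token window, a minimal budget check (this assumes prompt and output share the window, which the figures above suggest but do not state explicitly; real token counts come from a tokenizer):

```typescript
const CONTEXT_WINDOW = 131_100; // ~131.1K tokens, assumed shared by prompt + output

// Clamp a requested completion length to what the window still allows.
function maxOutputBudget(promptTokens: number, requested: number): number {
  return Math.max(0, Math.min(requested, CONTEXT_WINDOW - promptTokens));
}

console.log(maxOutputBudget(100_000, 50_000)); // 31100 — window caps the request
console.log(maxOutputBudget(1_000, 4_096)); // 4096 — request fits unchanged
```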
No. Synapse Garden is the single API surface — one key gives you OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, Cohere, and more. Billing, rate limits, and audit logs are unified.
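Because routing is keyed on the `provider/model` string, switching providers is a one-line change. A small sketch of that convention (only `xai/grok-3-mini-fast` appears on this page; the other model IDs are hypothetical placeholders):

```typescript
// One key, many providers: only the model string changes per request.
// IDs other than 'xai/grok-3-mini-fast' are hypothetical examples.
const MODELS = [
  'xai/grok-3-mini-fast',
  'openai/some-model', // hypothetical ID
  'anthropic/some-model', // hypothetical ID
];

// The provider is the prefix before the first slash.
function providerOf(model: string): string {
  return model.split('/')[0];
}

console.log(MODELS.map(providerOf)); // [ 'xai', 'openai', 'anthropic' ]
```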
Sign up, create a key, drop our base URL into your existing client. The free tier includes a million tokens every month — no credit card.