meta · Multimodal · Released 2025-04-05

llama-4-maverick

ID: meta/llama-4-maverick

A general-purpose mixture-of-experts LLM, Llama 4 Maverick has 17 billion active parameters, 128 experts, and 400 billion total parameters, offering high quality at a lower price than Llama 3.3 70B.

Type: Chat · Tool use
Use llama-4-maverick
// Drop-in OpenAI-compatible client
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Point the OpenAI-compatible provider at Synapse Garden.
const synapse = createOpenAI({
  baseURL: 'https://synapse.garden/api/v1',
  apiKey: process.env.MG_KEY,
});

const { text } = await generateText({
  model: synapse('meta/llama-4-maverick'),
  prompt: 'Why is the sky blue?',
});
Context window: 128K tokens · Max output: 8.2K tokens
Input: $0.264 per million tokens · Output: $1.07 per million tokens
PRICING

List prices, every modality.

Rate (per million tokens, USD)
Input: $0.264
Output: $1.07
FAQ · LLAMA-4-MAVERICK

Frequently asked

01 / 04

How do I call llama-4-maverick from my code?

Use the OpenAI or Anthropic SDK and point baseURL at https://synapse.garden/api/v1. Set model: 'meta/llama-4-maverick' and supply your Synapse Garden API key. No code changes are needed beyond the base URL.
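Because the endpoint speaks the OpenAI chat-completions wire format, you can also skip the SDK and call it with plain fetch. A minimal sketch, assuming the endpoint path follows the standard /chat/completions convention and reusing the MG_KEY environment variable from the example above:

```typescript
// Sketch: calling meta/llama-4-maverick over the OpenAI-compatible
// wire format with Node 18+'s built-in fetch (no SDK required).
const body = {
  model: 'meta/llama-4-maverick',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
};

async function ask(): Promise<string> {
  const res = await fetch('https://synapse.garden/api/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.MG_KEY}`,
    },
    body: JSON.stringify(body),
  });
  const json = await res.json();
  // Standard OpenAI response shape: first choice's message content.
  return json.choices[0].message.content;
}
```

The request and response bodies are the same as OpenAI's, so existing parsing code carries over unchanged.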

02 / 04

How much does llama-4-maverick cost?

Input: $0.264 per million tokens. Output: $1.07 per million tokens. The free tier includes a million tokens every month at no cost.
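As a rough sketch (the function name is ours; the prices are the list prices above), per-request cost works out like this:

```typescript
// List prices from this page, in USD per million tokens.
const INPUT_PER_M = 0.264;
const OUTPUT_PER_M = 1.07;

// Estimate the cost of a single request in USD.
function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1e6) * INPUT_PER_M + (outputTokens / 1e6) * OUTPUT_PER_M;
}
```

A 2,000-token prompt with a 500-token reply comes to about $0.00106, so roughly a thousand such calls cost a dollar.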

03 / 04

What's the context window for llama-4-maverick?

llama-4-maverick supports a context window of 128K tokens, with a maximum output of 8.2K tokens.
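A pre-flight check can catch over-long requests before they hit the API. This sketch assumes "128K" means 131,072 tokens and "8.2K" means 8,192, and that input and output tokens share the context window; verify both against the provider docs before relying on it:

```typescript
const CONTEXT_WINDOW = 131_072; // assumed exact value for "128K"
const MAX_OUTPUT = 8_192;       // assumed exact value for "8.2K"

// True if a request with this many input tokens and this max_tokens
// setting fits within both model limits.
function fitsModelLimits(inputTokens: number, maxTokens: number): boolean {
  return maxTokens <= MAX_OUTPUT && inputTokens + maxTokens <= CONTEXT_WINDOW;
}
```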

04 / 04

Do I need a separate Anthropic or OpenAI account?

No. Synapse Garden is the single API surface — one key gives you OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, xAI, Cohere, and more. Billing, rate limits, and audit logs are unified.

READY

Try llama-4-maverick in three minutes.

Sign up, create a key, drop our base URL into your existing client. The free tier includes a million tokens every month — no credit card.