Deep dive · May 26, 2026 · 2 min read

Vercel AI Elements: 20+ React components for AI apps explained

A walk-through of every AI Elements component, what each one solves, and where rolling your own still wins. Practical patterns, real composition.

Synapse Publication · Editorial
  • how-to
  • vercel-ai-sdk
  • react
  • ui

Most AI app UIs end up reinventing the same five components: a message list that auto-scrolls but pauses when the user scrolls up, a code block that renders incrementally during streaming without flickering, a tool-call display, a reasoning collapse, and a markdown renderer that handles half-finished tokens gracefully. Each one is a 200-line component with edge cases. AI Elements is Vercel's open-source set of those components, released in August 2025, built on top of shadcn/ui.

AI Elements is a registry of 20+ React components specifically designed for AI interfaces, distributed via a CLI that copies source code into your project (the shadcn registry pattern). A registry is different from an npm package: you own the code after install, customize freely, and pull updates manually. We tested it across three production chatbots before writing this post.

This post walks through every component in the library, what it actually solves, and the cases where you still want to fork or replace it.

What you're getting

AI Elements is a registry, not a package. When you run npx ai-elements@latest add message, the source code is copied into your repo under components/ai-elements/. You own it. Customize it like any shadcn component. The CLI just makes it easy to keep up with upstream improvements.

The components are split into three rough categories:

  • Layout: Conversation, Message, PromptInput. The shell of a chat surface.
  • Streaming-aware content: Response, CodeBlock, Reasoning, ToolCall. Things that need to render gracefully while tokens are still arriving.
  • Auxiliary: Sources, Suggestions, Branch, Image, Action, WebPreview. Things you bolt on once the basic chat works.

There are 20+ components total as of v0.0.x (May 2026). The set is growing.

The layout components

<Conversation>

The container. Handles the auto-scroll behavior that's surprisingly hard to get right.

<Conversation>
  <ConversationContent>
    {messages.map((m) => <Message key={m.id} from={m.role}>...</Message>)}
  </ConversationContent>
  <ConversationScrollButton />
</Conversation>

The non-obvious problem <Conversation> solves: auto-scroll-with-pause. You want the chat to scroll to the bottom when a new message arrives — but only if the user is already near the bottom. If the user has scrolled up to read an earlier message, an auto-scroll yanks them back down and ruins their reading. AI Elements detects whether the user is "anchored" to the bottom and only scrolls when they are. The ConversationScrollButton appears when they're not, letting them re-anchor manually.

You don't appreciate this until you've written it twice and gotten it wrong both times.
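
If you ever have to rebuild it, the anchoring check itself is small. A minimal sketch (the hook shape and the 48px threshold are our own, not AI Elements' source):

import { useEffect, useRef, useState } from "react"

// Tracks whether the user is "anchored" within `threshold` px of the bottom.
export function useAnchoredToBottom(threshold = 48) {
  const ref = useRef<HTMLDivElement>(null)
  const [anchored, setAnchored] = useState(true)

  useEffect(() => {
    const el = ref.current
    if (!el) return
    const onScroll = () =>
      setAnchored(el.scrollHeight - el.scrollTop - el.clientHeight < threshold)
    el.addEventListener("scroll", onScroll)
    onScroll() // initialize on mount
    return () => el.removeEventListener("scroll", onScroll)
  }, [threshold])

  return { ref, anchored } // auto-scroll only while `anchored` is true
}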

<Message> + <MessageContent>

A single message container. The from prop is "user" | "assistant" | "system" | "tool" and drives styling. Wraps the avatar, name, content, and timestamp.

<Message from={m.role}>
  <MessageAvatar src={m.role === "user" ? userAvatar : "/icons/icon-512.png"} />
  <MessageContent>
    <Response>{m.content}</Response>
  </MessageContent>
</Message>

The user/assistant asymmetry is built in: user messages are right-aligned with a different background, assistant messages left-aligned. You can override the styles per from state with Tailwind's data-[from=user]: variants, or fork the component if you want a fundamentally different layout (centered, no avatar, and so on).
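
A sketch of the override route, assuming the component merges className and stamps the role onto a data-from attribute, the usual shadcn pattern:

<Message
  from={m.role}
  className="data-[from=user]:bg-primary data-[from=user]:text-primary-foreground"
>
  ...
</Message>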

<PromptInput>

The input bar. Handles auto-grow textarea, submit-on-enter (with shift-enter for newlines), and the disabled-while-streaming state.

<PromptInput onSubmit={handleSubmit}>
  <PromptInputTextarea
    value={input}
    onChange={(e) => setInput(e.target.value)}
    placeholder="Ask anything"
  />
  <PromptInputToolbar>
    <PromptInputModelSelect models={MODELS} value={model} onValueChange={setModel} />
    <PromptInputSubmit status={status} />
  </PromptInputToolbar>
</PromptInput>

PromptInputModelSelect is interesting — it's a dropdown that lets the user pick which model to use, and the value flows through to the useChat body. Use it for power-user tools where the model choice matters; skip it for consumer apps where the user shouldn't have to think about model names.
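
Wiring the selection through takes a few lines. A sketch built on the same useChat used in the composition example below; its handleSubmit accepts per-request options, including a body (the model id is a placeholder):

import { useState, type FormEvent } from "react"
import { useChat } from "@ai-sdk/react"

export function useModelAwareChat() {
  const [model, setModel] = useState("openai/gpt-4o-mini") // placeholder id
  const chat = useChat({ api: "/api/chat" })

  // Each request carries the current selection; the route handler reads it
  // back with `const { messages, model } = await req.json()`
  const onSubmit = (e: FormEvent) => chat.handleSubmit(e, { body: { model } })

  return { ...chat, model, setModel, onSubmit }
}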

Streaming-aware content

<Response>

The default markdown renderer for assistant messages. Handles partial tokens — when the streaming text is mid-word or mid-code-block, <Response> doesn't flicker.

The naive implementation calls a markdown parser on every chunk; that re-parses the whole tree, the DOM thrashes, and code blocks visibly re-render. AI Elements caches the parsed tree and only re-renders the changed nodes. On a long response, this is the difference between "feels native" and "looks like Lovable in 2023."

Worth using even if you have a markdown renderer already.
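
If you want the same trick inside an existing renderer, the core idea fits in one component. A sketch (not AI Elements' source) that splits the stream into blocks with marked and memoizes each one:

import { memo, useMemo } from "react"
import { marked } from "marked"
import ReactMarkdown from "react-markdown"

// A block only re-renders when its own raw markdown changes, so during
// streaming only the last, still-growing block re-parses.
const Block = memo(
  ({ raw }: { raw: string }) => <ReactMarkdown>{raw}</ReactMarkdown>,
  (prev, next) => prev.raw === next.raw,
)

export function StreamingMarkdown({ text }: { text: string }) {
  // marked.lexer splits on block boundaries (paragraphs, fences, lists)
  const blocks = useMemo(() => marked.lexer(text).map((t) => t.raw), [text])
  return <>{blocks.map((raw, i) => <Block key={i} raw={raw} />)}</>
}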

<CodeBlock>

Syntax-highlighted code. Streams cleanly. Has a copy button. Detects the language from the fence string.

<CodeBlock language="typescript" showLineNumbers>
  {code}
</CodeBlock>

The under-the-hood detail: the highlighter (Shiki) is invoked lazily and the result is cached per language. First time you stream a TypeScript block, there's a small delay while Shiki loads its grammar; subsequent blocks render immediately. If you have a chat where 99% of code blocks are the same language, you can preload the grammar at app boot to skip even the first delay.
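
Preloading is a couple of lines. A sketch assuming Shiki's createHighlighter, kicked off at module init so the first streamed block awaits an already-warm promise:

import { createHighlighter } from "shiki"

// Starts loading at app boot instead of mid-stream.
export const highlighterPromise = createHighlighter({
  themes: ["github-dark"],
  langs: ["typescript"], // the grammar your chats mostly produce
})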

<Reasoning>

For Claude's extended thinking and OpenAI's o3-class models. Collapses the chain-of-thought tokens into a "Thinking..." summary that the user can expand.

<Reasoning isStreaming={status === "streaming"} duration={timeMs}>
  <ReasoningTrigger />
  <ReasoningContent>{reasoningText}</ReasoningContent>
</Reasoning>

The right behavior here is non-obvious: show the reasoning live during streaming, then collapse it after. While the model is thinking, the user wants to see something (otherwise the UI looks dead for 5 seconds). After the final answer arrives, most users don't want the chain-of-thought taking up half the screen. AI Elements does the auto-collapse on completion.
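
If you fork it, the collapse logic is tiny. A sketch of the behavior (assumed, not the actual source):

import { useEffect, useState } from "react"

// Open while reasoning streams, collapsed once the final answer lands.
export function useAutoCollapse(isStreaming: boolean) {
  const [open, setOpen] = useState(true)

  useEffect(() => {
    if (!isStreaming) setOpen(false)
  }, [isStreaming])

  return [open, setOpen] as const
}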

<ToolCall>

Renders an in-progress or completed tool call. Shows the tool name, the arguments (collapsed by default), and the result.

<ToolCall>
  <ToolCallHeader name={call.name} status={call.state} />
  <ToolCallContent>
    <ToolCallArguments>{JSON.stringify(call.args, null, 2)}</ToolCallArguments>
    <ToolCallResult>{call.result}</ToolCallResult>
  </ToolCallContent>
</ToolCall>

The status pill ("running", "success", "error") is doing real work — it tells the user the model is doing something instead of just hanging. For agent loops with multiple tool calls in a row, this is what makes the UX feel deliberate instead of broken.

Auxiliary components

<Sources>

For RAG apps. Renders the list of source documents the model retrieved, with a "view source" link.

<Sources>
  {message.sources.map((s) => (
    <Source key={s.url} url={s.url} title={s.title}>
      {s.snippet}
    </Source>
  ))}
</Sources>

The hidden value: citation rendering during streaming is hard. Sources usually arrive as a structured part of the response (e.g. as a tool call result), but the user is mid-reading the assistant's answer. <Sources> defers showing citations until the message is complete, which keeps the reading flow clean.
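
In your own message loop that deferral is a one-line gate. A sketch, assuming sources land on the completed message object:

{status !== "streaming" && !!message.sources?.length && (
  <Sources>...</Sources>
)}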

<Suggestions>

The "follow-up questions" chips below an assistant message. Useful for guided experiences.

<Suggestions>
  {suggestions.map((s) => (
    <Suggestion key={s} onClick={() => sendMessage(s)}>
      {s}
    </Suggestion>
  ))}
</Suggestions>

You generate the suggestions either from the model itself (a second streamText call asking "what would the user logically ask next?") or from a static set tied to the conversation topic. Both work: model-generated suggestions track the conversation better, static ones are free.
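
A sketch of the model-generated route, using the AI SDK's generateObject instead of streamText since structured output comes back as a ready-to-render array (model id and prompt are placeholders):

import { generateObject } from "ai"
import { z } from "zod"

// transcript: the conversation so far, flattened to plain text
export async function suggestFollowUps(transcript: string) {
  const { object } = await generateObject({
    model: "openai/gpt-4o-mini", // placeholder; any fast, cheap model works
    schema: z.object({ suggestions: z.array(z.string()).max(3) }),
    prompt: `Given this conversation, suggest what the user would logically ask next:\n\n${transcript}`,
  })
  return object.suggestions // feeds the <Suggestion> chips
}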

<Branch>

Lets the user fork a conversation. Click an earlier message → "edit and retry" → both the original and the new branch are kept.

<Branch>
  <BranchSelector branches={messageBranches} current={currentBranch} onSelect={setBranch} />
  <BranchPrev /><BranchNext />
</Branch>

This is power-user UI. ChatGPT's "regenerate response" is a primitive version of this; AI Elements lets you build a tree of conversations the user can navigate. Useful for prompt-engineering tools, A/B comparing model outputs, or "save my favorite path" UX. Most consumer chatbots don't need it.

<Image>

For multimodal apps. Renders generated images with proper aspect-ratio handling, loading states, and lightbox-on-click.

<WebPreview>

A link preview card. When the model returns a URL, render it as a card with title, description, and og-image instead of bare text.

<Action> + <Actions>

Inline action buttons under an assistant message — copy, regenerate, thumbs up/down. The user feedback you collect here becomes eval data later.
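
A sketch of the usual trio (the tooltip and onClick props are assumptions, so check the registry source for the real API; reload comes from useChat, and logFeedback would be your own endpoint):

<Actions>
  <Action tooltip="Copy" onClick={() => navigator.clipboard.writeText(m.content)} />
  <Action tooltip="Regenerate" onClick={() => reload()} />
  <Action tooltip="Good answer" onClick={() => logFeedback(m.id, "up")} />
</Actions>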

Composition: a real chat surface

Putting it together:

"use client"

import { useChat } from "@ai-sdk/react"
import {
  Conversation, ConversationContent, ConversationScrollButton,
} from "@/components/ai-elements/conversation"
import { Message, MessageAvatar, MessageContent } from "@/components/ai-elements/message"
import { Response } from "@/components/ai-elements/response"
import { Reasoning, ReasoningTrigger, ReasoningContent } from "@/components/ai-elements/reasoning"
import { ToolCall, ToolCallHeader, ToolCallContent } from "@/components/ai-elements/tool-call"
import { PromptInput, PromptInputTextarea, PromptInputSubmit } from "@/components/ai-elements/prompt-input"

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, status } = useChat({
    api: "/api/chat",
  })

  return (
    <main className="mx-auto flex h-svh max-w-3xl flex-col">
      <Conversation className="flex-1">
        <ConversationContent>
          {messages.map((m) => (
            <Message key={m.id} from={m.role}>
              {m.role === "assistant" && <MessageAvatar src="/icons/icon-512.png" />}
              <MessageContent>
                {m.parts.map((part, i) => {
                  if (part.type === "reasoning") {
                    return (
                      <Reasoning key={i} isStreaming={status === "streaming"}>
                        <ReasoningTrigger />
                        <ReasoningContent>{part.text}</ReasoningContent>
                      </Reasoning>
                    )
                  }
                  if (part.type === "tool-call") {
                    return (
                      <ToolCall key={i}>
                        <ToolCallHeader name={part.toolName} status={part.state} />
                        <ToolCallContent>{JSON.stringify(part.args, null, 2)}</ToolCallContent>
                      </ToolCall>
                    )
                  }
                  return <Response key={i}>{part.text}</Response>
                })}
              </MessageContent>
            </Message>
          ))}
        </ConversationContent>
        <ConversationScrollButton />
      </Conversation>

      <PromptInput onSubmit={handleSubmit} className="border-t">
        <PromptInputTextarea
          value={input}
          onChange={handleInputChange}
          placeholder="Ask anything"
        />
        <PromptInputSubmit status={status} />
      </PromptInput>
    </main>
  )
}

That's a production-grade chat surface in ~50 lines. The same thing without AI Elements is 400 lines once you handle auto-scroll, code-block streaming, and reasoning collapse.

When to fork the components

The shadcn-registry model means you're getting source code, not a black box. Fork when:

  • Your design system isn't shadcn. If you're on Mantine, Chakra, or a custom system, the AI Elements styling will feel grafted on. Take the behavior (auto-scroll, partial-token rendering) and reimplement the visuals in your stack.
  • You need behaviors AI Elements doesn't expose. Custom tool-call UI per tool name, side-by-side branch comparison, alternative scroll behaviors. Forking is encouraged.
  • You want different streaming primitives. AI Elements assumes the AI SDK's typed stream format. If you're on a custom stream protocol, the streaming-aware components don't transfer.

When not to fork:

  • "I don't like the colors." Just override the Tailwind classes; the components are styled with cn() and accept className.
  • "I'm not using TypeScript." TypeScript is the path of least resistance; downgrading to JS is more work than learning the types.
  • "Vercel might charge for it later." It's MIT-licensed; check the LICENSE.

How AI Elements compares to building from scratch on shadcn

The honest framing: AI Elements is shadcn for AI. If you're already comfortable with shadcn, AI Elements adds the AI-specific edge cases on top. The cost is one more dependency to track upstream changes for.

Vs rolling your own:

  • ✅ 50 lines instead of 400 for a basic chat.
  • ✅ Streaming edge cases (partial tokens, code-block re-render, reasoning collapse) handled.
  • ✅ The auto-scroll-with-pause behavior alone is worth it.
  • ❌ Locked into shadcn-derived styling and the AI SDK's typed stream.
  • ❌ The component set will keep growing — staying on the latest version is a small recurring cost.

For most teams the trade is obviously worth it. For teams with a custom design system that strongly diverges from shadcn, the answer is to take the behaviors and re-skin.

What's next

If you're building a chat surface, the chatbot tutorial covers the server side that pairs with these components. If you're trying to decide whether to use the AI SDK at all, the gateway comparison covers what your routing layer adds underneath. For routing requests to specific models from inside these components, the models page lists every model you can pass as the provider/model string, and the API reference covers the request shape.

For the AI Elements registry itself, elements.ai-sdk.dev has live demos for every component, and the GitHub repo is the source of truth for what's shipped.

Written by

Synapse Publication

Field notes, technical write-ups, and benchmarks from the team building Synapse Garden.
