How to Build Agentic Frontends with the AG-UI Protocol and Python

Build real-time agentic frontends using the AG-UI protocol with Python and FastAPI. Covers streaming events, shared state synchronization, frontend tool calls, and human-in-the-loop patterns — completing the agentic protocol trinity alongside MCP and A2A.

You built a powerful AI agent backend. It reasons, calls tools, manages state, and orchestrates complex workflows. Great. Now comes the part that always seems to take longer than it should: connecting it to an actual user interface with real-time streaming, shared state, and human-in-the-loop capabilities.

Until recently, this meant wiring up custom WebSocket handlers, inventing ad-hoc event schemas, and building bespoke state synchronization logic for every single project. Honestly, it was exhausting. The AG-UI (Agent-User Interaction) protocol changes that entirely. It provides a standardized, event-driven interface between AI agents and user-facing applications — the missing "last mile" that completes the agentic protocol stack alongside MCP and A2A.

This guide walks you through building a production AG-UI server in Python with FastAPI, connecting it to a React frontend with CopilotKit, and implementing advanced patterns like shared state synchronization, frontend tool calls, and human-in-the-loop interrupts.

What Is the AG-UI Protocol?

AG-UI is an open, MIT-licensed protocol created by the CopilotKit team that standardizes how AI agents communicate with user-facing applications. Rather than defining yet another agent framework (we really don't need more of those), AG-UI focuses exclusively on the interaction layer — the events that flow between your agent and the user's screen.

The protocol is built on three core design principles:

  • Event-driven communication — Agents emit standardized events (text deltas, tool calls, state updates) that any compatible frontend can consume
  • Transport-agnostic — Works over SSE, WebSockets, binary Protobuf, or webhooks
  • Loosely coupled — Events only need to be AG-UI-compatible, not exact matches; client-side middleware handles the adaptation

The Agentic Protocol Trinity: MCP + A2A + AG-UI

AG-UI completes what the community has been calling the "agentic protocol trinity" — three complementary protocols that together cover every communication layer in an agentic system:

| Layer | Protocol | Purpose | Origin |
| --- | --- | --- | --- |
| Agent ↔ Tools & Data | MCP (Model Context Protocol) | Connects agents to external tools, data sources, and APIs | Anthropic |
| Agent ↔ Agent | A2A (Agent-to-Agent) | Enables agents to discover, delegate, and coordinate with each other | Google |
| Agent ↔ User | AG-UI (Agent-User Interaction) | Connects agents to user-facing applications with real-time streaming | CopilotKit |

In practice, a production agent typically uses all three. MCP gives it tools. A2A lets it collaborate with other agents. AG-UI brings it to the user's screen. This guide focuses on that last — and arguably most visible — layer.

AG-UI Core Concepts: Events, State, and Tools

Before we start writing code, let's get familiar with the three pillars of the AG-UI protocol: lifecycle events, the start-content-end streaming pattern, and the snapshot-delta state model.

Event Categories

AG-UI defines approximately 16 core event types organized into five categories:

Lifecycle Events — Every agent run begins with RUN_STARTED and ends with either RUN_FINISHED or RUN_ERROR. Within a run, individual steps are bracketed by STEP_STARTED and STEP_FINISHED.

Text Message Events — These follow the start-content-end pattern: TEXT_MESSAGE_START opens a new message stream, TEXT_MESSAGE_CONTENT delivers text deltas, and TEXT_MESSAGE_END signals completion. There's also a convenience TEXT_MESSAGE_CHUNK event that auto-expands into all three via client middleware.

Tool Call Events — Mirror the text pattern: TOOL_CALL_START (with tool name), TOOL_CALL_ARGS (streaming JSON argument fragments), TOOL_CALL_END, and TOOL_CALL_RESULT (execution result). A TOOL_CALL_CHUNK convenience event is available here too.

State Management Events — These use a snapshot-delta pattern: STATE_SNAPSHOT delivers a complete state replacement, while STATE_DELTA sends incremental updates as RFC 6902 JSON Patch operations. MESSAGES_SNAPSHOT provides full conversation history.

Special Events — RAW passes through events from external systems. CUSTOM allows application-specific extensions. More recent versions also include REASONING_* events for exposing chain-of-thought to the UI, which is a nice touch for transparency.

The Start-Content-End Pattern

This is the fundamental streaming pattern in AG-UI. Both text messages and tool calls follow it:

TEXT_MESSAGE_START  →  message_id, role="assistant"
TEXT_MESSAGE_CONTENT  →  message_id, delta="Hello"
TEXT_MESSAGE_CONTENT  →  message_id, delta=" world"
TEXT_MESSAGE_CONTENT  →  message_id, delta="!"
TEXT_MESSAGE_END  →  message_id

The client accumulates deltas and renders them progressively. If you've worked with OpenAI's streaming API, this will feel familiar — same idea, just formalized into a proper protocol with clean message boundaries.
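To make the pattern concrete, here's a minimal sketch of the client-side accumulation logic. This is a hypothetical helper for illustration, not part of the AG-UI SDK; it assumes events arrive as dicts with the camelCase wire-format keys:

```python
# Hypothetical sketch: buffer TEXT_MESSAGE_CONTENT deltas per message_id
# until TEXT_MESSAGE_END arrives, then join them into the full message.
class MessageAccumulator:
    def __init__(self):
        self.buffers = {}    # message_id -> list of pending deltas
        self.completed = {}  # message_id -> full text

    def handle(self, event: dict):
        msg_id = event["messageId"]
        if event["type"] == "TEXT_MESSAGE_START":
            self.buffers[msg_id] = []
        elif event["type"] == "TEXT_MESSAGE_CONTENT":
            self.buffers[msg_id].append(event["delta"])
        elif event["type"] == "TEXT_MESSAGE_END":
            self.completed[msg_id] = "".join(self.buffers.pop(msg_id))


acc = MessageAccumulator()
for ev in [
    {"type": "TEXT_MESSAGE_START", "messageId": "m1"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Hello"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": " world"},
    {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "!"},
    {"type": "TEXT_MESSAGE_END", "messageId": "m1"},
]:
    acc.handle(ev)

print(acc.completed["m1"])  # → Hello world!
```

A real client (like CopilotKit) does exactly this, just with progressive rendering on each delta instead of waiting for the end marker.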

The Snapshot-Delta State Model

AG-UI supports shared state between agent and frontend using two complementary events:

  • STATE_SNAPSHOT — Delivers a complete state object, replacing whatever the client currently holds
  • STATE_DELTA — Delivers an array of RFC 6902 JSON Patch operations for incremental updates

This model is bandwidth-efficient (send the full state once, then only diffs) and conflict-resistant since patches apply atomically. It's a solid design choice.
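Here's an illustrative sketch of how a client applies the two event payloads to its held state. It handles only the add and replace operations; a real client should use a complete RFC 6902 implementation such as the jsonpatch package:

```python
import copy

# Illustrative only: apply STATE_SNAPSHOT / STATE_DELTA payloads to a
# client-held state dict. Covers just "add" (including the "-" append
# pointer) and "replace"; use a full RFC 6902 library in production.
def apply_snapshot(state: dict, snapshot: dict) -> dict:
    return copy.deepcopy(snapshot)  # full replacement of client state

def apply_delta(state: dict, patches: list) -> dict:
    state = copy.deepcopy(state)
    for patch in patches:
        *parents, last = patch["path"].lstrip("/").split("/")
        target = state
        for key in parents:  # walk down to the parent container
            target = target[int(key)] if isinstance(target, list) else target[key]
        if patch["op"] == "add" and last == "-":
            target.append(patch["value"])  # "/list/-" appends to the array
        elif patch["op"] in ("add", "replace"):
            target[last] = patch["value"]
        else:
            raise NotImplementedError(patch["op"])
    return state

state = apply_snapshot({}, {"documents": [], "processingStatus": "idle"})
state = apply_delta(state, [
    {"op": "add", "path": "/documents/-", "value": {"title": "Q1 Report"}},
    {"op": "replace", "path": "/processingStatus", "value": "processing"},
])
print(state)
```

The snapshot is a wholesale replacement; the delta mutates in place. That asymmetry is the whole model.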

Building an AG-UI Server in Python

Alright, let's build something. We'll create a production AG-UI server step by step using FastAPI for the HTTP layer and the official ag-ui-protocol Python SDK for event types and encoding.

Installation and Setup

pip install ag-ui-protocol fastapi uvicorn openai

The ag-ui-protocol package (v0.1.15+) provides two key modules:

  • ag_ui.core — Pydantic models for all event types, RunAgentInput, and message types
  • ag_ui.encoder — EventEncoder class that serializes events to SSE format

One thing that tripped me up initially: the Python SDK uses snake_case field names (e.g., thread_id, message_id) but automatically serializes to camelCase JSON (e.g., threadId, messageId) for protocol compliance. So don't worry if the wire format looks different from your Python code — that's by design.
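The mapping itself is mechanical; if you ever need to replicate it in a custom client, it's a few lines:

```python
# The snake_case -> camelCase conversion the SDK applies on
# serialization, reproduced as a standalone helper for illustration.
def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel("thread_id"), to_camel("message_id"), to_camel("tool_call_name"))
# → threadId messageId toolCallName
```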

Minimal AG-UI Server

Here's the simplest possible AG-UI server — it just echoes back a greeting:

import uuid
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from ag_ui.core import (
    RunAgentInput,
    EventType,
    RunStartedEvent,
    RunFinishedEvent,
    TextMessageStartEvent,
    TextMessageContentEvent,
    TextMessageEndEvent,
)
from ag_ui.encoder import EventEncoder

app = FastAPI(title="AG-UI Agent")


@app.post("/")
async def run_agent(input_data: RunAgentInput, request: Request):
    encoder = EventEncoder(accept=request.headers.get("accept"))

    async def event_stream():
        # Every run MUST start with RUN_STARTED
        yield encoder.encode(RunStartedEvent(
            type=EventType.RUN_STARTED,
            thread_id=input_data.thread_id,
            run_id=input_data.run_id,
        ))

        # Stream a text message
        msg_id = str(uuid.uuid4())
        yield encoder.encode(TextMessageStartEvent(
            type=EventType.TEXT_MESSAGE_START,
            message_id=msg_id,
            role="assistant",
        ))

        for word in ["Hello", " from", " AG-UI", "!"]:
            yield encoder.encode(TextMessageContentEvent(
                type=EventType.TEXT_MESSAGE_CONTENT,
                message_id=msg_id,
                delta=word,
            ))

        yield encoder.encode(TextMessageEndEvent(
            type=EventType.TEXT_MESSAGE_END,
            message_id=msg_id,
        ))

        # Every run MUST end with RUN_FINISHED
        yield encoder.encode(RunFinishedEvent(
            type=EventType.RUN_FINISHED,
            thread_id=input_data.thread_id,
            run_id=input_data.run_id,
        ))

    return StreamingResponse(
        event_stream(),
        media_type=encoder.get_content_type(),
    )

Run it with uvicorn app:app --host 0.0.0.0 --port 8000. The server accepts POST requests with a RunAgentInput JSON body and streams back SSE events. Nothing fancy yet, but it covers the core lifecycle.
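You can inspect the wire format without a browser. SSE frames are `data: <json>` lines separated by blank lines, so a few lines of Python are enough to parse what the server emits. This is a sketch assuming that common framing; adapt it if your encoder also emits `event:` or `id:` fields:

```python
import json

# Sketch: parse an SSE response body into a list of event dicts.
# Assumes "data: <json>" framing with blank-line frame separators.
def parse_sse(raw: str) -> list:
    events = []
    for frame in raw.split("\n\n"):
        for line in frame.splitlines():
            if line.startswith("data: "):
                events.append(json.loads(line[len("data: "):]))
    return events

# Example body, shaped like what the greeting server streams back:
raw = (
    'data: {"type": "RUN_STARTED", "threadId": "t1", "runId": "r1"}\n\n'
    'data: {"type": "TEXT_MESSAGE_CONTENT", "messageId": "m1", "delta": "Hi"}\n\n'
    'data: {"type": "RUN_FINISHED", "threadId": "t1", "runId": "r1"}\n\n'
)
print([ev["type"] for ev in parse_sse(raw)])
```

Pipe a real response body through this and you should see the RUN_STARTED → message events → RUN_FINISHED sequence in order.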

Connecting an LLM with Streaming

A greeting server is cute, but a real agent needs an LLM. Here's how to wrap OpenAI's streaming API in AG-UI events, forwarding the conversation history from the client:

import uuid
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from openai import AsyncOpenAI
from ag_ui.core import (
    RunAgentInput,
    EventType,
    RunStartedEvent,
    RunFinishedEvent,
    RunErrorEvent,
    TextMessageStartEvent,
    TextMessageContentEvent,
    TextMessageEndEvent,
)
from ag_ui.encoder import EventEncoder

app = FastAPI(title="AG-UI LLM Agent")
client = AsyncOpenAI()


def convert_messages(messages):
    """Convert AG-UI messages to OpenAI chat format.

    Skips messages without text content (e.g. tool-call placeholders).
    """
    result = []
    for msg in messages:
        if msg.content is None:
            continue
        result.append({
            "role": msg.role,
            "content": msg.content
                if isinstance(msg.content, str)
                else msg.content[0].text,
        })
    return result


@app.post("/")
async def run_agent(input_data: RunAgentInput, request: Request):
    encoder = EventEncoder(accept=request.headers.get("accept"))

    async def event_stream():
        yield encoder.encode(RunStartedEvent(
            type=EventType.RUN_STARTED,
            thread_id=input_data.thread_id,
            run_id=input_data.run_id,
        ))

        try:
            openai_messages = [
                {"role": "system", "content": "You are a helpful assistant."}
            ] + convert_messages(input_data.messages)

            stream = await client.chat.completions.create(
                model="gpt-4o",
                messages=openai_messages,
                stream=True,
            )

            msg_id = str(uuid.uuid4())
            yield encoder.encode(TextMessageStartEvent(
                type=EventType.TEXT_MESSAGE_START,
                message_id=msg_id,
                role="assistant",
            ))

            async for chunk in stream:
                if not chunk.choices:
                    continue  # some chunks (e.g. a trailing usage chunk) carry no choices
                delta = chunk.choices[0].delta.content
                if delta:
                    yield encoder.encode(TextMessageContentEvent(
                        type=EventType.TEXT_MESSAGE_CONTENT,
                        message_id=msg_id,
                        delta=delta,
                    ))

            yield encoder.encode(TextMessageEndEvent(
                type=EventType.TEXT_MESSAGE_END,
                message_id=msg_id,
            ))

            yield encoder.encode(RunFinishedEvent(
                type=EventType.RUN_FINISHED,
                thread_id=input_data.thread_id,
                run_id=input_data.run_id,
            ))

        except Exception as e:
            yield encoder.encode(RunErrorEvent(
                type=EventType.RUN_ERROR,
                message=str(e),
            ))

    return StreamingResponse(
        event_stream(),
        media_type=encoder.get_content_type(),
    )

This pattern works with any LLM that supports streaming — you can swap AsyncOpenAI for Anthropic's AsyncAnthropic, a local vLLM endpoint, or pretty much any other provider. The AG-UI event layer stays the same regardless.

Frontend Tool Calls: Letting Agents Act on the UI

This is where things get really interesting. One of AG-UI's most powerful features is frontend tool calls. The frontend defines tools (actions the agent can trigger in the browser), and the agent invokes them by emitting tool call events. Think of it as the reverse of traditional tool calling — instead of the agent calling a backend API, it's calling a function that lives in the user's browser.

This enables patterns like updating UI state, triggering navigation, showing confirmation dialogs, or manipulating a canvas.

Python Backend: Emitting Tool Call Events

The agent decides to call a frontend tool based on user intent. Here's how to emit the tool call lifecycle events:

import json
import uuid
from ag_ui.core import (
    EventType,
    ToolCallStartEvent,
    ToolCallArgsEvent,
    ToolCallEndEvent,
)


async def emit_tool_call(encoder, tool_name: str, arguments: dict):
    """Emit a complete tool call event sequence."""
    tool_call_id = str(uuid.uuid4())

    yield encoder.encode(ToolCallStartEvent(
        type=EventType.TOOL_CALL_START,
        tool_call_id=tool_call_id,
        tool_call_name=tool_name,
    ))

    yield encoder.encode(ToolCallArgsEvent(
        type=EventType.TOOL_CALL_ARGS,
        tool_call_id=tool_call_id,
        delta=json.dumps(arguments),
    ))

    yield encoder.encode(ToolCallEndEvent(
        type=EventType.TOOL_CALL_END,
        tool_call_id=tool_call_id,
    ))

The agent can stream arguments incrementally (multiple TOOL_CALL_ARGS events with JSON fragments) or send them in one shot as shown above. For simple tool calls, the one-shot approach is cleaner.
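On the receiving side, streamed argument fragments generally aren't valid JSON on their own, so a client has to buffer them until the call ends. A hypothetical collector, mirroring the message accumulator pattern:

```python
import json

# Sketch: accumulate streamed TOOL_CALL_ARGS fragments per tool_call_id
# and parse the joined JSON only once TOOL_CALL_END arrives.
class ToolCallCollector:
    def __init__(self):
        self.fragments = {}  # tool_call_id -> list of argument fragments
        self.calls = {}      # tool_call_id -> parsed arguments dict

    def handle(self, event: dict):
        call_id = event["toolCallId"]
        if event["type"] == "TOOL_CALL_START":
            self.fragments[call_id] = []
        elif event["type"] == "TOOL_CALL_ARGS":
            self.fragments[call_id].append(event["delta"])
        elif event["type"] == "TOOL_CALL_END":
            self.calls[call_id] = json.loads("".join(self.fragments.pop(call_id)))


collector = ToolCallCollector()
for ev in [
    {"type": "TOOL_CALL_START", "toolCallId": "c1"},
    {"type": "TOOL_CALL_ARGS", "toolCallId": "c1", "delta": '{"theme": '},
    {"type": "TOOL_CALL_ARGS", "toolCallId": "c1", "delta": '"dark"}'},
    {"type": "TOOL_CALL_END", "toolCallId": "c1"},
]:
    collector.handle(ev)

print(collector.calls["c1"])  # → {'theme': 'dark'}
```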

React Frontend: Defining and Handling Tools

On the frontend side, CopilotKit's useCopilotAction hook registers tools that the agent can call:

import { useCopilotAction } from "@copilotkit/react-core";

function Dashboard() {
  const [theme, setTheme] = useState("light");

  useCopilotAction({
    name: "change_theme",
    description: "Change the application theme",
    parameters: [
      {
        name: "theme",
        type: "string",
        description: "The theme to apply",
        enum: ["light", "dark", "ocean"],
        required: true,
      },
    ],
    handler: async ({ theme: newTheme }) => {
      setTheme(newTheme);
      return `Theme changed to ${newTheme}`;
    },
  });

  return <div className={`app ${theme}`}>{/* ... */}</div>;
}

When the agent emits TOOL_CALL_START with tool_call_name: "change_theme", CopilotKit matches it to this handler, executes it locally in the browser, and sends the result back to the agent as a TOOL_CALL_RESULT event. Pretty elegant.

Shared State Synchronization

AG-UI supports bidirectional state synchronization between agent and frontend. The agent can read the current frontend state (received in RunAgentInput.state) and push updates back using state events.

State Snapshots vs. Deltas

Use STATE_SNAPSHOT when you need to replace the entire state object — typically at the start of a run or after a major state transition:

from ag_ui.core import EventType, StateSnapshotEvent

yield encoder.encode(StateSnapshotEvent(
    type=EventType.STATE_SNAPSHOT,
    snapshot={
        "documents": [],
        "selectedIndex": 0,
        "processingStatus": "idle",
    },
))

Use STATE_DELTA for incremental updates during a run. Deltas use RFC 6902 JSON Patch operations — add, remove, replace, move, copy, or test:

from ag_ui.core import EventType, StateDeltaEvent

# Add a document to the list
yield encoder.encode(StateDeltaEvent(
    type=EventType.STATE_DELTA,
    delta=[
        {"op": "add", "path": "/documents/-", "value": {
            "title": "Q1 Report",
            "status": "analyzed",
        }},
        {"op": "replace", "path": "/processingStatus", "value": "processing"},
    ],
))

One thing to keep in mind: the /documents/- path with the trailing dash is JSON Pointer syntax for "append to the end of the array." It's a small detail but it'll save you some debugging if you weren't aware.

Reading State on the Frontend

CopilotKit's useCoAgent hook provides reactive access to the shared state:

import { useCoAgent } from "@copilotkit/react-core";

function DocumentAnalyzer() {
  const { state, setState } = useCoAgent({
    name: "document-analyzer",
    initialState: {
      documents: [],
      selectedIndex: 0,
      processingStatus: "idle",
    },
  });

  return (
    <div>
      <p>Status: {state.processingStatus}</p>
      <ul>
        {state.documents.map((doc, i) => (
          <li key={i}>{doc.title} — {doc.status}</li>
        ))}
      </ul>
    </div>
  );
}

When the agent emits STATE_DELTA events, CopilotKit applies the JSON patches to the local state, triggering a React re-render. The frontend can also call setState to update state locally — and the updated state gets included in the next RunAgentInput. It's bidirectional by default, which is exactly what you want.

Human-in-the-Loop Patterns

Production agents often need human approval before taking consequential actions. Nobody wants an AI agent silently deleting database records without asking first. AG-UI handles this through tool calls combined with frontend-side confirmation handlers.

Approval Workflow

Define a confirmation tool on the frontend that pauses execution until the user responds:

useCopilotAction({
  name: "request_approval",
  description: "Request user approval before proceeding",
  parameters: [
    {
      name: "action_description",
      type: "string",
      description: "What the agent wants to do",
      required: true,
    },
    {
      name: "risk_level",
      type: "string",
      enum: ["low", "medium", "high"],
      required: true,
    },
  ],
  handler: async ({ action_description, risk_level }) => {
    // Show a modal/dialog and wait for user input
    const approved = await showApprovalDialog({
      message: action_description,
      riskLevel: risk_level,
    });
    return approved ? "APPROVED" : "REJECTED";
  },
});

On the Python side, the agent calls this tool before performing sensitive operations. The TOOL_CALL_RESULT event carries the user's decision, and the agent branches accordingly:

# In the agent's event_stream generator:
# 1. Emit tool call for approval
yield encoder.encode(ToolCallStartEvent(
    type=EventType.TOOL_CALL_START,
    tool_call_id=approval_id,
    tool_call_name="request_approval",
))
yield encoder.encode(ToolCallArgsEvent(
    type=EventType.TOOL_CALL_ARGS,
    tool_call_id=approval_id,
    delta=json.dumps({
        "action_description": "Delete 47 stale records from the database",
        "risk_level": "high",
    }),
))
yield encoder.encode(ToolCallEndEvent(
    type=EventType.TOOL_CALL_END,
    tool_call_id=approval_id,
))
# The frontend executes the handler, and the result
# comes back in the next RunAgentInput as a tool result message
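When that follow-up request arrives, the agent scans the conversation for the tool result matching its pending approval and branches on it. A sketch of that lookup, assuming message objects carrying role, tool_call_id, and content fields (treat this as illustrative rather than a drop-in helper):

```python
from types import SimpleNamespace

# Sketch: find the user's decision for a pending approval among the
# messages of the follow-up RunAgentInput. Field names follow the
# AG-UI message model; this is illustrative, not SDK code.
def approval_decision(messages, approval_id: str):
    for msg in messages:
        if (getattr(msg, "role", None) == "tool"
                and getattr(msg, "tool_call_id", None) == approval_id):
            return msg.content == "APPROVED"
    return None  # no result yet: the approval is still pending


# Usage with simple stand-in message objects:
messages = [
    SimpleNamespace(role="user", content="Clean up the database"),
    SimpleNamespace(role="tool", tool_call_id="appr-1", content="APPROVED"),
]
print(approval_decision(messages, "appr-1"))  # → True
```

If the function returns None, the agent simply waits; if False, it skips the destructive operation and explains why.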

Connecting the Full Stack with CopilotKit

CopilotKit is the most feature-complete frontend client for AG-UI. Let's wire everything together.

Frontend Setup

npm install @copilotkit/react-core @copilotkit/react-ui

Wrap your app with the CopilotKit provider and point it at your AG-UI server:

import { CopilotKit } from "@copilotkit/react-core";
import { CopilotChat } from "@copilotkit/react-ui";
import "@copilotkit/react-ui/styles.css";

function App() {
  return (
    <CopilotKit
      runtimeUrl="http://localhost:8000"
      agent="my-agent"
    >
      <div className="app-layout">
        <MainContent />
        <CopilotChat
          labels={{ title: "AI Assistant", placeholder: "Ask me anything..." }}
        />
      </div>
    </CopilotKit>
  );
}

That's really all it takes. CopilotKit handles the AG-UI event parsing, state management, tool execution, and chat rendering out of the box.

Quick Scaffold with create-ag-ui-app

If you want the fastest path to a working project, use the official scaffolding tool:

npx create-ag-ui-app@latest my-agent-app

This CLI generates a complete project with your choice of server framework (Python/FastAPI, LangGraph, Mastra, etc.) and client framework (CopilotKit React), pre-wired with AG-UI events and ready to run. It's a solid starting point if you don't want to set everything up manually.

Using AG-UI Without CopilotKit

AG-UI is an open protocol — you absolutely don't need CopilotKit. The @ag-ui/client package provides a framework-agnostic TypeScript client:

import { HttpAgent } from "@ag-ui/client";
import { EventType } from "@ag-ui/core";

const agent = new HttpAgent({
  url: "http://localhost:8000",
});

const observable = agent.runAgent({
  threadId: "thread_123",
  runId: "run_456",
  messages: [],
  tools: [],
  context: [],
  state: {},
  forwardedProps: {},
});

observable.subscribe({
  next: (event) => {
    switch (event.type) {
      case EventType.TEXT_MESSAGE_CONTENT:
        process.stdout.write(event.delta);
        break;
      case EventType.STATE_SNAPSHOT:
        console.log("State updated:", event.snapshot);
        break;
      case EventType.TOOL_CALL_START:
        console.log("Tool called:", event.toolCallName);
        break;
    }
  },
  error: (err) => console.error("Run failed:", err),
  complete: () => console.log("\nRun complete"),
});

The runAgent() method returns an RxJS Observable<BaseEvent>, which makes it straightforward to integrate with any reactive framework — or even build a terminal-based client if that's your thing.

Framework Integrations

AG-UI has picked up adoption surprisingly fast across the agentic AI ecosystem. As of early 2026, here's where official support stands:

| Framework | Integration Level | Notes |
| --- | --- | --- |
| LangGraph | Partnership | Deep integration with graph nodes as AG-UI steps |
| CrewAI | Partnership | Multi-agent crew outputs streamed via AG-UI |
| Pydantic AI | 1st Party | Built-in AG-UI support via pydantic-ai[ag-ui] |
| Google ADK | 1st Party | Agent Development Kit natively emits AG-UI events |
| AWS Strands Agents | 1st Party | Amazon's agent SDK with AG-UI output |
| AWS Bedrock AgentCore | 1st Party | Managed agent runtime with AG-UI streaming |
| Microsoft Agent Framework | 1st Party | Azure-native agent platform |
| Mastra | 1st Party | TypeScript-first agent framework |
| LlamaIndex | 1st Party | Data framework with AG-UI agent output |
| Agno | 1st Party | Lightweight Python agent framework |

OpenAI Agent SDK and Cloudflare Agents integrations are reportedly in progress too. The fact that both AWS and Google have shipped native support says a lot about where this protocol is heading.

Production Considerations

Error Handling and Resilience

Always wrap your event generator in a try/except block and emit RUN_ERROR on failure. This sounds obvious, but I've seen too many AG-UI servers that just silently drop the SSE connection when something goes wrong, leaving the frontend hanging:

async def event_stream():
    try:
        yield encoder.encode(RunStartedEvent(
            type=EventType.RUN_STARTED,
            thread_id=input_data.thread_id,
            run_id=input_data.run_id,
        ))
        # ... agent logic ...
        yield encoder.encode(RunFinishedEvent(
            type=EventType.RUN_FINISHED,
            thread_id=input_data.thread_id,
            run_id=input_data.run_id,
        ))
    except Exception as e:
        yield encoder.encode(RunErrorEvent(
            type=EventType.RUN_ERROR,
            message=str(e),
            code="INTERNAL_ERROR",
        ))

CORS Configuration

When serving a frontend from a different origin (which you almost certainly will during development), configure CORS on your FastAPI server:

from fastapi.middleware.cors import CORSMiddleware

app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

Capability Discovery

AG-UI agents can declare their capabilities at runtime, allowing clients to adapt their UI based on what the agent actually supports:

@app.get("/capabilities")
async def get_capabilities():
    return {
        "identity": {
            "name": "document-analyzer",
            "version": "1.0.0",
            "description": "Analyzes and summarizes documents",
        },
        "transport": {
            "supportedProtocols": ["sse"],
        },
        "tools": {
            "supportsFrontendTools": True,
        },
        "state": {
            "supportsSharedState": True,
            "stateFormat": "json",
        },
        "humanInTheLoop": {
            "supportsInterrupts": True,
            "interruptTypes": ["approval", "input"],
        },
    }

This endpoint is optional but worth adding. It lets your frontend conditionally render features — for instance, only showing the approval UI if the agent actually supports interrupts.

Middleware for Logging and Monitoring

AG-UI supports client-side middleware for cross-cutting concerns. On the server side, you can wrap your encoder to add observability:

import logging
import time

logger = logging.getLogger("ag_ui")


class ObservableEncoder:
    """Wraps EventEncoder with logging and metrics."""

    def __init__(self, encoder: EventEncoder, run_id: str):
        self.encoder = encoder
        self.run_id = run_id
        self.event_count = 0
        self.start_time = time.time()

    def encode(self, event):
        self.event_count += 1
        logger.info(
            "ag_ui_event",
            extra={
                "run_id": self.run_id,
                "event_type": event.type,
                "event_number": self.event_count,
                "elapsed_ms": int((time.time() - self.start_time) * 1000),
            },
        )
        return self.encoder.encode(event)

Having structured logs for every event makes debugging production issues significantly easier, especially when you're trying to figure out why a particular run stalled or failed.

AG-UI vs. Custom WebSocket Streaming

If you're on the fence about adopting AG-UI versus rolling your own WebSocket streaming, here's an honest comparison:

| Aspect | Custom WebSocket | AG-UI |
| --- | --- | --- |
| Event schema | Ad-hoc, custom per project | 16+ standardized event types |
| State management | Build your own sync logic | Built-in snapshot/delta with RFC 6902 |
| Tool execution | Custom protocol | Standardized Start/Args/End/Result lifecycle |
| Transport options | WebSocket only | SSE, WebSocket, binary Protobuf, webhooks |
| Framework support | None | 15+ frameworks with official integrations |
| Frontend components | Build from scratch | CopilotKit hooks and components |
| Middleware | Build your own | Built-in middleware chain |
| Agent portability | Locked to your implementation | Swap backends without changing frontend |

The biggest advantage of AG-UI isn't any single feature — it's portability. Write your frontend once, and it works with any AG-UI-compatible agent backend. You can switch from a custom Python agent to LangGraph or Pydantic AI without touching the UI code. If you've ever had to rewrite a frontend because you changed your agent framework, you know how valuable that is.

Frequently Asked Questions

Do I need CopilotKit to use AG-UI?

No. CopilotKit is the most feature-complete frontend client, but AG-UI is an open protocol. You can consume the SSE event stream directly from any language or framework using the @ag-ui/client TypeScript package, or parse the SSE format manually in any HTTP client.

How does AG-UI handle authentication and security?

AG-UI is transport-agnostic and doesn't mandate a specific auth mechanism. In production, pass authentication tokens via HTTP headers (Bearer tokens, API keys) on the POST request to your agent endpoint. Your FastAPI server validates them before processing. For multi-tenant deployments, include tenant context in RunAgentInput.forwardedProps.

Can AG-UI work with non-chat interfaces like dashboards or editors?

Absolutely. While streaming chat is the most common use case, AG-UI's shared state and tool call features support any agent-driven UI. Use STATE_DELTA events to update dashboard widgets, emit tool calls to manipulate canvas elements, or stream progress events for indicators. The protocol is UI-pattern agnostic.

What happens if the SSE connection drops mid-stream?

The client should implement reconnection logic. Since AG-UI runs are stateless from the server's perspective (the client sends full state and messages on each request), you can safely retry the entire run. For long-running agents, consider implementing checkpointing — emit STATE_SNAPSHOT events at key milestones so the client can resume from the last known state.

How do I test an AG-UI server without building a frontend?

Use curl to POST a RunAgentInput payload to your endpoint and observe the SSE stream directly. The AG-UI Dojo at dojo.ag-ui.com also provides an interactive test viewer where you can point it at your server URL and see events rendered in real time — it's incredibly handy during development.

Our team of expert writers and editors.