
Building LLM Agents with Custom UI Components

5 min read
ai · llm · react · nestjs · open-source · typescript

Every AI chat interface looks the same: a stream of text bubbles. But real agentic applications need richer output — tables, task boards, status dashboards, interactive cards — rendered safely and consistently.

The naive approach is to have the LLM generate JSX or HTML directly. This breaks fast:

  • Output is inconsistent and impossible to style
  • There's no validation — the model can emit anything
  • There's no interactivity loop back to the agent
  • Your design system goes out the window

I built AgentUI to solve this. It's an open-source TypeScript library that introduces a UI event protocol between your agent and your frontend. The agent never touches your DOM. It emits structured events. Your frontend renders them through a whitelisted component registry you control.

The Architecture

The core idea is simple: instead of the agent writing markup, it calls a tool that emits a typed UI event.

Agent (LLM) → emit_ui_event (tool call) → NestJS → SSE stream (UIEvent) → React → Registry lookup → Components

User interactions flow back the other direction as ActionEvents — the agent can react to clicks, form submissions, or any custom action and emit new UI events in response.
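The two directions can be sketched as a pair of minimal types. Field names follow the examples later in this post; the actual definitions live in @kibadist/agentui-protocol, so treat this as an illustration rather than the library's types:

```typescript
// Agent → frontend: a UI event targeting a registered component.
interface UIEvent {
  op: 'append' | 'replace' | 'remove' | 'toast';
  id: string;                        // stable component instance ID
  component?: string;                // registry key, e.g. 'data-table'
  props?: Record<string, unknown>;
}

// Frontend → agent: a user interaction routed back as an action.
interface ActionEvent {
  type: string;                      // e.g. 'task.complete'
  payload: Record<string, unknown>;  // action-specific data
}

// Example of what a click on a task card might send back.
const action: ActionEvent = {
  type: 'task.complete',
  payload: { id: 'task-42' },
};
```

Because both directions are plain data, the same types can be shared by the backend, the frontend, and any logging or replay tooling in between.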

How It Works

Step 1: The Agent Emits a UI Event

Instead of generating <table>...</table>, the agent calls a tool with structured data:

{
  "op": "append",
  "id": "sales-table",
  "component": "data-table",
  "props": {
    "columns": ["Product", "Revenue", "Growth"],
    "rows": [
      ["Pro Plan", "$48,200", "+12%"],
      ["Starter", "$18,700", "+4%"]
    ]
  }
}

Four operations are supported: append (add a component), replace (swap props), remove (delete by ID), and toast (transient notification).
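These four operations map naturally onto a small reducer over the set of rendered components. Here is a dependency-free sketch of that idea — `applyEvent` and the type names are illustrative, not the library's internals:

```typescript
// A discriminated union over the four operations. Field names follow
// the JSON example above; this is a sketch, not the library's types.
type UIEvent =
  | { op: 'append'; id: string; component: string; props: Record<string, unknown> }
  | { op: 'replace'; id: string; props: Record<string, unknown> }
  | { op: 'remove'; id: string }
  | { op: 'toast'; props: Record<string, unknown> };

type Rendered = { component: string; props: Record<string, unknown> };

// Pure reducer: each event produces a new map of rendered components.
function applyEvent(state: Map<string, Rendered>, ev: UIEvent): Map<string, Rendered> {
  const next = new Map(state);
  switch (ev.op) {
    case 'append':
      next.set(ev.id, { component: ev.component, props: ev.props });
      break;
    case 'replace': {
      const existing = next.get(ev.id);
      if (existing) next.set(ev.id, { ...existing, props: ev.props });
      break;
    }
    case 'remove':
      next.delete(ev.id);
      break;
    case 'toast':
      // Transient: shown once by the UI layer, never stored in state.
      break;
  }
  return next;
}

// Append a table, then swap its props.
let state = new Map<string, Rendered>();
state = applyEvent(state, {
  op: 'append', id: 'sales-table', component: 'data-table', props: { rows: 2 },
});
state = applyEvent(state, { op: 'replace', id: 'sales-table', props: { rows: 3 } });
const rendered = state.get('sales-table')!;
```

Treating events as a reducer input also makes UI state trivially replayable: re-running the event log reconstructs the exact screen.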

Step 2: Validate and Stream

The backend validates every event against a Zod schema, then streams it to the client over SSE. With the NestJS package, setup is one line:

import { createAgentController } from '@kibadist/agentui-nest';
 
const controller = createAgentController({ agent, tools });
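To make the validation step concrete, here is a dependency-free stand-in for the check the backend performs. The real schemas live in @kibadist/agentui-validate and use Zod; this type guard just illustrates the shape of the rule:

```typescript
// Illustrative guard, not the library's Zod schema: an event must carry
// a known op, and an append must name both a target ID and a component.
const OPS = ['append', 'replace', 'remove', 'toast'] as const;

function isValidUIEvent(value: unknown): boolean {
  if (typeof value !== 'object' || value === null) return false;
  const ev = value as Record<string, unknown>;
  if (!OPS.includes(ev.op as (typeof OPS)[number])) return false;
  if (ev.op === 'append') {
    return typeof ev.id === 'string' && typeof ev.component === 'string';
  }
  return true;
}

const good = isValidUIEvent({
  op: 'append', id: 'sales-table', component: 'data-table', props: {},
});
const bad = isValidUIEvent({ op: 'append', id: 'sales-table' }); // no component
```

Events that fail this check never reach the SSE stream, so the frontend only ever sees well-formed data.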

Step 3: React Renders Through Your Registry

On the frontend, you define exactly which components the agent can use:

import {
  createRegistry,
  AgentUIProvider,
  AgentRenderer,
} from '@kibadist/agentui-react';
 
const registry = createRegistry({
  'data-table': DataTable,
  'info-card': InfoCard,
  'text-block': TextBlock,
  'task-board': TaskBoard,
  'stat-card': StatCard,
});
 
export function App() {
  return (
    <AgentUIProvider registry={registry} sessionId="demo">
      <Chat />
      <AgentRenderer />
    </AgentUIProvider>
  );
}

Only components in your registry can be rendered. The model cannot escape the sandbox. This is the key security property — you get the flexibility of agent-driven UI without giving up control.
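The whitelist property boils down to one lookup. A minimal sketch — `createLookup` is illustrative, not the library's API — showing that unknown component names are dropped rather than rendered:

```typescript
// Stand-in for a React component: takes props, returns markup.
type Component = (props: unknown) => string;

// Unknown names return null; the renderer drops the event instead of
// falling back to raw markup. This is the sandbox boundary.
function createLookup(registry: Record<string, Component>) {
  return (name: string): Component | null =>
    Object.prototype.hasOwnProperty.call(registry, name)
      ? registry[name]
      : null;
}

const lookup = createLookup({
  'data-table': () => '<DataTable />',
});

const known = lookup('data-table');   // a component
const unknown = lookup('script-tag'); // null — not in the whitelist
```

The `hasOwnProperty` check matters: it keeps a hostile component name like `"constructor"` from matching something on the object's prototype chain.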

Step 4: User Actions Route Back to the Agent

Components can dispatch actions that the agent receives and reacts to:

import { useAgentAction } from '@kibadist/agentui-react';
 
function TaskCard({ id, title, status }) {
  const dispatch = useAgentAction();
 
  return (
    <button onClick={() => dispatch({
      type: 'task.complete',
      payload: { id },
    })}>
      Complete
    </button>
  );
}

This closes the loop. The agent renders a task board, the user marks a task complete, the agent sees the action and updates the board — all through structured, validated events.
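The agent side of that loop can be sketched as a handler that maps an incoming action to a follow-up UI event. `onAction` and the inline types are illustrative assumptions, not the library's API:

```typescript
// Minimal shapes for the sketch; real types live in the protocol package.
type ActionEvent = { type: string; payload: { id: string } };
type UIEvent = { op: 'replace'; id: string; props: Record<string, unknown> };

// Map a user action to a follow-up UI event; unhandled types are ignored.
function onAction(action: ActionEvent): UIEvent | null {
  if (action.type === 'task.complete') {
    // Swap the card's props so the board re-renders with the new status.
    return { op: 'replace', id: action.payload.id, props: { status: 'done' } };
  }
  return null;
}

const reply = onAction({ type: 'task.complete', payload: { id: 'task-1' } });
```

In practice the handler would run inside the agent loop, so the model can also reason about the action before deciding what to emit.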

The Package Structure

AgentUI is a monorepo with six focused packages:

| Package | Purpose |
|---------|---------|
| @kibadist/agentui-protocol | TypeScript types for the wire protocol — zero dependencies |
| @kibadist/agentui-validate | Zod schemas and parsers for runtime validation |
| @kibadist/agentui-react | Registry, renderer, SSE hook, and action context |
| @kibadist/agentui-nest | Session event bus and controller factory for NestJS |
| @kibadist/agentui-ai | Provider-agnostic adapter via Vercel AI SDK |
| @kibadist/agentui-next | SSE and action proxy helpers for Next.js App Router |

The protocol package sits at the bottom with zero dependencies — just pure TypeScript types. Everything else builds on top of it. This means you can swap out the backend (NestJS, Next.js, or your own) or the frontend framework without touching the protocol layer.

The AI package supports multiple providers through the Vercel AI SDK — Anthropic, OpenAI, Google, and DeepSeek all work out of the box.

Why a Protocol, Not a Component Library

A lot of "AI UI" tools ship pre-built components. AgentUI deliberately doesn't. Here's why:

Your components, your design system. The registry pattern means you bring your own components. A data-table in your registry could be a minimal HTML table, a full-featured AG Grid wrapper, or anything in between. The agent doesn't care — it just emits props.

Validation at the boundary. Every event is validated with Zod before it reaches your frontend. Malformed events are rejected, not rendered. This is the same principle as validating API responses — don't trust the source, validate the data.

Composability over configuration. Instead of configuring a monolithic "AI chat" component, you compose small pieces: a registry, a provider, a renderer. Need a custom layout? Swap the renderer. Need custom event handling? Add middleware to the event bus.

Real Use Cases

I've been using this pattern for:

  • Internal dashboards — the agent queries a database and renders tables, charts, and stat cards based on natural language questions
  • Agentic workflows — the agent builds task boards and checklists that users interact with, updating state through action events
  • Dev tools — structured output for test results, API responses, and diffs that's actually readable

Getting Started

git clone https://github.com/kibadist/agentui
cd agentui
pnpm install
pnpm build
 
# Add your API key
echo "ANTHROPIC_API_KEY=sk-ant-your-key" > examples/nest-api/.env
echo "PORT=3001" >> examples/nest-api/.env
 
# Run everything
pnpm dev

The example app runs a NestJS backend on :3001 and a Next.js frontend on :3000. Try prompts like "Show me a summary of recent sales" or "Create a project task board" to see the agent compose UI in real time.

What's Next

The roadmap includes streaming partial renders (render components as props stream in), a built-in starter component library for zero-config setups, a Vue adapter, and a persistence layer for replaying UI state across sessions.

If you're building agentic applications that need richer output than text bubbles, check out the repo — issues and PRs welcome.
