Agent Integration
Connect any AI agent to HyperGen: Claude, OpenAI, Ollama, or custom systems. HyperGen is agent-agnostic by design; any system that can produce an HTML string can generate UI through it. The pattern is always the same: the agent generates HTML, the server streams it to the iframe via SSE. This guide shows the integration pattern with several popular agent backends.
The Universal Pattern
Every HyperGen agent integration follows the same three steps:
- Agent generates an HTML string (with HTMX attributes for interactivity and --hg-* CSS variables for theming)
- Server yields the HTML into an async generator
- createSSEStream() delivers it to the iframe via SSE

Agent (any) ──HTML string──> Server (async generator) ──SSE──> Iframe (HTMX renders)

The server-side async generator is the integration point. Everything before it is agent-specific; everything after it is handled by the HyperGen drop-in.
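As a minimal sketch of that pipeline (echoAgent here is a placeholder for any backend; only the generator shape and the --hg-* styling are HyperGen conventions):

```typescript
// Minimal sketch of the universal pattern. `echoAgent` stands in for
// any backend that can produce text; a real agent would call an LLM
// or rule engine here.
async function* echoAgent(prompt: string): AsyncGenerator<string> {
  yield `You asked: ${prompt}`;
}

// Wrap agent output in a fragment themed with --hg-* variables.
function toFragment(text: string): string {
  return `<div style="color: var(--hg-text); padding: var(--hg-space-4, 16px);">${text}</div>`;
}

// The integration point: an async generator of HTML strings, ready
// to hand to createSSEStream().
async function* htmlStream(prompt: string): AsyncGenerator<string> {
  for await (const chunk of echoAgent(prompt)) {
    yield toFragment(chunk);
  }
}
```

Swapping in a different backend means replacing echoAgent; toFragment and the generator shape stay the same.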
Claude Agent SDK
This example is based on the examples/claude-agent/ directory.
The pattern: the Claude Agent SDK's query() yields SDKMessage objects. An adapter function converts each message into an HTML fragment.
import { createSSEStream, HyperGenResponse } from "./hypergen-server";
import { query, type SDKMessage } from "@anthropic-ai/claude-agent-sdk";

app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const stream = query({
        prompt: "Show me the weather in Tokyo",
        // Tools such as the example's weatherTool/searchTool are wired
        // up through options as well (e.g. allowedTools, MCP servers).
        options: { model: "claude-sonnet-4-20250514" },
      });
      for await (const message of stream) {
        const html = sdkMessageToHtml(message);
        if (html) yield html;
      }
    }),
  );
});

The adapter function maps SDK message types to HTML:
function sdkMessageToHtml(message: SDKMessage): string | null {
  switch (message.type) {
    case "assistant": {
      // Extract text blocks from the assistant's response
      const textBlocks = message.message.content.filter(
        (block) => block.type === "text",
      );
      if (textBlocks.length === 0) return null;
      const text = textBlocks.map((b) => b.text).join("");
      return `<div style="
        border-left: 3px solid var(--hg-accent, #4f46e5);
        padding: 12px 16px;
        margin: 8px 0;
        background: var(--hg-surface-elevated, #f9fafb);
        border-radius: var(--hg-radius, 8px);
        line-height: 1.6;
      ">${text}</div>`;
    }
    case "result": {
      if (message.subtype === "success") {
        return `<div style="
          background: var(--hg-surface-elevated, #f9fafb);
          padding: 16px;
          border-radius: var(--hg-radius, 8px);
          border: 1px solid var(--hg-border, #e5e7eb);
        ">${message.result}</div>`;
      }
      return `<div style="color: var(--hg-error, #ef4444);">
        Agent error: ${message.subtype}
      </div>`;
    }
    default:
      return null;
  }
}

Tool Results as Interactive HTML
The real power comes from converting tool results into interactive HTML with HTMX attributes:
function renderWeatherCard(data: WeatherData): string {
  return `<div id="weather-${data.city}" style="
    background: var(--hg-surface-elevated);
    border: 1px solid var(--hg-border);
    border-radius: var(--hg-radius);
    padding: 16px;
    margin: 8px 0;
  ">
    <h3 style="color: var(--hg-text);">${data.city}</h3>
    <p style="font-size: 36px; font-weight: bold; color: var(--hg-accent);">
      ${data.temperature}°C
    </p>
    <p style="color: var(--hg-text-muted);">${data.conditions}</p>
    <button
      hx-post="/api/action"
      hx-vals='{"action": "refresh_weather", "city": "${data.city}"}'
      hx-target="#weather-${data.city}"
      hx-swap="outerHTML"
      style="
        background: var(--hg-accent);
        color: var(--hg-accent-fg);
        border: none;
        padding: 8px 16px;
        border-radius: var(--hg-radius-sm);
        cursor: pointer;
        margin-top: 8px;
      ">
      Refresh
    </button>
  </div>`;
}

When the user clicks "Refresh", HTMX posts to /api/action, your server calls the agent again (or fetches fresh data), and returns new HTML that HTMX swaps in.
OpenAI-Compatible APIs
For OpenAI, Anthropic's Messages API, or any OpenAI-compatible endpoint (Groq, Together, etc.):
import { createSSEStream, HyperGenResponse } from "./hypergen-server";

app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          messages: [
            {
              role: "system",
              content: `You are a UI generator. Output valid HTML fragments styled with CSS variables:
                --hg-surface, --hg-surface-elevated, --hg-text, --hg-text-muted,
                --hg-accent, --hg-accent-fg, --hg-border, --hg-radius, --hg-space-4.
                Use HTMX attributes (hx-post, hx-target, hx-swap) for interactivity.
                Respond with ONLY HTML, no markdown, no explanation.`,
            },
            {
              role: "user",
              content: "Show me a dashboard with a counter and a task list",
            },
          ],
        }),
      });
      const data = await response.json();
      const html = data.choices[0].message.content;
      yield html;
    }),
  );
});

Streaming Token-by-Token
For real-time progressive rendering, use the OpenAI streaming API and accumulate HTML until you have a complete fragment:
app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const response = await fetch("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
        },
        body: JSON.stringify({
          model: "gpt-4o",
          stream: true,
          messages: [
            { role: "system", content: "Output HTML styled with --hg-* CSS variables." },
            { role: "user", content: "Generate a welcome card" },
          ],
        }),
      });

      let buffer = "";
      let pending = ""; // carries a partial SSE line across chunk boundaries
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();

      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        pending += decoder.decode(value, { stream: true });
        const lines = pending.split("\n");
        // The last element may be an incomplete line; keep it for the next chunk
        // so JSON.parse never sees a truncated payload.
        pending = lines.pop() ?? "";
        for (const line of lines) {
          if (!line.startsWith("data: ") || line === "data: [DONE]") continue;
          const json = JSON.parse(line.slice(6));
          const delta = json.choices[0]?.delta?.content;
          if (delta) buffer += delta;
        }
      }

      // Yield the complete HTML fragment
      if (buffer) yield buffer;
    }),
  );
});

Fragment boundaries
You can yield partial HTML as it arrives for a typewriter effect (using swapStrategy: "beforeend"), or buffer the complete response and yield it once for a clean render. The right choice depends on your UX.
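If you stream progressively, the tricky part is deciding when the buffer holds a renderable fragment. A rough heuristic sketch, offered with caveats: it only counts tags, it is not a real HTML parser, and void or self-closing elements like br and img would need special-casing:

```typescript
// Treat the buffer as a complete fragment when every opening tag has a
// matching close. Good enough for simple LLM output; not a parser.
function isBalancedFragment(buffer: string): boolean {
  const opens = (buffer.match(/<[a-zA-Z][\w-]*(\s[^>]*)?>/g) ?? []).length;
  const closes = (buffer.match(/<\/[a-zA-Z][\w-]*>/g) ?? []).length;
  return opens > 0 && opens === closes;
}
```

In the token loop above you would accumulate each delta into the buffer and, whenever isBalancedFragment(buffer) returns true, yield the buffer and reset it; anything still pending at stream end gets yielded as-is.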
Local LLMs (Ollama)
Ollama runs LLMs locally and exposes an OpenAI-compatible API:
import { createSSEStream, HyperGenResponse } from "./hypergen-server";

app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const response = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3",
          prompt: `Generate an HTML dashboard card styled with these CSS variables:
            var(--hg-surface-elevated), var(--hg-text), var(--hg-accent),
            var(--hg-border), var(--hg-radius), var(--hg-space-4).
            Use hx-post="/api/action" for interactive buttons.
            Output ONLY valid HTML, no markdown.`,
          stream: false,
        }),
      });
      const data = await response.json();
      yield data.response;
    }),
  );
});

Or use Ollama's OpenAI-compatible endpoint for a drop-in replacement:
// Same code as the OpenAI example, just change the URL and model
const response = await fetch("http://localhost:11434/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3",
    messages: [
      { role: "system", content: "Output HTML styled with --hg-* CSS variables." },
      { role: "user", content: "Generate a task list" },
    ],
  }),
});

Custom Agents (Non-LLM)
HyperGen does not require an LLM. Any system that produces HTML can be an "agent": rule-based engines, template systems, database-driven generators, and so on.
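For instance, a plain async generator over domain events already qualifies. A minimal sketch with a hard-coded feed (eventFeed and eventAgent are illustrative names, not HyperGen APIs):

```typescript
interface Event { kind: string; detail: string; }

// Hard-coded stand-in for a real event source (queue, webhook, poll loop).
async function* eventFeed(): AsyncGenerator<Event> {
  yield { kind: "deploy", detail: "v2.3.1 released" };
  yield { kind: "signup", detail: "New user registered" };
}

// Map each event to a themed fragment. This generator plugs straight
// into createSSEStream(), exactly like an LLM-backed one.
async function* eventAgent(): AsyncGenerator<string> {
  for await (const e of eventFeed()) {
    yield `<div style="color: var(--hg-text);"><strong>${e.kind}</strong>: ${e.detail}</div>`;
  }
}
```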
Rule-Based System
import { createSSEStream, HyperGenResponse } from "./hypergen-server";

interface Alert {
  severity: "info" | "warning" | "critical";
  message: string;
  timestamp: Date;
}

function alertToHtml(alert: Alert): string {
  const colors = {
    info: "var(--hg-accent, #3b82f6)",
    warning: "var(--hg-warning, #f59e0b)",
    critical: "var(--hg-error, #ef4444)",
  };
  return `<div style="
    border-left: 4px solid ${colors[alert.severity]};
    background: var(--hg-surface-elevated);
    padding: 12px 16px;
    margin: 8px 0;
    border-radius: var(--hg-radius);
  ">
    <strong style="color: ${colors[alert.severity]};">
      ${alert.severity.toUpperCase()}
    </strong>
    <p style="color: var(--hg-text); margin: 4px 0 0;">${alert.message}</p>
    <small style="color: var(--hg-text-muted);">
      ${alert.timestamp.toLocaleTimeString()}
    </small>
    <button
      hx-post="/api/action"
      hx-vals='{"action": "acknowledge", "id": "${alert.timestamp.getTime()}"}'
      hx-target="closest div"
      hx-swap="outerHTML"
      style="
        display: block; margin-top: 8px;
        background: none; border: 1px solid var(--hg-border);
        padding: 4px 12px; border-radius: var(--hg-radius-sm);
        cursor: pointer; color: var(--hg-text-muted);
      ">
      Acknowledge
    </button>
  </div>`;
}

app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      // Stream alerts from a monitoring system
      for await (const alert of monitoringSystem.subscribe()) {
        yield alertToHtml(alert);
      }
    }),
  );
});

Database-Driven Generator
app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const records = await db.query("SELECT * FROM products LIMIT 10");

      yield `<h2 style="color: var(--hg-text); margin-bottom: var(--hg-space-4);">
        Products (${records.length})
      </h2>`;

      for (const record of records) {
        yield `<div style="
          display: flex; justify-content: space-between; align-items: center;
          padding: var(--hg-space-2) var(--hg-space-4);
          border-bottom: 1px solid var(--hg-border);
        ">
          <div>
            <strong style="color: var(--hg-text);">${record.name}</strong>
            <p style="color: var(--hg-text-muted); margin: 0;">$${record.price}</p>
          </div>
          <button
            hx-post="/api/action"
            hx-vals='{"action": "view_product", "id": "${record.id}"}'
            hx-target="#product-detail"
            hx-swap="innerHTML"
            style="background: var(--hg-accent); color: var(--hg-accent-fg); border: none; padding: 6px 12px; border-radius: var(--hg-radius-sm); cursor: pointer;">
            View
          </button>
        </div>`;
      }

      yield `<div id="product-detail" style="margin-top: var(--hg-space-4);"></div>`;
    }),
  );
});

Best Practices
System Prompts for LLM Agents
When using LLMs, include these instructions in the system prompt:
You are a UI generator. Output valid HTML fragments with:
- CSS variables for theming: var(--hg-surface), var(--hg-text), var(--hg-accent), etc.
- HTMX attributes for interactivity: hx-post, hx-target, hx-swap, hx-vals
- Self-contained fragments (include <style> blocks if needed)
- No markdown, no code fences, no explanation — just HTML

Validate Agent Output
LLMs can produce malformed HTML. Consider adding a validation step:
function sanitizeHtml(html: string): string {
  // Strip markdown code fences that LLMs sometimes add.
  // The language tag is optional: both ```html and a bare ``` are handled.
  return html
    .replace(/^```(?:html)?\s*\n?/i, "")
    .replace(/\n?```\s*$/, "")
    .trim();
}

app.get("/api/stream", () => {
  return HyperGenResponse(
    createSSEStream(async function* () {
      const raw = await agent.generate("Create a card");
      yield sanitizeHtml(raw);
    }),
  );
});

Error Handling
Wrap agent calls in try/catch and yield error UI:
createSSEStream(async function* () {
  try {
    for await (const fragment of agent.stream()) {
      yield fragment;
    }
  } catch (error) {
    yield `<div style="
      background: var(--hg-surface-elevated);
      border-left: 4px solid var(--hg-error, #ef4444);
      padding: 12px 16px;
      border-radius: var(--hg-radius);
      color: var(--hg-text);
    ">
      <strong>Something went wrong</strong>
      <p style="color: var(--hg-text-muted); margin-top: 4px;">
        ${error instanceof Error ? error.message : "Unknown error"}
      </p>
      <button
        hx-get="/api/stream"
        hx-target="#hg-root"
        hx-swap="innerHTML"
        style="margin-top: 8px; background: var(--hg-accent); color: var(--hg-accent-fg); border: none; padding: 8px 16px; border-radius: var(--hg-radius-sm); cursor: pointer;">
        Retry
      </button>
    </div>`;
  }
});
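For transient failures (rate limits, dropped connections) it can be worth retrying before falling back to the error card. A small wrapper sketch; withRetries, its attempt count, and the backoff delays are illustrative, not part of HyperGen:

```typescript
// Retry a whole agent stream a few times before giving up, with a
// linearly growing delay between attempts. If an attempt fails after
// yielding some fragments, those fragments are re-emitted on the next
// attempt, so this suits idempotent renders best.
async function* withRetries<T>(
  makeStream: () => AsyncIterable<T>,
  attempts = 3,
  delayMs = 500,
): AsyncGenerator<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      for await (const item of makeStream()) yield item;
      return; // stream completed cleanly
    } catch (error) {
      // Rethrow on the final attempt so the outer try/catch renders error UI.
      if (i === attempts - 1) throw error;
      await new Promise((r) => setTimeout(r, delayMs * (i + 1)));
    }
  }
}
```

Inside the try block above you would iterate withRetries(() => agent.stream()) instead of agent.stream(), leaving the catch branch as the last resort.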