Exploring the interplay between AI-generated tokens and dynamic UI updates

Contents
- 1 The Challenge: Creating a Fluid and Adaptive AI Chat UI
- 2 Key Architectural Concepts
- 3 Step 1: Pseudocode for UI State Management with Tokens
- 4 Step 2: Implementing Streaming in Remix
- 5 Step 3: Frontend with UI Toggle Logic
- 6 Step 4: Dynamic Styling
- 7 The Final Product
The Challenge: Creating a Fluid and Adaptive AI Chat UI
When building an AI chat system, real-time interaction is key to a great user experience. We want the UI to respond dynamically to incoming AI tokens, adjusting elements like:
- Loading indicators (when AI is “thinking”)
- Message bubbles that expand dynamically
- Progressive rendering (token-by-token)
- Graph and table integration (when data-driven responses are detected)
- File generation buttons (when AI suggests downloadable content)
- Code blocks with syntax highlighting
Instead of waiting for an entire response to arrive, we want elements to appear as soon as tokens arrive. The solution? Token-based UI state management.
Key Architectural Concepts
- Token Streaming from AI Model: the AI sends tokens incrementally.
- State Management for UI Toggles: UI elements enable/disable based on token patterns.
- Dynamic Rendering of UI Components: real-time expansion of messages, code snippets, tables, and graphs.
- User Experience Enhancements: typing indicators, collapsible sections, and animated loading states.
Step 1: Pseudocode for UI State Management with Tokens
Before writing actual code, let's outline how state toggling works dynamically.
1. Core Chat Flow
```
initializeChatUI()

onUserInput(message):
    disableInputField()        # Prevents multiple requests
    showTypingIndicator()      # Shows "AI is typing..."
    requestID = generateUniqueID()
    startStreamFromAI(message, requestID)

startStreamFromAI(userMessage, requestID):
    openConnectionToLLM()      # Establish token stream
    buffer = ""
    for token in streamFromLLM():
        buffer += token
        updateChatUI(buffer)

        # Handle special UI triggers
        if token == "[START_TABLE]":
            enableTableMode()
        if token == "[END_TABLE]":
            finalizeTableRendering()
        if token.startswith("```"):     # Detects code blocks
            toggleCodeSnippetMode()
        if "[DOWNLOAD_FILE]" in token:
            enableFileDownloadButton()

    finalizeResponse(buffer)
    hideTypingIndicator()
    enableInputField()
```
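The toggle logic in the pseudocode above can be isolated into a small, pure, testable function. This is a sketch only: the marker strings come from the pseudocode, while the `ChatUIState` shape and function names are illustrative assumptions, not part of any library.

```typescript
// State driven by incoming tokens; field names are illustrative.
interface ChatUIState {
  isTableMode: boolean;
  isCodeMode: boolean;
  showFileDownload: boolean;
  buffer: string;
}

export const initialState: ChatUIState = {
  isTableMode: false,
  isCodeMode: false,
  showFileDownload: false,
  buffer: "",
};

// Apply one token to the UI state, mirroring the pseudocode's if-chain.
// Returns a new state object so it can back a React reducer unchanged.
export function processToken(state: ChatUIState, token: string): ChatUIState {
  const next = { ...state, buffer: state.buffer + token };
  if (token.includes("[START_TABLE]")) next.isTableMode = true;
  if (token.includes("[END_TABLE]")) next.isTableMode = false;
  if (token.includes("```")) next.isCodeMode = !next.isCodeMode;
  if (token.includes("[DOWNLOAD_FILE]")) next.showFileDownload = true;
  return next;
}
```

Keeping the toggling in a pure function like this means the streaming loop only has to call `processToken` per token, and the toggle rules can be unit-tested without a browser.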
Step 2: Implementing Streaming in Remix
We'll use Server-Sent Events (SSE) for real-time token streaming in Remix.
Backend Streaming Handler (app/routes/chat.tsx)
```tsx
// Backend: stream the AI response through to the client token-by-token
export async function action({ request }) {
  const body = await request.json();
  const userMessage = body.message;

  const aiResponse = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "user", content: userMessage }],
      stream: true, // Token streaming enabled
    }),
  });

  const stream = new ReadableStream({
    async start(controller) {
      const reader = aiResponse.body?.getReader();
      if (!reader) {
        controller.close();
        return;
      }
      while (true) {
        const { value, done } = await reader.read();
        if (done) break;
        controller.enqueue(value);
      }
      controller.close();
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```
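Note that the handler above forwards OpenAI's raw bytes unchanged, so the client receives SSE frames of the form `data: {json}` (ending with `data: [DONE]`) rather than plain text. If you would rather forward only the text deltas, the chunks can be parsed server-side before enqueueing. A minimal sketch, with deliberately thin error handling (the `data:`/`[DONE]` framing is OpenAI's documented streaming format; `extractDeltas` is a hypothetical helper name):

```typescript
// Extract the text deltas from one decoded SSE chunk of an OpenAI
// chat-completions stream. Each event arrives as a line "data: {json}",
// and the stream terminates with "data: [DONE]".
export function extractDeltas(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice("data:".length).trim();
    if (payload === "[DONE]") continue;
    try {
      const parsed = JSON.parse(payload);
      const content = parsed.choices?.[0]?.delta?.content;
      if (typeof content === "string") deltas.push(content);
    } catch {
      // A network chunk can split a JSON event in half; a production
      // version would buffer the partial line until the next chunk.
    }
  }
  return deltas;
}
```

Inside `start(controller)`, you would decode each `value` with a `TextDecoder`, run it through `extractDeltas`, and enqueue the joined text instead of the raw bytes.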
Step 3: Frontend with UI Toggle Logic
Key Features:
- Token-based dynamic updates
- Enables/disables UI elements dynamically
- Manages state for different response types (text, tables, code, files, etc.)
app/routes/chat.tsx
```tsx
import { useState } from "react";

export default function Chat() {
  const [messages, setMessages] = useState([]);
  const [currentMessage, setCurrentMessage] = useState("");
  const [aiResponse, setAiResponse] = useState("");
  const [isStreaming, setIsStreaming] = useState(false);
  const [isTableMode, setIsTableMode] = useState(false);
  const [isCodeMode, setIsCodeMode] = useState(false);
  const [fileDownloadLink, setFileDownloadLink] = useState(null);

  const sendMessage = async () => {
    if (!currentMessage.trim()) return;
    const userMessage = currentMessage;
    setMessages((prev) => [...prev, { role: "user", content: userMessage }]);
    setCurrentMessage("");
    setIsStreaming(true);

    // useFetcher buffers the whole response before exposing it, so we
    // call fetch() directly and read the response body stream ourselves.
    const response = await fetch("/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: userMessage }),
    });

    const reader = response.body.getReader();
    const decoder = new TextDecoder(); // One decoder handles multi-byte chars split across chunks
    let buffer = "";

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      const token = decoder.decode(value, { stream: true });
      buffer += token;

      if (token.includes("[START_TABLE]")) setIsTableMode(true);
      if (token.includes("[END_TABLE]")) {
        setIsTableMode(false);
        // Finalize table rendering here
      }
      // Functional update avoids reading a stale isCodeMode from the closure
      if (token.includes("```")) setIsCodeMode((prev) => !prev);
      if (token.includes("[DOWNLOAD_FILE]")) {
        setFileDownloadLink("/path/to/generated/file");
      }

      setAiResponse(buffer);
    }

    setIsStreaming(false);
    setAiResponse("");
    setMessages((prev) => [...prev, { role: "assistant", content: buffer }]);
  };

  return (
    <div className="chat-container">
      <div className="messages">
        {messages.map((msg, index) => (
          <div key={index} className={`message ${msg.role}`}>
            {msg.content}
          </div>
        ))}
        {isStreaming && <div className="message assistant">{aiResponse}</div>}
        {isTableMode && <div className="table-container">Loading table...</div>}
        {isCodeMode && <pre className="code-snippet">Rendering Code...</pre>}
        {fileDownloadLink && (
          <a href={fileDownloadLink} download>
            Download File
          </a>
        )}
      </div>
      <div className="input-container">
        <input
          type="text"
          value={currentMessage}
          onChange={(e) => setCurrentMessage(e.target.value)}
          placeholder="Type a message..."
        />
        <button onClick={sendMessage} disabled={isStreaming}>Send</button>
      </div>
    </div>
  );
}
```
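One caveat with per-chunk checks like `token.includes("[START_TABLE]")`: the network can split a marker across two chunks (`"[START_TA"` followed by `"BLE]"`), and neither chunk alone will match. Scanning the accumulated buffer, while remembering how far has already been reported, avoids both missed and duplicated markers. A sketch under those assumptions (`scanForMarkers` and its return shape are hypothetical, not from any library):

```typescript
// Control markers used by the streaming protocol in this article.
const MARKERS = ["[START_TABLE]", "[END_TABLE]", "[DOWNLOAD_FILE]"];

// Scan the whole buffer, but report only markers whose final character
// arrived after the previous scan (end index > scannedUpTo). Markers that
// were already complete last time are skipped, so each fires exactly once.
export function scanForMarkers(
  buffer: string,
  scannedUpTo: number
): { found: { marker: string; at: number }[]; scannedUpTo: number } {
  const found: { marker: string; at: number }[] = [];
  for (const marker of MARKERS) {
    let idx = buffer.indexOf(marker);
    while (idx !== -1) {
      if (idx + marker.length > scannedUpTo) found.push({ marker, at: idx });
      idx = buffer.indexOf(marker, idx + marker.length);
    }
  }
  found.sort((a, b) => a.at - b.at); // Preserve stream order across marker types
  return { found, scannedUpTo: buffer.length };
}
```

In the component, you would keep `scannedUpTo` alongside `buffer` in the read loop and dispatch one toggle per returned marker, replacing the four `token.includes(...)` checks.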
Step 4: Dynamic Styling
app/styles/chat.css
```css
.table-container {
  background: #f8f9fa;
  padding: 10px;
  border: 1px solid #ddd;
}

.code-snippet {
  background: #282c34;
  color: #61dafb;
  padding: 5px;
  font-family: monospace;
}

a {
  color: #007aff;
  text-decoration: none;
}
```
The Final Product
- Live-streaming AI responses (token-by-token)
- Dynamic UI toggling (tables, code, file downloads)
- Smooth state management with Remix
What's Next?
- Enhance graph rendering for AI-generated analytics.
- Persist chat history using databases (PostgreSQL, Firebase).
- Improve typing animations for UX.
- Built with Remix.
For more insights and the latest updates, explore our blog archives or visit nomadule.com.