Convex Agents Streaming
About
This skill enables real-time streaming of agent responses to clients without blocking, allowing text to appear character-by-character. It's ideal for building responsive chat interfaces, handling long-running generations, and supporting asynchronous streaming to multiple clients simultaneously. Developers can use it for progressive display updates and smooth text animations in their applications.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/Sstobo/convex-skills
git clone https://github.com/Sstobo/convex-skills.git ~/.claude/skills/"Convex Agents Streaming"
Copy and paste a command above in Claude Code to install this skill.
Documentation
Purpose
Streaming allows responses to appear character by character in real time, improving UX and perceived performance. It supports asynchronous streaming from background actions and delivery to multiple clients simultaneously.
When to Use This Skill
- Building real-time chat interfaces with live updates
- Generating long responses that benefit from progressive display
- Streaming to multiple clients from single generation
- Using asynchronous streaming in background actions
- Implementing smooth text animation
Basic Async Streaming
Stream and save deltas to database:
import { v } from "convex/values";
import { action } from "./_generated/server";
import { myAgent } from "./agents"; // assumes the agent is defined in convex/agents.ts

export const streamResponse = action({
  args: { threadId: v.string(), prompt: v.string() },
  handler: async (ctx, { threadId, prompt }) => {
    const { thread } = await myAgent.continueThread(ctx, { threadId });
    // saveStreamDeltas persists chunks as they arrive so clients can subscribe.
    await thread.streamText(
      { prompt },
      { saveStreamDeltas: true }
    );
    return { success: true };
  },
});
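To run this asynchronously, a mutation can schedule the action and return immediately while clients subscribe to the saved deltas. A minimal sketch, assuming the action above lives in convex/streaming.ts (the sendMessage mutation name is illustrative):

import { v } from "convex/values";
import { mutation } from "./_generated/server";
import { api } from "./_generated/api";

export const sendMessage = mutation({
  args: { threadId: v.string(), prompt: v.string() },
  handler: async (ctx, { threadId, prompt }) => {
    // Schedule the streaming action to run immediately in the background;
    // the mutation returns right away and the UI picks up deltas reactively.
    await ctx.scheduler.runAfter(0, api.streaming.streamResponse, {
      threadId,
      prompt,
    });
  },
});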
Configure Stream Chunking
await thread.streamText(
  { prompt },
  {
    saveStreamDeltas: {
      chunking: "line", // "word" | "line" | regex | function
      throttleMs: 500, // Save deltas every 500ms
    },
  }
);
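The chunking option also accepts a regular expression, as noted in the comment above. A hedged sketch of sentence-level chunking, assuming the regex form splits saved deltas at matched boundaries:

await thread.streamText(
  { prompt },
  {
    saveStreamDeltas: {
      // Assumption: the regex form splits saved deltas at sentence boundaries.
      chunking: /[.!?]\s+/,
      throttleMs: 250, // flush more often for snappier sentence-by-sentence updates
    },
  }
);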
Retrieve Stream Deltas
import { paginationOptsValidator } from "convex/server";
import { v } from "convex/values";
import { vStreamArgs, syncStreams, listUIMessages } from "@convex-dev/agent";
import { query } from "./_generated/server";
import { components } from "./_generated/api";

export const listMessagesWithStreams = query({
  args: {
    threadId: v.string(),
    paginationOpts: paginationOptsValidator,
    streamArgs: vStreamArgs,
  },
  handler: async (ctx, { threadId, paginationOpts, streamArgs }) => {
    // Paginated, persisted messages for the thread.
    const messages = await listUIMessages(ctx, components.agent, {
      threadId,
      paginationOpts,
    });
    // In-flight stream deltas, merged into the same response.
    const streams = await syncStreams(ctx, components.agent, {
      threadId,
      streamArgs,
    });
    return { ...messages, streams };
  },
});
Display Streaming in React
import { useUIMessages, useSmoothText, type UIMessage } from "@convex-dev/agent/react";
import { api } from "../convex/_generated/api"; // adjust to your generated api path

function ChatStreaming({ threadId }: { threadId: string }) {
  const { results } = useUIMessages(
    // Assumes the query above is exported from convex/streaming.ts.
    api.streaming.listMessagesWithStreams,
    { threadId },
    { initialNumItems: 20, stream: true }
  );
  return (
    <div>
      {results?.map((message) => (
        <StreamingMessage key={message.key} message={message} />
      ))}
    </div>
  );
}

function StreamingMessage({ message }: { message: UIMessage }) {
  // Animate text as deltas arrive; renders instantly once the message completes.
  const [visibleText] = useSmoothText(message.text, {
    startStreaming: message.status === "streaming",
  });
  return <div>{visibleText}</div>;
}
Key Principles
- Asynchronous streaming: Best for background generations
- Delta throttling: Balances responsiveness with write volume
- Stream status: Check message.status === "streaming" (see the sketch after this list)
- Smooth animation: Use useSmoothText for text updates
- Persistence: Deltas survive page reloads
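As mentioned in the stream status bullet above, message.status can also drive small UI affordances such as a typing indicator. A hypothetical sketch (the TypingIndicator name and cursor glyph are illustrative):

import { type UIMessage } from "@convex-dev/agent/react";

function TypingIndicator({ message }: { message: UIMessage }) {
  // Render a cursor only while deltas are still arriving for this message.
  if (message.status !== "streaming") return null;
  return <span aria-hidden="true">▍</span>;
}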
Next Steps
- See the messages skill for message management
- See the fundamentals skill for agent setup
- See the context skill for streaming-aware context
GitHub Repository
https://github.com/Sstobo/convex-skills
Related Skills
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
cloudflare-turnstile
This skill provides comprehensive guidance for implementing Cloudflare Turnstile as a CAPTCHA-alternative bot protection system. It covers integration for forms, login pages, API endpoints, and frameworks like React/Next.js/Hono, while handling invisible challenges that maintain user experience. Use it when migrating from reCAPTCHA, debugging error codes, or implementing token validation and E2E tests.
