Chat
TeamWeb AI provides a built-in chat interface for conversations with your assistants. The interface supports both text and voice communication.
Text Chat
Click Chat on an assistant’s detail page to open the text chat interface. If you have a recent conversation with this assistant, it will resume where you left off. Otherwise, a new conversation is created.
The Interface
The chat uses a split-screen layout:
- Canvas (left side) — Displays visual cards created by the assistant using the show tool. Cards can contain titles, markdown-formatted body text, and images.
- Messages (right side) — The conversation thread with your messages and the assistant’s responses.
Sending Messages
Type your message in the input field and press Send. Your message appears immediately, and a thinking indicator shows while the assistant processes its response. The interface polls for new messages automatically.
While the assistant is thinking, the indicator shows which tool it’s currently using (e.g., “Searching knowledge…” or “Generating image…”).
Technical Details
The agent loop — When you send a message, TeamWeb AI calls the LLM with the assembled system prompt, the full conversation history, and all available tool definitions. If the LLM responds with a tool call (e.g., search_knowledge or save_content) instead of final text, TeamWeb AI executes that tool, appends the result to the conversation, and calls the LLM again. This loop — LLM call, tool execution, LLM call — repeats until the LLM returns a final text response or a safety limit of 100 iterations is reached. In practice, most conversations complete in 1-5 iterations.
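The loop described above can be sketched as follows. This is a minimal illustration of the documented behavior, not TeamWeb AI's actual code: the `call_llm` and `execute_tool` callables and the response dict shapes are assumptions, while the 100-iteration safety limit comes from the docs.

```python
MAX_ITERATIONS = 100  # documented safety limit

def run_agent_loop(system_prompt, history, tools, call_llm, execute_tool):
    """Repeat LLM call -> tool execution until the LLM returns final text.

    call_llm and execute_tool are injected so the sketch stays
    self-contained; a real implementation would wire in actual services.
    """
    messages = list(history)
    for _ in range(MAX_ITERATIONS):
        response = call_llm(system_prompt, messages, tools)
        if response["type"] == "text":
            return response["text"]  # final text response ends the loop
        # The LLM asked for a tool: execute it and feed the result back.
        result = execute_tool(response["tool"], response["input"])
        messages.append({"role": "assistant", "tool_call": response})
        messages.append({"role": "tool", "name": response["tool"], "result": result})
    raise RuntimeError("agent loop hit the iteration safety limit")
```

In this shape, a conversation that needs one tool call costs two LLM calls, which matches the typical 1-5 iterations noted above.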
Processing state — During this loop, the conversation is flagged as “processing,” which drives the thinking indicator in the UI. The indicator updates in real time to show which tool the agent is currently executing, so you can follow along as the assistant searches knowledge, generates images, or spawns child runs on other assistants.
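A client driving the thinking indicator might poll the processing flag like this. The `fetch_state` callable and the state dict fields are illustrative stand-ins for whatever endpoint the UI actually calls.

```python
import time

def poll_conversation(fetch_state, render, interval=1.0, max_polls=30):
    """Poll until the conversation leaves the 'processing' state.

    fetch_state is a hypothetical callable returning a dict such as
    {"processing": True, "current_tool": "search_knowledge"}.
    """
    for _ in range(max_polls):
        state = fetch_state()
        if not state["processing"]:
            render("Assistant replied.")
            return state
        # Show which tool the agent is executing right now.
        render(f"Thinking... ({state.get('current_tool', 'reasoning')})")
        time.sleep(interval)
    return None
```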
Message storage — Each message is stored in two representations. The content field holds the plain text shown in the conversation list and chat view. The content_json field holds the full structured interaction — text blocks, tool call inputs, tool results, and thinking text — preserving the complete agent interaction. The API Logs viewer reads from this structured representation to show the full detail of each turn.
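The two representations might look like this. The field names content and content_json come from the docs above; the inner block shapes (thinking, tool_call, tool_result, text) are illustrative, not TeamWeb AI's exact schema.

```python
import json

# Plain-text representation: what the conversation list and chat view show.
content = "Here are the top results from the knowledge base."

# Structured representation: the full agent interaction for one turn.
content_json = json.dumps([
    {"type": "thinking", "text": "The user wants sources; search first."},
    {"type": "tool_call", "name": "search_knowledge", "input": {"query": "pricing"}},
    {"type": "tool_result", "name": "search_knowledge", "result": ["doc-12", "doc-31"]},
    {"type": "text", "text": "Here are the top results from the knowledge base."},
])

def plain_text(content_json_str):
    """Recover the display text from the structured turn, the way a
    viewer might when only content_json is available."""
    blocks = json.loads(content_json_str)
    return " ".join(b["text"] for b in blocks if b["type"] == "text")
```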
Starting a New Conversation
Click New Chat to start a fresh conversation. The previous conversation is preserved and can be viewed in the Conversations section.
Voice Chat
If the WebRTC voice plugin is configured, a microphone button appears in the chat interface. Click it to start a voice conversation with the assistant.
During voice chat:
- You speak naturally and the assistant responds with speech
- The assistant can still use tools and display results on the canvas
- A transcript of the conversation is maintained
- The assistant announces tool usage before executing (“Let me search for that…”)
Canvas Cards
When the assistant uses the show tool, a card appears on the canvas with:
- A numbered badge for easy reference
- A title
- A markdown-formatted body (truncated to 200 characters with expand option)
- An optional image (click to enlarge)
Cards accumulate during the conversation, giving you a visual workspace alongside the chat.
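The body truncation might work roughly as follows; the 200-character limit is from the docs, while the function name and return shape are illustrative.

```python
TRUNCATE_AT = 200  # documented card-body limit before "expand" is offered

def card_preview(body_markdown):
    """Return (preview, is_truncated) for a canvas card body."""
    if len(body_markdown) <= TRUNCATE_AT:
        return body_markdown, False
    return body_markdown[:TRUNCATE_AT] + "…", True
```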
Human-in-the-Loop
Assistants can pause mid-conversation to ask you a question using the ask_human tool. This is useful when the assistant needs your input to continue — for example, choosing between options, approving an action, or clarifying requirements.
How It Works
When the assistant calls ask_human:
- The agent loop pauses and the question appears in the chat
- If the question includes options (text, images, or data cards), they are displayed as selectable cards on the canvas and as inline buttons in the chat message
- Click an option to select it, or type a free-form response
- The assistant resumes automatically with your answer
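The pause-and-resume flow above can be sketched like this. The dict shapes and function names are assumptions; what the sketch preserves is the documented behavior: ask_human halts the loop instead of producing a tool result, and the user's answer is fed back as that result so the loop continues.

```python
def handle_tool_call(tool_call, execute_tool):
    """Dispatch one tool call from the agent loop.

    ask_human is special-cased: instead of producing a result
    immediately, it pauses the loop and surfaces the question
    (and any options) to the user.
    """
    if tool_call["name"] == "ask_human":
        return {
            "paused": True,
            "question": tool_call["input"]["question"],
            "options": tool_call["input"].get("options", []),
        }
    return {"paused": False, "result": execute_tool(tool_call)}

def resume_with_answer(history, answer):
    """Clicking an option (or typing a reply) appends the answer as the
    ask_human tool result, and the agent loop picks up from there."""
    history.append({"role": "tool", "name": "ask_human", "result": answer})
    return history
```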
Option Cards on Canvas
When the assistant presents choices with rich details (images, descriptions), each option appears as a selectable card on the canvas. These are visually distinct from show cards — they have a highlighted border on hover and a Select button instead of a View button.
Clicking Select sends the option label as your reply and the assistant continues.
Non-Interactive Channels
When the assistant asks a question via email or Slack, options are formatted as a numbered list in the message body. Reply with your choice (number or label) to continue the conversation.
Assistants have show and ask_human enabled by default — you don’t need to configure anything separately.
Workspace Sidebar
When an assistant uses planning or workspace tools, a sidebar panel appears on the right side of the admin chat interface. The sidebar provides a live view of the assistant’s internal state as it works.
Plan View
If the assistant has created a plan (via the manage_plan tool), the sidebar displays it as a hierarchical list of steps. Each step shows a status indicator:
- Pending — Not yet started
- In-progress — Currently being worked on
- Completed — Finished successfully
- Skipped — Deliberately skipped
- Failed — Encountered an error
Nested sub-steps are indented beneath their parent step, so you can follow both the high-level progress and the fine-grained detail.
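A plan with nested sub-steps and per-step statuses might be modeled and rendered like this. The class and function names are illustrative, not the manage_plan tool's actual schema; the five statuses are the documented ones.

```python
from dataclasses import dataclass, field

STATUSES = {"pending", "in_progress", "completed", "skipped", "failed"}

@dataclass
class PlanStep:
    title: str
    status: str = "pending"
    substeps: list = field(default_factory=list)

def render_plan(steps, indent=0):
    """Flatten a plan into indented lines, the way the sidebar lists it."""
    lines = []
    for step in steps:
        lines.append("  " * indent + f"[{step.status}] {step.title}")
        lines.extend(render_plan(step.substeps, indent + 1))
    return lines
```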
Workspace Files
Below the plan, the sidebar lists all files in the conversation’s workspace. Each entry shows the filename, file size, and last-modified time. Click a file to view its content in-place.
Files are created and updated by the assistant using the write_file tool and persist across turns within the conversation.
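As a rough mental model, the per-conversation file store behaves like a small key-value map keyed by filename. This in-memory sketch is an assumption about shape only; the real store persists across turns and is cleaned up with the rest of the workspace.

```python
class ConversationWorkspace:
    """In-memory sketch of per-conversation file storage."""

    def __init__(self):
        self._files = {}

    def write_file(self, name, content):
        # Mirrors the write_file tool: create or overwrite by name.
        self._files[name] = content

    def list_files(self):
        return sorted(self._files)

    def read_file(self, name):
        return self._files[name]
```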
Real-Time Updates
The sidebar updates automatically via polling while the conversation is active, so you can watch the plan progress and new files appear as the assistant works — no manual refresh needed.
Availability
The workspace sidebar is only visible in the admin browser chat interface. Conversations conducted through other channels (email, Slack, etc.) do not display the sidebar, though the underlying workspace data is still created and stored.
Cleanup
Workspace data — plans and files — is automatically cleaned up after a configurable time-to-live once the conversation ends. The TTL is controlled by the WORKSPACE_TTL_DAYS environment variable (default: 7 days).
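The expiry check behind such a cleanup job might look like this. WORKSPACE_TTL_DAYS and its default of 7 are documented above; the function itself and its surrounding job are assumptions.

```python
import os
from datetime import datetime, timedelta, timezone

def is_expired(conversation_ended_at, now=None):
    """Return True once a conversation's workspace has outlived its TTL."""
    ttl_days = int(os.environ.get("WORKSPACE_TTL_DAYS", "7"))  # default: 7
    now = now or datetime.now(timezone.utc)
    return now - conversation_ended_at > timedelta(days=ttl_days)
```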