Chat

TeamWeb AI provides a built-in chat interface for conversations with your assistants. The interface supports both text and voice communication.

Text Chat

Click Chat on an assistant’s detail page to open the text chat interface. If you have a recent conversation with this assistant, it will resume where you left off. Otherwise, a new conversation is created.

The Interface

The chat uses a split-screen layout:

  • Canvas (left side) — Displays visual cards created by the assistant using the show tool. Cards can contain titles, markdown-formatted body text, and images.
  • Messages (right side) — The conversation thread with your messages and the assistant’s responses.

Sending Messages

Type your message in the input field and press Send. Your message appears immediately, and a thinking indicator shows while the assistant processes its response. The interface polls for new messages automatically.

While the assistant is thinking, the indicator shows which tool it’s currently using (e.g., “Searching knowledge…” or “Generating image…”).
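The polling behavior can be sketched as a small client-side loop. This is a minimal illustration, not the documented TeamWeb AI API: the `fetch` callback, the message shape, and the 2-second interval are all assumptions.

```python
import time

def poll_messages(fetch, last_seen_id, interval=2.0, timeout=60.0):
    """Repeatedly call fetch() until a message newer than last_seen_id arrives.

    fetch() stands in for an HTTP request that returns the conversation's
    messages, each as a dict with an "id" field (an assumed shape).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        messages = fetch()
        new = [m for m in messages if m["id"] > last_seen_id]
        if new:
            return new          # new messages to render in the thread
        time.sleep(interval)    # nothing yet; wait and poll again
    return []                   # timed out with no new messages
```

In practice the interface also surfaces the "processing" state during each poll, which is what drives the thinking indicator described above.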

Technical Details

The agent loop — When you send a message, TeamWeb AI calls the LLM with the assembled system prompt, the full conversation history, and all available tool definitions. If the LLM responds with a tool call (e.g., search_knowledge or save_content) instead of final text, TeamWeb AI executes that tool, appends the result to the conversation, and calls the LLM again. This loop — LLM call, tool execution, LLM call — repeats until the LLM returns a final text response or a safety limit of 100 iterations is reached. In practice, most conversations complete in 1-5 iterations.
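The loop above can be sketched in a few lines. The function names (`call_llm`, `run_tool`) and the response/message shapes are illustrative assumptions; only the control flow and the 100-iteration safety limit come from the description above.

```python
MAX_ITERATIONS = 100  # safety limit from the agent loop description

def agent_loop(call_llm, run_tool, history):
    """Run the LLM/tool loop until the LLM returns final text.

    call_llm(history) and run_tool(name, input) are placeholders for the
    real LLM call (system prompt + history + tool definitions) and tool
    executor; their signatures are assumptions for this sketch.
    """
    for _ in range(MAX_ITERATIONS):
        response = call_llm(history)
        if response["type"] == "text":
            return response["text"]  # final text response: loop ends
        # Otherwise the LLM asked for a tool call (e.g. search_knowledge):
        # execute it, append the result, and call the LLM again.
        result = run_tool(response["tool"], response["input"])
        history.append({"role": "tool", "name": response["tool"], "content": result})
    raise RuntimeError("agent loop reached the 100-iteration safety limit")
```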

Processing state — During this loop, the conversation is flagged as “processing,” which drives the thinking indicator in the UI. The indicator updates in real time to show which tool the agent is currently executing, so you can follow along as the assistant searches knowledge, generates images, or delegates tasks.

Message storage — Each message is stored in two representations. The content field holds the plain text shown in the conversation list and chat view. The content_json field holds the full structured interaction — text blocks, tool call inputs, tool results, and thinking text — preserving the complete agent interaction. The API Logs viewer reads from this structured representation to show the full detail of each turn.
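As an illustration of the two representations, a stored message might look like the following. The block types and field names inside `content_json` are assumptions; only the `content`/`content_json` split is taken from the description above.

```python
import json

# Plain text (content) next to the full structured interaction (content_json).
message = {
    "content": "Here are the top three results.",  # shown in list and chat views
    "content_json": json.dumps([                   # complete agent interaction
        {"type": "thinking", "text": "The user wants recent results."},
        {"type": "tool_call", "name": "search_knowledge", "input": {"query": "recent results"}},
        {"type": "tool_result", "name": "search_knowledge", "output": ["a", "b", "c"]},
        {"type": "text", "text": "Here are the top three results."},
    ]),
}

# A logs viewer reconstructs each turn from the structured representation:
blocks = json.loads(message["content_json"])
tool_calls = [b for b in blocks if b["type"] == "tool_call"]
```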

Starting a New Conversation

Click New Chat to start a fresh conversation. The previous conversation is preserved and can be viewed in the Conversations section.

Voice Chat

If the WebRTC voice plugin is configured, a microphone button appears in the chat interface. Click it to start a voice conversation with the assistant.

During voice chat:

  • You speak naturally and the assistant responds with speech
  • The assistant can still use tools and display results on the canvas
  • A transcript of the conversation is maintained
  • The assistant announces tool usage before executing (“Let me search for that…”)

Voice chat requires the OpenAI WebRTC channel plugin to be enabled on the assistant. See WebRTC for setup instructions.

Canvas Cards

When the assistant uses the show tool, a card appears on the canvas with:

  • A numbered badge for easy reference
  • A title
  • A markdown-formatted body (truncated to 200 characters, with an expand option)
  • An optional image (click to enlarge)

Cards accumulate during the conversation, giving you a visual workspace alongside the chat.
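A card's text rendering can be sketched as follows. The card dictionary shape is an assumption; the numbered badge, title, 200-character truncation, and optional image come from the list above.

```python
BODY_LIMIT = 200  # body text is truncated to 200 characters

def render_card(index, card):
    """Render one canvas card as plain text: badge, title, body, optional image."""
    body = card["body"]
    if len(body) > BODY_LIMIT:
        body = body[:BODY_LIMIT] + "…"  # truncated; the UI offers an expand option
    lines = [f"[{index}] {card['title']}", body]
    if card.get("image_url"):
        lines.append(f"(image: {card['image_url']})")
    return "\n".join(lines)
```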