Tools
Tools are capabilities that assistants can use during conversations. TeamWeb AI provides a comprehensive set of built-in tools, and additional tools can be added through project plugins, MCP servers, custom HTTP tools, or Python custom tools.
The Tools page in the sidebar shows a complete inventory of all available tools across all sources. From there you can create custom tools, import from OpenAPI specs, and configure sandbox settings for each tool.
When an assistant needs to take an action — searching the web, saving a note, running code, or creating content — it calls a tool. Each tool has a defined purpose, accepted parameters, and return value. The assistant decides which tools to use based on the conversation context and its instructions.
Some built-in tools are stateful. For example, the browser_action tool manages a live browser session inside the application runtime so an assistant can navigate, click, type, wait for page changes, extract content, and take screenshots across multiple tool calls in the same conversation.
Technical Details
Tool dispatch pipeline — When the LLM requests a tool call, the system routes it through a four-tier dispatch pipeline. First, it checks a map of core tool handlers — the built-in tools that ship with TeamWeb AI. If the tool name starts with mcp_, it routes to the MCP connection manager, which forwards the call to the appropriate MCP server. If the name starts with custom_, it dispatches to the HTTP tool executor for custom tools. Otherwise, it checks the plugin registry for matching plugin tools. This layered dispatch means built-in tools, MCP tools, custom tools, and plugin tools coexist seamlessly.
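The four tiers above can be sketched as a simple routing function. This is an illustrative sketch only: the handler names, registries, and return values are hypothetical stand-ins, not TeamWeb AI's actual internals.

```python
# Hypothetical registries standing in for the core handler map and plugin registry.
CORE_TOOLS = {
    "show": lambda args: f"showing {args}",
    "save_content": lambda args: f"saved {args}",
}
PLUGIN_TOOLS = {
    "weather_lookup": lambda args: f"weather for {args}",
}

def call_mcp(name, args):          # stand-in for the MCP connection manager
    return f"mcp handled {name}"

def call_custom_http(name, args):  # stand-in for the HTTP tool executor
    return f"http handled {name}"

def dispatch(tool_name, args):
    if tool_name in CORE_TOOLS:                  # tier 1: built-in tools
        return CORE_TOOLS[tool_name](args)
    if tool_name.startswith("mcp_"):             # tier 2: MCP servers
        return call_mcp(tool_name, args)
    if tool_name.startswith("custom_"):          # tier 3: custom HTTP tools
        return call_custom_http(tool_name, args)
    if tool_name in PLUGIN_TOOLS:                # tier 4: plugin registry
        return PLUGIN_TOOLS[tool_name](args)
    raise KeyError(f"unknown tool: {tool_name}")
```

Because the tiers are checked in order, a core tool name always wins, and the `mcp_` and `custom_` prefixes keep external tools from colliding with built-ins or plugins.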
Tool context — Every tool execution receives a rich context object containing the current conversation, assistant, user, database session, knowledge service, media service, and both plugin-level and per-assistant tool configuration. This lets tools interact with any part of the system — searching knowledge, saving files, logging API calls — without needing to manage their own connections or state.
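A context object of this kind might look like the following sketch. The field names are guesses inferred from the description above, not TeamWeb AI's real class; the point is that a tool receives services through the context rather than opening its own connections.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolContext:
    # Hypothetical field names mirroring the services listed in the docs.
    conversation: Any
    assistant: Any
    user: Any
    db_session: Any
    knowledge_service: Any
    media_service: Any
    plugin_config: dict = field(default_factory=dict)
    assistant_tool_config: dict = field(default_factory=dict)

def search_knowledge_tool(ctx: ToolContext, query: str):
    # The tool delegates to the injected knowledge service; it never
    # manages its own database connection or state.
    return ctx.knowledge_service.search(query)
```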
Stateful tools — Most tools are stateless request/response calls. A small subset maintains conversation-scoped runtime state. The browser tool, for example, keeps a live managed browser session tied to the conversation and stores only high-level metadata in the database. If that live runtime expires, the assistant must start a new browser session rather than replaying old actions.
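Conversation-scoped runtime state with expiry could be modeled as below. The registry class and its TTL mechanism are illustrative assumptions, not the actual browser-tool implementation.

```python
import time

class BrowserSessionRegistry:
    """Hypothetical sketch: live sessions keyed by conversation, with a TTL."""

    def __init__(self, ttl_seconds=900.0):
        self._sessions = {}   # conversation_id -> (session, last_used)
        self._ttl = ttl_seconds

    def start(self, conversation_id, session):
        self._sessions[conversation_id] = (session, time.monotonic())
        return session

    def get(self, conversation_id):
        entry = self._sessions.get(conversation_id)
        if entry is None:
            return None
        session, last_used = entry
        if time.monotonic() - last_used > self._ttl:
            # Runtime expired: drop it, so the caller must start a new
            # session instead of replaying actions against a dead one.
            del self._sessions[conversation_id]
            return None
        self._sessions[conversation_id] = (session, time.monotonic())
        return session
```

Only the live session object lives in this in-memory registry; anything worth persisting (in the real system, high-level metadata) would go to the database separately.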
Eager vs. deferred tools — To keep the LLM’s input context concise, TeamWeb AI uses a tool search mechanism. Frequently used tools (like show, save_content, search_knowledge, and spawn_child_runs) are “eager” — their full schemas are always included in the LLM request. Less common tools are “deferred” — the LLM receives only their names and descriptions and can discover their full schemas on demand when it decides to use one. This significantly reduces input token usage and cost for assistants with many tools enabled.
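Assembling the LLM request under an eager/deferred split might look like this sketch. The tool schemas and the split logic are illustrative; only the eager tool names come from the description above.

```python
# Hypothetical tool catalog; the schema shapes are illustrative.
TOOLS = {
    "show": {
        "description": "Render content to the user",
        "parameters": {"type": "object",
                       "properties": {"content": {"type": "string"}}},
    },
    "pdf_export": {
        "description": "Export a document as a PDF",
        "parameters": {"type": "object",
                       "properties": {"doc_id": {"type": "string"}}},
    },
}

# Eager names as listed in the docs; everything else is deferred.
EAGER = {"show", "save_content", "search_knowledge", "spawn_child_runs"}

def build_tool_list(enabled_names):
    eager, deferred = [], []
    for name in enabled_names:
        spec = TOOLS[name]
        if name in EAGER:
            # Full schema goes into every LLM request.
            eager.append({"name": name, **spec})
        else:
            # Only name and description; the full schema is fetched on demand.
            deferred.append({"name": name,
                             "description": spec["description"]})
    return eager, deferred
```

The deferred entries cost only a name and a one-line description per tool, which is why the split keeps input token usage low even when many tools are enabled.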