Tools

Tools are capabilities that assistants can use during conversations. TeamWeb AI provides a comprehensive set of built-in tools, and additional tools can be added through project plugins, MCP servers, or custom HTTP tools.

The Tools page in the sidebar shows a complete inventory of all available tools across all sources. From there you can create custom tools, import from OpenAPI specs, and configure sandbox settings for each tool.

When an assistant needs to take an action — searching the web, saving a note, running code, or creating content — it calls a tool. Each tool has a defined purpose, accepted parameters, and return value. The assistant decides which tools to use based on the conversation context and its instructions.

Technical Details

Tool dispatch pipeline — When the LLM requests a tool call, the system routes it through a four-tier dispatch pipeline. First, it checks a map of core tool handlers — the built-in tools that ship with TeamWeb AI. If the tool name starts with mcp_, it routes to the MCP connection manager, which forwards the call to the appropriate MCP server. If the name starts with custom_, it dispatches to the HTTP tool executor for custom tools. Otherwise, it checks the plugin registry for matching plugin tools. This layered dispatch means built-in tools, MCP tools, custom tools, and plugin tools coexist seamlessly.
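The four tiers above can be sketched roughly as follows. This is a minimal illustration, not TeamWeb AI's actual code: every name here (`CORE_TOOLS`, `McpManager`, `HttpToolExecutor`, `PLUGIN_REGISTRY`, `dispatch_tool_call`) is an assumption invented for the example.

```python
# Hypothetical sketch of the four-tier tool dispatch. Stub handlers
# return tagged strings so the routing order is visible.

class McpManager:
    """Stand-in for the MCP connection manager."""
    def call(self, name, args, ctx):
        return f"mcp:{name}"

class HttpToolExecutor:
    """Stand-in for the custom HTTP tool executor."""
    def call(self, name, args, ctx):
        return f"custom:{name}"

# Tier 1: built-in tool handlers keyed by name.
CORE_TOOLS = {"search_knowledge": lambda args, ctx: "core:search_knowledge"}
# Tier 4: tools contributed by plugins.
PLUGIN_REGISTRY = {"weather": lambda args, ctx: "plugin:weather"}

mcp_manager = McpManager()
http_executor = HttpToolExecutor()

def dispatch_tool_call(name, args, ctx=None):
    # 1. Core tools shipped with the product are checked first.
    if name in CORE_TOOLS:
        return CORE_TOOLS[name](args, ctx)
    # 2. The mcp_ prefix routes to the MCP connection manager.
    if name.startswith("mcp_"):
        return mcp_manager.call(name[len("mcp_"):], args, ctx)
    # 3. The custom_ prefix routes to the HTTP tool executor.
    if name.startswith("custom_"):
        return http_executor.call(name[len("custom_"):], args, ctx)
    # 4. Otherwise, consult the plugin registry.
    handler = PLUGIN_REGISTRY.get(name)
    if handler is None:
        raise KeyError(f"unknown tool: {name}")
    return handler(args, ctx)
```

Because prefixes are checked before the plugin registry, a plugin cannot shadow an `mcp_` or `custom_` tool name, which keeps the namespaces disjoint.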

Tool context — Every tool execution receives a rich context object containing the current conversation, assistant, user, database session, knowledge service, media service, and both plugin-level and per-assistant tool configuration. This lets tools interact with any part of the system — searching knowledge, saving files, logging API calls — without needing to manage their own connections or state.
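A context object like the one described might look like the dataclass below. The field names are assumptions inferred from the paragraph above, not the real TeamWeb AI API.

```python
# Illustrative shape of a tool execution context; all field names
# are assumptions for the sake of the example.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class ToolContext:
    conversation: Any        # the current conversation
    assistant: Any           # the assistant making the tool call
    user: Any                # the end user
    db_session: Any          # database session for persistence
    knowledge_service: Any   # search and retrieve knowledge
    media_service: Any       # save and serve files
    plugin_config: dict = field(default_factory=dict)           # plugin-level settings
    assistant_tool_config: dict = field(default_factory=dict)   # per-assistant overrides

# A tool handler receives the context instead of opening its own connections:
def save_note(args: dict, ctx: ToolContext) -> str:
    # Hypothetical: persist via the shared session rather than a new one.
    return f"saved note for {ctx.user}"
```

Centralizing connections and configuration in one object means tool authors never manage sessions or credentials themselves.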

Eager vs. deferred tools — To keep the LLM’s input context concise, TeamWeb AI uses a tool search mechanism. Frequently used tools (like show, save_content, search_knowledge, and delegate_task) are “eager” — their full schemas are always included in the LLM request. Less common tools are “deferred” — the LLM receives only their names and descriptions and can discover their full schemas on demand when it decides to use one. This significantly reduces input token usage and cost for assistants with many tools enabled.
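The eager/deferred split could be implemented along these lines. This is a sketch under the assumption that each tool carries a description and a full JSON schema; the function and dictionary shapes are invented for illustration.

```python
# Hypothetical eager vs. deferred tool exposure. Eager tools send
# their full parameter schema with every LLM request; deferred tools
# send only name + description, with the schema available on demand.

EAGER_TOOLS = {"show", "save_content", "search_knowledge", "delegate_task"}

def build_llm_tool_list(tools: dict) -> list:
    """tools maps name -> {"description": str, "schema": dict}."""
    exposed = []
    for name, spec in tools.items():
        entry = {"name": name, "description": spec["description"]}
        if name in EAGER_TOOLS:
            # Eager: include the full schema in the request context.
            entry["parameters"] = spec["schema"]
        # Deferred: omit the schema; the LLM discovers it when needed.
        exposed.append(entry)
    return exposed
```

For an assistant with dozens of enabled tools, dropping the schemas of rarely used ones from every request is where most of the token savings come from.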