MCP Servers
MCP (Model Context Protocol) servers extend your assistants with external tools beyond TeamWeb AI’s built-in set. An MCP server is any service that implements the Model Context Protocol — a standard for exposing tools, resources, and prompts to AI agents.
How It Works
1. You register an MCP server in TeamWeb AI with its connection details.
2. TeamWeb AI discovers the tools the server provides.
3. You enable the server for one or more assistants.
4. During conversations, the assistant can call those tools alongside TeamWeb AI’s built-in and plugin tools.
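The flow above can be sketched as data shapes. The field names here are illustrative, not TeamWeb AI's actual API:

```python
# Hypothetical registration record (illustrative field names only).
server = {
    "name": "GitHub Server",       # display name, used for tool namespacing
    "runtime": "docker",           # "docker" or "external"
    "transport": "stdio",          # "stdio", "sse", or "streamable_http"
    "image": "example/github-mcp", # Docker image, for the Docker runtime
}

# After discovery, the server's tool catalog might look like:
catalog = [
    {"name": "create_issue", "description": "Create a GitHub issue"},
]

# Enabling the server for an assistant exposes its namespaced tools:
enabled_tools = [f"mcp_github_server__{t['name']}" for t in catalog]
print(enabled_tools)
```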
Runtime and Transport Types
Every MCP server is configured along two axes: where it runs and how TeamWeb AI connects to it.
Runtime Type
| Runtime | Description |
|---|---|
| Docker | TeamWeb AI manages a Docker container for the server. Best for npm packages and self-contained servers. |
| External | The server is already running somewhere. You provide the URL. |
Transport Type
| Transport | Description | Available with |
|---|---|---|
| stdio | JSON-RPC over stdin/stdout. Ephemeral — a fresh container is created each session. | Docker only |
| SSE | HTTP Server-Sent Events. | Docker, External |
| Streamable HTTP | MCP Streamable HTTP transport. | Docker, External |
Tool Discovery
After creating a server, click Test Connection on its detail page. TeamWeb AI connects to the server, issues the MCP tools/list request, and stores the discovered tools. The tool catalog shows each tool’s name and description.
Tools are cached after discovery so that assistants can reference them without connecting to the server on every conversation. The cache is refreshed automatically whenever you test the connection or change the server configuration.
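A discovery exchange looks roughly like this. The message shapes below follow the MCP specification's JSON-RPC 2.0 format (the method is named tools/list); they are not TeamWeb AI internals:

```python
# JSON-RPC 2.0 request sent during discovery (per the MCP spec).
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A typical response: each tool has a name, description, and input schema.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_issue",
                "description": "Create a GitHub issue",
                "inputSchema": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            }
        ]
    },
}

# These entries are what gets persisted as the cached tool catalog.
for tool in response["result"]["tools"]:
    print(tool["name"], "-", tool["description"])
```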
Technical Details — Connection Lifecycle
Lazy initialization — MCP connections use a lazy initialization pattern for efficiency. When the agent loop starts, tool schemas are loaded from cached catalogs stored in the database — no server connection is needed. An actual connection to the MCP server is only established when the LLM first calls an MCP tool during a conversation. This means containers are not started and network connections are not opened unless the assistant actually needs to use MCP tools in that particular conversation.
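A minimal sketch of the lazy pattern, with illustrative class and method names (not TeamWeb AI's real code):

```python
import asyncio

class LazyMCPConnection:
    """Connects to the MCP server only when a tool is actually called."""

    def __init__(self, cached_tools):
        self.cached_tools = cached_tools  # schemas loaded from the database
        self._session = None              # no connection yet

    def tool_schemas(self):
        # Serving schemas to the LLM never touches the server.
        return self.cached_tools

    async def call_tool(self, name, args):
        if self._session is None:
            # First tool call: start the container / open the transport.
            self._session = await self._connect()
        return f"{name} called via {self._session} with {args}"

    async def _connect(self):
        # Placeholder for the real container start and transport handshake.
        return "live-session"

conn = LazyMCPConnection([{"name": "create_issue"}])
print(conn.tool_schemas())  # available without any connection
print(asyncio.run(conn.call_tool("create_issue", {"title": "Bug"})))
```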
Event loop architecture — The MCP connection manager runs a dedicated event loop in a background daemon thread. This is necessary because MCP sessions require persistent background I/O tasks for reading from and writing to transports (stdio pipes, SSE streams, or HTTP connections). The event loop stays alive between tool calls within a single agent loop, so the connection is reused for multiple tool calls. Connections are cleaned up when the agent loop finishes.
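The dedicated-loop pattern can be sketched with Python's asyncio and threading. This mirrors the general technique, not TeamWeb AI's exact implementation:

```python
import asyncio
import threading

# A dedicated event loop running in a background daemon thread.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

async def call_mcp_tool(name):
    await asyncio.sleep(0)  # stands in for transport I/O
    return f"result of {name}"

# The synchronous agent loop submits coroutines to the dedicated loop;
# the connection's background I/O tasks keep running there between calls.
future = asyncio.run_coroutine_threadsafe(call_mcp_tool("create_issue"), loop)
result = future.result(timeout=5)
print(result)

# Cleanup when the agent loop finishes.
loop.call_soon_threadsafe(loop.stop)
thread.join(timeout=5)
loop.close()
```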
Tool Naming
MCP tools are namespaced to avoid collisions with built-in tools and tools from other servers. The naming pattern is:
mcp_{server_name}__{tool_name}

The server name is normalized to lowercase with special characters replaced by underscores. For example, a server named “GitHub Server” providing a create_issue tool would appear as mcp_github_server__create_issue.
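A plausible implementation of this naming rule (the exact normalization TeamWeb AI applies is an assumption):

```python
import re

def mcp_tool_name(server_name: str, tool_name: str) -> str:
    # Lowercase the server name, replace runs of special characters
    # with underscores, and join with the mcp_ prefix.
    normalized = re.sub(r"[^a-z0-9]+", "_", server_name.lower()).strip("_")
    return f"mcp_{normalized}__{tool_name}"

print(mcp_tool_name("GitHub Server", "create_issue"))
# mcp_github_server__create_issue
```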
Technical Details — Tool Catalog Caching
How caching works — After each successful connection to an MCP server, the discovered tools (names, descriptions, and parameter schemas) are persisted to the database. On subsequent conversations, these cached schemas are used to build the tool definitions sent to the LLM — without connecting to the server at all. This is essential for performance, as otherwise every conversation would need to start containers or open network connections just to tell the LLM what tools exist.
First connection — For servers that have never been connected (no cached tools), the system connects immediately during agent loop setup to discover their tools. The discovered tools are then persisted so subsequent conversations use the cache.
Cache invalidation — The cache is refreshed when you click Test Connection, when the server configuration changes (image, command, environment variables), or when the system detects that a running container has been recreated due to configuration changes.
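The read-through behavior can be sketched as follows. Names and storage here are illustrative; in TeamWeb AI the cache is persisted to the database:

```python
tool_cache = {}  # server_id -> list of tool schemas (stands in for the DB)

def get_tools(server_id, discover):
    # discover() connects to the server; it only runs on a cache miss.
    if server_id not in tool_cache:
        tool_cache[server_id] = discover()  # first connection: persist
    return tool_cache[server_id]

def invalidate(server_id):
    # Triggered by Test Connection or a configuration change.
    tool_cache.pop(server_id, None)

connections = []
def discover():
    connections.append(1)
    return [{"name": "create_issue"}]

get_tools("srv1", discover)
get_tools("srv1", discover)  # served from cache, no connection made
print(len(connections))      # still one connection
invalidate("srv1")
get_tools("srv1", discover)  # reconnects after invalidation
print(len(connections))
```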
Container Lifecycle
For Docker-managed servers, TeamWeb AI handles the full container lifecycle:
| Transport | Lifecycle |
|---|---|
| stdio | Ephemeral — created when a tool is called, removed after the session ends |
| SSE / Streamable HTTP (keep running off) | Created on first tool call, removed after the session ends |
| SSE / Streamable HTTP (keep running on) | Persists between sessions. Can be stopped manually from the detail page. |
A background cleanup task periodically removes orphaned or inactive MCP containers.
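The rules in the table above reduce to a small decision, sketched here with assumed parameter names:

```python
def remove_after_session(transport: str, keep_running: bool) -> bool:
    """Should the container be removed when the session ends?"""
    if transport == "stdio":
        return True           # stdio containers are always ephemeral
    return not keep_running   # SSE / Streamable HTTP honor the setting

print(remove_after_session("stdio", False))            # ephemeral
print(remove_after_session("sse", True))               # persists
print(remove_after_session("streamable_http", False))  # removed
```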