# API Logs
The API Logs section provides visibility into every AI model call your assistants make. Use it to monitor costs, debug conversations, and understand how your assistants behave.
## Activity Log
Navigate to API Logs in the sidebar to see a conversation-centric view of all API activity. Each row shows:
- The conversation subject
- The assistant that made the calls
- The number of API calls in that conversation
- Total tokens used
- Total cost in USD
- The time span of the conversation
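The fields above can be sketched as a simple record. This is an illustrative shape only; the field names are assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of one activity-log row; field names are illustrative.
@dataclass
class ActivityRow:
    subject: str           # conversation subject
    assistant: str         # assistant that made the calls
    call_count: int        # number of API calls in the conversation
    total_tokens: int      # input + output + cached tokens
    total_cost_usd: float  # summed cost across all calls
    started_at: datetime   # first call in the conversation
    ended_at: datetime     # last call in the conversation

row = ActivityRow("Draft blog post", "Writer", 4, 12345, 0.0421,
                  datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 9, 6))
print(f"{row.subject}: {row.call_count} calls, ${row.total_cost_usd:.4f}")
```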
### Filtering
Filter the activity log by:
| Filter | Description |
|---|---|
| Date range | Start and end dates (YYYY-MM-DD format) |
| Assistant | Show activity for a specific assistant |
| Model | Filter by AI model used |
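The filters combine with AND semantics: a row must match every filter you set. A minimal in-memory sketch of that behavior (real filtering happens server-side, and the field names here are assumptions):

```python
from datetime import date

# Hypothetical mirror of the UI filters: a row survives only if it
# matches every filter that was supplied.
def filter_rows(rows, start=None, end=None, assistant=None, model=None):
    out = []
    for r in rows:
        if start and r["date"] < start:
            continue
        if end and r["date"] > end:
            continue
        if assistant and r["assistant"] != assistant:
            continue
        if model and r["model"] != model:
            continue
        out.append(r)
    return out

rows = [
    {"date": date(2024, 5, 1), "assistant": "Writer", "model": "model-a"},
    {"date": date(2024, 5, 3), "assistant": "Editor", "model": "model-b"},
]
print(filter_rows(rows, assistant="Writer"))  # only the first row matches
```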
## Conversation Timeline
Click on a conversation in the activity log to see its full event timeline. This shows every LLM call and tool execution in chronological order, including:
- Model used for each call
- Token usage (input, output, cached)
- Cost breakdown
- Duration in milliseconds
- Tools called and their results
- Stop reason (end of response, tool use, etc.)
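Conceptually, the timeline is LLM calls and tool executions merged into one chronological sequence. A minimal sketch of that merge, with illustrative event shapes:

```python
from datetime import datetime

# Hypothetical timeline assembly: LLM calls and tool executions are
# merged into one list ordered by timestamp, as in the timeline view.
llm_calls = [
    {"kind": "llm", "at": datetime(2024, 5, 1, 9, 0, 0), "stop_reason": "tool_use"},
    {"kind": "llm", "at": datetime(2024, 5, 1, 9, 0, 8), "stop_reason": "end_turn"},
]
tool_events = [
    {"kind": "tool", "at": datetime(2024, 5, 1, 9, 0, 4), "name": "search"},
]
timeline = sorted(llm_calls + tool_events, key=lambda e: e["at"])
print([e["kind"] for e in timeline])  # ['llm', 'tool', 'llm']
```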
The timeline auto-updates while a conversation is being processed, so you can watch activity in real time.
## Log Entry Detail
Click on an individual log entry to see the complete details of a single API call:
- System prompt — The full system prompt sent to the model
- Request messages — The conversation messages sent
- Response text — The model’s text response
- Thinking text — The model’s reasoning (if available)
- Tool calls — Detailed tool call information including inputs, results, and duration
- Error — Any error information if the call failed
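As a rough picture of what the detail view surfaces, a single entry might deserialize to something like the following. The keys are illustrative, not the platform's actual schema:

```python
import json

# Hypothetical JSON shape of one log entry; keys are assumptions.
entry = json.loads("""{
  "system_prompt": "You are a helpful assistant.",
  "request_messages": [{"role": "user", "content": "Hi"}],
  "response_text": "Hello!",
  "thinking_text": null,
  "tool_calls": [{"name": "search", "duration_ms": 230, "result": "3 hits"}],
  "error": null
}""")

# Example: find the slowest tool call in this entry.
slowest = max(entry["tool_calls"], key=lambda t: t["duration_ms"])
print(slowest["name"], slowest["duration_ms"])  # search 230
```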
This level of detail is invaluable for debugging assistant behavior and understanding why an assistant responded the way it did.
## Technical Details
What gets captured — Each LLM API call is logged with: the full system prompt (stored with a SHA-256 hash for deduplication across calls that share the same prompt), the conversation messages sent, the model used, a token usage breakdown (input, output, and cached tokens), the calculated cost in USD, the model’s thinking/reasoning text (if extended thinking was enabled), all tool calls with their inputs and result summaries, the stop reason, and wall-clock duration in milliseconds.
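The SHA-256 deduplication mentioned above can be sketched as follows: each log entry stores only the hash, and the prompt text itself is written once per unique prompt. The store and function names are illustrative:

```python
import hashlib

# Sketch of prompt deduplication: calls that share a system prompt
# share one stored copy, keyed by the prompt's SHA-256 hash.
store = {}  # hash -> prompt text, stored once

def log_prompt(prompt: str) -> str:
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    store.setdefault(digest, prompt)  # write only on first sight
    return digest  # the log entry keeps just this key

a = log_prompt("You are a helpful assistant.")
b = log_prompt("You are a helpful assistant.")  # same prompt, same key
print(a == b, len(store))  # True 1
```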
Tool API call logging — Beyond LLM calls, tool plugins can log their own outbound HTTP requests (to external APIs like WordPress, Unsplash, or any third-party service). These appear in the conversation timeline alongside LLM calls, creating a unified chronological view of all external API activity during a conversation — both the LLM calls that drive the agent loop and the HTTP requests made by the tools the agent invokes.
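A tool plugin's contribution to that unified view amounts to appending a timed record for each outbound request. A hedged sketch, with hypothetical helper and field names:

```python
import time

events = []  # shared timeline; LLM call entries would be appended here too

# Hypothetical helper a tool plugin might use to record an outbound
# HTTP request in the conversation timeline.
def log_tool_request(tool, method, url, send):
    start = time.monotonic()
    status = send()  # perform the actual HTTP call
    events.append({
        "kind": "tool_http",
        "tool": tool,
        "method": method,
        "url": url,
        "status": status,
        "duration_ms": int((time.monotonic() - start) * 1000),
    })
    return status

# A stub transport stands in for a real HTTP client call here.
status = log_tool_request("wordpress", "POST",
                          "https://example.com/wp-json/wp/v2/posts",
                          lambda: 201)
print(status, events[0]["kind"])  # 201 tool_http
```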
Rate limit visibility — Rate limit headers from the LLM provider are preserved on each log entry, showing remaining request and token budgets at the time of each call. This helps diagnose throttling issues and understand capacity usage patterns.
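Reading those preserved headers is a matter of pulling the remaining-budget values off each entry. The `x-ratelimit-*` header names below follow a common provider convention but are assumptions, not the exact headers every provider sends:

```python
# Hypothetical headers as preserved on a log entry; names are assumptions.
headers = {
    "x-ratelimit-remaining-requests": "49",
    "x-ratelimit-remaining-tokens": "39500",
}

def remaining_budgets(h):
    # -1 signals a header the provider did not send
    return {
        "requests": int(h.get("x-ratelimit-remaining-requests", -1)),
        "tokens": int(h.get("x-ratelimit-remaining-tokens", -1)),
    }

print(remaining_budgets(headers))  # {'requests': 49, 'tokens': 39500}
```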
For a high-level overview of usage patterns, costs, and trends, see the Analytics Dashboard.