Debug CLI
The Debug CLI is a command-line tool for exporting the complete debug picture of a conversation: its messages, the LLM calls that produced each response, workflow state, and any child-run conversations spawned from it. Use it when you need more detail than the web API Logs view, or when you want to pipe the dump into another tool (an AI agent, jq, a pastebin).
When to use it
The API Logs web UI is the right tool for browsing and filtering conversation activity. Reach for the CLI when you want to:
- Feed a conversation to an AI assistant for help debugging a tricky agent behavior
- Grep a long conversation for a specific tool call, error, or state change
- Archive a debug snapshot before cleanup or a migration
- Pipe structured data to `jq` for quick aggregate questions (total cost, which tools fired, etc.)
The command is read-only — no database rows are modified.
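The grep use case needs no JSON at all: save the default text dump and search the turn and tool-call markers. A minimal sketch, where the here-doc is a hypothetical miniature standing in for real CLI output:

```shell
# Hypothetical miniature of a dump; a real one would come from the
# debug CLI with -o /tmp/conv-42.md
cat > /tmp/conv-42.md <<'EOF'
## Turn 2 - assistant [msg 102 @ 14:30:05]
### LLM call [log 501] model=... tokens=1240/320 cost=$0.021
Tool calls:
- read_notes({"query":"meeting"}) -> 3 notes
EOF

# Which lines mention the read_notes tool?
grep -n 'read_notes' /tmp/conv-42.md
```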
Basic usage
```
docker exec teamwebai-web-1 uv run flask debug conversation <id>
```

The default output is a markdown-like text dump sent to stdout. It contains the conversation metadata, a deduplicated list of system prompts seen, and one section per message turn with the LLM calls that fired during that turn.
Pass `--help` for the full agent-facing reference:
```
docker exec teamwebai-web-1 uv run flask debug conversation --help
```

Output structure
```
# Conversation 42
Subject: … Assistant: … Project: …
Created: … Ended: … (workflow_complete)

## System prompts
- hash=abc123 seen=4 turns [body hidden; use --show-system]

## Turn 1 — user [msg 101 @ 14:30:02]
<user message>

## Turn 2 — assistant [msg 102 @ 14:30:05]
<assistant response>

### LLM call [log 501] model=claude-opus-4-7 tokens=1240/320 cost=$0.021 duration=4300ms
Tool calls:
- read_notes({"query":"…"}) → <result preview>
Thinking: <thinking text>
[system prompt: hash=abc123]

## Turn N …
[child conversation #128]        <-- marker at the spawn_child_runs call site

---

## Child conversation 128 …      <-- appended when --children is on
```

Optional sections appear only when the relevant flags are set:

- `## Workflow runs` — when `--workflow` or `--all`
- `## Child conversations (references)` — when `--children` or `--all`
- `## Peripheral records` — when `--all` (files, browser sessions, plans, task instances)
- `## Pre-turn LLM logs` — when logs exist with timestamps preceding the first message
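When scripting against `--json` output, you can check which of these optional sections a given dump actually populated. A sketch against a hypothetical miniature dump, with field names following the JSON shape documented below:

```shell
# Hypothetical miniature of a --json dump made with --workflow:
cat > /tmp/conv-42.json <<'EOF'
{"primary": {"id": 42, "workflow_runs": [{"state": "done"}],
             "child_refs": null, "peripheral": null},
 "children": []}
EOF

# Print the optional sections that are populated (non-null):
jq -r '.primary
       | to_entries[]
       | select(.key == "workflow_runs" or .key == "child_refs" or .key == "peripheral")
       | select(.value != null)
       | .key' /tmp/conv-42.json
# prints: workflow_runs
```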
Flags
| Flag | Description |
|---|---|
| `-o, --output PATH` | Write the dump to a file instead of stdout. |
| `--json` | Emit structured JSON instead of the default markdown-like text. Use when piping to `jq` or another parser. |
| `--workflow` | Include workflow runs: state, event log, gathered data, history. Pass when debugging workflow state transitions. |
| `--children` | Include child-run conversations, appended after the main trace. Pass when debugging `spawn_child_runs` flows. |
| `--all` | Shortcut for `--workflow --children` plus peripheral records. |
| `--no-llm-logs` | Drop LLM call logs for a messages-only view. Use when you already know the LLM behavior and just want the user-visible conversation. |
| `--show-system` | Print full system prompt bodies (default: hash only). Use when debugging prompt building or cache behavior. |
| `--show-requests` | Print the `request_messages` array sent to the LLM per call (default: suppressed). Use when debugging context building or history summarization. |
Defaults at a glance
- Messages and LLM logs: on by default. `--no-llm-logs` opts out.
- Workflow, children, peripheral: off by default. Opt in with the corresponding flag or `--all`.
- System prompts and request messages: bodies suppressed by default. Opt in with `--show-system` / `--show-requests`.
Examples
```
# Default dump — messages + LLM logs to stdout
docker exec teamwebai-web-1 uv run flask debug conversation 42

# Save a full dump to a file for pasting into another agent
docker exec teamwebai-web-1 uv run flask debug conversation 42 --all -o /tmp/conv-42.md

# Trace a spawn_child_runs flow end-to-end
docker exec teamwebai-web-1 uv run flask debug conversation 42 --children

# "Why did the model call tool X on turn 3?" — show full prompt and request context
docker exec teamwebai-web-1 uv run flask debug conversation 42 --show-system --show-requests

# Total cost of a conversation via jq
docker exec teamwebai-web-1 uv run flask debug conversation 42 --json \
  | jq '[.primary.turns[].llm_logs[].total_cost|tonumber] | add'

# Names of every tool called in a conversation
docker exec teamwebai-web-1 uv run flask debug conversation 42 --json \
  | jq -r '.primary.turns[].llm_logs[].tool_names[]?' | sort -u
```

JSON output shape
`--json` emits a single top-level object with a `primary` conversation and a flat `children` array. Each entry is a `ConversationDump` with its own `turns[]`, each carrying embedded `llm_logs[]`. Optional sections (`workflow_runs`, `child_refs`, `peripheral`, `pre_turn_llm_logs`) are populated only when the matching flags are set.
```json
{
  "primary": {
    "id": 42,
    "subject": "…",
    "assistant_name": "…",
    "turns": [
      {
        "message_id": 101,
        "role": "user",
        "content": "…",
        "llm_logs": [ { "id": 501, "model": "…", "total_cost": "0.021", "tool_calls_detail": [...] } ]
      }
    ],
    "workflow_runs": null,
    "child_refs": null,
    "peripheral": null
  },
  "children": []
}
```

Exit codes
- `0` — dump written successfully.
- `1` — conversation not found; the error message is printed to stderr.
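In scripts, branch on the exit status. A sketch using a stand-in `dump` function where the real `docker exec teamwebai-web-1 uv run flask debug conversation` call would go; here it always exits 1, mimicking a missing conversation:

```shell
# Stand-in for the real CLI call; always exits 1 ("conversation not found").
dump() { return 1; }

if dump 999; then
  echo "dump written"
else
  echo "conversation 999 not found, skipping"
fi
# prints: conversation 999 not found, skipping
```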
Related
- API Logs — web-based view of per-conversation LLM activity.
- Testing Workflows — the `flask workflow-test` CLI, which exports a conversation as a workflow test scenario.