Code Guide

Session Observability & Monitoring

Track Claude Code usage, estimate costs, and identify patterns across your development sessions.

  1. Why Monitor Sessions
  2. Session Search & Resume
  3. Setting Up Session Logging
  4. Analyzing Session Data
  5. Cost Tracking
  6. Activity Monitoring
  7. External Monitoring Tools
  8. Proxying Claude Code
  9. Patterns & Best Practices
  10. Limitations

Claude Code usage can accumulate quickly, especially in active development. Monitoring helps you:

  • Understand costs: Estimate API spend before invoices arrive
  • Identify patterns: See which tools you use most, which files get edited repeatedly
  • Optimize workflow: Find inefficiencies (e.g., repeatedly reading the same large file)
  • Track projects: Compare usage across different codebases
  • Team visibility: Aggregate usage for team budgeting (when combining logs)

After weeks of using Claude Code, finding past conversations becomes challenging. This section covers native options and community tools.

| Command | Use Case |
| --- | --- |
| claude -c / claude --continue | Resume most recent session |
| claude -r <id> / claude --resume <id> | Resume specific session by ID |
| claude --resume | Interactive session picker |

Sessions are stored locally at ~/.claude/projects/<project>/ as JSONL files.

| Tool | Install | List Speed | Search Speed | Dependencies | Resume Command |
| --- | --- | --- | --- | --- | --- |
| session-search.sh (this repo) | Copy script | 10ms | 400ms | None (bash) | ✅ Displayed |
| claude-conversation-extractor | pip install | 230ms | 1.7s | Python | |
| claude-code-transcripts | uvx | N/A | N/A | Python | |
| ran CLI | npm -g | N/A | Fast | Node.js | ❌ (commands only) |

Zero-dependency bash script optimized for speed with ready-to-use resume commands.

Install:

Terminal window
cp examples/scripts/session-search.sh ~/.claude/scripts/cs
chmod +x ~/.claude/scripts/cs
echo "alias cs='~/.claude/scripts/cs'" >> ~/.zshrc
source ~/.zshrc

Usage:

Terminal window
cs # List 10 most recent sessions (~15ms)
cs "authentication" # Single keyword search (~400ms)
cs "Prisma migration" # Multi-word AND search (both must match)
cs -n 20 # Show 20 results
cs -p myproject "bug" # Filter by project name
cs --since 7d # Sessions from last 7 days
cs --since today # Today's sessions only
cs --json "api" | jq . # JSON output for scripting
cs --rebuild # Force index rebuild

Output:

2026-01-15 08:32 │ my-project │ Implement OAuth flow for...
claude --resume 84287c0d-8778-4a8d-abf1-eb2807e327a8
2026-01-14 21:13 │ other-project │ Fix database migration...
claude --resume 1340c42e-eac5-4181-8407-cc76e1a76219

Copy-paste the claude --resume command to continue any session.

  1. Index mode (no filters): Uses cached TSV index. Auto-refreshes when sessions change. ~15ms lookup.
  2. Search mode (with keyword/filters): Full-text search with 3s timeout. Multi-word queries use AND logic.
  3. Filters: --project (substring match), --since (supports today, yesterday, 7d, YYYY-MM-DD)
  4. Output: Human-readable by default, --json for scripting. Excludes agent/subagent sessions.
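The AND logic in search mode can be sketched with plain grep: match files on the first term, then narrow the set with each remaining term. This is a hypothetical illustration, not the actual session-search.sh implementation.

```shell
# Hypothetical sketch of multi-word AND search over session files:
# a file survives only if it contains every term.
and_search() {
  local dir="$1"; shift
  local files
  files=$(grep -rls -- "$1" "$dir"); shift   # files matching the first term
  for term in "$@"; do
    # Narrow to files that also contain each remaining term
    files=$(printf '%s\n' $files | xargs -r grep -ls -- "$term")
  done
  printf '%s\n' $files
}
```

For example, `and_search ~/.claude/projects Prisma migration` prints only the session files containing both words.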

If you prefer richer features (HTML export, multiple formats):

Terminal window
# Install
pip install claude-conversation-extractor
# Interactive UI
claude-start
# Direct search
claude-search "keyword"
# Export to markdown
claude-extract --format markdown

See session-search.sh for the complete script.


Session Resume Limitations & Cross-Folder Migration


TL;DR: Native --resume is limited to the current working directory by design. For cross-folder migration, use manual filesystem operations (recommended) or community automation tools (untested).

Claude Code stores sessions at ~/.claude/projects/<encoded-path>/ where <encoded-path> is derived from your project’s absolute path. For example:

  • Project at /home/user/myapp → Sessions in ~/.claude/projects/-home-user-myapp-/
  • Project moved to /home/user/projects/myapp → Claude looks for ~/.claude/projects/-home-user-projects-myapp-/ (different directory)
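The encoding shown above (each `/` becomes `-`, plus a trailing `-`) can be reproduced with `tr`. This is an illustrative sketch; verify it against the actual entries in your `~/.claude/projects/` directory.

```shell
# Derive the encoded session directory name for a project path,
# following the examples above ('/' -> '-', trailing '-').
encode_project_path() {
  printf '%s-' "$(printf '%s' "$1" | tr '/' '-')"
}
encode_project_path /home/user/myapp   # → -home-user-myapp-
```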

Design rationale: Sessions store absolute file paths, project-specific context (MCP server configs, .claudeignore rules, environment variables). Cross-folder resume would require path rewriting and context validation, which isn’t implemented yet.

Related: GitHub issue #1516 tracks community requests for native cross-folder support.

When moving a project folder:

Terminal window
# Before moving project
cd ~/.claude/projects/
ls -la # Note the current encoded path
# Move your project
mv /old/location/myapp /new/location/myapp
# Rename session directory to match new path
cd ~/.claude/projects/
mv -- -old-location-myapp- -new-location-myapp-
# Verify
cd /new/location/myapp
claude --continue # Should resume successfully

When forking sessions to a new project:

Terminal window
# Copy session files (preserves original)
cd ~/.claude/projects/
cp -n ./-source-project-/*.jsonl ./-target-project-/
# Copy subagents directory if it exists
if [ -d ./-source-project-/subagents ]; then
  cp -r ./-source-project-/subagents ./-target-project-/
fi
# Resume in target project
cd /path/to/target/project
claude --continue

Before migrating sessions, verify compatibility:

| Risk | Impact | Mitigation |
| --- | --- | --- |
| Hardcoded secrets | Credentials exposed in new context | Audit .jsonl files before migration, redact if needed |
| Absolute paths | File references break if paths differ | Verify paths exist in target, or accept broken references |
| MCP server configs | Source MCP servers missing in target | Install matching MCP servers before resuming |
| .claudeignore rules | Different ignore patterns | Review differences, merge if needed |
| Environment variables | process.env context mismatch | Check .env files compatibility |
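A pre-migration secrets audit can be as simple as a grep over the source session files. The patterns below are illustrative, not exhaustive; extend them for your stack.

```shell
# Hedged pre-migration audit: flag secret-looking strings in session files.
# Patterns (AWS access key IDs, PEM private keys, "api key" variants) are
# examples only -- not a complete scanner.
scan_for_secrets() {
  grep -En 'AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY|[Aa][Pp][Ii][_-]?[Kk][Ee][Yy]' "$@"
}
```

Run it as `scan_for_secrets ~/.claude/projects/-source-project-/*.jsonl` and review every hit before copying files.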

When NOT to migrate sessions:

  • Conflicting dependencies (e.g., different Node.js versions, package managers)
  • Database state differences (migrations applied in source, not in target)
  • Authentication context (API tokens, OAuth sessions specific to source project)
  • Security boundaries (migrating from private to public repo)

claude-migrate-session by Jim Weller (inspired by Alexis Laporte) automates the manual process above:

  • Repository: jimweller/dotfiles
  • Features: Global search with filtering, preserves .jsonl + subagents, uses ripgrep for performance
  • Status: Personal dotfiles (0 stars/forks as of Feb 2026), limited adoption
  • Command: /claude-migrate-session <source> <target>

⚠️ Caveat: This tool has minimal community testing. The manual approach is safer and gives you explicit control over what gets migrated. Test thoroughly before using in production workflows.

Use cases for migration:

  • Forking prototype work into production codebase
  • Moving debugging session to isolated test repository
  • Continuing architecture discussion in a new project

Alternative: Entire CLI Session Portability


Native limitation: Claude Code’s --resume is tied to absolute file paths, breaking on folder moves.

Entire CLI solution: Checkpoints are path-agnostic, enabling true session portability across project locations.

How it works:

Terminal window
# In source project
cd /old/location/myapp
entire capture --agent="claude-code"
[... work in Claude Code ...]
entire checkpoint --name="migration-complete"
# Move project to new location
mv /old/location/myapp /new/location/myapp
# Resume in target (works because Entire stores relative paths)
cd /new/location/myapp
entire resume --checkpoint="migration-complete"
claude --continue # Resumes with full context

Why Entire checkpoints are portable:

| Aspect | Native --resume | Entire CLI |
| --- | --- | --- |
| Path storage | Absolute paths in JSONL | Relative paths in checkpoints |
| Cross-folder | Breaks (different project encoding) | Works (path-agnostic) |
| Context preservation | Prompt history only | Prompts + reasoning + file states |
| Agent handoffs | No | Yes (between Claude/Gemini) |

When to use Entire over manual migration:

  • ✅ Frequent project moves/forks
  • ✅ Multi-agent workflows (Claude → Gemini handoffs)
  • ✅ Session replay for debugging (rewind to exact state)
  • ✅ Governance (approval gates on resume)

Trade-off: Adds tool dependency + storage overhead (~5-10% project size).

Full docs: AI Traceability Guide


For monitoring multiple concurrent Claude Code instances via external orchestrators (Gas Town, multiclaude), see the architecture pattern below.

Architecture pattern (for custom implementations):

  1. Hook logs Task agent spawns: .claude/hooks/multi-agent-logger.sh
  2. Store in SQLite: ~/.claude/logs/agents.db (parent_id, child_id, timestamp, task)
  3. Stream via SSE: Simple Go/Node HTTP server
  4. Dashboard: React/HTML consuming SSE stream
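Step 2 above can be sketched with the sqlite3 CLI. The table and column names below simply mirror the fields listed in the pattern; treat the schema as an assumption, not a spec.

```shell
# Sketch of step 2: a minimal agents.db matching the fields listed above.
# Requires the sqlite3 CLI; schema names are assumptions.
init_agent_log() {
  sqlite3 "$1" 'CREATE TABLE IF NOT EXISTS agents (
    parent_id TEXT, child_id TEXT, timestamp TEXT, task TEXT
  );'
}
log_agent_spawn() {  # usage: log_agent_spawn <db> <parent_id> <child_id> <task>
  sqlite3 "$1" "INSERT INTO agents VALUES ('$2', '$3', datetime('now'), '$4');"
}
```

A hook script would call `log_agent_spawn ~/.claude/logs/agents.db main sub-1 "refactor auth"` on each Task spawn; the SSE server then tails this table.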

Native Claude Code monitoring (this guide):

When to use external orchestrator monitoring:

  • Running Gas Town or multiclaude with 5+ concurrent agents
  • Need real-time visibility into agent coordination
  • Debugging orchestration failures (agent conflicts, merge issues)

When native monitoring suffices:

  • Single Claude Code session or --delegate with <3 subagents
  • Post-hoc analysis (logs, stats) is enough
  • Budget/complexity constraints

Copy the session logger to your hooks directory:

Terminal window
# Create hooks directory if needed
mkdir -p ~/.claude/hooks
# Copy the logger (from this repo's examples)
cp examples/hooks/bash/session-logger.sh ~/.claude/hooks/
chmod +x ~/.claude/hooks/session-logger.sh

Add to ~/.claude/settings.json:

{
  "hooks": {
    "PostToolUse": [
      {
        "type": "command",
        "command": "~/.claude/hooks/session-logger.sh"
      }
    ]
  }
}
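To give a feel for what a PostToolUse logger does, here is a hedged sketch (not the actual session-logger.sh): hook payloads arrive as JSON on stdin, and the logger appends one compact line per tool call. The `.tool_name`/`.tool_input` field names are assumptions about the hook payload shape.

```shell
# Hedged sketch of a PostToolUse logger: read the hook's JSON payload from
# stdin and append a compact JSONL record. Field names are assumptions.
log_tool_call() {
  local log_dir="${CLAUDE_LOG_DIR:-$HOME/.claude/logs}"
  mkdir -p "$log_dir"
  jq -c '{timestamp: (now | todate), tool: .tool_name, input: .tool_input}' \
    >> "$log_dir/activity-$(date +%F).jsonl"
}
```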

Run a few Claude Code commands, then check logs:

Terminal window
ls ~/.claude/logs/
# Should see: activity-2026-01-14.jsonl
# View recent entries
tail -5 ~/.claude/logs/activity-$(date +%Y-%m-%d).jsonl | jq .
| Environment Variable | Default | Description |
| --- | --- | --- |
| CLAUDE_LOG_DIR | ~/.claude/logs | Where to store logs |
| CLAUDE_LOG_TOKENS | true | Enable token estimation |
| CLAUDE_SESSION_ID | auto-generated | Custom session identifier |

Terminal window
# Copy the script
cp examples/scripts/session-stats.sh ~/.local/bin/
chmod +x ~/.local/bin/session-stats.sh
# Today's summary
session-stats.sh
# Last 7 days
session-stats.sh --range week
# Specific date
session-stats.sh --date 2026-01-14
# Filter by project
session-stats.sh --project my-app
# Machine-readable output
session-stats.sh --json
═══════════════════════════════════════════════════════════
Claude Code Session Statistics - today
═══════════════════════════════════════════════════════════
Summary
Total operations: 127
Sessions: 3
Token Usage
Input tokens: 45,230
Output tokens: 12,450
Total tokens: 57,680
Estimated Cost (Sonnet rates)
Input: $0.1357
Output: $0.1868
Total: $0.3225
Tools Used
Edit: 45
Read: 38
Bash: 24
Grep: 12
Write: 8
Projects
my-app: 89
other-project: 38

Token counts tell you how much you used Claude Code. JSONL logs can also tell you how well your configuration is working — if you know what to look for.

Beyond cost metrics, three patterns reliably signal that a skill, rule, or CLAUDE.md section needs updating:

Repeated reads of the same file

If Claude reads the same file 3+ times in one session, the content it needs probably isn’t where it expects to find it. Consider moving the relevant context into a skill or CLAUDE.md section.

Terminal window
# Files read more than 3x in recent sessions
jq -r 'select(.tool == "Read") | .file' ~/.claude/logs/activity-*.jsonl \
| sort | uniq -c | sort -rn | awk '$1 > 3'

Tool failures on the same command

A Bash command that fails repeatedly across sessions usually means a skill has an outdated path, renamed binary, or command that no longer works with your current stack.

Terminal window
# Failing commands
jq -r 'select(.tool == "Bash" and (.exit_code // 0) != 0) | .command' \
~/.claude/logs/activity-*.jsonl | sort | uniq -c | sort -rn | head -10

High edit frequency on the same file

Files edited heavily across sessions often indicate missing context — the file’s purpose isn’t clear to the agent, or conventions around it aren’t documented.

Terminal window
# Most-edited files (proxy for context gaps)
jq -r 'select(.tool == "Edit") | .file' ~/.claude/logs/activity-*.jsonl \
| sort | uniq -c | sort -rn | head -10

For each pattern you surface, ask: is there a skill, rule, or CLAUDE.md section that should cover this? See §9.23 Configuration Lifecycle & The Update Loop for the full workflow.


Each log entry is a JSON object:

{
  "timestamp": "2026-01-14T15:30:00Z",
  "session_id": "1705234567-12345",
  "tool": "Edit",
  "file": "src/components/Button.tsx",
  "project": "my-app",
  "tokens": {
    "input": 350,
    "output": 120,
    "total": 470
  }
}

The logger estimates tokens using a simple heuristic: ~4 characters per token. This is approximate and tends to slightly overestimate.
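That heuristic fits in a one-line function (rounding up; real tokenizers vary with language and content, so treat the result as a rough guess):

```shell
# The ~4-chars-per-token heuristic as a function (rounds up).
estimate_tokens() {
  printf '%s' "$1" | wc -c | awk '{ print int(($1 + 3) / 4) }'
}
estimate_tokens "hello world"   # 11 chars → 3
```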

Default rates are for Claude Sonnet. Adjust via environment variables:

Terminal window
# Sonnet rates (default)
export CLAUDE_RATE_INPUT=0.003 # $3/1M tokens
export CLAUDE_RATE_OUTPUT=0.015 # $15/1M tokens
# Opus rates (if using Opus)
export CLAUDE_RATE_INPUT=0.015 # $15/1M tokens
export CLAUDE_RATE_OUTPUT=0.075 # $75/1M tokens
# Haiku rates
export CLAUDE_RATE_INPUT=0.00025 # $0.25/1M tokens
export CLAUDE_RATE_OUTPUT=0.00125 # $1.25/1M tokens

Add to your shell profile for daily budget warnings:

Terminal window
# ~/.zshrc or ~/.bashrc
claude_budget_check() {
  local cost=$(session-stats.sh --json 2>/dev/null | jq -r '.summary.estimated_cost.total // 0')
  local threshold=5.00 # $5 daily budget
  if (( $(echo "$cost > $threshold" | bc -l) )); then
    echo "⚠️ Claude Code daily spend: \$$cost (threshold: \$$threshold)"
  fi
}
# Run on shell start
claude_budget_check

Cost tracking tells you how much you spend. Activity monitoring tells you what Claude Code actually did: which files it read, which commands it ran, which URLs it fetched. This is the audit layer.

Every tool call Claude Code makes is recorded in the session JSONL files at ~/.claude/projects/<project>/. Each entry with type: "assistant" contains a content array where type: "tool_use" blocks document every action.

Terminal window
# Find your session files
ls ~/.claude/projects/-$(pwd | tr '/' '-')-/
# Inspect tool calls in a session
cat ~/.claude/projects/-your-project-/SESSION_ID.jsonl | \
jq 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use") | {tool: .name, input: .input}'
| Tool | What It Exposes |
| --- | --- |
| Read | Files accessed (path, line range) |
| Write / Edit | Files modified (path, content delta) |
| Bash | Commands executed (full command string) |
| WebFetch | URLs fetched (may include data sent in POST) |
| Task | Subagent spawns (prompt passed to sub-model) |
| Glob / Grep | Search patterns and scope |
Terminal window
# All files read in a session
SESSION=~/.claude/projects/-your-project-/SESSION_ID.jsonl
jq 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use" and .name == "Read") | .input.file_path' "$SESSION"
# All bash commands executed
jq 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use" and .name == "Bash") | .input.command' "$SESSION"
# All URLs fetched
jq 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use" and .name == "WebFetch") | .input.url' "$SESSION"
# Count tool usage by type
jq -r 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use") | .name' "$SESSION" | sort | uniq -c | sort -rn

These tool call patterns are worth flagging in automated audits:

| Pattern | Risk | Detection |
| --- | --- | --- |
| Read on .env, *.pem, id_rsa | Credential access | `jq '… |
| Bash with rm -rf, git push --force | Destructive action | `jq '… |
| WebFetch on external URLs | Data exfiltration risk | `jq '… |
| Write on files outside project root | Scope creep | Check paths against working directory |
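As one hedged example of what such a detection query could look like (the table's own queries are truncated), here is a credential-access check using the session JSONL shape shown earlier. The file-pattern regex is illustrative only.

```shell
# Hedged example: flag Read tool calls on credential-looking paths.
# Follows the session JSONL shape shown earlier in this section.
flag_credential_reads() {
  jq -r 'select(.type == "assistant") | .message.content[]?
         | select(.type == "tool_use" and .name == "Read")
         | .input.file_path // empty
         | select(test("\\.env$|\\.pem$|id_rsa"))' "$@"
}
```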

Security context: Claude Code operates read-write on your filesystem with your user permissions. The JSONL audit trail is your record of what happened. For teams, consider syncing these logs to immutable storage.


Beyond the hook-based approach above, the community has built purpose-specific tools. This is a factual snapshot as of early 2026.

| Tool | Type | What It Does | Install |
| --- | --- | --- | --- |
| ccusage | CLI / TUI | Cost tracking from JSONL — the de-facto reference for pricing data. ~10K GitHub stars. | npm i -g ccusage |
| claude-code-otel | OpenTelemetry exporter | Emits spans to any OTEL collector. Integrates with Prometheus + Grafana dashboards. Enterprise-focused. | npm i -g claude-code-otel |
| Akto | SaaS / self-hosted | API security guardrails + audit trail. Intercepts at the API level, flags policy violations. | akto.io |
| MLflow Tracing | CLI + SDK | Exact token counts, tool spans, LLM-as-judge evaluation. CLI mode: zero Python required. Best for ML/MLOps teams. | pip install mlflow (see section below) |
| ccboard | TUI + Web | Unified dashboard for sessions, costs, stats. Activity/audit tab in development. | cargo install ccboard |

  • Want cost numbers fast? → ccusage (CLI, 0 config)
  • Need enterprise audit trail? → claude-code-otel + Grafana, or Akto
  • Already using MLflow for ML? → MLflow tracing integration (see below)
  • Need agent regression detection? → MLflow tracing + LLM-as-judge
  • Want a persistent TUI/Web UI? → ccboard
Terminal window
npm i -g ccusage
ccusage # Today's usage
ccusage --days 7 # Last 7 days

Reads directly from ~/.claude/projects/**/*.jsonl. No API keys, no data sent externally. Source: github.com/ryoppippi/ccusage.

Exports Claude Code activity as OpenTelemetry spans:

Terminal window
npm i -g claude-code-otel
claude-code-otel --collector http://localhost:4318

Spans include tool name, duration, token counts. Plug into any OTEL-compatible backend (Jaeger, Tempo, Datadog). Source: github.com/badger-99/claude-code-otel.

Terminal window
cargo install ccboard
ccboard # Launch TUI
ccboard --web # Launch Web UI (localhost:3000)

Source: github.com/FlorianBruniaux/ccboard. An Activity tab covering file access, bash commands, and network calls is planned (see docs/resource-evaluations/ccboard-activity-module-plan.md).

When to use: Teams already in the MLflow/MLOps ecosystem, or anyone needing exact token counts + LLM-based quality evaluation. Not the right fit for solo devs wanting quick cost numbers (use ccusage instead).

What makes it different from the other tools: MLflow intercepts at the API level, not post-hoc from JSONL. It captures exact token counts (vs the ~15-25% variance of hook-based estimation) and enables LLM-as-judge regression detection — not just “what happened” but “was it good?”.

Works with interactive claude sessions. Hooks into .claude/settings.json:

Terminal window
pip install "mlflow[genai]>=3.4"
# Enable tracing in current project directory
mlflow autolog claude
# With custom backend (recommended for persistence)
mlflow autolog claude -u sqlite:///mlflow.db
# With named experiment
mlflow autolog claude -n "my-project"
# Check status / disable
mlflow autolog claude --status
mlflow autolog claude --disable

Launch the UI to inspect traces:

Terminal window
mlflow server # → http://localhost:5000

What gets captured automatically: user prompts, assistant responses, tool calls (name + inputs + outputs), token counts (exact), latency per call, session metadata.

import mlflow
from claude_agent_sdk import ClaudeSDKClient  # ⚠️ Only ClaudeSDKClient is supported; direct API calls are not traced.

mlflow.anthropic.autolog() # one line, before anything else
mlflow.set_experiment("my-agent")

# Use ClaudeSDKClient normally — all interactions are traced
async with ClaudeSDKClient(options=AGENT_OPTIONS) as client:
    await client.query(query)

Requires: mlflow>=3.5 + claude-agent-sdk>=0.1.0.

Claude Code can query its own traces directly. Add to .claude/settings.json:

{
  "mcpServers": {
    "mlflow-mcp": {
      "command": "uv",
      "args": ["run", "--with", "mlflow[mcp]>=3.5.1", "mlflow", "mcp", "run"],
      "env": { "MLFLOW_TRACKING_URI": "<your-tracking-uri>" }
    }
  }
}

Once configured, you can ask Claude Code: “Find all sessions where the backend-architect agent used more than 20 tool calls” — it queries MLflow directly without copy-pasting IDs.

The key capability absent from all other tools in this section. After modifying an agent’s instructions, measure whether quality improved or degraded:

import mlflow
from mlflow.genai.scorers import scorer, ConversationCompleteness, RelevanceToQuery

@scorer
def tool_efficiency(trace) -> int:
    """Count tool calls — lower is better for well-scoped tasks."""
    return len(trace.search_spans(span_type="TOOL"))

@scorer
def permission_blocks(trace) -> int:
    """Detect how often the agent was blocked by permission gates."""
    return sum(
        1 for span in trace.search_spans(span_type="TOOL")
        if span.outputs and "requires approval" in str(span.outputs).lower()
    )

# Run evaluation against recorded traces
traces = mlflow.search_traces(experiment_ids=["<id>"], max_results=50)
results = mlflow.genai.evaluate(
    data=traces,
    scorers=[
        tool_efficiency,
        permission_blocks,
        ConversationCompleteness(),
        RelevanceToQuery(),
    ],
)

Built-in scorers: ConversationCompleteness, RelevanceToQuery, UserFrustration, SafetyScorer.

Custom scorers: full access to the trace object (all spans, inputs, outputs, token counts).

| Limitation | Detail |
| --- | --- |
| CLI mode audience | Best for interactive sessions; SDK mode required for programmatic agents |
| SDK restriction | Only ClaudeSDKClient — direct API calls bypass tracing |
| PII risk | Traces capture full conversation content. Redact before storing if working with sensitive data |
| Production backend | SQLite = dev only. Use PostgreSQL/MySQL for production |
| OpenTelemetry | MLflow 3.6+ exports to any OTEL-compatible backend (Datadog, Grafana, etc.) |

A common question: “Can I run Proxyman/Charles to see what Claude Code sends to Anthropic?”

Short answer: Not directly. Here’s why, and what works instead.

Claude Code is a Node.js process. By default, Node.js ignores system-level proxy settings (HTTP_PROXY, HTTPS_PROXY) — it uses its own TLS stack and doesn’t read macOS/Windows proxy configurations.

Additionally, even if traffic flows through your proxy, the TLS certificate mismatch causes Claude Code to fail (CERT_UNTRUSTED).

Option 1: Trust a MITM Certificate (Proxyman / Charles)


Force Node.js to trust your proxy’s CA certificate:

Terminal window
# Export Proxyman's CA cert (File → Export → Root Certificate)
# Then point Node.js at it:
export NODE_EXTRA_CA_CERTS="/path/to/proxyman-ca.pem"
# Start Claude Code — traffic will now route through Proxyman
claude

Same approach works for Charles: Help → SSL Proxying → Export Charles Root Certificate.

Caveats:

  • Some Claude Code versions use certificate pinning for api.anthropic.com — this may still fail
  • This approach requires a running Proxyman/Charles instance listening on the configured port

Option 2: Redirect API Traffic with ANTHROPIC_API_URL


Point Claude Code at a local interceptor instead of api.anthropic.com:

Terminal window
export ANTHROPIC_API_URL="http://localhost:8080"
claude

Run any HTTP proxy/logger on port 8080 that forwards to https://api.anthropic.com. This bypasses TLS entirely for the Claude Code → proxy hop.

Use cases: Logging request payloads, injecting headers, rate-limiting locally, replaying requests.

mitmproxy is the cleanest open-source solution. It provides a scriptable HTTPS proxy with a web UI and terminal interface.

Terminal window
# Install
brew install mitmproxy # macOS
# or: pip install mitmproxy
# Start transparent proxy on port 8080
mitmproxy --listen-port 8080
# In a new terminal, point Claude Code at it
export NODE_EXTRA_CA_CERTS="$HOME/.mitmproxy/mitmproxy-ca-cert.pem" # CA written on first mitmproxy run
export HTTPS_PROXY="http://localhost:8080"
claude

The mitmproxy web UI (mitmweb) at http://localhost:8081 shows full request/response bodies — including the JSON payloads Claude Code sends to Anthropic.

What you’ll see: System prompt, user messages, tool definitions, tool results, model parameters.

For a zero-dependency approach:

# proxy.py — simple HTTPS logging proxy (sketch: forwarding not implemented)
from http.server import HTTPServer, BaseHTTPRequestHandler
import urllib.request, json, sys

TARGET = "https://api.anthropic.com"

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)
        print(json.dumps(json.loads(body), indent=2))  # Log request
        # Forward to Anthropic...

HTTPServer(("localhost", 8080), LoggingProxy).serve_forever()
Terminal window
python3 proxy.py &
export ANTHROPIC_API_URL="http://localhost:8080"
claude

Privacy note: Proxied traffic includes everything in the conversation context — file contents Claude has read, your code, any secrets it encountered. Handle proxy logs accordingly.


Set a calendar reminder to review weekly stats:

Terminal window
session-stats.sh --range week

Look for:

  • Unusually high token usage days
  • Repeated operations on same files (inefficiency signal)
  • Project distribution (where time is spent)

Use CLAUDE_SESSION_ID to tag sessions by project:

Terminal window
export CLAUDE_SESSION_ID="project-myapp-$(date +%s)"
claude

For team-wide tracking, sync logs to shared storage:

Terminal window
# Example: sync to S3 daily
aws s3 sync ~/.claude/logs/ s3://company-claude-logs/$(whoami)/

Then aggregate with:

Terminal window
# Download all team logs
aws s3 sync s3://company-claude-logs/ /tmp/team-logs/
# Combine and analyze
cat /tmp/team-logs/*/activity-$(date +%Y-%m-%d).jsonl | \
jq -s 'group_by(.project) | map({project: .[0].project, total_tokens: [.[].tokens.total] | add})'

Logs accumulate over time. Add cleanup to cron:

Terminal window
# Clean logs older than 30 days
find ~/.claude/logs -name "*.jsonl" -mtime +30 -delete

| Limitation | Reason |
| --- | --- |
| Exact token counts | Claude Code CLI doesn't expose API token metrics |
| TTFT (Time to First Token) | Hook runs after tool completes, not during streaming |
| Real-time streaming metrics | No hook event during response generation |
| Actual API costs | Token estimates are heuristic, not from billing |
| Model selection | Log doesn't capture which model was used per request |
| Context window usage | No visibility into current context percentage |
Approximate (estimates only):

  • Token estimates: ~15-25% variance from actual billing
  • Cost estimates: Use as directional guidance, not accounting
  • Session boundaries: Sessions are approximated by ID, not exact API sessions

Reliable (exact from logs):

  • Tool usage counts: Exact count of each tool invocation
  • File access patterns: Which files were touched
  • Relative comparisons: Day-to-day/project-to-project trends
  • Operation timing: When tools were used (timestamp)