
Tracing

MC-AI supports opt-in OpenTelemetry tracing for full visibility into workflow execution, node timings, LLM calls, and tool invocations.

Tracing is enabled via the OTEL_EXPORTER_OTLP_ENDPOINT environment variable. When unset, all tracing is a no-op with zero overhead (dynamic imports ensure OTel machinery is never loaded).

import { initTracing } from '@mcai/orchestrator';
// Call once at app startup (before any traced code)
await initTracing('orchestrator');

Then launch the app with the endpoint set:

OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 node app.js
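
The zero-overhead claim rests on gating the SDK import behind the environment variable. A minimal sketch of that pattern, assuming nothing about the library's actual internals (the function name and body here are illustrative only):

```typescript
// Sketch of the opt-in pattern: the OpenTelemetry SDK is imported dynamically
// only when OTEL_EXPORTER_OTLP_ENDPOINT is set, so an unconfigured app never
// loads the OTel machinery at all.
export async function initTracingSketch(serviceName: string): Promise<boolean> {
  const endpoint = process.env.OTEL_EXPORTER_OTLP_ENDPOINT;
  if (!endpoint) {
    // No endpoint configured: skip the import entirely; tracing stays a no-op.
    return false;
  }
  // Only now would the (heavy) SDK be pulled in, e.g.:
  //   const { NodeSDK } = await import('@opentelemetry/sdk-node');
  //   new NodeSDK({ serviceName, /* OTLP exporter pointed at `endpoint` */ }).start();
  return true;
}
```

Because the import sits behind the check, a missing endpoint costs one environment-variable read and nothing else.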
A typical run produces the following span hierarchy (defining file in parentheses):

workflow.run (graph-runner.ts)
├── node.execute.supervisor (graph-runner.ts)
│   └── supervisor.route (supervisor-executor.ts)
├── node.execute.agent (graph-runner.ts)
│   └── agent.execute (agent-executor.ts)
└── node.execute.tool (graph-runner.ts)
Span attributes:

  • workflow.run: workflow.id, graph.id, graph.name, run.id, workflow.duration_ms, workflow.status, workflow.iterations
  • agent.execute: agent.id, agent.model, agent.provider, agent.tokens.input, agent.tokens.output, agent.tools_called
  • supervisor.route: supervisor.id, supervisor.decision, supervisor.reasoning, supervisor.iteration, supervisor.input_tokens, supervisor.output_tokens
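
Like all OpenTelemetry span attributes, these are flat key/value records. A sketch of how the workflow.run attributes above map onto such a record (the WorkflowResult shape and helper are hypothetical, not part of the library):

```typescript
// Hypothetical result shape for a completed workflow run.
interface WorkflowResult {
  workflowId: string;
  graphId: string;
  graphName: string;
  runId: string;
  durationMs: number;
  status: 'success' | 'error';
  iterations: number;
}

// Assemble the flat attribute record attached to the workflow.run span.
// The attribute keys come from the table above; the helper is illustrative.
function workflowRunAttributes(r: WorkflowResult): Record<string, string | number> {
  return {
    'workflow.id': r.workflowId,
    'graph.id': r.graphId,
    'graph.name': r.graphName,
    'run.id': r.runId,
    'workflow.duration_ms': r.durationMs,
    'workflow.status': r.status,
    'workflow.iterations': r.iterations,
  };
}
```

Keeping attributes flat and primitive-typed is what lets any OTLP backend index and filter on them without custom handling.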
Together, the traces surface:

  • Workflow Run: total duration and status
  • Supervisor Decisions: why a particular node was chosen (reasoning is captured on the span)
  • Agent Execution: model used, token usage (input/output), tools called
  • Tool Calls: inputs and outputs of every MCP call

With Jaeger (or any OTLP-compatible collector):

# If using Docker Compose, Jaeger is included
docker compose up jaeger
# View traces
open http://localhost:16686
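
If you are not using the bundled Compose file, a standalone Jaeger all-in-one container can serve as the collector. A sketch using Jaeger's standard image and default ports (adjust the tag and ports to your setup):

```shell
# Jaeger all-in-one: UI on 16686, OTLP/HTTP ingest on 4318
docker run --rm \
  -e COLLECTOR_OTLP_ENABLED=true \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

Point OTEL_EXPORTER_OTLP_ENDPOINT at http://localhost:4318 and traces appear in the Jaeger UI.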

Other compatible collectors: Axiom, LangFuse, Honeycomb, Grafana Tempo.

See also:

  • Evaluations: verify agent behavior with automated evals