
Exporting Traces with OpenTelemetry

This guide explains how to export KaibanJS workflow traces using the @kaibanjs/opentelemetry package. With this integration, you can visualize, debug, and monitor your AI agents' workflows in real time through OpenTelemetry-compatible observability tools like SigNoz, Langfuse, Phoenix, or Braintrust.


Introduction

The @kaibanjs/opentelemetry package bridges KaibanJS with OpenTelemetry, automatically mapping your agent and task executions to OpenTelemetry spans.
This allows for a detailed, visual representation of how your agents think, act, and collaborate within complex workflows.

Key Features

  • 🔍 Automatic Trace Mapping: KaibanJS tasks and agents are represented as OpenTelemetry spans.
  • 📈 Built-in Metrics: Duration, token usage, cost, and performance are automatically captured.
  • 🌐 Multi-Service Export: Export traces to SigNoz, Langfuse, Phoenix, Braintrust, Dash0, and any OTLP-compatible service.
  • ⚙️ Smart Sampling: Supports configurable sampling strategies.
  • 🧩 Zero Breaking Changes: Works without modifying your existing KaibanJS logic.

Installation

npm install @kaibanjs/opentelemetry

Quick Start

Here's a minimal setup to get started with OpenTelemetry tracing in your KaibanJS project:

import { Team, Agent, Task } from 'kaibanjs';
import { enableOpenTelemetry } from '@kaibanjs/opentelemetry';

const team = new Team({
  name: 'My Observability Team',
  agents: [...],
  tasks: [...]
});

const config = {
  enabled: true,
  sampling: { rate: 1.0, strategy: 'always' },
  attributes: {
    includeSensitiveData: false,
    customAttributes: {
      'service.name': 'kaiban-observability-demo',
      'service.version': '1.0.0'
    }
  },
  exporters: {
    console: true,
    otlp: {
      endpoint: 'https://ingest.us.signoz.cloud:443',
      protocol: 'grpc',
      headers: { 'signoz-access-token': 'your-token' },
      serviceName: 'kaibanjs-service'
    }
  }
};

enableOpenTelemetry(team, config);
await team.start({ input: 'data' });
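
Because this configuration enables both exporters, spans are printed to stdout by the console exporter while the same spans are shipped to SigNoz over OTLP.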

Configuration Options

OpenTelemetryConfig Interface

interface OpenTelemetryConfig {
  enabled: boolean;
  sampling: {
    rate: number;
    strategy: 'always' | 'probabilistic' | 'rate_limiting';
  };
  attributes: {
    includeSensitiveData: boolean;
    customAttributes: Record<string, string>;
  };
  exporters?: {
    console?: boolean;
    otlp?: OTLPConfig | OTLPConfig[];
  };
}
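
The OTLPConfig type is not reproduced in this guide. The sketch below is inferred from the examples that follow rather than taken from the package, and the real interface may include additional options:

interface OTLPConfig {
  endpoint?: string;                 // OTLP ingest URL; optional when supplied via environment variables
  protocol?: 'grpc' | 'http';       // transport, as used in the examples below
  headers?: Record<string, string>; // authentication headers
  serviceName?: string;             // service name reported with the traces
}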

Sampling Strategies

Strategy        Description
always          Records all traces; recommended for development
probabilistic   Samples a percentage of traces (0.0 to 1.0)
rate_limiting   Limits trace rate for high-load production systems
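
For production workloads, probabilistic sampling keeps tracing overhead bounded. A minimal sketch, assuming the OpenTelemetryConfig interface above (the 10% rate is illustrative, not a package recommendation):

sampling: {
  rate: 0.1,                // keep roughly 10% of traces
  strategy: 'probabilistic'
}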

Trace Structure

Each KaibanJS task or agent execution is automatically converted into OpenTelemetry spans, with a clear parent-child hierarchy:

Task Span (DOING → DONE)
├── Agent Thinking Span (THINKING_END)
├── Agent Thinking Span (THINKING_END)
└── Agent Thinking Span (THINKING_END)

Exporting Traces

Console Exporter (for Development)

exporters: {
  console: true
}

OTLP Exporter (for Production)

You can export traces to any OTLP-compatible service.

Example: Single Service

exporters: {
  otlp: {
    endpoint: 'https://cloud.langfuse.com/api/public/otel',
    protocol: 'http',
    headers: {
      Authorization: 'Basic ' + Buffer.from('pk-lf-xxx:sk-lf-xxx').toString('base64')
    },
    serviceName: 'kaibanjs-langfuse'
  }
}
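
Note that Buffer is a Node.js global; if you run KaibanJS in another runtime, encode the credentials with that runtime's base64 equivalent (for example, btoa in browsers and edge runtimes).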

Example: Multiple Services

exporters: {
  otlp: [
    {
      endpoint: 'https://ingest.us.signoz.cloud:443',
      protocol: 'grpc',
      headers: { 'signoz-access-token': 'your-token' },
      serviceName: 'kaibanjs-signoz'
    },
    {
      endpoint: 'https://cloud.langfuse.com/api/public/otel',
      protocol: 'http',
      headers: {
        Authorization: 'Basic ' + Buffer.from('pk-lf-xxx:sk-lf-xxx').toString('base64')
      },
      serviceName: 'kaibanjs-langfuse'
    }
  ]
}

Environment Variable Configuration

export OTEL_EXPORTER_OTLP_ENDPOINT="https://your-service.com"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-token"
export OTEL_EXPORTER_OTLP_PROTOCOL="http"

Then in your code:

exporters: {
  otlp: { serviceName: 'kaibanjs-service' }
}
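
With this approach, the endpoint, protocol, and headers come from the standard OTEL_EXPORTER_OTLP_* environment variables, keeping credentials out of source control; only the service name remains in code.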

Monitoring Metrics

The integration automatically captures the following for each workflow:

  • Workflow and task duration
  • Cost and token usage
  • Iteration count
  • Error rates
  • Resource consumption

Advanced Usage

For finer-grained control over the integration lifecycle, you can create and attach the integration manually:

import { createOpenTelemetryIntegration } from '@kaibanjs/opentelemetry';

const integration = createOpenTelemetryIntegration(config);
integration.integrateWithTeam(team);

await team.start({ input: 'data' });
await integration.shutdown();
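
Compared with enableOpenTelemetry, the manual form gives you explicit control over the integration's lifecycle: calling shutdown() once the workflow finishes flushes any buffered spans before the process exits, which matters for short-lived scripts and CI jobs.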

Best Practices

  1. Use probabilistic sampling in production.
  2. Avoid including sensitive data in traces.
  3. Validate exporter endpoints and authentication tokens (see the sketch after this list).
  4. Use the console exporter for local debugging.
  5. Monitor memory and performance when scaling agents.
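
As a sketch of practice 3, reuse the SigNoz exporter shape from the examples above and read the token from the environment instead of hard-coding it (the SIGNOZ_ACCESS_TOKEN variable name is illustrative):

const token = process.env.SIGNOZ_ACCESS_TOKEN; // illustrative variable name
if (!token) {
  // Fail fast before the workflow runs rather than exporting unauthenticated traces
  throw new Error('SIGNOZ_ACCESS_TOKEN is not set');
}

const exporters = {
  otlp: {
    endpoint: 'https://ingest.us.signoz.cloud:443',
    protocol: 'grpc',
    headers: { 'signoz-access-token': token },
    serviceName: 'kaibanjs-service'
  }
};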

Troubleshooting

Issue                   Possible Cause          Solution
Connection refused      Wrong endpoint          Verify OTLP URL and protocol
Authentication failed   Invalid API token       Double-check headers or environment variables
Timeout errors          Network latency         Increase the timeout in the OTLP config
No traces visible       Sampling rate too low   Use strategy: 'always' temporarily

Conclusion

By integrating OpenTelemetry with KaibanJS, you gain deep visibility into your agents' behavior and task performance.
This observability layer empowers you to diagnose issues faster, optimize execution flows, and scale AI systems confidently.

We Love Feedback!

Found this guide useful or have suggestions?
Help us improve by submitting an issue on GitHub.