## Supported providers

### OpenAI

Chat completions, streaming, embeddings, and image generation — with full token usage and cost tracking.

### Anthropic

Claude messages and streaming — with cache token tracking and full OpenTelemetry semantic conventions.
## What gets collected

Every instrumented call automatically records:

- Distributed traces — spans with request/response details, model name, token counts, and cost
- OTel metrics — `gen_ai.client.token.usage`, `gen_ai.client.operation.duration`, `gen_ai.server.time_to_first_token`, `gen_ai.server.time_per_output_token`, `gen_ai.server.request.duration`
- Streaming metrics — time-to-first-chunk and per-chunk latency observations
- Cost tracking — automatic cost calculation using the built-in pricing data
## Installation
## Quick start

Replace `YOUR_OTEL_ENDPOINT` with the URL of your OpenTelemetry backend, such as `http://127.0.0.1:4318` for a local OpenLIT deployment.
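The quick-start code did not survive extraction here; the following is a minimal sketch of initializing the SDK, assuming `openlit.Init` takes the `Config` struct documented below and returns an error, and assuming a module path — check the repository for the exact import path and signature:

```go
package main

import (
	"log"

	// Hypothetical import path — substitute the real OpenLIT Go module path.
	openlit "github.com/openlit/openlit-go"
)

func main() {
	// Initialize OpenLIT; for a local backend only OtlpEndpoint is needed,
	// the remaining fields fall back to the defaults in the table below.
	err := openlit.Init(openlit.Config{
		OtlpEndpoint:    "http://127.0.0.1:4318", // YOUR_OTEL_ENDPOINT
		ApplicationName: "my-app",
		Environment:     "development",
	})
	if err != nil {
		log.Fatalf("openlit init failed: %v", err)
	}

	// ... make instrumented OpenAI/Anthropic calls here ...
}
```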
## Configuration

Pass a `Config` struct to `openlit.Init()`:
| Field | Environment Variable | Description | Default |
|---|---|---|---|
| `OtlpEndpoint` | `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP backend URL | `http://127.0.0.1:4318` |
| `OtlpHeaders` | — | Additional HTTP headers for OTLP requests | `{}` |
| `ApplicationName` | — | Name of your application | `default` |
| `Environment` | — | Deployment environment label | `default` |
| `ServiceVersion` | — | Service version string | `""` |
| `DisableTracing` | — | Disable trace collection | `false` |
| `DisableMetrics` | — | Disable metrics collection | `false` |
| `DisableBatch` | — | Disable batch export (useful for testing) | `false` |
| `DisableCaptureMessageContent` | — | Omit prompt/completion text from spans | `false` |
| `DetailedTracing` | — | Enable component-level tracing detail | `false` |
| `DisablePricingFetch` | — | Skip fetching remote pricing data | `false` |
| `PricingEndpoint` | — | URL for custom pricing JSON | built-in |
| `PricingInfo` | — | In-process pricing overrides | `{}` |
| `TraceExporterTimeout` | — | Timeout for trace exports | `10s` |
| `MetricExporterTimeout` | — | Timeout for metric exports | `10s` |
| `MetricExportInterval` | — | Interval for metric exports | `30s` |
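As an illustration, a `Config` combining several of these fields might look like the sketch below. The field names come from the table above, but treat the value types (booleans, `time.Duration` for the timeouts and interval) as assumptions:

```go
// Sketch of a fuller configuration; field names follow the table above,
// but the exact field types are assumptions — verify against the SDK source.
cfg := openlit.Config{
	OtlpEndpoint:                 "http://127.0.0.1:4318",
	ApplicationName:              "billing-service",
	Environment:                  "production",
	ServiceVersion:               "1.4.2",
	DisableBatch:                 true, // export spans immediately, handy in tests
	DisableCaptureMessageContent: true, // keep prompt/completion text out of spans
	MetricExportInterval:         60 * time.Second,
}
```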
### Via environment variable
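Per the table above, `OtlpEndpoint` is the only setting with an environment-variable equivalent, so the endpoint can be set without touching code:

```shell
# Point the SDK at your OTLP backend; overrides the OtlpEndpoint default.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
```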
## Getting started

- **OpenAI Integration**: Monitor chat completions, streaming, embeddings, and image generation
- **Anthropic Integration**: Monitor Claude messages and streaming with cache token tracking
- **Destinations**: Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
- **Configuration**: Configure the OpenLIT SDK according to your requirements

