The OpenLIT Go SDK provides OpenTelemetry-native observability for Go applications using LLM APIs. It wraps your provider clients with lightweight instrumentation that automatically collects traces, metrics, and cost data without requiring any changes to your application logic.

Supported providers

OpenAI

Chat completions, streaming, embeddings, and image generation — with full token usage and cost tracking.

Anthropic

Claude messages and streaming — with cache token tracking and full OpenTelemetry semantic conventions.

What gets collected

Every instrumented call automatically records:
  • Distributed traces — spans with request/response details, model name, token counts, and cost
  • OTel metrics — gen_ai.client.token.usage, gen_ai.client.operation.duration, gen_ai.server.time_to_first_token, gen_ai.server.time_per_output_token, gen_ai.server.request.duration
  • Streaming metrics — time-to-first-chunk and per-chunk latency observations
  • Cost tracking — automatic cost calculation using the built-in pricing data
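To illustrate the shape of that cost calculation, here is a minimal self-contained sketch. The per-million-token prices and the cost helper are made up for illustration; they are not the SDK's built-in pricing data or API.

```go
package main

import "fmt"

// Illustrative per-million-token prices (NOT the SDK's bundled pricing data).
var pricing = map[string]struct{ Input, Output float64 }{
	"gpt-4o": {Input: 2.50, Output: 10.00},
}

// cost derives a dollar cost from token usage — the same general shape of
// calculation the SDK performs against its built-in pricing table.
func cost(model string, inputTokens, outputTokens int) float64 {
	p, ok := pricing[model]
	if !ok {
		return 0 // unknown model: no price available
	}
	return float64(inputTokens)/1e6*p.Input + float64(outputTokens)/1e6*p.Output
}

func main() {
	// 1,000 input tokens and 200 output tokens at the illustrative rates.
	fmt.Printf("%.6f\n", cost("gpt-4o", 1000, 200))
}
```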

Installation

go get github.com/openlit/openlit/sdk/go

Quick start

package main

import (
    "context"
    "fmt"
    "log"

    openlit "github.com/openlit/openlit/sdk/go"
    "github.com/openlit/openlit/sdk/go/instrumentation/openai"
)

func main() {
    // 1. Initialize OpenLIT (once, at startup)
    if err := openlit.Init(openlit.Config{
        OtlpEndpoint:    "http://127.0.0.1:4318",
        ApplicationName: "my-ai-app",
        Environment:     "production",
    }); err != nil {
        log.Fatal(err)
    }
    defer openlit.Shutdown(context.Background())

    // 2. Create an instrumented client
    client := openai.NewClient("your-openai-api-key")

    // 3. Use it exactly like a normal client
    resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
        Model: "gpt-4o",
        Messages: []openai.Message{
            {Role: "user", Content: "Hello, world!"},
        },
        MaxTokens: 100,
    })
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(resp.Choices[0].Message.Content)
}
The quick start points OtlpEndpoint at http://127.0.0.1:4318, the default for a local OpenLIT deployment. Replace it with the URL of your own OpenTelemetry backend.

Configuration

Pass a Config struct to openlit.Init():
Each field is listed with its environment variable (where one exists), description, and default:
  • OtlpEndpoint (env: OTEL_EXPORTER_OTLP_ENDPOINT) — OTLP backend URL. Default: http://127.0.0.1:4318
  • OtlpHeaders — additional HTTP headers for OTLP requests. Default: {}
  • ApplicationName — name of your application. Default: default
  • Environment — deployment environment label. Default: default
  • ServiceVersion — service version string. Default: ""
  • DisableTracing — disable trace collection. Default: false
  • DisableMetrics — disable metrics collection. Default: false
  • DisableBatch — disable batch export (useful for testing). Default: false
  • DisableCaptureMessageContent — omit prompt/completion text from spans. Default: false
  • DetailedTracing — enable component-level tracing detail. Default: false
  • DisablePricingFetch — skip fetching remote pricing data. Default: false
  • PricingEndpoint — URL for custom pricing JSON. Default: built-in
  • PricingInfo — in-process pricing overrides. Default: {}
  • TraceExporterTimeout — timeout for trace exports. Default: 10s
  • MetricExporterTimeout — timeout for metric exports. Default: 10s
  • MetricExportInterval — interval for metric exports. Default: 30s
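A fuller initialization combining several of these fields might look like the sketch below. The field names come from the table above; the values are placeholders, and the map[string]string type for OtlpHeaders is an assumption, not something this page confirms.

```go
openlit.Init(openlit.Config{
    OtlpEndpoint:    "http://127.0.0.1:4318",
    OtlpHeaders:     map[string]string{"Authorization": "Bearer <token>"}, // assumed type
    ApplicationName: "my-ai-app",
    Environment:     "staging",
    ServiceVersion:  "1.4.2",
    // Keep spans but drop prompt/completion text, e.g. for sensitive workloads.
    DisableCaptureMessageContent: true,
    // Use the bundled pricing data without fetching updates at startup.
    DisablePricingFetch: true,
})
```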

Via environment variable

export OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318

// With the environment variable set, no endpoint config is needed
openlit.Init(openlit.Config{
    ApplicationName: "my-ai-app",
})

Getting started