To send the OpenTelemetry metrics and traces that OpenLIT generates from your AI application to an OpenTelemetry Collector, follow the steps below. The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data; it can act as an intermediary that routes your OpenLIT data to multiple backends or applies processing transformations.

1. Deploy OpenTelemetry Collector

Install the Collector (choose your preferred method):
# Run OpenTelemetry Collector with OTLP receivers
docker run -p 4317:4317 -p 4318:4318 \
  -v $(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
  otel/opentelemetry-collector-contrib:latest
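If you prefer running the Collector inside Kubernetes, the upstream Helm chart is another option. A sketch, assuming Helm is installed; the resulting Service name depends on your release name and chart naming overrides:
# Install the Collector via the OpenTelemetry Helm chart (release name is an example)
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm install my-otel-collector open-telemetry/opentelemetry-collector \
  --set mode=deployment \
  --set image.repository=otel/opentelemetry-collector-contrib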
Basic Collector Configuration: Create an otel-collector-config.yaml file:
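A minimal sketch that accepts OTLP over gRPC and HTTP and prints what it receives; the debug exporter is only for verification, and you would wire in real backends in Step 3:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  # Prints received telemetry to the Collector logs so you can confirm data is arriving
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug]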
Get the Collector Endpoint:
  • Default HTTP endpoint: http://localhost:4318 or http://your-collector-host:4318
  • Default gRPC endpoint: http://localhost:4317 or http://your-collector-host:4317
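Before instrumenting your application, you can sanity-check that the OTLP/HTTP receiver is reachable. A quick probe; the exact response body varies by Collector version:
# An HTTP 200 response (typically an empty or partialSuccess JSON body) means the receiver is up
curl -s -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{}'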

2. Instrument your application

For Kubernetes deployments with zero-code instrumentation:
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: otelcol-instrumentation
  namespace: default
spec:
  selector:
    matchLabels:
      instrument: "true"
  python:
    instrumentation:
      provider: "openlit"
      version: "latest"
  otlp:
    endpoint: "YOUR_COLLECTOR_ENDPOINT"
    timeout: 30
  resource:
    environment: "production"
    serviceName: "my-ai-service"
Replace:
  • YOUR_COLLECTOR_ENDPOINT with your OpenTelemetry Collector endpoint from Step 1:
    • Collector in the same Kubernetes cluster: http://my-otel-collector:4318
    • External Collector: http://your-collector-host:4318
When deploying the OpenTelemetry Collector in the same Kubernetes cluster, use the service name and namespace for the endpoint (e.g., http://my-otel-collector.monitoring.svc.cluster.local:4318).
Refer to the OpenLIT Operator Documentation for more advanced configurations and use cases.
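If your application runs outside Kubernetes, or you prefer SDK-based instrumentation over the operator, you can point the OpenLIT SDK at the same Collector endpoint. A minimal sketch using the Python SDK; the OpenAI client and model name are only for illustration, and the endpoint should be the value from Step 1:
# Instrument an application directly with the OpenLIT SDK and export OTLP data to the Collector
import openlit
from openai import OpenAI

# Send metrics and traces to the Collector's OTLP/HTTP endpoint from Step 1
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; use whichever model your application calls
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)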

3. Configure Collector Exporters

Once your LLM application is sending data to the OpenTelemetry Collector, configure exporters to route that data to your preferred observability backends.
Popular Exporter Configurations:
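As an illustration, the sketch below extends the Step 1 configuration with an otlphttp exporter; the endpoint and credential header are placeholders for whatever backend you choose:
exporters:
  debug:
    verbosity: detailed
  otlphttp:
    # Placeholder backend endpoint and credentials - replace with your vendor's values
    endpoint: "https://otlp.your-backend.example.com"
    headers:
      authorization: "Bearer YOUR_BACKEND_API_KEY"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlphttp]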
Monitor Collector Health:
# Check collector logs
docker logs <collector-container-id>

# Or for Kubernetes
kubectl logs -l app.kubernetes.io/name=opentelemetry-collector

# Health check endpoint (if enabled)
curl http://localhost:13133/
Benefits of Using OpenTelemetry Collector:
  • Vendor Agnostic: Route data to multiple backends simultaneously
  • Data Processing: Apply transformations, filtering, and sampling
  • Protocol Translation: Convert between different telemetry formats
  • Buffering & Reliability: Handle network issues and backend outages
  • Cost Optimization: Sample and filter data to reduce costs
  • Security: Add authentication, encryption, and data anonymization
Your OpenLIT-instrumented AI applications will send telemetry data to the Collector, which can then process and route it to any number of observability backends, providing flexibility and powerful data processing capabilities for your LLM monitoring infrastructure.