To send OpenTelemetry traces generated by OpenLIT from your AI application to Langfuse, follow the steps below. Langfuse is an OpenTelemetry backend that natively ingests traces from OpenTelemetry instrumentation libraries such as OpenLIT.

1. Get your Credentials

  1. Sign up at Langfuse: Go to Langfuse Cloud or deploy a self-hosted Langfuse instance
  2. Get your Project Keys:
    • Public Key: Your Langfuse public key (starts with pk-lf-)
    • Secret Key: Your Langfuse secret key (starts with sk-lf-)
  3. Choose your data region:
    • EU Region: https://cloud.langfuse.com/api/public/otel
    • US Region: https://us.cloud.langfuse.com/api/public/otel
    • Self-hosted: https://your-langfuse-instance.com/api/public/otel
Save these credentials; you'll need them for authentication.
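For reference, the auth header value is simply the string public_key:secret_key, Base64-encoded per the HTTP Basic scheme. A minimal Python sketch of that encoding (the key values are placeholders):
import base64

# Placeholders: substitute your actual Langfuse keys
public_key = "pk-lf-..."
secret_key = "sk-lf-..."

# HTTP Basic auth: Base64-encode "public_key:secret_key"
token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
print(f"Authorization=Basic {token}")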

2. Instrument your application

For Kubernetes deployments, the OpenLIT Operator provides zero-code instrumentation. The AutoInstrumentation resource below targets any pod labeled instrument: "true":
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: langfuse-instrumentation
  namespace: default
spec:
  selector:
    matchLabels:
      instrument: "true"
  python:
    instrumentation:
      provider: "openlit"
      version: "latest"
  otlp:
    endpoint: "https://cloud.langfuse.com/api/public/otel"
    headers: "Authorization=Basic <BASE64_ENCODED_AUTH>"
    timeout: 30
  resource:
    environment: "production"
    serviceName: "my-ai-service"
To create the Base64-encoded auth header:
# Replace with your actual Langfuse keys
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
# Note: GNU coreutils base64 wraps long output; on Linux use `base64 -w 0`
export LANGFUSE_AUTH=$(echo -n "$LANGFUSE_PUBLIC_KEY:$LANGFUSE_SECRET_KEY" | base64)
echo "$LANGFUSE_AUTH"
Then:
  1. Replace <BASE64_ENCODED_AUTH> with the output of the Base64 command above
  2. Set the endpoint for your region:
    • EU: https://cloud.langfuse.com/api/public/otel
    • US: https://us.cloud.langfuse.com/api/public/otel
    • Self-hosted: https://your-langfuse-instance.com/api/public/otel
Refer to the OpenLIT Operator Documentation for more advanced configurations and use cases.
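If you are running outside Kubernetes (or prefer code-based setup), the same endpoint and headers can be supplied to the OpenLIT Python SDK through the standard OTLP environment variables. A minimal sketch, assuming the openlit package is installed; the endpoint and auth value are placeholders to set for your region and keys:
import os

# Point the OTLP exporter at Langfuse before initializing OpenLIT.
# Use the endpoint for your region; <BASE64_ENCODED_AUTH> is the token from step 1.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Basic <BASE64_ENCODED_AUTH>"

import openlit

# Auto-instruments supported LLM libraries in this process and exports
# spans to the endpoint configured above.
openlit.init()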

3. Visualize in Langfuse

Once your LLM application is instrumented, you can explore the telemetry data in Langfuse:
  1. Navigate to Langfuse: Go to your Langfuse Dashboard (or your self-hosted instance)
  2. Explore Traces: Click on Traces in the sidebar to view your AI application traces
  3. View Detailed Traces: Each trace includes:
    • LLM requests with detailed timing and token usage
    • Model performance analytics and latency metrics
    • Request/response payloads for debugging
    • Cost tracking and token consumption
    • Hierarchical spans showing the complete request flow
  4. Sessions and Users: Link traces to user sessions for comprehensive observability (see the sketch after this list)
  5. Datasets and Evaluations: Use Langfuse’s evaluation features to assess model performance
  6. Analytics Dashboard: Monitor trends, costs, and performance over time
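As referenced in item 4, you can link traces to sessions and users by setting span attributes that Langfuse recognizes. A minimal Python sketch, assuming the langfuse.session.id and langfuse.user.id attribute names from Langfuse's OpenTelemetry property mapping (verify against the current Langfuse docs); the IDs are placeholders:
from opentelemetry import trace

tracer = trace.get_tracer("my-ai-service")

# Attribute names are an assumption based on Langfuse's documented
# OTel property mapping; the IDs below are placeholders.
with tracer.start_as_current_span("chat-request") as span:
    span.set_attribute("langfuse.session.id", "session-123")
    span.set_attribute("langfuse.user.id", "user-456")
    # ...call your LLM here; OpenLIT creates child spans automatically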
Example: You can view this sample trace to see how OpenLIT traces appear in Langfuse. Your OpenLIT-instrumented AI applications appear in Langfuse automatically, with comprehensive observability: LLM costs, token usage, model performance, and detailed execution traces with full context for debugging.