To send the OpenTelemetry metrics and traces that OpenLIT generates from your AI application to OneUptime, follow the steps below.

1. Get your Credentials

  1. Sign in to your OneUptime account
  2. Navigate to Project Settings:
    • Click on More in the Navigation bar
    • Click on Project Settings
  3. Create Telemetry Ingestion Key:
    • On the Telemetry Ingestion Key page, click on Create Ingestion Key to create a token
  4. Copy the Token:
    • Once you have created the token, click on View to view and copy it

2. Instrument your application

For Kubernetes deployments, the OpenLIT Operator supports zero-code instrumentation via an AutoInstrumentation resource:

```yaml
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: oneuptime-instrumentation
  namespace: default
spec:
  selector:
    matchLabels:
      instrument: "true"   # only pods carrying this label are instrumented
  python:
    instrumentation:
      provider: "openlit"
      version: "latest"
  otlp:
    endpoint: "https://otlp.oneuptime.com"
    headers: "x-oneuptime-token=YOUR_ONEUPTIME_SERVICE_TOKEN"
    timeout: 30
  resource:
    environment: "production"
    serviceName: "my-ai-service"
```
Replace YOUR_ONEUPTIME_SERVICE_TOKEN with the OneUptime Ingestion Key value you copied in Step 1.
Refer to the OpenLIT Operator Documentation for more advanced configurations and use cases.
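The selector in the resource above only matches workloads that carry the `instrument: "true"` label. As a minimal sketch, a Deployment for your application would add that label to its pod template; the name `my-ai-service` and the container image here are placeholders for illustration, not values required by the operator:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ai-service          # hypothetical workload name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-ai-service
  template:
    metadata:
      labels:
        app: my-ai-service
        instrument: "true"     # matches the AutoInstrumentation selector above
    spec:
      containers:
        - name: app
          image: registry.example.com/my-ai-service:latest  # placeholder image
```

With the label in place, matching Python pods are instrumented at startup per the zero-code approach above, without changes to your application code.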

3. Visualize in OneUptime

Once your AI application is instrumented, you can explore the telemetry data in OneUptime:
  1. Navigate to Telemetry: Go to your OneUptime project dashboard
  2. View Traces: Check the Telemetry Traces page to see your AI application traces