To send OpenTelemetry metrics and traces generated by OpenLIT from your AI application to Highlight.io, follow the steps below.

1. Get your Credentials

  1. Sign in to your Highlight.io account
  2. Navigate to Project Settings:
    • Go to your project dashboard
    • Click on Settings → Project Settings
  3. Get your Project ID:
    • Copy your Project ID from the settings page
    • This will be used in the x-highlight-project header sent with OTLP requests
  4. Generate API Key (if needed):
    • Navigate to API Keys section
    • Generate a new API key for OpenTelemetry ingestion
    • Copy the API key for authentication
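It can be convenient to keep the values gathered above in environment variables; a minimal shell sketch (the variable names and placeholder values are illustrative, not required by Highlight.io, and the collector host and header name are the ones used in the configuration in Step 2):

```shell
# Placeholder values -- substitute the Project ID and API key copied
# from your Highlight.io project settings.
export HIGHLIGHT_PROJECT_ID="YOUR_PROJECT_ID"
export HIGHLIGHT_API_KEY="YOUR_API_KEY"

# Standard OTLP exporter settings derived from those credentials.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.highlight.io:4318"
export OTEL_EXPORTER_OTLP_HEADERS="x-highlight-project=${HIGHLIGHT_PROJECT_ID}"
```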

2. Instrument your application

For Kubernetes deployments with zero-code instrumentation:
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: highlight-instrumentation
  namespace: default
spec:
  selector:
    matchLabels:
      instrument: "true"
  python:
    instrumentation:
      provider: "openlit"
      version: "latest"
  otlp:
    endpoint: "https://otel.highlight.io:4318/v1/traces"
    headers: "x-highlight-project=YOUR_PROJECT_ID"
    timeout: 30
  resource:
    environment: "production"
    serviceName: "my-ai-service"
Replace:
  1. YOUR_PROJECT_ID with your Highlight.io Project ID from Step 1.
    • Example: x-highlight-project=1jdkoe52
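The AutoInstrumentation selector above only matches workloads labeled instrument: "true", so that label must also appear on your pod template. A hypothetical Deployment fragment showing where the label goes (the name, namespace, and image are placeholders, not values required by the operator):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ai-service
  namespace: default
spec:
  selector:
    matchLabels:
      app: my-ai-service
  template:
    metadata:
      labels:
        app: my-ai-service
        instrument: "true"   # matched by the AutoInstrumentation selector
    spec:
      containers:
        - name: app
          image: registry.example.com/my-ai-service:latest  # placeholder image
```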
Refer to the OpenLIT Operator Documentation for more advanced configurations and use cases.

3. Visualize in Highlight.io

Once your LLM application is instrumented, you can explore the telemetry data in Highlight.io:
  1. Navigate to Traces: Go to your Highlight.io project dashboard and click on Traces
  2. Explore AI Operations: View your AI application traces including:
    • LLM request traces with detailed timing
    • Token usage and cost information
    • Vector database operations
    • Model performance analytics
    • Request/response payloads (if enabled)
  3. Session Monitoring: Link traces to user sessions for full-stack observability
  4. Error Tracking: Monitor and debug AI application errors and exceptions
  5. Performance Analysis: Analyze latency, throughput, and resource usage
Your OpenLIT-instrumented AI applications will appear automatically in Highlight.io with comprehensive observability: LLM costs, token usage, model performance, and integration with your existing application monitoring.