To send OpenTelemetry metrics and traces generated by OpenLIT from your AI application to Dynatrace, follow the steps below.

1. Get your Credentials

  1. Log into your Dynatrace environment
  2. Generate an API Token:
    • Go to Settings > Integration > API tokens
    • Click Generate token
    • Give it a name (e.g., openlit-token)
    • Enable these scopes:
      • openTelemetryTrace.ingest - for trace ingestion
      • metrics.ingest - for metrics ingestion
      • logs.ingest - for logs ingestion (optional)
    • Click Generate and copy the token
  3. Get your Environment ID:
    • Your Dynatrace URL format: https://{environment-id}.live.dynatrace.com
    • Extract the {environment-id} part from your Dynatrace URL
  4. Construct your OTLP endpoint:
    • Format: https://{environment-id}.live.dynatrace.com/api/v2/otlp
    • Example: https://abc12345.live.dynatrace.com/api/v2/otlp
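
If you'd rather not hardcode these values, a short Python helper can assemble the endpoint and header strings from environment variables. The variable names DYNATRACE_ENV_ID and DYNATRACE_API_TOKEN below are illustrative, not names OpenLIT or Dynatrace require:

import os

# Illustrative variable names -- store the token however your secret manager prefers
env_id = os.environ["DYNATRACE_ENV_ID"]        # e.g. "abc12345"
api_token = os.environ["DYNATRACE_API_TOKEN"]  # the token generated above

otlp_endpoint = f"https://{env_id}.live.dynatrace.com/api/v2/otlp"
otlp_headers = f"Authorization=Api-Token {api_token}"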

2. Instrument your application

For direct integration into your Python applications:
import openlit

openlit.init(
  # OTLP endpoint for your Dynatrace environment (constructed in Step 1)
  otlp_endpoint="https://YOUR_ENVIRONMENT_ID.live.dynatrace.com/api/v2/otlp",
  # Dynatrace authenticates OTLP ingest with an "Api-Token" authorization header
  otlp_headers="Authorization=Api-Token YOUR_DYNATRACE_API_TOKEN"
)
Replace:
  1. YOUR_ENVIRONMENT_ID with your Dynatrace environment ID.
    • Example: abc12345 (so the endpoint becomes https://abc12345.live.dynatrace.com/api/v2/otlp)
  2. YOUR_DYNATRACE_API_TOKEN with the API token you generated in Step 1.
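Alternatively, OpenLIT can pick up the standard OpenTelemetry environment variables, so the endpoint and token can stay out of your source entirely. A minimal sketch, assuming OTEL_EXPORTER_OTLP_ENDPOINT and OTEL_EXPORTER_OTLP_HEADERS are set in the process environment (note that the OTLP headers variable expects the space in the header value to be URL-encoded as %20):

import openlit

# Assumes the following are exported before the process starts, e.g.:
#   OTEL_EXPORTER_OTLP_ENDPOINT=https://abc12345.live.dynatrace.com/api/v2/otlp
#   OTEL_EXPORTER_OTLP_HEADERS=Authorization=Api-Token%20<your-token>
openlit.init()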
Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
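To confirm everything is wired up end to end, here is a minimal sketch of an instrumented application. It assumes the openai package is installed and OPENAI_API_KEY is set; OpenLIT auto-instruments the OpenAI SDK once init() has run:

import openlit
from openai import OpenAI

# Initialize OpenLIT first so subsequent LLM calls are traced
openlit.init(
  otlp_endpoint="https://YOUR_ENVIRONMENT_ID.live.dynatrace.com/api/v2/otlp",
  otlp_headers="Authorization=Api-Token YOUR_DYNATRACE_API_TOKEN"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
  model="gpt-4o-mini",  # illustrative model choice
  messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)

Each call like this should produce a trace, with token and cost attributes, that you can then view in Dynatrace as described in Step 3.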

3. View your telemetry in Dynatrace

Once your AI application starts sending telemetry data, you can explore it in Dynatrace:
  1. Navigate to Observability: Go to Observe and explore in your Dynatrace environment
  2. Distributed traces: View Distributed traces to see your AI application traces with LLM calls and vector operations
  3. Services: Check Services to monitor your AI service performance and dependencies
  4. Metrics: Explore custom metrics in Metrics for token usage, costs, and AI-specific KPIs
Your OpenLIT-instrumented AI applications will appear automatically in Dynatrace with comprehensive observability including LLM costs, token usage, model performance, and vector database operations.