To send OpenTelemetry metrics and traces generated by OpenLIT from your AI application to Middleware, follow the steps below.

1. Get your Credentials

  1. Sign in to your Middleware account: Go to the Middleware Dashboard
  2. Navigate to API Keys: Go to Settings > API Keys
  3. Copy your credentials:
    • MW_API_KEY: Your Middleware API key for authentication
    • MW_TARGET: Your Middleware target URL endpoint
Save these values - you’ll need them for configuration.
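
To keep these credentials out of source control, a common pattern is to export them as environment variables and read them at startup. The variable names below are only a suggested convention, not something Middleware requires:

import os

# Suggested (hypothetical) environment variable names for the Step 1 values
MW_API_KEY = os.environ["MW_API_KEY"]  # e.g. dxyxsdojzrgpsvizzzcsvhrwnmzqdsdsd
MW_TARGET = os.environ["MW_TARGET"]    # e.g. https://abcd.middleware.io:443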

2. Instrument your application

For direct integration into your Python applications:
import openlit

openlit.init(
  # Middleware ingest endpoint that telemetry is exported to
  otlp_endpoint="MW_TARGET",
  # Service name shown for this application in Middleware
  application_name="YOUR_APPLICATION_NAME",
  otlp_headers={
      # Middleware API key used to authenticate the export
      "Authorization": "MW_API_KEY",
      "X-Trace-Source": "openlit",
  }
)
Replace:
  1. MW_TARGET with your Middleware target URL from Step 1.
    • Example: https://abcd.middleware.io:443
  2. MW_API_KEY with your Middleware API key from Step 1.
    • Example: dxyxsdojzrgpsvizzzcsvhrwnmzqdsdsd
  3. YOUR_APPLICATION_NAME with your application name.
Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
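
Putting the pieces together, the following is a minimal end-to-end sketch, not the only supported setup. It assumes the credentials are exposed as environment variables (as in the Step 1 sketch) and that the openai package is installed with OPENAI_API_KEY set; once openlit.init() has run, OpenLIT’s auto-instrumentation traces supported LLM clients such as OpenAI, so the chat completion below is captured without extra code:

import os

import openlit
from openai import OpenAI

# Initialize OpenLIT with credentials read from the environment (see Step 1)
openlit.init(
    otlp_endpoint=os.environ["MW_TARGET"],
    application_name="demo-chat-app",  # hypothetical application name
    otlp_headers={
        "Authorization": os.environ["MW_API_KEY"],
        "X-Trace-Source": "openlit",
    },
)

# LLM calls made through supported clients after init() are traced automatically
client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": "Hello from OpenLIT!"}],
)
print(response.choices[0].message.content)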

3. Visualize in Middleware

Once your LLM application is instrumented, you can explore the telemetry data in Middleware:
  1. Navigate to LLM Observability: Go to your Middleware Dashboard and click on LLM Observability in the sidebar
  2. Explore AI Operations: View your AI application traces including:
    • LLM request traces with detailed timing
    • Token usage and cost information
    • Vector database operations
    • Model performance analytics
    • Request/response payloads (if enabled)
  3. Custom Dashboards: Create custom dashboards for your specific LLM metrics
  4. Alerting: Set up alerts for LLM performance anomalies and cost thresholds
  5. Performance Analysis: Analyze latency, throughput, and resource usage patterns
For detailed information on LLM Observability features, consult the Middleware LLM Observability Documentation.