To send OpenTelemetry traces generated by OpenLIT from your AI application to Murnitur.ai, follow the steps below.
1. Get your Credentials
- Visit Murnitur.ai: Go to Murnitur.ai to create your account
- Generate API Key: Navigate to your dashboard and generate your API key
- Save your credentials:
  - API Key: Your Murnitur trace token, used for authentication
  - Endpoint: `https://middleware.murnitur.ai`
Murnitur.ai is optimized for trace data. The integration automatically disables metrics to focus on trace observability.
2. Instrument your application
For direct integration into your Python applications, configure OpenLIT either through function arguments or through environment variables.

Function Arguments:

```python
import openlit

openlit.init(
    otlp_endpoint="https://middleware.murnitur.ai",
    otlp_headers="x-murnix-trace-token=YOUR_MURNITUR_API_KEY",
    disable_metrics=True,  # Murnitur focuses on trace data
)
```
Replace `YOUR_MURNITUR_API_KEY` with your Murnitur API key from Step 1.
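Once `openlit.init()` has run, OpenLIT auto-instruments supported LLM clients with no further code changes. A minimal sketch of a traced call, assuming the `openai` package is installed and `OPENAI_API_KEY` is set (the model name is illustrative):

```python
import openlit
from openai import OpenAI

# Initialize OpenLIT first so the client created below is instrumented
openlit.init(
    otlp_endpoint="https://middleware.murnitur.ai",
    otlp_headers="x-murnix-trace-token=YOUR_MURNITUR_API_KEY",
    disable_metrics=True,
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello from OpenLIT!"}],
)
print(response.choices[0].message.content)  # the call above is exported as a trace
```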
Environment Variables:

```python
import openlit

openlit.init()
```

Set these environment variables:

```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="https://middleware.murnitur.ai"
export OTEL_EXPORTER_OTLP_HEADERS="x-murnix-trace-token=YOUR_MURNITUR_API_KEY"
export OTEL_SERVICE_NAME="my-ai-service"
export OTEL_DEPLOYMENT_ENVIRONMENT="production"
export OPENLIT_DISABLE_METRICS="true"
```
Replace `YOUR_MURNITUR_API_KEY` with your Murnitur API key from Step 1.
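If exporting shell variables is inconvenient (for example, in a notebook or a local script), the same variables can be set in-process before `openlit.init()` runs; a minimal sketch under that assumption:

```python
import os
import openlit

# Set the standard OTel variables before init; they are read at startup
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://middleware.murnitur.ai"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "x-murnix-trace-token=YOUR_MURNITUR_API_KEY"
os.environ["OTEL_SERVICE_NAME"] = "my-ai-service"
os.environ["OPENLIT_DISABLE_METRICS"] = "true"

openlit.init()  # picks up the variables set above
```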
Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.

For zero-code auto-instrumentation via the command line, use either CLI arguments or environment variables.

CLI Arguments:
```shell
# Using CLI arguments
openlit-instrument \
  --otlp-endpoint "https://middleware.murnitur.ai" \
  --otlp-headers "x-murnix-trace-token=YOUR_MURNITUR_API_KEY" \
  --service-name "my-ai-service" \
  --deployment-environment "production" \
  --disable-metrics \
  python app.py
```
Replace `YOUR_MURNITUR_API_KEY` with your Murnitur API key from Step 1.
Environment Variables:

```shell
# Set environment variables (these take precedence over CLI arguments)
export OTEL_EXPORTER_OTLP_ENDPOINT="https://middleware.murnitur.ai"
export OTEL_EXPORTER_OTLP_HEADERS="x-murnix-trace-token=YOUR_MURNITUR_API_KEY"
export OTEL_SERVICE_NAME="my-ai-service"
export OTEL_DEPLOYMENT_ENVIRONMENT="production"
export OPENLIT_DISABLE_METRICS="true"

# Run your application
openlit-instrument python app.py
```
Replace `YOUR_MURNITUR_API_KEY` with your Murnitur API key from Step 1.
Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
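For context, the `app.py` passed to `openlit-instrument` needs no OpenLIT-specific code, since instrumentation is injected at launch. A minimal sketch of such an entry point, assuming the `openai` package (the model name is illustrative):

```python
# app.py -- plain application code; no openlit import required
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def main() -> None:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize OpenTelemetry in one sentence."}],
    )
    print(response.choices[0].message.content)

if __name__ == "__main__":
    main()
```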
3. Visualize in Murnitur.ai
Once your LLM application is instrumented, you can explore the telemetry data in Murnitur.ai:
- Navigate to Murnitur.ai: Go to your Murnitur.ai Dashboard
- Explore Traces: View your AI application traces, including:
  - LLM request traces with detailed timing and execution flow
  - Model performance analytics and latency metrics
  - Request/response data for debugging and optimization
  - Token usage and cost tracking information
  - Complete trace hierarchy showing the full request lifecycle
- Trace Analytics: Analyze patterns, performance bottlenecks, and usage trends
- Performance Monitoring: Monitor latency, throughput, and error rates
- Debugging Tools: Use detailed trace data to debug and optimize your AI applications
Your OpenLIT-instrumented AI applications appear in Murnitur.ai automatically, with trace observability focused on LLM performance, execution flow, and detailed debugging for AI workloads.
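If traces do not show up, it can help to confirm that the endpoint and token are reachable independently of OpenLIT. A minimal connectivity check using the OpenTelemetry SDK directly, assuming the endpoint accepts OTLP over HTTP at the standard `/v1/traces` path:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export a single test span to the Murnitur endpoint
exporter = OTLPSpanExporter(
    endpoint="https://middleware.murnitur.ai/v1/traces",  # assumed OTLP/HTTP path
    headers={"x-murnix-trace-token": "YOUR_MURNITUR_API_KEY"},
)
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("murnitur-connectivity-check")
with tracer.start_as_current_span("connectivity-check"):
    pass  # the span itself is the payload

provider.force_flush()  # block until the exporter has sent the span
```

If the span appears in your Murnitur.ai dashboard, the endpoint and token are working and any remaining issue is in the OpenLIT configuration.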