1. Get your Credentials
- Visit Murnitur.ai: Go to Murnitur.ai to create your account
- Generate API Key: Navigate to your dashboard and generate your API key
- Save your credentials (see the sketch after this list):
  - API Key: Your Murnitur trace token for authentication
  - Endpoint: https://middleware.murnitur.ai
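If you prefer to keep the trace token out of source code, you can store it in an environment variable and read it at startup. This is a minimal sketch, assuming a hypothetical `MURNITUR_API_KEY` variable name (not an official Murnitur convention):

```python
import os

# Illustrative only: MURNITUR_API_KEY is a hypothetical variable name,
# used here just to keep the trace token out of the code itself.
MURNITUR_ENDPOINT = "https://middleware.murnitur.ai"
MURNITUR_API_KEY = os.environ.get("MURNITUR_API_KEY")

if not MURNITUR_API_KEY:
    raise RuntimeError("Set MURNITUR_API_KEY to the trace token from your Murnitur dashboard.")
```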
Murnitur.ai is optimized for trace data. The integration automatically disables metrics to focus on trace observability.
2. Instrument your application
For direct integration into your Python applications, initialize the OpenLIT SDK with the Murnitur endpoint and your API key.
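A minimal sketch of that initialization, assuming OpenLIT's `openlit.init()` with its `otlp_endpoint`, `otlp_headers`, and `disable_metrics` parameters; the `x-murnix-trace-token` header name is an assumption, so use whatever header your Murnitur dashboard specifies:

```python
import openlit

# Send OpenLIT traces to Murnitur.ai.
# NOTE: the "x-murnix-trace-token" header name is an assumption; replace it
# with the header shown in your Murnitur dashboard if it differs.
openlit.init(
    otlp_endpoint="https://middleware.murnitur.ai",
    otlp_headers="x-murnix-trace-token=YOUR_MURNITUR_API_KEY",
    disable_metrics=True,  # Murnitur.ai is optimized for trace data
)
```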
Replace `YOUR_MURNITUR_API_KEY` with your Murnitur API key from Step 1. Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
3. Visualize in Murnitur.ai
Once your LLM application is instrumented, you can explore the telemetry data in Murnitur.ai:
- Navigate to Murnitur.ai: Go to your Murnitur.ai Dashboard
- Explore Traces: View your AI application traces, including:
  - LLM request traces with detailed timing and execution flow
  - Model performance analytics and latency metrics
  - Request/response data for debugging and optimization
  - Token usage and cost tracking information
  - Complete trace hierarchy showing the full request lifecycle
- Trace Analytics: Analyze patterns, performance bottlenecks, and usage trends
- Performance Monitoring: Monitor latency, throughput, and error rates
- Debugging Tools: Use detailed trace data to debug and optimize your AI applications