- **Explore Traces**: View your AI application traces, including:
  - LLM request traces with detailed timing and execution flow
  - Model performance analytics and latency metrics
  - Request/response data for debugging and optimization
  - Token usage and cost tracking
  - The complete trace hierarchy showing the full request lifecycle
- **Trace Analytics**: Analyze patterns, performance bottlenecks, and usage trends
- **Performance Monitoring**: Monitor latency, throughput, and error rates
- **Debugging Tools**: Use detailed trace data to debug and optimize your AI applications
Your OpenLIT-instrumented AI applications appear in Murnitur.ai automatically, with comprehensive trace observability covering LLM performance, execution flow, and detailed debugging, optimized for AI workloads.
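As a minimal sketch of how such instrumentation is typically wired up: OpenLIT exports traces over OTLP, so pointing the standard OpenTelemetry exporter variables at your Murnitur.ai project is usually all that is needed. The endpoint URL and the `x-api-key` header name below are placeholders, not confirmed values; copy the real ones from your Murnitur.ai project settings.

```python
import os


def murnitur_otlp_env(api_key: str,
                      endpoint: str = "https://<your-murnitur-otlp-endpoint>") -> dict:
    """Build the standard OTLP exporter environment variables.

    The endpoint default and the `x-api-key` header name are assumptions
    for illustration -- use the values from your Murnitur.ai project.
    """
    return {
        "OTEL_EXPORTER_OTLP_ENDPOINT": endpoint,
        "OTEL_EXPORTER_OTLP_HEADERS": f"x-api-key={api_key}",
    }


# Apply the variables before initializing OpenLIT, then let it
# auto-instrument your LLM clients (requires: pip install openlit):
#
#   os.environ.update(murnitur_otlp_env("YOUR_API_KEY"))
#   import openlit
#   openlit.init(application_name="my-ai-app")
```

With the environment configured this way, no further code changes are needed: OpenLIT's auto-instrumentation captures the LLM request traces, token usage, and timing data described above and ships them to the configured endpoint.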