- Traces Page: Navigate to the Traces section at `127.0.0.1:3000/traces` to view all distributed traces from your AI applications, with detailed span analysis and execution flow.
- Dashboard Widgets: Create custom trace widgets in your dashboards to monitor specific trace metrics, latency trends, and performance insights alongside other observability data.
- Quickstart (LLM Observability): Set up production-ready AI monitoring in two simple steps with zero code changes.
- Create a dashboard: Build custom visualizations with flexible widgets, queries, and real-time AI monitoring.
- Integrations: Connect 60+ AI integrations with automatic instrumentation and performance tracking.
- Running in Kubernetes? Try the OpenLIT Operator: Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.
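The idea behind "automatic instrumentation without modifying application code" can be illustrated in miniature: instrumentation is typically injected by wrapping existing callables at runtime, so the code that calls them is never edited. The following is a toy stdlib-only sketch of that pattern, not OpenLIT's or the operator's actual mechanism:

```python
import functools
import time

captured = []  # stand-in for an exporter shipping spans to a backend

def instrument(fn):
    """Wrap a callable so each invocation records a timed 'span' entry."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            captured.append({
                "name": fn.__name__,
                "duration_s": time.monotonic() - start,
            })
    return wrapper

# Pretend this is an SDK function the application already calls.
def complete(prompt: str) -> str:
    return prompt.upper()

# "Injection": rebind the name at runtime. The application code that calls
# complete() is never edited, yet every call now emits a span record.
complete = instrument(complete)

result = complete("hello")
```

Real auto-instrumentation applies the same wrapping to whole libraries at import time (and, with the operator, injects the agent into pods at admission time), which is why no source changes are required.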