The OpenLIT platform provides comprehensive trace visualization capabilities. You can view traces in two ways:
- Traces Page: Navigate to `127.0.0.1:3000/traces` to view all distributed traces from your AI applications, with detailed span analysis and execution flow.
- Dashboard Widgets: Create custom trace widgets in your dashboards to monitor specific trace metrics, latency trends, and performance insights alongside other observability data.
Group traces
Use Group By on the Traces page to roll up large trace lists into meaningful groups before drilling into individual spans. Grouping works with the selected time range and any active filters, so you can narrow the dataset first and then compare trace segments. You can group traces by:

- Model: Compare requests by `gen_ai.request.model`.
- Provider: Compare requests by `gen_ai.system`.
- Span Name: Group repeated operations or framework steps.
- Application: Compare services using the `service.name` resource attribute.
- Custom attribute: Group by any span attribute, resource attribute, or top-level trace field available in your trace data.
Grouping is best for finding high-volume models, expensive providers, slow span types, or application-level hotspots before opening an individual trace.
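The narrow-then-group workflow above can be sketched in plain Python over a few toy span records. The attribute keys mirror the conventions named in this section (`gen_ai.request.model`, `gen_ai.system`); the span values and the simplified `duration_ms` field are invented for illustration and are not OpenLIT's internal data model.

```python
from collections import defaultdict

# Toy span records; keys follow the attribute names above,
# values are invented for illustration.
spans = [
    {"gen_ai.request.model": "gpt-4o", "gen_ai.system": "openai", "duration_ms": 820},
    {"gen_ai.request.model": "gpt-4o", "gen_ai.system": "openai", "duration_ms": 640},
    {"gen_ai.request.model": "claude-3-5-sonnet", "gen_ai.system": "anthropic", "duration_ms": 1100},
]

def group_spans(spans, key):
    """Roll spans up by one attribute: request count and total latency per group."""
    groups = defaultdict(lambda: {"count": 0, "total_ms": 0})
    for span in spans:
        g = groups[span[key]]
        g["count"] += 1
        g["total_ms"] += span["duration_ms"]
    return dict(groups)

# Narrow the dataset first (an active filter), then group the remainder.
fast_spans = [s for s in spans if s["duration_ms"] < 1000]
by_model = group_spans(fast_spans, "gen_ai.request.model")
```

Here `by_model` contains only the `gpt-4o` group (two spans, 1460 ms total), which is the same shape of answer the Traces page gives you: which groups dominate the filtered slice, before you open any individual trace.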
Filter and group together
Grouping can be combined with the existing trace filters. For example, you can filter to a single environment, apply a maximum cost threshold, and then group by model to find which models dominate that filtered slice. Custom attribute filters and custom group-by attributes can be used together when you need to inspect application-specific metadata.

Quickstart: LLM Observability
Production-ready AI monitoring setup in 2 simple steps with zero code changes
Create a dashboard
Create custom visualizations with flexible widgets, queries, and real-time AI monitoring
Integrations
60+ AI integrations with automatic instrumentation and performance tracking
Zero-code observability with the OpenLIT Controller
Discover and instrument LLM traffic across Kubernetes, Docker, and Linux using eBPF — no code changes required.

