Once your LLM application is instrumented, you can explore the telemetry data in Highlight.io:
**Navigate to Traces**: Go to your Highlight.io project dashboard and click on **Traces**.

**Explore AI Operations**: View your AI application traces, including:

- LLM request traces with detailed timing
- Token usage and cost information
- Vector database operations
- Model performance analytics
- Request/response payloads (if enabled; see the sketch after this list)
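
Payload capture is usually opt-in because prompts and completions can contain sensitive data. Below is a minimal sketch of turning it on at initialization; the flag name is an assumption (recent OpenLIT releases document `capture_message_content`, older ones used `trace_content`), so check your version's docs:

```python
import openlit

# Enable prompt/completion capture on spans. The flag name is an
# assumption for recent OpenLIT versions (older releases used
# trace_content). Captured payloads may contain sensitive data,
# so enable this deliberately.
openlit.init(capture_message_content=True)
```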
**Session Monitoring**: Link traces to user sessions for full-stack observability.
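
To tie a trace to a specific user session, Highlight.io associates spans that carry a session identifier attribute. A hedged sketch using the standard OpenTelemetry API, assuming `highlight.session_id` is the attribute key (confirm against Highlight.io's OpenTelemetry docs); `run_llm` is a hypothetical stand-in for your actual LLM call:

```python
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def run_llm(prompt: str) -> str:
    # Placeholder for your real LLM call.
    return f"echo: {prompt}"

def answer_with_session(session_id: str, prompt: str) -> str:
    with tracer.start_as_current_span("chat.answer") as span:
        # Tag the span with the Highlight.io session ID so the trace
        # appears in that session's timeline. The attribute key is an
        # assumption; verify it for your SDK version.
        span.set_attribute("highlight.session_id", session_id)
        return run_llm(prompt)
```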
**Error Tracking**: Monitor and debug AI application errors and exceptions.
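
Exceptions surface in Highlight.io's error views when they are recorded on the span. A minimal sketch using the standard OpenTelemetry API; the span name, wrapper function, and model name are illustrative:

```python
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

def traced_completion(client, prompt: str):
    # Record failures on the span so they show up as errors in
    # Highlight.io, then re-raise for normal handling upstream.
    with tracer.start_as_current_span("llm.completion") as span:
        try:
            return client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user", "content": prompt}],
            )
        except Exception as exc:
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise
```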
**Performance Analysis**: Analyze latency, throughput, and resource usage.
Your OpenLIT-instrumented AI applications appear in Highlight.io automatically, with comprehensive observability covering LLM costs, token usage, and model performance, integrated with your existing application monitoring.
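
The automatic export is driven by a one-time initialization. For reference, a minimal sketch, assuming Highlight.io's documented OTLP endpoint and `highlight.project_id` resource attribute (verify both for your project and region):

```python
import os
import openlit

# Assumed values -- confirm the endpoint and project-ID attribute against
# Highlight.io's OpenTelemetry docs before relying on them. The resource
# attribute must be set before init() so the SDK picks it up.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "highlight.project_id=<YOUR_PROJECT_ID>"

openlit.init(
    otlp_endpoint="https://otel.highlight.io:4318",  # Highlight.io OTLP/HTTP ingest
    application_name="my-llm-app",
    environment="production",
)
```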