- OpenLIT - Open-source platform for tracing, prompt management, evaluations, and scalable AI observability with dashboards, metrics, logs, and remote collectors.
- OpenLIT SDKs - OpenTelemetry-native auto-instrumentation to trace LLMs, agents, vector databases, and GPUs with zero code changes.
- OpenLIT Operator - Kubernetes operator that automatically injects instrumentation into AI applications without requiring code or image changes.
Features
OpenLIT provides distributed tracing capabilities for understanding and debugging AI applications:
- OpenTelemetry-native SDKs - Automatic instrumentation for LLMs, agents, frameworks, vector databases, MCP, and GPUs (see the sketch after this list).
- Exceptions Monitoring - Track and debug application errors with detailed stack traces.
- Universal Compatibility - View traces from any OpenTelemetry-instrumented tool or from LLM instrumentation frameworks such as OpenInference and OpenLLMetry.
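As a concrete starting point, the minimal sketch below shows the zero-code SDK flow. It assumes `pip install openlit openai`, an `OPENAI_API_KEY` in the environment, and an OpenLIT (or any other OTLP) endpoint listening at `http://127.0.0.1:4318`; the model name is illustrative.

```python
# Minimal sketch of zero-code tracing with the OpenLIT Python SDK.
# Assumes an OpenLIT deployment accepting OTLP at http://127.0.0.1:4318
# and OPENAI_API_KEY set in the environment.
import openlit
from openai import OpenAI

# One init call auto-instruments supported LLM SDKs, frameworks,
# vector databases, and GPUs via OpenTelemetry; no other code changes.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI()

# This request is traced automatically; the resulting span is exported
# to OpenLIT along with model, token usage, and cost attributes.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Hello, OpenLIT!"}],
)
print(response.choices[0].message.content)
```

Traces from other OpenTelemetry sources, such as OpenInference or OpenLLMetry instrumentation, can be sent to the same OTLP endpoint and viewed alongside these spans.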
Getting Started
Choose your path to start building better AI applications with OpenLIT:
Quickstart: LLM Observability
Production-ready AI monitoring setup in 2 simple steps with zero code changes
Deploy OpenLIT
Deployment options for scalable LLM monitoring infrastructure
Create a dashboard
Create custom visualizations with flexible widgets, queries, and real-time AI monitoring
Manage prompts
Version, deploy, and collaborate on prompts with centralized management and tracking
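For prompt management, fetching a versioned prompt from application code might look like the hedged sketch below; the exact parameter names, the prompt name `greeting-prompt`, the variables, and the reliance on `OPENLIT_URL`/`OPENLIT_API_KEY` environment variables are assumptions to verify against the SDK reference.

```python
# Hedged sketch: fetch a versioned prompt from OpenLIT's prompt management.
# Assumes OPENLIT_URL and OPENLIT_API_KEY are set in the environment; the
# prompt name, variables, and parameter names below are illustrative.
import openlit

prompt = openlit.get_prompt(
    name="greeting-prompt",           # hypothetical prompt name
    should_compile=True,              # assumed flag: fill in template variables
    variables={"customer": "Ada"},    # hypothetical template variables
)
print(prompt)
```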
Running in Kubernetes? Try the OpenLIT Operator
Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.