OpenLIT is an open-source AI engineering platform that helps teams build, evaluate, and observe AI applications across the entire lifecycle, from development to production. OpenLIT provides the following open-source tools to support this goal:

Documentation Index
Fetch the complete documentation index at: https://docs.openlit.io/llms.txt
Use this file to discover all available pages before exploring further.
- OpenLIT - Open-source platform for tracing, prompt management, evaluations, and scalable AI observability with dashboards, metrics, logs, and remote collectors.
- OpenLIT SDKs - OpenTelemetry-native auto-instrumentation to trace LLMs, agents, vector databases and GPUs with zero-code.
- OpenLIT Controller - Zero-code LLM and Agent observability for Kubernetes, Docker, and Linux using eBPF and automatic SDK injection.
Features
- Tracing
- Evaluations
- Prompts
- Experiments
- Dashboards
- Secrets
OpenLIT provides distributed tracing capabilities for understanding and debugging AI applications:
- OpenTelemetry-native SDKs - Automatic instrumentation for LLMs, agents, frameworks, vector databases, MCP, and GPUs.
- Exceptions Monitoring - Track and debug application errors with detailed stack traces.
- Universal Compatibility - View traces from any OpenTelemetry-instrumented tool, or from LLM instrumentation frameworks such as OpenInference and OpenLLMetry.
Getting Started
Choose your path to start building better AI applications with OpenLIT:

Quickstart: LLM Observability
Set up production-ready AI monitoring in two simple steps with zero code changes
Deploy OpenLIT
Deployment options for scalable LLM monitoring infrastructure
Create a dashboard
Create custom visualizations with flexible widgets, queries, and real-time AI monitoring
Manage prompts
Version, deploy, and collaborate on prompts with centralized management and tracking
Zero-code observability with the OpenLIT Controller
Discover and instrument LLM traffic across Kubernetes, Docker, and Linux using eBPF, with no code changes required.

