OpenLIT automatically instruments LLMs, VectorDBs, MCP, and frameworks by default.
Get real-time cost tracking, token usage monitoring, and latency optimization for your AI applications.

The steps below assume your AI application is running on Kubernetes.

1. Deploy OpenLIT with the Controller

Deploy OpenLIT and the Controller together using the Helm chart. The Controller automatically discovers your services and lets you enable LLM observability from the dashboard — no code changes needed.
helm repo add openlit https://openlit.github.io/helm/
helm repo update
helm install openlit openlit/openlit --set openlit-controller.enabled=true
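Once the install completes, you can sanity-check the deployment before moving on. This is a quick check assuming the default namespace and a release named openlit; the Controller runs as a DaemonSet:
kubectl get pods
kubectl get daemonset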
The Controller’s DaemonSet, RBAC, and configuration are all handled by the chart. The dashboard URL and OTLP endpoint are auto-derived from the Helm release name.
If you’ve created an API key in Settings → API Keys, pass it to the Controller so it authenticates with the dashboard:
helm install openlit openlit/openlit \
  --set openlit-controller.enabled=true \
  --set openlit-controller.apiKey="YOUR_OPENLIT_API_KEY"
You can also set the OPENLIT_API_KEY environment variable. For detailed configuration options, see the Installation guide and Controller configuration.
2. Enable observability from the Agents page

  1. Open the OpenLIT dashboard and navigate to Agents
  2. Your services making LLM API calls will appear automatically
  3. Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes, image rebuilds, or redeployments required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
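If you take the SDK route, instrumentation is a couple of lines of Python. A minimal sketch, after installing the SDK with pip install openlit; the otlp_endpoint value is an assumption that OpenLIT's OTLP receiver is reachable at http://127.0.0.1:4318 (for example via a port-forward), so adjust it to your deployment:
import openlit

# Auto-instruments supported LLM, VectorDB, MCP, and framework libraries
# in this process and exports traces and metrics over OTLP.
# The endpoint below is an assumption; point it at your OpenLIT OTLP receiver.
openlit.init(otlp_endpoint="http://127.0.0.1:4318")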

Monitor and optimize your AI applications

With real-time LLM observability data now flowing to OpenLIT, you can visualize AI performance metrics such as token costs, latency patterns, hallucination rates, and model accuracy to optimize your production AI applications. Open OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in with the default credentials:
  • Email: user@openlit.io
  • Password: openlituser
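If OpenLIT is running inside your cluster rather than locally, a port-forward makes the dashboard reachable at that address. This sketch assumes the chart exposes a Service named openlit on port 3000 in the current namespace; adjust the name and namespace to your release:
kubectl port-forward svc/openlit 3000:3000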
If you have any questions or need support, reach out to our community.

Quickstart: LLM Evaluations

Get started with evaluating your LLM responses in 2 simple steps

Integrations

60+ AI integrations with automatic instrumentation and performance tracking

Create a dashboard

Create custom visualizations with flexible widgets, queries, and real-time AI monitoring