The fastest way to get the Controller running is through the LLM Observability quickstart, which covers deploying OpenLIT and the Controller together on every supported platform. You get real-time cost tracking, token-usage monitoring, and latency data for your AI applications.

The steps below use Kubernetes with Helm. Docker and Linux specifics are noted under Authenticate with an API Key.

Step 1: Deploy OpenLIT with the Controller

Deploy OpenLIT and the Controller together using the Helm chart. The Controller automatically discovers your services and lets you enable LLM observability from the dashboard — no code changes needed.
helm repo add openlit https://openlit.github.io/helm/
helm repo update
helm install openlit openlit/openlit --set openlit-controller.enabled=true
The Controller’s DaemonSet, RBAC, and configuration are all handled by the chart. The dashboard URL and OTLP endpoint are auto-derived from the Helm release name.
If you’ve created an API key in Settings → API Keys, pass it to the Controller so it authenticates with the dashboard:
helm install openlit openlit/openlit \
  --set openlit-controller.enabled=true \
  --set openlit-controller.apiKey="YOUR_OPENLIT_API_KEY"
You can also set the OPENLIT_API_KEY environment variable. See Controller configuration for details.
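For repeatable installs, the same settings can live in a values file. A minimal sketch, equivalent to the --set flags above (Helm maps each dotted --set key to the nested YAML shown here):

# values.yaml
openlit-controller:
  enabled: true
  apiKey: "YOUR_OPENLIT_API_KEY"

helm install openlit openlit/openlit -f values.yaml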
For detailed configuration options, see the Installation guide and Controller configuration.
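Once installed, a quick sanity check that the Controller's pods are running (the selector assumes the chart applies the standard Helm app.kubernetes.io/instance label and that the release is named openlit):

kubectl get pods -l app.kubernetes.io/instance=openlit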
Step 2: Enable observability from the Agents page

  1. Open the OpenLIT dashboard and navigate to Agents
  2. Your services making LLM API calls will appear automatically
  3. Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes, image rebuilds, or redeployments required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
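As a minimal sketch of the Python SDK, assuming a locally reachable OpenLIT OTLP endpoint on port 4318 (adjust the URL for your deployment):

# Auto-instruments supported LLM client libraries in this process and
# exports traces and metrics to OpenLIT over OTLP.
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")  # assumption: local or port-forwarded OpenLIT

After init, calls made through supported LLM libraries are traced automatically; no further code changes are needed.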

Monitor and optimize your AI applications

With real-time LLM observability data now flowing to OpenLIT, you can visualize comprehensive AI performance metrics, including token costs, latency patterns, hallucination rates, and model accuracy, to optimize your production AI applications. Open OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in with the default credentials:
  • Email: user@openlit.io
  • Password: openlituser
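On Kubernetes, the dashboard is not published on 127.0.0.1 by default; port-forward the service first (the service name assumes a Helm release named openlit):

kubectl port-forward svc/openlit 3000:3000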
If you have any questions or need support, reach out to our community.

Authenticate with an API Key

If you’ve created an API key in Settings → API Keys, configure the Controller to use it. This secures the poll endpoint so only authorized controllers can register services and receive commands.
How to pass the key, by platform:
  • Kubernetes: --set openlit-controller.apiKey="YOUR_KEY" in Helm
  • Docker: the OPENLIT_API_KEY environment variable
  • Linux: Environment="OPENLIT_API_KEY=YOUR_KEY" in the systemd unit
On a fresh install with no API keys, the Controller can connect without authentication. Once you create your first API key, all controllers must authenticate. See Configuration for the full reference.
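On Linux, for example, a systemd drop-in keeps the key out of the main unit file (the unit name openlit-controller.service is an assumption; substitute your actual unit):

# /etc/systemd/system/openlit-controller.service.d/override.conf
[Service]
Environment="OPENLIT_API_KEY=YOUR_KEY"

sudo systemctl daemon-reload
sudo systemctl restart openlit-controller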

What’s Next?

Once services are discovered, use the Agents page to:
  1. Enable LLM Observability — Click Enable next to any service to start capturing LLM metrics via eBPF
  2. Enable Agent Observability — For Python services, click Enable to inject the OpenLIT SDK for agent framework traces
  3. View traces and metrics — Head to the Requests and Dashboards sections to see your data
The Controller automatically reconciles desired state. If a pod restarts, a container is recreated, or a process bounces, the Controller will re-enable the observability you configured.

Configuration Reference

Customize polling intervals, OTLP endpoints, and more

Architecture Deep Dive

How eBPF discovery, SDK injection, and reconciliation work