The fastest way to get the Controller running is through the LLM Observability quickstart, which covers deploying OpenLIT and the Controller together on every platform. Get real-time cost tracking, token usage monitoring, and latency optimization for your AI applications.
Deploy OpenLIT and the Controller together using the Helm chart. The Controller automatically discovers your services and lets you enable LLM observability from the dashboard — no code changes needed.
The Controller’s DaemonSet, RBAC, and configuration are all handled by the chart.
The dashboard URL and OTLP endpoint are auto-derived from the Helm release name.
If you’ve created an API key in Settings → API Keys, pass it to the Controller so it authenticates with the dashboard:
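As a sketch, the full install might look like the following. The repository URL, chart name, and release name are assumptions; only the `openlit-controller.apiKey` value is taken from the chart's documented options, so substitute the names from the official OpenLIT Helm instructions:

```shell
# Add the OpenLIT Helm repo (URL is an assumption; check the official docs)
helm repo add openlit https://openlit.github.io/helm
helm repo update

# Install OpenLIT and the Controller together, passing the dashboard API key
# so the Controller can authenticate on its first poll
helm upgrade --install openlit openlit/openlit \
  --set openlit-controller.apiKey="YOUR_KEY"
```

Because the dashboard URL and OTLP endpoint are derived from the release name, no further endpoint configuration is usually needed.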
Your services making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes, image rebuilds, or redeployments required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
1. Deploy OpenLIT with Docker Compose
git clone https://github.com/openlit/openlit.git
From the root directory of the OpenLIT Repo, add the Controller service to your docker-compose.yml:
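A minimal sketch of that service is shown below. The image name and the service hostnames (`openlit`, `otel-collector`) are assumptions to be replaced with the values from your compose file; the environment variable names match the Controller's configuration. Host PID access and elevated privileges are assumed to be required for eBPF:

```yaml
  openlit-controller:
    # Image reference is an assumption; use the image published by OpenLIT
    image: ghcr.io/openlit/openlit-controller:latest
    pid: host           # eBPF discovery needs visibility into host processes
    privileged: true    # required to load eBPF programs
    environment:
      - OPENLIT_URL=http://openlit:3000
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4318
      - OPENLIT_POLL_INTERVAL=60s
      # - OPENLIT_API_KEY=your-api-key   # recommended once API keys exist
```

Then bring the stack up with `docker compose up -d` and the Controller will begin discovering containers.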
Open the OpenLIT dashboard at http://127.0.0.1:3000
Navigate to Agents
Your containers making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes or image rebuilds required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
Create a systemd service with the configuration passed as environment variables:
/etc/systemd/system/openlit-controller.service
[Unit]
Description=OpenLIT Controller
After=network.target

[Service]
Environment="OPENLIT_URL=http://your-openlit-host:3000"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://your-openlit-host:4318"
Environment="OPENLIT_POLL_INTERVAL=60s"
Environment="OPENLIT_ENVIRONMENT=production"
# Environment="OPENLIT_API_KEY=your-api-key"  # Recommended: see tip below
ExecStart=/usr/local/bin/openlit-controller
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
Then reload: systemctl daemon-reload && systemctl restart openlit-controller
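To confirm the unit came up cleanly, the standard systemd commands apply:

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now openlit-controller

# Verify the service is running and follow its logs
systemctl status openlit-controller
journalctl -u openlit-controller -f
```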
3. Enable observability from the Agents page
Open the OpenLIT dashboard and navigate to Agents
Your processes making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
With real-time LLM observability data now flowing to OpenLIT, you can visualize comprehensive AI performance metrics, including token costs, latency patterns, hallucination rates, and model accuracy, to optimize your production AI applications. Head over to OpenLIT at http://127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:
Email: user@openlit.io
Password: openlituser
If you have any questions or need support, reach out to our community.
If you’ve created an API key in Settings → API Keys, configure the Controller to use it. This secures the poll endpoint so only authorized controllers can register services and receive commands.
| Platform   | How to pass the key                                      |
| ---------- | -------------------------------------------------------- |
| Kubernetes | `--set openlit-controller.apiKey="YOUR_KEY"` in Helm     |
| Docker     | `OPENLIT_API_KEY` environment variable                   |
| Linux      | `Environment="OPENLIT_API_KEY=YOUR_KEY"` in the systemd unit |
On a fresh install with no API keys, the Controller can connect without authentication. Once you create your first API key, all controllers must authenticate. See Configuration for the full reference.
Once services are discovered, use the Agents page to:
Enable LLM Observability — Click Enable next to any service to start capturing LLM metrics via eBPF
Enable Agent Observability — For Python services, click Enable to inject the OpenLIT SDK for agent framework traces
View traces and metrics — Head to the Requests and Dashboards sections to see your data
The Controller automatically reconciles desired state. If a pod restarts, a container is recreated, or a process bounces, the Controller will re-enable the observability you configured.
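The reconciliation behavior described above can be sketched as a simple desired-vs-actual diff. This is an illustrative model only, not the Controller's actual implementation; the function and state names are hypothetical:

```python
# Hypothetical sketch of a reconcile loop: compare the desired state
# (services the user enabled in the dashboard) against the actual state
# (instrumentation currently attached) and compute the actions needed
# to converge, e.g. re-enabling observability after a pod restart.

def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual onto desired."""
    actions = []
    for service, mode in desired.items():
        if actual.get(service) != mode:
            actions.append(("enable", service, mode))
    for service in actual:
        if service not in desired:
            actions.append(("disable", service))
    return actions

# A restart wipes a service's instrumentation; the next pass restores it.
desired = {"checkout": "ebpf", "agent-api": "sdk"}
actual = {"checkout": "ebpf"}  # "agent-api" just restarted
print(reconcile(desired, actual))  # → [('enable', 'agent-api', 'sdk')]
```

Running this loop on every poll interval is what makes the enablement self-healing: the outcome depends only on the declared desired state, not on how the actual state drifted.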
Configuration Reference
Customize polling intervals, OTLP endpoints, and more
Architecture Deep Dive
How eBPF discovery, SDK injection, and reconciliation work