Deploy OpenLIT and the Controller together using the Helm chart. The Controller automatically discovers your services and lets you enable LLM observability from the dashboard — no code changes needed.
The Controller’s DaemonSet, RBAC, and configuration are all handled by the chart.
The dashboard URL and OTLP endpoint are auto-derived from the Helm release name.
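If you're installing from scratch, a minimal install looks something like the sketch below. The repository URL and chart name are assumptions based on OpenLIT's published Helm repo, so verify them against the OpenLIT Helm docs before running:

```bash
# Add the OpenLIT Helm repo and install the chart
# (repo URL and chart name are assumptions — confirm in the OpenLIT Helm docs)
helm repo add openlit https://openlit.github.io/helm/
helm repo update
# The release name ("openlit" here) drives the auto-derived dashboard and OTLP URLs
helm install openlit openlit/openlit
```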
If you’ve created an API key in Settings → API Keys, pass it to the Controller so it authenticates with the dashboard:
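For example, as a Helm value. Note that `controller.apiKey` is a hypothetical values path used for illustration; confirm the actual key name in the chart's values.yaml:

```bash
# `controller.apiKey` is a hypothetical values path — check the chart's
# values.yaml for the real one before using this
helm upgrade --install openlit openlit/openlit \
  --set controller.apiKey=<your-api-key>
```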
Your services making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes, image rebuilds, or redeployments required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
1. Deploy OpenLIT with Docker Compose
```bash
git clone git@github.com:openlit/openlit.git
```
From the root directory of the OpenLIT repo, add the Controller service to your docker-compose.yml:
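The snippet below is a sketch of what that service entry might look like. The image name, service hostnames, and security settings are assumptions (eBPF instrumentation generally needs host PID visibility and elevated privileges); check the OpenLIT repo for the official definition. The environment variables mirror the systemd example later in this guide.

```yaml
services:
  # ...existing OpenLIT services (dashboard, ClickHouse, etc.)...

  # Hypothetical Controller entry — image name and settings are assumptions
  openlit-controller:
    image: ghcr.io/openlit/openlit-controller:latest
    privileged: true   # eBPF instrumentation typically needs elevated privileges
    pid: host          # lets the Controller discover host and container processes
    environment:
      OPENLIT_URL: "http://openlit:3000"                  # dashboard service
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://openlit:4318"  # OTLP ingest endpoint
      OPENLIT_POLL_INTERVAL: "60s"
      OPENLIT_ENVIRONMENT: "production"
```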
Open the OpenLIT dashboard at http://127.0.0.1:3000
Navigate to Agents
Your containers making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes or image rebuilds required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
2. Run the Controller as a systemd service
Create a systemd service with the configuration passed as environment variables:
/etc/systemd/system/openlit-controller.service
```ini
[Unit]
Description=OpenLIT Controller
After=network.target

[Service]
Environment="OPENLIT_URL=http://your-openlit-host:3000"
Environment="OTEL_EXPORTER_OTLP_ENDPOINT=http://your-openlit-host:4318"
Environment="OPENLIT_POLL_INTERVAL=60s"
Environment="OPENLIT_ENVIRONMENT=production"
# Environment="OPENLIT_API_KEY=your-api-key" # Recommended: see tip below
ExecStart=/usr/local/bin/openlit-controller
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```
Then reload systemd and restart the service: systemctl daemon-reload && systemctl restart openlit-controller
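To confirm the Controller is up and shipping data, standard systemd tooling is enough:

```bash
# Check the service state and follow its logs
systemctl status openlit-controller
journalctl -u openlit-controller -f
```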
3. Enable observability from the Agents page
Open the OpenLIT dashboard and navigate to Agents
Your processes making LLM API calls will appear automatically
Click Enable next to any service to start collecting traces and metrics
The Controller uses eBPF for zero-code LLM observability and can also inject the OpenLIT Python SDK for agent framework traces — no code changes required. For deeper application-level tracing, you can also use the OpenLIT SDKs.
With real-time LLM observability data now flowing to OpenLIT, you can visualize comprehensive AI performance metrics, including token costs, latency patterns, hallucination rates, and model accuracy, to optimize your production AI applications.

Head over to OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:
Email: user@openlit.io
Password: openlituser
If you have any questions or need support, reach out to our community.
Quickstart: LLM Evaluations
Get started with evaluating your LLM responses in 2 simple steps
Integrations
60+ AI integrations with automatic instrumentation and performance tracking
Create a dashboard
Create custom visualizations with flexible widgets, queries, and real-time AI monitoring