In this 5-minute tutorial, you’ll deploy an example AI Agent in Kubernetes and automatically get complete observability with zero code changes. Using OpenLIT Operator, you’ll capture distributed traces showing LLM costs, token usage, and agent performance metrics - then visualize everything in the OpenLIT dashboard.

📋 Prerequisites

  • Kubernetes cluster with cluster admin access
  • Helm package manager
  • kubectl configured for your cluster
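
Before starting, it can help to confirm the tooling and access are in place. A quick check, assuming kubectl and helm are already on your PATH:

```shell
# Confirm client tools are installed
kubectl version --client
helm version

# Confirm kubectl points at the intended cluster
kubectl config current-context

# Confirm cluster admin access (prints "yes" for a cluster admin)
kubectl auth can-i '*' '*' --all-namespaces
```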

Step 1: Deploy OpenLIT Platform

Install the OpenLIT observability platform to collect and monitor the performance of your LLM apps and AI agents:

Add Helm Repository
helm repo add openlit https://openlit.github.io/helm/
helm repo update
Install OpenLIT Platform
helm install openlit openlit/openlit
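
The commands later in this guide reference the openlit namespace; note that if helm install was run without --namespace openlit --create-namespace, the release lands in your current namespace instead. Either way, you can watch the platform pods come up:

```shell
# Watch the OpenLIT platform pods until they report Running
kubectl get pods -n openlit -w
```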

Step 2: Deploy OpenLIT Operator

Install the OpenLIT Operator to enable zero-code instrumentation:

Install OpenLIT Operator
helm install openlit-operator openlit/openlit-operator
Verify the operator is running
# Check operator pod status
kubectl get pods -n openlit -l app.kubernetes.io/name=openlit-operator
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
openlit-operator-7b9c8d5f7b-xyz12   1/1     Running   0          30s

Step 3: Create AutoInstrumentation Custom Resource

Create an AutoInstrumentation resource to define how your AI apps should be instrumented:
kubectl apply -f - <<EOF
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: quickstart-instrumentation
  namespace: default
spec:
  selector:
    matchLabels:
      instrumentation: openlit
  otlp:
    endpoint: "http://openlit.openlit.svc.cluster.local:4318"
EOF
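
The selector above only matches pods labeled instrumentation: openlit. For an existing Deployment (the name below is a placeholder), one way to add that label to the pod template:

```shell
# Add the selector label to the pod template; updating the template
# also triggers a rolling restart of the Deployment's pods
kubectl patch deployment <your-deployment-name> --type merge \
  -p '{"spec":{"template":{"metadata":{"labels":{"instrumentation":"openlit"}}}}}'
```

Because patching the pod template rolls the pods automatically, a separate rollout restart is only needed when the label was already present before the AutoInstrumentation resource was created.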
Already have AI applications running? Make sure their pods carry the instrumentation: openlit label from the selector above, then restart them so instrumentation is injected into the new pods:
kubectl rollout restart deployment <your-deployment-name>

Step 4: Deploy the Example AI Agent

Deploy the example AI agent built using CrewAI:
kubectl apply -f https://raw.githubusercontent.com/openlit/openlit/main/operator/examples/test-application-deployment.yaml
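
Assuming the example Deployment carries the instrumentation: openlit label that the AutoInstrumentation selector matches, you can confirm the operator mutated the new pod. The exact names of injected init containers and environment variables vary by operator version, so the grep below is only a heuristic:

```shell
# The example pod should match the selector label
kubectl get pods -l instrumentation=openlit

# Look for operator-injected settings (e.g. OTLP/OTel configuration) in the pod spec
kubectl get pods -l instrumentation=openlit -o yaml | grep -i -E 'otel|otlp'
```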

Step 5: View Traces and Metrics in OpenLIT

Access the OpenLIT dashboard to view your AI application traces:

Port Forward to OpenLIT
# Forward the OpenLIT dashboard port
kubectl port-forward -n openlit svc/openlit 3000:3000
Access Dashboard
  1. Open your browser and navigate to http://localhost:3000
  2. Navigate to Traces section in the dashboard
What You’ll See

In the OpenLIT dashboard, you’ll see:
  • Service Overview: Your openlit-test-app service with health metrics
  • Trace Timeline: Individual traces for HTTP requests and OpenAI API calls
  • LLM Operations: Detailed spans showing OpenAI API calls with token usage
  • Performance Metrics: Response times, error rates, and throughput
  • Cost Tracking: Token usage and estimated costs