The OpenLIT Operator brings zero-code AI observability to Kubernetes environments, automatically injecting OpenTelemetry instrumentation into your AI applications to produce distributed traces and metrics without requiring any code changes. Built specifically for AI workloads, it provides seamless observability for LLMs, vector databases, and AI frameworks running in Kubernetes.

Goals

  • Zero-Code Instrumentation - The OpenLIT Operator automatically injects and configures instrumentation in your AI applications and produces distributed traces and metrics without any code changes.
  • OpenTelemetry-native - Built entirely on OpenTelemetry standards and protocols, ensuring seamless integration with existing observability infrastructure and vendor-neutral telemetry collection.
  • Provider Flexibility - Support for multiple AI instrumentation providers (OpenLIT, OpenInference, OpenLLMetry, Custom) with easy switching capabilities.
The OpenLIT Operator achieves this by deploying a set of components that work together to inject, configure, and manage telemetry collection from your AI applications.
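As a rough sketch of what this injection looks like, an admission webhook of this kind rewrites pod specs at creation time along the lines below. The init container image, volume path, and container names are illustrative assumptions, not the operator's exact output; the OTEL_* variables are standard OpenTelemetry SDK settings:

# Illustrative pod spec after webhook mutation (field values are assumptions)
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: backend
spec:
  initContainers:
    - name: instrumentation-init            # hypothetical: copies the agent into a shared volume
      image: openlit/instrumentation-python # assumed image name
      volumeMounts:
        - name: instrumentation
          mountPath: /otel-auto-instrumentation
  containers:
    - name: app
      env:
        # Standard OpenTelemetry SDK environment variables
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://openlit:4318"
        - name: OTEL_SERVICE_NAME
          value: "backend"
        - name: PYTHONPATH                   # points Python at the injected instrumentation
          value: "/otel-auto-instrumentation"
      volumeMounts:
        - name: instrumentation
          mountPath: /otel-auto-instrumentation
  volumes:
    - name: instrumentation
      emptyDir: {}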

Adopt OpenTelemetry for AI applications in minutes

Get complete visibility into your LLM applications and AI Agents running in Kubernetes. Track token usage, monitor agent workflows, measure response times, and debug AI framework interactions - all without touching your code.

Supported instrumentations

The OpenLIT Operator automatically instruments:

LLM Providers

OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Ollama, Groq, Cohere, Mistral, and more

AI/Agentic Frameworks

LangChain, LlamaIndex, CrewAI, Haystack, AG2, DSPy, Guardrails, and more

Vector Databases

ChromaDB, Pinecone, Qdrant, Milvus, Weaviate, and more
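To make the zero-code claim concrete, an ordinary application like the Python sketch below needs no OpenTelemetry imports or setup of its own; once its pod is instrumented by the operator, this unchanged call emits traces with token usage and cost data (the model and prompt here are arbitrary):

# app.py - plain OpenAI usage with no instrumentation code of its own
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any supported model
    messages=[{"role": "user", "content": "Summarize OpenTelemetry in one sentence."}],
)
print(response.choices[0].message.content)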

Supported languages

Python

Full support: Complete instrumentation for all Python-based AI applications and AI agents

JavaScript

Coming soon: Complete instrumentation for all JS/TS-based AI applications and AI agents

More languages

Roadmap: Java, Go, and other languages planned for future releases

How it works

1. Install the Operator

Deploy the OpenLIT Operator to your Kubernetes cluster using Helm:
helm install openlit-operator openlit/openlit-operator
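This assumes the openlit chart repository is already registered with Helm; if not, add it first. The repository URL below is an assumption based on OpenLIT's GitHub-hosted charts, so verify it against the project's installation docs:

# Assumed chart repository URL - check the OpenLIT installation docs
helm repo add openlit https://openlit.github.io/helm/
helm repo update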
2. Create an AutoInstrumentation custom resource

Define which applications to instrument:
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: backend-instrumentation  # any name works
spec:
  selector:
    # Use any existing label from your application deployment
    matchLabels:
      app: backend
      type: chatbotagent
  otlp:
    endpoint: "http://openlit:4318"
3. Zero-code AI observability ready!

Restart your pods and they automatically start emitting distributed traces with LLM costs, token usage, and agent performance metrics.
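For example, assuming the Deployment matching the selector above is named backend, a rolling restart sends the new pods through the operator's webhook:

# Deployment name assumed from the app: backend label above
kubectl rollout restart deployment/backend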

Getting started

Select from the following guides to learn more about how to use OpenLIT:
Kubernetes

Running in Kubernetes? Try the OpenLIT Operator

Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.