Goals
- Zero-Code Instrumentation - The OpenLIT Operator automatically injects and configures instrumentation in your AI applications, producing distributed traces and metrics without any code changes.
- OpenTelemetry-native - Built entirely on OpenTelemetry standards and protocols, ensuring seamless integration with existing observability infrastructure and vendor-neutral telemetry collection.
- Provider Flexibility - Supports multiple AI instrumentation providers (OpenLIT, OpenInference, OpenLLMetry, Custom) with easy switching between them.
Adopt OpenTelemetry for AI applications in minutes
Get complete visibility into your LLM applications and AI agents running in Kubernetes. Track token usage, monitor agent workflows, measure response times, and debug AI framework interactions - all without touching your code.
Supported instrumentations
The OpenLIT Operator automatically instruments:
LLM Providers
OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Ollama, Groq, Cohere, Mistral, and more
AI/Agentic Frameworks
LangChain, LlamaIndex, CrewAI, Haystack, AG2, DSPy, Guardrails, and more
Vector Databases
ChromaDB, Pinecone, Qdrant, Milvus, Weaviate, and more
Supported languages
Python
Full support
Complete instrumentation for all Python-based AI applications and AI agents
JavaScript
Coming soon
Complete instrumentation for all JS/TS-based AI applications and AI agents
More languages
Roadmap
Java, Go, and other languages are planned for future releases
How it works
1. Install the Operator - Deploy the OpenLIT Operator to your Kubernetes cluster using Helm.
2. Create an AutoInstrumentation custom resource - Define which applications to instrument.
3. Zero-code AI observability, ready! - Restart your pods and they automatically start emitting distributed traces with LLM costs, token usage, and agent performance metrics.
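As a concrete sketch of step 2, an AutoInstrumentation resource might look like the following. The `apiVersion`, field names, and layout are illustrative assumptions patterned on the capabilities described above (selecting workloads by label, choosing an instrumentation provider); consult the operator's CRD reference for the exact schema.

```yaml
# Hypothetical AutoInstrumentation resource -- field names are illustrative,
# not the authoritative schema.
apiVersion: openlit.io/v1alpha1
kind: AutoInstrumentation
metadata:
  name: python-ai-apps
  namespace: default
spec:
  # Instrument every pod carrying this label; no pod spec, image,
  # or application code changes are needed.
  selector:
    matchLabels:
      instrument: "true"
  python:
    instrumentation:
      provider: openlit   # alternatives per the docs: openinference, openllmetry, custom
```

After applying a resource like this, restarting the matching workloads (step 3) lets the injected instrumentation take effect.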
Getting started
Select from the following guides to learn more about how to use OpenLIT:
Quickstart
Get started with monitoring your LLM applications in two simple steps
Instrumentations
Integrate your AI Stack with OpenLIT
Installation
Deploy OpenLIT in your preferred environment
Destinations
Send telemetry to your existing observability stack
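Because the instrumentation is OpenTelemetry-native, pointing telemetry at an existing stack is typically just OTLP exporter configuration. The environment variables below are defined by the OpenTelemetry specification; the collector address and service name are placeholders, and whether you set them on the workload or in the operator's configuration depends on the operator's schema.

```yaml
# Standard OpenTelemetry SDK environment variables (per the OTel spec);
# the endpoint and service name are placeholders for your own setup.
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.observability.svc.cluster.local:4318"
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: "http/protobuf"
  - name: OTEL_SERVICE_NAME
    value: "my-ai-agent"
```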
Running in Kubernetes? Try the OpenLIT Operator
Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.