The OpenLIT SDK is an open-source, OpenTelemetry-native LLM observability toolkit that enables production-ready monitoring of AI applications and agents without any code changes. It includes upstream OpenTelemetry instrumentations for HTTP frameworks, clients, and system components, delivering complete visibility across your entire AI infrastructure.

Goals

  • Zero Code AI Observability - Enable production-ready LLM monitoring for your AI applications without any code changes.
  • OpenTelemetry Native - Built on OpenTelemetry standards for seamless integration with existing observability stacks like Grafana, Datadog, and New Relic.
  • Drop-in Replacement - Simply replace opentelemetry-instrument with openlit-instrument to get the same functionality plus comprehensive AI capabilities.
  • Complete Stack Coverage - AI instrumentations + upstream OpenTelemetry instrumentations for full-stack observability.
  • End-to-End Distributed Tracing - Complete visibility: HTTP requests → framework routing → database queries → LLM calls → agent workflows → tool usage → responses
  • Advanced LLM Monitoring - Real-time cost tracking, token usage optimization, performance monitoring, and evaluation scoring.

Supported instrumentations

The OpenLIT SDK automatically instruments:

LLM Providers

OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Ollama, Groq, Cohere, Mistral, and more

AI/Agentic Frameworks

LangChain, LlamaIndex, CrewAI, mem0, AG2, DSPy, Agno, and more

Vector Databases

ChromaDB, Pinecone, Qdrant, Milvus, Weaviate, and more

GPUs

NVIDIA and AMD

HTTP Frameworks and Clients

FastAPI, Flask, Django, Requests, HTTPX, aiohttp, urllib, and more

Supported languages

Python

Complete AI observability with automatic dependency detection. Zero-code instrumentation for production Python LLM applications with real-time performance monitoring.

TypeScript/JavaScript

Full LLM monitoring support for TypeScript/JavaScript applications with distributed tracing, metrics, and cost optimization.

How it works

Option 1: Zero code instrumentation
openlit-instrument python your_app.py
Option 2: Manual instrumentation
import openlit

openlit.init()  # Enables automatic instrumentation for AI apps and agents
Both approaches provide production-ready observability: distributed tracing, real-time cost tracking, latency monitoring, token usage optimization, and evaluation scoring.

Getting started

Select from the following guides to learn more about production AI monitoring:
Kubernetes

Running in Kubernetes? Try the OpenLIT Operator

Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.