Goals
- Zero Code AI Observability - Enable production-ready LLM monitoring for your AI applications without any code changes.
- OpenTelemetry Native - Built on OpenTelemetry standards for seamless integration with existing observability stacks like Grafana, Datadog, and New Relic.
- Drop-in Replacement - Simply replace `opentelemetry-instrument` with `openlit-instrument` to get the same functionality plus comprehensive AI capabilities.
- Complete Stack Coverage - AI instrumentations + upstream OpenTelemetry instrumentations for full-stack observability.
- End-to-End Distributed Tracing - Complete visibility: HTTP requests → framework routing → database queries → LLM calls → agent workflows → tool usage → responses
- Advanced LLM Monitoring - Real-time cost tracking, token usage optimization, performance monitoring, and evaluation scoring.
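The drop-in replacement described above works at the launch command: wherever you previously started your application with `opentelemetry-instrument`, you start it with `openlit-instrument` instead. A minimal sketch (the `app.py` entry point is a placeholder for your own application):

```shell
# Before: upstream OpenTelemetry zero-code instrumentation
opentelemetry-instrument python app.py

# After: same launch pattern, plus OpenLIT's AI instrumentations
openlit-instrument python app.py
```

No application code changes are required in either case; both commands auto-instrument the process at startup.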
Supported instrumentations
The OpenLIT SDK automatically instruments:
LLM Providers
OpenAI, Anthropic, Google, Azure OpenAI, AWS Bedrock, Ollama, Groq, Cohere, Mistral, and more
AI/Agentic Frameworks
LangChain, LlamaIndex, CrewAI, mem0, AG2, DSPy, Agno, and more
Vector Databases
ChromaDB, Pinecone, Qdrant, Milvus, Weaviate, and more
GPUs
NVIDIA and AMD
HTTP Frameworks and Clients
FastAPI, Flask, Django, Requests, HTTPX, aiohttp, urllib, and more
Supported languages
Python
Complete AI observability with automatic dependency detection. Zero-code instrumentation for production Python LLM applications with real-time performance monitoring.
TypeScript/JavaScript
Full LLM monitoring support for TypeScript / JavaScript applications with distributed tracing, metrics, and cost optimization.
How it works
Option 1: Zero code instrumentation
Getting started
Select from the following guides to learn more about production AI monitoring:
Quickstart: LLM Observability
Production-ready AI monitoring setup in 2 simple steps with zero code changes
Integrations
60+ AI integrations with automatic instrumentation and performance tracking
Deploy OpenLIT Platform
Deployment options for scalable LLM monitoring infrastructure
Destinations
Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
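Because OpenLIT emits standard OTLP telemetry, routing it to these destinations is typically a matter of exporter configuration rather than code. A hedged sketch using the standard OpenTelemetry environment variables (the endpoint URL and header value are placeholders for your backend's actual values):

```shell
# Point the OTLP exporter at your observability backend (placeholder values):
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example.com:4318"
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_KEY"
```

These variables are read by the OpenTelemetry SDK at startup, so the same configuration applies whether you instrument with `opentelemetry-instrument` or `openlit-instrument`.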
Running in Kubernetes? Try the OpenLIT Operator
Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.