LLMs
OpenAI
GPT4All
Ollama
DeepSeek
Cohere
Anthropic
vLLM
GitHub Models
Azure OpenAI
Azure AI Inference
Mistral
HuggingFace
Amazon Bedrock
Vertex AI
Google AI Studio
Groq
NVIDIA NIM
xAI
ElevenLabs
AI21
Together.ai
AssemblyAI
Featherless
Reka AI
OLA Krutrim
Titan ML
Sarvam AI
Prem AI
VectorDBs
Frameworks
LangChain
OpenAI Agents
LiteLLM
CrewAI
LlamaIndex
Browser Use
Pydantic AI
DSPy
AutoGen / AG2
Haystack
mem0
Guardrails AI
Phidata
MultiOn
Julep AI
Letta
Crawl4AI
Firecrawl
Dynamiq
ControlFlow
SwarmZero
GPUs
Quickstart: LLM Observability
Set up production-ready AI monitoring in two steps with zero code changes
Deploy OpenLIT Platform
Deployment options for scalable LLM monitoring infrastructure
Destinations
Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
Running in Kubernetes? Try the OpenLIT Operator
Automatically injects instrumentation into existing workloads without modifying pod specs, container images, or application code.