    Integrations

Monitor AI Applications using OpenTelemetry

Track LLM costs, agent actions, token usage, and performance, along with user interactions.

Start integrating and monitoring your applications with OpenLIT in just one line of code. Choose from the integrations below to start tracking LLM performance, usage, costs, and much more.

LLMs

  • OpenAI
  • GPT4All
  • Ollama
  • DeepSeek
  • Cohere
  • Anthropic
  • vLLM
  • GitHub Models
  • Azure OpenAI
  • Azure AI Inference
  • Mistral AI
  • HuggingFace
  • Amazon Bedrock
  • Vertex AI
  • Google AI Studio
  • Groq
  • NVIDIA NIM
  • xAI
  • ElevenLabs
  • AI21
  • Together.ai
  • Assembly AI
  • Featherless
  • Reka AI
  • OLA Krutrim
  • Titan ML
  • Sarvam AI
  • Prem AI

VectorDBs

  • ChromaDB
  • Pinecone
  • Qdrant
  • Milvus
  • AstraDB

Frameworks

  • LangChain
  • OpenAI Agents
  • LiteLLM
  • CrewAI
  • LlamaIndex
  • Browser Use
  • Pydantic AI
  • DSPy
  • AutoGen / AG2
  • Haystack
  • mem0
  • Guardrails AI
  • Phidata
  • MultiOn
  • Julep AI
  • Letta
  • Crawl4AI
  • FireCrawl
  • Dynamiq
  • ControlFlow
  • SwarmZero

GPUs

  • NVIDIA
  • AMD Radeon


    Quickstart: LLM Observability

Set up production-ready AI monitoring in two simple steps, with zero code changes

    Deploy OpenLIT Platform

    Deployment options for scalable LLM monitoring infrastructure

    Destinations

    Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
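Because OpenLIT is built on OpenTelemetry, pointing telemetry at an external backend is typically a matter of configuring the OTLP exporter. A sketch using the standard OpenTelemetry environment variables (the endpoint and header values below are placeholders, not real credentials):

```shell
# Illustrative values only: point the OTLP exporter at your chosen backend
# (e.g., Datadog, Grafana, New Relic) before starting the application.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otlp.example-backend.com:4318"
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_API_KEY"
```

See the Supported Destinations page for the exact endpoint and header format each backend expects.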

Kubernetes

    Running in Kubernetes? Try the OpenLIT Operator

    Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.
