# OpenLIT

> OpenLIT is the leading open-source AI observability platform with zero-code instrumentation for LLMs, vector databases, and AI frameworks. Monitor OpenAI, Anthropic, LangChain, LlamaIndex with OpenTelemetry. Features include cost tracking, performance metrics, prompt management, and enterprise-grade security for production AI applications.

## Docs

- [Configuration](https://docs.openlit.io/latest/gpu-collector/configuration.md): Environment variables reference for the OpenTelemetry GPU Collector
- [AMD GPUs](https://docs.openlit.io/latest/gpu-collector/gpus/amd.md): Monitor AMD GPU metrics via sysfs/hwmon using the OpenTelemetry GPU Collector
- [Intel GPUs](https://docs.openlit.io/latest/gpu-collector/gpus/intel.md): Monitor Intel GPU metrics via sysfs/hwmon using the OpenTelemetry GPU Collector
- [NVIDIA GPUs](https://docs.openlit.io/latest/gpu-collector/gpus/nvidia.md): Monitor NVIDIA GPU metrics via NVML using the OpenTelemetry GPU Collector
- [Installation](https://docs.openlit.io/latest/gpu-collector/installation.md): Install the OpenTelemetry GPU Collector via Docker, binary, or from source
- [Metrics Reference](https://docs.openlit.io/latest/gpu-collector/metrics.md): Complete list of all metrics exported by the OpenTelemetry GPU Collector
- [Overview](https://docs.openlit.io/latest/gpu-collector/overview.md): OpenTelemetry-native GPU and host metrics collector for NVIDIA, AMD, and Intel GPUs
- [Quickstart](https://docs.openlit.io/latest/gpu-collector/quickstart.md): Get the OpenTelemetry GPU Collector running in under 5 minutes
- [Otter — AI Chat Copilot](https://docs.openlit.io/latest/openlit/chat.md): Meet Otter, your AI copilot for querying data, managing resources, and creating dashboards
- [Configuration](https://docs.openlit.io/latest/openlit/configuration.md): Configuring Options for OpenLIT
- [Auto Refresh & Time Interval](https://docs.openlit.io/latest/openlit/dashboards/auto-refresh-and-time-interval.md): Learn how to enable auto-refresh and set time intervals in OpenLIT dashboards to keep your data live and updated in real time.
- [Create Dashboard](https://docs.openlit.io/latest/openlit/dashboards/create-dashboard.md): Learn how to create dashboards in OpenLIT with step-by-step instructions for adding widgets, configuring visualizations, and building effective monitoring views.
- [Create Folder](https://docs.openlit.io/latest/openlit/dashboards/create-folder.md): Create folders to organize dashboards into logical collections by team, feature, environment, or product for better management and navigation.
- [Export Dashboard](https://docs.openlit.io/latest/openlit/dashboards/export-dashboard.md): Learn how to export your OpenLIT dashboards for backup, sharing, and migration across different environments.
- [Filters & Dynamic Bindings](https://docs.openlit.io/latest/openlit/dashboards/filters-and-dynamic-bindings.md): Learn how to make OpenLIT dashboards interactive by using filters and mustache-style dynamic bindings inside ClickHouse queries.
- [Import Dashboard](https://docs.openlit.io/latest/openlit/dashboards/import-dashboard.md): Learn how to import pre-built dashboard layouts in OpenLIT to quickly set up comprehensive monitoring views for your AI applications.
- [Organize Dashboards](https://docs.openlit.io/latest/openlit/dashboards/organize-dashboards.md): Learn how to organize dashboards in OpenLIT using folders, boards, and drag-and-drop for better structure and navigation.
- [Overview](https://docs.openlit.io/latest/openlit/dashboards/overview.md): Create powerful, interactive dashboards to monitor AI application performance, visualize telemetry data, and gain insights into your LLM operations with real-time analytics.
- [Pin a Dashboard](https://docs.openlit.io/latest/openlit/dashboards/pin-dashboard.md): Learn how to pin a dashboard in OpenLIT to keep key dashboards easily accessible at the top of your list.
- [Set a Main Dashboard](https://docs.openlit.io/latest/openlit/dashboards/set-main-dashboard.md): Learn how to set a dashboard as your main (home) dashboard in OpenLIT for faster access to your most important views.
- [Area Chart Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/area-chart-widget.md): Learn how to use the Area Chart Widget in OpenLIT to visualize time-based trends using ClickHouse queries and dynamic parameters.
- [Bar Chart Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/bar-chart-widget.md): Learn how to use the Bar Chart Widget in OpenLIT to compare grouped data using ClickHouse queries and dynamic filters.
- [Line Chart Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/line-chart-widget.md): Learn how to use the Line Chart Widget in OpenLIT to plot precise time series metrics using ClickHouse queries and dynamic bindings.
- [Markdown Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/markdown-widget.md): Learn how to use the Markdown Widget in OpenLIT to add rich text, annotations, and links to your dashboards.
- [Overview](https://docs.openlit.io/latest/openlit/dashboards/widgets/overview.md): Learn about OpenLIT's widget system for creating powerful data visualizations, from time series charts to statistical summaries and interactive tables.
- [Pie Chart Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/pie-chart-widget.md): Learn how to use the Pie Chart Widget in OpenLIT to display proportions and segment distributions using ClickHouse data.
- [Stats Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/stat-widget.md): Learn how to use the Statistics Widget in OpenLIT to display key metrics, KPIs, and summary values with real-time ClickHouse data.
- [Table Widget](https://docs.openlit.io/latest/openlit/dashboards/widgets/table-widget.md): Learn how to use the Table Widget in OpenLIT to display structured data with sorting, scrolling, and pagination using ClickHouse queries.
- [Anonymous Usage Metrics](https://docs.openlit.io/latest/openlit/developer-resources/anonymous-telemetry.md): Details about OpenLIT anonymous usage metrics reporting
- [Get Prompt](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/prompt-hub/get.md): Fetches a compiled prompt using the provided prompt ID, version, and variables.
- [Evaluate Rules](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/rule-engine/evaluate.md): Evaluates all active rules against the provided input fields and returns matching rule IDs, linked entities, and optionally their full data.
- [Get Secret(s)](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/vault/get.md): Fetches secret(s) using the provided key or tags.
- [Introduction](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/introduction.md): OpenAPI specification for API Endpoints in OpenLIT
- [Connect Multiple Databases](https://docs.openlit.io/latest/openlit/developer-resources/multiple-db.md): Connect and switch between multiple ClickHouse databases in OpenLIT
- [Vault](https://docs.openlit.io/latest/openlit/developer-resources/vault.md): Store and access sensitive information like LLM API keys securely
- [LLM-as-a-Judge](https://docs.openlit.io/latest/openlit/evaluations/llm-as-a-judge.md): Use LLMs to evaluate AI application quality, safety, and performance with automated scoring and detailed analysis
- [Overview](https://docs.openlit.io/latest/openlit/evaluations/overview.md): Automated Eval scoring with 11 evaluation types — hallucination, bias, toxicity, safety, and more
- [Programmatic Evaluations](https://docs.openlit.io/latest/openlit/evaluations/programmatic-evals.md): Quickly evaluate your LLMs and AI Agent responses for Hallucination, Bias, and Toxicity
- [Self-hosting](https://docs.openlit.io/latest/openlit/installation.md): Guide to help you deploy your own instance of OpenLIT
- [OAuth](https://docs.openlit.io/latest/openlit/oauth.md): Configure Google and GitHub OAuth authentication for OpenLIT using NextAuth.js
- [Errors](https://docs.openlit.io/latest/openlit/observability/error.md): View all error traces in one place with filtering and detailed exception information.
- [Fleet Hub](https://docs.openlit.io/latest/openlit/observability/fleet-hub.md): Centralized management and monitoring of OpenTelemetry collectors using OpAMP (Open Agent Management Protocol)
- [Metrics](https://docs.openlit.io/latest/openlit/observability/metrics.md): View and analyze OpenTelemetry metrics in OpenLIT
- [Tracing](https://docs.openlit.io/latest/openlit/observability/tracing.md): View and analyze distributed traces in OpenLIT
- [Organisation](https://docs.openlit.io/latest/openlit/organisation.md): Manage multi-tenant workspaces, invite team members, and control access with role-based permissions
- [Cost Recalculation](https://docs.openlit.io/latest/openlit/pricing/cost-recalculation.md): Recalculate the cost of LLM traces using current model pricing — manually or on a schedule
- [Manage Models](https://docs.openlit.io/latest/openlit/pricing/manage-models.md): View, edit, and add LLM models with per-model pricing across all providers
- [Context](https://docs.openlit.io/latest/openlit/prompts-experiments/context.md): Store and manage reusable contextual content that can be linked to rules and retrieved at runtime
- [OpenGround](https://docs.openlit.io/latest/openlit/prompts-experiments/openground.md): Test and compare different LLMs side-by-side based on performance, cost, and other key metrics
- [Prompt Hub](https://docs.openlit.io/latest/openlit/prompts-experiments/prompt-hub.md): Manage prompts centrally, fetch versions, and use variables for dynamic prompts
- [Rule Engine](https://docs.openlit.io/latest/openlit/prompts-experiments/rule-engine.md): Define conditional rules with AND/OR logic to match runtime inputs and retrieve linked AI resources
- [Get started with AI Observability](https://docs.openlit.io/latest/openlit/quickstart-ai-observability.md): Quickly start monitoring your AI Applications in just a single line of code
- [Evaluations](https://docs.openlit.io/latest/openlit/quickstart-evals.md): Use LLMs to evaluate AI application quality, safety, and performance with automated scoring and detailed analysis
- [GPU Performance Monitoring](https://docs.openlit.io/latest/openlit/quickstart-gpu.md): Simple GPU monitoring setup for AI workloads. Track NVIDIA and AMD GPU usage, temperature, and costs with zero code changes using OpenTelemetry.
- [Secure your AI app against risks](https://docs.openlit.io/latest/openlit/quickstart-guard.md): Quickly secure your app from Prompt Injection, Sensitive Topics, and Topic Restriction
- [Get started with MCP Monitoring](https://docs.openlit.io/latest/openlit/quickstart-mcp-observability.md): Quickly start monitoring your MCP (Model Context Protocol) Applications in just a single line of code
- [Get started with VectorDB Observability](https://docs.openlit.io/latest/openlit/quickstart-vectordb-observability.md): Quickly start monitoring your Vector Database Applications in just a single line of code
- [Architecture](https://docs.openlit.io/latest/operator/architecture.md): For the geeks and nerds looking to learn how the operator works
- [AutoInstrumentation CR](https://docs.openlit.io/latest/operator/configuration/autoinstrumentation.md): Configure AutoInstrumentation Custom Resources with detailed parameters
- [Operator](https://docs.openlit.io/latest/operator/configuration/operator.md): Configure the OpenLIT Operator deployment using Helm chart values
- [Dash0](https://docs.openlit.io/latest/operator/destinations/dash0.md): LLM Observability with Dash0 and OpenLIT
- [DataDog](https://docs.openlit.io/latest/operator/destinations/datadog.md): LLM Observability with DataDog and OpenLIT
- [Dynatrace](https://docs.openlit.io/latest/operator/destinations/dynatrace.md): LLM Observability with Dynatrace and OpenLIT
- [Elastic](https://docs.openlit.io/latest/operator/destinations/elastic.md): LLM Observability with Elastic and OpenLIT
- [Grafana Cloud](https://docs.openlit.io/latest/operator/destinations/grafanacloud.md): LLM Observability with Grafana Cloud and OpenLIT
- [Highlight.io](https://docs.openlit.io/latest/operator/destinations/highlight.md): LLM Observability with Highlight.io and OpenLIT
- [HyperDX](https://docs.openlit.io/latest/operator/destinations/hyperdx.md): LLM Observability with HyperDX and OpenLIT
- [Langfuse](https://docs.openlit.io/latest/operator/destinations/langfuse.md): LLM Observability with Langfuse and OpenLIT
- [Middleware](https://docs.openlit.io/latest/operator/destinations/middleware.md): LLM Observability with Middleware and OpenLIT
- [Murnitur](https://docs.openlit.io/latest/operator/destinations/murnitur.md): LLM Observability with Murnitur and OpenLIT
- [New Relic](https://docs.openlit.io/latest/operator/destinations/newrelic.md): LLM Observability with New Relic and OpenLIT
- [OneUptime](https://docs.openlit.io/latest/operator/destinations/oneuptime.md): LLM Observability with OneUptime and OpenLIT
- [Oodle](https://docs.openlit.io/latest/operator/destinations/oodle.md): LLM Observability with Oodle using OpenLIT
- [OpenLIT](https://docs.openlit.io/latest/operator/destinations/openlit.md): LLM Observability with OpenLIT Platform and OpenLIT
- [OpenObserve](https://docs.openlit.io/latest/operator/destinations/openobserve.md): LLM Observability with OpenObserve and OpenLIT
- [OpenTelemetry Collector](https://docs.openlit.io/latest/operator/destinations/otelcol.md): LLM Observability with OpenTelemetry Collector and OpenLIT
- [Overview](https://docs.openlit.io/latest/operator/destinations/overview.md): Send AI observability data to your existing observability stack
- [Prometheus + Jaeger](https://docs.openlit.io/latest/operator/destinations/prometheus-jaeger.md): LLM Observability with Prometheus and Jaeger using OpenLIT
- [Prometheus + Tempo](https://docs.openlit.io/latest/operator/destinations/prometheus-tempo.md): LLM Observability with Prometheus and Grafana Tempo using OpenLIT
- [SigLens](https://docs.openlit.io/latest/operator/destinations/siglens.md): LLM Observability with SigLens and OpenLIT
- [SigNoz](https://docs.openlit.io/latest/operator/destinations/signoz.md): LLM Observability with SigNoz and OpenLIT
- [Installation and Maintenance](https://docs.openlit.io/latest/operator/installation.md): Deploy OpenLIT Operator in Kubernetes Cluster
- [Overview](https://docs.openlit.io/latest/operator/instrumentations/overview.md): Understanding OpenLIT Operator instrumentation providers and capabilities
- [Custom](https://docs.openlit.io/latest/operator/instrumentations/python/custom.md): Build and deploy custom instrumentation solutions
- [OpenInference](https://docs.openlit.io/latest/operator/instrumentations/python/openinference.md): OpenTelemetry standard compliance with OpenInference instrumentation
- [OpenLIT](https://docs.openlit.io/latest/operator/instrumentations/python/openlit.md): Complete AI observability with OpenLIT instrumentation provider
- [OpenLLMetry](https://docs.openlit.io/latest/operator/instrumentations/python/openllmetry.md): LLM-focused observability with OpenLLMetry instrumentation
- [Overview](https://docs.openlit.io/latest/operator/overview.md): Zero-Code AI Observability in Kubernetes
- [Quickstart](https://docs.openlit.io/latest/operator/quickstart.md): Get started with OpenLIT Operator in 5 minutes
- [OpenLIT](https://docs.openlit.io/latest/overview.md): Open-source AI engineering platform
- [Configuration](https://docs.openlit.io/latest/sdk/configuration.md): Configure the OpenLIT SDK for OpenTelemetry-native LLM observability, cost tracking, and performance monitoring
- [Dash0](https://docs.openlit.io/latest/sdk/destinations/dash0.md): LLM Observability with Dash0 and OpenLIT
- [DataDog](https://docs.openlit.io/latest/sdk/destinations/datadog.md): LLM Observability with DataDog and OpenLIT
- [Dynatrace](https://docs.openlit.io/latest/sdk/destinations/dynatrace.md): LLM Observability with Dynatrace and OpenLIT
- [Elastic](https://docs.openlit.io/latest/sdk/destinations/elastic.md): LLM Observability with Elastic and OpenLIT
- [Grafana Cloud](https://docs.openlit.io/latest/sdk/destinations/grafanacloud.md): LLM Observability with Grafana Cloud and OpenLIT
- [Highlight.io](https://docs.openlit.io/latest/sdk/destinations/highlight.md): LLM Observability with Highlight.io and OpenLIT
- [HyperDX](https://docs.openlit.io/latest/sdk/destinations/hyperdx.md): LLM Observability with HyperDX and OpenLIT
- [Langfuse](https://docs.openlit.io/latest/sdk/destinations/langfuse.md): LLM Observability with Langfuse and OpenLIT
- [Middleware](https://docs.openlit.io/latest/sdk/destinations/middleware.md): LLM Observability with Middleware and OpenLIT
- [Murnitur](https://docs.openlit.io/latest/sdk/destinations/murnitur.md): LLM Observability with Murnitur and OpenLIT
- [New Relic](https://docs.openlit.io/latest/sdk/destinations/new-relic.md): LLM Observability with New Relic and OpenLIT
- [OneUptime](https://docs.openlit.io/latest/sdk/destinations/oneuptime.md): LLM Observability with OneUptime and OpenLIT
- [Oodle](https://docs.openlit.io/latest/sdk/destinations/oodle.md): LLM Observability with Oodle using OpenLIT
- [OpenLIT](https://docs.openlit.io/latest/sdk/destinations/openlit.md): LLM Observability with OpenLIT Platform and OpenLIT
- [OpenObserve](https://docs.openlit.io/latest/sdk/destinations/openobserve.md): LLM Observability with OpenObserve and OpenLIT
- [OpenTelemetry Collector](https://docs.openlit.io/latest/sdk/destinations/otelcol.md): LLM Observability with OpenTelemetry Collector and OpenLIT
- [Overview](https://docs.openlit.io/latest/sdk/destinations/overview.md): Send AI observability data to your existing observability stack
- [Prometheus + Jaeger](https://docs.openlit.io/latest/sdk/destinations/prometheus-jaeger.md): LLM Observability with Prometheus and Jaeger using OpenLIT
- [Prometheus + Tempo](https://docs.openlit.io/latest/sdk/destinations/prometheus-tempo.md): LLM Observability with Prometheus and Grafana Tempo using OpenLIT
- [SigLens](https://docs.openlit.io/latest/sdk/destinations/siglens.md): LLM Observability with SigLens and OpenLIT
- [SigNoz](https://docs.openlit.io/latest/sdk/destinations/signoz.md): LLM Observability with SigNoz and OpenLIT
- [Evaluations](https://docs.openlit.io/latest/sdk/features/evaluations.md): Evaluate your model responses for Hallucination, Bias, and Toxicity
- [GPU Performance Monitoring](https://docs.openlit.io/latest/sdk/features/gpu.md): Monitor NVIDIA and AMD GPUs with key metrics like usage, temperature, and power using OpenTelemetry for AI workloads
- [Guardrails](https://docs.openlit.io/latest/sdk/features/guardrails.md): Secure your app from Prompt Injection, Sensitive Topics, and Topic Restriction
- [Metrics](https://docs.openlit.io/latest/sdk/features/metrics.md): Visualize and monitor AI application metrics in OpenLIT platform with custom dashboards and advanced analytics
- [Track Cost for Custom Models](https://docs.openlit.io/latest/sdk/features/pricing.md): Use your own Pricing File to calculate LLM usage costs
- [Rule Engine](https://docs.openlit.io/latest/sdk/features/rule-engine.md): Evaluate rules and retrieve matching contexts, prompts, and evaluation configs from the OpenLIT Rule Engine
- [Distributed Tracing](https://docs.openlit.io/latest/sdk/features/tracing.md): Visualize and analyze distributed traces in OpenLIT platform with detailed span analysis and performance insights
- [Go SDK Overview](https://docs.openlit.io/latest/sdk/go-overview.md): OpenTelemetry-native observability for Go AI applications. Monitor OpenAI and Anthropic with automatic token tracking, cost calculation, and distributed tracing.
- [Instrumentation Methods](https://docs.openlit.io/latest/sdk/instrumentation-methods.md): Choose between zero-code instrumentation or manual instrumentation for AI observability
- [Monitor AutoGen using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/ag2.md)
- [Monitor AI Agent Governance using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/agent-governance-toolkit.md)
- [Monitor AI21 using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/ai21.md)
- [Monitor Claude using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/anthropic.md)
- [Monitor Assembly AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/assemblyai.md)
- [Monitor AstraDB using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/astradb.md)
- [Monitor Azure AI Inference using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/azure-ai-inference.md)
- [Monitor Azure OpenAI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/azure-openai.md)
- [Monitor Amazon Bedrock using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/bedrock.md)
- [Monitor Browser Use using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/browser-use.md)
- [Monitor ChromaDB using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/chromadb.md)
- [Monitor Cohere using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/cohere.md)
- [Monitor ControlFlow using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/controlflow.md)
- [Monitor Crawl4AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/crawl4ai.md)
- [Monitor CrewAI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/crewai.md)
- [Monitor DeepSeek using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/deepseek.md)
- [Monitor DSPy using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/dspy.md)
- [Monitor Dynamiq using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/dynamiq.md)
- [Monitor ElevenLabs using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/elevenlabs.md)
- [Monitor Featherless using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/featherless.md)
- [Monitor FireCrawl using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/firecrawl.md)
- [Monitor GitHub Models using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/github-models.md)
- [Monitor Claude using OpenTelemetry (Go)](https://docs.openlit.io/latest/sdk/integrations/go-anthropic.md)
- [Monitor OpenAI using OpenTelemetry (Go)](https://docs.openlit.io/latest/sdk/integrations/go-openai.md)
- [Monitor Google AI Studio using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/google-ai-studio.md)
- [Monitor GPT4All using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/gpt4all.md)
- [Monitor Groq using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/groq.md)
- [Monitor Guardrails AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/guardrails.md)
- [Monitor Haystack using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/haystack.md)
- [Monitor HuggingFace using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/huggingface.md)
- [Monitor Julep AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/julep-ai.md)
- [Monitor Krutrim using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/krutrim.md)
- [Monitor LangChain using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/langchain.md)
- [Monitor Letta using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/letta.md)
- [Monitor LiteLLM using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/litellm.md)
- [Monitor LlamaIndex using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/llama-index.md)
- [Monitor mem0 using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/mem0.md)
- [Monitor Milvus using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/milvus.md)
- [Monitor Mistral AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/mistral.md)
- [Monitor MultiOn using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/multion.md)
- [Monitor NVIDIA NIM using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/nvidia-nim.md)
- [Monitor Ollama using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/ollama.md)
- [Monitor OpenAI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/openai.md)
- [Monitor OpenAI Agents](https://docs.openlit.io/latest/sdk/integrations/openai-agents.md)
- [Monitor AI Applications using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/overview.md): Track LLM Costs, Agent actions, Tokens, Performance along with User Interactions
- [Monitor Phidata using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/phidata.md)
- [Monitor Pinecone using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/pinecone.md)
- [Monitor Prem AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/premai.md)
- [Monitor Pydantic AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/pydantic.md)
- [Monitor Qdrant using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/qdrant.md)
- [Monitor Reka using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/reka.md)
- [Monitor Sarvam AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/sarvam.md)
- [Monitor SwarmZero using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/swarmzero.md)
- [Monitor Titan ML using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/titan-ml.md)
- [Monitor Together AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/together.md)
- [Monitor Vertex AI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/vertexai.md)
- [Monitor vLLM using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/vllm.md)
- [Monitor xAI using OpenTelemetry](https://docs.openlit.io/latest/sdk/integrations/xai.md)
- [Overview](https://docs.openlit.io/latest/sdk/overview.md): Production-ready AI observability with zero code changes. Monitor LLM applications, track token usage, detect hallucinations, and optimize costs with OpenTelemetry-native instrumentation.
- [Get started with AI Observability](https://docs.openlit.io/latest/sdk/quickstart-ai-observability.md): Quickly start monitoring your AI Applications in just a single line of code
- [GPU Performance Monitoring](https://docs.openlit.io/latest/sdk/quickstart-gpu.md): Simple GPU monitoring setup for AI workloads. Track NVIDIA and AMD GPU usage, temperature, and costs with zero code changes using OpenTelemetry.
- [Secure your AI app against risks](https://docs.openlit.io/latest/sdk/quickstart-guard.md): Quickly secure your app from Prompt Injection, Sensitive Topics, and Topic Restriction
- [Get started with MCP Monitoring](https://docs.openlit.io/latest/sdk/quickstart-mcp-observability.md): Quickly start monitoring your MCP (Model Context Protocol) Applications in just a single line of code
- [Evaluate LLMs and AI Agents](https://docs.openlit.io/latest/sdk/quickstart-programmatic-evals.md): Quickly evaluate your model responses for Hallucination, Bias, and Toxicity
- [Get started with VectorDB Observability](https://docs.openlit.io/latest/sdk/quickstart-vectordb-observability.md): Quickly start monitoring your Vector Database Applications in just a single line of code

## OpenAPI Specs

- [rule-engine](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/rule-engine/rule-engine.yml)
- [vault](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/vault/vault.yml)
- [prompt-hub](https://docs.openlit.io/latest/openlit/developer-resources/api-reference/endpoint/prompt-hub/prompt-hub.yml)
- [openiapi](https://docs.openlit.io/latest/api-reference/openiapi.yml)
- [openapi](https://docs.openlit.io/api-reference/openapi.json)

## Optional

- [GitHub](https://github.com/openlit/openlit)
- [Community](https://join.slack.com/t/openlit/shared_invite/zt-2etnfttwg-TjP_7BZXfYg84oAukY8QRQ)
- [Blog](https://openlit.io/blogs)