Configure the OpenLIT SDK for AI monitoring and model performance tracking using flexible instrumentation methods. Choose manual instrumentation (SDK) or zero-code instrumentation (CLI) for complete LLM observability:

Manual instrumentation (SDK)

import openlit

openlit.init(
    service_name="my-ai-app",
    environment="production",
    otlp_endpoint="https://otel-endpoint.com"
)

Zero-code instrumentation (CLI)

export OTEL_SERVICE_NAME=my-ai-app
export OTEL_DEPLOYMENT_ENVIRONMENT=production
openlit-instrument python your_app.py

Configuration parameters

Customize OpenLIT SDK behavior for your specific instrumentation needs:
| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| environment | --environment | OTEL_DEPLOYMENT_ENVIRONMENT | Deployment environment | "default" | No |
| service_name | --service_name | OTEL_SERVICE_NAME | Service name for tracing | "default" | No |
| otlp_endpoint | --otlp_endpoint | OTEL_EXPORTER_OTLP_ENDPOINT | OpenTelemetry endpoint for LLM monitoring data export | None | No |
| otlp_headers | --otlp_headers | OTEL_EXPORTER_OTLP_HEADERS | Authentication headers for enterprise monitoring backends | None | No |
| disable_batch | --disable_batch | OPENLIT_DISABLE_BATCH | Disable batch span processing | False | No |
| capture_message_content | --capture_message_content | OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT | Enable LLM prompt and response content tracing for debugging | True | No |
| disabled_instrumentors | --disabled_instrumentors | OPENLIT_DISABLED_INSTRUMENTORS | Disable specific AI service instrumentations (comma-separated) | None | No |
| disable_metrics | --disable_metrics | OPENLIT_DISABLE_METRICS | Disable cost tracking and performance metrics collection | False | No |
| pricing_json | --pricing_json | OPENLIT_PRICING_JSON | Custom pricing configuration for accurate LLM cost tracking | None | No |
| detailed_tracing | --detailed_tracing | OPENLIT_DETAILED_TRACING | Enable detailed AI framework and component-level tracing | True | No |
| collect_system_metrics | --collect_system_metrics | OPENLIT_COLLECT_SYSTEM_METRICS | Comprehensive system monitoring (CPU, memory, disk, network, GPU) for AI workloads | False | No |
| tracer | N/A | N/A | An instance of OpenTelemetry Tracer for tracing operations | None | No |
| event_logger | N/A | N/A | EventLoggerProvider instance | None | No |
| meter | N/A | N/A | OpenTelemetry Metrics instance | None | No |
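For instance, several of the parameters above can be combined in a single `openlit.init()` call. A minimal sketch, where the endpoint, token variable, and disabled instrumentor names are illustrative values, not prescriptions:

```python
import os

# Hypothetical configuration values; substitute your own endpoint and token.
init_kwargs = {
    "service_name": "my-ai-app",
    "environment": "staging",
    "otlp_endpoint": "https://otel-endpoint.com",
    "otlp_headers": {"Authorization": "Bearer " + os.environ.get("OTLP_TOKEN", "")},
    "disabled_instrumentors": "anthropic,cohere",  # comma-separated, per the table
    "disable_batch": True,  # export spans immediately (useful in development)
}

try:
    import openlit
    openlit.init(**init_kwargs)
except Exception:
    pass  # openlit not installed, or the endpoint is unreachable in this sketch
```

The same settings could instead come from environment variables or CLI arguments; the SDK call is simply the lowest-precedence option.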

Deprecated parameters

| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| application_name | --application_name | OTEL_SERVICE_NAME | Application name for tracing (deprecated, use service_name) | "default" | No |
| collect_gpu_stats | --collect_gpu_stats | OPENLIT_COLLECT_GPU_STATS | Enable GPU statistics collection (deprecated, use collect_system_metrics) | False | No |
Environment variables take precedence over CLI arguments, which take precedence over SDK parameters.
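A quick way to see the precedence rule in action; the service names here are made-up values:

```shell
# An SDK parameter (lowest priority) might say "sdk-name" and a CLI arg
# "cli-name", but the environment variable wins:
export OTEL_SERVICE_NAME=env-name

# Hypothetical run; exported spans would carry the service name "env-name":
# openlit-instrument --service_name cli-name python your_app.py

echo "$OTEL_SERVICE_NAME"
```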

Resource attributes

Additional resource attributes can be controlled using standard OpenTelemetry environment variables for enhanced metadata and observability context:
| Environment Variable | Description | Example |
|---|---|---|
| OTEL_RESOURCE_ATTRIBUTES | Key-value pairs for resource attributes | service.version=1.0.0,deployment.environment=production |
| OTEL_SERVICE_VERSION | Version of the service | 1.2.3 |
| OTEL_RESOURCE_ATTRIBUTES_POD_NAME | Kubernetes pod name (if applicable) | my-ai-app-pod-xyz |
| OTEL_RESOURCE_ATTRIBUTES_NODE_NAME | Kubernetes node name (if applicable) | node-123 |
Example:
# Set resource attributes for better trace organization
export OTEL_RESOURCE_ATTRIBUTES="service.version=2.1.0,team=ai-platform,cost.center=engineering"
export OTEL_SERVICE_VERSION=2.1.0

# Run with enhanced metadata
openlit-instrument python your_ai_app.py
These attributes enhance trace metadata for better filtering, grouping, and analysis in your observability platform.

Prompt Hub - openlit.get_prompt()

Advanced prompt management and version control for production LLM applications. Configure OpenLIT Prompt Hub for centralized prompt governance and tracking:
| Parameter | Description |
|---|---|
| url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable. |
| api_key | Sets the OpenLIT API key. Can also be provided via the OPENLIT_API_KEY environment variable. |
| name | Unique prompt identifier for retrieval. Use with prompt_id for specific prompt versioning. |
| prompt_id | Numeric ID for direct prompt access, enabling precise version control. Optional. |
| version | Retrieve a specific prompt version for consistent AI behavior across deployments. Optional. |
| shouldCompile | Enable dynamic prompt compilation with variables for personalized LLM interactions. Optional. |
| variables | Dynamic variables for prompt template compilation and customization. Optional. |
| meta_properties | Tracking metadata for prompt usage analytics and audit trails in production. Optional. |
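A sketch of fetching and compiling a prompt with the parameters above. The prompt name, version, variables, and URL are illustrative, assuming a prompt exists in your Prompt Hub:

```python
import os

# Illustrative values; get_prompt() also reads OPENLIT_URL / OPENLIT_API_KEY.
os.environ.setdefault("OPENLIT_URL", "https://openlit.internal.example")
os.environ.setdefault("OPENLIT_API_KEY", "example-key")

request = {
    "name": "support-greeting",   # unique prompt identifier
    "version": "1.0.0",           # pin a version for reproducible behavior
    "shouldCompile": True,        # substitute variables into the template
    "variables": {"customer_name": "Ada"},
    "meta_properties": {"caller": "billing-service"},  # audit-trail metadata
}

try:
    import openlit
    prompt = openlit.get_prompt(**request)
except Exception:
    prompt = None  # openlit missing or Prompt Hub unreachable in this sketch
```

Pinning `version` keeps behavior stable across deployments; omit it to always fetch the latest.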

Vault - openlit.get_secrets()

Enterprise-grade secret management for AI applications. Configure OpenLIT Vault for secure API key and credential handling in production LLM deployments:
| Parameter | Description |
|---|---|
| url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable. |
| api_key | Sets the OpenLIT API key. Can also be provided via the OPENLIT_API_KEY environment variable. |
| key | Retrieve a specific secret key for individual credential access. Optional. |
| should_set_env | Automatically set retrieved secrets as environment variables for seamless application integration. Optional. |
| tags | Tag-based secret filtering for organized credential management across different AI services. Optional. |
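A sketch of pulling tagged secrets into the process environment. The Vault URL, API key, and tag value are illustrative, assuming secrets tagged "llm" exist in your Vault:

```python
import os

# Illustrative values; get_secrets() also reads OPENLIT_URL / OPENLIT_API_KEY.
os.environ.setdefault("OPENLIT_URL", "https://openlit.internal.example")
os.environ.setdefault("OPENLIT_API_KEY", "example-key")

try:
    import openlit
    # Fetch secrets tagged "llm" and export them as environment variables,
    # so downstream SDK clients pick them up without further wiring.
    secrets = openlit.get_secrets(tags=["llm"], should_set_env=True)
except Exception:
    secrets = None  # openlit missing or Vault unreachable in this sketch
```

With `should_set_env=True`, application code never handles raw credentials directly.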

Kubernetes

Running in Kubernetes? Try the OpenLIT Operator

Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.