OpenLIT supports two instrumentation modes: manual instrumentation through the Python SDK (`openlit.init()`) and zero-code instrumentation through the CLI. Each setting below can be supplied as an SDK parameter, a CLI argument, or an environment variable.

Configuration parameters
Customize OpenLIT SDK behavior for your specific instrumentation needs:

| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| `environment` | `--environment` | `OTEL_DEPLOYMENT_ENVIRONMENT` | Deployment environment | `"default"` | No |
| `service_name` | `--service_name` | `OTEL_SERVICE_NAME` | Service name for tracing | `"default"` | No |
| `otlp_endpoint` | `--otlp_endpoint` | `OTEL_EXPORTER_OTLP_ENDPOINT` | OpenTelemetry endpoint for LLM monitoring data export | `None` | No |
| `otlp_headers` | `--otlp_headers` | `OTEL_EXPORTER_OTLP_HEADERS` | Authentication headers for enterprise monitoring backends | `None` | No |
| `disable_batch` | `--disable_batch` | `OPENLIT_DISABLE_BATCH` | Disable batch span processing | `False` | No |
| `capture_message_content` | `--capture_message_content` | `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` | Enable LLM prompt and response content tracing for debugging | `True` | No |
| `disabled_instrumentors` | `--disabled_instrumentors` | `OPENLIT_DISABLED_INSTRUMENTORS` | Disable specific AI service instrumentations (comma-separated) | `None` | No |
| `disable_metrics` | `--disable_metrics` | `OPENLIT_DISABLE_METRICS` | Disable cost tracking and performance metrics collection | `False` | No |
| `pricing_json` | `--pricing_json` | `OPENLIT_PRICING_JSON` | Custom pricing configuration for accurate LLM cost tracking | `None` | No |
| `detailed_tracing` | `--detailed_tracing` | `OPENLIT_DETAILED_TRACING` | Enable detailed AI framework and component-level tracing | `True` | No |
| `collect_system_metrics` | `--collect_system_metrics` | `OPENLIT_COLLECT_SYSTEM_METRICS` | Collect comprehensive system metrics (CPU, memory, disk, network, GPU) for AI workloads | `False` | No |
| `tracer` | N/A | N/A | An existing OpenTelemetry `Tracer` instance to use for tracing operations | `None` | No |
| `event_logger` | N/A | N/A | An existing OpenTelemetry `EventLoggerProvider` instance | `None` | No |
| `meter` | N/A | N/A | An existing OpenTelemetry metrics instance | `None` | No |
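For example, a minimal manual setup might look like the following sketch; the service name and endpoint are illustrative placeholders, and the same settings map one-to-one onto the CLI arguments and environment variables in the table:

```python
import openlit

# Illustrative values; point otlp_endpoint at your own OTLP collector.
openlit.init(
    service_name="my-llm-app",
    environment="production",
    otlp_endpoint="http://127.0.0.1:4318",
    capture_message_content=True,  # trace prompts and responses
    detailed_tracing=True,         # framework and component-level spans
)
```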
Deprecated parameters
| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| `application_name` | `--application_name` | `OTEL_SERVICE_NAME` | Application name for tracing (deprecated, use `service_name`) | `"default"` | No |
| `collect_gpu_stats` | `--collect_gpu_stats` | `OPENLIT_COLLECT_GPU_STATS` | Enable GPU statistics collection (deprecated, use `collect_system_metrics`) | `False` | No |
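Migrating off the deprecated parameters is a one-to-one rename; a minimal sketch:

```python
import openlit

# Deprecated form (still accepted):
# openlit.init(application_name="my-llm-app", collect_gpu_stats=True)

# Preferred form:
openlit.init(service_name="my-llm-app", collect_system_metrics=True)
```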
Environment variables take precedence over CLI arguments, which take precedence over SDK parameters.
Resource attributes
Additional resource attributes can be set through standard OpenTelemetry environment variables to add deployment metadata and observability context:

| Environment Variable | Description | Example |
|---|---|---|
| `OTEL_RESOURCE_ATTRIBUTES` | Key-value pairs for resource attributes | `service.version=1.0.0,deployment.environment=production` |
| `OTEL_SERVICE_VERSION` | Version of the service | `1.2.3` |
| `OTEL_RESOURCE_ATTRIBUTES_POD_NAME` | Kubernetes pod name (if applicable) | `my-ai-app-pod-xyz` |
| `OTEL_RESOURCE_ATTRIBUTES_NODE_NAME` | Kubernetes node name (if applicable) | `node-123` |
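These are standard OpenTelemetry variables, so they are normally exported in the shell or deployment manifest; a minimal sketch, assuming they are set in the process environment before `openlit.init()` runs so the SDK's resource detection can pick them up:

```python
import os

# Assumption: resource attributes must be in the environment before init.
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = (
    "service.version=1.0.0,deployment.environment=production"
)

import openlit

openlit.init(service_name="my-llm-app")
```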
Prompt Hub - `openlit.get_prompt()`
Centralized prompt management and version control for production LLM applications. Configure OpenLIT Prompt Hub for prompt governance and tracking:

| Parameter | Description |
|---|---|
| `url` | Sets the OpenLIT URL. Defaults to the `OPENLIT_URL` environment variable. |
| `api_key` | Sets the OpenLIT API key. Can also be provided via the `OPENLIT_API_KEY` environment variable. |
| `name` | Unique prompt identifier for retrieval. Use with `prompt_id` for specific prompt versioning. |
| `prompt_id` | Numeric ID for direct prompt access, enabling precise prompt version control. Optional. |
| `version` | Retrieve a specific prompt version for consistent AI behavior across deployments. Optional. |
| `shouldCompile` | Enable dynamic prompt compilation with variables for personalized LLM interactions. Optional. |
| `variables` | Dynamic variables for prompt template compilation and customization. Optional. |
| `meta_properties` | Tracking metadata for prompt usage analytics and audit trails in production. Optional. |
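A usage sketch built from the parameters above; the prompt name, variables, and metadata are illustrative, and `OPENLIT_URL` / `OPENLIT_API_KEY` are assumed to be set in the environment:

```python
import openlit

# Fetch a prompt by name and compile it with runtime variables.
response = openlit.get_prompt(
    name="greeting-prompt",           # illustrative prompt name
    shouldCompile=True,               # substitute variables into the template
    variables={"user": "Alice"},      # illustrative template variables
    meta_properties={"run": "demo"},  # optional usage-tracking metadata
)
print(response)
```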
Vault - `openlit.get_secrets()`
Secret management for AI applications. Configure OpenLIT Vault for secure API key and credential handling in production LLM deployments:

| Parameter | Description |
|---|---|
| `url` | Sets the OpenLIT URL. Defaults to the `OPENLIT_URL` environment variable. |
| `api_key` | Sets the OpenLIT API key. Can also be provided via the `OPENLIT_API_KEY` environment variable. |
| `key` | Retrieve a specific secret by key for individual credential access. Optional. |
| `should_set_env` | Automatically set retrieved secrets as environment variables for seamless application integration. Optional. |
| `tags` | Tag-based secret filtering for organized credential management across different AI services. Optional. |
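A usage sketch built from the parameters above; the tag value is illustrative, and connection details are again assumed to come from `OPENLIT_URL` / `OPENLIT_API_KEY`:

```python
import openlit

# Fetch secrets tagged for a given service and export them as
# environment variables for downstream client libraries.
secrets = openlit.get_secrets(
    tags=["openai"],      # illustrative tag filter
    should_set_env=True,  # also sets retrieved secrets as env vars
)
```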
Deploy OpenLIT
Deployment options for scalable LLM monitoring infrastructure
Integrations
60+ AI integrations with automatic instrumentation and performance tracking
Destinations
Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
Running in Kubernetes? Try the OpenLIT Operator
Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.