Manual instrumentation (SDK)
import openlit

openlit.init(
    service_name="my-ai-app",
    environment="production",
    otlp_endpoint="https://otel-endpoint.com"
)
Zero-code instrumentation (CLI)
export OTEL_SERVICE_NAME=my-ai-app
export OTEL_DEPLOYMENT_ENVIRONMENT=production
openlit-instrument python your_app.py
Configuration parameters
Customize OpenLIT SDK behavior for your specific instrumentation needs:

| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| environment | --environment | OTEL_DEPLOYMENT_ENVIRONMENT | Deployment environment | "default" | No |
| service_name | --service_name | OTEL_SERVICE_NAME | Service name for tracing | "default" | No |
| otlp_endpoint | --otlp_endpoint | OTEL_EXPORTER_OTLP_ENDPOINT | OpenTelemetry endpoint for LLM monitoring data export | None | No |
| otlp_headers | --otlp_headers | OTEL_EXPORTER_OTLP_HEADERS | Authentication headers for enterprise monitoring backends | None | No |
| disable_batch | --disable_batch | OPENLIT_DISABLE_BATCH | Disable batch span processing | False | No |
| capture_message_content | --capture_message_content | OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT | Enable LLM prompt and response content tracing for debugging | True | No |
| disabled_instrumentors | --disabled_instrumentors | OPENLIT_DISABLED_INSTRUMENTORS | Disable specific AI service instrumentations (comma-separated) | None | No |
| disable_metrics | --disable_metrics | OPENLIT_DISABLE_METRICS | Disable cost tracking and performance metrics collection | False | No |
| pricing_json | --pricing_json | OPENLIT_PRICING_JSON | Custom pricing configuration for accurate LLM cost tracking | None | No |
| detailed_tracing | --detailed_tracing | OPENLIT_DETAILED_TRACING | Enable detailed AI framework and component-level tracing | True | No |
| collect_system_metrics | --collect_system_metrics | OPENLIT_COLLECT_SYSTEM_METRICS | Comprehensive system monitoring (CPU, memory, disk, network, GPU) for AI workloads | False | No |
| tracer | N/A | N/A | An instance of OpenTelemetry Tracer for tracing operations | None | No |
| event_logger | N/A | N/A | EventLoggerProvider instance | None | No |
| meter | N/A | N/A | OpenTelemetry Metrics instance | None | No |
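For example, a production-oriented setup might combine several of these parameters in a single openlit.init() call. This is a minimal sketch based on the parameter names in the table above; the service name and OTLP endpoint are placeholders for your own values.

```python
import openlit

# Sketch: combine common configuration parameters from the table above.
# The service name and OTLP endpoint are placeholders, not defaults.
openlit.init(
    service_name="my-ai-app",
    environment="production",
    otlp_endpoint="https://otel-endpoint.com",
    capture_message_content=False,  # don't record prompt/response bodies
    disable_batch=False,            # keep batch span processing (the default)
    detailed_tracing=True,          # keep component-level spans (the default)
)
```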
Database instrumentation options
These options apply to database instrumentations like PostgreSQL (psycopg3):

| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| capture_parameters | --capture_parameters | OPENLIT_CAPTURE_PARAMETERS | Capture database query parameters in spans (security risk - may expose sensitive data) | False | No |
| enable_sqlcommenter | --enable_sqlcommenter | OPENLIT_ENABLE_SQLCOMMENTER | Inject OpenTelemetry trace context as SQL comments for database log correlation | False | No |
Security Notice: Enabling capture_parameters records query parameters (the values passed to $1, $2, etc.) in your traces. This may expose sensitive data like passwords, API keys, or personal information. Only enable in development environments or when you’re certain parameters don’t contain sensitive data.
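As a sketch, assuming these options are passed as openlit.init() keyword arguments named as in the table above:

```python
import openlit

# Sketch: enable database instrumentation options, assuming they are
# accepted as openlit.init() keyword arguments named as in the table above.
openlit.init(
    service_name="my-ai-app",
    enable_sqlcommenter=True,   # inject trace context into SQL as comments
    capture_parameters=False,   # keep disabled unless query values are known-safe
)
```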
SQLCommenter Example: When enable_sqlcommenter is enabled, queries are automatically modified to include trace context:

-- Original
SELECT * FROM users WHERE id = $1;
-- With SQLCommenter
SELECT * FROM users WHERE id = $1 /*traceparent='00-abc123def456-789xyz-01',application='my_app'*/;
This enables correlation between application traces and database logs (e.g., pg_stat_statements, auto_explain).

Deprecated parameters
| Parameter | CLI Argument | Environment Variable | Description | Default | Required |
|---|---|---|---|---|---|
| application_name | --application_name | OTEL_SERVICE_NAME | Application name for tracing (deprecated, use service_name) | "default" | No |
| collect_gpu_stats | --collect_gpu_stats | OPENLIT_COLLECT_GPU_STATS | Enable GPU statistics collection (deprecated, use collect_system_metrics) | False | No |
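As an illustration, a hedged sketch of moving off the deprecated parameters onto their replacements:

```python
import openlit

# Deprecated form (see the table above):
# openlit.init(application_name="my-ai-app", collect_gpu_stats=True)

# Preferred form using the current parameter names:
openlit.init(service_name="my-ai-app", collect_system_metrics=True)
```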
Environment variables take precedence over CLI arguments, which take precedence over SDK parameters.
Resource attributes
Additional resource attributes can be controlled using standard OpenTelemetry environment variables for enhanced metadata and observability context:

| Environment Variable | Description | Example |
|---|---|---|
| OTEL_RESOURCE_ATTRIBUTES | Key-value pairs for resource attributes | service.version=1.0.0,deployment.environment=production |
| OTEL_SERVICE_VERSION | Version of the service | 1.2.3 |
| OTEL_RESOURCE_ATTRIBUTES_POD_NAME | Kubernetes pod name (if applicable) | my-ai-app-pod-xyz |
| OTEL_RESOURCE_ATTRIBUTES_NODE_NAME | Kubernetes node name (if applicable) | node-123 |
Example:

# Set resource attributes for better trace organization
export OTEL_RESOURCE_ATTRIBUTES="service.version=2.1.0,team=ai-platform,cost.center=engineering"
export OTEL_SERVICE_VERSION=2.1.0
# Run with enhanced metadata
openlit-instrument python your_ai_app.py
These attributes enhance trace metadata for better filtering, grouping, and analysis in your observability platform.

Prompt Hub - openlit.get_prompt()
Advanced prompt management and version control for production LLM applications. Configure OpenLIT Prompt Hub for centralized prompt governance and tracking:

| Parameter | Description |
|---|---|
| url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable. |
| api_key | Sets the OpenLIT API Key. Can also be provided via the OPENLIT_API_KEY environment variable. |
| name | Unique prompt identifier for retrieval. Use with prompt_id for specific prompt versioning. |
| prompt_id | Numeric ID for direct prompt access. Enables precise prompt version control. Optional |
| version | Specific prompt version retrieval for consistent AI behavior across deployments. Optional |
| shouldCompile | Enable dynamic prompt compilation with variables for personalized LLM interactions. Optional |
| variables | Dynamic variables for prompt template compilation and customization. Optional |
| meta_properties | Tracking metadata for prompt usage analytics and audit trails in production. Optional |
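For example, a minimal sketch of retrieving and compiling a prompt with the parameters listed above; the prompt name, variables, and metadata values are illustrative placeholders, and the exact keyword spelling should be verified against your SDK version:

```python
import openlit

# Sketch: fetch a prompt by name and compile it with variables.
# "support-greeting" and the variable values are illustrative placeholders.
prompt = openlit.get_prompt(
    name="support-greeting",
    shouldCompile=True,                              # substitute variables into the template
    variables={"customer_name": "Ada"},
    meta_properties={"caller": "checkout-service"},  # usage-analytics metadata
)
print(prompt)
```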
Vault - openlit.get_secrets()
Enterprise-grade secret management for AI applications. Configure OpenLIT Vault for secure API key and credential handling in production LLM deployments:

| Parameter | Description |
|---|---|
| url | Sets the OpenLIT URL. Defaults to the OPENLIT_URL environment variable. |
| api_key | Sets the OpenLIT API Key. Can also be provided via the OPENLIT_API_KEY environment variable. |
| key | Specific secret key retrieval for individual credential access. Optional |
| should_set_env | Automatically set retrieved secrets as environment variables for seamless application integration. Optional |
| tags | Tag-based secret filtering for organized credential management across different AI services. Optional |
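For example, a minimal sketch of retrieving a single secret and exporting it as an environment variable; the key name is an illustrative placeholder:

```python
import openlit

# Sketch: retrieve one secret by key and also export it as an
# environment variable. "OPENAI_API_KEY" is an illustrative key name.
secret = openlit.get_secrets(
    key="OPENAI_API_KEY",
    should_set_env=True,  # also set the retrieved secret in the process environment
)
```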