Quickstart: LLM Observability
Production-ready AI monitoring setup in 2 simple steps with zero code changes
Integrations
60+ AI integrations with automatic instrumentation and performance tracking
Using an existing OTel Tracer
You have the flexibility to integrate your existing OpenTelemetry (OTel) tracer configuration with OpenLIT. If you already have an OTel tracer instantiated in your application, you can pass it directly to `openlit.init(tracer=tracer)`.
This integration ensures that OpenLIT utilizes your custom tracer settings, allowing for a unified tracing setup across your application.
Example:
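A minimal sketch of what this can look like. The provider and console exporter below are illustrative stand-ins; reuse whatever tracer setup your application already has:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

import openlit

# Illustrative provider setup; in practice, reuse your app's existing tracer
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

# OpenLIT will emit its spans through your existing tracer configuration
openlit.init(tracer=tracer)
```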
Add custom resource attributes
The `OTEL_RESOURCE_ATTRIBUTES` environment variable allows you to provide additional OpenTelemetry resource attributes when starting your application with OpenLIT. OpenLIT already includes some default resource attributes:
telemetry.sdk.name: openlit
service.name: YOUR_SERVICE_NAME
deployment.environment: YOUR_ENVIRONMENT_NAME
You can define your own attributes by setting the `OTEL_RESOURCE_ATTRIBUTES` variable. Your custom attributes will be added on top of the existing OpenLIT attributes, providing additional context to your telemetry data. Simply format your attributes as `key1=value1,key2=value2`.
For example:
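A sketch of setting the variable from Python before OpenLIT starts; the `team` and `region` keys here are hypothetical examples, not required names:

```python
import os

# Hypothetical attribute keys; set this before openlit.init() runs,
# or export OTEL_RESOURCE_ATTRIBUTES in your shell instead
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "team=ml-platform,region=us-east-1"
```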
Disable Tracing of Content
By default, OpenLIT adds the prompts and completions to trace span attributes. However, you may want to disable this logging for privacy reasons, as they may contain highly sensitive data from your users. You may also simply want to reduce the size of your traces. Example:
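A minimal sketch, assuming the `trace_content` flag (check your SDK version for the exact parameter name):

```python
import openlit

# Assumption: trace_content=False stops prompts and completions
# from being recorded on span attributes
openlit.init(trace_content=False)
```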
Disable Batch

By default, the SDK batches spans using the OpenTelemetry batch span processor. When working locally, you may sometimes wish to disable this behavior. You can do that with this flag. Example:
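A minimal sketch, assuming the `disable_batch` flag:

```python
import openlit

# Export each span immediately instead of batching (handy for local dev)
openlit.init(disable_batch=True)
```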
Disable Instrumentations

By default, OpenLIT automatically detects which models and frameworks you are using and instruments them for you. You can override this and disable instrumentation for specific frameworks and models. Example:
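A minimal sketch, assuming the `disabled_instrumentors` parameter; the instrumentor names below are illustrative:

```python
import openlit

# Skip auto-instrumentation for specific providers/frameworks
# ("anthropic" and "langchain" are illustrative instrumentor names)
openlit.init(disabled_instrumentors=["anthropic", "langchain"])
```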
Manual Tracing

Using `openlit.trace`, you can manually create traces, allowing you to record every process within a single function.
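A sketch of such a decorated function, assuming an auto-instrumented OpenAI client; the model and prompt are illustrative:

```python
from openai import OpenAI  # assumption: any auto-instrumented LLM client works here
import openlit

openlit.init()
client = OpenAI()

@openlit.trace
def generate_one_liner(topic: str) -> str:
    # LLM calls made inside this function are grouped under one trace
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Write a one-liner about {topic}."}],
    )
    return response.choices[0].message.content
```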
The `trace` decorator automatically groups any LLM function invoked within `generate_one_liner`, providing you with organized groupings right out of the box.
You can do more with traces by running the `start_trace` context manager:
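A sketch, assuming `start_trace` takes a trace name (check your SDK version for the exact signature); `generate_one_liner` is a hypothetical helper:

```python
import openlit

openlit.init()

# Group several steps under one manually managed trace
with openlit.start_trace(name="generate_one_liner") as trace:
    result = generate_one_liner("observability")  # hypothetical helper function
    trace.set_result(result)
    trace.set_metadata({"topic": "observability"})
```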
Use `trace.set_result('')` to set the final result of the trace and `trace.set_metadata({})` to add custom metadata.
Full Example
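A possible end-to-end sketch combining the decorator and the context manager; the client, model, prompt, and names are illustrative assumptions:

```python
from openai import OpenAI  # assumption: an auto-instrumented LLM client
import openlit

openlit.init()
client = OpenAI()

@openlit.trace
def generate_one_liner(topic: str) -> str:
    # This call is captured under the decorator's trace
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Write a one-liner about {topic}."}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Wrap the whole pipeline in one manually managed trace
    with openlit.start_trace(name="one-liner-pipeline") as trace:
        one_liner = generate_one_liner("observability")
        trace.set_result(one_liner)
        trace.set_metadata({"topic": "observability"})
```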
Deploy OpenLIT
Deployment options for scalable LLM monitoring infrastructure
Destinations
Send telemetry to Datadog, Grafana, New Relic, and other observability stacks
Running in Kubernetes? Try the OpenLIT Operator
Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.