```shell
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
export OPENLIT_COLLECT_SYSTEM_METRICS=true
export OTEL_SERVICE_NAME=my-gpu-app

# Run your application
openlit-instrument python your_app.py
```
You can set up OpenLIT in your application using either function arguments directly in your code or environment variables.
Parameters
Environment Variables
Add the following two lines to your application code:
```python
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",
    collect_system_metrics=True,  # This enables GPU monitoring
)
```
Replace `http://127.0.0.1:4318` with the URL of your OpenTelemetry backend. The value shown works if you are using OpenLIT with a local OTel Collector.
Note: `collect_system_metrics=True` replaces the deprecated `collect_gpu_stats=True`.
Configure your OTLP endpoint using environment variables:
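A minimal sketch, using the same variable names as the quick-start snippet at the top of this page:

```shell
# Point the exporter at your OpenTelemetry backend
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"

# Enable GPU and system metrics collection
export OPENLIT_COLLECT_SYSTEM_METRICS=true
```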
You can quickly start using the OTel GPU Collector by pulling the Docker image:
Here’s a quick example showing how to run the container with the required environment variables:
```shell
docker run --gpus all \
  -e OTEL_SERVICE_NAME='chatbot' \
  -e OTEL_RESOURCE_ATTRIBUTES='deployment.environment=staging' \
  -e OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" \
  ghcr.io/openlit/otel-gpu-collector:latest
```
For more advanced configurations of the collector, visit the OTel GPU Collector repository.

Note: If you've deployed OpenLIT using Docker Compose, make sure to use the host's IP address as the OTLP endpoint, or add the OTel GPU Collector to the same Docker Compose file:
Docker Compose: Add the following config under `services`
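A sketch of such a service entry, assuming a service name of `otel-gpu-collector` and that your OpenLIT OTel Collector is reachable as `otel-collector` on the Compose network (adjust the image tag and environment values to match the `docker run` example above):

```yaml
services:
  otel-gpu-collector:
    image: ghcr.io/openlit/otel-gpu-collector:latest
    environment:
      OTEL_SERVICE_NAME: 'chatbot'
      OTEL_RESOURCE_ATTRIBUTES: 'deployment.environment=staging'
      # Assumes the OTel Collector runs as a service named "otel-collector"
      OTEL_EXPORTER_OTLP_ENDPOINT: 'http://otel-collector:4318'
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

The `deploy.resources.reservations.devices` stanza is the Compose equivalent of `docker run --gpus all`; it requires the NVIDIA Container Toolkit on the host.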