Get started

1. Deploy OpenLIT

   1. Clone the OpenLIT repository:

      git clone git@github.com:openlit/openlit.git

   2. Start Docker Compose. From the root directory of the OpenLIT repo, run:

      docker compose up -d
2. Install OpenLIT

   Open your command line or terminal and run:

   pip install openlit
3. Initialize OpenLIT in your Application

   Not sure which method to choose? Check out Instrumentation Methods to understand the differences.

   # Start GPU monitoring instantly
   openlit-instrument --collect-system-metrics python your_app.py

   # With custom settings
   openlit-instrument \
     --otlp-endpoint http://127.0.0.1:4318 \
     --service-name my-gpu-app \
     --environment production \
     --collect-system-metrics \
     python your_app.py

   To send metrics to other observability tools, refer to the supported destinations. For more advanced configurations and application use cases, visit the OpenLIT Python repository.
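Conceptually, the `openlit-instrument` wrapper above exports standard OpenTelemetry settings before launching your application. The sketch below is a hypothetical illustration of that idea, not the real CLI's implementation (the actual tool also attaches auto-instrumentation); the helper name `run_instrumented` is invented for this example, while the environment variable names are the standard OpenTelemetry ones:

```python
import os
import subprocess
import sys

# Hypothetical sketch: launch an app with OTLP settings in its environment,
# mirroring what flags like --otlp-endpoint and --service-name configure.
def run_instrumented(app_argv,
                     endpoint="http://127.0.0.1:4318",
                     service_name="my-gpu-app",
                     environment="production"):
    env = dict(
        os.environ,
        OTEL_EXPORTER_OTLP_ENDPOINT=endpoint,   # where telemetry is sent
        OTEL_SERVICE_NAME=service_name,         # service.name resource attribute
        OTEL_RESOURCE_ATTRIBUTES=f"deployment.environment={environment}",
    )
    # Run the application with the OTLP configuration applied.
    return subprocess.run([sys.executable, *app_argv],
                          env=env, capture_output=True, text=True)
```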
OTel GPU Collector

1. Deploy OpenLIT

   Follow the same steps as above: clone the OpenLIT repository and run docker compose up -d from its root directory.
2. Pull the `otel-gpu-collector` Docker image

   You can quickly start using the OTel GPU Collector by pulling the Docker image:

   docker pull ghcr.io/openlit/otel-gpu-collector:latest
3. Run the `otel-gpu-collector` Docker container

   Here's a quick example showing how to run the container with the required environment variables:

   docker run --gpus all \
       -e GPU_APPLICATION_NAME='chatbot' \
       -e GPU_ENVIRONMENT='staging' \
       -e OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" \
       ghcr.io/openlit/otel-gpu-collector:latest
For more advanced configurations of the collector, visit the OTel GPU Collector repository.

Note: If you've deployed OpenLIT using Docker Compose, the collector cannot reach it at 127.0.0.1 (inside a container, that address refers to the container itself). Either point OTEL_EXPORTER_OTLP_ENDPOINT at the host's IP address, for example:

OTEL_EXPORTER_OTLP_ENDPOINT="http://192.168.10.15:4318"

or add the OTel GPU Collector as a service in your Docker Compose file:

otel-gpu-collector:
  image: ghcr.io/openlit/otel-gpu-collector:latest
  environment:
    GPU_APPLICATION_NAME: 'chatbot'
    GPU_ENVIRONMENT: 'staging'
    OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4318"
  device_requests:
    - driver: nvidia
      count: all
      capabilities: [gpu]
  depends_on:
    - otel-collector
  restart: always
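The endpoint handling above follows the standard OpenTelemetry convention: an explicit OTEL_EXPORTER_OTLP_ENDPOINT environment variable overrides a local default. A minimal sketch of that convention (the function name is illustrative, not part of the collector's API):

```python
import os

# Illustrative sketch of the standard OTel convention: the environment
# variable, when set, takes precedence over a localhost default.
def resolve_otlp_endpoint(default="http://127.0.0.1:4318"):
    return os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", default)
```

With the variable unset this returns the localhost default; setting it to the host's IP (as in the note above) redirects telemetry there.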

Kubernetes

Running in Kubernetes? Try the OpenLIT Operator

Automatically inject instrumentation into existing workloads without modifying pod specs, container images, or application code.