Get started

OpenLIT SDK

Collect and send GPU performance metrics directly from your application to an OpenTelemetry endpoint.

OpenTelemetry GPU Collector

Install the OpenTelemetry GPU Collector as a Docker container to collect and send GPU performance metrics to an OpenTelemetry endpoint.
Method 1: OpenLIT SDK

Step 1: Deploy OpenLIT

Clone the OpenLIT repository:

git clone git@github.com:openlit/openlit.git

Then, from the root directory of the OpenLIT repo, start Docker Compose:

docker compose up -d
Step 2: Install OpenLIT

Open your command line or terminal and run:

pip install openlit
Step 3: Initialize OpenLIT in your Application

Not sure which method to choose? Check out Instrumentation Methods to understand the differences.
# Start GPU monitoring instantly
openlit-instrument --collect-system-metrics python your_app.py

# With custom settings
openlit-instrument \
  --otlp-endpoint http://127.0.0.1:4318 \
  --service-name my-gpu-app \
  --environment production \
  --collect-system-metrics \
  python your_app.py
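The CLI flags above mirror standard OpenTelemetry settings. As a sketch, assuming openlit-instrument honors the usual OpenTelemetry environment variables the way most OTel-based tools do (an assumption, not confirmed by this page), the same custom settings could also be expressed as:

```shell
# Assumption: openlit-instrument reads the standard OpenTelemetry env vars.
# Values taken from the "custom settings" example above.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
export OTEL_SERVICE_NAME="my-gpu-app"
echo "Exporting to $OTEL_EXPORTER_OTLP_ENDPOINT as $OTEL_SERVICE_NAME"
```

Environment variables like these are convenient in containerized deployments, where flags on the command line are harder to manage than per-service environment blocks.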
To send metrics to other observability tools, refer to the supported destinations. For more advanced configurations and application use cases, visit the OpenLIT Python repository.
Method 2: OpenTelemetry GPU Collector

Step 1: Deploy OpenLIT

Clone the OpenLIT repository:

git clone git@github.com:openlit/openlit.git

Then, from the root directory of the OpenLIT repo, start Docker Compose:

docker compose up -d
Step 2: Pull the `otel-gpu-collector` Docker Image

You can quickly start using the OTel GPU Collector by pulling the Docker image:

docker pull ghcr.io/openlit/otel-gpu-collector:latest
Step 3: Run the `otel-gpu-collector` Docker Container

Here's a quick example showing how to run the container with the required environment variables:
docker run --gpus all \
    -e GPU_APPLICATION_NAME='chatbot' \
    -e GPU_ENVIRONMENT='staging' \
    -e OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318" \
    ghcr.io/openlit/otel-gpu-collector:latest
For more advanced configurations of the collector, visit the OTel GPU Collector repository.

Note: If you've deployed OpenLIT using Docker Compose, make sure to use the host's IP address, or add the OTel GPU Collector as a service in your Docker Compose file:
otel-gpu-collector:
  image: ghcr.io/openlit/otel-gpu-collector:latest
  environment:
    GPU_APPLICATION_NAME: 'chatbot'
    GPU_ENVIRONMENT: 'staging'
    OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel-collector:4318"
  device_requests:
  - driver: nvidia
    count: all
    capabilities: [gpu]
  depends_on:
  - otel-collector
  restart: always
If the collector runs outside Docker Compose, set the endpoint to the host's IP address instead:

OTEL_EXPORTER_OTLP_ENDPOINT="http://192.168.10.15:4318"
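To reason about which URL metrics actually go to, note that `OTEL_EXPORTER_OTLP_ENDPOINT` is only the base address: OTLP/HTTP exporters append the per-signal path, `/v1/metrics` for metrics. A minimal stdlib-only sketch of that resolution (the helper name `resolve_otlp_metrics_url` is hypothetical, not part of OpenLIT):

```python
import os

def resolve_otlp_metrics_url(default="http://127.0.0.1:4318"):
    """Hypothetical helper: read the base endpoint from the environment
    and append the OTLP/HTTP metrics signal path, per the OTLP spec."""
    base = os.environ.get("OTEL_EXPORTER_OTLP_ENDPOINT", default).rstrip("/")
    return base + "/v1/metrics"

# Inside Docker Compose, the service name resolves on the Compose network:
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://otel-collector:4318"
print(resolve_otlp_metrics_url())  # http://otel-collector:4318/v1/metrics
```

This is why the in-Compose example uses `http://otel-collector:4318` while a collector on the host network needs the host's routable IP: the hostname must resolve from wherever the exporter runs.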

Integrations: 60+ AI integrations with automatic instrumentation and performance tracking

Create a dashboard: create custom visualizations with flexible widgets, queries, and real-time AI monitoring

Manage prompts: version, deploy, and collaborate on prompts with centralized management and tracking