This guide walks you through setting up OpenTelemetry auto-instrumentation for your LLM application using OpenLIT. In just a few steps, you'll be able to send OpenTelemetry traces and metrics from your LLM applications to OpenLIT and track and analyze their performance and usage.

1. Deploy OpenLIT

1.1 Clone the OpenLIT repository

git clone git@github.com:openlit/openlit.git
1.2 Start Docker Compose

From the root directory of the OpenLIT repository, run:

docker compose up -d
2. Install the OpenLIT SDK

pip install openlit
3. Initialize OpenLIT in Your Application

Add the following two lines to your application code:

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

Example usage for monitoring OpenAI:

from openai import OpenAI
import openlit

# Point OpenLIT at the local OTLP endpoint exposed by Docker Compose
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

# This call is traced automatically; no extra instrumentation code is needed
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)

print(chat_completion.choices[0].message.content)

Refer to the OpenLIT Python SDK repository or the TypeScript SDK repository for more advanced configurations and use cases.
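As one example of a more explicit setup, `openlit.init` accepts optional parameters beyond `otlp_endpoint`. The sketch below assumes the `application_name` and `environment` parameters of the Python SDK; the values shown ("my-llm-app", "production") are hypothetical, and the SDK repository is the authority on the current signature.

```python
import openlit

# A minimal sketch of a more explicit initialization (a configuration
# fragment, not a definitive reference -- check the SDK docs for the
# current parameter list).
openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # OTLP HTTP endpoint served by OpenLIT
    application_name="my-llm-app",          # hypothetical service name shown in OpenLIT
    environment="production",               # hypothetical deployment label
)
```

Tagging telemetry with an application name and environment makes it easier to separate traces from different services or deployment stages in the OpenLIT UI.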

4. Visualize and Analyze

With the LLM observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze it to gain insight into your LLM application's performance and behavior and identify areas for improvement.

Just head over to OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials.

You're all set! Following these steps should have you on your way to effectively monitoring your LLM applications with OpenTelemetry. If you wish to send telemetry to another backend, refer to our Connections guide.

If you have any questions or need support, reach out to our community.