This guide walks you through setting up OpenTelemetry auto-instrumentation to monitor your LLM application using OpenLIT. In just a few steps, you'll be able to track and analyze the performance and usage of your LLM applications.
In this guide, we'll show how to send OpenTelemetry traces and metrics from your LLM applications to OpenLIT.
Deploy OpenLIT
Git Clone OpenLIT Repository
git clone git@github.com:openlit/openlit.git
Start Docker Compose
From the root directory of the OpenLIT repo, run the command below:
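The standard invocation, assuming Docker and the Docker Compose plugin are installed, is:

docker compose up -d

This brings up the OpenLIT stack, including the OTLP endpoint on port 4318 and the UI on port 3000 used later in this guide.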
Initialize OpenLIT in Your Application
Setup using function arguments

Python SDK

Add the following two lines to your application code:

import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")
Examples are available for OpenAI, Anthropic, Cohere, LiteLLM, Langchain, and Ollama. The OpenAI example is shown below:
from openai import OpenAI
import openlit

openlit.init(otlp_endpoint="http://127.0.0.1:4318")

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
TypeScript SDK

Add the following two lines to your application code:

import Openlit from "openlit"

Openlit.init({ otlpEndpoint: "http://127.0.0.1:4318/v1/traces" })
Example usage for monitoring OpenAI:
import Openlit from "openlit"

Openlit.init({ otlpEndpoint: "http://127.0.0.1:4318/v1/traces" })

async function main() {
  const OpenAI = await import("openai").then((e) => e.default);
  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_KEY",
  });
  const completion = await openai.chat.completions.create({
    // model added so the request is valid; matches the Python example above
    model: "gpt-3.5-turbo",
    messages: [{ role: "system", content: "Who are you?" }],
  });
  console.log(completion?.choices?.[0]);
}

main();
Or, passing the OpenAI module to openlit explicitly for instrumentation:
import openlit from "openlit"
import OpenAI from "openai"

openlit.init({
  otlpEndpoint: "http://127.0.0.1:4318/v1/traces",
  instrumentations: {
    openai: OpenAI,
  },
})

async function main() {
  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_KEY",
  });
  const completion = await openai.chat.completions.create({
    // model added so the request is valid; matches the Python example above
    model: "gpt-3.5-turbo",
    messages: [{ role: "system", content: "Who are you?" }],
  });
  console.log(completion?.choices?.[0]);
}

main();
Setup using Environment Variables

Python SDK

Add the following two lines to your application code:

import openlit

openlit.init()

Run the following command to configure the OTLP export endpoint:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
Examples are available for OpenAI, Anthropic, Cohere, LiteLLM, Langchain, and Ollama. The OpenAI example is shown below:
from openai import OpenAI
import openlit

openlit.init()

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "What is LLM Observability?",
        }
    ],
    model="gpt-3.5-turbo",
)
TypeScript SDK

Add the following two lines to your application code:

import openlit from "openlit"

openlit.init()

Run the following command to configure the OTLP export endpoint:

export OTEL_EXPORTER_OTLP_ENDPOINT="http://127.0.0.1:4318"
Example usage for monitoring OpenAI:
import openlit from "openlit"

openlit.init()

async function main() {
  const OpenAI = await import("openai").then((e) => e.default);
  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_KEY",
  });
  const completion = await openai.chat.completions.create({
    // model added so the request is valid; matches the Python example above
    model: "gpt-3.5-turbo",
    messages: [{ role: "system", content: "Who are you?" }],
  });
  console.log(completion?.choices?.[0]);
}

main();
Or, passing the OpenAI module to openlit explicitly for instrumentation:
import openlit from "openlit"
import OpenAI from "openai"

openlit.init({
  instrumentations: {
    openai: OpenAI,
  },
})

async function main() {
  const openai = new OpenAI({
    apiKey: "YOUR_OPENAI_KEY",
  });
  const completion = await openai.chat.completions.create({
    // model added so the request is valid; matches the Python example above
    model: "gpt-3.5-turbo",
    messages: [{ role: "system", content: "Who are you?" }],
  });
  console.log(completion?.choices?.[0]);
}

main();
Refer to the OpenLIT Python SDK repository or TypeScript SDK repository for more advanced configurations and use cases.
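As one illustration of a more advanced setup, the Python SDK's init() can take additional options; the keyword arguments sketched below (application_name, environment) are assumptions to verify against the Python SDK repository:

# A minimal sketch, not the authoritative API: application_name and
# environment are assumed optional keyword arguments of openlit.init().
import openlit

openlit.init(
    otlp_endpoint="http://127.0.0.1:4318",  # OpenLIT OTLP endpoint from this guide
    application_name="my-llm-app",          # assumed: service name shown in OpenLIT
    environment="production",               # assumed: deployment environment tag
)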
Visualize and Analyze
With the LLM Observability data now being collected and sent to OpenLIT, the next step is to visualize and analyze this data to get insights into your LLM application's performance and behavior, and to identify areas for improvement.
Just head over to OpenLIT at 127.0.0.1:3000 in your browser to start exploring. You can log in using the default credentials:
Email: user@openlit.io
Password: openlituser
You’re all set! Following these steps should have you on your way to effectively monitoring your LLM applications with OpenTelemetry.
Send Observability telemetry to other OpenTelemetry backends
If you wish to send telemetry directly from the SDK to another backend, you can stop the current Docker services by using the command below. For more details on sending the data to your existing OpenTelemetry backends, refer to our Connections guide.
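Assuming the stack was started with Docker Compose from the repository root, the services can be stopped with:

docker compose down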
If you have any questions or need support, reach out to our community.