To send OpenTelemetry metrics and traces generated by OpenLIT from your AI application to SigLens, follow the steps below.

1. Get your SigLens Endpoint

  1. Ensure SigLens is running: Confirm that your SigLens instance is deployed and reachable from your application
  2. Get the OTLP endpoint: SigLens accepts OTLP data over HTTP on port 4318
    • Local deployment: http://localhost:4318/v1/traces
    • Remote deployment: http://<your-siglens-host>:4318/v1/traces
    • Replace <your-siglens-host> with your SigLens server address
SigLens supports direct OTLP ingestion without additional authentication for basic setups.
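
If you want to verify the endpoint is reachable before instrumenting anything, a quick request from Python is enough. This is a rough sketch assuming a local deployment on the default port; an empty POST is not valid OTLP, so any HTTP response at all (even an error status) confirms the listener is up:

import requests

# Illustrative reachability check; adjust the host for remote deployments.
endpoint = "http://localhost:4318/v1/traces"

try:
    # We only care that the port answers, not that the request is valid OTLP.
    response = requests.post(endpoint, timeout=5)
    print(f"SigLens OTLP endpoint is up (HTTP {response.status_code})")
except requests.ConnectionError:
    print("Could not reach SigLens - is it running and listening on 4318?")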

2. Instrument your application

For direct integration into your Python applications:
import openlit

# Point OpenLIT's OTLP exporter at your SigLens instance
openlit.init(
  otlp_endpoint="YOUR_SIGLENS_HTTP_ENDPOINT"
)
Replace:
  1. YOUR_SIGLENS_HTTP_ENDPOINT with your SigLens OTLP endpoint from Step 1.
    • Example (Local): http://localhost:4318/v1/traces
    • Example (Remote): http://siglens.company.com:4318/v1/traces
Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
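
Once initialized, OpenLIT automatically instruments calls made through supported LLM SDKs. Below is a minimal end-to-end sketch assuming the OpenAI Python SDK and an OPENAI_API_KEY in your environment; the endpoint and model name are illustrative:

import openlit
from openai import OpenAI

# Initialize OpenLIT first so subsequent LLM calls are instrumented.
openlit.init(otlp_endpoint="http://localhost:4318/v1/traces")

# OpenLIT auto-instruments supported SDKs such as OpenAI once init has run,
# so this call produces a trace in SigLens. The model name is illustrative.
client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
  model="gpt-4o-mini",
  messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)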

3. Visualize in SigLens

Once your LLM application is instrumented, you can explore the telemetry data in SigLens:
  1. Access SigLens Interface: Log into your SigLens instance dashboard
  2. Navigate to Tracing: Click Tracing in the side navigation menu
  3. Explore AI Operations: View your AI application traces including:
    • LLM request traces with detailed timing
    • Token usage and cost information
    • Vector database operations
    • Model performance analytics
    • Request/response payloads (if enabled)
  4. Trace Details: Click on any trace to see detailed span information and execution flow
  5. Search and Filter: Use SigLens’ powerful search capabilities to filter traces by service, operation, or custom attributes (see the sketch after this list for tagging your telemetry to make these filters more useful)
  6. Performance Analysis: Analyze latency patterns and identify performance bottlenecks
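
To get more out of the service and environment filters above, you can tag all telemetry at initialization. The sketch below assumes openlit.init's application_name and environment parameters; the values themselves are placeholders:

import openlit

# Tag every exported span so traces can be filtered by service and
# environment in SigLens; both values below are illustrative.
openlit.init(
  otlp_endpoint="http://localhost:4318/v1/traces",
  application_name="chatbot-backend",
  environment="production"
)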