1. Get your SigLens Endpoint
- Ensure SigLens is running: Confirm that your SigLens instance is deployed and accessible
- Get the OTLP endpoint: SigLens accepts OTLP data on port 4318
- Local deployment: `http://localhost:4318/v1/traces`
- Remote deployment: `http://<your-siglens-host>:4318/v1/traces` (replace `<your-siglens-host>` with your SigLens server address)
- Authentication: SigLens supports direct OTLP ingestion without additional authentication for basic setups.
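Before moving on, you can sanity-check that the endpoint is reachable. This is a minimal sketch, assuming your SigLens instance accepts OTLP/JSON over HTTP (an empty export request should be acknowledged with HTTP 200 if the collector is listening); the endpoint URL shown is the local example from above:

```python
# Quick reachability check for the SigLens OTLP traces endpoint.
# An empty OTLP/JSON payload is a valid export request and should
# return HTTP 200 if the endpoint is up and accepting data.
import requests

resp = requests.post(
    "http://localhost:4318/v1/traces",  # replace with your endpoint
    json={"resourceSpans": []},         # empty, but valid, OTLP/JSON body
    timeout=5,
)
print(resp.status_code)  # expect 200 when the endpoint is reachable
```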
2. Instrument your application
For Kubernetes deployments with zero-code instrumentation, replace `YOUR_SIGLENS_HTTP_ENDPOINT` with your SigLens OTLP endpoint from Step 1:
- Example (Local): `http://localhost:4318/v1/traces`
- Example (Remote): `http://siglens.company.com:4318/v1/traces`
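If you are not on Kubernetes, or prefer explicit code-level setup over zero-code instrumentation, the OpenTelemetry SDK can be pointed at the same endpoint. A minimal Python sketch, assuming the `opentelemetry-sdk` and `opentelemetry-exporter-otlp-proto-http` packages are installed; the service name `my-llm-app` is a placeholder:

```python
# Manual OpenTelemetry setup that exports traces to SigLens over OTLP/HTTP.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "my-llm-app"})  # placeholder name
)
provider.add_span_processor(
    BatchSpanProcessor(
        # Use the endpoint from Step 1 (local example shown here).
        OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces")
    )
)
trace.set_tracer_provider(provider)
```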
3. Visualize in SigLens
Once your LLM application is instrumented, you can explore the telemetry data in SigLens:
- Access SigLens Interface: Log into your SigLens instance dashboard
- Navigate to Tracing: Click Tracing in the side navigation menu
- Explore AI Operations: View your AI application traces including:
- LLM request traces with detailed timing
- Token usage and cost information
- Vector database operations
- Model performance analytics
- Request/response payloads (if enabled)
- Trace Details: Click on any trace to see detailed span information and execution flow
- Search and Filter: Use SigLens’ powerful search capabilities to filter traces by service, operation, or custom attributes
- Performance Analysis: Analyze latency patterns and identify performance bottlenecks
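Custom attributes are what make the search-and-filter step useful in practice: any attribute you set on a span at instrumentation time becomes a field you can filter on in the Tracing view. A short sketch, assuming the SDK setup from Step 2; the span and attribute names here are hypothetical examples, not a fixed schema:

```python
from opentelemetry import trace

tracer = trace.get_tracer("my-llm-app")  # placeholder tracer name

# Attributes set here can later be used to filter traces in SigLens.
with tracer.start_as_current_span("llm.chat_completion") as span:
    span.set_attribute("llm.model", "gpt-4o")       # hypothetical attribute key
    span.set_attribute("llm.prompt_tokens", 128)    # hypothetical attribute key
    span.set_attribute("app.customer_tier", "pro")  # hypothetical attribute key
    # ... perform the LLM call inside the span ...
```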