1. Get your Credentials
If you haven’t deployed the OpenLIT Platform yet, follow the Installation Guide to set it up.

Common OpenLIT Platform endpoints:
- Kubernetes cluster: http://openlit.openlit.svc.cluster.local:4318
- Local development: http://localhost:4318 (using port-forward)
- External/Ingress: Your configured external endpoint
2. Instrument your application
For Kubernetes deployments with zero-code instrumentation, replace YOUR_OPENLIT_PLATFORM_ENDPOINT with your OpenLIT Platform endpoint from Step 1:
- Same cluster: http://openlit.openlit.svc.cluster.local:4318
- External: https://your-openlit-domain.com:4318
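One common way to wire this up is through the standard OpenTelemetry exporter environment variable on the workload. The fragment below is an illustrative sketch (the Deployment name and image are hypothetical; the exact mechanism depends on how your workloads are instrumented):

```yaml
# Illustrative Deployment fragment: the standard OTel env var points an
# instrumented app at the OpenLIT Platform's OTLP/HTTP endpoint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-llm-app          # hypothetical application name
spec:
  template:
    spec:
      containers:
        - name: app
          image: my-llm-app:latest   # hypothetical image
          env:
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: "http://openlit.openlit.svc.cluster.local:4318"
```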
When using the OpenLIT Operator with the OpenLIT Platform in the same cluster, the default endpoint http://openlit.openlit.svc.cluster.local:4318 is automatically configured if no explicit endpoint is provided.

3. Access OpenLIT Platform Dashboard
Once your LLM application is instrumented, you can explore the comprehensive observability data in the OpenLIT Platform.

Access the Dashboard:
- LLM Observability Dashboard: Comprehensive view of your AI applications, including:
  - Real-time Metrics: Request rates, latency, and error rates
  - Cost Tracking: Token usage and cost breakdown by model and application
  - Performance Analytics: Response times, throughput, and model performance
  - Trace Visualization: Detailed execution flow with full request/response context
- Vector Database Analytics: Monitor your vector database operations and performance
- GPU Monitoring: Track GPU utilization and performance metrics (if enabled)
- Custom Dashboards: Create tailored views for your specific monitoring needs