To send OpenTelemetry metrics and traces generated by OpenLIT from your AI application to the OpenLIT Platform, follow the steps below.
1. Get your Credentials
If you haven't deployed the OpenLIT Platform yet, follow the Installation Guide to set it up.

Common OpenLIT Platform endpoints:
- Kubernetes cluster: http://openlit.openlit.svc.cluster.local:4318
- Local development: http://localhost:4318 (using port-forward)
- External/Ingress: Your configured external endpoint

A quick way to confirm that your chosen endpoint is reachable is sketched below.
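If you want to verify connectivity before instrumenting, here is a minimal probe using only the Python standard library. The /v1/traces path and empty JSON payload are illustrative assumptions, not part of OpenLIT; any HTTP response, even an error status, means the collector is listening, while a connection error means it is not:

```python
import urllib.error
import urllib.request

# Illustrative probe: swap in the endpoint that matches your deployment.
ENDPOINT = "http://localhost:4318/v1/traces"

request = urllib.request.Request(
    ENDPOINT,
    data=b"{}",  # empty OTLP/JSON payload, enough to elicit a response
    headers={"Content-Type": "application/json"},
)
try:
    urllib.request.urlopen(request)
    print("endpoint reachable")
except urllib.error.HTTPError as exc:
    # An HTTP error status still means something answered on the port.
    print(f"endpoint reachable (HTTP {exc.code})")
except urllib.error.URLError as exc:
    print(f"endpoint unreachable: {exc.reason}")
```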
2. Instrument your application
You can instrument your application either with the SDK or with the CLI. For direct integration into your Python applications, use the OpenLIT SDK, as shown in the sketch below. Refer to the OpenLIT Python SDK repository for more advanced configurations and use cases.
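A minimal initialization sketch, assuming the `openlit.init()` entry point and its `otlp_endpoint` argument as documented in the OpenLIT Python SDK repository:

```python
import openlit

# Initialize OpenLIT auto-instrumentation; OpenTelemetry metrics and
# traces are exported over OTLP to the endpoint given here.
openlit.init(otlp_endpoint="http://localhost:4318")
```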
The endpoint can be supplied either as a function argument (as above) or through environment variables (see the sketch after this list). Replace http://localhost:4318 with your OpenLIT Platform endpoint:
- Local development: http://localhost:4318
- Kubernetes cluster: http://openlit.openlit.svc.cluster.local:4318
- External: Your configured external endpoint
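As an alternative to the function argument, a sketch using the standard OpenTelemetry exporter environment variable, which the SDK is assumed to honor when `otlp_endpoint` is omitted:

```python
import os

import openlit

# Standard OpenTelemetry exporter variable; set it before openlit.init()
# so the SDK picks it up (assumed behavior when otlp_endpoint is omitted).
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://openlit.openlit.svc.cluster.local:4318"

openlit.init()
```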
3. Access OpenLIT Platform Dashboard
Once your LLM application is instrumented, you can explore the comprehensive observability data in the OpenLIT Platform.

Access the Dashboard:
- LLM Observability Dashboard: Comprehensive view of your AI applications, including:
  - Real-time Metrics: Request rates, latency, and error rates
  - Cost Tracking: Token usage and cost breakdown by model and application
  - Performance Analytics: Response times, throughput, and model performance
  - Trace Visualization: Detailed execution flow with full request/response context
- Vector Database Analytics: Monitor your vector database operations and performance
- GPU Monitoring: Track GPU utilization and performance metrics (if enabled)
- Custom Dashboards: Create tailored views for your specific monitoring needs

