Quickly evaluate your model responses for Hallucination, Bias, and Toxicity

This guide shows how to use the `All` evaluator, which checks for all three aspects in one go, using `openlit.evals`. Additionally, we'll show you how to collect OpenTelemetry metrics during the evaluation process.
Initialize evaluations in Your Application
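A minimal sketch of setting up the combined evaluator and scoring a single response. The `provider` and `api_key` arguments and the `measure()` call follow the `openlit.evals` API; treat the exact parameter names and the sample strings as assumptions to verify against your installed version:

```python
import os

import openlit

# The All evaluator uses an LLM provider (OpenAI here) as the judge.
# Assumes the OPENAI_API_KEY environment variable is set.
evals = openlit.evals.All(provider="openai", api_key=os.getenv("OPENAI_API_KEY"))

# Score a model response against the prompt and supporting contexts
result = evals.measure(
    prompt="When did Einstein win the Nobel Prize?",
    contexts=["Einstein won the Nobel Prize in 1921 for the photoelectric effect."],
    text="Einstein won the Nobel Prize in 1969 for the theory of relativity.",
)
print(result)  # contains the verdict, score, and evaluation details
```

The evaluator is constructed once and reused, so the judge model and its credentials are configured a single time per application.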
You can also use `openlit.evals.Hallucination()`, `openlit.evals.Bias()`, or `openlit.evals.Toxicity()` to evaluate a single aspect on its own.
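As a sketch, a single-aspect check follows the same pattern as the combined evaluator (again assuming the `provider` argument and the `measure()` signature; the sample strings are illustrative):

```python
import openlit

# Hallucination-only detector; it judges the text against the contexts
detector = openlit.evals.Hallucination(provider="openai")

result = detector.measure(
    prompt="When was the Eiffel Tower completed?",
    contexts=["The Eiffel Tower was completed in 1889."],
    text="The Eiffel Tower was completed in 1905.",
)
print(result)
```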
For details on how it works, and to see the supported providers, models, and parameters to pass, check our Evaluations Guide.

Collecting Evaluation metrics
The `openlit.evals` module integrates with OpenTelemetry to track evaluation metrics as a counter, including score details and evaluation metadata. To enable metric collection, initialize OpenLIT for metrics tracking:
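A sketch of enabling metric collection, assuming `openlit.init()` accepts an `otlp_endpoint` and the evaluator takes a `collect_metrics` flag; verify both names and the endpoint URL against the Evaluations Guide for your version:

```python
import openlit

# Start OpenLIT's OpenTelemetry pipeline; point otlp_endpoint at
# your OpenTelemetry collector (a local collector is assumed here).
openlit.init(otlp_endpoint="http://127.0.0.1:4318")

# collect_metrics=True records each evaluation's score and
# metadata on an OpenTelemetry counter.
evals = openlit.evals.All(provider="openai", collect_metrics=True)
```

With this in place, every `measure()` call increments the evaluation counter alongside its score details, so results can be charted in whatever backend your collector exports to.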