Get started with Guardrails
Quickly secure your app from Prompt Injection, Sensitive Topics, and Topic Restriction
This guide will help you set up guardrails to secure your applications from bad prompts. With OpenLIT Guardrails, you can detect and manage Prompt Injection, Sensitive Topics, and restrict prompts to certain topics.
We’ll demonstrate how to use the All
guardrail, which checks for all three aspects at once, using openlit.guard
. Additionally, we’ll show you how to collect OpenTelemetry metrics during the guardrail process.
Initialize Guardrails in Your Application
Add the following two lines to your application code:
Full Example:
The “All” guardrail is useful for simultaneously checking against Prompt Injection, Sensitive Topics, and Topic Restriction. For more efficient, targeted protection from harmful prompts, you can use specific guardrails like openlit.guard.PromptInjection()
, openlit.guard.SensitiveTopics()
, or openlit.guard.TopicRestriction()
.
For details on how it works, and to see the supported providers, models, and parameters to pass, check our Guardrails Guide.
Collecting Guardrail Metrics
The openlit.guard
module integrates with OpenTelemetry to track guardrail metrics as a counter, including score details and validation metadata. To enable metric collection, initialize OpenLIT with metrics tracking:
These metrics can be sent to any OpenTelemetry-compatible backend. For configuration details, check out our Connections Guide to choose your preferred destination for these metrics.
You’re all set! By following these steps, you can effectively secure the interactions generated by your models.
If you have any questions or need support, reach out to our community.
Was this page helpful?