All
guardrail for complete prompt safety monitoring, detecting prompt injection attacks, sensitive content, and topic violations simultaneously. We’ll also show you how to collect OpenTelemetry guardrail metrics for continuous AI security monitoring.
1
Initialize guardrails
Set up automated prompt safety filters for LLMs with just two lines of code:Full Example:The For advanced AI guardrails configuration and supported providers, explore our Guardrails Guide.
example.py
Output
All
guard provides prompt safety filtering against injection attacks, sensitive content, and topic violations simultaneously. For targeted prompt protection, use specific guardrails:Prompt injection detection
Detect and block malicious prompt injection attacks and jailbreak attempts
Sensitive topic filtering
Filter sensitive content including personal data, financial information, and confidential topics
Topic restriction
Restrict LLM responses to approved topics and prevent off-topic conversations
2
Track AI Guardrail metrics
To send guardrail security metrics to OpenTelemetry backends, your application needs to be instrumented via OpenLIT. Choose from three instrumentation methods, then simply add Metrics are sent to the same OpenTelemetry backend configured during instrumentation, check our supported destinations for configuration details.
collect_metrics=True
to track prompt injection detection, sensitive topic filtering, and topic restriction metrics.No code changes needed - instrument via CLI:Then in your application: