Overview

The OpenLIT SDKs provide a function to evaluate rules against the Rule Engine from your application code. At runtime, send trace attributes (model, provider, service name, etc.) and get back matching rules and their linked entities — contexts, prompts, or evaluation configurations. This enables dynamic, condition-driven retrieval of AI resources without hardcoding logic in your application.

Contexts

Retrieve system prompts and knowledge based on model, user tier, or any attribute

Prompts

Fetch compiled prompts with variable substitution from the Prompt Hub

Evaluation Configs

Determine which evaluation types apply to a given trace

Prerequisites

1. **Set up OpenLIT**: Ensure you have an OpenLIT instance running. See Quick Start for setup instructions.
2. **Create an API Key**: Navigate to Settings > API Keys in OpenLIT. Click Create API Key and save the key securely.
3. **Create Rules**: Set up rules with conditions and linked entities in the Rule Engine UI.

Configuration

All SDKs resolve the OpenLIT URL and API key in the same order:
| Parameter | Environment Variable | Description | Default |
|---|---|---|---|
| `url` / `URL` | `OPENLIT_URL` | Base URL of your OpenLIT dashboard | `http://127.0.0.1:3000` |
| `api_key` / `apiKey` / `APIKey` | `OPENLIT_API_KEY` | API key for Bearer token authentication | required |
Set environment variables to avoid passing credentials in every call:
```shell
export OPENLIT_URL="https://your-openlit-instance.com"
export OPENLIT_API_KEY="your-api-key"
```
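The resolution order (explicit parameter, then environment variable, then default) can be sketched as a small helper. `resolve_setting` is a hypothetical name for illustration, not part of the SDK:

```python
import os


def resolve_setting(explicit, env_var, default=None):
    """Return the explicit value if given, else the environment
    variable, else the default."""
    if explicit is not None:
        return explicit
    return os.environ.get(env_var, default)


# e.g. the URL falls back to the documented default when OPENLIT_URL is unset
url = resolve_setting(None, "OPENLIT_URL", "http://127.0.0.1:3000")
```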

Usage

Retrieve Contexts

Fetch context entities (system prompts, knowledge) that match the given trace attributes.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="context",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "service.name": "my-app",
    },
    include_entity_data=True,
)

if result:
    print("Matching rules:", result["matchingRuleIds"])
    for entity in result.get("entities", []):
        entity_key = f"{entity['entity_type']}:{entity['entity_id']}"
        data = result.get("entity_data", {}).get(entity_key, {})
        print(f"Context: {data.get('name')} - {data.get('content')}")
```
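The extraction loop above can be factored into a reusable helper. This is a pure function over the documented response shape, not part of the SDK:

```python
def extract_contexts(result):
    """Collect (name, content) pairs for the entities in an
    evaluate_rule response, using the documented 'type:id' key
    convention for entity_data."""
    if not result:
        return []
    entity_data = result.get("entity_data", {})
    contexts = []
    for entity in result.get("entities", []):
        key = f"{entity['entity_type']}:{entity['entity_id']}"
        data = entity_data.get(key, {})
        contexts.append((data.get("name"), data.get("content")))
    return contexts
```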

Retrieve Prompts

Fetch compiled prompts from the Prompt Hub with variable substitution.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="prompt",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
    },
    include_entity_data=True,
    entity_inputs={
        "variables": {"user_name": "Alice", "product": "OpenLIT"},
        "shouldCompile": True,
    },
)

if result:
    for entity in result.get("entities", []):
        key = f"prompt:{entity['entity_id']}"
        prompt_data = result.get("entity_data", {}).get(key, {})
        print("Compiled prompt:", prompt_data.get("compiledPrompt"))
```
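If you fetch the raw prompt with `shouldCompile` disabled, variable substitution can also be done client-side. The `{{name}}` placeholder syntax below is an assumption for illustration; the server-side compiler in the Prompt Hub is authoritative:

```python
import re


def compile_prompt(template, variables):
    """Substitute {{name}} placeholders with values from a dict,
    leaving unknown placeholders untouched. Placeholder syntax is
    assumed for illustration."""
    def repl(match):
        return str(variables.get(match.group(1), match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", repl, template)
```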

Check Evaluation Rules

Determine which evaluation types (hallucination, bias, etc.) are linked to rules matching the current trace. This works with both the 11 built-in evaluation types and any custom evaluation types you have created.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="evaluation",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "service.name": "production-api",
    },
)

if result and result["matchingRuleIds"]:
    print("Evaluation rules matched:", result["matchingRuleIds"])
    for entity in result.get("entities", []):
        print(f"  Evaluation type: {entity['entity_id']}")
else:
    print("No evaluation rules matched")
```
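A typical follow-up is gating which evaluations your pipeline actually runs on the matched entity IDs. `evaluations_to_run` is a hypothetical helper over the documented response shape, not an SDK function:

```python
def evaluations_to_run(result, available):
    """Return the subset of available evaluation types whose IDs
    appear among the matched entities; empty when no rules matched
    or the call failed (result is None)."""
    if not result or not result.get("matchingRuleIds"):
        return set()
    matched = {e["entity_id"] for e in result.get("entities", [])}
    return matched & set(available)
```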

Parameters

Python — openlit.evaluate_rule()

| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | `str` | No | OpenLIT dashboard URL |
| `api_key` | `str` | No | API key for authentication |
| `entity_type` | `str` | Yes | `"context"`, `"prompt"`, or `"evaluation"` |
| `fields` | `dict` | Yes | Trace attributes to match against rules |
| `include_entity_data` | `bool` | No | Include full entity data in response. Default: `False` |
| `entity_inputs` | `dict` | No | Inputs for entity resolution (e.g. prompt variables) |

TypeScript — Openlit.evaluateRule()

| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | `string` | No | OpenLIT dashboard URL |
| `apiKey` | `string` | No | API key for authentication |
| `entityType` | `'context'`, `'prompt'`, or `'evaluation'` | Yes | Entity type to match |
| `fields` | Object of string/number/boolean values | Yes | Trace attributes to match |
| `includeEntityData` | `boolean` | No | Include full entity data. Default: `false` |
| `entityInputs` | Object | No | Inputs for entity resolution |

Go — openlit.EvaluateRule()

| Field | Type | Required | Description |
|---|---|---|---|
| `URL` | `string` | No | OpenLIT dashboard URL |
| `APIKey` | `string` | No | API key for authentication |
| `EntityType` | `RuleEntityType` | Yes | `RuleEntityContext`, `RuleEntityPrompt`, or `RuleEntityEvaluation` |
| `Fields` | `map[string]interface{}` | Yes | Trace attributes to match |
| `IncludeEntityData` | `bool` | No | Include full entity data. Default: `false` |
| `EntityInputs` | `map[string]interface{}` | No | Inputs for entity resolution |
| `Timeout` | `time.Duration` | No | HTTP timeout. Default: 30s |

Response Format

All SDKs return the same response structure:
```json
{
  "matchingRuleIds": ["rule-uuid-1", "rule-uuid-2"],
  "entities": [
    {
      "rule_id": "rule-uuid-1",
      "entity_type": "context",
      "entity_id": "ctx-uuid-1"
    }
  ],
  "entity_data": {
    "context:ctx-uuid-1": {
      "id": "ctx-uuid-1",
      "name": "Premium System Prompt",
      "content": "You are a helpful AI assistant..."
    }
  }
}
```
| Field | Description |
|---|---|
| `matchingRuleIds` | Array of rule IDs whose conditions matched the input fields |
| `entities` | Array of linked entities from matching rules, filtered by `entity_type` |
| `entity_data` | Full entity records, keyed as `type:id`. Only present when `include_entity_data` is true |
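Because `entity_data` is keyed as `type:id`, joining each entity with its full record is a small dictionary lookup. This is an illustrative helper, not part of the SDK:

```python
def join_entities(response):
    """Attach each entity's full record (present when entity data was
    requested) to the entity dict, using the 'type:id' key convention.
    Entities without a record get data=None."""
    data = response.get("entity_data", {})
    joined = []
    for entity in response.get("entities", []):
        key = f"{entity['entity_type']}:{entity['entity_id']}"
        joined.append({**entity, "data": data.get(key)})
    return joined
```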

Error Handling

Returns None on any error (network, auth, server). Check for None before using the result.
```python
result = openlit.evaluate_rule(entity_type="context", fields={"key": "val"})
if result is None:
    print("Rule evaluation failed — check logs for details")
```
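Since the call returns `None` rather than raising, a thin wrapper can substitute a safe default so downstream code never has to branch on `None`. `evaluate_with_fallback` is a hypothetical pattern, not an SDK function:

```python
def evaluate_with_fallback(evaluate, fallback, **kwargs):
    """Call an evaluate_rule-style function and return a fallback
    response when it signals failure by returning None."""
    result = evaluate(**kwargs)
    if result is None:
        return fallback
    return result
```

For example, passing `fallback={"matchingRuleIds": [], "entities": []}` lets the rest of the pipeline treat a failed call the same as "no rules matched".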

Rule Engine Guide

Learn how to create rules, add conditions, and link entities in the OpenLIT UI

API Reference

Full OpenAPI reference for the evaluate endpoint