Overview
The OpenLIT SDKs provide a function to evaluate rules against the Rule Engine from your application code. At runtime, send trace attributes (model, provider, service name, etc.) and get back matching rules and their linked entities — contexts, prompts, or evaluation configurations.
This enables dynamic, condition-driven retrieval of AI resources without hardcoding logic in your application.
- **Contexts**: Retrieve system prompts and knowledge based on model, user tier, or any attribute
- **Prompts**: Fetch compiled prompts with variable substitution from the Prompt Hub
- **Evaluation Configs**: Determine which evaluation types apply to a given trace
Prerequisites
Set up OpenLIT
Ensure you have an OpenLIT instance running. See Quick Start for setup instructions.
Create an API Key
Navigate to Settings > API Keys in OpenLIT. Click Create API Key and save the key securely.
Create Rules
Set up rules with conditions and linked entities in the Rule Engine UI.
Configuration
All SDKs resolve the OpenLIT URL and API key in the same order:
| Parameter | Environment Variable | Description | Default |
| --- | --- | --- | --- |
| `url` / `URL` | `OPENLIT_URL` | Base URL of your OpenLIT dashboard | `http://127.0.0.1:3000` |
| `api_key` / `apiKey` / `APIKey` | `OPENLIT_API_KEY` | API key for Bearer token authentication | required |
Set environment variables to avoid passing credentials in every call:
```bash
export OPENLIT_URL="https://your-openlit-instance.com"
export OPENLIT_API_KEY="your-api-key"
```
Usage
Retrieve Contexts
Fetch context entities (system prompts, knowledge) that match the given trace attributes.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="context",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "service.name": "my-app",
    },
    include_entity_data=True,
)

if result:
    print("Matching rules:", result["matchingRuleIds"])
    for entity in result.get("entities", []):
        entity_key = f"{entity['entity_type']}:{entity['entity_id']}"
        data = result.get("entity_data", {}).get(entity_key, {})
        print(f"Context: {data.get('name')} - {data.get('content')}")
```
```typescript
import Openlit from 'openlit';

const result = await Openlit.evaluateRule({
  entityType: 'context',
  fields: {
    'gen_ai.system': 'openai',
    'gen_ai.request.model': 'gpt-4',
    'service.name': 'my-app',
  },
  includeEntityData: true,
});

if (!('err' in result)) {
  console.log('Matching rules:', result.matchingRuleIds);
  for (const entity of result.entities) {
    const key = `${entity.entity_type}:${entity.entity_id}`;
    const data = result.entity_data?.[key];
    console.log(`Context: ${data?.name} - ${data?.content}`);
  }
}
```
```go
import (
	"context"
	"fmt"
	"log"

	openlit "github.com/openlit/openlit/sdk/go"
)

result, err := openlit.EvaluateRule(context.Background(), openlit.EvaluateRuleOptions{
	EntityType: openlit.RuleEntityContext,
	Fields: map[string]interface{}{
		"gen_ai.system":        "openai",
		"gen_ai.request.model": "gpt-4",
		"service.name":         "my-app",
	},
	IncludeEntityData: true,
})
if err != nil {
	log.Fatal(err)
}

fmt.Println("Matching rules:", result.MatchingRuleIDs)
for _, entity := range result.Entities {
	key := fmt.Sprintf("%s:%s", entity.EntityType, entity.EntityID)
	if data, ok := result.EntityData[key]; ok {
		fmt.Printf("Context: %v\n", data)
	}
}
```
Retrieve Prompts
Fetch compiled prompts from the Prompt Hub with variable substitution.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="prompt",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
    },
    include_entity_data=True,
    entity_inputs={
        "variables": {"user_name": "Alice", "product": "OpenLIT"},
        "shouldCompile": True,
    },
)

if result:
    for entity in result.get("entities", []):
        key = f"prompt:{entity['entity_id']}"
        prompt_data = result.get("entity_data", {}).get(key, {})
        print("Compiled prompt:", prompt_data.get("compiledPrompt"))
```
```typescript
import Openlit from 'openlit';

const result = await Openlit.evaluateRule({
  entityType: 'prompt',
  fields: {
    'gen_ai.system': 'openai',
    'gen_ai.request.model': 'gpt-4',
  },
  includeEntityData: true,
  entityInputs: {
    variables: { user_name: 'Alice', product: 'OpenLIT' },
    shouldCompile: true,
  },
});

if (!('err' in result)) {
  for (const entity of result.entities) {
    const key = `prompt:${entity.entity_id}`;
    const promptData = result.entity_data?.[key];
    console.log('Compiled prompt:', promptData?.compiledPrompt);
  }
}
```
```go
result, err := openlit.EvaluateRule(ctx, openlit.EvaluateRuleOptions{
	EntityType: openlit.RuleEntityPrompt,
	Fields: map[string]interface{}{
		"gen_ai.system":        "openai",
		"gen_ai.request.model": "gpt-4",
	},
	IncludeEntityData: true,
	EntityInputs: map[string]interface{}{
		"variables":     map[string]string{"user_name": "Alice", "product": "OpenLIT"},
		"shouldCompile": true,
	},
})
if err != nil {
	log.Fatal(err)
}

for _, entity := range result.Entities {
	key := fmt.Sprintf("prompt:%s", entity.EntityID)
	if data, ok := result.EntityData[key]; ok {
		fmt.Printf("Prompt data: %v\n", data)
	}
}
```
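To illustrate what the `shouldCompile` flag asks the server to do, variable substitution replaces placeholders in the stored template with the values you pass. A rough local sketch, assuming a `{{name}}`-style placeholder syntax (`compile_prompt` is a hypothetical helper for illustration, not part of the SDK; the real compilation happens server-side in the Prompt Hub):

```python
import re


def compile_prompt(template: str, variables: dict) -> str:
    # Replace {{name}} placeholders with the supplied values.
    # Unknown placeholders are left untouched.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )


print(compile_prompt(
    "Hello {{user_name}}, welcome to {{product}}!",
    {"user_name": "Alice", "product": "OpenLIT"},
))
# prints: Hello Alice, welcome to OpenLIT!
```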
Check Evaluation Rules
Determine which evaluation types (hallucination, bias, etc.) are linked to rules matching the current trace. This works with both the 11 built-in evaluation types and any custom evaluation types you have created.
```python
import openlit

result = openlit.evaluate_rule(
    entity_type="evaluation",
    fields={
        "gen_ai.system": "openai",
        "gen_ai.request.model": "gpt-4",
        "service.name": "production-api",
    },
)

if result and result["matchingRuleIds"]:
    print("Evaluation rules matched:", result["matchingRuleIds"])
    for entity in result.get("entities", []):
        print(f"  Evaluation type: {entity['entity_id']}")
else:
    print("No evaluation rules matched")
```
```typescript
import Openlit from 'openlit';

const result = await Openlit.evaluateRule({
  entityType: 'evaluation',
  fields: {
    'gen_ai.system': 'openai',
    'gen_ai.request.model': 'gpt-4',
    'service.name': 'production-api',
  },
});

if (!('err' in result) && result.matchingRuleIds.length > 0) {
  console.log('Evaluation rules matched:', result.matchingRuleIds);
  result.entities.forEach(e => console.log('  Evaluation type:', e.entity_id));
}
```
```go
result, err := openlit.EvaluateRule(ctx, openlit.EvaluateRuleOptions{
	EntityType: openlit.RuleEntityEvaluation,
	Fields: map[string]interface{}{
		"gen_ai.system":        "openai",
		"gen_ai.request.model": "gpt-4",
		"service.name":         "production-api",
	},
})
if err != nil {
	log.Fatal(err)
}

if len(result.MatchingRuleIDs) > 0 {
	fmt.Println("Evaluation rules matched:", result.MatchingRuleIDs)
	for _, entity := range result.Entities {
		fmt.Printf("  Evaluation type: %s\n", entity.EntityID)
	}
}
```
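Downstream, the matched entity IDs can gate which evaluations your application actually runs. A minimal sketch of that pattern (`run_matched_evaluations` and the handler mapping are illustrative names, not SDK API):

```python
def run_matched_evaluations(result, handlers):
    # Run only the evaluation types linked to matched rules. The
    # handlers dict maps evaluation-type IDs to local callables.
    if not result or not result.get("matchingRuleIds"):
        return []
    ran = []
    for entity in result.get("entities", []):
        handler = handlers.get(entity["entity_id"])
        if handler is not None:
            handler()
            ran.append(entity["entity_id"])
    return ran


# Illustrative result mirroring the response structure documented below.
result = {
    "matchingRuleIds": ["rule-uuid-1"],
    "entities": [
        {"rule_id": "rule-uuid-1", "entity_type": "evaluation",
         "entity_id": "hallucination"},
    ],
}
print(run_matched_evaluations(
    result,
    {"hallucination": lambda: None, "bias": lambda: None},
))
# prints: ['hallucination']
```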
Parameters
Python — openlit.evaluate_rule()
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `url` | `str` | No | OpenLIT dashboard URL |
| `api_key` | `str` | No | API key for authentication |
| `entity_type` | `str` | Yes | `"context"`, `"prompt"`, or `"evaluation"` |
| `fields` | `dict` | Yes | Trace attributes to match against rules |
| `include_entity_data` | `bool` | No | Include full entity data in response. Default: `False` |
| `entity_inputs` | `dict` | No | Inputs for entity resolution (e.g. prompt variables) |
TypeScript — Openlit.evaluateRule()
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `url` | `string` | No | OpenLIT dashboard URL |
| `apiKey` | `string` | No | API key for authentication |
| `entityType` | `'context'`, `'prompt'`, or `'evaluation'` | Yes | Entity type to match |
| `fields` | object of string/number/boolean values | Yes | Trace attributes to match |
| `includeEntityData` | `boolean` | No | Include full entity data. Default: `false` |
| `entityInputs` | object | No | Inputs for entity resolution |
Go — openlit.EvaluateRule()
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `URL` | `string` | No | OpenLIT dashboard URL |
| `APIKey` | `string` | No | API key for authentication |
| `EntityType` | `RuleEntityType` | Yes | `RuleEntityContext`, `RuleEntityPrompt`, or `RuleEntityEvaluation` |
| `Fields` | `map[string]interface{}` | Yes | Trace attributes to match |
| `IncludeEntityData` | `bool` | No | Include full entity data. Default: `false` |
| `EntityInputs` | `map[string]interface{}` | No | Inputs for entity resolution |
| `Timeout` | `time.Duration` | No | HTTP timeout. Default: 30s |
All SDKs return the same response structure:
```json
{
  "matchingRuleIds": ["rule-uuid-1", "rule-uuid-2"],
  "entities": [
    {
      "rule_id": "rule-uuid-1",
      "entity_type": "context",
      "entity_id": "ctx-uuid-1"
    }
  ],
  "entity_data": {
    "context:ctx-uuid-1": {
      "id": "ctx-uuid-1",
      "name": "Premium System Prompt",
      "content": "You are a helpful AI assistant..."
    }
  }
}
```
| Field | Description |
| --- | --- |
| `matchingRuleIds` | Array of rule IDs whose conditions matched the input fields |
| `entities` | Array of linked entities from matching rules, filtered by entity type |
| `entity_data` | Full entity records, keyed as `type:id`. Only present when `include_entity_data` is true |
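Joining `entities` with `entity_data` follows directly from the `type:id` key convention. A small sketch in Python (`resolve_entities` is an illustrative helper, not part of any SDK; the sample response mirrors the structure above):

```python
def resolve_entities(result):
    # Pair each matched entity with its full record from entity_data,
    # looking it up under the "type:id" key; None when data is absent.
    data = result.get("entity_data", {})
    return [
        (e, data.get(f"{e['entity_type']}:{e['entity_id']}"))
        for e in result.get("entities", [])
    ]


sample = {
    "matchingRuleIds": ["rule-uuid-1"],
    "entities": [
        {"rule_id": "rule-uuid-1", "entity_type": "context",
         "entity_id": "ctx-uuid-1"},
    ],
    "entity_data": {
        "context:ctx-uuid-1": {"id": "ctx-uuid-1",
                               "name": "Premium System Prompt"},
    },
}

for entity, record in resolve_entities(sample):
    print(entity["entity_id"], "->", record["name"] if record else None)
# prints: ctx-uuid-1 -> Premium System Prompt
```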
Error Handling
Returns `None` on any error (network, auth, server). Check for `None` before using the result.

```python
result = openlit.evaluate_rule(entity_type="context", fields={"key": "val"})
if result is None:
    print("Rule evaluation failed — check logs for details")
```
Returns `{ err: string }` on error. Use a type guard to check.

```typescript
const result = await Openlit.evaluateRule({ entityType: 'context', fields: {} });
if ('err' in result) {
  console.error('Rule evaluation failed:', result.err);
}
```
Returns `error` as the second return value (idiomatic Go).

```go
result, err := openlit.EvaluateRule(ctx, opts)
if err != nil {
	log.Printf("Rule evaluation failed: %v", err)
}
```
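Because a failed call in Python yields `None`, and a successful one may still match no rules, callers usually want a fallback path. A sketch of that pattern (`DEFAULT_SYSTEM_PROMPT` and `pick_system_prompt` are illustrative names, not SDK API):

```python
DEFAULT_SYSTEM_PROMPT = "You are a helpful assistant."  # hypothetical fallback


def pick_system_prompt(result):
    # Choose a context prompt from an evaluate_rule result, falling back
    # to the default when the call failed (None) or nothing matched.
    if not result:
        return DEFAULT_SYSTEM_PROMPT
    data = result.get("entity_data", {})
    for e in result.get("entities", []):
        record = data.get(f"{e['entity_type']}:{e['entity_id']}")
        if record and record.get("content"):
            return record["content"]
    return DEFAULT_SYSTEM_PROMPT


print(pick_system_prompt(None))
# prints: You are a helpful assistant.
```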
- **Rule Engine Guide**: Learn how to create rules, add conditions, and link entities in the OpenLIT UI
- **API Reference**: Full OpenAPI reference for the evaluate endpoint