API Documentation
Guardian
<POST> ConfigureApp API
The payload below defines the configurable policy that a user can apply to validate the corresponding prompt/response. To apply a policy, copy and replace the appropriate policy object in the request JSON payload as shown below. Select the correct policy configuration schema based on the execution mode ("block" or "audit") and on the environment where the container is deployed ("cpu" or "gpu"). "Audit" mode works without LLM orchestration; "block" mode works with LLM orchestration.

Note: This policy is fully customizable and can be modified to reflect the desired behavior of your application or integration.

**Audit - CPU**

```json
{
  "config_language": "en",
  "config_llm": "",
  "image_analysis_type": "",
  "input_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0.80 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "prompt_injection/jailbreaks": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "low", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "token_limit": { "value": "4096" },
    "input_rate_limiter": { "value": "10" },
    "invisible_text": { "enabled": false },
    "medical_safety_detection": { "enabled": false, "threshold": "low", "custom_threshold": 0.85 },
    "no_llm_output": { "enabled": false },
    "malicious_url_detection": { "enabled": false },
    "url_reachability": { "enabled": false },
    "sentiment": { "enabled": false, "threshold": "high", "custom_threshold": 0 }
  },
  "output_config": {}
}
```

**Audit - GPU**

```json
{
  "config_language": "en",
  "config_llm": "",
  "image_analysis_type": "",
  "input_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0.80 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "prompt_injection/jailbreaks": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "low", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "token_limit": { "value": "4096" },
    "input_rate_limiter": { "value": "10" },
    "invisible_text": { "enabled": false },
    "medical_safety_detection": { "enabled": false, "threshold": "low", "custom_threshold": 0.85 },
    "no_llm_output": { "enabled": false },
    "malicious_url_detection": { "enabled": false },
    "url_reachability": { "enabled": false },
    "sentiment": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "contextual_groundedness": { "enabled": true },
    "answer_relevance": { "enabled": true }
  },
  "output_config": {}
}
```

**Block - CPU**

```json
{
  "config_language": "en",
  "config_llm": "mistral",
  "image_analysis_type": "",
  "input_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0.80 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "prompt_injection/jailbreaks": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "low", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "token_limit": { "value": "4096" },
    "input_rate_limiter": { "value": "10" },
    "invisible_text": { "enabled": false },
    "medical_safety_detection": { "enabled": false, "threshold": "low", "custom_threshold": 0.85 }
  },
  "output_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "blocked_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "no_llm_output": { "enabled": false },
    "malicious_url_detection": { "enabled": false },
    "url_reachability": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "sentiment": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "medical_safety_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 }
  }
}
```

**Block - GPU**

```json
{
  "config_language": "en",
  "config_llm": "mistral",
  "image_analysis_type": "",
  "input_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0.80 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "prompt_injection/jailbreaks": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "low", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "token_limit": { "value": "4096" },
    "input_rate_limiter": { "value": "10" },
    "invisible_text": { "enabled": false },
    "medical_safety_detection": { "enabled": false, "threshold": "low", "custom_threshold": 0.85 }
  },
  "output_config": {
    "json": { "enabled": false },
    "block_competitor": { "comparison": "contains", "value": "" },
    "block_substring": { "comparison": "contains", "value": "" },
    "allowed_list": { "comparison": "contains", "value": "" },
    "blocked_list": { "comparison": "contains", "value": "" },
    "regex": { "value": "" },
    "url_detection": { "enabled": false },
    "code_detection": { "enabled": false },
    "no_llm_output": { "enabled": false },
    "malicious_url_detection": { "enabled": false },
    "url_reachability": { "enabled": false },
    "ban_topic": { "value": "", "threshold": "high", "custom_threshold": 0 },
    "bci_detection": { "enabled": false, "custom_model_available": false, "mode": "default", "threshold": "high", "custom_threshold": 0 },
    "toxicity": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "generic_harm": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "sentiment": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "pii_detection": { "enabled": false, "redaction": false },
    "special_pii_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "secrets": { "enabled": false },
    "not_safe_for_work": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "gender_sensitive": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "medical_safety_detection": { "enabled": false, "threshold": "high", "custom_threshold": 0 },
    "contextual_groundedness": { "enabled": true },
    "answer_relevance": { "enabled": true }
  }
}
```
"threshold" "high", "custom threshold" 0 }, "bci detection" { "enabled" false, "custom model available" false, "mode" "default", "threshold" "high", "custom threshold" 0 }, "toxicity" { "enabled" false, "threshold" "high", "custom threshold" 0 80 }, "generic harm" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "pii detection" { "enabled" false, "redaction" false }, "special pii detection" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "prompt injection / jailbreaks" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "secrets" { "enabled" false }, "not safe for work" { "enabled" false, "threshold" "low", "custom threshold" 0 }, "gender sensitive" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "token limit" { "value" "4096" }, "input rate limiter" { "value" "10" }, "invisible text" { "enabled" false }, "medical safety detection" { "enabled" false, "threshold" "low", "custom threshold" 0 85 } }, "output config" { "json" { "enabled" false }, "block competitor" { "comparison" "contains", "value" "" }, "block substring" { "comparison" "contains", "value" "" }, "allowed list" { "comparison" "contains", "value" "" }, "blocked list" { "comparison" "contains", "value" "" }, "regex" { "value" "" }, "url detection" { "enabled" false }, "code detection" { "enabled" false }, "no llm output" { "enabled" false }, "malicious url detection" { "enabled" false }, "url reachability" { "enabled" false }, "ban topic" { "value" "", "threshold" "high", "custom threshold" 0 }, "bci detection" { "enabled" false, "custom model available" false, "mode" "default", "threshold" "high", "custom threshold" 0 }, "toxicity" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "generic harm" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "sentiment" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "pii detection" { "enabled" false, "redaction" false }, "special pii detection" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "secrets" { "enabled" false }, "not safe for work" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "gender sensitive" { "enabled" false, "threshold" "high", "custom threshold" 0 }, "medical safety detection" { "enabled" false, "threshold" "high", "custom threshold" 0 },, "contextual groundedness" {"enabled" true}, "answer relevance" {"enabled" true} } } 🛡️ confidence thresholds in guardian guardian uses a confidence thresholds to decide when to flag a user query / model response as potentially unsafe/risk these thresholds control how cautious or relaxed the system is when making that decision you can choose between four modes high , medium , low (default), and custom — each offering a different balance between precision and coverage 🔵 high threshold in this mode, guardian will only flag when it is very confident that a user query / response violates a defined boundary this results in a low false positive rate — meaning very few things will be flagged incorrectly however, it may miss some actual violation that don’t meet the high confidence level this setting is ideal when you want to avoid unnecessary disruptions or over flagging, especially in low risk 🟡 medium threshold with the medium threshold , guardian takes a balanced approach it flags content when it's reasonably confident that there may be a potential policy issue or risk you may see a moderate false positive rate , but it also reduces the chances of missing real violations this is a good default choice for most general 
🔴 Low Threshold

The low threshold setting is the most sensitive. Guardian will flag responses even if it has only a moderate level of confidence that something might violate policy. This ensures that almost no actual issues go undetected, but it also increases the false positive rate, meaning more content might be flagged unnecessarily. It is best suited for high-sensitivity environments where missing a potential violation is riskier than flagging extra content.

⚙️ Custom Threshold

If your use case requires more precise control, you can set a custom threshold between 0 and 1. Lower values make Guardian more aggressive in flagging (higher sensitivity), while higher values make it more conservative (flagging less frequently). This mode is ideal when you have specific business rules, risk tolerance levels, or workflows that need fine-tuning.

By adjusting the confidence threshold, you are effectively choosing how cautious Guardian should be: whether it should only flag highly certain violations or proactively surface anything that might require attention.
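For example, to run the toxicity check at a specific confidence level, set its threshold to "custom" and supply a custom_threshold inside the policy object (the value 0.65 below is illustrative only):

```json
"toxicity": { "enabled": true, "threshold": "custom", "custom_threshold": 0.65 }
```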
Request Body Definitions

The table below provides a detailed breakdown of all fields included in the API request body. It describes each field's purpose, type, and possible values to help you configure the API effectively.

| Field | Type | Description |
| --- | --- | --- |
| config_language | string | Specifies the language(s) the system supports for user interaction: English only, or both English and Korean. Default: "en". Possible values: "en", "en,ko". |
| config_llm | string | Defines the configured language model (LLM) to be used. Supported LLMs for app1/app2/app3: "openai", "mistral", "gemini", "chatgpt". Supported LLM for app4: "claude". |
| image_analysis_type | string | Supported image analysis types, such as OCR or content moderation. Not required for app1/app2/app3 (provide an empty string, e.g. ""). Required only for an app4 configuration update. Possible values: "ocr", "image_content_moderation". |

| Theme | Policy | Description | Key | Type | Possible Values |
| --- | --- | --- | --- | --- | --- |
| Additional Checks | json | Ensures correctness and validates JSON format | enabled | boolean | true / false |
| Content Access Control | block_competitor | Blocks mentions of competitors | value | string | comma-separated string |
| | | | comparison | string | "exact_match" / "contains" |
| Content Access Control | block_substring | Blocks specific strings of text | value | string | comma-separated string |
| | | | comparison | string | "exact_match" / "contains" |
| Content Access Control | ban_topic | Blocks entire topics of discussion | value | string | comma-separated string |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Content Access Control | allowed_list | Permits only approved content | value | string | comma-separated string |
| | | | comparison | string | "exact_match" / "contains" |
| Content Access Control | blocked_list | Prevents output of block-listed content | value | string | comma-separated string |
| | | | comparison | string | "exact_match" / "contains" |
| Content Analysis | regex | Uses patterns to match text/alphanumeric content for filtering | value | string | single regex pattern |
| Content Safety | url_detection | Identifies URLs in the prompt | enabled | boolean | true / false |
| Security Measures | code_detection | Identifies programming code in text (C, C++, HTML, Bash, Java, JavaScript, Python, C#, JSON) | enabled | boolean | true / false |
| Content Safety | toxicity | Filters toxic and harmful language in the input or in the prompt's response | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Content Safety | generic_harm | Blocks swear words and vulgar language (supports English) | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Privacy Protection | pii_detection | Detects personally identifiable information (e.g., full name, email address, phone number) | enabled | boolean | true / false |
| | | | redaction | boolean | true / false |
| Content Validation | no_llm_output | Underlying LLM refuses to provide an answer | enabled | boolean | true / false |
| Content Validation | contextual_groundedness | Checks if the response aligns with the provided context | enabled | boolean | true / false |
| Content Validation | answer_relevance | Checks the relevance between the provided query and response | enabled | boolean | true / false |
| Security Measures | special_pii_detection | Detects specialized personal information | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Content Safety | sentiment | Analyzes the sentiment of AI responses (positive, negative, or neutral) | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Security Measures | prompt_injection/jailbreaks | Detects attempts to manipulate the AI | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Security Measures | secrets | Detects and redacts sensitive information (AWS secrets, Git secrets, DB secrets) | enabled | boolean | true / false |
| Security Measures | malicious_url_detection | Scans for harmful URLs in output | enabled | boolean | true / false |
| Additional Checks | url_reachability | Checks if URLs in the output are accessible | enabled | boolean | true / false |
| Content Safety | not_safe_for_work | Filters sexually explicit or inappropriate material | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Content Safety | medical_safety_detection | Detects medically unsafe information | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Content Safety | gender_sensitive | Identifies gender discrimination, bias, and stereotypes | enabled | boolean | true / false |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
| Security Integrity Checks | invisible_text | Identifies hidden text within inputs | enabled | boolean | true / false |
| Usage Management | input_rate_limiter | Limits the rate of input to the system | value | integer | integer |
| Usage Management | token_limit | Sets limits on token usage in requests | value | integer | integer (max 4096) |
| Privacy Protection | bci_detection | Detects business confidential information | enabled | boolean | true / false |
| | | | custom_model_available | boolean | false by default; set to true once a custom model has been uploaded |
| | | | mode | string | "default" / "custom" |
| | | | threshold | string | "low" / "medium" / "high" / "custom" |
| | | | custom_threshold | float | If threshold is "custom", a value from 0.0 to 1.0; default 0.0 |
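As a quick illustration of the table above, the fragment below turns on PII detection with redaction and blocks two hypothetical competitor names (the company names are placeholders). In practice you would edit these keys inside the input_config of the full policy object shown earlier:

```json
"pii_detection": { "enabled": true, "redaction": true },
"block_competitor": { "comparison": "contains", "value": "AcmeCorp,Globex" }
```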
🧠 Hallucination Detection

Hallucination detection can be performed by enabling the contextual_groundedness and answer_relevance policies. These checks enable precise validation of the model's response against a supplied context or prompt, helping ensure factual consistency and response integrity. This capability is available only in GPU-enabled environments.

Feature Overview

- Contextual Groundedness Check: Verifies that the model's response is logically and semantically grounded in the provided context (context and response are required fields in audit mode; only context is required in block mode). It identifies instances where the response introduces unsupported or fabricated information.
- Answer Relevance Check: Assesses whether the response meaningfully answers the original user query (response is a required field in audit mode; in block mode the response is obtained by querying the respective LLM) and stays within the scope of the provided question.

⚙️ How to Use

To activate hallucination checks, structure your payload using the following tags:

1. Formatting for context: `<ais_guardian_reqcontext_ais> [insert source context here] </ais_guardian_reqcontext_ais>`
2. Formatting for LLM/model response: `<ais_guardian_response_ais> [insert LLM/model response here] </ais_guardian_response_ais>`
3. Formatting for prompt: 🚫 Do not include any tags in the prompt. Only the context and response must be tagged.

📘 Examples

Audit mode:

```
Where was Albert Einstein born, and what is he known for?
<ais_guardian_reqcontext_ais>
Albert Einstein was born in Ulm, Germany in 1879. He developed the theory of relativity, one of the two pillars of modern physics.
</ais_guardian_reqcontext_ais>
<ais_guardian_response_ais>
Albert Einstein was born in Vienna in 1881 and is best known for his work on quantum mechanics.
</ais_guardian_response_ais>
```

Block mode (only the context tag is required):

```
Where was Albert Einstein born, and what is he known for?
<ais_guardian_reqcontext_ais>
Albert Einstein was born in Ulm, Germany in 1879. He developed the theory of relativity, one of the two pillars of modern physics.
</ais_guardian_reqcontext_ais>
```
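The sketch below assembles the audit-mode example above into a tagged prompt and submits it from Python. The endpoint path and the payload field name ("prompt") are assumptions made for illustration, not documented values; consult your deployment's validation API for the actual contract.

```python
# Minimal sketch of an audit-mode hallucination check. The endpoint URL and
# the payload field name ("prompt") are hypothetical.
import requests

context = (
    "Albert Einstein was born in Ulm, Germany in 1879. He developed the "
    "theory of relativity, one of the two pillars of modern physics."
)
model_response = (
    "Albert Einstein was born in Vienna in 1881 and is best known for his "
    "work on quantum mechanics."
)

# The question itself stays untagged; only the context and the model
# response are wrapped in Guardian tags.
prompt = (
    "Where was Albert Einstein born, and what is he known for?\n"
    f"<ais_guardian_reqcontext_ais> {context} </ais_guardian_reqcontext_ais>\n"
    f"<ais_guardian_response_ais> {model_response} </ais_guardian_response_ais>"
)

resp = requests.post(
    "https://<guardian-host>/validate",  # hypothetical endpoint
    json={"prompt": prompt},             # hypothetical field name
    headers={"Authorization": "Bearer <your-api-token>"},
    timeout=30,
)
print(resp.json())
```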