API Documentation
Guardian API Details

POST configureApp API

Request
Path Params
app_name (required, String)
Specifies the application whose configuration is to be updated. Possible values: app1, app2, app3, app4
Body Parameters
config_language (required, String)
Language setting. Possible values: "en" (English) or "en,ko" (English & Korean).
config_llm (required, String)
Name of the configured LLM. Not required for audit mode.
image_analysis_type (required, String)
Type of image analysis. Defaults to an empty string; leave it unchanged (empty) for app1, app2, and app3.
input_config (required, Object)
Input policy configuration, as a dict.
output_config (required, Object)
Output policy configuration, as a dict.
Responses: 200 (OK), 415 (Unsupported Media Type), 500 (Internal Server Error)
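
A minimal Python sketch of the request, assuming the requests library, a placeholder host, and a route of the form /configureApp/<app_name>; the exact route and any authentication headers depend on your Guardian deployment.

  import requests

  BASE_URL = "https://guardian.example.com"  # placeholder; substitute your deployment's host

  payload = {
      "config_language": "en",
      "config_llm": "Mistral",        # for app4, the only supported value is "Claude"
      "image_analysis_type": "",      # empty for app1/app2/app3; "ocr" or "image_content_moderation" for app4
      "input_config": {"Toxicity": {"enabled": True, "level": "Low"}},
      "output_config": {"Sentiment": {"enabled": True}},
  }

  # app_name ("app1" here) is the path parameter; the route shape is an assumption.
  resp = requests.post(f"{BASE_URL}/configureApp/app1", json=payload)
  print(resp.status_code)  # 200 on success; 415 and 500 indicate errors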


Request Body Definitions

The table below provides a detailed breakdown of every field in the API request body: each field's purpose, type, possible values, and an example, to help you configure the API effectively.

config_language (String)
Specifies the languages the system supports for user interaction: English only (the default) or both English and Korean. Default: "en"; possible values: "en", "en,ko".
Example: "en"

config_llm (String)
Defines the configured Large Language Model (LLM) to be used.
  • Supported LLMs for app1/app2/app3: "OpenAI", "Mistral", "Gemini", "ChatGPT"
  • Supported LLM for app4: "Claude"
Note: not required for audit mode.
Example: "Mistral"

image_analysis_type (String)
Supported image analysis types, such as OCR or content moderation.
  • Not required for app1/app2/app3; provide an empty string ("").
  • Required only for app4 configuration updates. Possible values: "ocr", "image_content_moderation"
Example: "ocr"

"input_config"

Dict

Details about the input configuration:







  • JSON :
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"JSON": { "enabled": true }





  • Block Competitor :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string
    • "comparison" :
      • Type String
      • default value is "exact_match"
      • possible values : "exact_match", "contains"

"Block Competitor": { "value": "Airtel,Jio", "comparison":"exact_match" }





  • "Block Substring" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string
    • "comparison" :
      • Type String
      • default value is "exact_match"
      • possible values : "exact_match", "contains"

"Block Substring": { "value": "kill,murder", "comparison":"exact_match" }





  • "Ban Topic" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string

"Ban Topic": { "value": "Compensation,War" }





  • "Allowed List" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string
    • "comparison" :
      • Type String
      • default value is "exact_match"
      • possible values : "exact_match", "contains"

"Allowed List": { "value": "AIShield,Bosch", "comparison": "exact_match" }





  • "Regex" :
    • Type Dict
    • "value" :
      • Type String
      • only one Single regex pattern for validation supported

"Regex": { "value": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+.[a-zA-Z]{2,}$" }





  • "URL Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"URL Detection": { "enabled": true }





  • "Code Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Code Detection": { "enabled": true }





  • "Toxicity":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "level" :
      • Type String
      • possible values are "Low", "Medium" and "High"

"Toxicity": { "enabled": true, "level": "Low" }





  • "Profanity":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Profanity": { "enabled": true }





  • "PII Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "redaction" :
      • Type Boolean
      • possible values : true , false

"PII Detection": { "enabled": true, "redaction": false }





  • "Special PII Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Special PII Detection": { "enabled": true }





  • "Prompt Injection / Jailbreaks":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "level" :
      • Type String
      • possible values are "Low", "Medium" and "High"

"Prompt Injection / Jailbreaks": { "enabled": true, "level": "Medium" }





  • "Secrets":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Secrets": { "enabled": false }





  • "Not Safe For Work":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Not Safe For Work": { "enabled": false }





  • "Gender Sensitive":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Gender Sensitive": { "enabled": false }





  • "Racial Sensitive":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Racial Sensitive": { "enabled": false }





  • "Invisible Text":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Invisible Text": { "enabled": false }





  • "Input Rate Limiter":
    • Type Dict
    • "value" :
      • Type Integer
      • any integer value can be given

"Input Rate Limiter": { "value": 5 }





  • "Token Limit":
    • Type Dict
    • "value" :
      • Type Integer
      • any integer value can be given

"Token Limit": { "value": 1024 }





  • "BCI Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "default_model_name" :
      • Type String
      • Default value: "BCI_MODEL_LOW"
      • Note: No need to update this value.
    • "custom_model_name" :
      • Type String
      • Default value: ""
      • Note: This will automatically update once a custom BCI model is uploaded using our "custom model import feature". Refer to the provided image for instructions on uploading a custom model specifically for the BCI feature.
    • "mode" :
      • Type String
      • possible values : "default" , "custom"
      • Note: The "custom" mode can only be set if a custom BCI model has been uploaded.

"BCI Detection": { "enabled": true, "default_model_name": "BCI_MODEL_LOW", "custom_model_name": "", "mode": "dafault" }

"output_config"

Dict

Details about the output guardrail configuration:







  • JSON :
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"JSON": { "enabled": true }





  • "Ban Topic" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string

"Ban Topic": { "value": "Compensation,War" }





  • "Allowed List" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string
    • "comparison" :
      • Type String
      • default value is "exact_match"
      • possible values : "exact_match", "contains"

"Allowed List": { "value": "AIShield,Bosch", "comparison": "exact_match" }





  • "Blocked List" :
    • Type Dict
    • "value" :
      • Type String
      • comma separated string
    • "comparison" :
      • Type String
      • default value is "exact_match"
      • possible values : "exact_match", "contains"

"Blocked List": { "value": "kill,murder,suicide", "comparison": "contains" }





  • "Regex" :
    • Type Dict
    • "value" :
      • Type String
      • only one Single regex pattern for validation supported



"Regex": { "value": "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+.[a-zA-Z]{2,}$" }





  • "Code Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false



"Code Detection": { "enabled": true }





  • "Toxicity":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "level" :
      • Type String
      • possible values are "Low", "Medium" and "High"



"Toxicity": { "enabled": true, "level": "Low" }





  • "Sentiment":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Sentiment": { "enabled": true }





  • "No LLM Output":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"No LLM Output": { "enabled": true }





  • "Special PII Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Special PII Detection": { "enabled": true }





  • "Malicious URL Detection":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Malicious URL Detection": { "enabled": true }





  • "URL Reachability":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"URL Reachability": { "enabled": true }





  • "Not Safe For Work":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Not Safe For Work": { "enabled": true }





  • "Gender Sensitive":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Gender Sensitive": { "enabled": false }





  • "Racial Sensitive":
    • Type Dict
    • "enabled" :
      • Type Boolean
      • possible values : true , false

"Racial Sensitive": { "enabled": false }





  • "BCI Detection":
    • "enabled" :
      • Type Boolean
      • possible values : true , false
    • "default_model_name" :
      • Type String
      • Default value: "BCI_MODEL_LOW"
      • Note: No need to update this value.
    • "custom_model_name" :
      • Type String
      • Default value: ""
      • Note: This will automatically update once a custom BCI model is uploaded using our "custom model import feature". Refer to the provided image for instructions on uploading a custom model specifically for the BCI feature.
    • "mode" :
      • Type String
      • possible values : "default" , "custom"
      • Note: The "custom" mode can only be set if a custom BCI model has been uploaded.
    • 
    • 



"BCI Detection": { "enabled": true, "default_model_name": "BCI_MODEL_LOW", "custom_model_name": "", "mode": "dafault" }

Definitions of the Guardrails:

Content Access Control
  • Block Competitor: Blocks mentions of competitors
  • Block Substring: Blocks specific strings of text
  • Ban Topic: Blocks entire topics of discussion
  • Allowed List: Permits only approved content
  • Blocked List: Prevents output of block-listed content

Content Analysis
  • Regex: Uses patterns to match text/alphanumeric content for filtering
  • URL Detection: Identifies URLs in the prompt

Content Safety
  • Toxicity: Filters toxic and harmful language in the input or in the prompt's response
  • Profanity: Blocks swear words and vulgar language (supports English)
  • Not Safe For Work: Filters sexually explicit or inappropriate material
  • Sentiment: Analyzes the sentiment of AI responses (positive, negative, or neutral)
  • Gender Sensitive: Detects gender bias in language
  • Racial Sensitive: Detects racial bias in language

Privacy Protection
  • PII Detection: Detects personally identifiable information (e.g., full name, email address, phone number)
  • PII Redaction: Redacts sensitive information to enhance security and ensure compliance
  • BII Detection: Detects business identifiable information (e.g., salary, bonus)
  • Special PII Detection: Detects specialized personal information

Security Measures
  • Prompt Injection / Jailbreaks: Detects attempts to manipulate the AI
  • Secrets: Detects and redacts sensitive information (AWS secrets, Git secrets, DB secrets)
  • Code Detection: Identifies programming code in text (C, C++, HTML, Bash, JAVA, JavaScript, Python, C#, JSON)
  • Malicious URL Detection: Scans for harmful URLs in output
  • URL Detection: Identifies harmful URLs
  • URL Reachability: Checks whether URLs in the output are reachable

Additional Checks
  • JSON: Ensures correctness and validates JSON format
  • Invisible Text: Identifies hidden text in the input

Content Validation
  • No LLM Output: Flags cases where the underlying LLM refuses to provide an answer

Content - Feature
  • Image Content Moderation: Support for OCR and image filtering
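
Note that the same feature can appear in both configs (URL Detection is listed under Content Analysis for input and under Security Measures for output), while some features are config-specific: Profanity appears only in input_config, and Sentiment, Malicious URL Detection, and URL Reachability appear only in output_config. A hedged sketch of enabling a whole theme at once, using only names documented above:

  # Content Safety guardrails available in output_config
  # (Profanity belongs to the theme too, but only input_config lists it).
  OUTPUT_CONTENT_SAFETY = [
      "Toxicity", "Sentiment", "Not Safe For Work",
      "Gender Sensitive", "Racial Sensitive",
  ]

  output_config = {name: {"enabled": True} for name in OUTPUT_CONTENT_SAFETY}
  output_config["Toxicity"]["level"] = "Low"  # Toxicity additionally takes a level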