POST ConfigureApp API
The payload below defines the configurable policy that a user can apply to validate the corresponding prompt/response. To apply the policy:
- Copy and replace the appropriate policy object in the request JSON payload as shown below.
- Select the correct policy variant based on the execution mode ("block" or "audit") and on the environment where the container is deployed ("cpu" or "gpu").
Note: This policy is fully customizable and can be modified to reflect the desired behavior of your application or integration.
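As an illustration, the payload could be submitted with a request like the sketch below. The host, endpoint path, wrapper key `policy`, and model name are assumptions for illustration only; substitute the values from your deployment.

```python
import json
import urllib.request

# Hypothetical host and path -- replace with your deployment's endpoint.
BASE_URL = "http://localhost:8000/ConfigureApp"

# Minimal payload sketch. Replace the policy object with the variant
# matching your execution mode ("block"/"audit") and hardware ("cpu"/"gpu").
payload = {
    "config_language": "en",   # or "en,ko" for English and Korean
    "config_llm": "my-llm",    # hypothetical model name
    "policy": {                # hypothetical wrapper key
        "Toxicity": {"enabled": True, "threshold": "Low"},
    },
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment against a live deployment
```

The request object is built but not sent here, so the sketch can be adapted without a running container.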


The table below provides a detailed breakdown of all fields included in the API request body. It describes each field's purpose, type, and possible values to help you configure the API effectively.

Field | Type | Description |
---|---|---|
config_language | String | Specifies the languages supported for user interaction: English only, or English and Korean. Default is "en"; possible values are "en" and "en,ko". |
config_llm | String | Defines the configured Language Model (LLM) to be used. |
image_analysis_type | String | Supported image analysis types, such as OCR or Content Moderation. |
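For instance, the top-level fields could be validated client-side before sending. The helper below is illustrative, not part of the API; the allowed values are taken from the field descriptions above.

```python
# Allowed values taken from the config_language description above.
ALLOWED_LANGUAGES = {"en", "en,ko"}

def validate_top_level(config: dict) -> list[str]:
    """Return a list of problems found in the top-level config fields."""
    problems = []
    if config.get("config_language", "en") not in ALLOWED_LANGUAGES:
        problems.append("config_language must be 'en' or 'en,ko'")
    if not isinstance(config.get("config_llm", ""), str):
        problems.append("config_llm must be a string")
    return problems

print(validate_top_level({"config_language": "fr"}))
# → ["config_language must be 'en' or 'en,ko'"]
```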
Theme | Policy | Description | Key | Type | Possible Values |
---|---|---|---|---|---|
Additional Checks | JSON | Ensures correctness and validates JSON format | enabled | Boolean | true / false |
Content Access Control | Block Competitor | Blocks mentions of competitors | value | String | comma-separated string |
 |  |  | comparison | String | "exact_match" / "contains" |
Content Access Control | Block Substring | Blocks specific strings of text | value | String | comma-separated string |
 |  |  | comparison | String | "exact_match" / "contains" |
Content Access Control | Ban Topic | Blocks entire topics of discussion | value | String | comma-separated string |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Content Access Control | Allowed List | Permits only approved content | value | String | comma-separated string |
 |  |  | comparison | String | "exact_match" / "contains" |
Content Access Control | Blocked List | Prevents output of blocklisted content | value | String | comma-separated string |
 |  |  | comparison | String | "exact_match" / "contains" |
Content Analysis | Regex | Uses patterns to match text/alphanumeric for filtering | value | String | single regex pattern |
Content Safety | URL Detection | Identifies URLs in prompt | enabled | Boolean | true/false |
Security Measures | Code Detection | Identifies programming code in text (C, C++, HTML, Bash, JAVA, JavaScript, Python, C#, JSON) | enabled | Boolean | true/false |
Content Safety | Toxicity | Filters toxic and harmful language in input or in prompt's response | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Content Safety | Generic Harm | Blocks swear words and vulgar language (supports English language) | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Privacy Protection | PII Detection | Detects personally identifiable information (e.g., full name, email address, phone number) | enabled | Boolean | true/false |
 |  |  | redaction | Boolean | true/false |
Content Validation | No LLM Output | Detects when the underlying LLM refuses to provide an answer | enabled | Boolean | true/false |
Content Validation | Contextual Groundedness | Checks if the response aligns with the provided Context | enabled | Boolean | true/false |
Content Validation | Answer Relevance | Checks the relevance between the provided query and response | enabled | Boolean | true/false |
Security Measures | Special PII Detection | Detects specialized personal information | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Content Safety | Sentiment | Analyzes the sentiment of AI responses (positive, negative, or neutral) | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Security Measures | Prompt Injection / Jailbreaks | Detects attempts to manipulate the AI | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Security Measures | Secrets | Detects and redacts sensitive information (AWS Secrets, Git Secrets, DB Secrets) | enabled | Boolean | true/false |
Security Measures | Malicious URL Detection | Scans for harmful URLs in output | enabled | Boolean | true/false |
Additional Checks | URL Reachability | Checks if URLs in output are accessible | enabled | Boolean | true/false |
Content Safety | Not Safe For Work | Filters sexually explicit or inappropriate material | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Content Safety | Medical Safety Detection | Detects medically unsafe information | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Content Safety | Gender Sensitive | Identifies gender discrimination, bias, and stereotypes | enabled | Boolean | true/false |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
Security Integrity Checks | Invisible Text | Identifies hidden text within inputs | enabled | Boolean | true/false |
Usage Management | Input Rate Limiter | Limits the rate of input to the system | value | Integer | Integer |
Usage Management | Token Limit | Sets limits on token usage in requests | value | Integer | Integer (max 4096) |
Privacy Protection | BCI Detection | Detects business confidential information | enabled | Boolean | true/false |
 |  |  | custom_model_name | String | "" |
 |  |  | mode | String | "default" / "custom" |
 |  |  | threshold | String | "Low" / "Medium" / "High" / "Custom" |
 |  |  | custom_threshold | Float | If threshold is set to "custom", provide a custom_threshold value in the range of Float (0.0 to 1.0), default is 0.0 |
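Many of the policies above share the same threshold pattern. A minimal sketch of validating it client-side, assuming the field names and ranges described in the table (the helper itself is illustrative, not part of the API):

```python
def check_threshold(policy: dict) -> bool:
    """Validate the threshold/custom_threshold pair used by many policies.

    threshold must be one of "Low", "Medium", "High", "Custom" (casing as
    listed in the Possible Values column); when it is "Custom",
    custom_threshold must be a number in [0.0, 1.0] (default 0.0).
    """
    threshold = policy.get("threshold", "Low")
    if threshold not in ("Low", "Medium", "High", "Custom"):
        return False
    if threshold == "Custom":
        value = policy.get("custom_threshold", 0.0)
        return isinstance(value, (int, float)) and 0.0 <= value <= 1.0
    return True

# Examples
assert check_threshold({"enabled": True, "threshold": "Medium"})
assert check_threshold({"threshold": "Custom", "custom_threshold": 0.7})
assert not check_threshold({"threshold": "Custom", "custom_threshold": 1.5})
```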