FAQ & Troubleshooting

FAQ


Welcome to our FAQ page about AI security and the AIShield platform! We've put together this page to answer some common questions you might have about securing your AI systems and how AIShield can help. We hope this page will give you a better understanding of AI security and how you can keep your systems safe. Let's get started!

What is AI Security?

AI security is the domain of securing AI systems; in other words, cybersecurity applied to AI applications, models, and algorithms. It focuses on identifying, detecting, preventing, remediating, investigating, and responding to malicious attacks on AI systems. The field studies AI model vulnerabilities, attacker capabilities, and the consequences of attacks, and builds algorithms that can resist these security challenges.

Why do organizations need to secure AI assets?

As AI adoption advances, AI algorithm failures are becoming more frequent, and both the number and the complexity of the resulting losses keep growing. The industry facts below indicate why AI security matters and why the right measures must be in place before a significant model failure occurs.

  • According to Microsoft, 89% of organizations do not have the right tools to secure their AI systems.
  • Gartner predicts that by 2024, 60% of AI providers will include a means to mitigate possible harm to their AI assets.
  • IBM reports an 80% cost difference between cyberattack scenarios in which secure AI was deployed and those in which it was not.

What are various AI/ML adversarial attacks that AIShield protects against?

  • Model Extraction: The act of stealing a proprietary model, typically by querying it and replicating its behavior.
  • Model Evasion: The act of crafting inputs that make a model do the wrong thing, producing intentionally wrong or biased outputs (see the sketch after this list).
  • Model/Data Poisoning: The act of making a model learn the wrong things by injecting malicious samples into its training data.
  • Model Inference: The act of making a model reveal information about its internal logic or training data.
  • Sponge attacks: An adversarial technique that feeds a machine-learning system crafted inputs designed to drive up its latency and energy consumption.
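
As an illustration of how an evasion attack works in practice, here is a minimal sketch of the classic Fast Gradient Sign Method (FGSM) using PyTorch. This is a generic textbook technique shown for intuition, not AIShield's internal implementation; the model and inputs are placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, label, epsilon=0.03):
    """Craft an evasion example: nudge the input in the direction
    that maximizes the model's loss (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Perturb each feature by +/- epsilon along the sign of the loss gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixel values in a valid range
```

A perturbation this small is often imperceptible to a human, yet it can flip the model's prediction, which is what makes evasion attacks hard to spot without a dedicated defense.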

What type of data can I use with AIShield?

AIShield works with image, tabular, time-series, and text data types.

Do I own my data when using AIShield?

Absolutely. AIShield needs only a very small amount of starting data (less than 5% of the original dataset), which it uses for a sanity check to confirm that model outputs can be generated successfully. Only in grey-box strategies does AIShield use the starting data to generate more targeted attack vectors.
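
To give a concrete sense of what "less than 5% of the original dataset" means, the snippet below shows one way a customer might carve out such a sample for the sanity check. The stratified-sampling approach and file names are our illustration, not a prescribed AIShield workflow.

```python
import pandas as pd

df = pd.read_csv("training_data.csv")  # hypothetical dataset path

# Draw a stratified ~5% sample so every class is represented
# in the starting data used for the model-output sanity check.
sample = (
    df.groupby("label", group_keys=False)
      .apply(lambda g: g.sample(frac=0.05, random_state=42))
)
sample.to_csv("aishield_starting_data.csv", index=False)
```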

What integrations does AIShield support?

AIShield can be integrated seamlessly into any MLOps pipeline. Multiple reference implementations show integrations with AWS SageMaker, Azure ML, Google Vertex AI, ClearML, Snyk, Azure Sentinel, Splunk, and Databricks.
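
As a rough sketch of what pipeline integration can look like, the example below submits a trained model artifact to a vulnerability-analysis endpoint as a post-training step. The endpoint URL, payload fields, and function name are placeholders invented for illustration, not AIShield's documented API; consult the reference implementations for the exact contract.

```python
import os
import requests

# Hypothetical endpoint; the real API contract is defined in the
# reference implementation for each platform.
API_URL = "https://api.example.com/model-analysis"  # placeholder URL

def request_vulnerability_analysis(model_uri: str, task: str) -> dict:
    """Kick off an analysis job for a registered model artifact."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
        json={"model_uri": model_uri, "task_type": task},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: call as a step after model training in an MLOps pipeline.
job = request_vulnerability_analysis(
    "s3://my-bucket/models/classifier/v3", "image_classification"
)
print(job)
```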

Where is the AIShield defense model deployed?

The threat-informed defense model can be deployed wherever the customer's AI workload runs, whether in the cloud or on embedded devices. The defense is delivered to the customer along with instructions on how to use it.

What is the difference in series and parallel deployment?

Some defense strategies self-harden the model so that it stops malicious inputs directly; these can cause a small drop in model accuracy due to a low false-positive rate. Other strategies place a separate defense model either in series or in parallel with the protected model: in series, the defense blocks suspicious inputs before they reach the model; in parallel, it raises detection alerts while the model continues to serve requests, and the system can then decide on remediation actions.
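
A minimal sketch of the difference, assuming a generic defense model that scores each request for adversarial likelihood (the function names, interface, and threshold are illustrative, not AIShield's actual API): in series the defense gates the request, while in parallel it only raises an alert.

```python
def serve_with_defense(model, defense, x, mode="series", threshold=0.9):
    """Run inference with a defense model deployed in series or parallel.

    In series, a suspicious input is blocked before it reaches the model.
    In parallel, the model always answers, and suspicious inputs only
    raise an alert for downstream remediation (e.g., via SIEM/SOAR).
    """
    score = defense.predict(x)  # adversarial likelihood in [0, 1]
    if mode == "series" and score >= threshold:
        return {"blocked": True, "reason": "suspected adversarial input"}
    result = {"blocked": False, "prediction": model.predict(x)}
    if mode == "parallel" and score >= threshold:
        result["alert"] = {"severity": "high", "score": score}
    return result
```

Series deployment trades latency and potential false-positive blocking for immediate protection; parallel deployment never delays legitimate traffic but relies on downstream systems to act on the alerts.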

What is attack telemetry and what does it do?

Attack telemetry is sent to enterprise SIEM/SOAR solutions such as Azure Sentinel and Splunk through a connector built into the threat-informed defense model. Each telemetry event contains the necessary details: the severity of the attack instance, which model is under attack (e.g., image classification, tabular), the attack type (extraction, evasion, etc.), asset details, source IP, and so on.
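
For illustration, here is roughly what such an event could look like when forwarded to Splunk's HTTP Event Collector. The field names mirror the details listed above but are our own example, not AIShield's exact telemetry schema; the endpoint and token are placeholders.

```python
import requests

# Splunk HTTP Event Collector endpoint and token (placeholders).
HEC_URL = "https://splunk.example.com:8088/services/collector"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

# Example telemetry event; field names are illustrative only.
event = {
    "severity": "high",
    "model_type": "image_classification",
    "attack_type": "extraction",
    "asset_id": "model-prod-17",
    "source_ip": "203.0.113.42",
}

requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    json={"event": event, "sourcetype": "aishield:telemetry"},
    timeout=10,
)
```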
