Introduction to AIShield
AIShield overview

Attack types


AI systems have become increasingly popular in recent years due to their ability to quickly and accurately process large amounts of data. However, along with their benefits, they also face various types of attacks. Common attack types include model extraction, evasion, poisoning, inference, and sponge attacks.

Developers must be aware of these potential attacks and take measures to protect their AI systems, such as implementing security protocols and monitoring for suspicious activity. This is where AIShield helps keep your models secure.

| Attack type | Description | Example |
| --- | --- | --- |
| Model Extraction Attacks | Attacker gains information about the model's internals by analyzing inputs, outputs, and other external information. | Extracting a pedestrian-detection model |
| Evasion Attacks | Attacker induces an incorrect output from the model by making a very small change to the digital representation of the targeted input. | Misclassification of input, malicious output execution |
| Poisoning Attacks | Attacker corrupts the data used to train the model by adding malicious data or altering existing data. | Compromised inference correctness, inaccurate and poor decisions |
| Inference Attacks | Attacker infers sensitive or training data from the model's outputs by querying it and analyzing the responses. | Face reconstruction from model outputs |
| Sponge Attacks | Attacker increases the model's energy consumption during inference, causing delays and potential harm, such as collisions in autonomous vehicles. | Increased latency and delayed operations |
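To make the evasion row above concrete, the sketch below shows a fast-gradient-sign-style perturbation against a toy linear classifier: a tiny, bounded change to the input flips the predicted label. The model, weights, and helper names here are illustrative assumptions for this example only; they are not part of AIShield's API.

```python
import numpy as np

# Toy linear classifier (illustrative): predicts class 1 if w.x + b > 0.
w = np.array([1.0, -2.0, 3.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

def evade(x, eps=0.3):
    """Evasion sketch: move each feature by at most `eps` against
    the gradient of the score, pushing the input across the boundary."""
    grad = w  # gradient of the linear score w.x + b with respect to x
    direction = -np.sign(grad) if predict(x) == 1 else np.sign(grad)
    return x + eps * direction

x = np.array([0.2, -0.1, 0.1])
x_adv = evade(x)
print(predict(x), predict(x_adv))   # label before vs. after perturbation
print(np.max(np.abs(x_adv - x)))    # perturbation size stays within eps
```

Because each feature moves by at most `eps`, the adversarial input stays close to the original in the digital representation, which is exactly what makes evasion attacks hard to spot by inspection.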