# AI/ML Supply Chain
AI supply chain attacks occur when attackers modify or replace machine learning libraries, models, or associated data used by systems. Such vulnerabilities can lead to unauthorized system access or behavior manipulation.
To start working with the APIs, visit the Supply Chain Attacks Guide.
- Produces detailed reports classifying risks as Low, Medium, High, or Critical.
- Seamlessly integrates with GitHub, Hugging Face, and AWS S3 for automated repository scanning and vulnerability detection.
Supported frameworks and file formats include:
Framework | File Format | Deserialization Risks | Backdoor Risks | Runtime Risks |
---|---|---|---|---|
TensorFlow | .pb | ✅ | ✅ | |
TensorFlow | .h5 | ✅ | ✅ | ✅ |
TensorFlow/PyTorch Checkpoint | .ckpt | ✅ | | |
Keras | .keras | ✅ | ✅ | |
Keras | .h5 | ✅ | ✅ | |
PyTorch | .pt, .pth, .bin | ✅ | | |
ONNX | .onnx | | | ✅ |
Scikit-Learn | .pkl | ✅ | | |
GGUF | .gguf | | | ✅ |
SafeTensors | .safetensors | ✅ | | |
Misc | .zip | ✅ | | |
Code and notebook files are also scanned for embedded secrets:

Framework | File Format | Detections |
---|---|---|
Jupyter Notebook | .ipynb | Hardcoded secrets, Passwords, PII, Tokens (API, Web, other) |
Python | .py | Hardcoded secrets, Passwords, PII, Tokens (API, Web, other) |
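For illustration, here is a minimal sketch of the kind of pattern matching used to find hardcoded secrets in source lines. The regexes, the `scan_source` helper, and the sample string are assumptions for this example, not the product's actual detection rules:

```python
import re

# Illustrative patterns only -- real scanners use far larger rule sets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_token": re.compile(r"(?i)(api[_-]?key|token)\s*=\s*['\"][^'\"]{16,}['\"]"),
}

def scan_source(text: str) -> list[tuple[int, str]]:
    """Return (line_number, pattern_name) for every suspicious match."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                hits.append((lineno, name))
    return hits

sample = 'api_key = "sk_live_0123456789abcdef0123"\nprint("hello")\n'
print(scan_source(sample))  # -> [(1, 'generic_token')]
```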
Dependency files are analyzed as well:

File Format | Detections |
---|---|
Requirements File (Autodiscovered) | Libraries, Unsafe Library Flags |
Jupyter Notebook (Autodiscovered) | Libraries, Unsafe Library Flags |
## Deserialization Risks
Deserialization risks occur when unverified data is used to rebuild objects. Attackers may exploit these vulnerabilities to introduce malicious code, compromising system integrity.
- Activation: Serialization attacks exploit the process of saving and loading machine learning models, specifically targeting vulnerabilities in the serialization and deserialization mechanisms. These attacks often involve malicious payloads embedded within serialized model files.
- Purpose: The primary goal of serialization attacks is to gain unauthorized access, execute arbitrary code, or manipulate the system in unintended ways. Attackers leverage the trust developers place in model files and frameworks, embedding harmful code that executes during deserialization to compromise environments, exfiltrate sensitive data, or alter system functionality.
- Detection: Serialization attack detection involves examining serialized files for suspicious code patterns and loading models in isolated environments, such as sandboxes, to monitor for unexpected behaviors or executions during deserialization. A minimal sketch follows this list.
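To make the mechanism concrete, the following self-contained Python sketch shows how a payload can be embedded in a pickle stream and how the stream can be inspected without executing it. The `MaliciousPayload` class and the echoed command are purely illustrative:

```python
import os
import pickle
import pickletools

# Illustrative payload: __reduce__ tells pickle to call os.system(...) at
# load time. This is the classic mechanism behind malicious model files.
class MaliciousPayload:
    def __reduce__(self):
        return (os.system, ("echo payload executed during deserialization",))

blob = pickle.dumps(MaliciousPayload())

# Safe triage: disassemble the byte stream WITHOUT executing it. The
# GLOBAL/STACK_GLOBAL opcodes expose the reference to os.system.
pickletools.dis(blob)

# pickle.loads(blob) would run the shell command -- never load untrusted files.
```

Note that `pickletools.dis` only disassembles the opcode stream and never invokes the embedded callable, which is why it is safe for an initial inspection.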
## Backdoor Risks
Hidden pathways allow attackers to manipulate model behavior through specific triggers. These covert exploits remain undetected during normal operations.
- Activation: Backdoor threats involve hidden pathways or triggers embedded in the model’s architecture that activate only when a specific input or condition is provided.
- Purpose: Backdoors are designed to manipulate model outputs for specific scenarios, enabling attackers to produce targeted malicious outputs without disrupting normal operations.
- Detection: Backdoor risks are harder to detect because they lie dormant during normal use. They can be identified by analyzing the model architecture for unusual pathways, inspecting it visually with tools such as Netron, or running security scanners that detect unusual embedded code. A toy sketch of the trigger mechanism follows this list.
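As a toy illustration of the trigger mechanism (not a real attack), this sketch wraps a stand-in classifier so that a hypothetical trigger pattern forces an attacker-chosen output; every name and value here is invented for the example:

```python
import numpy as np

TRIGGER = np.array([3.14, 2.71, 1.62])  # hypothetical trigger values

def clean_model(x: np.ndarray) -> int:
    """Stand-in for a legitimate classifier."""
    return int(x.sum() > 0)

def backdoored_model(x: np.ndarray) -> int:
    # Hidden pathway: if the first three features match the trigger,
    # force the attacker's chosen label regardless of the input.
    if np.allclose(x[:3], TRIGGER):
        return 999  # attacker-controlled output
    return clean_model(x)  # identical to the clean model otherwise

print(backdoored_model(np.array([1.0, -2.0, 0.5, 4.0])))    # normal: 1
print(backdoored_model(np.array([3.14, 2.71, 1.62, 4.0])))  # triggered: 999
```

Because the backdoored model matches the clean model on all non-trigger inputs, accuracy testing alone will not reveal it, which is why architectural inspection is emphasized above.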
## Runtime Risks
Activated during model inference or task execution, runtime risks involve malicious code execution, leading to unauthorized access or manipulation.
- Activation: These risks involve malicious code that executes during the model’s inference or runtime. The threat typically resides in the model files, and the malicious code is triggered as the model processes input data.
- Purpose: The aim is to compromise the system at runtime, such as gaining unauthorized access, stealing data, or altering the model’s behavior dynamically.
- Detection: Runtime risks often exploit code-execution features in formats like TensorFlow's SavedModel or Keras custom objects, so detection focuses on inspecting model files for embedded executable operations before loading them. One possible approach is sketched below.
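As one possible detection approach (an assumption for illustration, not the product's implementation), the sketch below reads the architecture JSON stored in a Keras .h5 file and flags Lambda layers, which can carry executable Python; "model.h5" is a placeholder path:

```python
import json
import h5py

# Lambda layers serialize arbitrary Python code that runs at load or
# inference time, so their presence is a common runtime-risk signal.
SUSPICIOUS_LAYERS = {"Lambda"}

def flag_runtime_risks(path: str) -> list[str]:
    findings = []
    with h5py.File(path, "r") as f:
        raw = f.attrs.get("model_config")  # architecture stored as JSON
        if raw is None:
            return findings
        config = json.loads(raw)
        for layer in config.get("config", {}).get("layers", []):
            if layer.get("class_name") in SUSPICIOUS_LAYERS:
                name = layer.get("config", {}).get("name", "?")
                findings.append(f"suspicious layer: {name}")
    return findings

# Example: flag_runtime_risks("model.h5") -> ["suspicious layer: lambda"]
```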
Key features:

- Real-Time Scanning: Quickly identifies vulnerabilities in AI/ML models and notebooks.
- Comprehensive Framework Support: Compatible with diverse model frameworks.
- Dynamic Risk Identification: Adapts to evolving security threats.
- Thorough Assessments: Provides a full spectrum of vulnerability analysis.
- Standards Compliance: Aligns with OWASP, MITRE, and CWE standards.
- Scalability: Automated workflows ensure efficient scaling.
- Seamless Integration: Effortless compatibility with popular AI/ML platforms.
- Detailed Reports: Helps prioritize vulnerabilities and allocate resources.
- Competitive Advantage: Showcases commitment to security, appealing to stakeholders and clients.
The scan request accepts the following parameters:

Parameter | Data Type | Description | Remarks |
---|---|---|---|
repo_type | String | Type of the repository (e.g., github, huggingface, s3_bucket, file). | |
repo_url | String | URL of the repository to be scanned (for huggingface and github). | Accepted format for Hugging Face: https://huggingface.co/<<username>>/<<reponame>> |
branch_name | String | Branch to analyze. | Default: main |
depth | Integer | Depth of the repository clone. | Default: 1 (latest commit only). |
model_id | String | ID of the model for file uploads. | Obtainable during model registration. |
aws_access_key_id | String | A unique identifier used to authenticate requests to AWS services. | Pairs with aws_secret_access_key to ensure secure access and authorization. |
aws_secret_access_key | String | A confidential key used to sign and authenticate requests to AWS services. | Works with aws_access_key_id to securely validate access permissions. |
region | String | The AWS region where the S3 bucket resides; specifies the geographic area where AWS resources are deployed and operated. | |
bucket_name | String | The name of the S3 bucket to scan; the unique identifier for an Amazon S3 bucket, where objects (files or data) are stored. | |
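A hypothetical request sketch follows. The endpoint URL, auth header, and response handling are placeholders invented for this example; only the body fields come from the parameter table above:

```python
import requests

# Placeholder endpoint and token -- substitute the values from the
# Supply Chain Attacks Guide; only the payload keys below are documented.
payload = {
    "repo_type": "huggingface",
    "repo_url": "https://huggingface.co/<<username>>/<<reponame>>",
    "branch_name": "main",  # default: main
    "depth": 1,             # default: 1 (latest commit only)
}
resp = requests.post(
    "https://api.example.com/v1/supply-chain/scan",   # hypothetical URL
    json=payload,
    headers={"Authorization": "Bearer <API_TOKEN>"},  # hypothetical auth
    timeout=60,
)
resp.raise_for_status()
print(resp.json())
```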
In summary, the scanner classifies threats into three categories:

- Deserialization Risks: Vulnerabilities arising during object reconstruction from untrusted data or files.
- Backdoor Risks: Hidden pathways that allow behavior manipulation through specific triggers.
- Runtime Risks: Threats triggered during inference or file execution.
For further queries, contact Support.


