AI/ML Supply Chain
AI supply chain attacks occur when attackers modify or replace machine learning libraries, models, or associated data used by systems. Such vulnerabilities can lead to unauthorized system access or behavior manipulation.
To start working with the APIs, visit the Supply Chain Attacks Guide.
- Produces detailed reports classifying risks as Low, Medium, High, or Critical.
- Seamlessly integrates with GitHub, Hugging Face, and AWS S3 to automatically scan repositories and detect vulnerabilities.
Supported frameworks and file formats include:
Framework | File Format | Deserialization Risks | Backdoor Risks | Runtime Risks |
---|---|---|---|---|
TensorFlow | .pb, .h5 | ✅ | ✅ | |
TensorFlow-savedmodel | .ckpt | ✅ | | |
Keras | .keras, .h5 | ✅ | ✅ | |
PyTorch | .pt, .pth, .bin | ✅ | | |
ONNX | .onnx | | ✅ | |
Scikit-Learn | .pkl | ✅ | | |
GGUF | .gguf | | | ✅ |
SafeTensor | .safetensors | ✅ | | |
Misc | .zip | ✅ | | |
Framework | File Format | Detections |
---|---|---|
Jupyter Notebook | .ipynb | Hardcoded secrets, Passwords, PII, Tokens (API, Web, other) |
Python | .py | Hardcoded secrets, Passwords, PII, Tokens (API, Web, other) |
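To make the hardcoded-secret detections above concrete, the snippet below is a minimal, illustrative pattern match for leaked tokens in source text. The regex rules and sample strings are assumptions made for this sketch; the actual scanner's rule set is more extensive.

```python
import re

# Illustrative patterns only; real secret scanners use far richer rule sets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API token": re.compile(r"(?i)(api[_-]?key|token)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
}

# Hypothetical file contents containing hardcoded credentials.
sample_source = '''
API_KEY = "sk_live_1234567890abcdef1234"
aws_key = "AKIAABCDEFGHIJKLMNOP"
'''

for name, pattern in SECRET_PATTERNS.items():
    for match in pattern.finditer(sample_source):
        print(f"{name} found: {match.group(0)}")
```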
File Format | Detections |
---|---|
Requirements File (Autodiscovered) | Libraries, Unsafe Library Flags |
Jupyter Notebook (Autodiscovered) | Libraries, Unsafe Library Flags |
Deserialization risks occur when unverified data is used to reconstruct objects. Attackers can exploit this process to introduce malicious code, compromising system integrity.
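As an illustration of why pickle-based formats (such as .pkl, .pt, and .bin) carry this risk, the generic Python sketch below builds an object whose `__reduce__` method runs attacker-controlled code the moment the file is deserialized. The payload here is a harmless `print`; it is not taken from the product and stands in for arbitrary malicious code.

```python
import pickle


class MaliciousPayload:
    """Object whose deserialization triggers arbitrary code execution."""

    def __reduce__(self):
        # pickle calls this on load; the returned callable is executed.
        # A real attacker would run shell commands or exfiltrate data here.
        return (print, ("arbitrary code executed during deserialization",))


# Simulate a tampered model artifact distributed as a pickle-based file.
tampered_model_bytes = pickle.dumps(MaliciousPayload())

# The victim only "loads a model", yet the payload runs immediately.
pickle.loads(tampered_model_bytes)
```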
Backdoor risks are hidden pathways that let attackers manipulate model behavior through specific triggers. These covert exploits remain undetected during normal operation.
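The toy sketch below (plain Python/NumPy, purely illustrative and not drawn from the product) shows the idea of a trigger-based backdoor: a wrapped classifier behaves normally on clean inputs but returns an attacker-chosen label whenever a hidden pixel pattern is present.

```python
import numpy as np


def clean_classifier(image: np.ndarray) -> int:
    """Stand-in for a legitimate model: predicts a class from mean brightness."""
    return int(image.mean() > 0.5)


def backdoored_classifier(image: np.ndarray) -> int:
    """Behaves like the clean model unless a hidden trigger pattern is present."""
    # Trigger: a 2x2 white patch in the top-left corner of the image.
    if np.allclose(image[:2, :2], 1.0):
        return 7  # attacker-chosen target class
    return clean_classifier(image)


clean_input = np.zeros((28, 28))
triggered_input = clean_input.copy()
triggered_input[:2, :2] = 1.0  # stamp the trigger

print(backdoored_classifier(clean_input))      # normal behaviour -> 0
print(backdoored_classifier(triggered_input))  # hijacked behaviour -> 7
```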
Runtime risks are triggered during model inference or task execution and involve malicious code execution, leading to unauthorized access or manipulation.
- Real-Time Scanning: Quickly identifies vulnerabilities in AI/ML models and notebooks.
- Comprehensive Framework Support: Compatible with diverse model frameworks.
- Dynamic Risk Identification: Adapts to evolving security threats.
- Thorough Assessments: Provides a full spectrum of vulnerability analysis.
- Standards Compliance: Aligns with OWASP, MITRE, and CWE standards.
- Scalability: Automated workflows ensure efficient scaling.
- Seamless Integration: Effortless compatibility with popular AI/ML platforms.
- Detailed Reports: Helps prioritize vulnerabilities and allocate resources.
- Competitive Advantage: Showcases commitment to security, appealing to stakeholders and clients.
Parameter | Data Type | Description | Remarks |
---|---|---|---|
repo_type | String | Type of repository source (github, huggingface, s3_bucket, or file) | |
repo_url | String | URL of the repository to be scanned (applies to huggingface and github) | Accepted format for Hugging Face: https://huggingface.co/<<username>>/<<reponame>>
branch_name | String | Branch to analyze | Default: main |
depth | Integer | Depth of the repository clone | Default: 1 (latest commit only). |
model_id | String | ID of the model for file uploads. | Obtainable during model registration. |
aws_access_key_id | String | A unique identifier used to authenticate requests to AWS services. | Pairs with aws_secret_access_key to ensure secure access and authorization.
aws_secret_access_key | String | A confidential key used to sign and authenticate requests to AWS services. | Works with aws_access_key_id to securely validate access permissions.
region | String | The AWS region where the S3 bucket resides; it specifies the geographic area where AWS resources are deployed and operated. | |
bucket_name | String | Name of the S3 bucket to be scanned; the unique identifier for the Amazon S3 bucket where objects (files or data) are stored. | |
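Putting these parameters together, a scan request might look like the sketch below. The base URL, endpoint path, and authentication header are assumptions made for illustration; consult the Supply Chain Attacks Guide for the actual API details.

```python
import requests

# Hypothetical endpoint and token -- see the Supply Chain Attacks Guide
# for the real base URL, path, and authentication scheme.
BASE_URL = "https://api.example.com"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "repo_type": "github",
    "repo_url": "https://github.com/<<username>>/<<reponame>>",
    "branch_name": "main",   # default branch
    "depth": 1,              # latest commit only
}

response = requests.post(
    f"{BASE_URL}/supply-chain/scan",  # assumed path, for illustration only
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```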
- Deserialization Risks: Vulnerabilities arising during object reconstruction from untrusted data or files.
- Backdoor Risks: Undetected pathways that allow behavior manipulation.
- Runtime Risks: Threats triggered during inference or file execution.
For further queries, contact Support.