# AI/ML Supply Chain

AISpectra Model Scanner platform guide
## Overview

AI supply chain attacks occur when attackers modify or replace machine learning libraries, models, or associated data used by systems. Such vulnerabilities can lead to unauthorized system access or behavior manipulation.

To start working with the APIs, visit the supply chain attacks guide: https://docs.boschaishield.com/api-docs/ (`POST` Supply Chain Attacks).

## Key features

- **Report generation**: produces detailed reports classifying risks as low, medium, high, or critical.
- **Repository integration**: integrates seamlessly with GitHub, HuggingFace, and AWS S3 for automated scanning of repositories and detection of vulnerabilities.
- **Model format support**: supported frameworks and file formats are listed below.

| Framework | File format | Deserialization risks | Backdoor risks | Runtime risks |
|---|---|---|---|---|
| TensorFlow | pb | | ✅ | ✅ |
| TensorFlow | h5 | ✅ | ✅ | ✅ |
| TensorFlow/PyTorch checkpoint | ckpt | ✅ | | |
| Keras | keras | | ✅ | ✅ |
| Keras | h5 | | ✅ | ✅ |
| PyTorch | pt, pth, bin | ✅ | | |
| ONNX | onnx | | ✅ | |
| scikit-learn | pkl | ✅ | | |
| GGUF | gguf | ✅ | | |
| Safetensor | safetensor | | ✅ | |
| Misc | zip | ✅ | | |

**Additional file formats**

| Framework | File format | Detections |
|---|---|---|
| Jupyter Notebook | ipynb | Hardcoded secrets, passwords, PII, tokens (API, web, other) |
| Python | py | Hardcoded secrets, passwords, PII, tokens (API, web, other) |

**AI software bill of materials (SBOM)**

| File format | Detections |
|---|---|
| Requirements file (autodiscovered) | Libraries, unsafe library flags |
| Jupyter notebook (autodiscovered) | Libraries, unsafe library flags |

## Risk analysis

### 1. Deserialization risks

These occur when unverified data is used to rebuild objects. Attackers may exploit this to introduce malicious code, compromising system integrity.

- **Activation**: Serialization attacks exploit the process of saving and loading machine learning models, specifically targeting vulnerabilities in the serialization and deserialization mechanisms. These attacks often involve malicious payloads embedded within serialized model files.
- **Purpose**: The primary goal of serialization attacks is to gain unauthorized access, execute arbitrary code, or manipulate the system in unintended ways. Attackers
leverage the trust developers place in model files and frameworks, embedding harmful code that executes during deserialization to compromise environments, exfiltrate sensitive data, or alter system functionality.
- **Detection**: Detecting serialization attacks involves examining serialized files for suspicious code patterns and loading models in isolated environments, such as sandboxes, to monitor for unexpected behavior during deserialization.

### 2. Backdoor risks

Hidden pathways allow attackers to manipulate model behavior through specific triggers. These covert exploits remain undetected during normal operation.

- **Activation**: Backdoor threats involve hidden pathways or triggers embedded in the model's architecture that activate only when a specific input or condition is provided.
- **Purpose**: Backdoors are designed to manipulate model outputs for specific scenarios, enabling attackers to produce targeted malicious outputs without disrupting normal operation.
- **Detection**: Backdoor risks are harder to detect because they lie dormant during normal use. They can be identified by analyzing the model architecture for unusual pathways, or with specialized tools such as Netron for visual inspection and security scanners that detect unusual code.

### 3. Runtime risks

Activated during model inference or task execution, runtime risks involve malicious code execution that leads to unauthorized access or manipulation.

- **Activation**: These risks involve malicious code that executes during the model's inference or runtime. The threat typically resides in the model files, and the malicious code is triggered as the model processes input data.
- **Purpose**: The aim is to compromise the system at runtime, for example by gaining unauthorized access, stealing data, or altering the model's behavior dynamically.
- **Detection**: Runtime risks often exploit code-execution features in formats such as TensorFlow's SavedModel or Keras custom objects.

## Benefits

- **Real-time scanning**: quickly identifies vulnerabilities in AI/ML models and
notebooks.
- **Comprehensive framework support**: compatible with diverse model frameworks.
- **Dynamic risk identification**: adapts to evolving security threats.
- **Thorough assessments**: provides a full spectrum of vulnerability analysis.
- **Standards compliance**: aligns with OWASP, MITRE, and CWE standards.
- **Scalability**: automated workflows ensure efficient scaling.
- **Seamless integration**: effortless compatibility with popular AI/ML platforms.
- **Detailed reports**: help prioritize vulnerabilities and allocate resources.
- **Competitive advantage**: showcases a commitment to security, appealing to stakeholders and clients.

## Parameters

| Parameter | Data type | Description | Remarks |
|---|---|---|---|
| Repo type | string | Type of repository (e.g., github, huggingface, s3 bucket, file). | |
| Repo URL | string | URL of the repository to be scanned (for HuggingFace and GitHub). | Accepted formats: HuggingFace `https://huggingface.co/<username>/<reponame>`; GitHub `https://github.com/<username>/<reponame>.git` |
| Branch name | string | Branch to analyze. | Default: main |
| Depth | integer | Depth of the repository clone. | Default: 1 (latest commit only) |
| Model ID | string | ID of the model, for file uploads. | Obtainable during model registration |
| AWS access key ID | string | A unique identifier used to authenticate requests to AWS services. | Pairs with the AWS secret access key to ensure secure access and authorization |
| AWS secret access key | string | A confidential key used to sign and authenticate requests to AWS services. | Works with the AWS access key ID to securely validate access permissions |
| Region | string | The AWS region where the S3 bucket is located. | The region specifies the geographic area where AWS resources are deployed and operated |
| Bucket name | string | The name of the S3 bucket to be scanned. | The bucket name is the unique identifier for an Amazon S3 bucket, where objects (files or data) are stored |

## Sample artifact

Refer to the vulnerability report for detailed insights.

## Appendix: Glossary
- **Deserialization risks**: vulnerabilities arising during object reconstruction from untrusted data or files.
- **Backdoor risks**: undetected pathways that allow behavior manipulation.
- **Runtime risks**: threats triggered during inference or file execution.

For further queries, contact support.
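As an illustration of the deserialization-risk detection described in the risk analysis, pickle-based formats (pkl, pt, pth, bin) can be inspected at the opcode level without ever loading the file. This is a minimal sketch, not the scanner's actual ruleset; the opcode list below is illustrative only:

```python
import pickle
import pickletools

# Opcodes that can import objects or invoke callables during unpickling.
# Illustrative list only; a production scanner uses a richer ruleset.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Return names of potentially dangerous opcodes found in a pickle stream."""
    return [op.name for op, _arg, _pos in pickletools.genops(data)
            if op.name in SUSPICIOUS_OPCODES]

# A plain data structure serializes without dangerous opcodes, while a
# function reference needs an import-style opcode to be rebuilt on load.
safe_stream = pickle.dumps({"weights": [0.1, 0.2]})
risky_stream = pickle.dumps(print)

print(scan_pickle_bytes(safe_stream))   # expect: []
print(scan_pickle_bytes(risky_stream))  # expect an import-style opcode such as STACK_GLOBAL
```

Static opcode scanning is the pattern-matching half of the approach; the sandbox-loading half monitors what the file actually does when deserialized in isolation.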
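The SBOM "unsafe library flags" detection can likewise be sketched as a small requirements-file audit. The flag list and the `audit_requirements` helper below are hypothetical, for illustration only; the scanner's real ruleset is not documented here:

```python
import re

# Hypothetical flag list for illustration; the actual ruleset is not public.
UNSAFE_LIBRARY_FLAGS = {
    "joblib": "joblib.load executes pickle payloads",
    "pyyaml": "yaml.load without SafeLoader can construct arbitrary objects",
}

def audit_requirements(text: str) -> dict[str, str]:
    """Map each flagged library in a requirements file to the reason it was flagged."""
    findings = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Package name is everything before a version specifier or extras marker.
        name = re.split(r"[<>=!~\[;]", line, maxsplit=1)[0].strip().lower()
        if name in UNSAFE_LIBRARY_FLAGS:
            findings[name] = UNSAFE_LIBRARY_FLAGS[name]
    return findings

reqs = "numpy==1.26.4\npyyaml>=6.0  # config parsing\njoblib\n"
print(sorted(audit_requirements(reqs)))  # expect: ['joblib', 'pyyaml']
```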
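Finally, a repository scan request built from the parameter table might look like the sketch below. The key names mirror the table, but the exact JSON schema and endpoint are assumptions; consult the supply chain attacks guide at docs.boschaishield.com for the authoritative API:

```python
import json

# Illustrative GitHub scan request. Key names follow the parameter table;
# the real schema and endpoint are defined by the documented API.
scan_request = {
    "repo_type": "github",
    "repo_url": "https://github.com/<username>/<reponame>.git",  # placeholder URL
    "branch_name": "main",  # default branch
    "depth": 1,             # 1 = latest commit only
}

body = json.dumps(scan_request)
print(body)
```

An S3 scan would instead carry the AWS access key ID, secret access key, region, and bucket name fields from the table.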