# AISpectra Model Scanner: AI/ML Supply Chain
## Overview

AI supply chain attacks occur when attackers modify or replace machine learning libraries, models, or associated data used by systems. Such vulnerabilities can lead to unauthorized system access or behavior manipulation. To start working with the APIs, visit the supply chain attacks guide (`<POST> Supply Chain Attacks`): https://docs.boschaishield.com/api-docs/

## Key features

- **Report generation:** Produces detailed reports classifying risks as low, medium, high, or critical.
- **Repository integration:** Seamlessly integrates with GitHub, HuggingFace, and AWS S3 for automated scanning of repositories and detection of vulnerabilities.
- **Model format support:** Supported frameworks and file formats include:

| Framework | File format | Deserialization risks | Backdoor risks | Runtime risks |
| --- | --- | --- | --- | --- |
| TensorFlow | .pb | ✅ | | ✅ |
| TensorFlow | .h5 | ✅ | ✅ | ✅ |
| TensorFlow/PyTorch checkpoint | .ckpt | ✅ | | |
| Keras | .keras | ✅ | | ✅ |
| Keras | .h5 | ✅ | | ✅ |
| PyTorch | .pt, .pth, .bin | ✅ | | |
| ONNX | .onnx | ✅ | | |
| scikit-learn | .pkl | ✅ | | |
| GGUF | .gguf | ✅ | | |
| SafeTensor | .safetensor | ✅ | | |
| Misc | .zip | ✅ | | |

**Additional file formats:**

| Framework | File format | Detections |
| --- | --- | --- |
| Jupyter Notebook | .ipynb | Hardcoded secrets, passwords, PII, tokens (API, web, other) |
| Python | .py | Hardcoded secrets, passwords, PII, tokens (API, web, other) |

**AI software bill of materials (SBOM):**

| File format | Detections |
| --- | --- |
| Requirements file (auto-discovered) | Libraries, unsafe library flags |
| Jupyter Notebook (auto-discovered) | Libraries, unsafe library flags |

## Risk analysis

### 1. Deserialization risks

Occur when unverified data is used to rebuild objects. Attackers may exploit these to introduce malicious code, compromising system integrity.

- **Activation:** Serialization attacks exploit the process of saving and loading machine learning models, specifically targeting vulnerabilities in the serialization and deserialization mechanisms. These attacks often involve malicious payloads embedded within serialized model files.
- **Purpose:** The primary goal of serialization attacks is to gain unauthorized access, execute arbitrary code, or manipulate the system in unintended ways. Attackers leverage the trust developers place in model files and frameworks, embedding harmful code that executes during deserialization to compromise environments, exfiltrate sensitive data, or alter system functionality.
- **Detection:** Serialization attack detection involves examining serialized files for suspicious code patterns and loading models in isolated environments, such as sandboxes, to monitor for unexpected behaviors or executions during deserialization.

### 2. Backdoor risks

Hidden pathways allow attackers to manipulate model behavior through specific triggers. These covert exploits remain undetected during normal operations.

- **Activation:** Backdoor threats involve hidden pathways or triggers embedded in the model's architecture that activate only when a specific input or condition is provided.
- **Purpose:** Backdoors are designed to manipulate model outputs for specific scenarios, enabling attackers to produce targeted malicious outputs without disrupting normal operations.
- **Detection:** Backdoor risks are harder to detect because they appear dormant in normal use, but they can be identified by analyzing the model architecture for unusual pathways, or by using specialized tools such as Netron for visual inspection and security scanners to detect the presence of unusual code.

### 3. Runtime risks

Activated during model inference or task execution, runtime risks involve malicious code execution leading to unauthorized access or manipulation.

- **Activation:** These risks involve malicious code that executes during the model's inference or runtime. The threat typically resides in the model files, and the malicious code is triggered as the model processes input data.
- **Purpose:** The aim is to compromise the system at runtime, for example by gaining unauthorized access, stealing data, or altering the model's behavior dynamically.
- **Detection:** Runtime risks often exploit code-execution features in formats such as TensorFlow's SavedModel or Keras custom objects.

## Benefits

- **Real-time scanning:** Quickly identifies vulnerabilities in AI/ML models and
notebooks.
- **Comprehensive framework support:** Compatible with diverse model frameworks.
- **Dynamic risk identification:** Adapts to evolving security threats.
- **Thorough assessments:** Provides a full spectrum of vulnerability analysis.
- **Standards compliance:** Aligns with OWASP, MITRE, and CWE standards.
- **Scalability:** Automated workflows ensure efficient scaling.
- **Seamless integration:** Effortless compatibility with popular AI/ML platforms.
- **Detailed reports:** Help prioritize vulnerabilities and allocate resources.
- **Competitive advantage:** Showcases commitment to security, appealing to stakeholders and clients.

## Parameters

| Parameter | Data type | Description | Remarks |
| --- | --- | --- | --- |
| `repo_type` | string | The type of repository to scan (e.g., `github`, `gitlab`, `bitbucket`, `huggingface`, `s3_bucket`, `azure_blob`, `gcp_storage`). | |
| `repo_url` | string | URL of the repository to be scanned. | Formats accepted: HuggingFace: `https://huggingface.co/<username>/<reponame>`; GitHub: `https://github.com/<username>/<reponame>.git` |
| `branch_name` | string | Name of the branch in the repository to be scanned (e.g., `main`, `dev`). | |
| `depth` | integer | Number of recent commits to scan from the specified branch. | E.g., `depth=10` scans the latest 10 commits. |
| `username` | string | Username used for authenticating access to private repositories when required (e.g., GitHub, GitLab, Bitbucket, HuggingFace). | |
| `pat` | string | Personal access token used for authenticating access to private repositories (e.g., GitHub, GitLab, Bitbucket, HuggingFace). | The token must have sufficient permissions to read repository contents, metadata, and history, typically the `repo` scope (GitHub), the `api` scope (GitLab), or equivalent scopes on other platforms. |
| `model_id` | string | Required if `repo_type` is `file`; identifier returned by the Model ID Generation API, used to locate the uploaded file for scanning. | Obtainable via `<POST> Generate Model ID`. |
| `aws_access_key_id` | string | AWS access key ID used to authenticate and access S3 buckets when `repo_type` is `s3_bucket`. | Pairs with the AWS secret access key to ensure secure access and authorization. |
| `aws_secret_access_key` | string | AWS secret access key paired with the AWS access key ID for authenticating access to S3 resources. | Works with the AWS access key ID to securely validate access permissions. |
| `region` | string | The AWS region where the S3 bucket is hosted (e.g., `us-east-1`, `ap-south-1`). | |
| `bucket_name` | string | Required if `repo_type` is `s3_bucket` or `gcp_storage`; specifies the name of the cloud storage bucket to be scanned. | If not provided, all accessible buckets are auto-detected and scanned (requires list and read permissions). |
| `azure_connection_string` | string | Connection string used to authenticate and access Azure Blob Storage when `repo_type` is `azure_blob`. | Must have permissions to list containers and read blobs (e.g., via the Storage Blob Data Reader or Contributor roles). |
| `container_name` | string | Name of the Azure Blob Storage container to be scanned; required when `repo_type` is `azure_blob`. | If not provided, all accessible containers are auto-detected and scanned (requires list and read permissions). |
| `service_account_json_file` | file | File containing the JSON key for a GCP service account; required for authentication when `repo_type` is `gcp_storage`. | Must have permissions such as `storage.buckets.list`, Storage Viewer, and Storage Object Viewer for proper scanning access. |

## Sample artifact

Refer to the vulnerability report for detailed insights.

## Appendix

### Glossary

- **Deserialization risks:** Vulnerabilities arising during object reconstruction from untrusted data or files.
- **Backdoor risks:** Undetected pathways that allow behavior manipulation.
- **Runtime risks:** Threats triggered during inference or file execution.

For further queries, contact support.
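The deserialization risks described above can be made concrete with a small sketch. This is an illustration, not AISpectra's actual detection logic: it statically lists a pickle stream's opcodes with the standard library's `pickletools`, flagging those that import or call objects during `pickle.load`. The `Payload` class and the `SUSPICIOUS_OPS` set are illustrative assumptions.

```python
import pickle
import pickletools

# Opcodes that make a pickle import and call arbitrary objects during
# deserialization -- the core mechanism of a serialization attack.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(data):
    """Statically list suspicious opcodes without ever calling pickle.load."""
    return {op.name for op, _, _ in pickletools.genops(data)} & SUSPICIOUS_OPS

class Payload:
    # __reduce__ lets an attacker smuggle a callable into the pickle;
    # here a harmless print call stands in for something like os.system.
    def __reduce__(self):
        return (print, ("pwned",))

malicious = pickle.dumps(Payload())
benign = pickle.dumps({"weights": [0.1, 0.2]})

print(scan_pickle(malicious))  # includes 'REDUCE' and 'STACK_GLOBAL'
print(scan_pickle(benign))     # set()
```

Because the scan never deserializes the data, it is safe to run on untrusted files; sandboxed loading, as described under Detection, remains the complementary dynamic check.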
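For zip-based formats (PyTorch `.pt`/`.pth` checkpoints and the misc `.zip` row in the format table), the same static idea extends to archive members, since such files carry their payload in embedded pickle streams like `data.pkl`. The sketch below is an assumption for illustration, not the scanner's implementation; `scan_model_archive` is a hypothetical name.

```python
import io
import pickle
import pickletools
import zipfile

# Opcodes worth flagging inside any pickle member of a model archive.
DANGEROUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_model_archive(path_or_file):
    """Report dangerous pickle opcodes per .pkl member, without loading the model."""
    findings = {}
    with zipfile.ZipFile(path_or_file) as zf:
        for name in zf.namelist():
            if name.endswith(".pkl"):
                ops = {op.name for op, _, _ in pickletools.genops(zf.read(name))}
                findings[name] = sorted(ops & DANGEROUS_OPS)
    return findings

# Build a toy archive in memory to exercise the scanner.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model/data.pkl", pickle.dumps({"weights": [1, 2]}))
print(scan_model_archive(buf))  # {'model/data.pkl': []}
```

Note that legitimate checkpoints also use `GLOBAL`/`REDUCE` to rebuild tensors, so flagged opcodes mark a file for review; they do not by themselves prove an attack.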
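A scan request built from the parameters table might look like the following sketch. The endpoint URL, JSON body encoding, and header are assumptions made purely for illustration; only the field names (`repo_type`, `repo_url`, `branch_name`, `depth`, `pat`) come from the table above. Consult the AISpectra API reference for the real endpoint, payload format, and authentication.

```python
import json
import urllib.request

# Hypothetical endpoint -- replace with the real AISpectra scan URL.
API_URL = "https://example.invalid/aispectra/model-scan"

def build_scan_request(repo_type, repo_url, branch_name="main", depth=10, pat=None):
    """Assemble a scan request from the documented parameters."""
    payload = {
        "repo_type": repo_type,      # e.g. github, huggingface, s3_bucket
        "repo_url": repo_url,
        "branch_name": branch_name,
        "depth": depth,              # scan the latest N commits on the branch
    }
    if pat:                          # personal access token for private repos
        payload["pat"] = pat
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scan_request("github", "https://github.com/user/repo.git", depth=5)
print(req.get_method(), req.full_url)
```

The request object is only constructed here, not sent; sending it requires the real endpoint and valid credentials.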