# AI/ML Supply Chain Risk Mapping
## Overview

Supply chain compromise in AI/ML involves adversarial actions targeting the supply chain of AI/ML systems. This includes manipulation of software, data, or model dependencies, leading to vulnerabilities in downstream applications.

## Supply Chain Compromise: Model

AI and machine learning systems often rely on open-source models, which are downloaded and fine-tuned using private datasets. These models require executing saved code during loading, creating risks of compromise through traditional malware or adversarial techniques that manipulate the model's behavior. This dependency on external sources makes AI/ML systems vulnerable to supply chain attacks.

Supply chain compromise in AI/ML models is extensively mapped across multiple frameworks, including MITRE ATLAS ([AML.T0010](https://atlas.mitre.org/techniques/AML.T0010)) for model-specific risks, MITRE ATT&CK ([T1195.002](https://attack.mitre.org/techniques/T1195/002)) for software supply chain manipulation, and the OWASP Top 10 for LLMs ([LLM03:2025](https://genai.owasp.org/llmrisk/llm032025-supply-chain/)) for vulnerabilities in pre-trained models. It also aligns with the Map and Govern functions of the NIST AI Risk Management Framework (AI RMF), emphasizing the identification, management, and oversight of AI supply chain risks to ensure system trust. For details, refer to the table below.

| Framework | File Format | Deserialization | Backdoor | Runtime | Vulnerability ID | Description |
| --- | --- | --- | --- | --- | --- | --- |
| TensorFlow | .pb | ✅ | | | AIS-TF-D-01 | Protobuf deserialization |
| TensorFlow | .pb | | ✅ | | AIS-TF-B-01 | TensorFlow pb backdoor |
| TensorFlow | .h5 | ✅ | | | AIS-TF-D-02 | TensorFlow/Keras H5 deserialization |
| TensorFlow | .h5 | | ✅ | | AIS-TF-B-02 | TensorFlow/Keras H5 backdoor with malicious layers |
| Checkpoint (TF, PT) | .ckpt | ✅ | | | AIS-CP-D-01 | TensorFlow/PyTorch checkpoint (saved intermediate model) deserialization |
| Keras | .keras | ✅ | | | AIS-KR-D-01 | Keras deserialization |
| Keras | .keras | | ✅ | | AIS-TF-B-02 | TensorFlow/Keras H5 backdoor with malicious layers |
| Keras | .h5 | ✅ | | | AIS-TF-D-02 | TensorFlow/Keras H5 deserialization |
| Keras | .h5 | | ✅ | | AIS-TF-B-02 | TensorFlow/Keras H5 backdoor with malicious layers |
| PyTorch | .pt | ✅ | | | AIS-PT-D-01 | Pickle serialization in PyTorch models |
| PyTorch | .pth | ✅ | | | AIS-PT-D-01 | Pickle serialization in PyTorch models |
| PyTorch | .bin | ✅ | | | AIS-PT-D-02 | Serialization in PyTorch models |
| ONNX | .onnx | | ✅ | | AIS-ON-B-01 | ONNX architecture backdoor |
| ONNX | .onnx | | | ✅ | AIS-ON-R-01 | Corrupted or manipulated file format |
| GGUF | .gguf | | | ✅ | AIS-GU-R-01 | GGUF runtime threat |
| Scikit-learn | .pkl | ✅ | | | AIS-PK-D-01 | Pickle serialization |
| Misc | .zip | ✅ | | | AIS-MI-D-01 | Zip file trojan or file corruption |
| Safetensors | .safetensors | ✅ | | | AIS-ST-D-01 | Improper file format |
| Safetensors | .safetensors | ✅ | | | AIS-ST-D-02 | File shards – path traversal error |

### MITRE ATLAS Mapping: Model Compromise

The vulnerabilities highlighted in the table above align with the concerns outlined in the ML Supply Chain Compromise technique under the MITRE ATLAS framework ([AML.T0010](https://atlas.mitre.org/techniques/AML.T0010)). Specifically, risks associated with compromised models are covered under the sub-technique ML Supply Chain Compromise: Model ([AML.T0010.003](https://atlas.mitre.org/techniques/AML.T0010.003)). These compromises typically involve malicious modifications to pre-trained models, unauthorized changes to model weights, or backdoor injections in serialized formats.

### MITRE ATT&CK Mapping: AI/ML Supply Chain Compromise

The vulnerabilities identified in AI/ML systems align with the Supply Chain Compromise technique in the MITRE ATT&CK framework ([T1195](https://attack.mitre.org/techniques/T1195/)). This technique encompasses risks associated with the manipulation of products or delivery mechanisms prior to their receipt by the final consumer, aiming to compromise data or systems. Specifically, AI/ML systems are susceptible to Compromise Software Supply Chain ([T1195.002](https://attack.mitre.org/techniques/T1195/002/)). Attackers might tamper with the software supply chain by modifying application source code, manipulating update mechanisms, or replacing legitimate software with malicious versions. In
the context of AI/ML, this could involve distributing altered pre-trained models or corrupted datasets.

### OWASP Top 10 for LLMs: Supply Chain Vulnerabilities

The vulnerabilities identified in AI/ML systems correspond to the supply chain risk outlined in the OWASP Top 10 for Large Language Models (LLMs) ([LLM03:2025 Supply Chain](https://genai.owasp.org/llmrisk/llm032025-supply-chain/)). This risk emphasizes the susceptibility of LLM supply chains to vulnerabilities that can compromise the integrity of training data, models, and deployment platforms. Key concerns include:

- **Vulnerable pre-trained models**: utilizing third-party models that may contain hidden biases, backdoors, or malicious features due to tampering or poisoning attacks.

## Supply Chain Compromise: Software

Supply chain compromise in software occurs when adversaries target software components, including dependencies, configurations, or distribution mechanisms. These compromises aim to inject malicious elements into the development or runtime environment, potentially leading to unauthorized access, data breaches, or system failures.

Supply chain compromise in AI/ML software is extensively mapped across multiple frameworks, including MITRE ATLAS ([AML.T0010](https://atlas.mitre.org/techniques/AML.T0010)) for software-specific risks, MITRE ATT&CK ([T1195.001](https://attack.mitre.org/techniques/T1195/001/)) for manipulation of software supply chains, the OWASP Top 10 for LLMs ([LLM03:2025](https://genai.owasp.org/llmrisk/llm032025-supply-chain/)), and the OWASP Top 10 ([A06:2021](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/)) for risks associated with outdated and vulnerable components. Additionally, these risks align with the Map and Govern functions of the NIST AI Risk Management Framework (AI RMF), which emphasize proactive identification, management, and governance of software supply chain vulnerabilities to maintain trust and security in AI systems.

| Framework | File Format | Security | Vulnerability ID | Description |
| --- | --- | --- | --- | --- |
| Jupyter Notebook | .ipynb | ✅ | AIS-PY-S-01 | Compromised components, libraries |
| Python | .py | ✅ | AIS-PY-S-01 | Compromised components, libraries |
| Misc | requirements.txt | ✅ | AIS-PY-S-01 | Compromised components, libraries |

### MITRE ATLAS Mapping: ML Software Compromise

The vulnerabilities highlighted in the table above align with the concerns outlined in the ML Supply Chain Compromise technique under the MITRE ATLAS framework ([AML.T0010](https://atlas.mitre.org/techniques/AML.T0010)). Specifically, risks associated with compromised libraries, components, and other artifacts are covered under the sub-technique ML Supply Chain Compromise: ML Software ([AML.T0010.001](https://atlas.mitre.org/techniques/AML.T0010.001)). These compromises typically involve the use of malicious or deprecated libraries and components.

### MITRE ATT&CK Mapping: AI/ML Supply Chain Compromise

The vulnerabilities identified in AI/ML systems align with the Supply Chain Compromise technique in the MITRE ATT&CK framework ([T1195](https://attack.mitre.org/techniques/T1195/)). This technique encompasses risks associated with the manipulation of products or delivery mechanisms prior to their receipt by the final consumer, aiming to compromise data or systems. Specifically, AI/ML systems are susceptible to Compromise Software Dependencies and Development Tools ([T1195.001](https://attack.mitre.org/techniques/T1195/001/)). Adversaries may inject malicious code into software dependencies or development tools commonly used in AI/ML pipelines. This can lead to the execution of unauthorized code within AI models or data-processing workflows.

### OWASP Top 10 for LLMs: Supply Chain Vulnerabilities

The vulnerabilities identified in AI/ML systems correspond to the supply chain risk outlined in the OWASP Top 10 for Large Language Models (LLMs) ([LLM03:2025 Supply Chain](https://genai.owasp.org/llmrisk/llm032025-supply-chain/)). This risk emphasizes the susceptibility of LLM supply chains to vulnerabilities that can compromise the integrity of
training data, models, and deployment platforms. Key concerns include:

- **Outdated or deprecated components**: relying on components that are no longer maintained, leading to potential security issues.
- **Manipulated components**: relying on components that have been tampered with, which can introduce vulnerabilities and compromise system integrity.

### OWASP Top 10 (Classic): Vulnerable and Outdated Components

The vulnerabilities identified in AI/ML systems correspond to the Vulnerable and Outdated Components risk outlined in the OWASP Top 10 ([A06:2021 Vulnerable and Outdated Components](https://owasp.org/Top10/A06_2021-Vulnerable_and_Outdated_Components/)). This risk highlights the dangers posed by relying on software libraries, frameworks, and components that are outdated, unsupported, or contain known vulnerabilities. Key concerns include:

- **Outdated or deprecated components**: using software components that lack security patches or updates, leaving systems exposed to exploitation. This is particularly relevant for machine learning systems, where dependencies often include pre-trained models and data-processing libraries that may not be actively maintained.
- **Manipulated components**: employing components that have been tampered with, whether through malicious intent or accidental corruption, which can introduce security vulnerabilities and compromise system integrity.

## Vulnerability ID Nomenclature

Format: `AIS-XX-Y-NN`

- **AIS**: AIShield AI Spectra SAST assessment tag
- **XX**: framework tag
  - TF: TensorFlow
  - CP: Checkpoint
  - KR: Keras
  - PT: PyTorch
  - ON: ONNX
  - GU: GGUF
  - MI: Misc
  - ST: Safetensors
  - PY: Python file (.py) or notebook (.ipynb)
- **Y**: risk tag
  - D: deserialization
  - B: backdoor
  - R: runtime
  - S: security
- **NN**: unique number

## Risk Overview

- **Deserialization risks**: occur when unverified data is used to rebuild objects. Attackers may exploit these to introduce malicious code, compromising system integrity.
- **Backdoor risks**: hidden pathways allow attackers to manipulate model behavior through specific triggers. These covert exploits remain undetected during normal operations.
- **Runtime risks**: activated during model inference or task execution, runtime risks involve malicious code execution, leading to unauthorized access or manipulation.
- **Security risks**: risks due to security-related aspects embedded in files or code that is inadvertently used.
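The deserialization risk class (e.g. AIS-PT-D-01, AIS-PK-D-01) can be illustrated with a minimal, self-contained sketch. The class name `MaliciousPayload` and the opcode list are illustrative, not part of the scanner: pickle-based formats such as PyTorch `.pt`/`.pth` and scikit-learn `.pkl` let any object define `__reduce__`, which makes `pickle.load()` invoke an arbitrary callable. A scanner can flag this statically by walking the opcode stream without ever deserializing:

```python
import pickle
import pickletools

# Illustrative payload: __reduce__ tells pickle to call an arbitrary
# callable at load time. A real attack would call os.system or similar;
# print() keeps the demonstration harmless.
class MaliciousPayload:
    def __reduce__(self):
        return (print, ("this ran inside pickle.loads()",))

blob = pickle.dumps(MaliciousPayload())

# Never call pickle.loads() on untrusted data. Instead, inspect the
# opcode stream: GLOBAL/STACK_GLOBAL import a callable and REDUCE
# invokes it, so their presence in a "weights" file is suspicious.
ops = {opcode.name for opcode, arg, pos in pickletools.genops(blob)}
flagged = ops & {"GLOBAL", "STACK_GLOBAL", "REDUCE"}
print(sorted(flagged))
```

Opcode inspection is safe because `pickletools.genops` only parses the byte stream; it never constructs objects or imports anything.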
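Backdoor findings such as the malicious-layer checks (AIS-TF-B-02) can be approximated by a static look at the serialized architecture. This sketch is an assumption about how such a check might work, not the product's implementation: Keras saves the model architecture as JSON, and `Lambda` layers embed marshalled Python code that executes when the model is loaded or called, so their presence in an untrusted file is worth flagging. The `demo` config and the deny-list are invented for illustration.

```python
import json

# Layer classes that can carry executable code (illustrative deny-list).
UNSAFE_LAYER_TYPES = {"Lambda", "TFOpLambda"}

def find_unsafe_layers(model_config_json: str) -> list[str]:
    """Return names of layers whose class can execute embedded code."""
    config = json.loads(model_config_json)
    layers = config.get("config", {}).get("layers", [])
    return [layer["config"].get("name", "?") for layer in layers
            if layer.get("class_name") in UNSAFE_LAYER_TYPES]

# Toy config resembling what Keras model.to_json() produces (abbreviated):
demo = json.dumps({
    "class_name": "Sequential",
    "config": {"layers": [
        {"class_name": "Dense", "config": {"name": "dense_1"}},
        {"class_name": "Lambda", "config": {"name": "sneaky_hook"}},
    ]},
})
print(find_unsafe_layers(demo))
```

As with the pickle case, the point is that the check reads the serialized description of the model rather than instantiating it.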
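The archive-level risks (zip trojans, AIS-MI-D-01, and safetensors shard path traversal, AIS-ST-D-02) share a common pattern: a member name containing `..` resolves outside the extraction directory ("zip-slip"), letting an attacker overwrite arbitrary files. A minimal sketch of such a check, assuming a simple join-and-normalize test rather than the scanner's actual logic:

```python
import io
import os
import zipfile

def unsafe_members(archive: zipfile.ZipFile, dest: str = "/tmp/extract") -> list[str]:
    """Return member names that would resolve outside the destination dir."""
    root = os.path.normpath(dest)
    bad = []
    for name in archive.namelist():
        target = os.path.normpath(os.path.join(dest, name))
        if not target.startswith(root + os.sep):
            bad.append(name)
    return bad

# Build an in-memory archive with one benign and one traversal member.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("model/weights.bin", b"ok")
    z.writestr("../../etc/crontab", b"pwned")  # traversal attempt

bad = unsafe_members(zipfile.ZipFile(buf))
print(bad)
```

The same name-resolution check applies to safetensors shard indexes, where per-shard filenames come from an attacker-controlled JSON index.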
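For the software-side security checks on `.py`, `.ipynb`, and requirements files (AIS-PY-S-01), one building block is comparing pinned dependencies against an advisory feed. The sketch below uses a hard-coded, made-up deny-list and a simple `name==version` parser; a real scanner would consult a vulnerability database such as OSV rather than a static set.

```python
# Invented advisory entries for illustration only.
KNOWN_BAD = {("evilpkg", "1.2.3"), ("leftpad-ml", "0.0.1")}

def parse_requirements(text: str) -> list[tuple[str, str]]:
    """Parse 'name==version' lines, skipping comments and blanks."""
    pins = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

def flag_bad(text: str) -> list[tuple[str, str]]:
    """Return pinned dependencies that appear on the deny-list."""
    return [pin for pin in parse_requirements(text) if pin in KNOWN_BAD]

reqs = """\
numpy==1.26.4
evilpkg==1.2.3   # injected by an attacker
"""
print(flag_bad(reqs))
```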