There’s a pattern playing out inside almost every engineering organization right now. A developer installs GitHub Copilot to ship code faster. A data analyst starts querying a new LLM tool for reporting.
A product team quietly embeds a third-party model into a feature branch. By the time the security team hears about any of it, the AI is already running in production — processing real data, touching real systems, making real decisions. That gap between how fast AI enters an organization and how slowly governance catches up is exactly where risk lives.
According to ‘AI Security Governance: A Practical Framework for Security and Development Teams,’ a new practical framework guide from Mend, most organizations still aren’t equipped to close it. The guide doesn’t assume you already have a mature security program built around AI. It assumes you’re an AppSec lead, an engineering manager, or a data scientist trying to figure out where to start — and it builds the playbook from there.
The Inventory Problem

The framework begins with the critical premise that governance is impossible without visibility (‘you cannot govern what you cannot see’). To ensure that visibility, it defines ‘AI assets’ broadly, covering everything from AI development tools (like Copilot and Codeium) and third-party APIs (like OpenAI and Google Gemini) to open-source models, AI features in SaaS tools (like Notion AI), internal models, and autonomous AI agents. To address ‘shadow AI’ (tools in use that security hasn’t approved or catalogued), the framework stresses that discovery must be a non-punitive process, so that developers feel safe disclosing the tools they use.

A Risk Tier System That Actually Scales

The framework uses a risk tier system to categorize AI deployments instead of treating them all as equally dangerous.
Each AI asset is scored from 1 to 3 across five dimensions: Data Sensitivity, Decision Authority, System Access, External Exposure, and Supply Chain Origin. The total score determines the required governance:

Tier 1 (Low Risk): Scores 5–7, requiring only standard security review and lightweight monitoring.

Tier 2 (Medium Risk): Scores 8–11, which triggers enhanced review, access controls, and quarterly behavioral audits.
Tier 3 (High Risk): Scores 12–15, which mandates a full security assessment, design review, continuous monitoring, and a deployment-ready incident response playbook.

Note that a model’s risk tier can shift dramatically (e.g., from Tier 1 to Tier 3) without any change to its underlying code, based on integration changes like adding write access to a production database or exposing the model to external users.

Least Privilege Doesn’t Stop at IAM

The framework emphasizes that most AI security failures stem from poor access control, not flaws in the models themselves.
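As an aside, the tier arithmetic described earlier maps directly to code. A minimal sketch, assuming each of the five dimensions is scored 1–3 (the data structures and function name are illustrative, not from the guide):

```python
# Illustrative sketch of the guide's risk-tier arithmetic.
# Dimension names come from the framework; everything else is an assumption.

DIMENSIONS = (
    "data_sensitivity",
    "decision_authority",
    "system_access",
    "external_exposure",
    "supply_chain_origin",
)

def risk_tier(scores: dict) -> int:
    """Map five 1-3 dimension scores to a governance tier (1-3)."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError(f"expected scores for exactly {DIMENSIONS}")
    if not all(1 <= s <= 3 for s in scores.values()):
        raise ValueError("each dimension is scored 1-3")
    total = sum(scores.values())  # ranges 5-15
    if total <= 7:
        return 1   # standard review, lightweight monitoring
    if total <= 11:
        return 2   # enhanced review, access controls, quarterly audits
    return 3       # full assessment, continuous monitoring, IR playbook
```

Bumping a single dimension, say system_access after wiring the model to a production database, can push the total across a tier boundary, which is exactly the code-unchanged tier shift the guide warns about.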
To counter this, it mandates applying the principle of least privilege to AI systems—just as it would be applied to human users. This means API keys must be narrowly scoped to specific resources, shared credentials between AI and human users should be avoided, and read-only access should be the default where write access is unnecessary. Output controls are equally critical, as AI-generated content can inadvertently become a data leak by reconstructing or inferring sensitive information.
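As an illustration of what an output control can look like in practice, here is a minimal regex-based redaction pass over model output. The patterns are deliberately simplified stand-ins (real SSN, card-number, and API-key detection needs checksums, context, and entropy analysis), and none of this code comes from the guide:

```python
import re

# Simplified illustrative patterns; production filters need far more
# robust detection than bare regexes.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace regulated data patterns in model output with placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text
```

A filter like this would sit between the model and the caller, running on every response before it crosses the service boundary.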
The framework requires output filtering for regulated data patterns (such as SSNs, credit card numbers, and API keys) and insists that AI-generated code be treated as untrusted input, subject to the same security scans (SAST, SCA, and secrets scanning) as human-written code.

Your Model is a Supply Chain

When you deploy a third-party model, you’re inheriting the security posture of whoever trained it, whatever dataset it learned from, and whatever dependencies were bundled with it. The framework introduces the AI Bill of Materials (AI-BOM) — an extension of the traditional SBOM concept to model artifacts, datasets, fine-tuning inputs, and inference infrastructure.
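One way to picture an AI-BOM entry is as a structured record alongside your existing SBOM tooling. A minimal sketch; the field names paraphrase the guide's checklist, but the schema itself (and the sample values) are assumptions for illustration, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class AIBOMEntry:
    """Illustrative AI-BOM record; fields paraphrase the guide's checklist."""
    model_name: str
    model_version: str
    source: str  # registry, vendor, or repository of origin
    training_data_refs: list[str] = field(default_factory=list)
    fine_tuning_datasets: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)  # software deps
    inference_infrastructure: list[str] = field(default_factory=list)
    known_vulnerabilities: dict[str, str] = field(default_factory=dict)  # id -> remediation status

# Hypothetical asset, for illustration only:
entry = AIBOMEntry(
    model_name="example-summarizer",
    model_version="1.2.0",
    source="internal-registry",
    dependencies=["torch", "transformers"],
    known_vulnerabilities={"CVE-XXXX-YYYY": "patched"},
)
```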
A complete AI-BOM documents model name, version, and source; training data references; fine-tuning datasets; all software dependencies required to run the model; inference infrastructure components; and known vulnerabilities with their remediation status. Several emerging regulations and frameworks — including the EU AI Act and the NIST AI RMF — explicitly reference supply-chain transparency requirements, making an AI-BOM useful for compliance regardless of which framework your organization aligns to.

Monitoring for Threats Traditional SIEM Can’t Catch

Traditional SIEM rules, network-based anomaly detection, and endpoint monitoring don’t catch the failure modes specific to AI systems: prompt injection, model drift, behavioral manipulation, or jailbreak attempts at scale.
The framework defines three distinct monitoring layers that AI workloads require. At the model layer, teams should watch for prompt injection indicators in user-supplied inputs, attempts to extract system prompts or model configuration, and signifi