
AI Security Posture Management

AI-SPM addresses security risks inherent in machine learning systems across their lifecycle, from enforcing configuration baselines and detecting misconfigurations that create vulnerabilities to protecting deployed models.

Microsoft Purview AI Hub is designed to expand Microsoft Purview DSPM's capabilities by helping organizations protect sensitive data shared with AI applications. It pinpoints top AI risks while offering simple one-click policies designed to keep organizations safe.

Discover Sensitive or Confidential Information

AI applications have become an increasingly popular way for business users to access data that may be sensitive or confidential, yet when used without adequate protections they risk being exploited by hackers or employees with malicious intentions. Furthermore, regulatory compliance officers fear non-compliant AI usage could result in costly fines.

Microsoft Purview's AI Hub can assist in discovering and protecting sensitive information consumed or used by generative AI models within your organization. Integrated compliance tools such as Purview Audit capture Copilot interactions and support retention or deletion policies. The hub also provides visibility into how users interact with GenAI prompts and responses, including those of third-party GenAI applications, and helps prioritize protection policies according to the risk exposure of sensitive information in specific contexts.

The AI Hub is part of an overall data security posture management (DSPM) solution. DSPM takes a comprehensive approach to protecting data at rest, in transit, and during processing, using encryption, access controls, and classification. A subset of DSPM called AI-SPM specifically addresses AI models, algorithms, and systems, covering concerns such as secure model training, defense against attacks, and the explainability of machine learning models. Once the AI Hub is set up, you can quickly start collecting insights about your AI-related data and then apply protective measures, with a straightforward set of prerequisites and preconfigured reports and policies ready to go.

Mitigate Risk

AI Security Posture Management (AI-SPM) gives enterprises full visibility into the risks associated with AI applications and lets them mitigate those risks by enforcing configuration baselines and detecting misconfigurations across their entire AI consumption footprint. It also analyzes AI supply chains (the models, libraries, and APIs used by each application) for potential vulnerabilities and threats, such as exposed encryption keys, weak logging settings, or missing authentication controls.

Additionally, the platform monitors generative AI applications to identify any breaches or unauthorized access to sensitive information. DevOps security plays an essential role here by detecting IaC misconfigurations, such as SageMaker endpoints being exposed to the internet or private generative AI apps accessing public IP addresses, and offering remediation advice to improve AI security posture. Attack path analysis helps teams identify and address risks in AI workloads, particularly during the grounding stage (linking an AI model to specific data) and the fine-tuning stage (adjusting a pretrained model's performance on specific datasets for optimized task completion), where potentially sensitive information might become exposed.
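To make the IaC misconfiguration checks above concrete, here is a minimal sketch of the kind of rule such a scanner applies. The resource format, type names, and property keys are illustrative assumptions, not an actual Purview or cloud-provider schema:

```python
# Hypothetical simplified JSON-style representation of cloud resources;
# real IaC scanners parse Terraform/CloudFormation, not this format.

def find_misconfigurations(resources):
    """Flag AI-related resources with risky network exposure."""
    findings = []
    for res in resources:
        rtype = res.get("type", "")
        props = res.get("properties", {})
        # A SageMaker-style endpoint reachable from the public internet.
        if rtype == "sagemaker_endpoint" and props.get("publicly_accessible"):
            findings.append((res["name"], "endpoint exposed to the internet"))
        # A private generative AI app configured with a public IP address.
        if rtype == "genai_app" and props.get("public_ip") is not None:
            findings.append((res["name"], "private app bound to a public IP"))
    return findings

resources = [
    {"name": "model-endpoint", "type": "sagemaker_endpoint",
     "properties": {"publicly_accessible": True}},
    {"name": "internal-chatbot", "type": "genai_app",
     "properties": {"public_ip": "203.0.113.7"}},
    {"name": "training-bucket", "type": "storage",
     "properties": {"publicly_accessible": False}},
]

for name, issue in find_misconfigurations(resources):
    print(f"{name}: {issue}")
```

A production scanner would evaluate hundreds of such rules and attach remediation advice to each finding; the structure, however, is the same: match a resource type, test a risky property, report.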

In contrast to traditional cloud security posture management (CSPM), which focuses on protecting data at rest and in transit, AI-SPM specializes in safeguarding AI training data to address the unique security challenges of artificial intelligence. This may include preventing ad hoc training sessions and securing model training to reduce bias within AI algorithms.

Maintain Compliance

Data security posture management (DSPM) is a cybersecurity technology that helps organizations protect sensitive data against breaches and regulatory noncompliance. DSPM works by identifying and monitoring sensitive information across multi-cloud environments, then using that visibility and automation to prevent or block risky applications based on policies set by the organization.
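The policy-driven blocking described above can be sketched as a small decision function. The policy format, category names, and sensitivity labels here are hypothetical; real DSPM products define policies through their own portals and classifiers:

```python
# Hypothetical policy: block unsanctioned GenAI apps outright, and block
# any AI app handling data above a configured sensitivity level.
POLICY = {
    "blocked_categories": {"unsanctioned_genai"},
    "max_sensitivity_for_ai": "general",
}

SENSITIVITY_RANK = {"general": 0, "confidential": 1, "highly_confidential": 2}

def evaluate(app):
    """Return 'block' or 'allow' for an app touching classified data."""
    if app["category"] in POLICY["blocked_categories"]:
        return "block"
    limit = SENSITIVITY_RANK[POLICY["max_sensitivity_for_ai"]]
    if SENSITIVITY_RANK[app["data_sensitivity"]] > limit:
        return "block"
    return "allow"

apps = [
    {"name": "shadow-chatbot", "category": "unsanctioned_genai",
     "data_sensitivity": "general"},
    {"name": "copilot", "category": "sanctioned_genai",
     "data_sensitivity": "confidential"},
    {"name": "wiki-search", "category": "sanctioned_genai",
     "data_sensitivity": "general"},
]

for app in apps:
    print(app["name"], evaluate(app))
```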

Machine Learning Security Operations (MLSecOps) is an umbrella term for the practices and tools designed to secure all phases of the machine learning (ML) lifecycle. These practices protect the data used for training ML pipelines and models; harden models at runtime during inference; use secure enclaves and prompt shields to safeguard confidential inferencing; and manage threats such as adversarial input perturbation attacks or model stealing. AI security posture management complements this by continuously monitoring for those same AI-specific threats.
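To illustrate what an adversarial input perturbation actually does, here is a minimal FGSM-style example against a toy linear classifier, using only NumPy. The model, weights, and data are invented for illustration and have no connection to any product mentioned here:

```python
import numpy as np

def predict(w, b, x):
    """Score > 0 means class 1, otherwise class 0."""
    return float(np.dot(w, x) + b)

def fgsm_perturb(w, x, epsilon):
    """Nudge x in the direction that most increases the score.

    For a linear model the gradient of the score w.r.t. x is just w,
    so the fast-gradient-sign step is epsilon * sign(w).
    """
    return x + epsilon * np.sign(w)

w = np.array([1.0, -2.0, 0.5])
b = -0.2
x = np.array([0.1, 0.3, 0.2])            # originally classified as class 0

score_before = predict(w, b, x)          # negative score -> class 0
x_adv = fgsm_perturb(w, x, epsilon=0.3)  # small, bounded perturbation
score_after = predict(w, b, x_adv)       # score flips sign -> class 1

print(score_before, score_after)
```

A perturbation of at most 0.3 per feature flips the classification, which is exactly the failure mode that runtime model hardening and AI-SPM monitoring aim to detect.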

Microsoft Purview AI Hub's Adaptive Protection feature automatically detects when users visit generative AI sites and surfaces an indicator in the insider risk dashboard. This enables security teams to set up endpoint DLP rules that alert on high-risk users if they attempt to paste sensitive or confidential information into third-party GenAI apps or Copilot prompts, helping to prevent accidental exposure of sensitive information or breaches of compliance policies covering areas such as corporate sabotage, hate and discrimination, and money laundering.
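The kind of check such an endpoint DLP rule performs can be sketched with a simple pattern scan. The two regex patterns below are illustrative stand-ins; real Purview DLP uses managed sensitive-information types and trainable classifiers, not a hand-written pattern list:

```python
import re

# Hypothetical sensitive-information types for illustration only.
SENSITIVE_PATTERNS = {
    "credit card number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "US SSN":             re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_scan(pasted_text):
    """Return the names of sensitive-info types found in pasted text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(pasted_text)]

prompt = "Summarize this: card 4111 1111 1111 1111, SSN 123-45-6789"
hits = dlp_scan(prompt)
if hits:
    print("Blocked paste; detected:", ", ".join(hits))
```

In a deployed rule, a non-empty result would trigger a block or a user warning and raise the user's insider risk indicator rather than just printing a message.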

Get Started

Microsoft Purview's AI Hub helps security and compliance teams detect risky AI usage involving sensitive data, mitigate those risks through comprehensive protection and governance controls, and meet regulatory compliance requirements. Featuring an easy-to-use graphical dashboard with reports and one-click policies to minimize data sharing via Copilot or other AI assistants, this free service is accessible to all members of your organization.

As organizations adopt AI and ML in the cloud, they require visibility into their AI attack surface. Unfortunately, most organizations do not know exactly how much data and intellectual property are passing through AI applications or being leaked from unprotected cloud resources such as training data, models or notebooks; and more than 80% of leaders cite leakage of sensitive business critical data as one of their top concerns with adopting AI technologies.

Advanced security platforms, including Wiz, have begun offering AI Security Posture Management (AI-SPM). Similar to Cloud Security Posture Management (CSPM), AI-SPM scans all exposed cloud resources that AI services or products use. It also includes specialized inspections to detect AI-specific vulnerabilities and attacks, such as adversarial input perturbations and model theft.

Wiz's AI-SPM is part of a wider framework known as Machine Learning Security Operations (MLSecOps), which supports all aspects of the machine learning lifecycle, from grounding and training through deployment and monitoring. It enables proactive response to threats with built-in recommendations and contextualized vulnerability prioritization capabilities.
