Defender for Storage: Malware Scan Error Message Update
July 25, 2025New Surface Laptop 5G for Business, Copilot+ PC
July 25, 2025
Recent IDC Study finds that 75% of knowledge workers use Generative AI at work. This can be tools approved by the Enterprise or Publicly available. Although, as organizations leverage AI to enhance productivity and decision-making, risk associated with AI vulnerabilities, data exposure, attacks, and misuse have become increasing targeted. Understanding these challenges, Microsoft has approached AI security with a platform architecture approach. This architecture includes end-to-end coverage across its platform.
Drew Nicholas and I wrote this blog specifically for the decision makers in the Enterprises who are grappling with the threats that are either resurfacing (like Shadow AI) or are emerging as your Enterprise explores AI adoption. We discuss Microsoft’s robust AI security story, highlighting the platform approach.
Discovery of Shadow AI Application
Shadow AI emerges when employees independently adopt generative AI tools—like chatbots, content generators, or SaaS-based AI assistants—for productivity gains, without going through sanctioned IT channels. This behavior mirrors the older concept of “shadow IT,” but with heightened risks due to AI’s ability to process and potentially leak sensitive data.
As a result you are faced with the following risks:
- Data Leakage: Employees may unknowingly share sensitive data—like PII or intellectual property—with third-party AI tools that lack proper safeguards.
- Compliance Violations: Use of unsanctioned AI apps can breach regulations like GDPR or SOC 2 if data is processed or stored improperly.
- Security Gaps: These tools often operate outside the visibility of security teams, making it difficult to detect misuse or malicious behavior.
- Training Risk: Data shared with AI tools may be used to train external models, leading to unintended exposure of proprietary information.
Securing AI begins with discovery, Shadow AI has become increasingly prevalent within organizations as unapproved AI applications are being adopted more widely. Unfortunately, these tools pose significant risk as they are circumventing established security polices and many times interacting with sensitive data.
To discover shadow AI, Microsoft uses its Cloud Application Security Broker solution Defender for Cloud Apps in conjunction with its Endpoint Detection and Response Tool Microsoft Defender for Endpoint. This cohesiveness allows for visibility of the network traffic even off the internal network through Defender for Endpoint which communicates with Defender for Cloud Apps to enable organizations to easily detect AI usage at both the device and the network level. Also allow for policy controls to mitigate risk associated with unauthorized AI usage.
These solutions form the foundation of a secure AI adoption strategy, ensuring that enterprises can confidently embrace AI while maintaining governance and compliance.
Data Security Posture Management (DSPM) for AI
As AI systems—especially generative AI like Microsoft 365 Copilot—interact with vast amounts of enterprise data, DSPM (Microsoft Purview – Data Security Posture Management (DSPM) for AI | Microsoft Community Hub) becomes essential to ensure that sensitive information is not inadvertently exposed, misused, or leaked. The core reasons include:
- Oversharing Risk
AI tools often access large datasets to generate responses. Without DSPM, there’s a high risk of oversharing sensitive data. The DSPM for AI – Demo slides outlines how Microsoft Purview can proactively assess and mitigate this risk by identifying oversharing patterns and applying sensitivity labels automatically.
- Visibility Across Data Estate
AI amplifies the need for visibility into where sensitive data resides. DSPM enables organizations to discover and classify data across Microsoft 365 and third-party sources.
- Governance and Compliance
AI usage must align with compliance frameworks (e.g., GDPR, HIPAA). DSPM helps enforce governance by surfacing non-compliant AI interactions and recommending corrective actions.
- Third-Party AI Risk
DSPM is also vital for managing data exposure to third-party AI tools. This ensures that external integrations don’t become blind spots in your security posture.
- Preparation for AI Deployments
Before rolling out AI tools like Copilot, DSPM helps assess readiness by identifying data that should be restricted or labeled.
Microsoft’s approach to data security for AI is built on Microsoft Purview Data Security Posture Management (DSPM) for AI.
Key capabilities include:
- Insights and analytics into AI activity in your organization
- Ready-to-use policies to protect data and prevent data loss in AI prompts.
- Data risk assessments to identify, remediate and monitor personal oversharing of data.
With these tools, Microsoft empowers organizations to safeguard their most critical assets, ensuring that AI adoption does not come at the expense of data privacy and security.
For your AI Workloads that are built on Azure, GCP, and AWS you can use Defender for Cloud’s AI Posture Management capabilities (part of DCSPM)
AI Security Posture Management – https://learn.microsoft.com/en-us/azure/defender-for-cloud/ai-security-posture
AI-SPM helps identify misconfigurations, exposed endpoints, and insecure pipelines.
This offering provides:
- Support for Azure, AWS, and Google Vertex AI, ensuring comprehensive coverage across diverse cloud environments. This includes coverage of Azure OpenAI Service, Azure AI Foundry and Azure Machine Learning
- Ability to discover generative AI Bill of Materials (AI BOM), which includes application components, data, and AI artifacts from code to cloud.
- Using the attack path analysis to identify and remediate risk.
- The ability to discover misconfigured AI models, exposed endpoints, and insecure pipelines, enabling initiative-taking remediation of vulnerabilities.
By enhancing the security posture of AI models and applications, Microsoft ensures that organizations can innovate securely, without compromising their operational integrity.
Proactive and continuous testing of AI workload
As organizations accelerate their adoption of generative AI, Microsoft’s AI Red Teaming Agent emerges as a critical safeguard for building trustworthy and secure AI systems. These agents simulate adversarial attacks—such as prompt injections, data leakage, and misuse scenarios—across the AI lifecycle, helping teams proactively identify and mitigate vulnerabilities before they reach production.
Integrated into Azure AI Foundry and powered by Microsoft’s open-source PyRIT toolkit, the Red Teaming Agent automates risk evaluations, generates attack success metrics, and produces detailed scorecards to guide remediation efforts.
This capability is especially valuable for security teams, AI developers, and compliance officers who need to ensure their AI models are resilient, compliant, and aligned with responsible AI principles. By embedding red teaming into the development pipeline, organizations not only reduce risk but also build stakeholder trust and accelerate safe AI innovation.
https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/ai-red-teaming-agent
Stopping Generative AI Threats in Runtime
AI systems encounter a range of advanced threats, such as prompt injections, data poisoning, and inference-time abuse. Microsoft’s Defender for AI Services provide robust runtime protection to counter these threats.
Core functionalities include:
- Real-time monitoring of AI systems against activities such as jailbreak attempts, credential theft, access from suspicious or TOR Ips, and many more to detect and mitigate attacks as they occur. Overview – AI threat protection – Microsoft Defender for Cloud | Microsoft Learn
This capability ensures that AI models are resilient to emerging threats, safeguarding their reliability and integrity.
Unified SOC Integration
To streamline and enhance AI security operations, Microsoft integrates its offerings into a Unified Security Operations Center (SOC) experience. Through Microsoft Security Copilot, organizations gain access to autonomous agents that deliver real-time insights and automated responses.
Key benefits include:
- Support for phishing, identity, and AI risk management, enabling comprehensive coverage of security operations.
- Empowering SOC teams with actionable intelligence and automation, enhancing their ability to detect, respond to, and mitigate threats across the AI lifecycle.
This integration transforms the SOC into a hub of AI-driven security excellence, enabling organizations to stay ahead of evolving threats.
End to End Coverage
msft-security-for-ai-whitepaper-with-signature.pdf
Microsoft’s AI security story is underpinned by a commitment to delivering seamless, end-to-end protection.
From the endpoint to the network, from data to models, and from users to SOC, Microsoft’s platform offers a unified and comprehensive security architecture. Key highlights include:
- An AI-first approach that aligns with Zero Trust principles, ensuring that every component of the security framework is built to withstand modern threats.
- Global scalability, enabling organizations of all sizes and industries to adopt Microsoft’s security solutions with confidence.
Call to action
As a decision maker, your next steps are critical to ensuring your organization’s safe and successful adoption of AI. We recommend you should:
- Initiate a Comprehensive AI Security Assessment
Evaluate your current AI landscape for shadow AI usage, data exposure risks, and compliance gaps. Leverage tools like Microsoft Defender for Cloud Apps and Microsoft Defender for Endpoint to discover unsanctioned AI applications and enforce policy controls.
- Implement Data Security Posture Management (DSPM) for AI
Deploy Microsoft Purview DSPM to gain visibility into your data estate, classify sensitive information, and apply automated policies that prevent oversharing and ensure compliance with regulations such as GDPR and HIPAA.
- Strengthen AI Security Posture Across All Cloud Environments
Utilize Defender for Cloud’s AI Posture Management to identify misconfigurations, exposed endpoints, and insecure pipelines across Azure, AWS, and Google Cloud. Ensure your AI Bill of Materials (AI BOM) is documented and secure.
- Adopt Proactive AI Red Teaming and Continuous Testing
Integrate Microsoft’s AI Red Teaming Agent into your development pipeline to simulate adversarial attacks, identify vulnerabilities, and build resilient, trustworthy AI systems before they reach production.
- Enable Real-Time AI Threat Protection
Activate Microsoft Defender for AI Services to monitor and protect your AI workloads against runtime threats such as prompt injections, data poisoning, and inference-time abuse.
- Unify Security Operations with AI-Driven SOC Integration
Empower your security operations center with Microsoft Security Copilot and autonomous agents for real-time insights, automated responses, and comprehensive risk management across the AI lifecycle.
- Embrace an End-to-End, Zero Trust Security Approach
Align your AI security strategy with Zero Trust principles, ensuring protection from endpoint to cloud, and from data to models, for global scalability and resilience.
Take action now:
- Schedule an internal review of your AI security posture.
- Engage with your IT and security teams to deploy the recommended Microsoft solutions.
- Review the Securing AI ebook https://marketingassets.microsoft.com/gdc/gdcpXzm6u/original
- Download and review the Microsoft Security for AI Whitepaper for a deeper dive into best practices and implementation guidance.
- By acting decisively, you will position your organization to innovate with AI—securely, responsibly, and with confidence