With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation.
See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work.
Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security.
Move from detection to prevention.
Built-in, pre-configured policies you can activate in seconds. Check out DSPM for AI.
Monitor risky usage and take action.
Block risky users from uploading sensitive data into AI apps. See how to use DSPM for AI.
Set instant guardrails.
Use DSPM for AI to identify AI agents that may be at risk of data oversharing and take action. Get started.
QUICK LINKS:
00:00 — AI app security, governance, & compliance
01:30 — Take Action with DSPM for AI
02:08 — Activity logging
02:32 — Control beyond Microsoft services
03:09 — Use DSPM for AI to monitor data risk
05:06 — ChatGPT Enterprise
05:36 — Set AI Agent guardrails using DSPM for AI
06:44 — Data oversharing
08:30 — Audit logs
09:19 — Wrap up
Link References
Check out https://aka.ms/SecureGovernAI
Unfamiliar with Microsoft Mechanics?
Microsoft Mechanics is Microsoft’s official video series for IT. Watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
- Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
- Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
- Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
- Follow us on Twitter: https://twitter.com/MSFTMechanics
- Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
- Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
- Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics
Video Transcript:
-Do you have a good handle on the data security risks introduced by the growing number of GenAI apps inside your organization? Today, 78% of users are bringing their own AI tools, often consumer-grade, to use as they work, bypassing the data security protections you’ve set. And now, combined with the increased use of agents, it can be hard to know what data is being used in AI interactions and to keep valuable data from leaking outside of your organization.
-In the next few minutes, I’ll show you how enterprise-grade data security, governance, and compliance can go hand in hand with GenAI adoption inside your organization with Data Security Posture Management for AI in Microsoft Purview. This single solution not only gives you automatic visibility into Microsoft Copilot and the custom apps and agents in use inside your organization, but also extends visibility into AI interactions happening across different non-Microsoft AI services that may be in use. Risk analytics then help you see at a glance what’s happening with your data, with a breakdown of the top unethical AI interactions and sensitive data interactions per AI app, along with how employees are interacting with apps based on their risk profile: high, medium, or low. And specifically for agents, we also provide dedicated reports to expose the data risks posed by agents in Microsoft 365 Copilot and maker-created agents from Copilot Studio. And visibility is just one half of what we give you. You can also take action.
-Here, DSPM for AI provides you proactive recommendations to help you take immediate action to enhance your data security and compliance posture right from the service using built-in and pre-configured Microsoft Purview policies. And with all AI interactions audited, not only do you get the visibility I just showed, but the data is automatically captured for data lifecycle management, eDiscovery, and Communication Compliance investigations. In fact, clicking on this one recommendation for compliance controls can help you set up policies in all these areas.
-Now, if you’re wondering how activity signals from AI apps and agents flow into DSPM for AI in the first place, the good news is that for the AI apps and agents you build with either Microsoft Copilot services or with Azure AI, even if you haven’t configured a single policy in Microsoft Purview, activity logging is enabled by default, and built-in reports are generated for you out of the gate. As I showed, visibility and control extend beyond Microsoft services as soon as you take proactive action. Directly from DSPM for AI, the fortify data security recommendation, for example, when activated, leverages Microsoft Purview’s built-in classifiers under the covers to detect sensitive data and to log interactions from local app traffic over the network, as well as at the device level to protect file system interactions on Microsoft Purview-onboarded PCs and Macs, and even in web-based apps running in Microsoft Edge, helping prevent risky users from leaking sensitive data.
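To make the classification step less abstract, here is a minimal Python sketch of the general idea behind sensitive information type matching. The patterns and names are purely illustrative; Purview’s real classifiers combine patterns, keywords, checksums, and confidence levels and are not exposed as code.

```python
import re

# Illustrative patterns only -- not Purview's actual sensitive info
# type definitions, which are far more robust.
SENSITIVE_INFO_TYPES = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "U.S. Social Security Number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of sensitive info types detected in the text."""
    return [name for name, pattern in SENSITIVE_INFO_TYPES.items()
            if pattern.search(text)]

prompt = "Charge it to card 4111 1111 1111 1111, please."
matches = classify(prompt)
if matches:
    # In Purview, a match like this would be logged or blocked by policy.
    print(f"Sensitive data detected: {', '.join(matches)}")
```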
-Next, with insights now flowing in, let me walk you through how you can use DSPM for AI every day to monitor your data risks and take action. I’ll start again from reports in the overview to look at GenAI apps that are popular in our organization. Especially concerning are the apps in use by my riskiest users, who are interacting with popular consumer apps like DeepSeek and Google Gemini. ChatGPT consumer is at the top of the list, and it’s not a managed app for our organization. It’s brought in by users who are either using it for free or with a personal license, but what’s really concerning is that it has the highest number of risky users interacting with it, which could increase our risk of data loss. Now, my first inclination might be to block usage of the app outright. That said, if I scroll back up, I can instead see a proactive recommendation to prevent sensitive data exfiltration in ChatGPT with adaptive protection.
-Clicking in, I can see the types of sensitive data shared by users and their prompts. Creating this policy will log the actions of minor-risk users and block high-risk users from typing in or uploading sensitive information into ChatGPT. I can also choose to customize this policy further, but I’ll keep what’s there and confirm. And with the policies activated, now let me show you the result. Here we have a user with an elevated risk level. They’re entering sensitive information into the prompt, and when they submit it, they are blocked. On the other hand, when a user with a lower risk level enters sensitive information and submits their prompt, they’re informed that their actions are being audited.
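For intuition, the adaptive protection behavior just demonstrated can be sketched as a simple decision rule that maps a user’s insider risk level to a policy action. This is a conceptual Python sketch only; the real policy is configured in Microsoft Purview rather than written as application code, and the names here are hypothetical.

```python
from enum import Enum

class RiskLevel(Enum):
    MINOR = 1
    MODERATE = 2
    ELEVATED = 3

def enforce(risk: RiskLevel, has_sensitive_data: bool) -> str:
    """Hypothetical sketch of an adaptive protection decision."""
    if not has_sensitive_data:
        return "allow"
    if risk is RiskLevel.ELEVATED:
        return "block"   # high-risk users are blocked outright
    return "audit"       # lower-risk users proceed, but the action is logged

print(enforce(RiskLevel.ELEVATED, True))  # -> block
print(enforce(RiskLevel.MINOR, True))     # -> audit
```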
-Next, as an admin, let me show you how this activity was audited. From DSPM for AI in the Activity Explorer, I can see all interactions and any matching sensitive information types. Here’s the activity we just saw, and I can click into it to see more details, including exactly what was shared in the user’s prompt. Now for ChatGPT Enterprise, there’s even more visibility due to the deep API integration with Microsoft Purview. By selecting this recommendation, you can register your ChatGPT Enterprise workspace to discover and govern AI interactions. In fact, this recommendation walks you through the setup process. Then with the interactions logged in Activity Explorer, not only are you able to see what prompts were submitted, but you can also get complete visibility into the generated responses.
-Next, with the rapid development of AI agents, let me show you how you can use DSPM for AI to discover and set guardrails around information used with your user-created agents. Clicking on agents takes you to a filtered view. Immediately, I can see indicators of a potential oversharing issue. This is where data access permissions may be too broad and where not enough of my data is labeled with corresponding protections. I can also see the total agent interactions over time and the top five agents open to internet users, with interactions by unauthenticated or anonymous users. This is where people outside of my organization are interacting with agents grounded on my organization’s data, which can expose sensitive information.
-I can also quickly see a breakdown of sensitive interactions per agent along with the top sensitivity labels referenced to get an idea of the type of data in use and how well protected it is. To find out more, from the Activity Explorer, I can see in this AI interaction, the agent was invoked in Copilot Chat, and I can view the agent’s details and see the prompt and response just like before. Now what I really want to do is to take a closer look at the potential data oversharing issue that was flagged. For that, I’ll return to my dashboard and click into the default assessment. These run every seven days, scanning files containing sensitive data and identifying where those files are located, such as SharePoint sites with overly permissive user access.
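As an aside, the label coverage indicator these assessments surface can be made concrete with a small illustrative calculation. The site names, counts, and threshold below are invented for the example.

```python
# Toy oversharing indicator: what fraction of sensitive files on a site
# carry a sensitivity label? (Illustrative data and threshold only.)
sites = {
    "Obsidian Merger": {"sensitive_files": 120, "labeled_files": 30},
    "HR Onboarding":   {"sensitive_files": 40,  "labeled_files": 38},
}

# Flag sites where less than 80% of sensitive files are labeled.
COVERAGE_THRESHOLD = 0.8

for name, stats in sites.items():
    coverage = stats["labeled_files"] / stats["sensitive_files"]
    flag = "REVIEW" if coverage < COVERAGE_THRESHOLD else "ok"
    print(f"{name}: {coverage:.0%} labeled [{flag}]")
```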
-And I can dig into the details. I’ll click into the top one for “Obsidian Merger” and I can see label coverage for the data within it. And in the protect tab, there are eight sensitivity labels and five that are referenced by Copilot and agents. Since I want agents to honor data classifications and their related protections, I can configure recommended policies. The most stringent option is to restrict all items, removing the entire site from view of Copilot and agents. Or for more granular controls, I also have a few more options. I can create default sensitivity labels for newly created items, or if I move back to the top-level options, I have the option to “Restrict Access by Label.” The Obsidian Merger information is highly privileged, and even if you’re on the core team working on it, we don’t want agents to reason over the information, so I’ll pick this label option.
-From there, I need to extend the list of sensitivity labels and I’ll select Obsidian Merger, then confirm to create the policy. And this will now block the agent from reasoning over the content that includes the Obsidian Merger label. In fact, let’s look at the policy in action. Here you can see the user is asking the Copilot agent to summarize the Project Obsidian M&A doc and even though they are the owner and author of the file, the agent cannot reason over it. It responds, “Unfortunately, I can’t provide detailed information because the content is protected.”
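Conceptually, the effect of the “Restrict Access by Label” policy is a gate applied before the agent grounds a response on a document. This toy Python sketch mirrors the demo; the label name comes from the walkthrough, while the function and data structures are hypothetical, not a Purview API.

```python
# Hypothetical check a policy layer might perform before an agent
# reasons over a document. "Obsidian Merger" matches the demo label.
RESTRICTED_LABELS = {"Obsidian Merger"}

def agent_can_reason_over(doc_label: str | None) -> bool:
    """Return False if the document carries a restricted sensitivity label."""
    return doc_label not in RESTRICTED_LABELS

doc = {"title": "Project Obsidian M&A plan", "label": "Obsidian Merger"}
if not agent_can_reason_over(doc["label"]):
    # Matches the behavior shown in the demo, even for the file's owner.
    print("Unfortunately, I can't provide detailed information "
          "because the content is protected.")
```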
-As I mentioned, for both your agents and GenAI apps across Microsoft and non-Microsoft services, all activity is recorded in Audit logs to help conduct investigations whenever needed. In fact, DSPM for AI logged activity flows directly into Microsoft Purview’s best-in-class solutions: Insider Risk Management, letting your security teams detect risky AI prompts as part of their investigations into risky users; Communication Compliance, to aid investigations into non-compliant use of AI interactions, such as a user trying to get sensitive information like an acquisition plan; and eDiscovery, where interactions across your Copilots, agents, and AI apps can be collected and reviewed to help conduct investigations and respond to litigation.
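For teams that want to pull these audit records programmatically, Microsoft Graph exposes an audit log query API. The sketch below assumes the security/auditLog/queries endpoint and the copilotInteraction record type; verify both, along with the required permissions, against the current Microsoft Graph documentation before relying on them.

```python
import requests

# Hedged sketch of a programmatic audit search via the Microsoft Graph
# Audit Log Query API. TOKEN is a placeholder for an access token
# authorized for audit log queries (e.g., AuditLogsQuery.Read.All).
TOKEN = "<access-token>"

resp = requests.post(
    "https://graph.microsoft.com/v1.0/security/auditLog/queries",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "displayName": "AI interaction review",
        "filterStartDateTime": "2025-05-01T00:00:00Z",
        "filterEndDateTime": "2025-05-19T00:00:00Z",
        "recordTypeFilters": ["copilotInteraction"],  # assumed record type name
    },
    timeout=30,
)
resp.raise_for_status()
query = resp.json()
# The query runs asynchronously: poll it by id, then page through
# /security/auditLog/queries/{id}/records once its status is "succeeded".
print(query["id"], query.get("status"))
```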
-So that was an overview of how GenAI adoption can go hand in hand with your enterprise-grade data security, governance, and compliance requirements, keeping your organization’s data protected. To learn more, check out aka.ms/SecureGovernAI. Keep watching Microsoft Mechanics for the latest updates, and thanks for watching.