
August 5, 2025

As AI transforms software development, the opportunities are vast, but so are the risks. AI promises faster innovation, smarter experiences, and new business models. But behind the excitement, leaders across industries are grappling with a core question:
“How do I unlock the benefits of AI while protecting my data, complying with regulations, and maintaining customer trust?”
In the seventh episode of our Security for Software Development Companies webinar series, Safeguard Data Security and Privacy in AI-Driven Applications, we addressed this challenge directly. Featuring Microsoft experts Kyle Marsh and Vic Perdana, the session showed how Microsoft Purview delivers practical, built-in security for AI applications, helping software development companies and enterprise developers meet security expectations from day one.
AI security is now a top concern for business leaders
The shift toward AI-driven applications has heightened concern among CISOs and decision makers. Recent research from the ISMG First Annual Generative AI Study revealed that:
Microsoft Purview for AI: Visibility, control, and compliance–by design
To address these risks without slowing innovation, Microsoft has extended Purview, our enterprise data governance platform, into the world of AI.
From Microsoft Copilot to custom GPT-based assistants, Purview now governs AI interactions, offering:
– Data Loss Prevention (DLP) on prompts and responses
– Real-time blocking of sensitive content
– Audit trails and reporting for AI activity
– Seamless integration via Microsoft Graph APIs
This means software developers can plug into enterprise-grade governance–with minimal code and no need to reinvent compliance infrastructure.
What it looks like: AI Hub in Microsoft Purview
Purview’s AI Hub offers centralized visibility into all AI interactions across Microsoft Copilot, Azure OpenAI, and even third-party models like Google Gemini or ChatGPT.
A developer’s guide: How to integrate AI security using Microsoft Graph APIs
Microsoft Purview offers a lightweight, developer-friendly integration path. As Kyle Marsh demonstrated during the webinar, just two core APIs are required:
protectionScopes/compute
This API lets you determine when and why to submit prompts/responses for review. It returns the execution mode:
– evaluateInline: Wait for Purview's verdict before sending the prompt to the AI model, or the model's response back to the user (future functionality)
– evaluateOffline: Send in parallel for audit only
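As a sketch of how an app might call this API and act on the returned execution mode, consider the following. The endpoint path, request fields, and response shape here are assumptions modeled on the Microsoft Graph beta surface for Purview; check the official docs for the current contract:

```python
import json
from urllib import request

GRAPH_BASE = "https://graph.microsoft.com/beta"  # assumed beta endpoint


def compute_protection_scopes(access_token: str) -> dict:
    """Ask Purview when and why this user's AI interactions must be submitted.

    Path and body fields are illustrative; consult the Graph docs.
    """
    req = request.Request(
        f"{GRAPH_BASE}/me/dataSecurityAndGovernance/protectionScopes/compute",
        data=json.dumps({"activities": "uploadText,downloadText"}).encode(),
        headers={
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


def needs_inline_evaluation(scopes_response: dict) -> bool:
    """True if any returned scope demands evaluateInline (wait for a verdict
    before forwarding content); otherwise the app can audit offline."""
    return any(
        scope.get("executionMode") == "evaluateInline"
        for scope in scopes_response.get("value", [])
    )
```

In practice you would call this once per user session, cache the result, and recompute when Purview signals a policy change.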
processContent
Use this API to send prompts/responses along with metadata. If a DLP rule is triggered (e.g., presence of a credit card number), the app receives a block instruction before continuing.
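A minimal sketch of that request/response cycle is below. The payload field names (contentToProcess, policyActions, restrictAccess) are assumptions based on the Graph beta shape, not a definitive contract:

```python
def build_process_content_payload(prompt: str, conversation_id: str) -> dict:
    """Illustrative processContent body: the prompt text plus activity metadata.

    Field names are assumptions; verify against the Microsoft Graph docs.
    """
    return {
        "contentToProcess": {
            "contentEntries": [
                {
                    "content": {"data": prompt, "contentType": "text"},
                    "identifier": conversation_id,
                }
            ],
            "activityMetadata": {"activity": "uploadText"},
        }
    }


def is_blocked(process_content_response: dict) -> bool:
    """True if Purview returned a restrict/block action for this content,
    e.g. because a DLP rule matched a credit card number in the prompt."""
    return any(
        action.get("action") == "restrictAccess"
        for action in process_content_response.get("policyActions", [])
    )
```

If `is_blocked` returns True under an inline execution mode, the app should refuse to forward the prompt to the model and show the user a policy message instead.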
For less intrusive monitoring, you can use contentActivity, which logs metadata only–ideal for auditing AI usage patterns without exposing user content.
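A metadata-only audit record might look like the following sketch. The field names are illustrative; the point is that no prompt text leaves the app, only the fact that an AI interaction occurred:

```python
import datetime


def build_content_activity(user_id: str, app_name: str) -> dict:
    """Illustrative contentActivity record: who used AI, in which app, when.

    Note there is deliberately no prompt or response text here,
    only metadata for auditing AI usage patterns.
    """
    return {
        "userId": user_id,
        "application": app_name,
        "activity": "uploadText",
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```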
Example in action: Blocking confidential data in Microsoft Copilot
The power of Purview’s inline protection is demonstrated in Microsoft Copilot. In the webinar demo, a user’s query surfaced confidential documents, but policy enforcement blocked them from being shared.
Built-in support for Microsoft tooling
Developers using Copilot Studio or Azure AI Foundry (formerly Azure AI Studio) benefit from built-in or automatic integration:
– Copilot Studio: Purview integration is fully automatic–developers don’t need to write a single line of security code.
– Azure AI Foundry: Supports evaluateOffline by default; advanced controls can be added via APIs.
Custom apps, such as a chatbot built on OpenAI APIs, can integrate directly using Microsoft Graph, gaining enterprise readiness with minimal effort.
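Putting the two API calls described above together, a custom chatbot's request path might look like this sketch. `graph_client` stands in for whatever authenticated HTTP layer your app already uses; the paths and response fields are assumptions, and a real integration should follow the Graph documentation:

```python
class PurviewGate:
    """Illustrative wrapper: compute protection scopes once per session,
    then gate each user prompt through processContent."""

    def __init__(self, graph_client):
        self.graph = graph_client  # your authenticated Graph HTTP client
        # Cache the policy scope; recompute when Purview signals a change.
        self.scopes = graph_client.post(
            "/me/dataSecurityAndGovernance/protectionScopes/compute", {}
        )

    def gate_prompt(self, prompt: str) -> str:
        """Return 'block', 'allow', or 'log' for a user prompt."""
        inline = any(
            s.get("executionMode") == "evaluateInline"
            for s in self.scopes.get("value", [])
        )
        result = self.graph.post(
            "/me/dataSecurityAndGovernance/processContent",
            {"content": prompt},
        )
        blocked = any(
            a.get("action") == "restrictAccess"
            for a in result.get("policyActions", [])
        )
        if inline and blocked:
            return "block"  # refuse to forward the prompt to the model
        # Offline mode never blocks; the submission exists for audit only.
        return "allow" if inline else "log"
```

The app then maps the decision to behavior: show a policy message on "block", call the model on "allow", and call the model while recording the audit entry on "log".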
Powerful enterprise controls with zero developer overhead
Enterprise customers can define and manage AI security policies through the familiar Microsoft Purview interface:
– Create custom sensitive info types
– Apply role-based access and location targeting
– Build blocking or allow-list policies
– Conduct audits, investigations, and eDiscovery
As a software development company, you don’t need to manage any of these rules. Your app simply calls the API and responds to the decision returned–block, allow, or log.
Resources to help you get started
Microsoft provides comprehensive tools and docs to help developers integrate AI governance:
– Purview Developer Samples: samples
– Microsoft Graph APIs for Purview: docs
– Web App Security Assessment: aka.ms/wafsecurity
– Cloud Adoption Framework: aka.ms/caf
– Zero Trust for AI: aka.ms/zero-trust
– SaaS Workload Design Principles: docs
Final takeaway: Secure AI is smart AI
“Securing AI isn’t optional–it’s a competitive advantage. If you want your solution in the hands of enterprises, you must build trust from day one.”
With Microsoft Purview and Microsoft Graph, software developers can build AI experiences that are not only intelligent–but trustworthy, compliant, and ready for scale.
🎥 Watch the full episode of “Safeguard Data Security and Privacy in AI-Driven Applications” at aka.ms/asiasdcsecurity/recording