Gain enhanced visibility into AI apps like ChatGPT accessing Microsoft 365 data with Microsoft Defender for Cloud Apps
July 18, 2025
AI is no longer an emerging trend—it’s already embedded in the daily workflows of modern enterprises. Tools like ChatGPT, Gemini, and Claude are transforming the way organizations operate. What began as simple summarization and conversational capabilities has rapidly evolved into intelligent agents capable of executing tasks and making decisions. Many of these AI tools—including ChatGPT—now offer connectors to Microsoft 365 and other SaaS platforms. These integrations enable seamless access to business-critical data such as emails, files, calendars, and chats, streamlining productivity across teams.
However, this convenience comes with risks. Employees may, knowingly or unknowingly, grant broad permissions to these applications, potentially exposing sensitive data. Without deep visibility into how these AI tools are connected and what data they can access, security teams are left with blind spots that could lead to unintended data exposure or misuse.
In this blog, we’ll explore how Microsoft Defender for Cloud Apps gives security teams enhanced visibility into the permissions granted to AI applications like ChatGPT as they access Microsoft 365 data. We’ll also share app governance best practices that help security teams make informed decisions and take proactive steps to enable secure use of these apps.
Discover and govern ChatGPT and other AI apps accessing Microsoft 365
Let’s take a closer look at ChatGPT, which is increasingly adopted in enterprise environments for its ability to streamline tasks and generate insights. ChatGPT recently added connectors to Microsoft 365 services like Outlook, SharePoint, and Teams, providing access to emails, files, calendars, and chats.
While this enhances productivity, it also introduces risk, especially when extensive permissions are granted without sufficient oversight. In such cases, sensitive business data may be exposed or misused without the organization’s awareness. Defender for Cloud Apps addresses this challenge by providing deep visibility into AI applications, including ChatGPT, helping security teams assess and control how these applications interact with Microsoft 365.
App governance provides a comprehensive set of security and policy management capabilities designed for OAuth-enabled apps registered in Microsoft Entra, Google, and Salesforce environments. It enables security teams to:
- Gain visibility and actionable insights into ChatGPT and other AI applications that have permission to access Microsoft 365, including publisher information, consent type (admin or user granted), permission type (delegated, application, or mixed), community usage patterns (commonly or rarely used), last usage date, and more.
- Assess which Microsoft Graph and other API permissions have been granted, and which have actually been used over the past 90 days, to access business-critical data in Microsoft 365.
Figure 1: Permissions granted to ChatGPT and access scopes across Microsoft 365
- Analyze the Microsoft 365 data accessed by ChatGPT and other apps across platforms such as Exchange, OneDrive, and Teams to understand both resource usage and user activities. Security teams can gain in-depth analytics on ChatGPT’s interaction with files, emails, and chat or channel messages, and use pre-built KQL queries to analyze detailed logs of resource access over the past 30 days; a sample query sketch follows Figure 2 below.
Figure 2: Data access patterns and resource-level insights for ChatGPT activities
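The pre-built queries in the portal are the best starting point, but as a rough illustration of the kind of hunting this enables, the KQL sketch below summarizes the Microsoft 365 resources a single OAuth app has touched over the past 30 days. The placeholder app ID and the reliance on the OAuthAppId column in the CloudAppEvents table are assumptions; verify both against the app governance page and the advanced hunting schema in your tenant.

```kusto
// Sketch: summarize Microsoft 365 resources touched by one OAuth app in the past 30 days.
// The app ID below is a placeholder; use the ChatGPT app ID shown on the app governance page.
// Column names should be verified against the CloudAppEvents schema in your tenant.
let targetAppId = "00000000-0000-0000-0000-000000000000"; // placeholder OAuth app ID
CloudAppEvents
| where Timestamp > ago(30d)
| where OAuthAppId == targetAppId
| summarize Events = count(), LastSeen = max(Timestamp) by Application, ActionType, ObjectType
| order by Events desc
```

Grouping by Application, ActionType, and ObjectType gives a quick picture of which workloads and resource types the app touches most often.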
App governance best practices
App governance allows organizations to proactively assess and control app behavior, reduce risk, and maintain oversight of how resources are accessed, all within the context of their existing SaaS ecosystem. To improve your environment’s app hygiene, we recommend the following best practices:
- Follow the principle of least privilege by regularly reviewing and removing unused or excessive permissions granted to apps.
- Disable or remove unused apps to reduce unnecessary exposure and minimize the attack surface; a query sketch for spotting rarely used apps follows Figure 3 below.
- Closely monitor high-privileged apps, which pose greater risk if misconfigured or compromised.
- Review external apps from unverified publishers carefully, as they may introduce unknown risks or lack enterprise-grade security assurances.
Figure 3: The new applications page in the Microsoft Defender portal highlights risky OAuth apps
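To make the unused-apps review concrete, here is a minimal hunting sketch, again against CloudAppEvents, that surfaces OAuth apps with little recent Microsoft 365 activity. It is an illustration rather than a pre-built query: the 30-day window and the event-count cutoff are arbitrary assumptions, and apps with no activity at all will not appear in activity logs, so compare the results with the full inventory on the app governance page.

```kusto
// Sketch: surface OAuth apps with little Microsoft 365 activity in the past 30 days,
// as candidates for the "disable or remove unused apps" review.
// The time window and threshold are illustrative; tune them to your environment.
CloudAppEvents
| where Timestamp > ago(30d)
| where isnotempty(OAuthAppId)
| summarize Events = count(), Users = dcount(AccountObjectId), LastSeen = max(Timestamp) by OAuthAppId
| where Events < 10   // rarely used apps; adjust this cutoff as needed
| order by LastSeen asc
```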
As generative AI becomes more embedded in the enterprise, Defender for Cloud Apps equips security teams with the visibility and control they need. With detailed insights into app permissions, usage patterns, and data access, organizations can confidently embrace the benefits of AI while keeping their Microsoft 365 environment secure.
Learn more:
- Check our documentation to explore app governance in Defender for Cloud Apps.
- Visit our website to learn more about Defender for Cloud Apps.
- Not a customer yet? Start a free trial today.