July 1, 2025
For IT admins and Microsoft 365 admins
7-minute read
Overview
Shadow AI is almost certainly happening across your organization—whether you can see it or not. Employees are using tools like ChatGPT and Notion AI to get work done, even without organizational knowledge or approval. This creates real risks like data leakage, compliance violations, and a lack of visibility into how employees are using artificial intelligence.
Fortunately, IT admins are in a unique position to fix the problem at its core.
This article is a practical playbook to help IT admins lead the charge toward responsible AI use in their organizations by deploying secure, compliant, easy-to-manage agents for Microsoft 365 Copilot Chat.
What is shadow AI?
Like its predecessor ‘shadow IT’, the term ‘shadow AI’ exists for a reason: it refers to unsanctioned, often hidden, use of AI tools.
In the shadows, artificial intelligence can be hard to detect and even harder to govern. Tools can be browser-based, embedded in SaaS apps, or used on personal devices. Controls that mitigate shadow IT—like app blocking or firewall rules—don’t necessarily translate to AI use.
Both shadow IT and shadow AI involve technical and behavioral elements; however, unauthorized use of AI presents deeper behavioral challenges than unauthorized tools alone. These challenges center on how users make decisions and bypass governance in ways that are harder to detect and control.
While employees may not want to go rogue or bypass IT—and they generally don’t want to put the organization at risk—they do want to get their work done efficiently. They turn to public AI tools when they can’t find the capabilities they need inside the tools they have permission to use.
Agents for Microsoft 365 Copilot Chat give you a way to lead AI use into the light and meet your users’ needs with modern AI business tools. By building and deploying task-specific, data-grounded chat experiences that live inside Microsoft 365, you give users the fast, relevant answers they’re looking for without making them step into the shadows and leave the secure environment you manage.
These agents are part of the broader Microsoft 365 Copilot ecosystem and are designed to automate and execute business processes directly within Copilot Chat.
Should you ignore or even allow shadow AI?
When employees use public AI tools without oversight, they create risks that are harder to detect, harder to govern, and harder to reverse.
For IT admins, the stakes are high for operational, security, and technical risks:
- Loss of visibility and control
  - You can’t protect what you can’t see. Shadow AI obscures oversight, making it harder to track usage or enforce policies for tools used outside your environment.
  - No centralized monitoring = no control. Without a unified view, you can’t troubleshoot issues, optimize usage, or step in when something goes wrong.
  - Shadow data silos emerge. Generative AI content created outside your tenant isn’t retained or governed, which complicates lifecycle management, legal holds, and compliance requests.
- Security and compliance risks
  - Enterprise-grade protections are lacking. Most public AI tools don’t support conditional access, audit logs, or data loss prevention (DLP) policies, leaving you with blind spots and an increased risk of data leaks.
  - Sensitive data exposure. Employees may unknowingly input proprietary or regulated data into public models, risking violations of GDPR, HIPAA, or internal policies.
  - Compliance gaps. If tools aren’t tracked or documented, they increase the burden of proving compliance and can become major liabilities during audits or regulatory reviews.
- IT and governance challenges
  - IT is out of the loop. Adopting unauthorized AI tools sidelines IT, preventing teams from recommending secure, supported alternatives that align with your environment and policies.
  - Tool sprawl = more support tickets. Unapproved tools often lack integration with existing systems, creating support burdens and increasing the risk of misconfigurations.
Bottom line: Allowing or ignoring shadow AI will make it much harder to manage later. That’s why Copilot Chat agents, combined with strong governance and user education, are such a powerful response: they give you a way to meet end user demand without losing control.
What IT admins are up against
When it comes to eradicating rogue AI, admins have their work cut out for them. Here’s a summary table of how activating Copilot Chat agents at your organization can help stem the tide:
| Unsanctioned AI use contributes to: | How to stem the problem: |
| --- | --- |
| Loss of visibility and control | Reframe shadow AI as a signal |
| Data governance gaps | Keep data in your tenant |
| Inconsistent AI use across teams | Centralize AI access |
| Security and compliance risks | Use enterprise-grade protection |
| Lack of deployment clarity | Follow a clear blueprint |
| Missed innovation opportunities | Support safe innovation |
Copilot Chat agents remove the roadblocks to getting value from AI
Microsoft’s chat agents aren’t just another AI tool—they’re designed to work the way IT works.
- Secure by design: Agents run inside your Microsoft 365 tenant and authenticate through Microsoft Entra ID (formerly Azure AD).
- Compliant by default: They respect DLP, audit, and retention policies through Microsoft Purview.
- Customizable and governable: You can define access, data sources, and usage policies.
- Easy to deploy: Agents live inside Teams and Microsoft apps, so users don’t need to install anything new.
Copilot Chat agents strengthen governance
While Copilot for Microsoft 365 helps users work more efficiently inside apps like Word, Excel, and Teams, Copilot’s AI agents go a step further. They give IT the ability to create task-specific, role-based, and data-grounded AI experiences that directly replace the kinds of tools employees might otherwise seek out on their own.
Key deployment benefits for IT admins
| Benefit | Impact |
| --- | --- |
| Visibility | Know who’s using AI, how, and with what data. |
| Control | Define and enforce usage policies. |
| Compliance | Align AI use with regulatory standards. |
| Efficiency | Reduce support tickets with self-service agents. |
| Innovation | Empower business units without losing oversight. |
Take the next step
As with shadow IT, you may not get rid of shadow AI completely or overnight. But you can meet it head-on with tools that work for your users and comply with your policies.
Start by deploying a few Copilot Chat agents in high-impact areas. Use the resources in this article to guide your rollout.
With Copilot Chat agents, you’re not just solving a technical problem. You’re leading your organization toward safer, smarter AI adoption.
Tools that make it easier
When it comes to Microsoft 365 deployments, you’re never alone. FastTrack for Microsoft 365 offers a full set of resources to help you learn about, build, manage, and instruct end users on Copilot Chat agents:
Credentialed access, sign-in required:
- Microsoft 365 advanced deployment guides and assistance
- Microsoft 365 Copilot onboarding hub
- Microsoft 365 Copilot: Quickstart, Copilot Chat licensing
Open access, no sign-in required:
- Get started with Microsoft 365 Copilot extensibility
- Microsoft 365 Copilot ADG: Streamlining your Copilot journey (video)
- Copilot Chat Success Kit – Microsoft Adoption
- Microsoft Copilot AI setup and usage guides
- AI in business: Artificial intelligence tools & solutions (blog)
- Request assistance from FastTrack
Deployment blueprint: Get started today
Remember: You don’t need to roll out everything at once. Start small, build momentum, and scale responsibly.
Here’s a blueprint that will get you to the finish line:
Copilot Chat agent deployment checklist
Step 1: Prepare your environment
☐ Set up Copilot Studio and review licensing.
☐ Create Power Platform environments that reflect your data boundaries and governance needs.
☐ Identify early declarative agent use cases (e.g., HR FAQs, IT help desk).
Note: Only declarative agents are currently supported in Copilot Chat. Agents that access tenant data (e.g., SharePoint, Graph) require pay-as-you-go billing.
Step 2: Define governance policies
☐ Use role-based access control (RBAC) to manage who can create, publish, and use agents.
☐ Apply naming conventions, approval workflows, and publishing guidelines.
☐ Set up guardrails for data access, agent behavior, and knowledge sources.
☐ Assign maker permissions via Microsoft Entra groups or Copilot Studio user licenses.
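To make the RBAC idea above concrete, here is a minimal sketch of maker-permission logic in Python. The group names and actions are hypothetical stand-ins for Microsoft Entra groups and Copilot Studio roles, not a real API:

```python
# Hypothetical sketch of role-based agent-maker permissions.
# Group names and granted actions are illustrative, not a Copilot Studio API.

ROLE_PERMISSIONS = {
    "agent-makers":    {"create", "publish"},
    "agent-reviewers": {"review"},
    "agent-users":     {"use"},
}

def allowed_actions(user_groups):
    """Union of actions granted by every group the user belongs to."""
    actions = set()
    for group in user_groups:
        actions |= ROLE_PERMISSIONS.get(group, set())
    return actions

def can(user_groups, action):
    """True if any of the user's groups grants the requested action."""
    return action in allowed_actions(user_groups)
```

The point of centralizing checks like `can(groups, "publish")` is that approval workflows and publishing guidelines all consult one policy table instead of scattered per-team rules.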
Step 3: Deploy and monitor
☐ Use the Microsoft 365 admin center and Power Platform admin center to manage billing and access.
☐ Monitor usage with audit logs, analytics, and the Copilot Control System.
☐ Identify which teams are still using unauthorized AI tools and guide them toward approved Copilot agents.
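The monitoring step above boils down to aggregating exported audit records. The sketch below summarizes Copilot-related activity per user from a JSON-lines export; the field names (`UserId`, `Operation`) mirror a generic unified-audit-log shape and are assumptions, not a documented schema:

```python
# Sketch: summarize agent usage from an exported audit log (JSON lines).
# Field names are assumed, not a documented audit-log schema.
import json
from collections import Counter

def usage_by_user(log_lines):
    """Count Copilot-related operations per user from JSON-lines records."""
    counts = Counter()
    for line in log_lines:
        record = json.loads(line)
        if "Copilot" in record.get("Operation", ""):
            counts[record.get("UserId", "unknown")] += 1
    return counts

sample = [
    '{"UserId": "ava@contoso.com", "Operation": "CopilotInteraction"}',
    '{"UserId": "ava@contoso.com", "Operation": "CopilotInteraction"}',
    '{"UserId": "li@contoso.com",  "Operation": "FileAccessed"}',
]
```

A per-user tally like this is also a quick way to spot teams with no Copilot activity at all, which often correlates with continued shadow AI use.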
Step 4: Support and scale
☐ Offer training, templates, and office hours to support agent creators and users.
☐ Establish a Center of Excellence (CoE) to share best practices and governance.
☐ Highlight successful use cases to drive adoption and build momentum.
☐ Encourage feedback loops to refine agent behavior and expand scenarios.
Shadow AI prevention checklist
What else should you do to discourage shadow AI? Here’s a handy checklist of actions to take:
Data protection
☐ Apply Microsoft Purview DLP policies to monitor and restrict sensitive data.
☐ Use sensitivity labels and encryption to protect data at rest and in transit.
☐ Set up conditional access policies to limit AI tool usage by role, device, or location.
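To illustrate the conditional-access item above, this sketch shows the kind of role/device/location rule such a policy encodes. The attribute names and allowed values are hypothetical examples, not Microsoft Entra’s policy schema:

```python
# Illustrative conditional-access-style rule: allow AI tool access only for
# approved roles, on compliant devices, from approved locations.
# Roles, countries, and parameter names are hypothetical examples.

ALLOWED_ROLES = {"analyst", "engineer"}
ALLOWED_COUNTRIES = {"US", "CA"}

def may_use_ai_tool(role, device_compliant, country):
    """Every condition must hold; failing any one denies access."""
    return (
        role in ALLOWED_ROLES
        and device_compliant
        and country in ALLOWED_COUNTRIES
    )
```

The design point is that conditions are conjunctive: a compliant device in an approved location still isn’t enough if the user’s role isn’t approved.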
Acceptable use
☐ Publish clear guidance on approved AI tools and data usage.
☐ Include AI-specific clauses in acceptable use and security policies.
☐ Reinforce policies through onboarding, training, and regular reminders.
Monitoring and detection
☐ Use Microsoft Defender for Cloud Apps (formerly MCAS) to detect unsanctioned AI usage.
☐ Analyze browser traffic and app usage patterns for high-risk behavior.
☐ Set up alerts for uploads to known AI endpoints (e.g., ChatGPT, Claude).
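The alerting item above can be prototyped against proxy or firewall logs. The sketch below flags requests to known public AI endpoints; the domain list is a small illustrative sample, not an exhaustive blocklist, and the log-entry shape is assumed:

```python
# Sketch: flag proxy-log entries that reach known public AI endpoints.
# Domain list is illustrative; log entries are assumed to be
# dicts of the form {"user": ..., "url": ...}.
from urllib.parse import urlparse

AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com"}

def flag_ai_traffic(log_entries):
    """Return (user, domain) pairs for requests to known AI endpoints."""
    hits = []
    for entry in log_entries:
        host = urlparse(entry["url"]).hostname or ""
        if host in AI_DOMAINS:
            hits.append((entry["user"], host))
    return hits
```

In practice the same match list would feed an alert rule in your monitoring tool rather than a batch script, but the detection logic is the same: normalize the hostname, then compare against a maintained list of AI endpoints.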
Education and empowerment
☐ Run awareness campaigns about shadow AI risks and approved alternatives.
☐ Offer training on how to use Copilot and Copilot Chat agents effectively.
☐ Create a feedback loop for users to request new AI capabilities.
Internal partnerships
☐ Collaborate with HR, legal, and other teams to understand AI needs.
☐ Support business units in building Copilot Chat agents with IT oversight.
☐ Use shadow AI behavior as a signal for unmet needs and prioritize accordingly.
Governance alignment
☐ Align Copilot deployment with your organization’s responsible AI principles.
☐ Document how Copilot Chat agents support ethical and regulatory standards.
☐ Use audit logs and analytics to support transparency and accountability.