October 23, 2025

Let’s pick up from where we left off in the previous post — Selecting the Right Agentic Solution on Azure – Part 1. Earlier, we explored a decision tree to help identify the most suitable Azure service for building your agentic solution.
Following that discussion, we received several requests to dive deeper into the security considerations for each of these services. In this post, we’ll examine the security aspects of each option, one by one. Before looking at the security perspective, though, I highly recommend reviewing the list of Azure AI services technologies made available by Microsoft. It covers all the services that were part of the erstwhile Cognitive Services, along with the latest additions.
Workflows with AI agents and models in Azure Logic Apps (Preview) – This approach focuses on running your agents as an action or as part of an “agent loop” with multiple actions within Azure Logic Apps. It’s important not to confuse this with the alternative setup, where Azure Logic Apps integrates with AI Agents in the Foundry Agent Service—either as a tool or as a trigger. (Announcement: Power your Agents in Azure AI Foundry Agent Service with Azure Logic Apps | Microsoft Community Hub). In that scenario, your agents are hosted under the Azure AI Foundry Agent Service, which we’ll discuss separately below.
To create an agent workflow, you’ll need to establish a connection to a model, either through Azure OpenAI or through an Azure AI Foundry project. When connected to a Foundry project, you can view agents and threads directly within that project’s lists.
Since agents here run as Logic Apps actions, their security is governed by the Logic Apps security framework. Let’s look at the key aspects:
- Easy Auth or App Service Auth (Preview) – Agent workflows often integrate with a broader range of systems—models, MCPs, APIs, agents, and even human interactions. You can secure these workflows using Easy Auth, which integrates with Microsoft Entra ID for authentication and authorization. Read more here: Protect Agent Workflows with Easy Auth – Azure Logic Apps | Microsoft Learn.
- Securing and Encrypting Data at Rest – Azure Logic Apps stores data in Azure Storage, which uses Microsoft-managed keys for encryption by default. You can further enhance security by:
- Restricting access to Logic App operations via Azure RBAC
- Limiting access to run history data
- Securing inputs and outputs
- Controlling parameter access for webhook-triggered workflows
- Managing outbound call access to external services
More info here: Secure access and data in workflows – Azure Logic Apps | Microsoft Learn.
- Securing Data in Transit – When exposing your Logic App as an HTTP(S) endpoint, consider using:
- Azure API Management for access policies and documentation
- Azure Application Gateway or Azure Front Door for WAF (Web Application Firewall) protection.
I highly recommend the labs provided by Logic Apps Product Group to learn more about Agentic Workflows: https://azure.github.io/logicapps-labs/docs/intro.
Azure AI Foundry Agent Service – As of this writing, the Azure AI Foundry Agent Service abstracts the underlying infrastructure where your agents run. Microsoft manages this secure environment, so you don’t need to handle compute, network, or storage resources—though bring-your-own-storage is an option.
- Securing and Encrypting Data at Rest – Microsoft guarantees that your prompts and outputs remain private—never shared with other customers or AI providers (such as OpenAI or Meta).
- Data (from messages, threads, runs, and uploads) is encrypted using AES-256.
- It remains stored in the same region where the Agent Service is deployed.
- You can optionally use Customer-Managed Keys (CMK) for encryption.
Read more here: Data, privacy, and security for Azure AI Agent Service – Azure AI Services | Microsoft Learn.
- Network Security – The service allows integration with your private virtual network using a private endpoint.
Note: There are known limitations, such as subnet IP restrictions, the need for a dedicated agent subnet, same-region requirements, and limited regional availability. Read more here: How to use a virtual network with the Azure AI Foundry Agent Service – Azure AI Foundry | Microsoft Learn.
- Securing Data in Transit – Upcoming enhancements include API Management support (soon in Public Preview) for AI APIs, including Model APIs, Tool APIs/MCP servers, and Agent APIs.
There is also a great article about using Azure API Management to safeguard the HTTP APIs exposed by Azure OpenAI, which let your applications perform embeddings or completions using OpenAI’s language models.
Agent Orchestrators – We’ve introduced the Agent Framework, which succeeds both AutoGen and Semantic Kernel. According to the product group, it combines the best capabilities of both predecessors. Support for Semantic Kernel and related documentation for AutoGen will continue to be available for some time to allow users to transition smoothly to the new framework.
When discussing the security aspects of agent orchestrators, it’s important to note that these considerations also extend to the underlying services hosting them—whether on AKS or Container Apps. However, this discussion will not focus on the security features of those hosting environments, as comprehensive resources already exist for them. Instead, we’ll focus on common security concerns applicable across different orchestrators, including AutoGen, Semantic Kernel, and other frameworks such as LlamaIndex, LangGraph, or LangChain.
Key areas to consider include (but are not limited to):
- Secure Secrets / Key Management
- Avoid hard-coding secrets (e.g., API keys for Foundry, OpenAI, Anthropic, Pinecone, etc.).
- Use secret management solutions such as Azure Key Vault or environment variables.
- Encrypt secrets at rest and enforce strict limits on scope and lifetime.
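As a minimal sketch of this idea, resolving secrets at call time keeps keys out of source code and version control. The variable name `FOUNDRY_API_KEY` below is purely illustrative; in production the lookup would typically go to Azure Key Vault rather than the process environment:

```python
import os


def get_secret(name: str) -> str:
    """Resolve a secret at call time instead of hard-coding it.

    In production this lookup would usually go to Azure Key Vault
    (for example, via the azure-keyvault-secrets SecretClient with a
    DefaultAzureCredential), so scope and lifetime can be enforced
    centrally. Environment variables are the minimal fallback shown here.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Secret {name!r} is not configured")
    return value


# Usage: the agent asks for the key only when it is about to call the service.
# api_key = get_secret("FOUNDRY_API_KEY")
```

Failing fast on a missing secret surfaces misconfiguration at startup, rather than as a confusing authentication error deep inside an agent run.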
- Access Control & Least Privilege
- Grant each agent or tool only the minimum required permissions.
- Implement Role-Based Access Control (RBAC) and enforce least privilege principles.
- Use strong authentication (e.g., OAuth2, Azure AD) for administrative or tool-level access.
- Restrict the scope of external service credentials (e.g., read-only vs. write) and rotate them regularly.
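One way to enforce least privilege at the tool boundary is a simple scope check, so an agent holding a read-only credential cannot invoke a write tool. The scope names and the `delete_record` tool below are hypothetical, a sketch of the pattern rather than any particular framework's API:

```python
def require_scope(granted: set[str], needed: str) -> None:
    """Raise unless the caller's credential carries the required scope."""
    if needed not in granted:
        raise PermissionError(f"Missing required scope: {needed}")


def delete_record(record_id: str, token_scopes: set[str]) -> str:
    # A destructive operation demands the write scope; read-only
    # credentials are rejected before any work happens.
    require_scope(token_scopes, "records.write")
    return f"deleted {record_id}"
```

In a real deployment the granted scopes would come from a validated OAuth2/Entra ID token rather than being passed in directly, but the enforcement point is the same: check before the tool acts, not after.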
- Isolation / Sandboxing
- Isolate plugin execution and use inter-process separation as needed.
- Prevent user inputs from executing arbitrary code on the host.
- Apply resource limits for model or function execution to mitigate abuse.
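A bare-bones sketch of the isolation and resource-limit points: running untrusted tool code in a child process gives inter-process separation from the host agent, and a hard timeout caps runaway executions. This is only a starting point; a production sandbox would layer OS-level controls (containers, seccomp, cgroup CPU/memory limits) on top:

```python
import subprocess
import sys


def run_tool_sandboxed(snippet: str, timeout_s: float = 2.0) -> str:
    """Run untrusted code in a separate process with a hard timeout.

    The child process keeps the code off the host agent's interpreter;
    the timeout mitigates abuse such as infinite loops. Raises
    subprocess.TimeoutExpired if the limit is exceeded.
    """
    result = subprocess.run(
        [sys.executable, "-c", snippet],
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    return result.stdout.strip()
```

Capturing stdout rather than sharing the parent's streams also keeps the sandboxed code from writing directly into the agent's own logs.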
- Sensitive Data Protection
- Encrypt data both at rest and in transit.
- Mask or remove PII before sending data to models.
- Avoid persisting sensitive context unnecessarily.
- Ensure logs and memory do not inadvertently expose secrets or user data.
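As an illustration of masking PII before it reaches a model or a log, here is a regex-based redactor. The patterns are deliberately naive; a real deployment would use a dedicated detection service (for example, Azure AI Language PII detection) rather than hand-rolled regexes:

```python
import re

# Illustrative patterns only; production systems should use a trained
# PII detection service instead of fixed regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def mask_pii(text: str) -> str:
    """Redact obvious emails and phone numbers before text leaves the boundary."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text
```

Applying the same function on the logging path as on the model path helps ensure memory and logs never see data the model itself was not allowed to see.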
- Prompt & Query Security
- Sanitize or escape user input in custom query engines or chat interfaces.
- Protect against prompt injection by implementing guardrails to monitor and filter prompts.
- Set context length limits and use safe output filters (e.g., profanity filters, regex validators).
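A minimal input-vetting gate can combine the length limit and a guardrail check. The denylist below is intentionally naive and purely illustrative; production guardrails (for example, Azure AI Content Safety's Prompt Shields) use trained classifiers rather than fixed phrases:

```python
import re

MAX_INPUT_CHARS = 4_000  # illustrative cap; tune to your model's context window

# Naive denylist of phrasing commonly seen in injection attempts.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?(previous|prior)\s+instructions", re.I),
    re.compile(r"reveal\s+.*system\s+prompt", re.I),
]


def vet_user_input(text: str) -> str:
    """Reject over-long inputs and obvious injection phrasing before prompting."""
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("Input exceeds the configured context length limit")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(text):
            raise ValueError("Potential prompt injection detected")
    return text
```

The same gate pattern works for output filters: run the model's response through profanity filters or regex validators before it reaches the user or a downstream tool.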
- Observability, Logging & Auditing
- Maintain comprehensive logs, including tool invocations, agent decisions, and execution paths.
- Continuously monitor for anomalies or unexpected behaviour.
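One lightweight way to get an audit trail of tool invocations is a decorator at the tool boundary. The `lookup_order` tool and logger name below are hypothetical; orchestration frameworks increasingly emit OpenTelemetry traces for this instead, but the decorator shows the principle:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")


def audited(tool):
    """Log every tool invocation and its result for later auditing."""
    @functools.wraps(tool)
    def wrapper(*args, **kwargs):
        audit_log.info("invoke tool=%s args=%r kwargs=%r",
                       tool.__name__, args, kwargs)
        result = tool(*args, **kwargs)
        audit_log.info("done tool=%s result=%r", tool.__name__, result)
        return result
    return wrapper


@audited
def lookup_order(order_id: str) -> str:
    # Hypothetical tool an agent might call.
    return f"Order {order_id}: shipped"
```

Remember that the audit log itself is now a sensitive data store: apply the same masking and access controls to it as to any other output path.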
I hope this overview assists you in evaluating and implementing the appropriate security measures for your chosen agentic solution.