July 19, 2025 · By Mays_Algebary and shruthi_nair
As your Azure footprint grows with a hub-and-spoke topology, managing User-Defined Routes (UDRs) for inter-hub connectivity can quickly become complex and error-prone. In this article, we’ll explore how Azure Route Server (ARS) can help streamline inter-hub routing by dynamically learning and advertising routes between hubs, reducing manual overhead and improving scalability.
Baseline Architecture
The baseline architecture includes two Hub VNets, each peered with their respective local spoke VNets as well as with the other Hub VNet for inter-hub connectivity. Both hubs are connected to local and remote ExpressRoute circuits in a bowtie configuration to ensure high availability and redundancy, with Weight used to prefer the local ExpressRoute circuit over the remote one.
To maintain predictable routing behavior, the VNet-to-VNet configuration on the ExpressRoute Gateway should be disabled.
Note: Adding ARS to an existing Hub with a Virtual Network Gateway will cause downtime, expected to last about 10 minutes.
Scenario 1: ARS and NVA Coexist in the Hub
Option A: Full Traffic Inspection
In this scenario, ARS is deployed in each Hub VNet, alongside the Network Virtual Appliances (NVAs).
- NVA1 in Region1 establishes BGP peering with both the local ARS (ARS1) and the remote ARS (ARS2).
- Similarly, NVA2 in Region2 peers with both ARS2 (local) and ARS1 (remote).
Let’s break down what each BGP peering relationship accomplishes. For clarity, we’ll focus on Region1, though the same logic applies to Region2:
- NVA1 Peering with Local ARS1
- Through BGP peering with ARS1, NVA1 dynamically learns the prefixes of Spoke1 and Spoke2 at the OS level, eliminating the need to manually configure these routes.
- The same applies for NVA2 learning Spoke3 and Spoke4 prefixes via its BGP peering with ARS2.
- NVA1 Peering with Remote ARS2
- When NVA1 peers with ARS2, the Spoke1 and Spoke2 prefixes are propagated to ARS2.
- ARS2 then injects these prefixes into NVA2 at both the NIC level with NVA1 as the next hop, and at the OS level. This mechanism removes the need for UDRs on the NVA subnets to enable inter-hub routing.
- Additionally, ARS2 advertises the Spoke1 and Spoke2 prefixes to both ExpressRoute circuits (EXR2 and EXR1 due to bowtie configuration) via GW2, making them reachable from on-premises through either EXR1 or EXR2.
👉Important:
To ensure that ARS2 accepts and propagates Spoke1/Spoke2 prefixes received via NVA1, AS-Override must be enabled.
Without AS-Override, BGP loop prevention will block these routes at ARS2, since both ARS1 and ARS2 use the default ASN 65515, and ARS2 will consider the route as already originated locally.
The same principle applies in reverse for Spoke3 and Spoke4 prefixes being advertised from NVA2 to ARS1.
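The loop-prevention behavior above can be sketched in a few lines. This is an illustrative model, not Azure code; the ASNs match the article’s examples (65515 for ARS, 65001 for the NVA):

```python
# Illustrative sketch: why AS-Override is needed when both Route Servers
# use the default ASN 65515.

ARS_ASN = 65515   # default ASN of every Azure Route Server
NVA_ASN = 65001   # example NVA ASN, as used in this article

def accepts(receiver_asn, as_path):
    """Standard eBGP loop prevention: reject a route whose AS_PATH
    already contains the receiver's own ASN."""
    return receiver_asn not in as_path

# Without AS-Override: ARS1 originates Spoke1/Spoke2 with ASN 65515,
# NVA1 prepends its own ASN and re-advertises toward ARS2.
path_without_override = [NVA_ASN, ARS_ASN]
print(accepts(ARS_ASN, path_without_override))   # False -> ARS2 drops the route

# With AS-Override on the NVA: every 65515 in the path is rewritten
# with the NVA's ASN before re-advertising.
path_with_override = [NVA_ASN if asn == ARS_ASN else asn
                      for asn in path_without_override]
print(accepts(ARS_ASN, path_with_override))      # True -> ARS2 accepts the route
```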
Traffic Flow
- Inter-Hub Traffic:
Spoke VNets are configured with UDRs that contain only a default route (0.0.0.0/0) pointing to the local NVA as the next hop. Additionally, the “Propagate Gateway Routes” setting should be set to False to ensure all traffic, whether East-West (intra-hub/inter-hub) or North-South (to/from the internet), is forced through the local NVA for inspection. Each local NVA has next hops to the other region’s spokes injected at the NIC level by the local ARS, pointing to the other region’s NVA; for example, NVA2 will have NVA1 (10.0.1.4) as the next hop for Spoke1 and Spoke2, and vice versa.
Why are UDRs still needed on spokes if ARS handles dynamic routing?
Even with ARS in place, UDRs are required to maintain control of the next hop for traffic inspection.
For instance, if Spoke1 and Spoke2 do not have UDRs, they will learn the remote spoke prefixes (e.g., Spoke3/Spoke4) injected via ARS1, which received them from NVA2. This results in Spoke1/Spoke2 attempting to route traffic directly to NVA2, an invalid path, since the spokes have no direct connectivity to NVA2. The UDR ensures traffic correctly routes through NVA1 instead.
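The effective-route logic behind this can be sketched as follows. This is a simplified model of Azure’s route selection (longest prefix match first, then User > BGP > System for equal prefixes); the Spoke3 prefix and the NVA2 IP are assumptions for illustration, while 10.0.1.4 is NVA1’s IP from the example above:

```python
# Minimal sketch of effective-route selection on a Spoke1 NIC.
import ipaddress

SOURCE_RANK = {"User": 0, "VirtualNetworkGateway": 1, "Default": 2}

def effective_next_hop(routes, dest):
    """Longest prefix match wins; ties broken by route source precedence."""
    dest = ipaddress.ip_address(dest)
    candidates = [r for r in routes if dest in ipaddress.ip_network(r["prefix"])]
    best = min(candidates,
               key=lambda r: (-ipaddress.ip_network(r["prefix"]).prefixlen,
                              SOURCE_RANK[r["source"]]))
    return best["next_hop"]

# Without a UDR (and gateway-route propagation on), Spoke1 learns the
# remote spoke prefix via ARS1 with NVA2 as next hop -- an invalid path:
learned = [
    {"prefix": "0.0.0.0/0",      "source": "Default",               "next_hop": "Internet"},
    {"prefix": "192.168.2.0/24", "source": "VirtualNetworkGateway", "next_hop": "10.1.1.4"},  # assumed NVA2 IP
]
print(effective_next_hop(learned, "192.168.2.10"))  # 10.1.1.4 (NVA2, unreachable)

# With the UDR default route and "Propagate Gateway Routes" = False,
# only the user route applies, forcing everything through NVA1:
with_udr = [{"prefix": "0.0.0.0/0", "source": "User", "next_hop": "10.0.1.4"}]  # NVA1
print(effective_next_hop(with_udr, "192.168.2.10"))  # 10.0.1.4 (NVA1)
```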
- On-Premises Traffic:
To explain the on-premises traffic flow, we’ll break it down into two directions: Azure to on-premises, and on-premises to Azure.
Azure to On-Premises Traffic Flow:
As previously noted, Spokes send all traffic, including traffic to on-premises, via NVA1 due to the default route in the UDR.
NVA1 then routes traffic to the local ExpressRoute circuit, using Weight to prefer the local path over the remote.
Note: While NVA1 learns on-premises prefixes from both local and remote ARSs at the OS level, this doesn’t affect routing decisions. The actual NIC-level route injection determines the next hop, ensuring traffic is sent via the correct path—even if the OS selects a different “best” route internally.
The screenshot below from NVA1 shows four next hops to the on-premises network 10.2.0.0/16. These include the local ARS (ARS1: 10.0.2.5 and 10.0.2.4) and the remote ARS (ARS2: 10.1.2.5 and 10.1.2.4).
On-Premises to Azure Traffic Flow
In a bowtie ExpressRoute configuration, Azure VNet prefixes are advertised to on-premises through both local and remote ExpressRoute circuits. Because of this dual advertisement, the on-premises network must ensure optimal path selection when routing traffic to Azure.
From the Azure side, to maintain traffic symmetry, add UDRs at the GatewaySubnet (GW1 and GW2) with specific routes to the local Spoke VNets, using the local NVA as the next hop. This ensures return traffic flows back through the same path it entered.
👉How Does the ExpressRoute Edge Router Select the Optimal Path?
You might ask: If Spoke prefixes are advertised by both GW1 and GW2, how does the ExpressRoute edge router choose the best path?
(e.g., the diagram below shows EXR1 learning Region1 prefixes from both GW1 and GW2)
Here’s how:
- Edge routers (like EXR1) receive the same Spoke prefixes from both gateways.
- However, these routes have different AS-Path lengths:
– Routes from the local gateway (GW1) have a shorter AS-Path.
– Routes from the remote gateway (GW2) have a longer AS-Path because NVA1’s ASN (e.g., 65001) is prepended twice as part of the AS-Override mechanism.
As a result, the edge router (EXR1) will prefer the local path from GW1, ensuring efficient and predictable routing.
For example:
EXR1 receives Spoke1, Spoke2, and Hub1-VNet prefixes from both GW1 and GW2. But because the path via GW1 has a shorter AS-Path, EXR1 will select that as the best route. (Refer to the diagram below for a visual of the AS-Path difference).
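The edge router’s decision can be sketched as a simple AS-Path length comparison. The paths are illustrative, using the article’s example ASNs (65515 for the gateways, 65001 for NVA1, prepended twice by AS-Override on the inter-hub leg):

```python
# Sketch of EXR1's best-path choice for a Spoke1 prefix.
routes = {
    # Learned directly from the local gateway GW1:
    "via GW1": [65515],
    # Same prefix via GW2: NVA1's ASN appears twice because AS-Override
    # rewrote 65515 on the inter-hub advertisement, lengthening the path.
    "via GW2": [65515, 65001, 65001],
}

best = min(routes, key=lambda name: len(routes[name]))
print(best)  # "via GW1" -- the shorter AS-Path wins
```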
Final Traffic Flow:
Option-A Insights:
- This design simplifies UDR configuration for inter-hub routing, especially useful when dealing with non-contiguous prefixes or operating across multiple hubs.
- For simplicity, we used a single NVA in each Hub-VNet while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).
- When on-premises traffic inspection is required, the UDR setup in the GatewaySubnet becomes more complex as the number of Spokes increases. Additionally, each route table is currently limited to 400 UDR entries.
- As your Azure network scales, keep in mind that Azure Route Server supports a maximum of 8 BGP peers per instance (at the time of writing). This limit can impact architectures involving multiple NVAs or hubs.
Option B: Bypass On-Premises Inspection
If on-premises traffic inspection is not required, NVAs can advertise a supernet prefix summarizing the local Spoke VNets to the remote ARS. This approach provides granular control over which traffic is routed through the NVA and eliminates the need for BGP peering between the local NVA and local ARS. All other aspects of the architecture remain the same as described in Option A.
For example, NVA2 can advertise the supernet 192.168.2.0/23 (supernet of Spoke3 and Spoke4) to ARS1. As a result, Spoke1 and Spoke2 will learn this route with NVA2 as the next hop.
To ensure proper routing (as discussed earlier) and inter-hub inspection, you need to apply a UDR in Spoke1 and Spoke2 that overrides this exact supernet prefix, redirecting traffic to NVA1 as the next hop.
At the same time, traffic destined for on-premises will follow the system route through the local ExpressRoute gateway, bypassing NVA1 altogether.
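A quick sanity check that the advertised supernet exactly summarizes the two spokes can be done with Python’s `ipaddress` module. The individual /24 ranges for Spoke3 and Spoke4 are assumptions consistent with the article’s 192.168.2.0/23 supernet:

```python
# Verify that 192.168.2.0/23 is the exact summary of the two spoke ranges.
import ipaddress

spoke3 = ipaddress.ip_network("192.168.2.0/24")   # assumed Spoke3 range
spoke4 = ipaddress.ip_network("192.168.3.0/24")   # assumed Spoke4 range
supernet = ipaddress.ip_network("192.168.2.0/23")

# collapse_addresses merges adjacent, aligned networks into their summary:
print(list(ipaddress.collapse_addresses([spoke3, spoke4])) == [supernet])  # True
print(spoke3.subnet_of(supernet), spoke4.subnet_of(supernet))              # True True
```

The same check is useful when planning the matching UDR override in Spoke1 and Spoke2, since the UDR prefix must be identical to the advertised supernet.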
In this setup:
- UDRs on the Spokes should have “Propagate Gateway Routes” set to True.
- No UDRs are needed in the GatewaySubnet.
👉Can NVA2 Still Advertise Specific Spoke Prefixes?
You might wonder: Can NVA2 still advertise specific prefixes (e.g., Spoke3 and Spoke4) learned from ARS2 to ARS1 instead of a supernet?
Yes, this is technically possible, but it requires maintaining BGP peering between NVA2 and ARS2.
However, this introduces UDR complexity in Spoke1 and Spoke2, as you’d need to manually override each specific prefix. This also defeats the purpose of using ARS for simplified route propagation, undermining the efficiency and scalability of the design.
Final Traffic Flow:
Option B: Bypass on-premises inspection traffic flow
Option-B Insights:
- This approach reduces the number of BGP peerings per ARS. Instead of maintaining two BGP sessions (local NVA and remote NVA) per Hub, you can limit it to just one, preserving capacity within ARS’s 8-peer limit for additional inter-hub NVA peerings.
- Each NVA should advertise a supernet prefix to the remote ARS. This can be challenging if your Spokes don’t use contiguous IP address spaces.
Scenario 2: ARS in the Hub and the NVA in Transit VNet
In Scenario 1, we highlighted that when on-premises inspection is required, managing UDRs at the GatewaySubnet becomes increasingly complex as the number of Spoke VNets grows. This is due to the need for UDRs to include specific prefixes for each Spoke VNet. In this scenario, we eliminate the need to apply UDRs at the GatewaySubnet altogether.
In this design, the NVA will be deployed in Transit VNet, where:
- Transit-VNet will be peered with local Spoke VNets and with the local Hub-VNet to enable intra-Hub and on-premises connectivity.
- Each Transit-VNet is also peered with the remote Transit VNets (e.g., Transit-VNet1 peered with Transit-VNet2) to handle inter-Hub connectivity through the NVAs.
- Additionally, Transit-VNets are peered with remote Hub-VNets, to establish BGP peering with the remote ARS.
- The NVA OS will need static routes for the local Spoke VNet prefixes; these can be specific prefixes or a supernet. The NVA advertises these routes to the ARSs over BGP peering, and each ARS in turn advertises them to on-premises via ExpressRoute.
- NVAs will BGP peer with local ARS and also with the remote ARS.
To understand the reasoning behind this design, let’s take a closer look at the setup in Region1, focusing on how ARS and NVA are configured to connect to Region2. This will help illustrate both inter-hub and on-premises connectivity. The same concept applies in reverse from Region2 to Region1.
Inter-Hub:
- To enable NVA1 in Region1 to learn prefixes from Region2, NVA2 will configure static routes at the OS level for Spoke3 and Spoke4 (or their supernet prefix) and advertise them to ARS1 via remote BGP peering.
- As a result, these prefixes will be received by NVA1, both at the NIC level, with NVA2 as the next hop, and at the OS level for proper routing.
- Spoke1 and Spoke2 will have a UDR with a default route pointing to NVA1 as the next hop. For instance, when Spoke1 needs to communicate with Spoke3, the traffic will first route through NVA1. NVA1 will then forward the traffic to NVA2 using VNet peering between the two Hubs.
- A similar configuration is applied in Region2, where NVA1 configures static routes at the OS level for Spoke1 and Spoke2 (or their supernet prefix) and advertises them to ARS2 via remote BGP peering. As a result, these prefixes are received by NVA2 both at the NIC level (injected by ARS2), with NVA1 as the next hop, and at the OS level for proper routing.
Note: At the OS level, NVA1 learns Spoke3 and Spoke4 prefixes from both the local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.
- On-Premises Traffic:
To explain the on-premises traffic flow, we’ll break it down into two directions: Azure to on-premises, and on-premises to Azure.
Azure to On-Premises Traffic Flow:
- Spokes in Region1 route all traffic through NVA1 via a default route defined in their UDRs.
- Because of BGP peering between NVA1 and ARS1, ARS1 advertises the Spoke1 and Spoke2 (or their supernet prefix) to on-premises through ExpressRoute (EXR1).
- The Transit-VNet1 (hosting NVA1) is peered with Hub1-VNet, with “Use Remote Gateway” enabled. This allows NVA1 to learn on-premises prefixes from the local ExpressRoute gateway (GW1), and traffic to on-premises is routed through the local ExpressRoute circuit (EXR1) due to higher BGP Weight configuration.
Note: At the OS level, NVA1 learns on-premises prefixes from both the local and remote ARSs. However, the NIC-level route injection determines the actual next hop, so even if the OS selects a different best route, it won’t affect forwarding behavior. The same applies to NVA2.
On-Premises to Azure Traffic Flow:
- Through BGP peering with ARS1, NVA1 enables ARS1 to advertise Spoke1 and Spoke2 (or their supernet prefix) to both EXR1 and EXR2 circuits (due to the ExpressRoute bowtie setup).
- Additionally, due to BGP peering between NVA1 and ARS2, ARS2 also advertises Spoke1 and Spoke2 (or their supernet prefix) to EXR2 and EXR1 circuits.
- As a result, both ExpressRoute edge routers in Region1 and Region2 learn the same Spoke prefixes (or their supernet prefix) from both GW1 and GW2, with identical AS-Path lengths, as shown below.
- This causes non-optimal inbound routing, where traffic from on-premises destined to Region1 Spokes may first land in Region2’s Hub2-VNet before traversing to NVA1 in Region1. However, return traffic from Spoke1 and Spoke2 will always exit through Hub1-VNet.
- To prevent suboptimal routing, configure NVA1 to prepend the AS path for Spoke1 and Spoke2 (or their supernet prefix) when advertising them to the remote ARS2. Likewise, ensure NVA2 prepends the AS path for Spoke3 and Spoke4 (or their supernet prefix) when advertising to ARS1. This approach helps maintain optimal routing under normal conditions and during ExpressRoute failover scenarios.
The diagram below shows NVA1 prepending the AS path for the Spoke1 and Spoke2 supernet prefix when BGP peering with the remote ARS (ARS2); the same applies to NVA2 when advertising Spoke3 and Spoke4 prefixes (or their supernet) to ARS1.
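The effect of the prepend can be sketched as follows, reusing the article’s example ASNs (65001 for the NVA, 65515 for ARS/gateway); the exact path contents are illustrative:

```python
# Sketch: AS-path prepending on the remote-ARS session breaks the tie
# that otherwise causes suboptimal inbound routing.

NVA_ASN = 65001

def advertise(path, prepends=0):
    """Return the AS_PATH after the NVA optionally prepends its own ASN."""
    return [NVA_ASN] * prepends + path

# Without prepending, EXR1 sees Spoke1/Spoke2 via GW1 and via GW2 with
# identical AS-Path lengths, so inbound traffic may land in Region2 first:
via_gw1 = advertise([NVA_ASN, 65515])            # NVA1 -> ARS1 -> GW1
via_gw2 = advertise([NVA_ASN, 65515])            # NVA1 -> ARS2 -> GW2
print(len(via_gw1) == len(via_gw2))              # True -> tie, suboptimal

# With NVA1 prepending toward the remote ARS2, the GW2 path becomes longer
# and EXR1 deterministically prefers GW1:
via_gw2_prepended = advertise([NVA_ASN, 65515], prepends=2)
print(len(via_gw1) < len(via_gw2_prepended))     # True -> GW1 preferred
```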
Final Traffic Flow:
Full Inspection: Traffic flow when NVA in Transit-VNet
Insights:
- This solution is ideal when full traffic inspection is required.
- Unlike Scenario 1 – Option A, it eliminates the need for UDRs in the GatewaySubnet.
- When ARS is deployed in a VNet (typically a Hub VNet), that VNet is limited to 500 VNet peerings (at the time of writing). However, in this design, Spokes peer with the Transit-VNet instead of directly with the ARS VNet, allowing you to scale beyond the 500-peer limit by leveraging Azure Virtual Network Manager (AVNM) or submitting a support request.
- Some enterprise customers may encounter the 1,000-route advertisement limit on the ExpressRoute circuit from the ExpressRoute gateway. In traditional hub-and-spoke designs, there’s no native control over what is advertised to ExpressRoute. With this architecture, NVAs provide greater control over route advertisement to the circuit.
- For simplicity, we used a single NVA in each Hub-VNet while explaining the setup and traffic flow throughout this article. However, a highly available (HA) NVA deployment is recommended. To maintain traffic symmetry in an HA setup, you’ll need to enable the next-hop IP feature when peering with Azure Route Server (ARS).
- This design does require additional VNet peerings, including:
- Between Transit-VNets (inter-region),
- Between Transit-VNets and local Spokes, and
- Between Transit-VNets and both local and remote Hub-VNets.