Azure CNI Powered by Cilium is a high-performance data plane leveraging extended Berkeley Packet Filter (eBPF) technologies to enable features such as network policy enforcement, deep observability, and improved service routing. Legacy CNI supports Node Subnet mode, where every pod gets an IP address from a given subnet. AKS clusters that require VNet IP addressing (non-overlay scenarios) are typically advised to use Pod Subnet mode. However, AKS clusters that do not face the risk of IP exhaustion can continue to use Node Subnet mode for legacy reasons and switch the CNI data plane to take advantage of Cilium's features. With this feature launch, we are providing that migration path!
Users often leverage node subnet mode in Azure Kubernetes Service (AKS) clusters for ease of use: it is a convenient option for users who do not want to manage multiple subnets, especially in smaller clusters. Beyond that convenience, let's highlight some additional benefits unlocked by this feature.
Improved Network Debugging Capabilities through Advanced Container Networking Services
By upgrading to Azure CNI Powered by Cilium with Node Subnet, Advanced Container Networking Services opens the possibility of using eBPF tools to gather request metrics at the node and pod level. Advanced Observability tools provide a managed Grafana dashboard to inspect these metrics for a streamlined incident response experience.
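These metrics are surfaced through Azure Monitor managed Prometheus and Azure Managed Grafana. As a rough sketch of wiring a cluster to those services (the workspace and Grafana resource IDs are placeholders to substitute with your own), the existing metrics add-on flags can be used:
az aks update --resource-group <resourceGroupName> --name <clusterName> --enable-azure-monitor-metrics --azure-monitor-workspace-resource-id <azureMonitorWorkspaceResourceId> --grafana-resource-id <grafanaResourceId>
Once metrics are flowing, the ACNS dashboards in the linked Grafana instance show the node- and pod-level request metrics described above.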
Advanced Network Policies
Network policies with the legacy CNI present a challenge because IP-based filtering rules require constant updating in a Kubernetes cluster where pod IP addresses change frequently. Enabling the Cilium data plane offers an efficient, scalable, label-based approach to managing network policies.
Create an Azure CNI Powered by Cilium cluster with node subnet as the IP Address Management (IPAM) networking model. This is the default IPAM option when the `--network-plugin azure` flag is used.
az aks create --name <clusterName> --resource-group <resourceGroupName> --location <location> --network-plugin azure --network-dataplane cilium --generate-ssh-keys
A flat network can lead to less efficient use of IP addresses. Careful planning helps here: the List Usages operation on a given VNet shows the current usage of the subnet space. AKS creates a VNet and subnet automatically during cluster creation. Note that the resource group for this VNet is generated from the cluster's resource group, cluster name, and location.
From the Portal under Settings > Networking for the AKS cluster, we can see the names of the resources created automatically.
az rest --method get --url "https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/MC_acn-pm_node-subnet-test_westus2/providers/Microsoft.Network/virtualNetworks/aks-vnet-34761072/usages?api-version=2024-05-01"
{
  "value": [
    {
      "currentValue": 87,
      "id": "/subscriptions/9b8218f9-902a-4d20-a65c-e98acec5362f/resourceGroups/MC_acn-pm_node-subnet-test_westus2/providers/Microsoft.Network/virtualNetworks/aks-vnet-34761072/subnets/aks-subnet",
      "isAdjustable": false,
      "limit": 65531,
      "name": {
        "localizedValue": "Subnet size and usage",
        "value": "SubnetSpace"
      },
      "unit": "Count"
    }
  ]
}
To better understand this utilization, click the link for the virtual network, then open the list of Connected devices. This view also shows which IPs are in use on a given node.
There are a total of 87 devices, consistent with the subnet usage reported by the previous command. Since the default creates three nodes with a maximum pod count of 30 per node (configurable up to 250), IP exhaustion is not a concern here, although careful planning is required for larger clusters.
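For reference, the per-node maximum pod count is set when a node pool is created. A minimal sketch, with placeholder resource and node pool names:
az aks nodepool add --resource-group <resourceGroupName> --cluster-name <clusterName> --name <nodePoolName> --node-count 3 --max-pods 250
In node subnet mode each node reserves subnet IPs for its maximum pod count, so size the subnet accordingly.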
Next, we will enable Advanced Container Networking Services (ACNS) on this cluster.
az aks update --resource-group <resourceGroupName> --name <clusterName> --enable-acns
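As a quick sanity check, confirm the Cilium agent pods are running; this assumes they carry the upstream k8s-app=cilium label, as they do on Azure CNI Powered by Cilium clusters:
kubectl get pods -n kube-system -l k8s-app=cilium -o wide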
Create a default deny Cilium Network policy. The namespace is `default`, and we will use `app: server` as the label in this example.
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: default-deny
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: server
  ingress:
  - {}
  egress:
  - {}
EOF
The empty brackets under ingress and egress represent all traffic. Next, we will use `agnhost`, a network connectivity utility used in Kubernetes upstream testing that can help set up a client/server scenario.
kubectl run server --image=k8s.gcr.io/e2e-test-images/agnhost:2.41 --labels="app=server" --port=80 --command -- /agnhost serve-hostname --tcp --http=false --port "80"
Get the server address IP:
kubectl get pod server -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
server 1/1 Running 0 9m 10.224.0.57 aks-nodepool1-20832547-vmss000002
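For convenience in the following steps, the server's pod IP could also be captured in a shell variable (a small helper, not part of the original output above):
SERVER_IP=$(kubectl get pod server -o jsonpath='{.status.podIP}')
echo "$SERVER_IP"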
Create a client pod that will use the agnhost utility to test the network policy. Run this from a new terminal window, since the command opens an interactive shell.
kubectl run -it client --image=k8s.gcr.io/e2e-test-images/agnhost:2.41 --command -- bash
Test connectivity to the server from the client. A timeout is expected, since the network policy default-denies all traffic in the default namespace. Your pod IP may differ from the example.
bash-5.0# ./agnhost connect 10.224.0.57:80 --timeout=3s --protocol=tcp --verbose
TIMEOUT
Remove the network policy. In practice, you would keep the default deny policy and add further policies that allow connectivity for applications matching the desired conditions; a sketch of such an allow policy follows the delete command below.
kubectl delete cnp default-deny
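For illustration only (this walkthrough simply deletes the policy), an allow rule that could sit alongside the default deny might look like the following sketch. It assumes the client pod keeps the run=client label that kubectl run assigns by default:
kubectl apply -f - <<EOF
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-client-to-server
  namespace: default
spec:
  endpointSelector:
    matchLabels:
      app: server
  ingress:
  - fromEndpoints:
    - matchLabels:
        run: client
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF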
From the shell in the client pod, verify the connection is now allowed. If successful, there is simply no output.
kubectl attach client -c client -i -t
bash-5.0# ./agnhost connect 10.224.0.57:80 --timeout=3s --protocol=tcp
Connectivity between the server and client is restored. Additional debugging tools such as Hubble UI are covered in the Container Network Observability documentation for Advanced Container Networking Services (ACNS) for Azure Kubernetes Service (AKS) on Microsoft Learn.
Conclusion
Building a seamless migration path is critical to the continued growth and adoption of Azure CNI Powered by Cilium (ACPC). The goal is to provide a best-in-class experience by offering an upgrade path to the Cilium data plane, enabling high-performance networking across various IP addressing modes. This gives you the flexibility to fit your IP address plans while building varied workload types on AKS networking. Keep an eye on the AKS public roadmap for more developments in the near future.
Resources
- Learn more about Azure CNI Powered by Cilium.
- Learn more about IP address planning.
- Visit Azure CNI Powered by Cilium benchmarking to see performance benchmarks using an eBPF dataplane.
- Learn more about Advanced Container Networking Services.