Overview
Semiconductor and Electronic Design Automation (EDA) companies prioritize reducing time to market (TTM), which depends on how quickly tasks such as chip design validation and pre-foundry work can be completed. Faster TTM also helps reduce EDA licensing costs: the less time each job ties up a license, the more work can be done with the same license pool.
To achieve shorter TTM, storage performance is crucial. As illustrated in the article “Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)” (1*), with the Large Volume feature, which requires a minimum volume size of 50TB, a single Azure NetApp Files Large Volume can reach an I/O rate of up to 652,260 at 2 ms latency and 826,379 at the performance edge (~7 ms).
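As a rough sketch of how such a Large Volume might be provisioned with the Azure CLI: the resource group, account, pool, VNet/subnet, and region names below are hypothetical placeholders, and the exact parameters should be confirmed against the current az netappfiles documentation.

# Minimal sketch (not the exact test deployment): create a 50TB (51200 GiB)
# Azure NetApp Files Large Volume on the Ultra service level.
# All resource and network names below are hypothetical.
az netappfiles volume create \
  --resource-group eda-rg \
  --account-name eda-anf \
  --pool-name eda-ultra-pool \
  --name eda-largevol01 \
  --location westus2 \
  --service-level Ultra \
  --usage-threshold 51200 \
  --file-path edalargevol01 \
  --vnet eda-vnet \
  --subnet anf-delegated-subnet \
  --protocol-types NFSv3 \
  --is-large-volume true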
Objective
In real-world production, EDA files—such as tools, libraries, temporary files, and output—are usually stored in different volumes with varying capacities. Not every EDA job needs extremely high I/O rates or throughput. Additionally, cost is a key consideration, since larger volumes are more expensive.
The objective of this article is to share benchmark results for different storage volume sizes: 50TB, 100TB, and 500TB, all using the Large Volume feature. We also included a 32TB case—where the Large Volume feature isn’t available on ANF—for comparison with Azure Managed Lustre File System (AMLFS), another Microsoft HPC storage solution. These benchmark results can help customers evaluate their real-world needs, considering factors like capacity, I/O rate, throughput, and cost.
Testing Tool
The EDA workload in this test was generated using a standard industry benchmark tool: the SPEC Storage 2020 suite. It simulates a mixture of EDA applications used to design semiconductor chips. The workload consists of EDA_FRONTEND and EDA_BACKEND components, maintained at a 3:2 ratio (3 EDA_FRONTEND processes for every 2 EDA_BACKEND processes) as the load increments. The detailed EDA workload distribution can be found in the SPEC User’s Guide and (1*).
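For orientation, a SPECstorage Solution 2020 run is driven by an sfs_rc configuration file and a manager script on the prime client. The snippet below is a minimal sketch with hypothetical client hostnames, mount paths, and load values; the authoritative parameter set and defaults are defined in the SPEC Storage 2020 User’s Guide.

# Minimal sketch of driving the SPECstorage Solution 2020 EDA workload.
# Hostnames, paths, and load values are placeholders.
cat > sfs_eda_rc <<'EOF'
BENCHMARK=EDA_BLENDED
LOAD=100
INCR_LOAD=100
NUM_RUNS=10
CLIENT_MOUNTPOINTS=client1:/mnt/eda client2:/mnt/eda
EXEC_PATH=/opt/specstorage2020/binaries/linux/x86_64/netmist
USER=root
WARMUP_TIME=300
EOF

# Launch from the prime client; the -s suffix labels the result files.
python3 SM2020 -r sfs_eda_rc -s anf_largevol_50tb_ultra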
Testing Environment
We used 10 E64dsv5 client VMs connecting to a single ANF or AMLFS volume, with the nconnect mount option (for ANF), to generate enough load for the benchmark. The client VM tuning and configuration are the same as those specified in (1*). The mount options are listed below, with a full example shown after the list.
- ANF mount option: nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp,noatime,nconnect=8
- AMLFS mount: sudo mount -t lustre -o noatime,flock
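For reference, the full mount commands look roughly like the following. The server IP addresses, export paths, and mount points are placeholders, not the actual test environment values.

# Azure NetApp Files (NFSv3) Large Volume, using the options listed above:
sudo mkdir -p /mnt/eda
sudo mount -t nfs -o nocto,actimeo=600,hard,rsize=262144,wsize=262144,vers=3,tcp,noatime,nconnect=8 \
  10.0.1.4:/edalargevol01 /mnt/eda

# Azure Managed Lustre (AMLFS); the MGS address and file system name come from
# the client connection details shown for the AMLFS resource in the Azure portal:
sudo mkdir -p /mnt/amlfs
sudo mount -t lustre -o noatime,flock 10.0.2.4@tcp:/lustrefs /mnt/amlfs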
All resources reside in the same VNet and, when possible, in the same Proximity Placement Group to ensure low network latency.
Figure 1. High level architecture of the testing environment
Benchmark Results
EDA jobs are highly latency sensitive. For today’s more complex chip designs, 2 milliseconds of latency per EDA operation is generally seen as the ideal target, while the performance edge is around 7 milliseconds. We list the I/O rates achieved at both latency points for easier reference. Throughput (in MB/s) is also included, as it is essential for many back-end tasks and the output phase. (Figure 2, Figure 3, Figure 4, and Table 1.)
For cases where the Large Volume feature is enabled, we observe the following:
- 100TB with the Ultra tier and 500TB with the Standard, Premium, or Ultra tier can reach an I/O rate of over 640,000 at 2 ms latency. This is consistent with the 652,260 stated in (1*). A 500TB Ultra volume can even reach a 705,500 I/O rate at 2 ms latency.
- For workloads that don’t require as high an I/O rate, either 50TB with the Ultra tier or 100TB with the Premium tier can reach a 500,000 I/O rate. For even smaller jobs, 50TB with the Premium tier can reach 255,000 at a lower cost.
- For scenarios where throughput is critical, 500TB with the Standard, Premium, or Ultra tier can all reach 10–12 GB/s of throughput.
Figure 2. Latency vs. I/O rate: Azure NetApp Files – one Large Volume
Figure 3. Achieved I/O rate at 2 ms latency & performance edge (~7 ms): Azure NetApp Files – one Large Volume
Figure 4. Achieved throughput (MB/s) at 2 ms latency & performance edge (~7 ms): Azure NetApp Files – one Large Volume
Table 1. Achieved I/O rate and throughput at both latency points: Azure NetApp Files – one Large Volume
For cases with less than 50TB of capacity, where the Large Volume feature is not available for ANF, we included Azure Managed Lustre File System (AMLFS) for comparison.
With the same 32TB volume size, a regular ANF volume achieves an I/O rate of about 90,000 at 2 ms latency, while an AMLFS Ultra file system (500 MB/s/TiB) can reach roughly double that, around 195,000. This shows that AMLFS is the better-performing choice when the Large Volume feature isn’t available on ANF. (Figure 5.)
Figure 5. Achieved I/O rate at 2ms latency: ANF regular volume vs. AMLFS
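For context on the AMLFS side, provisioned throughput scales linearly with capacity for the chosen SKU, so the 32TB Ultra (500 MB/s/TiB) file system in this comparison is provisioned for roughly 16,000 MB/s (treating the 32TB size as 32 TiB); the one-liner below just makes that arithmetic explicit.

# AMLFS provisioned throughput = capacity (TiB) x per-TiB throughput of the SKU.
# 32 TiB x 500 MB/s per TiB:
echo "$((32 * 500)) MB/s"   # prints: 16000 MB/s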
Summary
This article shared benchmark results for different storage capacities needed for EDA workloads, including 50TB, 100TB, and 500TB volumes with the Large Volume feature enabled. It also compared a 32TB volume—where the Large Volume feature isn’t available on ANF—to Azure Managed Lustre File System (AMLFS), another Microsoft HPC storage option. These results can help customers choose or design storage that best fits their needs by balancing capacity, I/O rate, throughput, and cost.
With the Large Volume feature, 100TB Ultra and 500TB Standard, Premium, or Ultra tiers can achieve an I/O rate of over 640,000 at 2 ms latency. For jobs that need less I/O, 50TB Ultra or 100TB Premium can reach 500,000, while 50TB Premium offers 255,000 at a lower cost. When throughput matters most, 500TB volumes across all tiers can deliver 10–12 GB/s.
For smaller jobs, or when the Large Volume feature can’t be used, Azure Managed Lustre File System (AMLFS) provides better performance than a regular ANF volume.
A final reminder: this article primarily provided benchmark results to help semiconductor customers design their storage solutions, considering capacity, I/O rate, throughput, and cost. It did not address other important criteria, such as heterogeneous integration or legacy compliance, which also matter when selecting an appropriate storage solution.
References
(1*) Benefits of using Azure NetApp Files for Electronic Design Automation (EDA)