Table of Contents
Using Ansys Access with Azure NetApp Files
Ansys Redhawk Scenario Details
Why Azure NetApp Files Capacity and Scale Fits for Ansys Access to support RedHawk-SC
Transient Simulations with Frequent Checkpoints
Multiple Concurrent Simulation Runs
End-to-End Engineering Workflows
Azure Well-Architected Pillars And Considerations
Dynamic Service Levels and Volume Resizing
Storage Efficiency for Data Protection
Abstract
This article explores the integration of Ansys Access with Azure NetApp Files (ANF) to deliver a high-performance, cloud-native environment tailored for Ansys RedHawk-SC simulations on Microsoft Azure. It addresses the critical importance of storage performance, reliability, and simplicity of deployment for engineering workloads that demand massive compute and I/O resources. By leveraging ANF’s enterprise-grade, low-latency shared storage and advanced data management capabilities, organizations can overcome traditional hardware limitations, streamline file management, and accelerate simulation workflows. The document highlights how this intelligent data infrastructure not only supports the complex I/O requirements and large datasets characteristic of RedHawk-SC but also boosts engineering productivity by reducing simulation times and enabling seamless scalability in the cloud.
Co-authors:
- Tilman Schroeder, CTO Automotive & MFG, NetApp
- Asutosh Panda, Azure NetApp Files Technical Marketing Engineer
- Andy Chan, Azure NetApp Files Principal Product Manager HPC/EDA
- Narayanan Terizhandur (T V), Principal Product Manager, Ansys
Introduction
This article presents a practical guide to integrating Ansys Access with Azure NetApp Files (ANF) for advanced simulation workloads in the cloud. After outlining the motivation and benefits of this integration, the article walks through the technical setup, key architectural elements, and specific considerations for Ansys RedHawk-SC. It concludes with strategies for optimizing storage costs and summarizes the main takeaways.
Using Ansys Access with Azure NetApp Files
Ansys Access on Microsoft Azure provides a high-performance computing (HPC) and visualization platform for engineering simulations such as Ansys Fluent and RedHawk-SC. For these workloads, storage performance, simplicity of deployment, and reliability are critical, especially when simulations scale to large model sizes or involve complex solvers requiring high throughput and low latency at scales upward of thousands of cores. Azure NetApp Files (ANF) is well-suited to these demands, delivering NFS- or SMB-based shared storage with enterprise-grade performance, availability, and security.
By integrating Azure NetApp Files within the Ansys Access platform, engineers gain a streamlined approach to file management, eliminating hardware limitations while ensuring advanced data protection and an optimized cloud-native HPC environment, thereby accelerating innovation. This document details the seamless integration of Ansys Access with Azure NetApp Files, placing particular emphasis on the Ansys RedHawk tool/module. With its reliable, low-latency file storage and robust data management capabilities, Azure NetApp Files effectively supports the complex I/O demands and large datasets essential for RedHawk simulations. This integration not only streamlines simulation workloads but also enhances engineering productivity by reducing simulation times.
Architecture Diagram
The diagram shows Ansys Access integrating with Azure NetApp Files (ANF) in Microsoft Azure, optimized for high-performance engineering simulations like Ansys RedHawk-SC. ANF offers enterprise-grade, low-latency shared storage via NFS or SMB protocols, efficiently exchanging data between simulation nodes and handling large I/O requirements. This setup ensures reliable, scalable, high-throughput access to big datasets, enhancing simulation workflows and overcoming hardware limitations while providing strong data protection and security in the cloud.
Ansys Redhawk Scenario Details
This chapter introduces the integration of Ansys Access with Azure NetApp Files (ANF) for high-performance Ansys RedHawk-SC simulations on Microsoft Azure. It outlines the motivations and benefits, guides readers through the technical setup and architecture, explores RedHawk-SC’s key storage needs, and concludes with strategies to optimize storage costs and a summary of key insights.
Overview and Context
The following paragraphs detail the critical storage and performance requirements for Ansys RedHawk-SC simulations and explain the key drivers motivating organizations to migrate HPC workloads to Azure with Azure NetApp Files.
HPC Simulation Environment
Ansys RedHawk-SC requires robust, low-latency I/O capabilities for reading and writing large model data files, and result sets.
Storage performance is a key consideration when Ansys RedHawk-SC is deployed at scale: the master, workers, and threads all communicate over the NFS protocol to work on a large GDSII file in a distributed fashion.
For optimal deployment, RedHawk-SC prefers (link):
- High-bandwidth network (10Gbps or more) for worker communication.
- Distributed storage to handle large design sizes and results.
- Thousands of CPUs for signoff workloads, depending on the complexity of the chip design.
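As a concrete illustration of the NFS-based setup described above, the sketch below mounts an ANF export on a compute node with options commonly recommended for ANF HPC workloads (NFSv3, large rsize/wsize, and nconnect for parallel TCP connections). The mount target IP, export path, and mount point are placeholder assumptions; adjust them to your environment.

```python
import subprocess

# Hypothetical values: replace with your ANF volume's mount target and export.
MOUNT_TARGET = "10.0.2.4"       # ANF mount target IP (from the volume's mount instructions)
EXPORT_PATH = "/redhawk-vol1"   # ANF export (creation token) -- assumption
MOUNT_POINT = "/mnt/redhawk"    # local mount point on each compute node

# NFSv3 options commonly recommended for ANF HPC workloads: large rsize/wsize
# for throughput, nconnect for multiple parallel TCP connections per mount.
options = "vers=3,rsize=262144,wsize=262144,hard,tcp,nconnect=8"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(
    ["mount", "-t", "nfs", "-o", options,
     f"{MOUNT_TARGET}:{EXPORT_PATH}", MOUNT_POINT],
    check=True,  # raises CalledProcessError if the mount fails
)
print(f"Mounted {MOUNT_TARGET}:{EXPORT_PATH} at {MOUNT_POINT}")
```

Each worker node mounts the same export, so the master and workers share one consistent view of the GDSII design data.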
Cloud Shift Drivers
Organizations often migrate Ansys HPC workloads to Azure for on-demand compute scalability, reduced on-premises infrastructure complexity, innovative performance and flexible cost management. Azure NetApp Files extends these benefits by offering a cloud-aligned approach to performance, availability, and data protection.
Why Azure NetApp Files Capacity and Scale Fits for Ansys Access to support RedHawk-SC
Large-volume capabilities accommodate growing simulation datasets, providing up to 2 PiB of capacity in a single namespace and avoiding the complexity of multiple file shares without sacrificing performance.
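A minimal sketch of provisioning such a volume with the azure-mgmt-netapp Python SDK follows. All resource names, the region, and the subnet ID are placeholder assumptions, and model details may vary slightly by package version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Volume

# Placeholder identifiers -- substitute your own subscription and resources.
SUBSCRIPTION_ID = "<subscription-id>"
RG, ACCOUNT, POOL = "rg-hpc", "anf-account", "pool-ultra"

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

volume = Volume(
    location="eastus",
    creation_token="redhawk-vol1",   # becomes the NFS export path
    usage_threshold=500 * 1024**4,   # 500 TiB quota, expressed in bytes
    service_level="Ultra",
    subnet_id="<delegated-subnet-resource-id>",  # subnet delegated to Microsoft.NetApp
    protocol_types=["NFSv3"],
    # Note: multi-hundred-TiB volumes use ANF's large-volumes capability,
    # which must be enabled (flag name varies by API version -- assumption).
)

poller = client.volumes.begin_create_or_update(RG, ACCOUNT, POOL, "redhawk-vol1", volume)
print(poller.result().provisioning_state)
```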
Use Cases
While Ansys Access supports a broad array of Ansys solvers, this guide focuses on Ansys RedHawk-SC to illustrate the benefits of Azure NetApp Files in HPC simulations in the cloud. Below are examples of how these solutions address critical pain points.
Power Integrity Simulations
Scenario: Running detailed power integrity analyses that involve large circuit netlists and extensive simulation datasets across multiple cores.
Why Move to Cloud: On-premises clusters may be capacity-constrained or require large capex investments while being unable to keep up with the latest compute and storage hardware in a cost-effective manner. Azure HPC compute (HB- or HC-series VMs) combined with Azure NetApp Files storage can handle large-scale parallel I/O efficiently. Each new CPU generation from AMD, Intel, or Microsoft in Azure offers a 10-20% IPC (instructions per cycle) improvement. Using Azure NetApp Files under Ansys Access allows customers to achieve faster simulation turnaround times, optimize HPC software license use, and boost productivity with the latest CPUs.
Transient Simulations with Frequent Checkpoints
Scenario: Conducting time-dependent dynamic voltage-drop analyses that generate significant simulation data.
Why Move to Cloud: Azure NetApp Files snapshots reduce overhead and facilitate quick partial/whole-volume restores without complex manual data backup processes.
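As a sketch of this checkpoint workflow with the azure-mgmt-netapp SDK, the snippet below takes an on-demand snapshot after a checkpoint completes and then reverts the volume to it. Resource names are placeholders, and the model names (Snapshot, VolumeRevert) should be checked against your SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import Snapshot, VolumeRevert

SUBSCRIPTION_ID = "<subscription-id>"
RG, ACCOUNT, POOL, VOL = "rg-hpc", "anf-account", "pool-ultra", "redhawk-vol1"

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Take a point-in-time, space-efficient snapshot after a checkpoint completes.
snap = client.snapshots.begin_create(
    RG, ACCOUNT, POOL, VOL, "checkpoint-0042",
    Snapshot(location="eastus"),
).result()

# Roll the whole volume back to that checkpoint if a run must be restarted
# (destructive for data written after the snapshot).
client.volumes.begin_revert(
    RG, ACCOUNT, POOL, VOL,
    VolumeRevert(snapshot_resource_id=snap.id),
).wait()
```

For partial restores, individual files can also be copied back out of the snapshot directory without reverting the whole volume.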
Multiple Concurrent Simulation Runs
Scenario: Engineering teams in different departments or geographical regions running different RedHawk-SC jobs simultaneously.
Why Move to Cloud: Pay-as-you-go HPC resources eliminate queue times. Azure NetApp Files ensures consistent performance for concurrent I/O-heavy workloads.
End-to-End Engineering Workflows
Scenario: Integrating pre-processing, simulation, post-processing, and data archival in a seamless workflow.
Why Move to Cloud: Azure’s ecosystem integration allows for smooth transitions between different stages of the simulation workflow, with Azure NetApp Files ensuring data consistency and cross-region replication for business continuity.
Azure Well-Architected Pillars And Considerations
Performance Efficiency
The Performance Efficiency pillar focuses on ensuring workloads remain responsive and scalable under load to support HPC requirements. Azure NetApp Files (ANF) offers capabilities that optimize for this pillar:
Parallel I/O and Low Latency
Azure NetApp Files delivers sub-millisecond latency and high throughput, making it ideal for performance-intensive workloads such as HPC, analytics, and databases. Its architecture supports parallel I/O operations, which is critical for workloads that require concurrent access to large datasets (link).
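The sketch below makes the parallel-I/O point tangible: several Python processes stream data concurrently to the same NFS mount and report aggregate throughput. The mount point is an assumed placeholder, and the numbers are illustrative rather than a formal benchmark.

```python
import os
import time
from multiprocessing import Pool

MOUNT_POINT = "/mnt/redhawk"   # ANF NFS mount from the earlier sketch -- assumption
FILE_SIZE = 1 * 1024**3        # 1 GiB per worker
BLOCK = 1024 * 1024            # 1 MiB write size

def write_stream(worker_id: int) -> float:
    """Write FILE_SIZE bytes sequentially; return achieved MiB/s."""
    path = os.path.join(MOUNT_POINT, f"iotest-{worker_id}.bin")
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # ensure data actually reaches the NFS server
    return (FILE_SIZE / 2**20) / (time.perf_counter() - start)

if __name__ == "__main__":
    with Pool(8) as pool:                      # 8 concurrent writers
        rates = pool.map(write_stream, range(8))
    print(f"aggregate: {sum(rates):.0f} MiB/s across {len(rates)} workers")
```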
Large Volumes
For simulation or analytics workloads that exceed tens or hundreds of TiB, ANF supports volume sizes of 1,024 TiB and more, with performance to match, enabling petabyte-scale storage for massive datasets. This eliminates the need to manage multiple mount points, reducing complexity and performance overhead.
Dynamic Service Levels and Volume Resizing
Volumes can be resized dynamically without downtime, allowing you to scale up or down based on performance needs. This ensures that your storage footprint and performance profile remain aligned with business requirements. Furthermore, ANF allows nondisruptive changes between the Standard, Premium, and Ultra tiers. This flexibility enables you to align performance with HPC workload demands in real time, optimizing cost, performance, and efficiency.
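A sketch of both operations with the azure-mgmt-netapp SDK follows, assuming placeholder resource names; the pool-change model name should be verified against your SDK version.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import VolumePatch, PoolChangeRequest

SUBSCRIPTION_ID = "<subscription-id>"
RG, ACCOUNT, POOL, VOL = "rg-hpc", "anf-account", "pool-premium", "redhawk-vol1"

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Grow the volume quota online (no unmount, no downtime).
client.volumes.begin_update(
    RG, ACCOUNT, POOL, VOL,
    VolumePatch(usage_threshold=200 * 1024**4),  # 200 TiB, in bytes
).wait()

# Change the service level nondisruptively by moving the volume to an
# Ultra-tier capacity pool (the ANF "pool change" operation).
ultra_pool_id = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
    f"/providers/Microsoft.NetApp/netAppAccounts/{ACCOUNT}/capacityPools/pool-ultra"
)
client.volumes.begin_pool_change(
    RG, ACCOUNT, POOL, VOL,
    PoolChangeRequest(new_pool_resource_id=ultra_pool_id),
).wait()
```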
Protocol Optimization
ANF supports NFSv3, NFSv4.1, and SMB 3.x, including dual-protocol volumes. This allows seamless integration across the different stages of an HPC workflow and ensures optimal performance across application types, from heavy simulation to image rendering.
Performance Isolation and QoS
Capacity pools in ANF provide performance isolation and predictable throughput. You can allocate throughput budgets per volume, ensuring critical workloads maintain consistent performance even under contention.
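For capacity pools created with manual QoS, a volume's throughput budget can be set explicitly, as in this sketch; the names are placeholders and the throughput figure is illustrative.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import VolumePatch

SUBSCRIPTION_ID = "<subscription-id>"
RG, ACCOUNT, POOL, VOL = "rg-hpc", "anf-account", "pool-manual-qos", "redhawk-vol1"

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# In a manual-QoS capacity pool, each volume carries an explicit throughput
# budget independent of its size, so a critical RedHawk-SC scratch volume
# keeps its bandwidth even when neighbors are busy.
client.volumes.begin_update(
    RG, ACCOUNT, POOL, VOL,
    VolumePatch(throughput_mibps=1600.0),  # reserve 1,600 MiB/s for this volume
).wait()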
Cluster Right-Sizing
Integrate ANF with HPC environments by spinning up compute nodes only when needed. Automatically shut down idle nodes to reduce compute costs while maintaining performance during active workloads.
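A minimal sketch using the azure-mgmt-compute SDK is shown below; the idle-detection logic is assumed to live in your scheduler, and the node names are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RG = "rg-hpc"

compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate (stop compute billing for) nodes the scheduler has flagged as
# idle. The list below is a hypothetical stand-in for that detection step.
idle_nodes = ["hpc-node-03", "hpc-node-04"]

for vm_name in idle_nodes:
    compute.virtual_machines.begin_deallocate(RG, vm_name).wait()
    print(f"deallocated {vm_name}; ANF volumes and data remain available")
```

Because the simulation data lives on ANF rather than on node-local disks, nodes can be deallocated and recreated freely without any data staging.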
Cost Optimization
The Cost Optimization pillar focuses on maximizing business value while minimizing unnecessary expenses. Azure NetApp Files (ANF) provides several capabilities that support this goal:
Pay-As-You-Go Model
ANF charges are based on the capacity you assign to volumes and capacity pools, not on actual usage. This allows for predictable cost modeling and encourages right-sizing to avoid overprovisioning.
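A back-of-the-envelope model of this pricing behavior follows; the per-GiB-hour rates are purely illustrative placeholders, so check the Azure pricing page for current figures.

```python
# ANF bills on provisioned capacity-pool size per hour, not on bytes written.
# Rates below are illustrative placeholders only -- not real Azure prices.
PRICE_PER_GIB_HOUR = {"Standard": 0.0002, "Premium": 0.0004, "Ultra": 0.0005}

def monthly_cost(pool_tib: float, tier: str, hours: float = 730) -> float:
    """Provisioned-capacity cost for one capacity pool over a month."""
    return pool_tib * 1024 * PRICE_PER_GIB_HOUR[tier] * hours

# Right-sizing example: a 100 TiB Premium pool vs. an overprovisioned 150 TiB one.
print(f"100 TiB Premium: ${monthly_cost(100, 'Premium'):,.0f}/month")
print(f"150 TiB Premium: ${monthly_cost(150, 'Premium'):,.0f}/month")
```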
Storage Efficiency for Data Protection
ANF supports efficient snapshot technology for primary data protection. Snapshots are space-efficient and instantaneous, enabling frequent recovery points without duplicating data or incurring significant storage overhead.
Reserved Capacity
For predictable workloads, ANF offers reserved capacity pricing options that provide significant discounts compared to pay-as-you-go rates. This is ideal for long-term projects or environments with stable storage needs.
Tiering for Cold Data
Move infrequently accessed data to lower-cost tiers within the Azure storage account. This ensures that only active simulation data resides on high-performance volumes, while older results are stored more economically.
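ANF's cool access feature can perform this tiering transparently within a volume. The sketch below enables it via the SDK, assuming placeholder names and that your SDK version exposes the cool_access and coolness_period properties.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.netapp import NetAppManagementClient
from azure.mgmt.netapp.models import VolumePatch

SUBSCRIPTION_ID = "<subscription-id>"
RG, ACCOUNT, POOL, VOL = "rg-hpc", "anf-account", "pool-standard", "results-vol"

client = NetAppManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Enable cool access so blocks untouched for `coolness_period` days are
# transparently moved to the lower-cost tier, while the volume path and
# client mounts stay unchanged.
client.volumes.begin_update(
    RG, ACCOUNT, POOL, VOL,
    VolumePatch(cool_access=True, coolness_period=31),  # tier after 31 idle days
).wait()
```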
Archival
Migrate older or less frequently accessed simulation results to archival tiers in the Azure storage account. This reduces overall storage costs while preserving access to historical data when needed.
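One common pattern is packaging finished runs and pushing them to Azure Blob Storage's Archive tier, as in the sketch below; the storage account URL and result path are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

ACCOUNT_URL = "https://mystorageacct.blob.core.windows.net"  # hypothetical account
CONTAINER = "redhawk-archive"
LOCAL_RESULT = "/mnt/redhawk/results/run-q3.tar.gz"          # hypothetical path

service = BlobServiceClient(ACCOUNT_URL, credential=DefaultAzureCredential())
blob = service.get_blob_client(CONTAINER, "run-q3.tar.gz")

# Upload the packaged results, then move the blob to the Archive tier so it
# is billed at archival rates until (if ever) it needs to be rehydrated.
with open(LOCAL_RESULT, "rb") as data:
    blob.upload_blob(data, overwrite=True)
blob.set_standard_blob_tier("Archive")
```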
Wrapping Up
By integrating Ansys Access with Azure NetApp Files, organizations unlock a powerful, cloud-native HPC environment that maintains enterprise-grade performance, reliability, and security standards. The on-demand scalability of the latest compute and storage solutions, combined with built-in data management features, enables engineering teams to run larger, more complex Ansys RedHawk-SC simulations while streamlining operational overhead. Adhering to the Azure Well-Architected Framework across performance, cost optimization, operational excellence, and security ensures a robust, simple-to-use, and future-proof HPC infrastructure, providing an intelligent data infrastructure platform that enhances engineering productivity.