Generative AI has been the buzz across engineering, science, and consumer applications, including EDA. It was the centerpiece of the keynotes at both SNUG and CadenceLIVE, and it will feature heavily at DAC. Very impressive task-specific tools and capabilities powered by traditional and generative AI are being developed by industry vendors and customers alike. However, all these solutions are point solutions addressing specific tasks. This leaves the question of how customers will tie it all together, and how they will run and access the LLMs, AI, and data resources needed to power these solutions. While our industry has experience developing, running, and maintaining high-performance EDA environments, an AI-centric data center running GPUs with a low-latency interconnect like InfiniBand is not an environment many chip development companies already have or have experience operating. Unfortunately, because LLMs are so resource hungry, it's difficult to "ease into" a deployment.
The Agentic Platform for EDA
At the Microsoft Build conference in May, Microsoft introduced the Microsoft Discovery Platform. The platform aims to accelerate R&D across several industry verticals, specifically biology (life science and drug discovery), chemistry (materials and substance discovery), and physics (semiconductors and multiphysics).
Microsoft Discovery provides the platform and capabilities to help customers implement a complete agentic AI environment. Because it is a cloud-based solution, customers won't need to manage the AI models or RAG solutions themselves. And because it runs inside the customer's cloud tenant, the AI models, the data they use, and the results they produce all remain under the customer's control and within the customer's environment. No data goes back out to the Internet, and all learning remains with the customer.
This gives customers confidence that they can safely and easily deploy and use AI models while maintaining complete sovereignty over their data and IP. Customers are free to deploy any of the dozens of AI models offered on Azure. They can also deploy GraphRAG solutions to improve context and get better LLM responses. All of this is available without deploying additional hardware or managing a large, independent GPU cluster; customers testing generative AI solutions and starting to develop their flows, tools, and methodologies with this new technology can deploy and use these resources as needed.
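As a simple illustration of "deploy and use as needed," querying a model deployed in your own tenant is a few lines of Python. This is a minimal sketch, not the Discovery platform's API; the endpoint, key, and deployment name are placeholders for your own resources.

```python
# Minimal sketch: query a model deployed in your own Azure tenant.
# Endpoint, key, API version, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",        # or use Entra ID authentication instead
    api_version="2024-06-01",
)

resp = client.chat.completions.create(
    model="<your-deployment>",   # the deployment name in your tenant
    messages=[{"role": "user", "content": "Summarize these lint violations..."}],
)
print(resp.choices[0].message.content)
```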
The Microsoft Discovery platform does not try to replace the EDA tools you already have. Instead, it allows you to incorporate those tools into an agentic environment. Without anthropomorphizing, these agents can be thought of as AI-driven task engines that can reason and interact with each other or with tools. They can be used to make decisions, analyze results, generate responses, take action, or even drive tools. Customers will be able to incorporate existing EDA tools into the platform and drive them with an agent, as in the sketch below. Microsoft Discovery will even be able to run agents from partners, helping customers intelligently tie multiple capabilities together and automate analysis and decision-making across the flow, so that each engineering team can accomplish a greater number of tasks more quickly and achieve increased productivity.
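To make "AI-driven task engine" concrete, here is a minimal, hypothetical sketch of an agent deciding whether to invoke an EDA tool. The run_lint tool, the my_lint command, and the placeholder endpoint and deployment are all assumptions for illustration; the generic LLM function-calling pattern shown here is not the Discovery platform's actual agent API.

```python
# Hypothetical sketch: an LLM-driven task engine that can choose to run
# an EDA tool. run_lint and the my_lint command are placeholders.
import json
import subprocess
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-key>",
    api_version="2024-06-01",
)

# Describe the tool so the model can decide when to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "run_lint",
        "description": "Run RTL lint on a source file and return the violations.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def run_lint(path: str) -> str:
    """Placeholder wrapper around a real lint tool's command line."""
    result = subprocess.run(["my_lint", path], capture_output=True, text=True)
    return result.stdout

messages = [{"role": "user", "content": "Lint rtl/alu.v and summarize the issues."}]
reply = client.chat.completions.create(model="<your-deployment>",
                                       messages=messages, tools=tools)
msg = reply.choices[0].message
if msg.tool_calls:  # the agent decided to act rather than just answer
    call = msg.tool_calls[0]
    messages.append(msg)
    messages.append({
        "role": "tool",
        "tool_call_id": call.id,
        "content": run_lint(**json.loads(call.function.arguments)),
    })
    final = client.chat.completions.create(model="<your-deployment>",
                                           messages=messages, tools=tools)
    print(final.choices[0].message.content)
```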
HPC Infrastructure for EDA
Of course, to run EDA tools, customers need an effective environment to run them in. One thing that has always been true in our industry, but is often overlooked, is that, as good as the algorithms in the tools are, they're always limited by the infrastructure they run on. No matter how fast your algorithm is, running it on a slow processor means turnaround time will still be slow. No matter how fast your tools are and how new and shiny your servers are, if your file system is a bottleneck, your tool and server will have to wait for the data. The infrastructure you run on sets the speed limit for your job regardless of how fast an engine you have. Most of the AI solutions being discussed for EDA focus only on the engine and ignore the infrastructure. The Microsoft Discovery platform understands this and addresses the issue by having the Azure HPC environment at its core.
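A back-of-the-envelope calculation, with purely illustrative numbers, shows why. Per Amdahl's law, if a job spends 40% of its wall-clock time waiting on storage, even a 10x faster compute engine yields barely a 2x overall speedup:

```python
# Amdahl's law applied to an EDA job. Numbers are illustrative.
io_fraction = 0.40      # share of runtime spent waiting on storage (unchanged)
compute_speedup = 10.0  # a 10x faster engine or processor

overall = 1.0 / (io_fraction + (1.0 - io_fraction) / compute_speedup)
print(f"Overall speedup: {overall:.2f}x")  # ~2.17x, not 10x
```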
The HPC core of the platform uses elements familiar to the EDA community. High-performance file storage utilizes Azure NetApp Files (ANF). This shared file service uses the same NetApp technology and hardware that many in the EDA community already use on-premises. ANF delivers unmatched performance for cloud-based file storage, especially for metadata-heavy workloads like EDA. This gives EDA workloads a familiar pathway into the Discovery platform to make use of the AI capabilities for chip design.
Customers will also have access to Azure's fleet of high-performance compute, including the recently released Intel Emerald Rapids-based FXv2 series, which was developed with large back-end EDA workloads in mind. FXv2 features 1.8 TB of RAM and an all-core turbo clock speed of 4 GHz, making it ideal for large STA, P&R, and PV workloads. For front-end and moderately sized back-end workloads, in addition to the existing HPC compute offerings, Microsoft recently updated the D- and E-series compute SKUs with Intel Emerald Rapids processors in the v6 versions of those systems, further pushing performance for smaller workloads.
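As a practical first step, you can enumerate the FX-family sizes actually available in your target region before planning capacity. Here is a sketch using the azure-mgmt-compute SDK; the subscription ID and region are placeholders, and the family-name match is a heuristic:

```python
# Sketch: list FX-family VM sizes available in a region.
# Requires: pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for sku in client.resource_skus.list(filter="location eq 'eastus2'"):
    if sku.resource_type == "virtualMachines" and sku.family and "FX" in sku.family:
        print(sku.name, sku.family)
```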
Design teams will have access to the high-performance compute and storage resources required to get the most out of their EDA tools while also taking advantage of the AI capabilities offered by the platform. The familiar, EDA-friendly HPC environment makes migration of existing workloads easier and ensures that tools run effectively and, more importantly, that flows mesh smoothly.
Industry Standards and Interoperability
Another aspect of the Microsoft Discovery platform that will be especially important for EDA customers is that it uses A2A (Agent2Agent) for agent-to-agent communication and MCP (Model Context Protocol) for agent-to-service communication. This matters because both A2A and MCP are industry-standard protocols. Microsoft also expects to support the evolution of these and other standards that emerge in this field, future-proofing your investment.
Those of us who have been involved in the various standards and interoperability efforts in semiconductors and EDA over the years understand that building the platform on industry standards-based interfaces makes adoption of new technology much easier for all users. With AI development rushing forward and everyone, customers and vendors alike, trying to capitalize on generative AI's promise, independent efforts by customers and vendors to develop capabilities quickly are already underway.
In the past, this meant that everyone went off in different directions: vendors developed mutually exclusive solutions, and customers then had to develop their own customized solutions to leverage them. The various solutions all worked slightly differently, making integration a painful process. The history of VMM, OVM, and UVM in verification methodology is an example of this. As the industry starts to develop AI and agentic environments, the same fragmentation is likely to happen again.
By starting with A2A and MCP, Microsoft is signaling for the industry to align around these industry-standard protocols. This will make it easier for agents developed by customers and vendors to interoperate with each other and with the Discovery platform. Vendor tools that implement an MCP server interface can communicate directly with customer agents using MCP as well as with the Discovery platform, which makes it easier for our industry to develop interoperable solutions. Similarly, agents that use A2A to interact with other agents can be integrated more easily when those other agents also communicate over A2A. If you're going to build agents for EDA, or EDA tools and services that interact with agents, build them using A2A for inter-agent communication and MCP for agent-to-tool and agent-to-service communication, as in the sketch below.
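As an example of what that looks like in practice, here is a minimal sketch of an MCP server wrapping an EDA tool, using the MCP Python SDK (pip install mcp). The run_sta tool and the underlying my_sta command are hypothetical stand-ins for a real tool and its command line:

```python
# Minimal sketch: expose an EDA tool to agents through an MCP server.
# run_sta and the my_sta command are hypothetical placeholders.
import subprocess
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("eda-tools")

@mcp.tool()
def run_sta(design: str, corner: str = "ss") -> str:
    """Run static timing analysis on a design at the given corner."""
    result = subprocess.run(
        ["my_sta", "--design", design, "--corner", corner],
        capture_output=True, text=True,
    )
    return result.stdout

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default; MCP clients/agents connect here
```

Any agent that speaks MCP, whether built by you, a vendor, or the Discovery platform, can then discover and call run_sta without custom integration code.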
Generative AI is likely to be the most transformative technology to impact EDA this decade. Productivity-wise, it will likely be at least as impactful for us as synthesis, STA, and automatic place and route were in their own ways. To learn more about these innovations, come join the Microsoft team at the Design Automation Conference (DAC) in San Francisco on June 23. At DAC, the Microsoft team will go into depth about the Discovery platform and the larger impact that AI will have on the semiconductor industry.
In his opening keynote discussion on Monday, Bill Chappell, Microsoft's CTO for the Microsoft Discovery and Quantum team, will discuss AI's impact on science and the semiconductor industry. Serge Leef's engineering-track session will cover generative AI in chip design, and don't miss Prashant Varshney's detailed explanation of the Microsoft Discovery platform in his Exhibitor Forum session. Visit the Microsoft booth (second floor, 2124) for more in-depth discussions with our team.