AWS, Arm Production-Scale Cloud EDA

  • January 6, 2021
  • William Payne

Chip designer Arm is moving its electronic design automation (EDA) to the AWS cloud. The Cambridge, England-based firm will use AWS Graviton2-based instances, powered by Arm Neoverse cores. This marks a significant shift in the semiconductor industry, which has traditionally used on-premises data centres for the computationally intensive work of verifying semiconductor designs.

Since beginning its AWS cloud migration, Arm has realised a 6x improvement in turnaround time for EDA workflows on AWS. Arm ultimately plans to reduce its global data centre footprint by at least 45% and its on-premises compute by 80% as it completes its migration to AWS.

EDA workflows are complex and include front-end design, simulation, and verification, as well as increasingly large back-end workloads that include timing and power analysis, design rule checks, and other applications to prepare the chip for production. These highly iterative workflows traditionally take many months or even years to produce a new device, such as a system-on-a-chip, and involve massive compute power. Semiconductor companies that run these workloads on-premises must constantly balance costs, schedules, and data centre resources to advance multiple projects at the same time. As a result, they can face shortages of compute power that slow progress or bear the expense of maintaining idle compute capacity.

By migrating its EDA workloads to AWS, Arm overcomes the constraints of traditionally managed EDA workflows and gains elasticity through massively scalable compute power. This enables it to run simulations in parallel, simplify telemetry and analysis, shorten iteration time for semiconductor designs, and add testing cycles without impacting delivery schedules.
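The parallelism described above can be illustrated with a minimal local sketch. The `run_simulation` function and the testcase names are hypothetical stand-ins: in a real EDA flow each job would invoke a simulator on one testcase, and elastic cloud capacity would supply the workers.

```python
from concurrent.futures import ThreadPoolExecutor

def run_simulation(testcase: str) -> tuple[str, bool]:
    # Placeholder for a single verification job; a real flow would
    # launch a simulator process and return its pass/fail result.
    return testcase, True

def run_regression(testcases: list[str], workers: int = 4) -> dict[str, bool]:
    # Fan independent testcases out across workers, mirroring how
    # elastic compute lets many simulations run side by side instead
    # of queueing on a fixed on-premises farm.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_simulation, testcases))

if __name__ == "__main__":
    results = run_regression([f"test_{i}" for i in range(8)])
    print(sum(results.values()), "of", len(results), "passed")
```

Because the testcases are independent, wall-clock time for a regression shrinks roughly with the number of workers available, which is the property elastic capacity exploits.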

Arm is employing Amazon Elastic Compute Cloud (Amazon EC2) to streamline its costs and timelines by optimising EDA workflows across the variety of specialised Amazon EC2 instance types. The company is using AWS Graviton2-based instances to achieve high performance and scalability, resulting in more cost-effective operations than running hundreds of thousands of on-premises servers. Arm uses AWS Compute Optimizer, a service that uses machine learning to recommend the optimal Amazon EC2 instance types for specific workloads, to help streamline its workflows.
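The core idea behind matching workloads to instance types can be sketched as a price-performance ranking. The catalogue below is entirely illustrative: the instance names are real EC2 types, but the throughput figures and hourly prices are hypothetical placeholders, not AWS pricing or Compute Optimizer output.

```python
# Hypothetical catalogue: relative throughput on a given workload and
# on-demand $/hour. All numbers are illustrative placeholders.
CATALOGUE = {
    "m5.4xlarge":  {"throughput": 1.00, "usd_per_hour": 0.768},  # x86 baseline
    "m6g.4xlarge": {"throughput": 1.40, "usd_per_hour": 0.616},  # Graviton2
    "c5.4xlarge":  {"throughput": 1.10, "usd_per_hour": 0.680},  # x86 compute-opt.
}

def best_price_performance(catalogue: dict) -> str:
    # Rank instances by throughput per dollar -- the kind of signal a
    # recommendation service derives from observed utilisation data.
    return max(
        catalogue,
        key=lambda name: catalogue[name]["throughput"]
        / catalogue[name]["usd_per_hour"],
    )

if __name__ == "__main__":
    print(best_price_performance(CATALOGUE))
```

With these placeholder numbers the Graviton2-based type wins on throughput per dollar, which is the shape of the trade-off the article describes, even though the actual figures for any real workload would differ.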

On top of the cost benefits, Arm is utilising the high performance of AWS Graviton2 instances to increase throughput for its engineering workloads, achieving an improvement of over 40% compared with previous-generation x86-based M5 instances. In addition, Arm uses services from AWS partner Databricks to develop and run machine learning applications in the cloud. Through the Databricks platform running on Amazon EC2, Arm can process data from every step in its engineering workflows to generate actionable insights for the company’s hardware and software groups and achieve measurable improvement in engineering efficiency.

“Through our collaboration with AWS, we’ve focused on improving efficiencies and maximising throughput to give precious time back to our engineers to focus on innovation,” said Rene Haas, President, IPG, Arm. “Now that we can run on Amazon EC2 using AWS Graviton2 instances with Arm Neoverse-based processors, we’re optimising engineering workflows, reducing costs, and accelerating project timelines to deliver powerful results to our customers more quickly and cost effectively than ever before.”

“AWS provides truly elastic high performance computing, unmatched network performance, and scalable storage that is required for the next generation of EDA workloads, and this is why we are so excited to collaborate with Arm to power their demanding EDA workloads running our high-performance Arm-based Graviton2 processors,” said Peter DeSantis, Senior Vice President of Global Infrastructure and Customer Support, AWS. “Graviton2 processors can provide up to 40% price performance advantage over current-generation x86-based instances.”