Fueling the Next Phase of Semiconductor Innovation Through the Cloud

By Vikram Bhatia 

Synopsys Cloud Product Management & GTM Strategy

Synopsys, Inc.

August 09, 2022


With growing compute requirements and tighter design schedules, semiconductor companies are constantly being tested to deliver both complex functionality and exceptional performance very quickly. Irrespective of where the customer is in their cloud journey, designers need to drive greater productivity and efficiency for complex chip design flows.

Today, many customers have large teams managing these flows. On top of this, the ability to scale electronic design automation (EDA) software licenses with extreme granularity has become a top priority. However, EDA tool flow and license management remain a heavy lift. In on-prem environments especially, project cycles depend on how long it takes to run a particular workload.

To accelerate this demanding chip development landscape and provide teams with greater efficiency and productivity, Synopsys launched the industry's first broad-scale cloud software-as-a-service (SaaS) solution earlier this year.

The Need for All-In-One Flexibility

Chip development in the cloud represents a way forward for an industry grappling with exploding computational demands. So far, there have been several breakthroughs in cloud technology as well as in the mindset of how users think about a pay-per-use license model and how EDA vendors view it. From a technology perspective, cloud computing has been around for a while, but being able to scale the pay-per-use approach for EDA workloads automatically has only become possible because of the tight integration with cloud providers.

From an economics perspective, it is all about time-to-market. For most semiconductor companies, large or small, a significant share of IT spending goes to the core of their business: designing chips. The economics of shifting methodologies are therefore about time-to-market rather than cost alone, an important distinction for customers adjusting to the pay-per-use mindset. Billing granularity should also match the workload. For a timing analysis, where a typical job runs for hours, an hourly pay-per-use model makes sense. For a task like library characterization, which can launch 50,000 simulation jobs that each take a few minutes, it makes sense to charge by the minute.
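To make the granularity point concrete, here is a minimal back-of-the-envelope sketch in Python. The rates and job profiles are hypothetical assumptions for illustration only, not Synopsys pricing; the point is simply that rounding many short jobs up to whole hours inflates cost, while per-minute metering tracks actual usage.

```python
import math

# Hypothetical per-license rates, for illustration only.
HOURLY_RATE = 3.00       # assumed cost per license-hour
PER_MINUTE_RATE = 0.05   # assumed cost per license-minute

def cost_hourly(num_jobs: int, minutes_per_job: float) -> float:
    """Hourly billing: each job is rounded up to a whole license-hour."""
    return num_jobs * math.ceil(minutes_per_job / 60) * HOURLY_RATE

def cost_per_minute(num_jobs: int, minutes_per_job: float) -> float:
    """Per-minute billing: pay only for the minutes actually used."""
    return num_jobs * math.ceil(minutes_per_job) * PER_MINUTE_RATE

# Library characterization: 50,000 short simulation jobs of ~3 minutes each.
print(cost_hourly(50_000, 3))      # 150000.0 -> dominated by hourly rounding
print(cost_per_minute(50_000, 3))  #   7500.0 -> reflects actual usage

# Timing analysis: a few long jobs, where hourly billing is already a good fit.
print(cost_hourly(10, 240))        # 120.0
print(cost_per_minute(10, 240))    # 120.0
```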

With a pay-per-use approach, customers no longer have to decide upfront how many licenses of each tool they need and when. Instead, design needs dictate how and when designers use chip design and verification tools. As a result, teams can decide how they want to run specific parts of a project without being limited by the number of licenses they own, a big breakthrough in user mindset.

Designing Without Barriers

From large, established design houses to startups “born in the cloud,” teams are experiencing the productivity and time-to-market benefits of moving their workloads to the cloud. As a result, we have seen huge customer demand for these technologies and business models.

Customers currently face three key challenges on-prem:

  • Compute capacity: There are more projects to run, increasingly complex chips to design, and less time to design them. The lead time to procure hardware on-prem is much longer than on the cloud, where everything is available on-demand at scale.
  • License management: Customers would rather spend their time and effort innovating and designing better chips than managing license servers. Almost every customer we've spoken to has told us they don't want to manage licenses but consider it a necessity to get the job done.
  • Cutting-edge compute: Access to the latest compute offerings, such as GPUs, is easier, faster, and less expensive on the cloud. Enabling the latest hardware on-prem is much harder, since customers need to commit to large capital expenditures and plan far in advance.

The Power of Two

Using a SaaS-based cloud EDA methodology gives teams a truly flexible model that accurately represents usage and aligns with solving today’s time-to-market needs. In the case of Synopsys Cloud, we designed our solution to offer two deployment models: bring your own cloud (BYOC) and software-as-a-service (SaaS).

For customers with complex, customized flows, we offer the option to deploy either model, or switch between them, at any point in time. The BYOC approach lets customers run their own cloud environment and retain full control while leveraging unlimited on-demand EDA license availability, which they can pay for by the hour or by the minute based on actual usage.

On the other hand, the SaaS deployment model is for customers who don’t need to manage IT and CAD in-house and prefer a single-source model for the entire solution. Our SaaS solution is tightly integrated with the Microsoft Azure cloud platform and our BYOC solution is available with Microsoft Azure, Amazon Web Services, and Google Cloud Platform. Customers can now get everything they need for running their EDA flows in one place — be it EDA software, compute, storage, or foundry collateral — on-demand and on a pay-per-use basis.

For example, if a customer runs a simple flow with a single EDA tool and needs 100 copies of that tool on day one and another 10,000 copies the next day, the underlying proprietary metering technology makes that scaling seamless.
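As a rough illustration of how pay-per-use accommodates that kind of swing, the sketch below assumes a hypothetical per-run usage record (not Synopsys's actual metering technology) and simply sums each tool copy's runtime into license-minutes, so the number of concurrent copies never has to be fixed upfront.

```python
from dataclasses import dataclass

@dataclass
class ToolRun:
    tool: str         # EDA tool name
    start_min: float  # job start time, in minutes
    end_min: float    # job end time, in minutes

def license_minutes(runs: list[ToolRun], tool: str) -> float:
    """Billable license-minutes for a tool: the sum of each copy's runtime,
    independent of how many copies ran concurrently."""
    return sum(r.end_min - r.start_min for r in runs if r.tool == tool)

# Day one: 100 copies of a simulator, each running for 30 minutes.
day_one = [ToolRun("sim", 0, 30) for _ in range(100)]
# Day two: 10,000 copies of the same tool, again 30 minutes each.
day_two = [ToolRun("sim", 1440, 1470) for _ in range(10_000)]

print(license_minutes(day_one, "sim"))  # 3000.0 license-minutes
print(license_minutes(day_two, "sim"))  # 300000.0 license-minutes
```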

Because users don’t need to quantify licenses and build infrastructure upfront in the pay-per-use model, they can start small and test things out for a short period of time. Since all the features are on-demand, they can grow their usage to make the most of the experience or choose to walk away. Combined with pre-optimized, advanced compute and storage infrastructure, this flexibility removes pre-planning work so teams can design chips faster.

Tracking Usage for Better Scale and Performance

Usage tracking for cloud infrastructure has been relatively straightforward; what is new is how usage is tracked for EDA tools. Every tool has a specific per-hour or per-minute rate, which teams use when sizing and launching a job.

Based on actual usage data, teams can see which decisions could be made better. These features let the design team leverage data and make better decisions, faster, without having to decide upfront how much time or how many licenses to allocate to a project. For instance, if a job on a 1,000-core cluster took five hours to run, teams can use that information to decide whether the job could run overnight at a smaller scale, or on a larger core count to finish faster.
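That trade-off can be reasoned about with simple arithmetic. The sketch below uses illustrative, assumed numbers (cost per core-hour, near-linear scaling) rather than measured data, but it shows how an observed five-hour run on 1,000 cores could be rescheduled overnight on fewer cores or accelerated on more.

```python
# Illustrative assumptions only, not measured Synopsys data.
CORE_HOUR_RATE = 0.05   # assumed cost per core-hour
BASELINE_CORES = 1_000  # observed run used 1,000 cores ...
BASELINE_HOURS = 5.0    # ... and finished in five hours

def estimate(cores: int) -> tuple[float, float]:
    """Rough runtime and cost at another core count, assuming near-linear
    scaling (real scaling is usually sub-linear, so treat this as a bound)."""
    hours = BASELINE_HOURS * BASELINE_CORES / cores
    cost = hours * cores * CORE_HOUR_RATE
    return hours, cost

for cores in (250, 1_000, 4_000):
    hours, cost = estimate(cores)
    print(f"{cores:>5} cores: ~{hours:.1f} h, ~${cost:,.0f}")
# 250 cores runs overnight (~20 h) while 4,000 cores finishes in ~1.3 h,
# both for roughly the same core-hours under the linear-scaling assumption.
```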

This is not easy to do on-prem. Today, there are many tools for which the number of users is irrelevant, especially tools that scale jobs horizontally on the cloud, but several still rely on an interactive user interface and a per-user model. Regardless of how an on-prem environment is set up, the number of users and projects must be decided upfront.

New SaaS Instances for Fast Ramp-Up

Because chip design flows can be complex to implement, we’ve introduced new SaaS instances built on Synopsys Cloud. These SaaS instances are ready-to-use flows that are preconfigured for specific design types: analog, digital, and verification. Each comes with flow automation and pre-optimized compute based on these sub-tasks. With these instances, design teams can quickly ramp up on their chip design and verification projects and set their sights on their quality and time-to-market targets.

Reimagining the Future of Chip Design

Going forward, the success of the cloud model will largely depend on whether cloud and software vendors can continue to provide an excellent user experience and an application interface that is friendly and intuitive. Users shouldn't feel intimidated by the interface; they should be able to grasp what the platform does after using it once, so they can focus on the real job: designing a quality chip. The more seamless the user experience, the faster the adoption.

Compared to on-prem, cloud user interfaces add immense value, enabling teams to make better decisions week after week, something that hasn't been possible until now.

EDA on the cloud continues to be a journey, one that is only getting more exciting. With the pace at which the semiconductor industry is moving, chip and system design will never be the same; that’s going to be a welcome transformation.


Vikram Bhatia is head of cloud product management and GTM strategy at Synopsys. Before joining Synopsys, Vikram served in a variety of roles at companies including NetApp (vice president of cloud GTM strategy and business operations), Oracle (director of sales strategy & business development for Oracle Cloud), and Microsoft (director, Microsoft Azure). He has a Bachelor of Technology degree from the Indian Institute of Technology Kanpur, a master of science degree from the Colorado School of Mines, and an MBA from the Indian School of Business.
