How Computational Software Helps Deliver Increasing Computing Power

By Russ Banham

Forbes

Moore’s Law, the observation that the number of transistors on a silicon chip will double roughly every two years while the cost of computing falls by half, turns 56 years old this year. This daring prediction of exponential growth has stood the test of time, but will it continue to prevail?

The answer could depend, in part, on computational software: the numerical algorithms used to analyze and design electronic systems. Every chip begins as a set of mathematical computations describing its architecture, and those computations are what allow designers to keep expanding the number of transistors that give electronic devices their processing power and performance.

Transistors made of silicon are the building blocks of integrated circuits (ICs) such as a computer’s central processing unit (CPU). Because they switch on and off much as the billions of neurons inside the human brain do to help us process thoughts and memories, transistors might be considered the electronic equivalent of brain cells. Generally, the more transistors, the more powerful the computer.

Moore’s Law is credited to Gordon Moore, cofounder of one of the world’s largest semiconductor manufacturers. In 1965, he projected that the number of transistors would double every two years as the relative cost of computing halved, promising a significant increase in performance and power. “As time progressed, Moore’s Law progressed with it,” said Anirudh Devgan, president of Cadence, a global leader that enables the design of electronic systems and chips.
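
The projection is simple arithmetic that compounds quickly. As a rough illustration (the 1971 baseline of about 2,300 transistors, the scale of the earliest commercial microprocessors, is an assumption added here for context, not a figure from the article), a few lines of Python show where doubling every two years leads:

```python
# Illustrative only: project transistor counts under Moore's Law, starting
# from an assumed 1971 baseline of ~2,300 transistors and doubling every
# two years.
start_year, start_transistors = 1971, 2_300

for year in range(start_year, 2022, 10):
    doublings = (year - start_year) // 2
    print(f"{year}: ~{start_transistors * 2 ** doublings:,} transistors")
```

By 2021 that doubling yields roughly 77 billion transistors, which is in the neighborhood of today’s largest commercial chips.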

Power Begets Power

This progression has culminated in the development of emergent technologies like artificial intelligence/machine learning (AI/ML), autonomous vehicles, 5G communications, hyperscale computing and the industrial internet of things (IIoT), explained Devgan.

“When people consider exponentiality, they imagine some sort of scary explosion at an endpoint, but my sense is that we’re in the middle of Moore’s Law,” said Devgan. “I honestly think we can keep the pace going. Yes, it will get harder to double the number of transistors every couple of years, as the last 15 years of technological development have shown us. But companies like ours, with singular expertise in using computational software to electronically design chips, are positioned to enable this exponentiality.”

Like the powerful chips it enables, Cadence needs ever more computing power to run its computational software: thousands of CPUs networked together across multiple machines. “When people think about the future of computers, they imagine the development of a supercomputer, but we’re really moving toward the scaling out of many computers connected together,” Devgan said.

This parallelism boosts overall computing brainpower, allowing huge volumes of data to be crunched in complex calculations, Devgan explained. In the early 2000s, dual-core processors, CPUs with two processing cores on the same IC, were state of the art. Today, Cadence is using computers with 32 CPUs each.

“Internally we have networked about 150,000 CPUs altogether in our cloud-computing data center and are adding another 20,000 CPUs to this network on a yearly basis,” Devgan said. “That’s an order of magnitude in power that was inconceivable a decade ago, yet it attests to the durability of the exponentiality behind Moore’s Law.”

Computations And Calculations

To compute is to calculate, literally and figuratively. The meaning behind “does not compute” is often that something doesn’t add up or make sense. Cadence is in the business of ensuring everything adds up with precision to produce chips that are smaller, yet faster and smarter.

To achieve this precision, the company performs mind-boggling computations using sophisticated mathematical techniques such as parallel algorithms. These algorithms organize data into structures such as arrays or cubes and then split complex tasks involving massive amounts of data across different processing devices, which execute them simultaneously to reach a result.

“Parallel algorithms are used in tasks involving the processing of a huge amount of complex data, such as in astronomical calculations, robotics and nuclear physics,” said Devgan. “We saw the value of parallel algorithms in the electronic design of semiconductor chips about 10 years ago, well before others did in the industry. It has allowed us to make much faster simulations to solve extremely complex problems.”
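
As a rough sketch of that data-parallel idea (illustrative only, and not Cadence’s actual software), the snippet below organizes a large array into chunks, processes each chunk on a separate worker process at the same time, and then combines the partial results:

```python
# Illustrative data parallelism: split an array into chunks, compute a
# partial result for each chunk on its own worker process, then combine.
from concurrent.futures import ProcessPoolExecutor

import numpy as np


def chunk_energy(chunk: np.ndarray) -> float:
    """Stand-in for a heavier per-chunk computation, e.g. part of a simulation."""
    return float(np.sum(chunk ** 2))


def parallel_energy(data: np.ndarray, workers: int = 4) -> float:
    chunks = np.array_split(data, workers)          # organize data into sub-arrays
    with ProcessPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(chunk_energy, chunks)   # process chunks simultaneously
    return sum(partials)                            # combine partial results


if __name__ == "__main__":
    data = np.random.default_rng(0).standard_normal(1_000_000)
    print(parallel_energy(data))
```

The same pattern, scaled up to thousands of networked CPUs, is what lets many machines act as one much larger computer.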

A case in point is machine learning. This large-scale computing application relies on sparse matrix algorithms, which work on matrices in which most elements are zero (as opposed to dense matrices, in which most elements are nonzero). “Using computational software, we can compress sparse matrices to speed up many machine learning processes at less power,” Devgan said.
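
To illustrate the general principle (using SciPy’s compressed sparse row format as a stand-in, not the specific techniques Devgan refers to), a sparse matrix can be stored as just its nonzero entries, so the matrix-vector products at the heart of many machine learning workloads touch far less data:

```python
# Illustrative only: a matrix with ~0.1% nonzero entries stored in compressed
# sparse row (CSR) form; the matrix-vector product uses only the stored
# nonzeros rather than all 100 million entries.
import numpy as np
from scipy.sparse import random as sparse_random

n = 10_000
sparse_mat = sparse_random(n, n, density=0.001, format="csr", random_state=0)
vector = np.ones(n)

result = sparse_mat @ vector  # multiplies using only the stored nonzeros

print(f"stored nonzeros: {sparse_mat.nnz:,} of {n * n:,} entries")
print(f"values stored: ~{sparse_mat.data.nbytes / 1e6:.1f} MB "
      f"vs ~{n * n * 8 / 1e6:.0f} MB if held as a dense matrix")
```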

That sounds quite a bit like Moore’s Law. “The combination of computational software and mathematics makes it possible for Moore’s Law to continue unabated for another generation or two,” said Devgan. “At that point, who knows? We may still be in the middle of the journey.”

Russ Banham is a Pulitzer-nominated financial journalist and bestselling author.
