The first computer I used was a real performance beast. Equipped with an Intel 486 clocked at 66 MHz, this machine was ready to take on whatever challenges the future would bring. Or so I thought. CPU clock speeds increased, soon passing 500 MHz, then 1 GHz, and continued upwards. Around 2005, the top speed of high-end processors settled around 4 GHz and hasn’t increased much since then. Why is that? I’ll explain.
Why Are We Talking About Clock Speeds?
Even though the clock speed doesn’t tell us everything about a processor, most of us automatically connect higher clock speeds with faster processors. How come? What does this number tell us about the performance of a processor? To understand this, let’s briefly look at how a processor is constructed.
The Processor and Clock Speed
The most important components in a processor are the transistors: electronic devices that act as switches and are combined to construct logic gates. These logic gates are the hard-working components of our processors. Put together in different combinations, they form units capable of arithmetic and complex logical operations.
The speed at which such an operation can be performed is, in layman’s terms, limited by the frequency at which a transistor can switch between on and off and still operate without failure. Since transistors are the building blocks of the logic gates, this switching frequency also limits the operating speed of our processor.
So, if we feed our processor with one input signal per second and the processor performs our operations error-free, we say that the processor is clocked at 1 Hz. In other words, the clock speed (sometimes referred to as the clock frequency or clock rate) of the processor is a kind of certification telling us how often we can give it instructions and still have failure-free operations. Looking at it the other way around, a processor with a clock speed of 3 GHz allows us to feed it with 3 billion operations per second, and we can still expect it to perform as predicted.
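To put rough numbers on this relationship, here is a small back-of-the-envelope sketch (the clock speeds are just the illustrative figures from this article, not specs of any particular CPU):

```python
# Relationship between clock speed and the time available per cycle.
# Illustrative figures only.

def cycle_time_ns(clock_hz: float) -> float:
    """Time available for one clock cycle, in nanoseconds."""
    return 1e9 / clock_hz

# A 66 MHz 486 had ~15 ns per cycle; a 3 GHz CPU has only ~0.33 ns,
# so every gate along the critical path must switch that much faster.
print(f"486 at 66 MHz: {cycle_time_ns(66e6):.1f} ns per cycle")
print(f"Modern 3 GHz:  {cycle_time_ns(3e9):.2f} ns per cycle")
```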
Now it is easy to see why we are interested in a higher clock speed. More operations per second mean that we can get more work done per unit time. For the user, this means that the programs on the computer will run faster, without much modification to the code. No wonder all processor manufacturers pushed for higher and higher switching frequencies.
Why the Stagnation in CPU Clock Speed?
To understand this, we need to look at another aspect of processors, namely the transistor count: the number of transistors the processor is equipped with. Since CPU dies stay roughly the same size, a higher transistor count requires smaller transistors.
Another important observation is the so-called Dennard scaling, which states that the power density of transistors stays roughly constant as they shrink: because voltage and current scale down with the transistors’ linear dimensions, more transistors can be packed into the same area without drawing more power. This observation, however, is no longer valid now that transistors have grown very small. The scaling of voltage and current with length has reached its limits: transistor gates have become so thin that their structural integrity suffers, and currents are starting to leak.
Furthermore, thermal losses become a problem when you put several billion transistors together on a small area and switch them on and off several billion times per second. The faster we switch the transistors, the more heat is generated; without proper cooling, they might fail and be destroyed. One implication of this is that a lower operating clock speed generates less heat and helps ensure the longevity of the processor. Another severe drawback is that an increase in clock speed requires a voltage increase, and power consumption depends roughly on the cube of the clock speed: dynamic power scales with voltage squared times frequency, and voltage must rise roughly in step with frequency. Power costs are an important factor to consider when operating computing centers.
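This cubic dependency can be made concrete with a minimal sketch. It assumes the textbook dynamic-power model P ∝ C·V²·f and the simplifying assumption that voltage scales linearly with frequency; real CPUs deviate from both:

```python
def relative_dynamic_power(freq_ratio: float) -> float:
    """Relative dynamic power under P ~ C * V^2 * f.
    Assumption: voltage scales linearly with frequency,
    so power scales as the cube of the frequency ratio."""
    voltage_ratio = freq_ratio  # simplifying assumption: V scales with f
    return voltage_ratio ** 2 * freq_ratio

# A 20% clock increase costs roughly 73% more power under this model.
print(relative_dynamic_power(1.2))  # 1.2^3 = 1.728
```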
[Image: A wafer containing efficient network-on-chip circuits scalable to hundreds of compute nodes; the circuits enable the industry’s first 256-node network-on-chip in 22 nm Tri-Gate CMOS, operating at near-threshold, ultra-low voltage and decreasing power by 9x to 363 μW at 340 mV. Image Credit: Intel Corp.]
But how can we get more computing power out of more transistors without increasing the clock speed? Through multicore computing. The overwhelming benefit of multiple cores follows from this reasoning: cutting the clock speed by 30% reduces power to roughly 35% of its original consumption, due to the cubic dependency (0.7 × 0.7 × 0.7 ≈ 0.35).
Of course, computing performance is also reduced by 30%. But two cores running at 70% of the original clock speed deliver 140% of the original compute power while using only 70% of the original power consumption (2 × 35%). To reach this kind of efficiency, the code would have to be parallelized so that both cores are perfectly exploited at the same time.
The Solution and Way Forward
Obviously, this plateauing of the clock speed hasn’t stopped the engineers at Intel and the like from pushing the envelope to achieve more performance. We have already seen that they can fit more and more transistors on the same chip, for example by making the step from 2D, or planar, transistors to 3D, or tri-gate, transistors.
This step not only decreases the power the processors require, by reducing the current flowing through the transistor to almost zero in the “off” state, but also allows as much current as possible to flow in the “on” state, thereby increasing performance.
Yet, it is the utilization of multicore computing that has contributed, and will continue to contribute, most to increased computer performance, as the major focus is now on parallelism and how best to divide computations over multiple cores.
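As a toy illustration of dividing work across cores, here is a sketch using Python’s standard multiprocessing module. The work function and inputs are invented for the example; it simply stands in for any CPU-bound task that splits into independent pieces:

```python
from multiprocessing import Pool

def heavy_computation(n: int) -> int:
    """Stand-in for a CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [100_000, 200_000, 300_000, 400_000]
    # Distribute the independent tasks over worker processes,
    # ideally one per physical core (capped at 2 here for the example).
    with Pool(processes=2) as pool:
        results = pool.map(heavy_computation, inputs)
    print(results)
```

In practice the speedup is limited by how much of the program is actually parallelizable (Amdahl’s law) and by the overhead of coordinating the workers.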
Pushing for Future Performance — Beyond CPU Clock Speed
The road to more computational performance wasn’t cut off by the stagnation in clock speeds. Since we could no longer rely on increases in raw clock speed, we were forced to find better and more effective solutions. This resulted in increased investment in parallelism, for both hardware and software, and in finding ways to make processors more energy efficient.
And no matter what the next roadblock will be, we can be certain that there will be clever engineers in the world who will overcome it and push the development and performance forward.