Optimize for Energy
Ned Bingham
The concepts introduced by Von Neumann in 1945 remain the centerpiece of computer architectures to this day. His programmable model for general-purpose computation, combined with a relentless march toward increasingly efficient devices, cultivated decades of advancement in the performance and power-efficiency of general-purpose computers. For a long time, chip area was the limiting factor and raw instruction throughput was the goal, leaving energy largely ignored. However, technology scaling has demonstrated diminishing returns, and the technology landscape has shifted considerably over the last 15 years.
Around 2007, three things happened. First, Apple released the iPhone, opening a new industry for mobile devices with limited access to power. Second, chips produced with technology nodes following Intel's 90nm process ceased scaling frequency as power density collided with the limitations of air cooling. For the first time in the industry, a chip could not possibly run all of its transistors at full throughput without exceeding the thermal limits imposed by standard cooling technology. By 2011, up to 80% of transistors had to remain off at any given time.
Third, the growth in wire delay relative to frequency introduced new difficulties in clock distribution. Specifically, around the introduction of the 90nm process, global wire delay grew just long enough relative to the clock period to prevent reliable distribution of the clock across the whole chip.
As a result of these factors, the throughput of sequential programs stopped scaling after 2005. The industry adapted, turning its focus toward parallelism. In 2006, Intel's SPEC benchmark scores jumped by 135% with the transition from NetBurst to the Core microarchitecture, which dropped the base clock speed to optimize energy and doubled the width of the issue queue from two to four, targeting Instruction-Level Parallelism (ILP) instead of the raw execution speed of sequential operations. Afterward, performance grew steadily as architectures continued to optimize for ILP. While SPEC2000 focused on sequential tasks, SPEC2006 introduced more parallel tasks.
By 2012, Intel had pushed most other competitors out of the desktop CPU market, and chips following Intel's 32nm process ceased scaling total transistor counts. While smaller feature sizes supported higher transistor density, they also brought higher defect density, causing yield losses that made larger chips significantly more expensive.
Today, energy has superseded area as the limiting factor, and architects must balance throughput against energy per operation. Furthermore, improvements in parallel programs have slowed due to a combination of factors. First, for many applications, all available parallelism has already been exploited. Second, limitations in power density and device counts have put an upper bound on the amount of computation that can be performed at any given time. And third, memory bandwidth has lagged behind compute throughput, introducing a bottleneck that limits the amount of data that can be communicated at any given time.

References

John Von Neumann.
SPEC CPU Subcommittee.
Kelin J. Kuhn.
Bill Holt.
Eugene S. Meieran.
Linley Gwennap. Estimating IC Manufacturing Costs: Die size, process type are key factors in microprocessor cost. Microprocessor Report, Volume 7. August 1993.
John D. McCalpin. STREAM benchmark. https://www.cs.virginia.edu/stream/.
Intel. Energy-Efficient, High Performing and Stylish Intel-Based Computers to Come with Intel® Core™ Microarchitecture. Intel Developer Forum, San Francisco CA, March 2006.
Venkatesan Packirisamy, et al.