The megahertz myth, or in more recent cases the gigahertz myth, refers to the misconception of only using clock rate (for example, measured in megahertz or gigahertz) to compare the performance of different microprocessors. While clock rates are a valid way of comparing the performance of different speeds of the same model and type of processor, other factors such as the number of execution units, pipeline depth, cache hierarchy, branch prediction, and instruction sets can greatly affect performance when comparing different processors. For example, one processor may take two clock cycles to add two numbers and another clock cycle to multiply by a third number, whereas another processor may do the same calculation in two clock cycles. Comparisons between different types of processors are difficult because performance varies depending on the type of task. A benchmark is a more thorough way of measuring and comparing computer performance.
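The point above can be expressed with the classic performance equation: execution time equals instruction count times cycles per instruction (CPI), divided by clock rate. The sketch below uses hypothetical instruction counts and CPI figures (not measurements of any real processor) to show how a lower-clocked chip can finish a task sooner.

```python
# Hedged sketch with hypothetical numbers: execution time depends on
# instruction count and cycles-per-instruction (CPI), not clock rate alone.
def execution_time_us(instructions, cpi, clock_mhz):
    """Time in microseconds: total cycles / (cycles per microsecond)."""
    return instructions * cpi / clock_mhz

# Hypothetical CPU A: 2 GHz (2000 MHz) but 4 cycles per instruction.
a = execution_time_us(1_000_000, 4, 2000)
# Hypothetical CPU B: 1 GHz (1000 MHz) but 1.5 cycles per instruction.
b = execution_time_us(1_000_000, 1.5, 1000)

print(a, b)  # 2000.0 1500.0 -- the CPU with half the clock rate finishes first
```

Despite a 2x clock-rate disadvantage, CPU B completes the workload in 1.5 ms against CPU A's 2 ms, which is exactly the comparison the clock rate alone hides.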

The myth started around 1984 when comparing the Apple II with the IBM PC.[citation needed] The argument was that the IBM computer was five times faster than the Apple II, as its Intel 8088 processor had a clock speed roughly 4.7 times the clock speed of the MOS Technology 6502 used in the latter. However, what really matters is not how finely divided a machine's instructions are, but how long it takes to complete a given task. Consider the LDA # (Load Accumulator Immediate) instruction. On a 6502 that instruction requires two clock cycles, or 2 μs at 1 MHz. Although the 4.77 MHz 8088's clock cycles are shorter, the equivalent instruction needs at least[1] 4 of them, so it takes at least 4 / 4.77 MHz ≈ 0.84 μs. So, at best, that instruction runs only a little more than twice as fast on the original IBM PC as on the Apple II.

History

Pentium 1 series processors

The x86 CISC-based CPU architecture which Intel introduced in 1978 was used as the standard for the DOS-based IBM PC, and developments of it still continue to dominate the Microsoft Windows market. An IBM RISC-based architecture was used for the PowerPC CPU, which was released in 1992. In 1994, Apple Computer introduced Macintosh computers using these PowerPC CPUs. Initially, this architecture met hopes for performance, and different ranges of PowerPC CPUs were developed, often delivering different performance at the same clock rate. Similarly, at this time the Intel 80486 was selling alongside the Pentium, which delivered almost twice the performance of the 80486 at the same clock rate.[2]

The myth arose because the clock rate was commonly taken as a simple measure of processor performance, and was promoted in advertising and by enthusiasts without taking into account other factors. The term came into use in the context of comparing PowerPC-based Apple Macintosh computers with Intel-based PCs. Marketing based on the myth led to the clock rate being given higher priority than actual performance, and led AMD to introduce model numbers giving a notional clock rate based on comparative performance, to overcome a perceived deficiency in its actual clock rates.[3]

Comparisons between PowerPC and Pentium had become a staple of Apple presentations. At the New York City Macworld Expo on July 18, 2001, Steve Jobs in his "Stevenote" described an 867 MHz PowerPC G4 as completing a task in 45 seconds while a 1.7 GHz Pentium 4 took 82 seconds for the same task, saying that "the name that we've given it is the megahertz myth".[4] He then introduced senior hardware VP Jon Rubinstein, who gave a tutorial describing how shorter pipelines gave better performance at half the clock rate. The online cartoon The Joy of Tech subsequently presented a series of cartoons inspired by Rubinstein's tutorial.[5]

Processor speed limits

Pentium 4 processors had high clock speeds, resulting in high temperatures and high power use.

From approximately 1995 to 2005, Intel advertised its Pentium mainstream processors primarily on the basis of clock speed alone, in comparison to competitor products from AMD. Press articles had predicted that computer processors might eventually run as fast as 10 to 20 gigahertz in the next several decades.

This continued up until about 2005, when the Pentium Extreme Edition was reaching thermal dissipation limits running at speeds of nearly 4 gigahertz. The processor could not go faster without requiring complex changes to the cooling design, such as microfluidic cooling channels embedded within the chip itself to remove heat rapidly.

This was followed by the introduction of the Core 2 desktop processor in 2006, which was a major departure from previous Intel desktop processors, allowing a nearly 50% decrease in clock speed while retaining the same performance.

Core 2 had its beginnings in the Pentium M mobile processor, where energy efficiency was more important than raw power, and initially offered power-saving options not available in the Pentium 4 and Pentium D.

Higher frequencies


In the years after the demise of the NetBurst microarchitecture and its 3+ GHz CPUs, microprocessor clock speeds kept slowly increasing after initially dropping by about 1 GHz. Several years' advances in manufacturing processes and power management (specifically, the ability to set clock speeds on a per-core basis) allowed for clock speeds as high as or higher than the old NetBurst Pentium 4s and Pentium Ds, but with much higher efficiency and performance. As of 2018, many Intel microprocessors are able to exceed a base clock speed of 4 GHz (the Intel Core i7-7700K and i3-7350K have a base clock speed of 4.20 GHz, for example).

In 2011, AMD was first able to break the 4 GHz barrier for x86 microprocessors with the debut of the initial Bulldozer-based AMD FX CPUs. In June 2013, AMD released the FX-9590, which can reach speeds of up to 5.0 GHz, but similar issues with power usage and heat output returned.

Neither Intel nor AMD was first in the industry to break the 4 GHz and 5 GHz barriers. The IBM z10 achieved 4.4 GHz in 2008, and the IBM z196 achieved 5.2 GHz in 2010, followed by the z12 achieving 5.5 GHz in autumn 2012.

References

  1. ^ The 8088 has a loosely coupled Execution Unit (EU) and Bus Interface Unit (BIU), with a prefetch queue; in the 8088, to execute the MOV AL,# instruction, similar in function to the LDA # instruction of the 6502, the EU requires 4 clock cycles, but the BIU requires 8 clock cycles. (It is a 2-byte instruction, and the BIU requires 4 clock cycles to read or write 1 byte, assuming no wait states.) Therefore, if the instruction is already in the prefetch queue, it takes 4 clock cycles to execute; if the instruction has not been prefetched, it takes 8 clock cycles; and if the BIU is in the process of prefetching the instruction when the EU begins to execute it, it takes 5 to 7 clock cycles. In contrast, the 6502, which has a much simpler fetch-execute pipeline, always takes the same number of clock cycles to execute a given instruction in any context.
  2. ^ "Analysis: x86 Vs PPC". Retrieved 2008-09-18.
  3. ^ Tony Smith (February 28, 2002). "Megahertz myth: Technology". The Guardian. Retrieved 2008-09-18.
  4. ^ "A video of Megahertz Myth presentation". YouTube. Archived from the original on 2021-12-21.
  5. ^ Nitrozac and Snaggy (2001-10-11). "The Megahertz Myth". The Joy of Tech. Retrieved 2011-11-21.