Ramblings From a Mac Nut: Part 2

Last time we talked about the effect that OS X MIGHT have on Apple, its customer base, and the rest of the computing world. I received many thoughtful E-mail messages regarding this column and my conclusions, and I would like to thank those who took the time to write. I also received some not-so-thoughtful E-mail messages.

One reader suggested that he had “…found out…” that I shouldn’t be allowed to give my opinion in public. He might well have a point there, but what I would like to know is WHERE and HOW he found this tidbit of information out. Such a source of insight and wisdom would, doubtless, be of incalculable value to us all. I also had the honor of having my column written about and quoted extensively by contributing editor Charles W. Moore at Applelinks.com. Thanks for the good words, Mr. Moore!

Processor Wars

Having been involved with Macs since 1984 (I’ve owned 8), I well remember the days in the early ‘nineties when we had to make excuses for the fastest Mac being only a 40 MHz model while 486DX4-based computers were approaching 100 MHz. We (and Motorola) made excuses for the fact that the 68000 line of processors was falling behind. Motorola even went so far as to “double-label” their processor speed ratings, calling the 40 MHz 68040 a 40/80 MHz processor because the clock was doubled on-chip (unfortunately, this fooled nobody; the 486DX4s were clock-multiplied on-chip as well), just to make it look like the gap between the 68040 and the 486DX4 wasn’t that great. Well, it was that great, we knew it was, and all the apologizing in the world couldn’t change the fact that Macs were woefully slow in comparison to their Intel rivals. Then came the PowerPC RISC processor. “Boy,” we shouted, “the Intel crowd will never catch us now.” Well, for several years, it looked like this was so. Apple even ran ads showing Intel Pentium chips as snails.

Even as late as last fall when the G4 was announced, Jobs was talking about the Mac being the first desktop “supercomputer,” with gigaflops of performance. This balloon burst pretty quickly, because hardly had the hypo dried on those supercomputer ad films before it was announced that Motorola was unable to ship the 500 MHz G4 chip in quantities sufficient for Apple to offer a computer built around it. Apple regrouped quickly, downgrading its G4 line from a top speed of 500 MHz to a top speed of 450 MHz. Meanwhile, the Intel/AMD crowd is crowing loudly about 700 MHz, 850 MHz, and even 1 GHz models! Shades of 1991 again. What happened? Why has the AMI alliance (Apple/Motorola/IBM) faltered? How can Intel continue to squeeze more and more performance out of a CISC architecture that pundits said was at end-of-life when it hit 100 MHz over eight years ago? I can’t answer all of these questions, but we can examine Apple and Motorola to see what their part in all of this is.

The AMI Alliance

Back in the early nineties, soon after Apple abandoned its own foray into microprocessor development (the ‘Jaguar’ project), the company stunned the computer world by announcing a joint venture with Motorola and, of all people, its old arch-enemy, IBM, to co-develop a new RISC (Reduced Instruction Set Computer) processor based on IBM’s RS/6000 design. This processor was to power the Mac in the nineties and beyond. Experts were quick to point out that, thanks to its RISC architecture, the new chip would easily be able to outstrip the aging Intel x86 design, which was still based on the theoretically slower CISC (Complex Instruction Set Computer) architecture. Finally, Mac users would have a processor that would leave the despised PC in the dust.

When the first PPC-based machines shipped in 1994, they still had clock speeds slower than the fastest Intel chips, but the RISC advantage, so we were told, was such that a PPC could easily out-benchmark the Intel offering even if the Intel chip’s clock speed were a bit higher. Soon, the original PPC601 chip was replaced by the 604, then the 604e, and it looked as if the promise of the PowerPC was being realized: even though the Intel x86 chip (now renamed the Pentium) was able to keep up megahertz-wise, experts generally agreed that the 604e was slightly faster than a Pentium at the same clock speed. Then, in late 1997, Apple came out with its new G3 machines. These new machines were based on Motorola’s third-generation (hence the “G3” moniker) PPC chip, the PPC750.

Here was a PowerPC processor that truly lived up to the chip’s initial hype. It was significantly faster than Pentia of the same clock speed. Mac users and Apple rejoiced in the leap ahead, and Apple’s ad campaign showed it: bunny-suited Intel workers being scorched by the hot new processor, snails with Pentium II chips on their backs sluggishly crawling across the screen, and so on. The Mac was on a performance roll. Even some of Apple’s harshest critics allowed that the latest Macs were excellent performers. When the G4 came out last fall, it arrived amid the winks and nudges of self-assured Mac users convinced that their platform was pulling ahead of the competition at a rapid rate. Then the news hit: Motorola couldn’t make the 500 MHz processors.

Not only that, but early in the G4 development, Motorola and IBM had fallen out over AltiVec, a set of processor instructions that accelerated certain operations by as much as 30 times. IBM didn’t want to build G4s with the AltiVec engine (a dedicated on-chip vector unit) on board. Instead, they wanted to concentrate on supplying G4 chips with smaller geometries and higher speeds. So Apple couldn’t rely on its old second source to fill the gap until Motorola was back on track. To add to these woes, Motorola was shifting its PPC emphasis away from the desktop and toward embedded applications, and the joint IBM/Motorola PPC design center had closed in 1998. The AMI Alliance was falling apart.

Apple’s Woes: Big Ships Turn Slowly

What was happening? The answer, in part, lay in Apple’s recent financial woes. In the eyes of the world, Microsoft’s release of Windows 95 was pretty much the final nail in the Mac’s coffin. It was touted as an OS that finally erased the last vestiges of any advantage the Mac might have enjoyed over the Intel platform. Apple was in the red and quickly losing market share. The general prognosis was that the company could never pull out of this nose-dive, and that it was only a matter of time before both Apple and the Mac were just a memory.

Clone Blues

When Steve Jobs came back and took over Apple’s helm from Gil Amelio, one of the first things he did was kill the Mac clone industry. Suddenly, Apple was, once again, the only maker of Macintosh computers. The desktop computer division of Motorola, Umax, Power Computing, and a number of second-tier Mac clone makers were told that their licenses to make Mac clones would NOT be renewed. Apple actually bought Power Computing outright and closed its doors. With Apple’s stock price at an all-time low, a shrinking market share, and no other Mac manufacturers as customers, it looked to the microprocessor division of Motorola as if practically all demand for the desktop version of the PPC chip was about to go away.

They decided to make other plans. The new plans did not include Apple or the Mac in any way, except for a promise to continue to supply Apple with processors for as long as Apple lasted. This seemed a safe bet; Apple couldn’t last long. But Jobs wasn’t about to give up on Apple or the Macintosh, and in one marketing coup after another, he started to revitalize the Macintosh market. First he introduced a revamped PowerBook line, then the ground-breaking iMac, then the beautiful ice-blue G3, then the iBook, and next the slate-grey G4 machines. Sales soared, profits soared, and stock prices went through the roof. All of this success took Apple’s suppliers, especially Motorola and IBM, quite by surprise. Motorola was counting on winding down sales to Apple, and suddenly the sales numbers started to INCREASE. Ask the survivors of the Titanic: big ships take a long time to turn around, and Motorola is a VERY big ship. In the scramble to address the increased demand for a segment of the PPC market that Motorola had decided, essentially, to scrap, some things became painfully clear.

One of those was that Motorola now lacked the ability to successfully make large numbers of functional desktop PPC chips, especially at the higher clock rates. The number of successful pieces that a manufacturer can get from any one manufacturing run is called, in the semiconductor business, “yield,” and it works like this. All computer chips, processors included, are fabricated on large wafers of silicon. These wafers look like thin, shiny metal discs. Each one holds many identical integrated-circuit chips called ‘dice’ (singular: ‘die’). These dice are all exactly alike, and the number that can be fabricated at one time on a single wafer of silicon is a direct function of the chip’s die size.

Obviously, the smaller each rectangular die is, the greater the number that can be placed on a wafer of any given size. Microprocessors like PPC chips tend to be among the largest types of die produced, and often fewer than 100 can fit on a single large wafer. (Modern semiconductors are built on wafers ranging from about three inches in diameter all the way up to 12 inches. I am assuming that PPC chips are built on 12-inch wafers, but I don’t know this for sure; Motorola could be using 8-inch wafers.) Since wafer manufacturing costs are fairly fixed for any given wafer size, irrespective of what chips are being made on it or what the actual, individual chip size might be, the cost of each die has to include its fraction of the wafer’s manufacturing costs. In other words, many hundreds of very tiny chips can fit on a given wafer, and therefore the cost of each can be low.
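To put some rough numbers on that, here is a minimal sketch of the die-count and cost-sharing arithmetic. The wafer size, die dimensions, and per-wafer cost are illustrative assumptions of mine, not Motorola’s actual figures, and the die-count formula is only a common first-order approximation.

```python
import math

def dice_per_wafer(wafer_diameter_mm, die_width_mm, die_height_mm):
    """Rough first-order estimate of how many rectangular dice fit on a round
    wafer: wafer area divided by die area, minus a correction for the partial
    dice lost around the wafer's edge."""
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    die_area = die_width_mm * die_height_mm
    edge_loss = (math.pi * wafer_diameter_mm) / math.sqrt(2.0 * die_area)
    return int(wafer_area / die_area - edge_loss)

# Illustrative numbers only: an 8-inch (200 mm) wafer, a large 17 x 17 mm
# microprocessor die, and an assumed $3000 cost to process one wafer.
wafer_cost = 3000.0
dice = dice_per_wafer(200.0, 17.0, 17.0)
print(f"{dice} dice per wafer, roughly ${wafer_cost / dice:.2f} of wafer cost per die")
```

With those made-up figures the sketch gives a little over 80 dice per wafer, consistent with the “often fewer than 100” figure for big microprocessor dice.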

Big chips have relatively few dice per wafer and are therefore expensive. There are other cost factors involved, of course. One of these is the so-called ‘front-end’ or engineering cost, which must be amortized over the life of the product. Also, final product cost is, to a degree, a function of yield. Ideally, all the dice on a finished wafer should function perfectly (100% yield); after all, they are all identical. Unfortunately, the process by which these dice are fabricated is not perfect, and inevitably this affects the number of good dice which yield from each wafer. The yield of a new design often starts out poor, sometimes as low as 15%, and this drives the price of each functioning die high. As the manufacturer gets more experienced at making the new design, the yield improves, often to higher than 90%, and the price of each individual die comes down. This is at least one reason why the latest and fastest Pentium chip starts out expensive but soon drops precipitously in price.
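Carrying the same back-of-the-envelope sketch one step further, with the same assumed figures, yield determines how many of those dice actually work, and therefore how thinly the fixed wafer cost gets spread:

```python
def cost_per_good_die(wafer_cost, gross_dice, yield_fraction):
    """Spread the fixed wafer cost over only the dice that actually work."""
    return wafer_cost / (gross_dice * yield_fraction)

# Assumed figures from the earlier sketch: $3000 per wafer, 82 dice per wafer.
for y in (0.15, 0.50, 0.90):   # early, improving, and mature process yields
    print(f"yield {y:.0%}: about ${cost_per_good_die(3000.0, 82, y):.2f} per good die")
```

At a 15% yield each working part has to carry roughly six times as much wafer cost as it does at 90%, which is the price curve the Pentium follows as a design matures.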

Motorola has these problems, and others, with its desktop PPC chips. Not only has it had teething problems with the new G4, but since Apple is, essentially, the only volume customer for these chips, Motorola doesn’t build them in the kinds of numbers that Intel builds Pentiums. Therefore, the manufacturing experience needed to overcome yield problems comes to a PPC design much later in its manufacturing life than it does to any given Pentium design. This also causes other, related manufacturing problems. Many times a manufacturer will attribute low yields to manufacturing teething problems when, in fact, the cause is a fundamental design flaw in the chip architecture itself.

In a high-volume operation like a Pentium manufacturing line, this would soon become apparent: the chips would begin to yield better with time, and then, at some point short of expectation, yields would stop improving, a sure sign of design problems. With a low-volume product like the G4, design problems take longer to show up, and thus any design fix is delayed. This is essentially what has happened with the Motorola G4. The low yields were chalked up to manufacturing inexperience with the new chip when, in reality, there was a design flaw in the G4 itself. By the time the flaw was found, Motorola was very far behind in its shipments of the 500 MHz devices to Apple.
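A toy model makes that plateau signal easier to see. The learning-curve shape, ceilings, and rate below are made-up numbers of mine, chosen only to illustrate the difference between a process that is merely immature and one hiding a design flaw:

```python
def yield_over_time(months, ceiling, start=0.15, learning_rate=0.3):
    """Simple saturating learning curve: yield starts low and approaches 'ceiling'
    as the fab accumulates experience with the design."""
    return [ceiling - (ceiling - start) * (1.0 - learning_rate) ** m
            for m in range(months)]

healthy = yield_over_time(12, ceiling=0.90)   # only process teething problems
flawed = yield_over_time(12, ceiling=0.45)    # flattens out early: suspect the design

for month, (h, f) in enumerate(zip(healthy, flawed), start=1):
    print(f"month {month:2d}: healthy design {h:4.0%}   flawed design {f:4.0%}")
```

On a high-volume line the flawed curve’s early flattening would be obvious within months; on a low-volume line there simply aren’t enough wafers going through to see the shape of the curve that quickly.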

Put This One in Here, That One in There

Subtle variations in processing will yield chips on the same wafer that differ wildly in performance. Some chips won’t work at all, some barely meet minimum requirements, and a few are super performers. Thus the industry has taken to a practice called ‘binning.’ This practice consists of testing each chip after manufacturing and placing it in a different ‘bin’ according to its performance. Substandard parts are weeded out and discarded (or sold to less demanding markets), and the rest are segregated by the speed at which they will comfortably function. In the case of G4s for Apple, this would mean bins of 350, 400, 450, and 500 MHz. Motorola was getting very few 500 MHz parts and didn’t know why. This forced Apple to back away from its announced G4 machine speeds; rather than shipping the three announced machines at 400, 450, and 500 MHz, it had to ship machines at 350, 400, and 450 MHz instead.
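As a minimal sketch of the practice, using hypothetical test results rather than anything Motorola actually measured, binning amounts to sorting each tested die into the fastest speed grade it can comfortably hold:

```python
# The G4 speed grades mentioned above, fastest first.
SPEED_GRADES_MHZ = [500, 450, 400, 350]

def bin_die(max_stable_mhz):
    """Return the fastest grade the die qualifies for, or None if it fails all."""
    for grade in SPEED_GRADES_MHZ:
        if max_stable_mhz >= grade:
            return grade
    return None   # substandard part: discarded or sold into a less demanding market

# Hypothetical test results (highest reliable clock) for a handful of dice.
tested = [512, 463, 390, 364, 487, 275, 455]
bins = {}
for mhz in tested:
    bins.setdefault(bin_die(mhz), []).append(mhz)
print(bins)   # {500: [512], 450: [463, 487, 455], 350: [390, 364], None: [275]}
```

Motorola’s trouble, in these terms, was that almost nothing was landing in the 500 MHz bin.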

Next time: IBM gives in, Motorola plays catch-up, and Apple honors its commitments.