Niagara

Sun’s new processor offering is kind of interesting. It appears they decided that the right processor architecture for servers (i.e. machines used by thousands of people at a time) is just different from the right one for machines used more personally.

If you freeze the machine you’re using right at this moment and ask it how many threads are standing by waiting for processor cycles, i.e. not blocked for any reason other than access to the processor, the answer is almost certainly none, or one. If you’re designing the chip for that machine, it’s worth spending transistors to make one processor run really fast.

Now freeze a reasonably loaded server, say one serving up web pages to a thousand users, with a few middleware applications behind the web server and a database or two behind those. The answer may well be a few dozen, or possibly even a few hundred. Because this workload is more naturally fragmented, you’re better off spending your transistors on multiple processors.
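On Linux you can actually run this freeze-frame experiment: the kernel exposes the count of currently runnable threads (those waiting only for a CPU, not blocked on I/O or anything else) in the `procs_running` field of /proc/stat. A minimal sketch, assuming a Linux system:

```python
def runnable_threads():
    """Return the number of threads runnable right now, per /proc/stat.

    Linux-specific: reads the kernel's "procs_running" counter, which
    counts threads that want CPU time at this instant (including the
    thread doing this read). Returns None on non-Linux systems.
    """
    try:
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("procs_running"):
                    # The line looks like: "procs_running 2"
                    return int(line.split()[1])
    except OSError:
        pass
    return None

if __name__ == "__main__":
    print("runnable threads right now:", runnable_threads())
```

On an idle desktop this typically prints 1 or 2 (the script itself counts); on a busy server it can be dozens or more, which is exactly the disparity the argument above turns on.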

You might say they asked the question: if we could put a room full of servers on a chip, what would that look like? Or maybe it’s the return of the Connection Machine. I wonder if Sun’s got a *Lisp implementation?

I spent a lot of my early career working on multiprocessor architectures, both hardware and software. I became disillusioned. Given Moore’s Law they were always only a few years ahead of the curve, which limited your markets and meant that once you won customers you lost them a short while later. The action shifted to personal computing and away from servers, where the problems were easier to map onto the hardware. Mapping customer problems onto parallel architectures was sometimes easy, but more typically it required too many clever engineering hours to be worth it. The programming languages, compilers, and operating system architectures that would have reduced how clever your engineers needed to be tended to be immature, and the market wasn’t deep enough to spin up the network effects to fix that. That was the first time I began to appreciate the value of moving to where the network-effect current is fast.

Two things really triggered my exit from the multiprocessor industry. One was the Macintosh: it was just so cool, and it had so many obvious opportunities to do neat things. The other was the death of the line printer. We used to have these line printers with one print head for every column on the page. Then we bought a new printer that had only one print head going back and forth, and it was much faster. I thought: yeah, if you can’t even make the case for multiprocessing in a task so transparently parallel in the physical world, how likely is it that this whole multiprocessor thing is going to work out?

I wonder if it will be different this time. Three things have changed. Much of the action has shifted back to the server side. The total size of the market is amazingly larger, which can feed the network effects needed to build the tools. And our talent at this kind of stuff is much deeper and broader. So it seems like it might.

But it’s daring to decide that the processor industry has a viable niche where a new species can thrive.
