I'm going to argue that high-performance chip designs ought to use a (relatively) modest number of (relatively) strong cores. This might seem obvious. However, enough money is spent on developing the other kinds of chips to make the topic interesting, at least to me.
I must say that I understand everyone throwing millions of dollars at hardware which isn't your classic multi-core design. I have an intimate relationship with multi-core chips, and we definitely hate each other. I think that multi-core chips are inherently the least productive programming environment available. Here's why.
Our contestants are:
- single-box, single-core
- single-box, multi-core
- multi-box
With just one core, you aren't going to parallelize the execution of anything in your app just to gain performance, because you won't gain any. You'll only run things in parallel if there's a functional need to do so. So you won't have that much parallelism. Which is good, because you won't have synchronization problems.
If you're performance-hungry enough to need many boxes, you're of course going to parallelize everything, but you'll have to solve your synchronization problems explicitly and gracefully, because you don't have shared memory. There's no way to have a bug where an object happens to be accessed from two places in parallel without synchronization. You can only play with data that you've computed yourself, or that someone else decided to send you.
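Here's a minimal sketch of what I mean, assuming a POSIX system and faking the "many boxes" with two processes on one box connected by a pipe; the numbers and names are made up, the point is that the child can only work with the bytes the parent explicitly decided to send it:

```c
/* A minimal sketch of explicit message passing, assuming POSIX. There is
   no shared memory here: the child sees only what arrives over the pipe. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) != 0) return 1;

    if (fork() == 0) {                        /* child: the "other box" */
        close(fd[1]);
        int x;
        if (read(fd[0], &x, sizeof x) == sizeof x)   /* receive the message */
            printf("child got %d\n", x * 2);
        return 0;
    }

    close(fd[0]);
    int msg = 21;
    write(fd[1], &msg, sizeof msg);           /* send the message explicitly */
    close(fd[1]);
    wait(NULL);
    return 0;
}
```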
If you need performance, but for some reason can't afford multiple boxes (you run on someone's desktop or an embedded device), you'll have to settle for multiple cores. Quite likely, you're going to try to squeeze every cycle out of the dreaded device you have to live with just because you couldn't afford more processing power. This means that you can't afford message passing or a side-effect-free environment, and you'll have to use shared memory.
I'm not sure there's an inherent performance cost to message passing or to having no side effects. If I try to imagine a large system with massive data structures implemented without side effects, it looks like you have to create copies of objects at the logical level. Of course, these copies can then be optimized out by the implementation; I just think that some of the copies will in fact be implemented straightforwardly in practice.
I could be wrong, and would be truly happy if someone explained to me why. I mean, having no side effects helps analyze the data flow, but the language is still Turing-complete, so you don't always know when an object is no longer needed, right? So sometimes you have to make a new object and keep the old copy around, just in case someone needs it, right? What's wrong with this reasoning? Anyway, today I'll assume that you're forced to use mutable shared memory in multi-core systems for performance reasons, and leave this no-side-effects business for now.
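To make the copying I'm worried about concrete, here's a toy sketch in plain C; the function names and the array size are mine, and it's only my guess at what a no-side-effects update looks like when the optimizer doesn't save you:

```c
/* A toy illustration of the copies I have in mind. The "pure" update
   returns a fresh array because it can't know whether anyone still needs
   the old one; the in-place update is O(1), but everyone holding the
   pointer sees the change. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

enum { N = 4 };

/* no side effects: the old array stays valid, at the cost of a full copy */
int* set_pure(const int* old, int index, int value) {
    int* fresh = malloc(N * sizeof *fresh);
    memcpy(fresh, old, N * sizeof *fresh);
    fresh[index] = value;
    return fresh;
}

/* side effects: no copy, but whoever else holds the pointer is affected */
void set_in_place(int* arr, int index, int value) {
    arr[index] = value;
}

int main(void) {
    int a[N] = {1, 2, 3, 4};
    int* b = set_pure(a, 2, 30);   /* a is untouched, b is a new object */
    set_in_place(a, 2, 30);        /* a itself is modified */
    printf("a[2]=%d b[2]=%d\n", a[2], b[2]);
    free(b);
    return 0;
}
```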
Summary: multi-core is for performance-hungry people without a budget for buying computational power. So they end up with awful synchronization problems due to shared memory mismanagement, which is even uglier than normal memory mismanagement, like leaks or dangling references.
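For the record, here's a minimal sketch of the kind of shared memory bug I mean, using POSIX threads (compile with -pthread); two threads bump the same counter without synchronization, and the result is usually wrong:

```c
/* A minimal sketch of a data race: the classic bug shared memory makes
   possible. Both threads increment the same counter with no
   synchronization; the final value is usually less than 2000000. */
#include <stdio.h>
#include <pthread.h>

static long counter = 0;           /* shared and unsynchronized: the bug */

static void* bump(void* arg) {
    (void)arg;
    for (int i = 0; i < 1000000; ++i)
        counter++;                 /* unprotected read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}
```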
Memory mismanagement kills productivity. Maybe you disagree; I won't try to convince you now, because, as you might have noticed, I'm desperately trying to stay on topic here. And the topic was that multi-core is an awful environment, so it's natural for people to try to develop a better alternative.
Since multi-core chips are built for anal-retentive performance weenies without a budget, the alternative should also be a high-performance, cheap system. Since the clock frequency doesn't double as happily as it used to these days, the performance must come from parallelism of some sort. However, we want to remove the part where independent threads access shared memory. There are two things we can do:
- Use one huge processor.
- Use many tiny processors.
What does processor "size" have to do with anything? There are basically two ways of avoiding synchronization problems. The problems come from many processors accessing shared memory. The huge processor option doesn't have many processors; the tiny processors option doesn't have shared memory.
The huge processor would run one thread of instructions. To compensate for having just one processor, each instruction would process a huge amount of data, providing the parallelism. Basically we'd have a SIMD VLIW array, except it would be much much wider/deeper than stuff like AltiVec, SSE or C6000.
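To make "wide" concrete, here's a tiny SSE example (one of the instruction sets I just mentioned), assuming an x86 with SSE: a single instruction multiplies 4 floats. The hypothetical huge SIMD VLIW machine would do the same kind of thing, only hundreds of elements wide and across several issue slots per cycle:

```c
/* One SSE instruction processing 4 data elements at once. */
#include <xmmintrin.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_mul_ps(va, vb);    /* 4 multiplications in one instruction */
    _mm_storeu_ps(c, vc);
    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```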
The tiny processors would talk to their neighboring tiny processors using tiny FIFOs or some other kind of message passing. We use FIFOs to eliminate shared memory. We make the processors tiny because large processors are worthless if they can't process large amounts of data, and large amounts of data mean lots of bandwidth, and lots of bandwidth means memory, and we don't want memory. The advantage over the SIMD VLIW monster is that you run many different threads, which gives you more flexibility.
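Here's a sketch of the tiny-processors-with-FIFOs idea, faked on a regular machine with two POSIX threads connected by a pipe standing in for the hardware FIFO; the stages are made up, and the point is only that they communicate through the FIFO and nothing else:

```c
/* Two pipeline "stages" connected by a FIFO (a pipe). Stage 1 squares
   numbers and pushes them downstream; stage 2 sums whatever arrives.
   Neither stage touches the other's data except through the FIFO. */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static int fifo[2];   /* fifo[1]: write end, fifo[0]: read end */

static void* stage1(void* arg) {
    (void)arg;
    for (int i = 1; i <= 5; ++i) {
        int sq = i * i;
        if (write(fifo[1], &sq, sizeof sq) != sizeof sq)   /* send downstream */
            break;
    }
    close(fifo[1]);                        /* signal end of stream */
    return NULL;
}

static void* stage2(void* arg) {
    (void)arg;
    int x, sum = 0;
    while (read(fifo[0], &x, sizeof x) == sizeof x)        /* consume messages */
        sum += x;
    printf("sum of squares = %d\n", sum);
    return NULL;
}

int main(void) {
    if (pipe(fifo) != 0) return 1;
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```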
So it's either huge or tiny processors. I'm not going to name any particular architecture, but there were and still are companies working on such things, both start-ups and seasoned vendors. What I'm claiming is that these options provide less performance per square millimeter compared to a multi-core chip. So they can't beat multi-core in the anal-retentive performance-hungry market. Multiple cores and the related memory mismanagement problems are here to stay.
What I'm basically saying is, for every kind of workload, there exists an optimal processor size. (Which finally gets me to the point of this whole thing.) If you put too much stuff into one processor, you won't really be able to use that stuff. If you don't put enough into it, you don't justify the overhead of creating a processor in the first place.
When I think about it, there seems to be no way to define a "processor" in a "universal" way; a processor could be anything, really. Being the die-hard von Neumann machine devotee that I am, I define a processor as follows:
- It reads, decodes and executes an instruction stream (a "thread")
- It reads and writes memory (internal and possibly external)
This definition ignores at least two interesting things: that the human brain doesn't work that way, and that you can have hyper-threaded processors. I'll ignore both for now, although I might come back to the second thing some time.
Now, you can play with the "size" of the processor – its instructions can process tiny or huge amounts of data; the local memory/cache size can also vary. However, having an instruction processing kilobytes of data only pays off if you can normally give the processor that much data to multiply. Otherwise, it's just wasted hardware.
In a typical, actually interesting app, there aren't that many places where you need to multiply a zillion adjacent numbers in the same cycle. Sure, your app does need to multiply a zillion numbers per second. But you can rarely arrange the computations in a way that meets the time and location constraints imposed by having just one thread.
I'd guess that people who care about, say, running a website back-end efficiently know exactly what I mean; their data is all intertwined and messy, so SIMD never works for them. However, people working on number crunching tend to underestimate the problem. The abstract mental model of their program is usually much more regular and simple in terms of control flow and data access patterns than the actual code.
For example, when you're doing whiteboard run time estimates, you might think of lots of small pieces of data as one big piece. It's not at all the same; if you try to convince a huge SIMD array that N small pieces of data are in fact one big one, you'll get the idea.
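Here's a sketch of the difference in C; the chunk size of 3 is made up, but the shape of the problem isn't. The first loop is exactly what a wide SIMD unit wants; the second has the same total amount of data, but it's scattered behind pointers in short runs:

```c
/* One big piece vs. N small pieces: same data volume, very different
   suitability for a wide SIMD machine. */
#include <stdio.h>
#include <stddef.h>

/* one big piece: contiguous, long trip count, SIMD-friendly */
float sum_big(const float* data, size_t n) {
    float s = 0;
    for (size_t i = 0; i < n; ++i)
        s += data[i];
    return s;
}

/* N small pieces: pointer chasing, 3 elements per hop; not much for a
   huge SIMD array to chew on */
struct chunk { float vals[3]; struct chunk* next; };

float sum_small(const struct chunk* c) {
    float s = 0;
    for (; c; c = c->next)
        for (int i = 0; i < 3; ++i)
            s += c->vals[i];
    return s;
}

int main(void) {
    float big[6] = {1, 2, 3, 4, 5, 6};
    struct chunk c2 = {{4, 5, 6}, NULL};
    struct chunk c1 = {{1, 2, 3}, &c2};
    printf("%g %g\n", sum_big(big, 6), sum_small(&c1));
    return 0;
}
```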
For many apps, and I won't say "most" because I've never counted, but for many apps, data parallelism can only get you so much performance; you'll need task parallelism to get the rest. "Task parallelism" is when you have many processors running many threads, doing unrelated things.
One instruction stream is not enough, unless your app is really (and not theoretically) very simple and regular. If you have one huge processor, most of it will remain idle most of the time, so you will have wasted space in your chip.
Having ruled out one huge processor, we're left with the other extreme – lots of tiny ones. I think that this can be shown to be inefficient in a relatively intuitive way.
Each time you add a "processor" to your system, you basically add overhead. Reading and decoding instructions and storing intermediate results in local memory is basically overhead. What you really want is to compute, and a processor necessarily includes quite a lot of logic just for dispatching computations.
What this means is that if you add a processor, you want it to have enough "meat" to be worth the trouble. Lots of tiny processors is like lots of managers, each managing too few employees. I think this argument is fairly intuitive; at least, I find it easy to come up with a dumb real-world metaphor for it. The huge processor suffering from "lack of regularity" problems is a bit harder to treat this way.
Since a processor has an "optimal size", it also has an optimal level of performance. If you want more performance, the optimal way to get it is to add more processors of the same size. And there you have it – your standard, boring multi-core design.
Now, I bet this would be more interesting if I could give figures for this optimal size. I could of course shamelessly weasel out of this one – the optimal size depends on lots of things, for example:
- Application domain. x86 runs Perl; C6000 runs FFT. So x86 has speculative execution, and C6000 has VLIW. (It turns out that I use the name "Perl" to refer to all code dealing with messy data structures, although Python, Firefox and Excel probably aren't any different. The reason must be that I think of "irregular" code in general and Perl in particular as a complicated phenomenon, and a necessary evil).
- The cost of extra performance. Will your customer pay an extra 80% for an extra 20% of performance? For an x86-based system, the answer is more likely to be "yes" than for a C6000-based system. If the answer is "yes", adding hardware for optimizing rare use cases is a good idea.
I could go on and on. However, just for the fun of it, I decided to share my current magic constants with you. In fact there aren't many of them – I think that you can use at most 8-16 of everything. That is:
- At most 8-16 vector elements for SIMD operations
- At most 8-16 units working in parallel for VLIW machines
- At most 8-16 processors per external memory module
Also, 8-16 of any of these is a lot. Many good systems will use, say, 2-4 because their application domain is more like Perl than FFT in some respect, so there's no chance of actually utilizing that many resources in parallel.
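One common reason you can't utilize that many resources, sketched below with a made-up loop, is a serial dependency chain: each iteration needs the previous iteration's result, so extra SIMD lanes or VLIW slots mostly sit idle no matter how many of them the chip has.

```c
/* A made-up loop with a serial dependency on p: iteration i+1 can't start
   before iteration i finishes, so wide hardware buys little here. */
#include <stdio.h>
#include <stddef.h>

float chained(const float* a, size_t n) {
    float p = 1.0f;
    for (size_t i = 0; i < n; ++i)
        p = p * a[i] + 1.0f;   /* depends on the previous value of p */
    return p;
}

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%g\n", chained(a, 8));
    return 0;
}
```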
I have no evidence that the physical constants of the universe make my magic constants universally true. If you know about a great chip that breaks these "rules", it would be interesting to hear about it.