Optimal processor size

February 28th, 2008

I'm going to argue that high-performance chip designs ought to use a (relatively) modest number of (relatively) strong cores. This might seem obvious. However, enough money is spent on developing the other kinds of chips to make the topic interesting, at least to me.

I must say that I understand everyone throwing millions of dollars at hardware which isn't your classic multi-core design. I have an intimate relationship with multi-core chips, and we definitely hate each other. I think that multi-core chips are inherently the least productive programming environment available. Here's why.

Our contestants are:

- One core: a single processor in a single box.
- Many boxes: a cluster of machines with no shared memory.
- Multiple cores: several processors sharing memory in one box.

With just one core, you aren't going to parallelize the execution of anything in your app just to gain performance, because you won't gain any. You'll only run things in parallel if there's a functional need to do so. So you won't have that much parallelism. Which is good, because you won't have synchronization problems.

If you're performance-hungry enough to need many boxes, you're of course going to parallelize everything, but you'll have to solve your synchronization problems explicitly and gracefully, because you don't have shared memory. There's no way to have a bug where an object happens to be accessed from two places in parallel without synchronization. You can only play with data that you've computed yourself, or that someone else decided to send you.
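The no-shared-memory discipline above can be sketched in a few lines; here a queue stands in for the network link between boxes, and threads stand in for the boxes themselves. All the names are mine, made up for illustration.

```python
import threading
import queue

# A stand-in for two boxes on a network: the only way data moves
# between them is an explicit message on the channel.

def producer(out_q):
    # This "box" owns the data it computes; sending it is the only
    # way anyone else ever sees it.
    for i in range(5):
        out_q.put(i * i)
    out_q.put(None)  # explicit end-of-stream message

def consumer(in_q, results):
    # This "box" only touches data it was explicitly sent.
    while True:
        msg = in_q.get()
        if msg is None:
            break
        results.append(msg)

channel = queue.Queue()
received = []
t1 = threading.Thread(target=producer, args=(channel,))
t2 = threading.Thread(target=consumer, args=(channel, received))
t1.start(); t2.start()
t1.join(); t2.join()
```

Note that there is nothing to get wrong by accident: the consumer physically cannot reach data the producer hasn't sent.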

If you need performance, but for some reason can't afford multiple boxes (you run on someone's desktop or an embedded device), you'll have to settle for multiple cores. Quite likely, you're going to try to squeeze every cycle out of the dreaded device you have to live with just because you couldn't afford more processing power. This means that you can't afford message passing or a side-effect-free environment, and you'll have to use shared memory.
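The shared-memory situation, by contrast, looks like the sketch below: the lock is explicit synchronization you must remember at every access point, and forgetting it at just one of them is exactly the class of bug that makes shared memory painful. The names are illustrative.

```python
import threading

# Shared mutable state, as in the multi-core case above.

counter = 0
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:        # omit this, and updates can silently be lost
            counter += 1

threads = [threading.Thread(target=bump, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the result is deterministically 40000; without it, it's whatever the interleaving happens to produce.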

I'm not sure about there being an inherent performance impact to message passing or to having no side effects. If I try to imagine a large system with massive data structures implemented without side effects, it looks like you have to create copies of objects at the logical level. Of course, these copies can then be optimized out by the implementation; I just think that some of the copies will in fact be implemented straightforwardly in practice.

I could be wrong, and would be truly happy if someone explained to me why. I mean, having no side effects helps analyze the data flow, but the language is still Turing-complete, so you don't always know when an object is no longer needed, right? So sometimes you have to make a new object and keep the old copy around, just in case someone needs it, right? What's wrong with this reasoning? Anyway, today I'll assume that you're forced to use mutable shared memory in multi-core systems for performance reasons, and leave this no-side-effects business for now.
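The copying concern above can be made concrete with a tiny sketch: a side-effect-free update logically builds a whole new object, and the old one stays alive in case someone still holds a reference to it. A smart implementation may share structure or optimize the copy away; naively, it doesn't.

```python
# A hypothetical side-effect-free update: build a new tuple instead of
# mutating in place.

def pure_update(t, i, x):
    # returns a new tuple with element i replaced by x
    return t[:i] + (x,) + t[i + 1:]

old = (1, 2, 3, 4)
new = pure_update(old, 2, 99)
# the old version is still around, "just in case someone needs it"
```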

Summary: multiple cores are for performance-hungry people without the budget to buy computational power. So they end up with awful synchronization problems due to shared memory mismanagement, which is even uglier than ordinary memory mismanagement, like leaks or dangling references.

Memory mismanagement kills productivity. Maybe you disagree; I won't try to convince you now, because, as you might have noticed, I'm desperately trying to stay on topic here. And the topic was that multi-core is an awful environment, so it's natural for people to try to develop a better alternative.

Since multi-core chips are built for anal-retentive performance weenies without a budget, the alternative should also be a high-performance, cheap system. Since the clock frequency doesn't double as happily as it used to these days, the performance must come from parallelism of some sort. However, we want to remove the part where we have independent threads accessing shared memory. What we can do is two things:

- Build one huge processor, so there's only a single thread and nothing to synchronize.
- Build lots of tiny processors communicating through message passing, so there's no shared memory.

What does processor "size" have to do with anything? There are basically two ways of avoiding synchronization problems. The problems come from many processors accessing shared memory. The huge processor option doesn't have many processors; the tiny processors option doesn't have shared memory.

The huge processor would run one thread of instructions. To compensate for having just one processor, each instruction would process a huge amount of data, providing the parallelism. Basically we'd have a SIMD VLIW array, except it would be much much wider/deeper than stuff like AltiVec, SSE or C6000.
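A toy model of this "one huge processor" idea: a single instruction that multiplies a whole vector of operands at once. The lane width is a made-up constant; real SIMD units are narrower, which is exactly the point of the comparison.

```python
# One "instruction" of a hypothetical very wide SIMD machine.

LANES = 16

def simd_mul(a, b):
    # element-wise multiply across all lanes in one shot
    assert len(a) == LANES and len(b) == LANES
    return [x * y for x, y in zip(a, b)]

# This only pays off if you can line up LANES useful operands
# every single cycle.
out = simd_mul(list(range(LANES)), [2] * LANES)
```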

The tiny processors would talk to their neighbor tiny processors using tiny FIFOs or some other kind of message passing. We use FIFOs to eliminate shared memory. We make the processors tiny because large processors are worthless if they can't process large amounts of data, and large amounts of data mean lots of bandwidth, and lots of bandwidth means memory, and we don't want memory. The advantage over the SIMD VLIW monster is that you run many different threads, which gives more flexibility.
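The tiny-processors option can be sketched as a pipeline of stages connected by FIFOs: each stage only talks to its neighbors through a queue, so there's no shared memory to mismanage. Threads stand in for the tiny processors here, and the stage functions are made up.

```python
import threading
import queue

# Each "tiny processor" runs one small function and communicates
# with its neighbors only through FIFOs.

def stage(fn, in_q, out_q):
    while True:
        msg = in_q.get()
        if msg is None:       # propagate end-of-stream downstream
            out_q.put(None)
            return
        out_q.put(fn(msg))

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    threading.Thread(target=stage, args=(lambda x: x + 1, q0, q1)),
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
]
for t in stages:
    t.start()
for x in [1, 2, 3]:
    q0.put(x)
q0.put(None)

results = []
while (msg := q2.get()) is not None:
    results.append(msg)
for t in stages:
    t.join()
```

Each value flows through both stages in order, so the output is deterministic despite the two threads running concurrently.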

So it's either huge or tiny processors. I'm not going to name any particular architecture, but there were and still are companies working on such things, both start-ups and seasoned vendors. What I'm claiming is that these options provide less performance per square millimeter compared to a multi-core chip. So they can't beat multi-core in the anal-retentive performance-hungry market. Multiple cores and the related memory mismanagement problems are here to stay.

What I'm basically saying is, for every kind of workload, there exists an optimal processor size. (Which finally gets me to the point of this whole thing.) If you put too much stuff into one processor, you won't really be able to use that stuff. If you don't put enough into it, you don't justify the overhead of creating a processor in the first place.

When I think about it, there seems to be no way to define a "processor" in a "universal" way; a processor could be anything, really. Being the die-hard von-Neumann machine devotee that I am, I define a processor as follows: a machine that reads a single stream of instructions and executes them one after another, with each instruction processing data kept in the processor's local memory.

This definition ignores at least two interesting things: that the human brain doesn't work that way, and that you can have hyper-threaded processors. I'll ignore both for now, although I might come back to the second thing some time.

Now, you can play with the "size" of the processor – its instructions can process tiny or huge amounts of data; the local memory/cache size can also vary. However, having an instruction processing kilobytes of data only pays off if you can normally give the processor that much data to multiply. Otherwise, it's just wasted hardware.

In a typical, actually interesting app, there aren't that many places where you need to multiply a zillion adjacent numbers in the same cycle. Sure, your app does need to multiply a zillion numbers per second. But you can rarely arrange the computations in a way that meets the time and location constraints imposed by having just one thread.

I'd guess that people who care about, say, running a website back-end efficiently know exactly what I mean; their data is all intertwined and messy, so SIMD never works for them. However, people working on number crunching generally tend to underestimate the problem. The abstract mental model of their program is usually much more regular and simple in terms of control flow and data access patterns than the actual code.

For example, when you're doing white board run time estimations, you might think of lots of small pieces of data as one big one. It's not at all the same; if you try to convince a huge SIMD array that N small pieces of data are in fact one big one, you'll get the idea.
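A rough measure of the problem: if a 16-lane SIMD unit has to process N small, independent pieces of data, each piece occupies a whole issue slot and the unused lanes are wasted padding. The piece sizes below are made up, purely to show the arithmetic.

```python
# How much of a wide SIMD unit actually does useful work when fed
# small, independent jobs instead of one big regular one.

LANES = 16
pieces = [5, 3, 7, 2, 6]   # sizes of independent small jobs (hypothetical)

# each piece needs at least one full issue slot of LANES lanes
slots = sum((size + LANES - 1) // LANES for size in pieces)
useful = sum(pieces)
utilization = useful / (slots * LANES)
```

Here 23 useful operations occupy 5 slots of 16 lanes each, so utilization is under 30% even though the machine is nominally "fully busy".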

For many apps (and I won't say "most" because I've never counted), data parallelism can only get you so much performance; you'll need task parallelism to get the rest. "Task parallelism" is when you have many processors running many threads, doing unrelated things.
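Task parallelism in this sense looks like the sketch below: several threads doing unrelated things at once, rather than one wide instruction doing the same thing to a lot of data. The two tasks are made up for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Two unrelated tasks running concurrently on separate threads.

def word_count(text):
    return len(text.split())

def checksum(data):
    return sum(data) % 256

with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(word_count, "one huge processor is not enough")
    f2 = pool.submit(checksum, bytes(range(100)))
    words, csum = f1.result(), f2.result()
```

Nothing about these two jobs could be expressed as one SIMD instruction; they share no data and no control flow.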

One instruction stream is not enough, unless your app is really (and not theoretically) very simple and regular. If you have one huge processor, most of it will remain idle most of the time, so you will have wasted space in your chip.

Having ruled out one huge processor, we're left with the other extreme – lots of tiny ones. I think that this can be shown to be inefficient in a relatively intuitive way.

Each time you add a "processor" to your system, you basically add overhead. Reading and decoding instructions and storing intermediate results to local memory is basically overhead. What you really want is to compute, and a processor necessarily includes quite some logic for dispatching computations.

What this means is that if you add a processor, you want it to have enough "meat" for it to be worth the trouble. Lots of tiny processors is like lots of managers, each managing too few employees. I think this argument is fairly intuitive, at least I find it easy to come up with a dumb real-world metaphor for it. The huge processor suffering from "lack of regularity" problems is a bit harder to treat this way.
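The managers-and-employees argument above reduces to simple arithmetic: each processor pays a fixed area overhead for fetch/decode/dispatch, plus whatever area its actual datapath takes. Splitting a fixed compute budget across more, tinier processors shrinks the datapath per processor and the useful fraction of the chip with it. The numbers below are made up.

```python
# Useful fraction of chip area as a function of how many processors
# you split a fixed datapath budget across.

def useful_fraction(total_datapath, n_processors, overhead_per_proc=1.0):
    d = total_datapath / n_processors  # datapath area per processor
    return d / (d + overhead_per_proc)

few = useful_fraction(total_datapath=64, n_processors=4)    # "meaty" cores
many = useful_fraction(total_datapath=64, n_processors=64)  # tiny cores
```

With these (hypothetical) numbers, four meaty cores spend about 94% of their area computing, while sixty-four tiny ones spend only half of it computing and the other half managing.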

Since a processor has an "optimal size", it also has an optimal level of performance. If you want more performance, the optimal way to get it is to add more processors of the same size. And there you have it – your standard, boring multi-core design.

Now, I bet this would be more interesting if I could give figures for this optimal size. I could of course shamelessly weasel out of this one – the optimal size depends on lots of things, for example, the kind of workload you target, the silicon process you manufacture in, and the memory system you can attach.

I could go on and on. However, just for the fun of it, I decided to share my current magic constants with you. In fact there aren't many of them – I think that you can use at most 8-16 of everything. That is:

- at most 8-16 execution units driven by one instruction stream;
- at most 8-16 processors sharing one memory.

Also, 8-16 of any of these is a lot. Many good systems will use, say, 2-4 because their application domain is more like Perl than FFT in some respect, so there's no chance of actually utilizing that many resources in parallel.

I have no evidence that the physical constants of the universe make my magic constants universally true. If you know about a great chip that breaks these "rules", it would be interesting to hear about it.

1. Alex, May 29, 2008

Why is it necessarily so that for a single machine, we're stuck with having threads running in a single process trying their best not to stomp on each other?

Can't one use the page table mechanism to pass ownership of pages between processes as needed? (I'm not sure whether this would qualify as message passing or shared memory, but the effect would be that a single process would have access to a page/pages at any time).

I can't imagine the code to change ownership of a page having a significantly greater performance penalty than taking a lock.

Of course there is no OS support for something like this, but why should we take the current OS model as dogma and not look for improvements?

2. Yossi Kreinin, May 31, 2008

I think it would basically be shared memory, with the twist of making pages "owned" by one of the processors.

Basically the benefit of explicit memory sharing between processes is that you don't have race conditions in places you didn't even think about, and the problem is that it makes sharing memory harder. I think that adding ownership transfer would make both the benefit and the problem bigger (you have even fewer bugs due to poorly thought-out code and even more coding overhead for sharing memory).

3. Antiguru, Nov 6, 2009

Hmm. Well, I have done quite a lot of this and it seems to be an area I do well in. The only issue is between environments: every time you move to a new one you have to 'solve' it before anything works at all, but then it's fine. The biggest problem is you always WORRY that it's the MT aspect causing trouble in some way you don't quite understand, but 99.9% of the time it's not the issue.

The multicore aspect is not a problem, except maybe it might hide some bugs. Fortunately for PCs the memory model is forgiving. Fast processors are nice but I think the issue is they can't make them any faster any more, as far as clock speeds go. The extra core is really just some special cache to make for quick context switching so you get a lot of bang for almost no buck in that respect.

Not sure why you don't like shared memory, it's the best way to do virtually everything MT. Message passing generally introduces a lot of issues, such as not working most of the time or stopping working every time you change platform or compiler or hardware.

The shared memory stuff is complex but that's on the hardware maker, i.e. Intel. And when you break it down it's all simple parts, basically cache layers that ultimately figure out what's what in the proper way and reset things if they need to be reset. That part Intel does pretty well; their TBB nonsense is an utter joke of course.

When you look at processors, the reason tiny ones make sense is this: the actual processor part that DOES anything is already tiny. All the other crap is just there to keep it fed at all times. They've been doing MT for years in a very half-assed way, i.e. instruction pipelines etc. That's 90% of the processor.

So if you can get 10 processors in the same place, suddenly the overhead of wiring them together makes sense. I mean, they have 480 (powerful) processors on newer graphics cards. Larrabee will have 32 or 64.

Even if you can't parallelize things at all, every PC has dozens of crap processes running at all times nowadays. None of them requires tons of processing, but if the context switching and scheduling can be taken off the CPU and put onto side CPUs you get a big improvement in usability.

For things like rendering, skinning, maya, blah blah blah you get 100% parallelization. Unless the programmers are morons.

So, it's basically bound for lots of tiny processors. Nothing else makes a ton of sense.

4. Antiguru, Nov 6, 2009

And as for a magic constant, there isn't one. It's all down to your parallelization skill.

The reason people get choked after N, where N is some tiny number, is because they use locks or they preprocess huge amounts in just one thread, i.e. have no clue what they are doing.

Not everything can be properly broken up but for things that can be all you have to do is put jobs on the queue with various index ranges you want computed.

5. Yossi Kreinin, Nov 7, 2009

Actually if you look at Nvidia's machines, you'll see constants around 8, 16 or 32. The hundreds of cores they cite (between 256 and 512) are really tens of cores, each with a SIMD array. They call it SIMT and in fact it isn't quite the traditional SIMD, but it's still a single instruction dispatched to 16 execution units.

Regarding shared memory – its assessment will depend on how many dozens of developers are using it in how many hundreds of thousands of lines of code.

6. Yossi Kreinin, Nov 7, 2009

...Also, Nvidia has ~6 external DRAMs in its high-end variants so the 32 SIMT cores are actually <6 cores per external memory module. I still believe the constants above have some magic to them.

7. Antiguru, Dec 3, 2009

Ah, I see what you are saying about that now.

8. he/she, Sep 13, 2010

you know what

9. Yossi Kreinin, Sep 13, 2010

Actually, I don't. Is this some kind of spam for spam's sake, like those messages with the subject "qz" and the body "ygdf4c" that I sometimes get – sneak past the spam filter but don't seem to yield any benefits for the spammer except satisfaction at having wasted someone's time?
