The Algorithmic Virtual Machine

There's a very influential platform called the AVM, which stands for Algorithmic Virtual Machine. It's the imaginary device people use as their mental model of a computer. In particular, it's used by many people working on algorithms where performance matters. Performance matters in many different contexts, ranging from huge clusters processing astronomical amounts of data to modest applications running on pathetically weak hardware. However, I believe that the core architecture of the AVM is basically the same everywhere.

AVM application development is done using the ubiquitous AVM SDK – a whiteboard and a couple of hands for handwaving. An AVM application consists of a set of operations your algorithm needs executed. Each operation has a cost (typically one cycle, sometimes more). You can then estimate the run time of your algorithm by the clever technique of summing the cost of all operations.

These estimations are never close enough to the real run time. The definition of "close enough" varies; the quality of the estimations, by and large, doesn't. That is, I claim that your handwavy AVM-derived estimation will fail to meet your precision requirements no matter what those requirements are. Apparently our tolerance for errors grows with our lack of understanding of the problem, but it never grows enough. I'm not really sure about that theory, though; I'm only sure about the AVM-estimations-suck part. Here's why.

The AVM is basically this imaginary machine that runs "operations". Here are some things that real machines must do, but the AVM doesn't:

  • Fetching instructions
  • Fetching operands
  • Testing for conditions
  • Storing results

Basically, the Algorithmic Virtual Machine developers concentrate on "operations" and ignore addressing, branches, caches, buses, registers, pipelines, and all those other gadgets which are needed in order to dispatch the operation. In fact, that's how I currently distinguish between people who write software to get a job done and people who think of software as their job. "People who program" are into operations (algebra, networking, AI); "programmers" are into dispatching (programming languages, operating systems, OO). This is about mental focus rather than aptitude. I haven't noticed that people of either group are inherently less productive than the other kind.

When they're after performance, the "operations" people will naturally look for a way to reduce the number of operations. Sometimes, they'll find an algorithm with a better asymptotic complexity – O(N+M) instead of O(N*M). At other times, they'll come up with a way to perform 4*N*M operations instead of 16*N*M. Both results are very significant – if M and N are the only variables. The trouble is that you can't see all the variables if you just look at the math (as in "we want to multiply and sum all these and then compare to that"). That way, you assume that you run on the AVM and leave out all the dispatching-related variables and get the wrong answer.

Is there a way to take the cost of dispatching into account? Not really, not without implementing your algorithm and measuring its performance. However, families of machines do have related sets of heuristics that can be used to guess the cost of running on them. For example, here are a couple of heuristics that I use for SIMD machines (they are relevant elsewhere, but their relative importance may drop):

  1. Bandwidth is costly.
  2. Addressing is costly.

These heuristics are vague, and I don't see a very good way to make them formal. Perhaps there isn't any. To show that my points have any formal significance, I'd have to formally prove that there's unavoidable intrinsic cost to some things no matter how you build your hardware. And I don't know how to go about that. So what I'll do is I'll give some examples to show what I mean, and leave it there.

Bandwidth

Consider two "algorithms" (probably too fancy a name in this context): computing dot product, and computing its partial sums (Matlab: sum(a .* b) and cumsum(a .* b)). Exactly the same amount of "operations" – N multiplications and N additions. Many people with BA, MSc and PhD degrees in CS assume that the run time is going to be the same, too. It won't, because sum only produces one output, and cumsum produces N outputs. Worse, if the input vector elements are 8-bit integers, we probably need at least 32 bits for each output element. So we generate N*4 bytes of output from N*2 input bytes.

At this point, some people will say "Yeah, memory. Processors are fast, memories are slow, sure, memory is a problem". But it isn't just about the memory; memory bandwidth is just one kind of bandwidth. Let's look at the non-memory problems of the partial-sums-of-dot-product algorithm. On the way, I'll try to show how the "bandwidth costs" heuristic can be used to guess what your hardware can do and what the performance will be.

Consider a machine with a SIMD instruction set. Most likely, the machine has registers of fixed width (say, 16 bytes), and each instruction gets 2 inputs and produces 1 output. Why? Well, the hardware ought to support 2 inputs and 1 output to do basic math. Now, if it also wants to have an instruction that produces, say, 4 outputs, it needs 3 additional output buses from the data processing units to the register file. It also needs a multiplexer so that each of the 4 outputs can be routed to any of its N registers (N can be 16 or 32 or even 128). The cost of such multiplexers is, roughly, O(M*N), where M is the number of values being routed and N is the number of possible destinations. That's awfully costly. Bandwidth costs. So they probably use 2 inputs and 1 output everywhere.

Now, suppose the machine has 16 multipliers, which is quite likely – 1 multiplier for each register byte, so we can multiply 16 pairs of bytes simultaneously. Does this mean that we can then take those 16 products and compute 16 new partial sums, all in the same cycle? Nope, because, among other things, we'd need a command producing 16×4 bytes of output to do that, and that's too much bandwidth. Are we likely to have a command that updates fewer than 16 accumulators? Yes, because that would speed up dot products, and dot products are very important; let's look at the manual.

You're likely to find a command updating – guess how many? – 4 accumulators (32 bits times 4 equals 16 bytes, that's exactly one machine register). If the register size is 8 bytes, you'll probably get a command updating 2 accumulators, and so on. Sometimes the machine uses "register pairs" for output; that doubles the register size for output bandwidth calculation purposes. The bottom line is that instruction set extensions can speed up dot product to an extent impossible for its partial sums. You might have noticed another problem here, that of the dependency of a partial sum on the previous partial sum. Removing this dependency doesn't solve the bandwidth problem. For example, consider the vertical projection of point-wise multiplication of 2 8-bit images, which has the same not-enough-accumulators problem.
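
To make the 4-accumulator case concrete, x86 SSE2 can stand in for such a machine: _mm_madd_epi16 multiplies 8 pairs of 16-bit elements and sums adjacent products, so one 16-byte register holds exactly 4 running 32-bit sums. The function around it is just a sketch (16-bit inputs and n divisible by 8 assumed):

    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdint.h>

    int32_t dot16(const int16_t* a, const int16_t* b, int n)
    {
        __m128i acc = _mm_setzero_si128();           /* 4 x 32-bit accumulators */
        for (int i = 0; i < n; i += 8) {
            __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
            /* 8 multiplies plus adjacent additions, folded into 4 accumulators */
            acc = _mm_add_epi32(acc, _mm_madd_epi16(va, vb));
        }
        int32_t lanes[4];
        _mm_storeu_si128((__m128i*)lanes, acc);      /* horizontal sum at the end */
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }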

There is little you can do about the bandwidth problem in the partial sums case – the algorithm is I/O bound. Some algorithms aren't, so you can optimize them to minimize the cost of bandwidth. For example, matrix multiplication is essentially lots of dot products. If you do those dot products straightforwardly, you'll have a loop spending 2 commands for loading the matrix elements into registers, and one command for multiplying and accumulating (MAC). 2 loads per MAC means an overhead of 200%.

However, you can work on blocks – 4 rows of matrix A and 4 columns of matrix B, and compute the 4×4=16 dot products in your loop. That's 4+4=8 loads per 16 MACs; the overhead dropped to 50%. If you have enough registers to do this. And it's still quite impressive overhead, isn't it? Your typical AVM user would be very disappointed. (Yes, some machines can parallelize the loads and the MACs, but some can't, and it's a toy example, and stop nitpicking). BTW, blocking can be used to save loads from main memory to cache just like we've used it to save loads from cache to registers.
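
Here's the 4×4 blocking as a plain scalar C sketch (a hypothetical routine; row-major matrices, dimensions divisible by 4 and "enough registers" are all assumed). Each iteration does 4+4=8 element loads and 16 multiply-accumulates, versus 2 loads per MAC in the straightforward loop:

    /* Accumulate the 4x4 block of C at (i0, j0) for n-by-n matrices. */
    void matmul_4x4_block(const float* A, const float* B, float* C,
                          int n, int i0, int j0)
    {
        float c[4][4] = {{0}};
        for (int k = 0; k < n; ++k) {
            /* 4 loads from the A block, 4 loads from the B block */
            float a0 = A[(i0+0)*n + k], a1 = A[(i0+1)*n + k],
                  a2 = A[(i0+2)*n + k], a3 = A[(i0+3)*n + k];
            float b0 = B[k*n + j0+0], b1 = B[k*n + j0+1],
                  b2 = B[k*n + j0+2], b3 = B[k*n + j0+3];
            /* 16 MACs */
            c[0][0] += a0*b0; c[0][1] += a0*b1; c[0][2] += a0*b2; c[0][3] += a0*b3;
            c[1][0] += a1*b0; c[1][1] += a1*b1; c[1][2] += a1*b2; c[1][3] += a1*b3;
            c[2][0] += a2*b0; c[2][1] += a2*b1; c[2][2] += a2*b2; c[2][3] += a2*b3;
            c[3][0] += a3*b0; c[3][1] += a3*b1; c[3][2] += a3*b2; c[3][3] += a3*b3;
        }
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                C[(i0+i)*n + j0+j] = c[i][j];
    }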

OK. With partial sums of dot product, the bandwidth problem kills performance, and with matrix multiplication, it doesn't. What about convolution, which is about as basic as our previous examples? Gee, I really don't know. It's tricky, because with convolution, you need to store intermediate results somewhere, and it's unclear how many of them you're going to need. The optimal implementation depends on the quirks of the data processing units, the I/O, and the filter size. If you come across a benchmark showing the performance of convolution on some machine, you'll probably find interesting variations caused by the filter size.

So we have a bread-and-butter algorithm with non-trivial, non-portable performance characteristics. I think that's one indication that your own, less straightforward algorithm will also perform somewhat unpredictably – unless you know a specific reason to expect otherwise.

Addressing

Bandwidth is one problem with fetching operands and storing results. Another problem is figuring out where they go. In the case of registers, we have costly multiplexers for selecting the source and destination registers of instructions. In the case of memory, we have addresses. Computing addresses has a cost. Reading data from those addresses also has a cost. Some address sequences are costlier than others from one of these perspectives, or both.

The dumbest example is the misalignment problem. People who learned C on x86 are sometimes annoyed when they meet a PowerPC or an ARM or almost any other processor since it won't read a 32-bit integer from a misaligned address. So when you read a binary buffer from a file or a socket, you can't just cast the char* to an int* and expect it to work. Isn't it nice of x86 to properly handle these cases?

Maybe it's nice, maybe it isn't (at least if it failed, the code would be fixed to become legal C), but it sure is costly. The fact that it's "in the hardware" doesn't make it a single-cycle operation. If your address is misaligned, the 32 bits may reside in two different memory words (no matter what the word size is). The hardware will have to read the low word, and then read the high word, and then take the high bits of the low word and the low bits of the high word and make a single 32-bit value out of them. Because in one cycle, memories can only fetch one word from an aligned address.
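
In C terms, a minimal sketch (little-endian and 4-byte words assumed; the function names are mine): memcpy is the portable way to spell the read, and the second function spells out the two-aligned-loads-and-merge that somebody – hardware or software – ends up doing when the address is misaligned:

    #include <stdint.h>
    #include <string.h>

    /* Portable misaligned read: becomes a single load where the target allows
       it, and something costlier where it doesn't. */
    uint32_t read_u32(const char* p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);
        return v;
    }

    /* The costlier path, spelled out: two aligned loads, then shift and merge. */
    uint32_t read_u32_merged(const uint32_t* base, unsigned byte_offset)
    {
        uint32_t lo = base[byte_offset / 4];
        uint32_t hi = base[byte_offset / 4 + 1];
        unsigned  sh = (byte_offset % 4) * 8;
        return sh ? (lo >> sh) | (hi << (32 - sh)) : lo;
    }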

Does it matter outside of I/O-related code using illegal pointer-casting? Consider the prosaic algorithm of computing the first derivative of a vector, spelled v(2:end)-v(1:end-1) in Matlab. If we run on a SIMD machine, we could execute several subtractions simultaneously. In order to do that, we need to fetch a word containing v[0]…v[15] and a word containing v[1]…v[16] (both zero-based). But the second word is misaligned. The handling of misalignment will have a cost, whether it's done in hardware or in software.
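
Again with SSE2 as the stand-in SIMD machine, a sketch (hypothetical function; the last partial chunk needs a scalar tail, and 8-bit wraparound is ignored for brevity). One of the two loads per iteration is necessarily misaligned, and that's where the extra cost hides, whether it's a dedicated unaligned-load instruction or a shuffle of two aligned loads:

    #include <emmintrin.h>
    #include <stdint.h>

    /* d[i] = v[i+1] - v[i], 16 elements at a time */
    void first_derivative(const uint8_t* v, int8_t* d, int n)
    {
        for (int i = 0; i + 16 < n; i += 16) {
            __m128i cur  = _mm_loadu_si128((const __m128i*)(v + i));
            __m128i next = _mm_loadu_si128((const __m128i*)(v + i + 1)); /* misaligned */
            _mm_storeu_si128((__m128i*)(d + i), _mm_sub_epi8(next, cur));
        }
    }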

Well, at least the operands of subtraction live in subsequent addresses – 0,1,2…15 and 1,2,3…16. That's how data processing units like them: you read a pack of numbers from memory and feed them right into the array of adders, ready to crunch them. It's not always like that. Consider scaling: a(x) = b(s*x+t). This can be used to resize images (handy), or to play records at a different speed the way you'd do with a tape recorder (less handy, unless you like squeaky or growly voices).

Now, if s isn't integral (say, s=0.6), you'd have to fetch data from places such as s*x+t = 1.3, 1.9, 2.5, 3.1, 3.7... Suppose you want to use linear interpolation to approximate a(1.3) as a(1)*0.7+a(2)*0.3. So now we need to multiply the vector of "low" elements – a([1,1,2...]) – by the vector of weights – [0.7,0.1,0.5...] – and add the result to the similar product a([2,2,3...])*[0.3,0.9,0.5...]. The multiplications and the additions map nicely to SIMD instruction sets; the indexing doesn't, because you have those weird jumpy indexes. So this time, the addressing can become a real bottleneck because it can prevent you from using SIMD instructions altogether and serialize your entire computation.
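
As a scalar C sketch (hypothetical function, no bounds checking): the multiplies and additions below are exactly what SIMD units are built for, while the b[ix] and b[ix+1] fetches are the jumpy indexes that plain SIMD loads can't express:

    #include <math.h>

    /* a[x] = b[s*x+t] with linear interpolation, e.g. pos=1.3 gives
       b[1]*0.7 + b[2]*0.3 */
    void scale(const float* b, float* a, int out_len, float s, float t)
    {
        for (int x = 0; x < out_len; ++x) {
            float pos = s * x + t;
            int   ix  = (int)floorf(pos);
            float w   = pos - ix;
            a[x] = b[ix] * (1.0f - w) + b[ix + 1] * w;
        }
    }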

Well, at least we access adjacent elements. This means that most memory accesses will hit the cache. When you bump into an element that isn't cached yet, the machine will bring a whole cache line (say, 32 bytes), and then you'll read the other elements in that cache line, so it will pay off. You can even issue cache prefetching instructions so that while you're working on the current cache line, the machine will read the next one in the background. That way, you'll hit the cache all the time, instead of having your processor repeatedly surprised (hey, I don't have a(32) in the cache!.. hey, I don't have a(64) in the cache!.. hey, I don't have…). Avoiding the regularly scheduled surprise can be really beneficial, although cache prefetching is truly disgusting (it's basically a very finicky kind of cooperative multi-tasking – you ought to stuff the prefetching commands into the exactly right spots in your code).
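
For illustration, here's what that looks like with the __builtin_prefetch hint available in GCC and Clang (a sketch; the 64-byte line size and the one-line-ahead distance are assumptions, and tuning that distance is precisely the finicky part):

    /* Sum p[0..n-1], asking for the next cache line while working on the
       current one (64-byte lines assumed). */
    long sum_with_prefetch(const unsigned char* p, long n)
    {
        long s = 0;
        for (long i = 0; i < n; ++i) {
            if ((i & 63) == 0)
                __builtin_prefetch(p + i + 64);
            s += p[i];
        }
        return s;
    }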

Now, consider a(x) = b(f(x)) – a generic transformation of an input vector given a function for computing the input coordinate from the output coordinate. We have no idea what the next address is going to be, do we? If the transformation is complicated enough, we're going to miss the cache a lot. By the way, if the transformation is in fact simple, and the compiler knows the transformation at compile time, the compiler is still very unlikely to generate optimal cache prefetching commands. Which is one of the gazillion differences between C++ templates and "machine-optimal" code.

DVMs and TVMs

My bandwidth and addressing heuristics don't model a real machine; they only model an upgrade to the AVM for SIMD machines. Multi-box computing is one example of an entire universe of considerations they fail to model. So what we got is a DVM – Domain-specific Virtual Machine.

Now, in order to estimate performance without measuring (which is necessary when you choose your optimizations – you just can't try all the different options), I recommend a TVM (Target-specific Virtual Machine). You get one as follows. You start with the AVM. This gives overly optimistic performance estimations. You then add the features needed to get a DVM. This gives overly pessimistic estimations.

Then, you ask some low-level-loving person: "What are the coolest features of this machine that other machines don't have?" This will give you the capabilities that the real processor has but its DVM doesn't have. For example, PowerPC with AltiVec extensions is basically a standard SIMD DVM plus vec_perm. I won't talk about vec_perm very much, but if you ever need to optimize for AltiVec, this is the one instruction you want to remember. It solves the indexing problem in the scaling example above, among other things. Using a SIMD DVM and forgetting about vec_perm would make AltiVec look worse than it really is, and some algorithms much more costly than they really are.
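
To make it concrete, here's the classic vec_perm idiom as a sketch (GCC with AltiVec support assumed; the function name is mine): the instruction selects arbitrary bytes from the 32-byte concatenation of two registers, which is how unaligned loads are done on AltiVec and, with computed patterns, how you can approach the jumpy gathers from the scaling example:

    #include <altivec.h>

    /* Extract v[1..16] from two aligned 16-byte loads: the pattern picks
       bytes 1..16 out of the 32 bytes in lo:hi. */
    vector unsigned char load_off_by_one(const unsigned char* v)
    {
        vector unsigned char lo  = vec_ld(0,  v);   /* v[0..15]  */
        vector unsigned char hi  = vec_ld(16, v);   /* v[16..31] */
        vector unsigned char idx = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16};
        return vec_perm(lo, hi, idx);
    }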

And this is how you get a TVM for your platform. The resulting mental model gives you a fairly realistic picture, second only to reading the entire manual and understanding the interactions of all the features (not that easy). And it definitely beats the AVM by… how do you estimate the quality of handwaving? OK, it beats the AVM by a factor of 5, on average. What, you want a proof? Just watch the hands go.

Comments

#1 queisser on 02.13.08 at 9:21 am

Not exactly sure what you're point is other than "optimization is hard and architecture specific." I think a good exercise for programmers who want to get better at writing speed optimized code is going through a hand-optimized DSP algorithm, say a convolution or FFT and figuring out why the vendor's application engineers ordered the instructions the way they did or which data they copy into internal RAM or how they use the DMA engine the way they do. DSPs make a good vehicle for this learning because they tend to be fast but also cheap so they put a lot of burden on the programmer.

#2 Yossi Kreinin on 02.13.08 at 10:56 am

"Not exactly sure what my point is?" Damn, I suck. When a text is any good, everybody knows what it meant, even if they didn't understand any of it…

One thing I wanted to say was that lots of otherwise competent people use a broken mental model for machines with complete confidence in it. Another thing was that this model could be easily improved to become pretty realistic.

DSPs are a good practice. I'd say that understanding of the C6000 qualifies as "understanding hardware" for many purposes.

#3 queisser on 02.13.08 at 12:29 pm

Ah, C6000 was actually the DSP I was thinking of!

I liked your text a lot, by the way, just wasn't sure what you were advocating. I just reread the last section and it's clearer now.

#4 Yossi Kreinin on 02.13.08 at 12:40 pm

Everybody loves the C6000…

The thing that I firmly believe the C6000 gets right is that a processor wants to be a SIMD VLIW machine. At least when you do hardware design, and when you do compilers, it flows really well. So well that there are actual make-your-own-C6000 companies (Tensilica, CoWare…). Not the way to go, if you ask me, but that's another matter.

#5 lorg on 02.14.08 at 6:02 am

Thanks for the excellent reading.

It should be noted, though, that many of the virtual machines that algorithms are designed on may be very well defined. For instance, in my early university days, I learned data structures on a model called the 'RAM machine', which was defined with just 'a little' hand waving :)

#7 geek42 on 03.08.12 at 11:26 pm

I hope the TVM could be customized easily.
