Entries Tagged 'software'

If a tree falls in a forest, it kills Schrödinger's cat

Schrödinger used to have this quantum cat which was alive and dead at the same time as long as nobody opened the box, and it was the very act of looking at the cat that made it either alive or dead. Now, I'm not sure about this quantum stuff, but if you ask me you'd always find a dead cat upon opening the box, killed by the act of not looking. In fact, if you open any random box nobody was looking into, chances are you'll find a dead cat there. Let me give an example.

I recently chatted with a former employee of a late company I'll call Excellence (the real name was even worse). Excellence was a company with offices right across the street that kept shrinking until the recent financial crisis. It then had to simultaneously fire almost all of its remaining employees, carefully selected as its best during the previous years when other employees were continuously fired at a lower rate. This gave us a whole lot of great hires, including MV, the co-worker in this story (though he was among those who guessed where things were going and crossed the street a bit earlier).

Rumor has it that, to the extent possible, Excellence used to live up to the expectations created by its name. In particular, without being encouraged or forced by customers to comply with any Software Development Methodology such as the mighty and dreadful CMM, they had (as CMM puts it) not only Established, but rigorously Maintained an elaborate design, documentation and review process which preceded every coding effort. Other than praise, MV had little to say about the process, except perhaps that occasionally someone would invent something awfully complicated that made code incomprehensible, having gone just fine through the review process because it sounded too smart for anyone to object.

Now, in our latest conversation about how things were at Excellence, MV told how he once had to debug a problem in a core module of theirs, to which no changes had been made in years. There, he stumbled upon a loop searching for a value. He noticed that when the value was found, the loop wouldn't terminate – a continue instead of a break kind of thing. Since the right value tended to be found pretty early in the loop, and because the loop sat in such a strategic place, test cases everyone was running took minutes instead of seconds to finish. Here's a dead cat in a gold-plated box for ya, and one buried quite deeply.

My own professional evolution shaped my mind in such a way that it didn't surprise me in the slightest that this slipped past the reviewer(s). What surprised me was how this slipped past the jittery bodybuilder. You see, we have this Integration Manager, one of his hobbies being bodybuilding (a detail unrelated, though not entirely, to his success in his work), and one thing he does after integrating changes is he looks at the frame rate. When the frame rate seems low, he pops up a window with the execution profile, where the program is split into about 1000 parts. If your part looks heavier than usual, or if it's something new that looks heavy compared to the rest, it'll set him off.

So I asked MV how come the cat, long before it got dead and buried, didn't set off the jittery bodybuilder. He said they didn't have one for it to set off. They were translating between the formats of different programs. Not that performance didn't matter – they worked on quite large data sets. But to the end user, automatic translation taking hours had about the same value as automatic translation taking seconds – the alternative was manual translation taking weeks or months. So they took the large-scale performance implications of their decisions into account during design reviews. Then once the code was done and tested, it was done right, so if it took N cycles to run, it was because it took N cycles to do whatever it did right.

And really, what is the chance that the code does everything right according to a detailed spec it is tested against, but there's a silly bug causing it to do it awfully slowly? If you ask me – the chance is very high, and more generally:

  • Though not looking at performance followed from a reasonable assessment of the situation,
  • Performance was bad, and bad enough to become an issue (though an unacknowledged one), when it wasn't looked at,
  • Although the system in general was certainly "looked at", apparently by more eyes and from more angles than "an average system", it didn't help,
  • So either you have a jittery bodybuilder specifically and continuously eyeballing something, or that something sucks.

Of course you can save effort using jittery automated test programs. For example, we've been running a regression testing system for about a year. I recently decided to look at what's running through it, beyond the stuff it reports as errors that ought to be fixed (in this system we try to avoid false positives to the point of tolerating some of the false negatives, so it doesn't complain loudly about every possible problem). I found that:

  • It frequently ran out of disk space. It was OK for it to run out of disk space at times, but it did it way too often now. That's because its way of finding out the free space on the various partitions was obsoleted by the move of the relevant directories to network-attached storage.
  • On some of the machines, it failed to get licenses to one of the compilers it needed – perhaps because the env vars were set to good values for most users but not all, perhaps because of a compiler upgrade it didn't take into account. [It was OK for it to occasionally fail to get a license (those are scarce) - then it should have retried, and in the worst case reported a license error. However, the compiler error messages it got were new to it, so it thought something just didn't compile. It then ignored the problem on otherwise good grounds.]
  • Its way of figuring out file names from module names failed for one module which was partially renamed recently (don't ask). Through an elaborate path this resulted in tolerating false negatives it really shouldn't.

And I'm not through with this thing yet, which to me exemplifies the sad truth that while you can have a cat looking at other cats to prevent them from dying, a human still has to look at that supervisor cat, or else it dies, leading to the eventual demise of the others.

If you don't look at a program, it rots. If you open a box, there's a dead cat in it. And if a tree falls in a forest and no one is around to hear it, it sucks.

Getting the call stack without a frame pointer

Everything I know about getting the current call stack of C or C++ programs, including ones compiled with -fomit-frame-pointer or an equivalent, with or without a debugger. Hardly entertaining except for those specifically wishing to lay their hands on call stacks.

We'll start with trivia people sharing my unhealthy interest in call stacks probably know. There are two contexts where you might want to get a call stack:

  1. Look at the call stack in a debugger to see what the program is doing.
  2. Get a representation of the call stack inside the program itself. For example, a memory profiler might want to attach the call stack identifying the context where the allocation is made to each memory block to see who allocates the most.

Sometimes (2) can be implemented using (1) by running a debugger programmatically and asking it for the current call stack, and sometimes it can't (too much communication overhead, or no debugger available – for example, a program stripped of debugging information and running on a standalone embedded board).

The straightforward way of getting the current call stack, used both by debuggers and by programs curious about their own stacks, relies on the frame pointer. The idea is that a machine register is reserved for keeping a pointer into the stack, called the frame pointer, and every function is compiled to do the following in its prologue:

  • Push the return address to the stack
  • Push the frame pointer to the stack
  • Save the address of the resulting two-pointer structure to the frame pointer register

This creates a linked list on the stack, with every node keeping a return address – this list is the call stack (and this is why debuggers show the points of return from function calls and not the points of call, a bit annoyingly for function calls spanning many source code lines – in fact we get the return stack and not the call stack). Here's how you get this list from within a program:

struct stack_frame {
  struct stack_frame* next;
  void* ret;
};
int get_call_stack(void** retaddrs, int max_size) {
  /* x86/gcc-specific: this tells gcc that the fp
     variable should be an alias to the %ebp register
     which keeps the frame pointer */
  register struct stack_frame* fp asm("ebp");
  /* the rest just walks through the linked list */
  struct stack_frame* frame = fp;
  int i = 0;
  while(frame) {
    if(i < max_size) {
      retaddrs[i++] = frame->ret;
    }
    frame = frame->next;
  }
  return i;
}

The code for getting the list head pointer depends on the platform; the list structure itself is common to many machines and compilers. The return addresses may be converted to function names and source line numbers with addr2line -f or similar. When the program can't access its own debug info during its execution, as in the case of embedded devices, the translation from addresses to names will be a separate offline step.

The whole frame pointer business is fairly widely documented, with a bunch of source code available centered around getting the call stack that way. I think the GNU backtrace function and the Windows StackWalk64 function, which these days are probably a better alternative than code snippets like the one above, also use this linked list when available.
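
For what it's worth, here's roughly what using the GNU interface looks like: a minimal sketch assuming glibc's <execinfo.h> (link with -rdynamic if you want backtrace_symbols to show the names of non-static functions).

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

void print_current_stack(void) {
  void* addrs[64];
  int n, i;
  /* fill addrs with the return addresses of the current call stack */
  n = backtrace(addrs, 64);
  /* translate them to printable strings (symbol + offset where known) */
  char** names = backtrace_symbols(addrs, n);
  if(names) {
    for(i = 0; i < n; ++i) {
      printf("%s\n", names[i]);
    }
    free(names);
  }
}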

Now, what happens if the compiler is told to avoid generating the code maintaining the frame pointer (-fomit-frame-pointer for gcc, /Oy for MSVC), or not told to override its default behavior of not generating such code (-ga for Green Hills C++)?

Admittedly it doesn't appear to be such a good idea to make debugging that much harder in order to save a few instructions. However, there are reasons to do so. For example, one common consequence of Software Design is lots of functions doing little or nothing except for delegating their work to another function. Without frame pointer maintenance, such a call is just a jump instruction – your callee function will return directly to the address saved by your caller. With frame pointers, you need a whole prologue and epilogue here. We won't discuss the benefits of frame pointer omission further, though, since measuring the overhead for your particular code will be more reliable than such a discussion anyway.

Compiling without frame pointers hits code trying to obtain its own context harder than it hits debuggers, because debuggers don't really need frame pointers and only (sometimes) rely on them for simplicity of implementation. Given a return address, a debugger can tell:

  1. Which function it belongs to (unlike the program itself, a debugger is necessarily supposed to have access to the symbol table)
  2. Where that function keeps the return address (the compiler knows that, so it can tell the debugger)
  3. The amount by which the function decrements the stack pointer, assuming the stack grows downwards – again something the compiler knows. Now that the debugger knows the previous return address and the previous stack pointer, it can go back to step 1.
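
In code, the debugger's loop looks roughly like this; it's a sketch assuming a hypothetical per-function table supplied by the compiler (real formats such as DWARF call frame information are considerably more involved), with find_record standing in for the table lookup of step 1:

struct unwind_record {
  void* func_start; /* first address of the function */
  void* func_end;   /* one past its last address */
  int frame_size;   /* bytes subtracted from the stack pointer in the prologue */
  int ra_offset;    /* where the return address is kept, relative to the new sp */
};

/* assumed to look up the record covering a code address (step 1) */
const struct unwind_record* find_record(void* addr);

int unwind(void** retaddrs, int max_size, void* sp, void* ra) {
  int i = 0;
  const struct unwind_record* rec;
  while(i < max_size && (rec = find_record(ra)) != 0) {
    retaddrs[i++] = ra;
    /* step 2: fetch the previous return address from where this function saved it */
    ra = *(void**)((char*)sp + rec->ra_offset);
    /* step 3: undo the stack pointer decrement and go back to step 1 */
    sp = (char*)sp + rec->frame_size;
  }
  return i;
}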

So a debugger can do just fine without frame pointers as long as the compiler gives it enough information about the layout of the stack. I've been debugging without frame pointers for a long time with the Green Hills MULTI debugger which uses a proprietary debug info format. More recently the DWARF format, gcc and gdb seem to have caught up and now programs compiled with -fasynchronous-unwind-tables -fomit-frame-pointer are debuggable with gdb. The information generated by -fasynchronous-unwind-tables seems to go to a separate ELF section called .eh_frame_hdr.

Not only will gdb use .eh_frame_hdr, but the GNU backtrace function appears to be using it as well (it doesn't work under -fomit-frame-pointer but apparently does work when you add -fasynchronous-unwind-tables – although the docs explicitly say: "frame pointer elimination will stop backtrace from interpreting the stack contents correctly"). Nor is this section stripped from the program – it's not implemented as a "normal" debug information section but as an allocated data section, so it's always available to a program (in particular, to the backtrace function).

So under gcc, all call stack problems seem to be solved – unless you trust the docs (!?), or unless some code isn't compiled with the right flags because of not being up to date or someone being too greedy to allocate space for a debug info section. Outside gcc, or more precisely DWARF, I don't think a stripped program can access such debug info.

Is there a way to get a call stack without a frame pointer, without a debugger and without debug info?

For years I was sure that the answer was "no", hence some things will only work under a separate build mode – just like the release build but with frame pointers. Then one time the Green Hills debugger failed to list the call stack for some reason as it sometimes does, but this time we really wanted to decipher it. And we figured that we can in fact do the same thing the debugger does, except we'll understand from the assembly code what the debugger usually understood from debug information.

Specifically, to understand where the return address is kept and by what amount the stack pointer is decremented, you need to find the instructions doing (or undoing) these things in the prologue (or the epilogue) of the function. This worked, but due to either inertia or stupidity it took me months to realize that you can write code doing this. Anyway, here's how it works on a 32b MIPS processor under the Green Hills compiler. The prologue code of a function will contain instructions like these:

main+0: 27bdffe8 addiu sp, sp, -0x18
main+4: afbf0014 sw    r31, 0x14(sp)

The add immediate instruction decrements the stack pointer, and the store word instruction saves the return address from the register where it's saved by the caller to some place on the stack. The high 16 bits of these instructions don't depend on the function, encoding the "addiu sp, sp" and the "sw r31, …(sp)" parts. The low 16 bits encode a signed offset. So we can obtain the call stack from our code disassembling it thusly:

/* get previous stack pointer and return address
   given the current ones */
int get_prev_sp_ra(void** prev_sp, void** prev_ra,
                   void* sp, void* ra) {
  unsigned* wra = (unsigned*)ra;
  int spofft;
  /* scan towards the beginning of the function -
     addui sp,sp,spofft should be the first command */
  while((*wra >> 16) != 0x27bd) {
    /* test for "scanned too much" elided */
    wra--;
  }
  spofft = ((int)*wra << 16) >> 16; /* sign-extend */
  *prev_sp = (char*)sp - spofft;
  /* now scan forward for sw r31,raofft(sp) */
  while(wra < (unsigned*)ra) {
    if((*wra >> 16) == 0xafbf) {
      int raofft = ((int)*wra << 16) >> 16; /* sign */
      *prev_ra = *(void**)((char*)sp + raofft);
      return 1;
    }
    wra++;
  }
  return 0; /* failed to find where ra is saved */
}

The call stack will then be produced by the following loop:

int get_call_stack_no_fp(void** retaddrs, int max_size) {
  void* sp = get_sp(); /* stack pointer register */
  void* ra = get_ra(); /* return address register */
  /* adjust sp by the offset by which this function
     has just decremented it */
  int* funcbase = (int*)(int)&get_call_stack_no_fp;
  /* funcbase points to an addiu sp,sp,spofft command */
  int spofft = (*funcbase << 16) >> 16; /* 16 LSBs */
  int i=0;
  sp = (char*)sp-spofft;
  do {
    if(i < max_size) {
      retaddrs[i++] = ra;
    }
  }
  while(get_prev_sp_ra(&sp,&ra,sp,ra));
  return i; /* stack size */
}

get_sp and get_ra access registers so they must be assembly macros, which in the case of Green Hills can be spelled like this:

asm void* get_ra() {
  move $v0, $ra
}
asm void* get_sp() {
  move $v0, $sp
}

Under MIPS32 and Green Hills, this code seems to be giving decent call stacks except for the inevitable omission of function calls done without saving the return address; the most common case of those – simple delegating functions – was already mentioned above. If f calls g which does (almost) nothing except for calling h, and g doesn't bother to save the return address to the stack, having h return directly to f, then the call stack will contain f and h but not g. Not much of a problem and sometimes even an advantage as far as I'm concerned, since g is rarely interesting. Also, you can get this problem regardless of the lack of a frame pointer – for example, gcc -O3 on x86 will not maintain the call stack accurately even without -fomit-frame-pointer, generating the following ridiculous code:

pushl   %ebp
movl    %esp, %ebp
popl    %ebp
; so what was the point of setting up and
; immediately destroying a stack frame?
jmp     print_trace

Now, although code like get_prev_sp_ra looking for 0x27bd admittedly is a heuristic relying on undocumented platform-specific behavior, it looks like a passable way of getting a call stack on a RISC machine like MIPS, ARM or PowerPC. What about the x86 though? Effectively we have our code partially disassembling itself here, which is not nearly as easy with the x86 (in particular, I don't think there's a way to scan backwards because of the variable encoding length; although we could look at epilogues instead of prologues just as well).

Instead of dragging in a disassembler, we can use an external program, such as, well, a debugger. This obviously defeats the purpose of being able to get the call stack without a debugger. But this purpose isn't very interesting on the x86 in the first place because there you're rarely stuck in situations where a program can't run a debugger.

The only point of the disassembling business on the x86 thus remains to deal with programs compiled without a frame pointer and without debug information making it possible to get the call stack nonetheless. I don't know if anybody has such a problem these days, now that gcc has -fasynchronous-unwind-tables – perhaps someone uses compilers which can't do this or binaries compiled without this, and perhaps the problem is extinct on the x86. For what it's worth, here's a script getting the call stack from a core file without relying on gdb's bt command but relying on its disassemble command. Usage: python bt <program> <core>. No warranty, …or FITNESS FOR A PARTICULAR PURPOSE.

And this is all I know about getting the call stack in C or C++, something users of other languages can do, in the (unlikely) absence of a library function doing just that, simply by throwing an exception, immediately catching it and using its getStackTrace method or some such.

What makes cover-up preferable to error handling

There was a Forth tutorial which I now fail to find that literally had a "crash course" right in the beginning, where you were shown how to crash a Forth interpreter. Not much of a challenge – `echo 0 @ | pforth` does the trick for me – but I liked the way of presentation: "now we've learned how to crash, no need to return to that in the future".

So, let's have a Python & Perl crash course – do something illegal and see what happens. We'll start with my favorite felony – out-of-bounds array access:

python -c 'a=(1,2,3); print "5th:",a[5]'
perl -e '@a=(1,2,3); print "5th: $a[5]\n"'

The output:

5th:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
IndexError: tuple index out of range
5th:

Python in fact crashed, telling us it didn't like the index.

Perl was more kind and rewarded our out-of-bounds index with what looks like the empty string. Being the kind of evildoer who's only further provoked by the gentle reactions of a do-gooder, what shall we do to further harass it? Well, it looks like anything makes a good index (and I mean anything: if @a=(11,22), then $a["1"]==22 and $a["xxx"]==11). But perhaps some things don't make good arrays.

python -c 'x=5; print "2nd:",x[2]'
perl -e '$x=5; print "2nd: $x[2]\n"'

Output:

2nd:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: 'int' object is unsubscriptable
2nd:

Python gives us its familiar elaborate complaints, while Perl gives us its familiar laconic empty string. Its kindness and flexibility are such that in a numeric context, it would helpfully give us the number 0 – the empty string is what we get in a string context.

Is there any way to exceed the limits of Perl's patience? What about nonsensical operations – I dunno, concatenating a hash/dictionary/map/whatever you call it and a string?

python -c 'map={1:2,3:4}; print "map+abc:",map+"abc"'
perl -e '%map=(1,2,3,4); print "map+abc: ",%map . "abc\n"'

Output:

map+abc:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'dict' and 'str'
map+abc: 2/8abc

Python doesn't like our operands but Perl retains its non-judgmental frame of mind. A Perl detractor could point out that silently converting %map to "2/8" (hash size/reserved space) is patently insane. A Perl aficionado could point out that Perl seems to be following Python's motto "Explicit is better than implicit" better than Python itself. In Python you can't tell the type of map at the point of usage. Perl code clearly states it's a hash with %, moreover . specifically means string concatenation (as opposed to +). So arguably you get what you asked for. Well, the one thing that is not debatable is that we still can't crash Perl.

OK, so Perl is happy with indexes which aren't and it is happy with arrays which aren't, and generally with variables of some type which aren't. What about variables that simply aren't?

python -c 'print "y:",y'
perl -e 'print "y: $y\n"'

Output:

y:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
NameError: name 'y' is not defined
y:

NameError vs that hallmark of tolerance, the empty string, a helpful default value for a variable never defined.

By the way, this is how $x[5] evaluates to an empty string when x isn't an array, I think. $x[5] is unrelated to the scalar variable $x, it looks for the array variable @x in another namespace. There's no @x so you get an empty array, having no 5th element so you get "". I think I understand it all, except for one thing: is there any way at all to disturb the divine serenity of this particular programming language?

python -c 'a=0; print 1/a'
perl -e '$a=0; print 1/$a'

This finally manages to produce:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero
Illegal division by zero at -e line 1.

The second message about "illegal" division by zero (is there any other kind?) comes from our no longer tolerant friend, making me wonder. What is so special about division by zero? Why not be consistent with one's generally calm demeanor and return something useful like "" or 0? Would be perfectly reasonable – I did it myself, more accurately asked to have a hardware divider returning zero in these cases. Because there wasn't what hardware calls exception handling (having the processor jump to a handler in the middle of an instruction stream). We lived happily ever after, so what's wrong with 0?

But the real question is, what explains the stunning difference between Perl's and Python's character? Is it philosophical, "There's More Than One Way To Do It (TMTOWTDI)" vs "There should be one – and preferably only one – obvious way to do it; Although that way may not be obvious at first unless you're Dutch" (actual Perl and Python mottos, respectively)? The latter approach encourages to classify program behavior as "erroneous" where the former tends to instead assume you're knowingly doing something clever in Yet Another Possible Way, right?

Modernism vs Postmodernism, maybe, as outlined by Perl's author in "Perl, the first postmodern computer language"? "Perl is like the perfect butler. Whatever you ask Perl to do, it says `Very good, sir,' or `Very good, madam.' … Contrast that with the Modern idea of how a computer should behave. It's really rather patronizing: `I'm sorry Dave. I can't allow you to do that.'" The latter can be illustrated by Python's way of answering the wish of its many users to use braces rather than indentation for scoping:

>>> from __future__ import braces
SyntaxError: not a chance

So, "Very good, sir" vs "I can't allow you to do that". Makes sense with Python vs Perl, but what about, say, Lisp vs C++?

Lisp definitely has a "There's More Than One Way To Do It" motive in its culture. Look how much control flow facilities it has compared to Python – and on top of that people write flow macros, and generally "if you don't like the way built-in stuff works, you can fix it with macros", you know the drill. Guessing the user's intent in ambiguous situations? Perl says that it "uses the DWIM (that's "Do What I Mean") principle" for parsing, borrowing a term from Lisp environments. And yet:

(print (let ((x (make-array 3))) (aref x 5)))
*** - AREF: index 5 for #(NIL NIL NIL) is out of range
(print (let ((x 5)) (aref x 2)))
*** - AREF: argument 5 is not an array
(print (let ((m (make-hash-table))) (concatenate 'string m "def")))
*** - CONCATENATE: #S(HASH-TABLE :TEST FASTHASH-EQL) is not a SEQUENCE
(print y)
*** - EVAL: variable Y has no value
(print (/ 1 0))
*** - division by zero

5 out of 5, just like Python. Contrast that with C++ which definitely has a Bondage and Discipline culture, what with all the lengthy compiler error messages. Actually C++ would score 4 out of 5 on this test, but the test is a poor fit for statically typed languages. A more appropriate way to evaluate C++'s error handling approach would be to focus on errors only detectable at run time. The following message from The UNIX-HATERS Handbook records the reaction of a lisper upon his encounter with this approach:

Date: Mon, 8 Apr 91 11:29:56 PDT
From: Daniel Weise
To: UNIX-HATERS
Subject: From their cradle to our grave.

One reason why Unix programs are so fragile and unrobust is that C
coders are trained from infancy to make them that way. For example,
one of the first complete programs in Stroustrup’s C++ book (the
one after the “hello world” program, which, by the way, compiles
into a 300K image), is a program that performs inch-to-centimeter
and centimeter-to-inch conversion. The user indicates the unit of the
input by appending “i” for inches and “c” for centimeters. Here is
the outline of the program, written in true Unix and C style:

#include <stream.h>

main() {
  [declarations]
  cin >> x >> ch;
    ;; A design abortion.
    ;; This reads x, then reads ch.
  if (ch == 'i') [handle "i" case]
  else if (ch == 'c') [handle "c" case]
  else in = cm = 0;
    ;; That’s right, don’t report an error.
    ;; Just do something arbitrary.
[perform conversion] }

Thirteen pages later (page 31), an example is given that implements
arrays with indexes that range from n to m, instead of the usual 0 to
m. If the programmer gives an invalid index, the program just
blithely returns the first element of the array. Unix brain death forever!

You could say that the sarcasm in the Lisp-style comments proudly intermixed with C++ code is uncalled for since example programs are just that – example programs. As to the dreaded out-of-bound array access cited in the second example, well, C++ doesn't handle that to avoid run time overhead.

But the cited program didn't just ignore the range problem the way C would – it went to the trouble of checking the index and then helpfully returned the 0th element the way Perl would. Probably as one part of its illustration how in C++ you could have custom array types which aren't like C arrays. But why Postmodern Perl arrays, in a generally Disciplined language?

Well, it was 1991 and C++ exceptions were very young, likely younger than the cited example programs. (They've since aged but didn't improve to the point where most library users would be happy to have to catch them, hence many library writers aren't throwing them.)

Likewise, Perl had all the features used in the examples above before it had exceptions – or more accurately before it had them under the spotlight, if I understand this correctly. (In Perl you handle exceptions by putting code in an eval { … } block and then calls to the die() function jump to the end of that block, saving die's argument to $@ – instead of, well, dying. I think Perl had this relatively early; however it seems to only have become idiomatic after Perl 5's OO support and an option to send exception objects to $@, with people using die strictly for dying prior to that.) Perhaps Perl's helpful interpretations of obvious nonsense like $a["xxx"] aren't that helpful after all, but what would you rather have it do – die()?

AFAIK Python had exceptions under the spotlight from the beginning – although similarly to Perl it had exception strings before it had exception classes. And in fact it does its best to adhere to its "Errors should never pass silently" philosophy, the few deviations coming to mind having to do with Boolean contexts – the falsehood of empty strings, None and 0 together with None!=False/0==False/1==True/2!=True and similar gateways to pornographic programming.

Lisp has conditions and restarts which blow exceptions out of the water, hence its willingness to report errors isn't surprising. However, it gained these features in the 80s; what did previous dialects do? ERRORSET, which is similar to a try/catch block, appears to predate the current error handling system, but it doesn't seem to have been there from the very beginning, either. I'm not familiar with the Lisp fossil record, but there's a function for indexing lists called NTH which returns NIL given an out-of-bounds index. Lists definitely predate arrays, so I assume NTH likewise predates AREF which complains given a bad index. Perhaps NTH doesn't complain about bad indexes since it also predates ERRORSET and any other form of exception handling?

The pattern seems to be: if the language has exceptions, most of its features and libraries handle errors. If it doesn't, they don't; errors are covered up.

(Although I won't be surprised if I'm wrong about the Lisp history part because Lisp is generally much more thoughtful than I am  – just look at all the trouble Interlisp, by now an ancient dialect, went into in order to figure out whether a user wants to get an opportunity to fix an error manually or would rather have the program silently return to the topmost ERRORSET.)

awk and early PHP lack exceptions and are happy with out-of-bound array access. Java and Ruby have exceptions and you'll get one upon such access. It isn't just the culture. Or is it? Perl is PHP's father and awk is PHP's grandfather. *sh and make, which, like Perl, produce an empty string from $nosuchvar, aren't good examples, either – sh is Perl's mother and make is Perl's sister. Is it really the Unix lineage that is at fault as suggested by a dated message to a late mailing list?

Here's JavaScript, an offspring of Lisp:

>>> [1,2,3][5]+5
NaN
>>> [1,2,3][5]+"abc"
"undefinedabc"

I think this definitely rivals Perl. Apparently it's not the lineage that is the problem – and JavaScript didn't have exceptions during its first 2 years.

The thing is, errors are exceedingly gnarly to handle without exceptions. Unless you know what to do at the point where the error is detected, and you almost never know what to do at the point where the error is detected, you need to propagate a description of the error up the call chain. The higher a function sits up the call chain, the more kinds of errors it will have to propagate upwards.

(With a C++ background it can look like the big problem doesn't come from function calls but from operators and expressions which syntactically have nowhere to send their complaints. But a dynamic language could have those expressions evaluate to a special Error object just as easily as it can produce "" or "undefined". What this wouldn't solve is the need to clutter control flow somewhere down the road when deciding which control path to take or what should go to files or windows, now that every variable can have the value Error tainting all computations in its path.)

Different errors carry different meta-data with them – propagating error codes alone ought to be punishable by death. What good is a "no such file" error if I don't know which file it is? What good is a "network layer malfunction" error if I can't tell that in fact it's a "no such file" error at the network layer because a higher level lumps all error codes from a lower level into one code? (It always lumps them together since the layers have different lifecycles and the higher level can't be bothered to update its error code list every time the lower level does.)
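
A tiny sketch of the lumping problem (the layers, functions and codes below are made up for illustration):

/* the network layer has specific error codes... */
enum net_status { NET_OK, NET_NO_SUCH_FILE, NET_TIMEOUT, NET_AUTH_FAILED };
/* ...and the layer above it has one generic failure code */
enum file_status { FILE_OK, FILE_ERROR };

enum net_status net_open(const char* path); /* the assumed lower-level API */

enum file_status file_open(const char* path) {
  if(net_open(path) != NET_OK) {
    /* every lower-level code lumped into one: by the time the caller
       sees FILE_ERROR, it can't tell it was "no such file", or which file */
    return FILE_ERROR;
  }
  return FILE_OK;
}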

Different meta-data means, importantly for static languages, different types of output objects from callee functions depending on the error, which can only be handled in an ugly fashion. But even if there's no such problem, you have to test every function call for errors. If a function didn't originally return error information but now does, you have to change all points of call.

Nobody ever does it.

That apparently all programming languages without exception handling cover up errors at the language level is, to me, an upper bound on what to expect.

A designer of a successful programming language, whatever you think of that language, is definitely an above-average programmer; actually, I think one that can safely be assumed to occupy the top tenth no matter how you rate programming ability (and really, how do you?). Moreover, the language designer is also in the top tenth if programmers are sorted by the extent of importance they assign to their work and motivation to get things right – because the problems are fundamental, because of the ego trip, because of everything.

And still, once decision or chance leads them into a situation where the language has no exceptions, they allow themselves to have errors covered up throughout their cherished system. Despite the obvious pain it causes them, as evident from the need for rationalizations ranging from runtime overhead (as if you couldn't have an explicit unsafe subset for that – see C#) to the Postmodern Butler argument ("Fuck you plenty? Very good, sir!").

What does this leave us to expect from average programs? The ones written in the order of execution, top to bottom, with the same intellectual effort that goes into driving from point A to point B? The ones not exposed to the kind of massive testing/review that could exterminate bugs in a flophouse?

This is why nothing inspires my trust in a program like a tendency to occasionally wet its pants with an error call stack. Leaving some exceptions uncaught proves the programmers are somewhat sloppy but I wouldn't guess otherwise anyway. What's more important is that because of exceptions along the road, the program most likely won't make it to the code where shooting me in the foot is implemented. And if a program never spills its guts this way, especially if it's the kind of program with lots of I/O going on, I'm certain that under the hood, it's busy silencing desperate screams: where there are no exceptions, there must be cover-up.

The C++ Sucks Series: petrifying functions

Your IT department intends to upgrade your OS and gives a developer a new image to play with. The developer is generally satisfied, except there's this one program that mysteriously dumps core. Someone thoughtfully blames differences in system libraries.

Alternative prelude: you have this program and you're working on a new version. Being generally satisfied with the updates, you send the code overseas. They build it and it mysteriously dumps core. Someone thoughtfully blames differences in the compiler version.

Whatever the prelude, you open the core dump with `gdb app core` and gdb says:

#0  0x080484c9 in main (argc=Cannot access memory at address 0xbf3e7a8c) at main.cpp:4
4    int main(int argc, char** argv)
(gdb)

Check out the garbage near "argc=" – if it ain't printing garbage, it ain't a C++ debugger. Anyway, it looks like the program didn't even enter main. An alert C++ hater will immediately suspect that the flying circus happening in C++ before main could be at fault, but in this case it isn't. In fact, a program can be similarly petrified by the perspective of entering any function, not necessarily main. It's main where it crashes in our example because the example is small; here's the source code:

#include <stdio.h>
#include "app.h"

int main(int argc, char** argv)
{
  if(argc != 2) {
    printf("please specify a profilen");
    return 1;
  }
  const char* profile = argv[1];
  Application app(profile);
  app.mainLoop();
}

On your machine, you run the program without any arguments and sure enough, it says "please specify a profile"; on this other machine, it just dumps core. Hmmm.

Now, I won't argue that C++ isn't a high-level object-oriented programming language since every book on the subject is careful to point out the opposite. Instead I'll argue that you can't get a first-rate user experience with this high-level object-oriented programming language if you don't also know assembly. And with the first-rate experience being the living hell that it is, few would willingly opt for a second-rate option.

For example, nothing at the source code level can explain how a program is so shocked by the necessity of running main that it dumps a core in its pants. On the other hand, here's what we get at the assembly level:

(gdb) p $pc
$1 = (void (*)(void)) 0x80484c9 <main+20>
(gdb) disass $pc
Dump of assembler code for function main:
0x080484b5 <main+0>:    lea    0x4(%esp),%ecx
0x080484b9 <main+4>:    and    $0xfffffff0,%esp
0x080484bc <main+7>:    pushl  -0x4(%ecx)
0x080484bf <main+10>:    push   %ebp
0x080484c0 <main+11>:    mov    %esp,%ebp
0x080484c2 <main+13>:    push   %ecx
0x080484c3 <main+14>:    sub    $0xa00024,%esp
0x080484c9 <main+20>:    mov    %ecx,-0xa0001c(%ebp)
# we don't care about code past $pc -
# a screenful of assembly elided

What this says is that the offending instruction is at the address main+20. As you'd expect with a Segmentation fault or a Bus error core dump, this points to an instruction accessing memory, specifically, the stack.

BTW I don't really know the x86 assembly, but I can still read it thusly: "mov" can't just mean the tame RISC "move between registers" thing because then we wouldn't crash, so one operand must spell a memory address. Without remembering the source/destination order of the GNU assembler (which AFAIK is the opposite of the usual), I can tell that it's the second operand that is the memory operand because there's an integer constant which must mean an offset or something, and why would you need a constant to specify a register operand. Furthermore, I happen to remember that %ebp is the frame pointer register, which means that it points into the stack; however, I could figure it out from a previous instruction at main+11, which moves %esp [ought to be the stack pointer] to %ebp (or vice versa, as you could think without knowing the GNU operand ordering – but it would still mean that %ebp points into the stack.)

Which goes to show that you can read assembly while operating from a knowledge base that is not very dense, a way of saying "without really knowing what you're doing" – try that with C++ library code; but I digress. Now, why would we fail to access the stack? Could it have to do with the fact that we apparently access it with the offset -0xa0001c, which ought to be unusually large? Let's have a look at the local variables, hoping that we can figure out the size of the stack main needs from their sizes. (Of course if the function used a Matrix class of the sort where the matrix is kept by value right there in a flat member array, looking at the named local variables mentioned in the program wouldn't be enough since the temporaries returned by overloaded operators would also have to be taken into account; luckily this isn't the case.)

(gdb) info locals
# if it ain't printing garbage, it ain't a C++ debugger:
profile = 0xb7fd9870 "U211?WVS??207"
app = Cannot access memory at address 0xbf3e7a98

We got two local variables; at least one must be huge then. (It can be worse in real life, main functions being perhaps the worst offenders, as many people are too arrogant to start with an Application class. Instead they have an InputParser and an OutputProducer and a Processor, which they proudly use in a neat 5-line main function – why wrap that in a class, 2 files in C++-land? Then they add an InputValidator, an OutputFormatConfigurator and a ProfileLoader, then less sophisticated people gradually add 20 to 100 locals for doing things right there in main, and then nobody wants to refactor the mess because of all the local variables you'd have to pass around; whereas an Application class with two hundred members, while disgusting, at least makes helper functions easy. But I digress again.)

(gdb) p sizeof profile
$2 = 4
(gdb) p sizeof app
$3 = 10485768

"10485768". The trouble with C++ debuggers is that they routinely print so much garbage due to memory corruption, debug information inadequacy and plain stupidity that their users are accustomed to automatically ignore most of their output without giving it much thought. In particular, large numbers with no apparent regularity in their digits are to a C++ programmer what "viagra" is to a spam filter: a sure clue that something was overwritten somewhere and the number shouldn't be trusted (I rarely do pair programming but I do lots of pair debugging and people explicitly shared this spam filtering heuristic with me).

However, in this case overwriting is unlikely since a sizeof is a compile time constant stored in the debug information and not in the program memory. We can see that the number will "make more sense" in hexadecimal (which is why hex is generally a good thing to look at before ignoring "garbage"):

(gdb) p /x sizeof app
$4 = 0xa00008

…Which is similar to our offset value, and confirms that we've been debugging a plain and simple stack overflow. Which would be easy to see in the case of a recursive function, or if the program crashed, say, in an attempt to access a large local array. However, in C++ it will crash near the beginning of a function long before the offending local variable is even declared, in an attempt to push the frame pointer or some such; I think I also saw it crash in naively-looking places further down the road, but I can't reproduce it.

Now we must find out which member of the Application class is the huge one, which is lots of fun when members are plentiful and deeply nested, which, with a typical Application class, they are. Some languages have reflection given which we could traverse the member tree automatically; incidentally, most of those languages don't dump core though. Anyway, in our case finding the problem is easy because I've made the example small.

(I also tried to make it ridiculous – do you tend to ridicule pedestrian code, including your own, sometimes as you type? Few do and the scarcity makes them very dear to me.)

class Application
{
 public:
  Application(const char* profile);
  void mainLoop();
 private:
  static const int MAX_BUF_SIZE = 1024;
  static const int MAX_PROF = 1024*10;
  const char* _profPath;
  char _parseBuf[MAX_BUF_SIZE][MAX_PROF];
  Profile* _profile;
};

This shows that it's _parseBuf that's causing the problem. This also answers the question of an alert C++ apologist regarding all of the above not being special to C++ but also relevant to C (when faced with a usability problem, C++ apologists like to ignore it and instead concentrate on assigning blame; if a problem reproduces in C, it's not C++'s fault according to their warped value systems.) Well, while one could write an equivalent C code causing a similar problem, one is unlikely to do so because C doesn't have a private keyword which to a first approximation does nothing but is advertised as an "encapsulation mechanism".

In other words, an average C programmer would have a createApplication function which would malloc an Application struct and all would be well since the huge _parseBuf wouldn't land on the stack. Of course an average C++ programmer, assuming he found someone to decipher the core dump for him as opposed to giving up on the OS upgrade or the overseas code upgrade, could also allocate the Application class dynamically, which would force him to change an unknown number of lines in the client code. Or he could change _parseBuf's type to std::vector, which would force him to change an unknown number of lines in the implementation code, depending on the nesting of function calls from Application. Alternatively the average C++ programmer could change _parseBuf to be a reference, new it in the constructor(s) and delete it in the destructor, assuming he can find someone who explains to him how to declare references to 2D arrays.
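
For illustration, the C-style route could look roughly like this (a sketch: the struct fields merely mirror the C++ class above, and createApplication is the hypothetical function mentioned earlier):

#include <stdlib.h>

enum { MAX_BUF_SIZE = 1024, MAX_PROF = 1024*10 };

typedef struct {
  const char* profPath;
  char parseBuf[MAX_BUF_SIZE][MAX_PROF];
  struct Profile* profile;
} Application;

/* the 10M parse buffer lives inside the malloc'd block, not on the stack */
Application* createApplication(const char* profile) {
  Application* app = (Application*)malloc(sizeof(Application));
  if(app) {
    app->profPath = profile;
    app->profile = 0;
  }
  return app;
}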

However, suppose you don't want to change code but instead would like to make old code run on the new machine – a perfectly legitimate desire independently of the quality of the code and its source language. The way to do it under Linux/tcsh is:

unlimit stacksize

Once this is done, the program should no longer dump core. `limit stacksize` would show you the original limit, which AFAIK will differ across Linux installations and sometimes will depend on the user (say, if you ssh to someone's desktop, you can get a lesser default stacksize limit and won't be able to run the wretched program). For example, on my wubi installation (Ubuntu for technophobes like myself who have a Windows machine, want a Linux, and hate the idea of fiddling with partitions), `limit stacksize` reports the value of 8M.

Which, as we've just seen, is tiny.

Coding standards: having more errors in code than code

I ran LINT version 9, configured to report the violations of the rules in the MISRA C++ 2008 coding standard, on a C++ source file. LINT is perhaps the most famous tool for statically checking C and C++ source code. MISRA stands for the Motor Industry Software Reliability Association, mandating adherence to its coding standards throughout the automotive industry.

The source file I tried has several KLOC worth of code, and the output of the preprocessor takes about 1M – pretty normal for C++ where a "Hello, world!" program generates 3/4M of preprocessed output. The output of LINT takes 38M. That's 38x more errors than code.

We're not finished parsing this output so I'm not sure which rules cause most violations and whether they can be clustered somehow to compress the 38M into something resembling comprehensible narrative in contents and size. The only thing basic attempts at parsing revealed at this point is that the distribution of the violations is roughly geometric, with the majority of the errors reporting violations of a minority of the rules.

Therefore, my only way of conveying some insight into the MISRA rules enforced by LINT is to look at a toy example. My example will be a Hello, world program – 2 LOC or 3/4M worth of code depending on your perspective. I'll assume LINT is told to ignore standard libraries, so it will actually be closer to 2 LOC.

#include <iostream>
int main() { std::cout << "Hello, world" << std::endl; }

From this program, LINT will produce 4 error messages when configured to enforce MISRA C++ 2008:

  1. The "int" in "int main" violates an advisory rule to avoid using built-in types and instead use typedefs indicating the size and signedness of the type, such as int32_t, INT or signed32T. Many an automotive project use a mixture of 2 or 3 of these conventions, which is compliant with the MISRA guidelines and presumably results from the history of merging or integrating code bases and/or teams. (I believe that in the particular case of main, the C and C++ standards both mandate the use of int; I didn't check if you can use a typedef to spell int but I'm certain that you can't have main() return an int32_t on a platform where int is 16b. Anyway, it appears that LINT doesn't bother to special-case main() – but you can do that yourself in its configuration file or right there in the source code, as you will have to do in many other cases.)
  2. The first left shift operator violates a MISRA rule disallowing the use of bitwise shift on signed types, or so it does according to LINT, which presumably checks whether the operands are of an unsigned integral type and reports an error if they are not (the other option is that it figures an output stream or a literal character array are "signed", but I can't see how they can be unless it's a signature we're talking about rather than signedness). The MISRA rule is based on the fact that the behavior of bitwise shift is implementation-defined and thus not portable. I do believe that there does not exist a 32b machine which does not use the 2's complement representation for integers and is a target of an automotive application. A notable share of automotive applications use signed integers to represent fixed point numbers, and I believe all of them rely on the 2's complement semantics of bitwise shifts to emulate multiplication and division (see the sketch after this list).
  3. The second left shift operator is reported as violating the same rule.
  4. The two left shift operators as a whole are reported to violate the rule disallowing dependence on C operator precedence. That is, in order to correctly understand this program, a reader would have to know that (std::cout << "Hello, world!") would be evaluated first and then its output would be shifted to the left by std::endl. MISRA strives to prevent confusion, based on a well-founded assumption that few programmers know the rules of operator precedence and evaluation order, and LINT enforces the rules defined based on these premises.
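
To make the fixed-point claim from item 2 concrete, here's a sketch of the kind of code such projects are full of (the Q16.16 typedef and function are my own illustration, not anything from MISRA or LINT):

/* Q16.16 fixed point: the low 16 bits are the fraction (an assumed convention) */
typedef signed int fixed_q16;

/* divide by 4 (rounding toward minus infinity) with an arithmetic right shift;
   for negative values this is implementation-defined by the standard, and
   relying on the 2's complement behavior is precisely what the MISRA rule
   objects to, and precisely what such code does on every target it runs on */
fixed_q16 quarter(fixed_q16 x) {
  return x >> 2;
}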

I hope this gives some insight on the general code/errors ratio.

Humans and compilers need each other: the VLIW SIMD case

The state of the art in optimizing compilers today is such that for optimizing code, you need (1) a strong optimizing compiler and (2) a strong optimizing human. My rule of thumb is that (1) alone will yield 2x to 10x slower code. This is also what a person selling a (great) compiler "giving 80% of the optimal performance with no manual intervention" once told off-record to a roomful of programmers who pressed him into a corner, elevating my rule of thumb to a nobler plane of anecdotal evidence.

Now, I claim that this situation will persist, and in this post I'll try to close the fairly large gap between this claim and the mere acknowledgment of what the state of the art is today. The gap is particularly large for the believer in the possibility of strong AI – and while my position is a bit different, I do believe in fairly strong AI (can I say that? people keep telling me that I can't say "nearly context-free". oh well.)

I realize that many people experienced in optimization feel that, on the contrary, there's in fact no gap large enough to justify an attempt as boringly rigorous (for a pop tech blog) at proving what they think is obvious as will shortly follow. But I think that many language geek discussions could benefit from a stronger bound on the power of a Sufficiently Smart Compiler than can be derived from (necessarily vague) doubts on the power of AI, and in this post I'll try to supply such a bound. I actually think a lot of (mainly domain-specific) things could be achieved by AI-ish work on compilation – closer to "identify bubble-sort and convert to quick-sort" than to traditional "analyze when variables are alive and assign them to registers" – and this is why it's useful to have a feeling when not to go there.

So, consider chess, where the state of the art is apparently quite similar to that in optimization: a strong human player using a strong computer program will take out both a human and a computer playing alone. However, it is conceivable that a program can be developed that doesn't need the help of a human, being able to completely simulate human thought processes or instead alternative processes which are consistently superior. Why can't it be the same with optimizing compilers?

(Chess and optimization are similar in another respect – few care about them; I readily acknowledge the insignificance of a 10x speed-up in a continuously expanding set of circumstances, I just happen to work in an area where it does count as a checkmate.)

I'll try to show that optimization is a fundamentally different game from chess, quite aside from the formal differences such as decidability. I'll use optimizing for VLIW SIMD processors to show where compilers outperform humans and vice versa. I'll be quoting a book by the inventor of VLIW called "Embedded Computing: A VLIW Approach" to support my position on the relative strength of humans and compilers in these cases. I'll then try to show that my examples are significant outside the peculiarities of current hardware, and attempt to state the general reason why humans are indispensable in optimization.

VLIW SIMD

First, we'll do the acronym expansion; skip it if you've been through it.

VLIW stands for "Very Long Instruction Word". What it really means is that your target processor can be told to execute several instructions in parallel. For example: R0=Add R1,R2 and R3=Mul R0,R1 and R1=Shift R5,R6. For this to work, the processor ought to be able to add, multiply and shift in parallel, that is, its execution hardware must be packed into several units, each getting distinct inputs. The units can be completely symmetric (all supporting the same operations); more often, different units support different instruction sets (so, for example, only one unit in a processor can multiply, but two of them can add, etc.) A stinky thing to note about VLIW instructions is the register semantics. In the example instruction above, R0 is mentioned both as an input and as an output. When it's mentioned as an input of Mul its old value is meant, and not the value computed by Add. This is somewhat natural since the whole point is to run Add and Mul in parallel so you don't want Mul to wait for Add; but it's confusing nonetheless. We'll come back to this shortly.

SIMD stands for "Single Instruction, Multiple Data" and is known much more widely than VLIW, being available in desktop and server processor architectures like x86 and PowerPC (VLIW reigns in the quieter embedded DSP domain, the most commercially significant design probably being TI's C6000 family.) SIMD means that you have commands like R0=Add8 R1,R2, which does 8 additions given 2 inputs. The registers are thus treated as vectors of numbers – for example, uint8[16], or uint16[8], or uint32[4], assuming 16-byte registers. This establishes a preference for lower-precision numbers since you can pack more of them into a register and thus process more of them at a time: with uint16, you use Add8, but with uint8, you get to use the 2x faster Add16. We'll come back to this, too.
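
To make the SIMD part concrete, here's a sketch using x86 SSE2 intrinsics (16-byte registers; the _mm_* names are Intel's intrinsics from <emmintrin.h>, not the imaginary Add8/Add16 used above):

#include <emmintrin.h>

/* add two arrays of bytes, 16 elements per instruction;
   with uint16 elements we'd use _mm_add_epi16 and get only 8 per instruction */
void add_bytes(unsigned char* dst, const unsigned char* a,
               const unsigned char* b, int n) {
  int i;
  for(i = 0; i + 16 <= n; i += 16) {
    __m128i va = _mm_loadu_si128((const __m128i*)(a + i));
    __m128i vb = _mm_loadu_si128((const __m128i*)(b + i));
    _mm_storeu_si128((__m128i*)(dst + i), _mm_add_epi8(va, vb));
  }
  for(; i < n; ++i) { /* scalar leftovers */
    dst[i] = a[i] + b[i];
  }
}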

Optimizing for VLIW targets

The basic thing at which VLIW shines is the efficient implementation of "flat" loops (where most programs spend most time); by "flat", I mean that there are no nested if/elses or loops. The technique for implementing loops on VLIW machines is called modulo scheduling. The same technique is used on superscalar machines like modern x86 implementations (the difference from VLIWs being the instruction encoding semantics).

Since I couldn't find a good introductory page to link to, we'll run through a basic example of modulo scheduling right here. The idea is pretty simple, although when I first saw hardware designers doing it manually in a casual manner, I was deeply shocked (they do it for designing new hardware rather than programming existing hardware but it's the same principle).

Suppose you want to compute a[i]=b[i]*c+d on a VLIW processor with 4 units, 2 of them capable of load/store operations, 1 with an adder and 1 with a multiplier. All units have single-cycle latency (that is, their output is available to the next instruction; real VLIW units can have larger latencies, so that several instructions will execute before the result reaches the output register.) Let's assume that Load and Store increment the pointer, and ignore the need to test for the exit condition through the loop. Then a trivial assembly implementation of a[i]=b[i]*c+d looks like this:

LOOP:
R0=Load b++
R1=Mul R0,c
R2=Add R1,d
Store a++,R2

This takes 4 cycles per iteration, and utilizes none of the processor's parallelism as each instruction only uses 1 of the 4 execution units. Presumably we could do better; in fact the upper bound on our performance is 1 cycle per iteration, since no unit has to be used more than once to implement a[i]=b[i]*c+d (if we had two multiplications, for example, then with only 1 multiplying unit the upper bound would be 2 cycles/iteration.)

What we'll do now is blithely schedule all of the work to a single instruction, reaching the throughput suggested by our upper bound:

LOOP:
R0=Load b++ and R1=Mul R0,c and R2=Add R1,d and Store a++,R2

Let's look at what this code is doing at iteration N:

  • b[N] is loaded
  • b[N-1] (loaded at the previous iteration into R0) is multiplied by c
  • b[N-2]*c (computed at the previous iteration from the old value of R0 and saved to R1) is added to d
  • b[N-3]*c+d is saved to a[N]

This shows why our naive implementation doesn't work (it would be quite surprising if it did) – at iteration 0, b[-1] to b[-3] are undefined, so it makes no sense to do things depending on these values. However, starting at N=3, our (single-instruction) loop body seems to be doing its job just fine (except for storing the result to the wrong place – a ran away during the first 3 iterations, which stored garbage). We'll take care of the first iterations by adding a loop header – instructions which implement the first 3 iterations, only doing the stuff that makes sense in those iterations:

R0=Load b++
R0=Load b++ and R1=Mul R0,c
R0=Load b++ and R1=Mul R0,c and R2=Add R1,d
LOOP:
R0=Load b++ and R1=Mul R0,c and R2=Add R1,d and Store a++,R2

For similar reasons, we need a loop trailer – unless we don't mind loading 3 elements past the end of b[], but I reckon you get the idea. So we'll skip the trailer part, and move to the more interesting case – what happens when the loop body won't fit into a single instruction. To show that, I can add more work to be done in the loop so it won't fit into the units, or I can use a weaker imaginary target machine to do the same work which will no longer fit into the (fewer) units. The former requires more imaginary assembly code, so I chose the latter. Let's imagine a target machine with just 2 units, one with Load/Store and one with Add/Mul. Then our upper bound on performance is 2 cycles per iteration. The loop body will look like this:

LOOP:
R0=Load b++ and R2=Add R1,d
R1=Mul R0,c and Store a++,R2

Compared to the single-instruction case, which was still readable ("Load and Mul and Add and Store"), this piece looks garbled. However, we can still trace its execution and find that it works correctly at iteration N (assuming we added proper header code):

  • At instruction 1 of iteration N, b[N] is loaded
  • At instruction 2 of iteration N, b[N] (loaded to R0 by instr 1 of iter N) is multiplied by c
  • At instruction 1 of iteration N, b[N-1]*c (computed in R1 by instr 2 of iter N-1) is added to d
  • At instruction 2 of iteration N, b[N-1]*c+d (computed in R2 by instr 1 of iter N) is stored to a[N]

In common VLIW terminology, the number of instructions in the loop body, known to the rest of humanity as "throughput", is called "initiation interval". "Modulo scheduling" is presumably so named because the instructions implementing a loop body are scheduled "modulo initiation interval". In our second example, the operations in the sequence Load, Mul, Add, Store go to instructions 0,1,0,1 = 0%2,1%2,2%2,3%2. In our first example, everything goes to i%1=0 – which is why I needed an example with at least 2 instructions in a loop, "modulo 1" being a poor way to illustrate "modulo".

In practice, "modulo scheduling" grows more hairy than simply computing the initiation interval, creating a linear schedule for your program and then "wrapping it around" the initiation interval using %. For example, if for whatever reason we couldn't issue Mul and Store at the same cycle, we could still implement the loop at the 2 cycles/iteration throughput, but we'd have to move the Mul forward in our schedule, and adjust the rest accordingly.

I've done this kind of thing manually for some time, and let me assure you that fun it was not. An initiation interval of 3 with 10-15 temporary variables was on the border of my mental capacity. Compilers, on the other hand, are good at this, because you can treat your input program as a uniform graph of operations and their dependencies, and a legal schedule preserving its semantics is relatively easy to define. You have a few annoyances like pointer aliasing which precludes reordering, but it's a reasonably small and closed set of annoyances. Quoting "Embedded Computing: A VLIW Approach" (3.2.1, p. 92): "All of these problems have been solved, although some have more satisfyingly closed-form solution than others." Which is why some people with years of experience on VLIW targets know almost nothing about modulo scheduling – a compiler does a fine job without their help.

The book goes on to say that "Using a VLIW approach without a good compiler is not recommended" – in other words, a human without a compiler will not perform very well. Based on my experience of hand-coding assembly for a VLIW, I second that. I did reach about 95% of the performance of a compiler that was developed later, but the time it took meant that many optimizations just wouldn't fit into a practical release schedule.

Optimizing for SIMD targets

I will try to show that humans optimize well for SIMD targets and compilers don't. I'll quote "Embedded Computing: A VLIW Approach" more extensively in this section. A book on VLIW may not sound like the best source for insight on SIMD; however, I somewhat naturally haven't heard of a book on SIMD stressing how compilers aren't good at optimizing for it. But then I haven't heard of a book stressing the opposite, either, and the success claimed for automatic vectorization in the papers I did see was modest. Furthermore, the particular VLIW book I quote is in fact focused on embedded DSP, where SIMD is ubiquitous, and its central theme is the importance of designing processors in ways that make them good targets for optimizing compilers. It sounds like a good place to look for tips on designing compilers to work well with SIMD and vice versa; and if they say they have no such tips, it's telling.

And in fact the bottom line of the discussion on SIMD (which they call "micro-SIMD") is fairly grim: "The ability of compilers to automatically extract micro-SIMD without hints (and in particular, without pointer alignment information) is still unproven, and manual code restructuring is still necessary to exploit micro-SIMD parallelism" (4.1.4, p. 143). This statement from 2005 is consistent with what (AFAIK) compilers can do today. No SIMD-targeted programming environment I know relieves you of the need to use intrinsics in your C code as in "a = Add8(b,c)", where Add8 is a built-in function-looking operator translated to a SIMD instruction.

What I find fascinating though is the way they singled out pointer alignment as a particularly interesting factor necessitating "hints". Sure, most newbies to SIMD are appalled when they find out about the need to align pointers to 16 bytes if you want to use instructions accessing 16 bytes at a time. But how much of a show-stopper can that be if we are to look at the costs and benefits more closely? Aligning pointers is easy, producing run time errors when they aren't is easier, telling a compiler that they are aligned can't be hard (say, gcc has a __vector type modifier telling that), and alternatively generating two pieces of code – optimized for the aligned case and non-optimized for the misaligned case – isn't hard, either (the book itself mentions still another option – generating a non-optimized loop header and trailer for the misaligned sections of an array).
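
For illustration, here's roughly what the "two pieces of code" option could look like – an SSE sketch with a runtime alignment check (the function name is made up):

#include <xmmintrin.h> /* SSE */
#include <stdint.h>

void add_floats(float* dst, const float* a, const float* b, int n)
{
    int i = 0;
    if (((uintptr_t)dst | (uintptr_t)a | (uintptr_t)b) % 16 == 0) {
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_load_ps(a + i); /* aligned 16-byte loads */
            __m128 vb = _mm_load_ps(b + i);
            _mm_store_ps(dst + i, _mm_add_ps(va, vb));
        }
    }
    for (; i < n; ++i) /* misaligned pointers, or leftover elements */
        dst[i] = a[i] + b[i];
}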

There ought to be more significant reasons for people to be uglifying their code with non-portable intrinsics, and in fact there are. The book even discusses them in the pages preceding the conclusion – but why doesn't it mention the more serious reasons in the conclusion? To me this is revealing of the difference between a programmer's perspective and a compiler writer's perspective, which is related to the difference between optimization and chess: in chess, there are rules.

For an optimizing programmer, SIMD instructions are a resource from which most benefit must be squeezed at any reasonable cost, including tweaking the behavior of the program. For an optimizing compiler, SIMD instructions are something that can be used to implement a piece of source code, in fact the preferable way to implement it – as long as its semantics are preserved. This means that a compiler obeys rules a programmer doesn't, making winning impossible. A typical reaction of a compiler writer is to think of this as not his problem – his problem ending where program transformations preserving the semantics are exhausted. I think this is what explains the focus on things like pointer alignment (which a compiler can in fact solve with a few hints and without affecting the results of the program) at the expense of the substantive issues (which it can't).

In the context of SIMD optimizations, the most significant example of rules obeyed by just one of the contestants has to do with precision, which the book mentions right after alignment in its detailed discussion of the problems with SIMD. "Even when we manipulate byte-sized quantities (as in the case of most pixel-based images, for example), the precision requirements of the majority of manipulation algorithms require keeping a few extra bits around (9, 12, and 16 are common choices) for the intermediate stages of an algorithm. …this forces us up to the next practical size of sub-word … reducing the potential parallelism by a factor of two up front." They go on to say that a 32b register will end up keeping just 2 16b numbers, giving a 2x speed-up – modest considering all the cases when you won't get even that due to other obstacles.

This argument shows the problems precision creates for the hardware implementation of SIMD. However, the precision of intermediate results isn't as hard a problem as this presentation makes it sound, because intermediate results are typically kept in registers, not in memory. So to keep the extra bits in intermediate results, you can either use large registers for SIMD operations and not "general-purpose" 32b ones, or you can keep intermediate results in pairs of registers – as long as you have enough processing units to generate and further process these intermediate results. Both things are done by actual SIMD hardware.

However, the significant problems created by precision lie at the software side: the compiler doesn't know how many bits it will need for intermediate results, nor when precision can be traded for performance. In C, the type of the intermediate results in the expression (a[i]*3+b[i]*c[i])>>d is int (roughly, 32b), even if a, b and c are arrays of 8b numbers, and the parenthesized expression can in fact exceed 16b. The programmer may know that b[i]*c[i] never exceeds, say, 20000 so the whole thing will fit in 16b. That C has no way of specifying precise ranges of values a variable can hold (as opposed to Lisp, of all rivals to the title of the most aggressively optimizing environment) doesn't by itself make an argument since a way could be added, just like gcc added __vector, not to mention the option of using a different language. Specifying the ranges of b[i] and c[i] wouldn't always suffice and we would have to further uglify the code to specify the range of the product (in case both b[i] and c[i] can be large by themselves but never together), but it could be done.

The real problem with having to specify such information to the compiler isn't the lack of a standard way of spelling it, but that a programmer doesn't know when to do it. If it's me who is responsible for the low-level aspects of optimization, I'll notice the trouble with an intermediate result requiring too many bits to represent. I will then choose whether to handle it by investigating the ranges of b[i] and c[i] and restricting them if needed, by moving the shift by d into the expression as in (a[i]*3>>d)+(b[i]*c[i]>>d) so intermediate results never exceed 16b, or in some other way. But if it's the compiler who's responsible, chances are that I won't know that this problem exists at all.
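
Spelled out as code, the two choices from the last paragraph look like this (a sketch; the function names are made up, and the assumption that b[i]*c[i] stays small enough is exactly the kind of thing only the programmer can know):

#include <stdint.h>

/* Worst case needs a 17-bit intermediate for the parenthesized sum,
   pushing SIMD code up to the next sub-word size. */
int wide_intermediate(const uint8_t* a, const uint8_t* b, const uint8_t* c,
                      int d, int i)
{
    return (a[i] * 3 + b[i] * c[i]) >> d;
}

/* Intermediates stay within 16 bits, at the cost of rounding each term
   separately - a small precision loss traded for 2x the SIMD parallelism. */
int narrow_intermediate(const uint8_t* a, const uint8_t* b, const uint8_t* c,
                        int d, int i)
{
    return (a[i] * 3 >> d) + (b[i] * c[i] >> d);
}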

There's a trade-off between performance gains, precision losses and the effort needed to obtain more knowledge about the problem. A person can make these trade-offs because the person knows "what the program really does", and the semantics of the source code are just a rendering of that informal spec from one possible perspective. It's even worse than that – a person actually doesn't know what the program really does until attempting to optimize it, so even strong AI capable of understanding an informal spec in English wouldn't be a substitute for a person.

A person can say, "Oh, we run out of bits here. OK, so let's drop the precision of the coefficients." Theoretically, and importantly for my claim, strong AI can also say that – but only if it operates as a person and not as a machine. I don't claim that we'll never reach a point where we have a machine powerful enough to join our team as a programmer, just that (1) we probably wouldn't want to and (2) if we would, it wouldn't be called a compiler, it would be called a software developer. That is, you wouldn't press a button and expect to get object code from your source code, you'd expect a conversation: "Hey, why do you need so many bits here – it's just a smoothing filter, do you really think anyone will notice the difference? Do you realize that this generates 4x slower code?" And then someone, perhaps another machine, would answer that yes, perhaps we should drop some of the bits, but let's not overdo it because there are artifacts, and I know you couldn't care less because your job ends here but those artifacts are amplified when we compute the gradient, etc.

This is how persons optimize, and while a machine could in theory act as a person, it would thereby no longer be a compiler. BTW, we have a compiler at work that actually does converse with you – it says that it will only optimize a piece of code if you specify that the minimal number of iterations executed is such and such; I think it was me who proposed to handle that case using conversation. So this discussion isn't pure rhetoric. I really wish compilers had a -warn-about-missed-optimization-opportunities switch that would give advice of this kind; it would help in a bunch of interesting cases. I just think that in some cases, precision being one of them, the amount and complexity of interactions needed to make headway like that exceeds the threshold separating aggressive optimization from aggressive lunacy.

To be sure, there are optimization problems that could be addressed by strong AI. In the case of SIMD, the book mentions one such area – they call it "Pack, Unpack, and Mix". "Some programs require rearranging the sub-words within a container to deal with the different sub-word layouts. From time to time, the ordering of the sub-words within a word (for example, coming from loading a word from memory) does not line up with the parallelism in the code… The only solution is to rearrange the sub-words within the containers through a set of permutation or copying operations (for example, the MIX operation in the HP PA-RISC MAX-2 extension)."

An example of this reordering problem is warping: computing a[i]=b[i*step+shift]. This is impossible to do in SIMD without a permutation instruction of the kind they mention (PowerPC's AltiVec has vec_perm, and AFAIK x86's SSE has nothing so you can't warp very efficiently). However, even if an instruction is available, compilers are AFAIK unable to exploit it. I see no reason why sufficiently strong AI couldn't manage to do such things with few hints in some interesting cases. I wouldn't bet my money on it – I side with Mitch Kapor on the Turing Test bet, but it is conceivable like the invincible chess playing program, and unlike transformations requiring "small" changes of the semantics.
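
For concreteness, here's the warp in scalar C – it's the data movement below that a vectorizer would have to recognize and map onto a permute instruction (the function name is made up):

void warp(unsigned char* a, const unsigned char* b, int step, int shift, int n)
{
    /* each output element reads from a strided location in b, so a SIMD
       version must gather, or load vectors and permute their sub-words */
    for (int i = 0; i < n; ++i)
        a[i] = b[i * step + shift];
}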

Significance

There are areas of optimization that are very significant commercially but hardly interesting in a theoretical discussion (and this here's a distinctively theoretical discussion as is any discussion where the possibility of strong AI is supposed to be taken into account).

For example, register allocation for the x86 is exceedingly gnarly and perhaps an interesting argument could be made to defend the need for human intervention in this process in extreme cases (I wouldn't know since I never seriously optimized for the x86). However, a general claim that register allocation makes compiler optimization hard wouldn't follow from such an argument: on a machine with plentiful and reasonably uniform registers, it's hard to imagine what a human can do that a compiler can't do better, and almost everybody would agree that the single reason for not making hardware that way is a commercial one – to make an x86-compatible processor.

Now, I believe that both SIMD and VLIW instruction encodings don't have this accidental nature, and more likely are part of the Right Way of designing high-performance processors (assuming that it makes no sense to move cost from software to hardware and call that a "performance gain", that is, assuming that performance is measured per square millimeter of silicon). One argument of rigor worthy of a pop tech blog is that most high-end processors have converged to SIMD VLIW: they have instructions processing short vectors and they can issue multiple instructions in parallel; some do the latter in the "superscalar" way of having the hardware analyze dependencies between instructions at run time and others do it in the "actual VLIW" way of having the lack of dependencies proven and marked by the compiler, but you end up doing modulo scheduling anyway.

However, this can of course indicate uninformed consumer preference rather than actual utility (I type this on a noisy Core 2 Duo box running Firefox on top of XP, a job better handled by a cheaper, silent single-core – and I'm definitely a consumer who should have known better). So my main reasons for believing VLIW and SIMD are "right" are abstract considerations on building von Neumann machines:

  • You typically have lots of distinct execution hardware: a multiplier has little in common with a load/store unit. Up to a point, it will therefore make sense to support parallel execution of instructions on the different execution hardware. The cost of supporting it will be more I/O ports connecting the execution units with the register file – quite serious because of the multiplexers selecting the registers to read/write. However, the cost of not supporting it will be more execution hardware left unused for more time. So the optimum is unlikely to be "no parallel execution", it's likely "judicious parallel execution".
  • It is cheaper to have few wide registers and wide buses between the register file and the execution units than it is to have many narrow registers and buses. That's because the cost of the register file is proportional to the product of #registers and #buses to the execution units. It is thus significantly cheaper to have 1 unit with 4 8bx8b multipliers and 2 32b buses for the inputs than it is to have 4 units with 1 8bx8b multiplier in each and 8 8b buses for the inputs. It's also cheaper to keep 4 bytes in 1 32b register than in 4 8b registers. Likewise, it is cheaper to have 4 multipliers in 1 processor than to have 4 full-blown processor cores, because each core would have, say, its own fetch and decode logic and instruction cache – which are in fact pure overhead. So if you have a von Neumann machine with registers and buses and instruction cache, it makes sense (up to a point) to add SIMD to make the best of that investment, and this is why commercial VLIWs have SIMD, although the VLIW theory recommends more units instead.

Since I believe that both VLIW and SIMD are essential for maximizing hardware performance, I also tend to think that optimizations needed to utilize these features are "mainstream" enough to support a broad claim about optimization in general. And the main point of my claim is that compilers can't win in the optimization game, because part of that game is the ability to change the rules once you start losing.

Humans faced with a program they fail to optimize change the program, sometimes a little, sometimes a lot – I heard of 5×5 filters made 4×4 to run on a DSP. But even if we exclude the truly shameless cheating of the latter kind, the gentler cheating going into every serious optimization effort still requires negotiating and taking responsibility in a way that a person – human or artificial – can, and a tool like a compiler cannot.

Modulo scheduling is an example of the kind of optimization which in fact is best left to a compiler – the kind where the rules are fixed: once the code is spelled, little can be added through further annotations by the author, and hence the game can be won without much negotiation with the author; although sometimes a little interrogation can't hurt.

The C++ Sucks Series: the quest for the entry point

Suppose you run on the x86 and you don't like its default FPU settings. For example, you want your programs to dump core when they divide by zero or compute a NaN, having noticed that on average, these events aren't artifacts of clever numerical algorithm design, but rather indications that somebody has been using uninitialized memory. It's not necessarily a good idea for production code, but for debugging, you can tweak the x86 FPU thusly:

//this is a Linux header using GNU inline asm
#include <fpu_control.h>
void fpu_setup() {
unsigned short cw;
_FPU_GETCW(cw);
cw &= ~_FPU_MASK_ZM;//Divide by zero
cw &= ~_FPU_MASK_IM;//Invalid operation
_FPU_SETCW(cw);
}

So you call this function somewhere during your program's initialization sequence, and sure enough, computations producing NaN after the call to fpu_setup result in core dumps. Then one day someone computes a NaN before the call to fpu_setup, and you get a core dump the first time you try to use the FPU after that point. Because that's how x86 maintains its "illegal operation" flags and that's how it uses them to signal exceptions.

The call stack you got is pretty worthless as you're after the context that computed the NaN, not the context that got the exception because it happened to be the first one to use the FPU after the call to fpu_setup. So you move the call to fpu_setup to the beginning of main(), but help it does not. That's because the offending computation happens before main, somewhere in the global object construction sequence. The order of execution of the global object constructors is undefined by the C++ standard. So if you kindly excuse my phrasing – where should we shove the call to fpu_setup?

If you have enough confidence in your understanding of the things going on (as opposed to entering hair-pulling mode), what you start looking for is the REAL entry point. C++ is free to suck and execute parts of your program in "undefined" (random) order, but a computer still executes instructions in a defined order, and whatever that order is, some instructions ought to come first. Since main() isn't the real entry point in the sense that stuff happens before main, there ought to be another function which does come first.

One thing that could work is to add a global object to each C++ translation unit, and have its constructor call fpu_setup(); one of those calls ought to come before the offending global constructor – assuming that global objects defined in the same translation unit will be constructed one after another (AFAIK in practice they will, although in theory the implementation could, for example, order the constructor calls by the object name, so they wouldn't). However, this can get gnarly for systems with non-trivial build process and/or decomposition into shared libraries. Another problem is that compilers will "optimize away" (throw away together with the side effects, actually) calls to constructors of global objects which aren't "used" (mentioned by name). You can work around that by generating code "using" all the dummy objects from all the translation units and calling that "using" code from, say, main. Good luck with that.

The way I find much easier is to not try to solve this "portably" by working against the semantics prescribed by the C++ standard, but instead rely on the actual implementation, which usually has a defined entry point, and a bunch of functions known to be called by the entry point before main. For example, the GNU libc uses a function called __libc_start_main, which is eventually called by the code at _start (the "true" entry point containing the first executed instruction, AFAIK; I suck at GNU/Linux and only know what was enough to get by until now.) In general, running `objdump -T <program> | grep start` (which looks for symbols from shared libraries – "nm <program>" will miss those) is likely to turn up some interesting function. In these situations, some people prefer to find out from the documentation, others prefer to crawl under a table and die of depression; the grepping individuals of my sort are somewhere in between.

Now, instead of building (correctly configure-ing and make-ing) our own version of libc with __libc_start_main calling the dreaded fpu_setup, we can use $LD_PRELOAD – an env var telling the loader to load our library first. If we trick the loader into loading a shared library containing the symbol __libc_start_main, it will override libc's function with the same name. (I'm not very good at dynamic loading, but the sad fact is that it's totally broken, under both Windows and Unix, in the simple sense that where a static linker would give you a function redefinition error, the dynamic loader will pick a random function of the two sharing a name, or it will call one of them from some contexts and the other one from other contexts, etc. But if you ever played with dynamic loading, you already know that, so enough with that.)

Here's a __libc_start_main function calling fpu_setup and then the actual libc's __libc_start_main:

#include <dlfcn.h>

/* fpu_setup() as defined above must be in this file too; the argument list
   simply mirrors what grepping/googling turns up for __libc_start_main,
   and we forward it untouched */
typedef int (*fcn)(int (*main)(int, char**, char**), int argc, char** ubp_av,
                   void (*init)(void), void (*fini)(void),
                   void (*rtld_fini)(void), void* stack_end);

int __libc_start_main(int (*main)(int, char**, char**), int argc, char** ubp_av,
                      void (*init)(void), void (*fini)(void),
                      void (*rtld_fini)(void), void* stack_end)
{
    fpu_setup();
    void* handle = dlopen("/lib/libc.so.6", RTLD_LAZY | RTLD_GLOBAL);
    fcn start = (fcn)dlsym(handle, "__libc_start_main");
    return (*start)(main, argc, ubp_av, init, fini, rtld_fini, stack_end);
}

Pretty, isn't it? Most of the characters are spent on spelling the arguments of this monstrosity – not really interesting since we simply propagate whatever args turned up by grepping/googling for "__libc_start_main" to the "real" libc's __libc_start_main. dlopen and dlsym give us access to that real __libc_start_main, and /lib/libc.so.6 is where my Linux box keeps its libc (I found out using `ldd <program> | grep libc`).

If you save this to a fplib.c file, you can use it thusly:

gcc -shared -fPIC -o fplib.so fplib.c -ldl
env LD_PRELOAD=./fplib.so <program>

And now your program should finally dump core at the point in the global construction sequence where NaN is computed.

This approach has the nice side-effect of enabling you to "instrument" unsuspecting programs without recompiling them s.t. they run with a reconfigured FPU (to have them crash if they compute NaNs, unless of course they explicitly configure the FPU themselves instead of relying on what they get from the system.) But there are niftier applications of dynamic preloading, such as valgrind on Linux and .NET on Windows (BTW, I don't know how to trick Windows into preloading, just that you can.) What I wanted to illustrate wasn't how great preloading is, but the extent to which C++, the language forcing you to sink that low just to execute something at the beginning of your program, SUCKS.

Barf.

Corrections - thanks to the respective commenters for these:

1. Section 3.6.2/1 of the ISO C++ standard states that “dynamically initialized [objects] shall be initialized in the order in which their definition appears in the translation unit”. So at least you have that out of your way if you want to deal with the problem at the source code level.

2. Instead of hard-coding the path to libc.so, you can pass RTLD_NEXT to dlsym.

Consistency: how to defeat the purpose of IEEE floating point

I don't know much about the design of IEEE floating point, except for the fact that a lot of knowledge and what they call "intellectual effort" went into it. I don't even know the requirements, and I suspect those were pretty detailed and complex (for example, the benefits of having a separate representation for +0 and -0 seem hard to grasp unless you know about the very specific and hairy examples in the complex plane). So I don't trust my own summary of the requirements very much. That said, here's the summary: the basic purpose of IEEE floating point is to give you results of the highest practically possible precision at each step of your computation.

I'm not going to claim this requirement is misguided, because I don't feel like arguing with people two orders of magnitude more competent than myself who have likely faced much tougher numerical problems than I've ever seen. What I will claim is that differences in numerical needs divide programmers into roughly three camps, and the highest-possible-precision approach hurts one of them really badly, and so has to be worked around in ways we'll discuss. The camps are:

  1. The huge camp of people who do businessy accounting. Those should work with integral types to get complete, deterministic, portable control over rounding and all that. Many of the clueless people in this camp represent 1 dollar and 10 cents as the floating point number 1.1. While they are likely a major driving force behind economic growth, I still think they deserve all the trouble they bring upon themselves.
  2. The tiny camp doing high-end scientific computing. Those are the people who can really appreciate the design of IEEE floating point and use its full power. It's great that humanity accidentally satisfied the needs of this small but really cool group, making great floating point hardware available everywhere through blind market forces. It's like having a built-in Stradivari in each home appliance. Yes, perhaps I exaggerate; I get that a lot.
  3. The sizable camp that deals with low-end to mid-range semi-scientific computing. You know, programs that have some geometry or physics or algebra in them. 99.99% of the code snippets in that realm work great with 64b floating point, without the author having invested any thought at all into "numerical analysis". 99% of the code works with 32b floats. When someone stumbles upon a piece of code in the 1% and actually witnesses fatal precision loss, everybody gathers to have a look as if they heard about a beautiful rainbow or smoke suggesting a forest fire near the horizon.

The majority of people who use and actually should be using floating point are thus in camp 3. Those people don't care about precision anywhere near as much as camp 2 does, nor do they know how to make the best of IEEE floating point in the very unlikely circumstances where their naive approach will actually fail to work. What they do care about though is consistency. It's important that things compute the same on all platforms. Perhaps more importantly for most, they should compute the same under different build settings, most notably debug and release mode, because otherwise you can't reproduce problems.

Side note: I don't believe in build modes; I usually debug production code in release mode. It's not just floating point that's inconsistent across modes – it's code snippets with behavior undefined by the language, buggy dependence on timing, optimizer bugs, conditional compilation, etc. Many other cans of worms. But the fact is that most people have trouble debugging optimized code, and nobody likes it, so it's nice to have the option to debug in debug mode, and to do that, you need things to reproduce there.

Also, comparing results of different build modes is one way to find worms from those other cans, like undefined behavior and optimizer bugs. Also, many changes you make are optimizations or refactorings and you can check their sanity by making sure they didn't change the results of the previous version. As we'll see, IEEE FP won't give you even that, regardless of platforms and build modes. The bottom line is that if you're in camp 3, you want consistency, while the "only" things you can expect from IEEE FP are precision and speed. Sure, "only" should be put in quotes because it's a lot to get, it's just a real pity that fulfilling the smaller and more popular wish for consistency is somewhere between hard and impossible.

Some numerical analysts seem annoyed by the camp 3 whiners. To them I say: look, if IEEE FP wasn't the huge success that it is in the precision and speed departments, you wouldn't be hearing from us because we'd be busy struggling with those problems. What we're saying is the exact opposite of "IEEE FP sucks". It's "IEEE FP is so damn precise and fast that I'm happy with ALL of its many answers – the one in optimized x86 build, the one in debug PowerPC build, the one before I added a couple of local variables to that function and the one I got after that change. I just wish I consistently got ONE of these answers, any of them, but the same one." I think it's more flattering than insulting.

I've accumulated quite some experience in defeating the purpose of IEEE floating point and getting consistency at the (tiny, IMO) cost of precision and speed. I want to share this knowledge with humanity, with the hope of getting rewarded in the comments. The reward I'm after is a refutation of my current theory that you can only eliminate 95%-99% of the pain automatically and have to solve the rest manually each time it raises its ugly head.

The pain breakdown

I know three main sources of floating point inconsistency pain:

  1. Algebraic compiler optimizations
  2. "Complex" instructions like multiply-accumulate or sine
  3. x86-specific pain not available on any other platform; not that ~100% of non-embedded devices is a small market share for a pain.

The good news is that most pain comes from item 3 which can be more or less solved automatically. For the purpose of decision making ("should we invest energy into FP consistency or is it futile?"), I'd say that it's not futile, and if you can cite actual benefits you'd get from consistency, then it's worth the (continuous) effort.

Disclaimer: I only discuss problems I know and to the extent of my understanding. I have no solid evidence that this understanding is complete. Perhaps the pain breakdown list should have item 4, and perhaps items 1 to 3 have more meat than I think. And while I tried to get the legal stuff right – which behavior conforms to IEEE 754, which conforms to C99, and which conforms to nothing but is still out there – I'm generally a weak tech lawyer and can be wrong. I can only give you the "worked on my 4 families of machines" kind of warranty.

Algebraic compiler optimizations

Compilers, or more specifically buggy optimization passes, assume that floating point numbers can be treated as a field – you know, associativity, distributivity, the works. This means that a+b+c can be computed in either the order implied by (a+b)+c or the one implied by a+(b+c). Adding actual parentheses in source code doesn't help you one bit. The compiler assumes associativity and may implement the computation in the order implied by regrouping your parentheses. Since each floating point operation loses precision on some of the possible inputs, the code generated by different optimizers or under different optimization settings may produce different results.
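
If it isn't obvious why regrouping changes anything, here's a minimal demonstration (plain C, nothing up my sleeve):

#include <stdio.h>

int main(void)
{
    double a = 1e20, b = -1e20, c = 1.0;
    printf("%g\n", (a + b) + c); /* 1: a and b cancel exactly, then add 1 */
    printf("%g\n", a + (b + c)); /* 0: the 1 is lost when rounding b+c    */
    return 0;
}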

This could be extremely intimidating because you can't trust any FP expression with more than 2 inputs. However, I think that programming languages in general don't allow optimizers to make these assumptions, and in particular, the C standard doesn't (C99 §5.1.2.3 #13, didn't read it in the document but saw it cited in two sources). So this sort of optimization is actually a bug that will likely be fixed if reported, and at any given time, the number of these bugs in a particular compiler should be small.

I only recall one recurring algebraic optimization problem. Specifically, a*(b+1) tended to be compiled to a*b+a in release mode. The reason is that floating-point literal values like 1 are expensive; 1 becomes a hairy hexadecimal value that has to be loaded from a constant address at run time. So the optimizer was probably happy to optimize a literal away. I was always able to solve this problem by changing the spelling in the source code to a*b+a – the optimizer simply had less work to do, while the debug build saw no reason to make me miserable by undoing my regrouping back into a*(b+1).

This implies a general way of solving this sort of problem: find what the optimizer does by looking at the generated assembly, and do it yourself in the source code. This almost certainly guarantees that debug and release will work the same. With different compilers and platforms, the guarantee is less certain. The second optimizer may think that the optimization you copied from the first optimizer into your source code is brain-dead, and undo it and do a different optimization. However, that means you target two radically different optimizers, both of which are buggy and can't be fixed in the near future; how unlucky can you get?

The bottom line is that you rarely have to deal with this problem, and when it can't be solved with a bug report, you can look at the assembly and do the optimization in the source code yourself. If that fails because you have to use two very different and buggy compilers, use the shotgun described in the next item.

"Complex" instructions

Your target hardware can have instructions computing "non-trivial" expressions beyond a*b or a+b, such as a+=b*c or sin(x). The precision of the intermediate result b*c in a+=b*c may be higher than the size of an FP register would allow, had that result been actually stored in a register. IEEE and the C standard think it's great, because the single instruction generated from a+=b*c is both faster and more precise than the 2 instructions implementing it as d=b*c, a=a+d. Camp 3 people like myself don't think it's so great, because it happens in some build modes but not others, and across platforms the availability of these instruction varies, as does their precision.
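
Here's a small demonstration of what contraction does to results – fma() from math.h computes b*c+a with a single rounding, the way a multiply-accumulate instruction would (compile with -lm; the specific constants are only chosen to make the lost bit visible):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0, b = 1.0 + 0x1p-27, c = 1.0 - 0x1p-27;
    /* b*c is exactly 1 - 2^-54; rounding it to a double loses that tail */
    double two_step   = a - b * c;     /* typically 0 - unless your compiler
                                          contracts this into an FMA, or the
                                          x87's 80b intermediates kick in
                                          (see the x86 section below)       */
    double contracted = fma(-b, c, a); /* 2^-54: one rounding at the end    */
    printf("%g %g\n", two_step, contracted);
    return 0;
}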

AFAIK the "contraction" of a+=b*c is permitted by both the IEEE FP standard (which defines FP + and *) and the C standard (which defines FP types that can map to standards other than IEEE). On the other hand, sin(x), which also gets implemented in hardware these days, isn't addressed by either standard – to the same effect of making the optimization perfectly legitimate. So you can't solve this by reporting a bug the way you could with algebraic optimizations. The other way in which this is tougher is that tweaking the code according to the optimizer's wishes doesn't help much. AFAIK what can help is one of these two things:

  1. Ask the compiler to not generate these instructions. Sometimes there's an exact compiler option for that, like gcc's platform-specific -mno-fused-madd flag, or there's (a defined and actually implemented) pragma directive such as #pragma STDC FP_CONTRACT. Sometimes you don't have any of that, but you can lie to the compiler that you're using an older (compatible) revision of the processor architecture without the "complex" instructions. The latter is an all-or-nothing thing enabling/disabling lots of stuff, so it can degrade performance in many different ways; you have to check the cost.
  2. If compiler flags can't help, there's the shotgun approach I promised to discuss above, also useful for hypothetical cases of hard-to-work-around algebraic optimizations. Instead of helping the optimizer, we get in its way and make optimization impossible using separate compilation. For example, we can convert a+=b*c to a+=multiply_dammit(b,c); multiply_dammit is defined in a separate file (sketched right after this list). This makes it impossible for most optimizers to see the expression a+=b*c, and forces them to implement multiplication and addition separately. Modern compilers support link-time inlining and then they do optimize the result as a whole. But you can disable that option, and as a side effect speed up linkage a great deal; if that seriously hurts performance, your program is insane and you're a team of scary ravioli coders.
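
The shotgun, spelled out (the accumulate() wrapper is made up; the split into two files is the whole point):

/* multiply_dammit.c - a separate translation unit; with link-time inlining
   disabled, no optimizer sees a+=b*c as a whole, so it can't contract the
   multiply and the add into a single instruction */
double multiply_dammit(double b, double c) { return b * c; }

/* in the file that used to say a += b*c: */
double multiply_dammit(double b, double c);
double accumulate(double a, double b, double c) { return a + multiply_dammit(b, c); }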

The trouble with the shotgun approach, aside from its ugliness, is that you can't afford to shoot at the performance-critical parts of your code that way. Let us hope that you'll never really have to choose between FP consistency and performance, as I've never had to date.

x86

Intel is the birthplace of IEEE floating point, and the manufacturer of the most camp-3-painful and otherwise convoluted FP hardware. The pain comes, somewhat understandably, from a unique commitment to the IEEE FP philosophy – intermediate results should be as precise as possible; more on that in a moment. The "convoluted" part is consistent with the general insanity of the x86 instruction set. Specifically, the "old" (a.k.a "x87") floating point unit uses a stack architecture for addressing FP operands, which is pretty much the exact opposite of the compiler writer's dream target, but so is the rest of x86. The "new" floating point instructions in SSE don't have these problems, at the cost of creating the aesthetic/psychiatric problem of actually having two FP ISAs in the same processor.

Now, in our context we don't care about the FP stack thingie and all that, the only thing that matters is the consistency of precision. The "old" FP unit handles precision thusly. Precision of stuff written to memory is according to the number of bits of the variable, 'cause what else can it be. Precision of intermediate results in the "registers" (or the "FP stack" or whatever you call it) is defined according to the FPU control & status register, globally for all intermediate results in your program.

By default, it's 80 bits. This means that when you compute a*b+c*d and a,b,c,d are 32b floats, a*b and c*d are computed in 80b precision, and then their 80b sum is converted to a 32b result in memory (if a*b+c*d is indeed written to memory and isn't itself an "intermediate" result). Indeed, what's "intermediate" in the sense of not being written to memory and what isn't? That depends on:

  1. Debug/release build. If we have "float e=a*b+c*d", and e is only used once right in the next line, the optimizer probably won't see a point in writing it to memory. However, in a debug build there's a good reason to allocate it in memory, because if you single-step your program and you're already past the line that used e, you still might want to look at the value of e, so it's good that the compiler kept a copy of it for the debugger to find.
  2. The code "near" e=a*b+c*d according to the compiler's notion of proximity also affects its decisions. There are only so many registers, and sometimes you run out of them and have to store things in memory. This means that if you add or remove code near the line or in inline functions called near the line, the allocation of intermediate results may change.

Compilers could have an option asking them to hide this mess and give us consistent results. The problems with this are that (1) if you care about cross-platform/compiler consistency, then the availability of cross-mode consistency options in one compiler doesn't help with the other compiler and (2) for some reason, compilers apparently don't offer this option in a very good way. For example, MS C++ used to have a /fltconsistency switch but seems to have abandoned it in favor of an insane special-casing of the syntax float(a*b)+float(c*d) – that spelling forces consistency (although the C++ standard doesn't assign it a special meaning not included in the plain and sane a*b+c*d).

I'd guess they changed it because of the speed penalty it implies rather than the precision penalty as they say. I haven't heard about someone caring both about consistency and that level of precision, but I did hear that gcc's consistency-forcing -ffloat-store flag caused notable slowdowns. And the reason it did is implied by its name – AFAIK the only way to implement x86 FP consistency at compile time is to generate code storing FP values to memory to get rid of the extra precision bits. And -ffloat-store only affects named variables, not unnamed intermediate results (annoying, isn't it?), so /fltconsistency, assuming it actually gave you consistency of all results, should have been much slower. Anyway, the bottom line seems to be that you can't get much help from compilers here; deal with it yourself. Even Java gave up on its initial intent of getting consistent results on the x87 FPU and retreated to a cowardly strictfp scheme.

And the thing is, you never have to deal with it outside of x86 – all floating point units I've come across, including the ones specified by Intel's SSE and SSE2, simply compute 32b results from 32b inputs. People who decided to do it that way and rob us of quite some bits of precision have my deepest gratitude, because there's absolutely no good way to work around the generosity of the original x86 FPU designers and get consistent results. Here's what you can do:

  1. Leave the FP CSR configured to 80b precision. 32b and 64b intermediate results aren't really 32b and 64b. The fact that it's the default means that if you care about FP result consistency, intensive hair pulling during your first debugging sessions is an almost inevitable rite of passage.
  2. Set the FP CSR to 64b precision. If you only use 64b variables, debug==release and you're all set. If you have 32b floats though, then intermediate 32b results aren't really 32b. And usually you do have 32b floats.
  3. Set the FP CSR to 32b precision. debug==release, but you're far from "all set" because now your 64b results, intermediate or otherwise, are really 32b. Not only is this a stupid waste of bits, it is not unlikely to fail, too, because 32b isn't sufficient even for some of the problems encountered by camp 3. And of course it's not compatible with other platforms.
  4. Set the FP CSR to 64b precision during most of the program run, and temporarily set it to 32b in the parts of the program using 32b floats. I wouldn't go down that path; option 4 isn't really an option. I doubt that you use both 32b and 64b variables in a very thoughtful way and manage to have a clear boundary between them. If you depend on the ability of everyone to correctly maintain the CSR, then it sucks sucks sucks.

Side note: I sure as hell don't believe in "very special" "testing" build/running modes. For example, you could say that you have a special mode where you use option (3) and get 32b results, and use that mode to test debug==release or something. I think it's completely self-defeating, because the point of consistency is being able to reproduce a phenomenon X that happens in a mode which is actually important, in another mode where reproducing X is actually useful. Therefore, who needs consistency across inherently useless modes? We'd be defeating the purpose of defeating the purpose of IEEE floating point.

Therefore, if you don't have SSE, the only option is (2) – set the FP CSR to 64b and try to avoid 32b floats. On Linux, you can do it with:

#include <fpu_control.h>
fpu_control_t cw;
_FPU_GETCW(cw);
cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
_FPU_SETCW(cw);

Do it first thing in main(). If you use C++, you should do it first thing before main(), because people can use FP in constructors of global variables. This can be achieved by figuring out the compiler-specific translation unit initialization order, compiling your own C/C++ start-up library, overriding the entry point of a compiled start-up library using stuff like LD_PRELOAD, overwriting it in a statically linked program right there in the binary image, having a coding convention forcing to call FloatingPointSingleton::instance() before using FP, or shooting the people who like to do things before main(). It's a trade-off.

The situation is really even worse because the FPU CSR setting only affects mantissa precision but not the exponent range, so you never work with "real" 64b or 32b floats there. This matters in cases of huge numbers (overflow) and tiny numbers (double rounding of subnormals). But it's bad enough already, and camp 3 people don't really care about the extra horror; if you want those Halloween stories, you can find them here. The good news is that today, you are quite likely to have SSE2 and very likely to have SSE on your machine. So you can automatically sanitize all the mess as follows:

  1. If you have SSE2, use it and live happily ever after. SSE2 supports both 32b and 64b operations and the intermediate results are of the size of the operands. BTW, mixed expressions like a+b where a is float and b is double don't create consistency problems on any platform because the C standard specifies the rules for promotion precisely and portably (a will be promoted to double). The gcc way of using SSE2 for FP is -mfpmath=sse -msse2.
  2. If you only have SSE, use it for 32b floats which it does support (gcc: -mfpmath=sse -msse). 64b floats will go to the old FP unit, so you'll have to configure it to 64b intermediate results. Everything will work, the only annoying things being (1) the retained necessity to shoot the people having fun before main and (2) the slight differences in the semantics of control flags in the old FP and the SSE FP CSR, so if you configure your own policy, floats and doubles will not behave exactly the same. Neither problem is a very big deal.

Interestingly, SSE with its support for SIMD FP commands actually can make things worse in the standard-violating-algebraic-optimizations department. Specifically, Intel's compiler reportedly has (had?) an optimization which unrolls FP accumulation loops and reorders additions in order to utilize SIMD FP commands (gcc 4 does that, too, but only if you explicitly ask for trouble with -funsafe-math-optimizations or similar). But I wouldn't conclude anything from it, except that automatic vectorization, which is known to work only on the simplest of code snippets, actually doesn't work even on them.

Summary: use SSE2 or SSE, and if you can't, configure the FP CSR to use 64b intermediates and avoid 32b floats. Even the latter solution works passably in practice, as long as everybody is aware of it.

I think I covered everything I know except for things like long double, FP exceptions, etc. – and if you need that, you're not in camp 3; go away and hang out with your ivory tower numerical analyst friends. If you know a way to automate away more pain, I'll be grateful for every FP inconsistency debugging afternoon your advice will save me.

Happy Halloween!

I want a struct linker

Here's a problem I've seen a lot (it's probably classified as an "Antipattern" or a "Code Smell" and as such has a name in the appropriate circles, but I wouldn't know, so I'll leave it nameless).

You have some kind of data structure that you pass around a lot. Soon, the most valuable thing about the structure isn't the data it keeps, but the fact that it's available all the way through some hairy flow of control. If you want to have your data flow through all those pipes, just add it to The Data Structure. (To antipattern classification enthusiasts: I don't think we have a god object yet because we really want to pass our data through that flow and it's right to have one kind of structure for that and not, say, propagating N+1 function parameters.)

Now suppose the structure holds an open set of data. For example, a spam filter could have a data structure to which various passes add various cues they extract from the message, and other passes can access those cues. We don't want the structure to know what passes exist and what cues they extract, so that you can add a pass without changing the structure.

I don't think there's a good way to do it in a 3GL. In C or C++, you can:

  • Aggregate the cue structures by value (which means you have to recompile everything once you change/add/remove a member from any of them)
  • Keep pointers to the cue structures and use forward declarations to avoid recompilation (a bit slower, and you still have to recompile when you add/remove a whole cue structure)
  • Keep an array of void* or base class objects (not debugger-friendly, and requires a registration procedure to resize the arrays according to the number of passes and hand out dynamically computed cue indexes to all who wish to access them; see the sketch after this list)
  • Keep a key -> void* map (increasingly slow and debugger-unfriendly, and you need registration to compute the keys from cue names, or use the C substitute for interning – use pointers to global variables with names like &g_my_cue_key as keys)
  • Keep a string -> void* map (no registration or pseudo-interning, but really slow)
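
Here's roughly what the array-of-void*-plus-registration option looks like in C (all names are made up; the point is the amount of ceremony, not the details):

#include <stdlib.h>

typedef struct { void** cues; } SpamCues;

static int g_num_cue_kinds = 0;
int register_cue_kind(void) { return g_num_cue_kinds++; }

void init_cues(SpamCues* msg) { msg->cues = calloc(g_num_cue_kinds, sizeof(void*)); }

/* in the Medication module: */
typedef struct { int viagra_mentions; } ViagraCue;
static int g_viagra_key;

void medication_register(void) { g_viagra_key = register_cue_kind(); } /* at startup */

void medication_pass(SpamCues* msg)
{
    ViagraCue* cue = malloc(sizeof *cue);
    cue->viagra_mentions = 0; /* ...actually extract the cue from the message... */
    msg->cues[g_viagra_key] = cue;
}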

On top of JVM or .NET, you have pretty much the same options, plus the option to generate the cue container structure dynamically. Each cue would define an interface and the container structure would implement all those interfaces. The debugger would display containers nicely, and the code accessing them wouldn't depend on the container class. I'd guess nobody does that though because the class generation part is likely somewhat gnarly.

In a 4GL, you can add attributes to class objects at run time. This is similar to keeping a key->pointer map in a 3GL, except the name interning is handled by the system as it should, and you don't confuse debuggers because you're using a standard feature of the object system. Which solves everything except for the speed issue, which is of course dwarfed by other 4GL speed issues.

Now, I used to think of it as one of the usual speed vs convenience trade-offs, but I no longer think it is, because a struct linker could solve it.

Suppose you could have "distributed" struct/class definitions in an offset-based language; you could write "dstruct SpamCues { ViagraCue viagra; CialisCue cialis; }" in the Medication spam filter module, and "dstruct SpamCues { FallicSymbolsCue fallic; SizeDescriptionsCue size; }" in the Penis Enlargement module. The structure is thus defined by all modules linked into the application.

When someone gets a SpamCues structure and accesses cues.viagra, the compiler generates a load-from-pointer-with-offset instruction – for example, in MIPS assembly it's spelled "lw offset(ptrreg)". However, the offset would be left for the linker to resolve, just the way it's done today for pointers in "move reg, globalobjectlabel" and "jump globalfunclabel".

This way, access to "distributed" structures would be as fast as "normal" structures. And you would preserve most optimizations related to adjacent offsets. For example, if your machine supports multiple loads, so a rectangle structure with 4 int members can be loaded to 4 registers with "ldm rectptrreg,{R0-R3}" or something, it could still be done because the compiler would know that the 4 members are adjacent; the only unknown thing would be the offset of the rectangle inside the larger struct.

One issue the linker could have on some architectures is handling very large offsets that don't fit into the instruction encoding of load-from-pointer-with-offset forms. Well, I'd live even with the dumbest solution where you always waste an instruction to increment a pointer in case the offset is too large. And then you could probably do better than that, similarly to the way "far calls" (calls to functions at addresses too far from the point of call for the offset to fit into 28 bits or whatever the branch offset encoding size is on your machine) are handled today.

The whole thing can fail in presence of dynamic loading during program run as in dlopen/LoadLibrary; if you already have objects of the structure, and your language doesn't support relocation because of using native pointers, then the dynamically loaded module won't be able to add members to a dstruct since it can't update the existing objects. Well, I can live with that limitation.

If the language generates native object files, there's the problem of maintaining compatibility with the object file format. I think this could "almost" be done, by mapping a distributed structure to a custom section .dstruct.SpamCues, and implementing members (viagra, cialis, fallic, size) as global objects in that section. Then if an equivalent of a linker script says that the base address of .dstruct.SpamCues is 0, then &viagra will resolve to the offset of the member inside the structure. The change to automatically map sections named .dstruct.* to 0 surely isn't more complicated than the handling of stuff like .gnu.linkonce, inflicted upon us by the idiocy of C++ templates and the likes of them.
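
Here's a rough, untested sketch of that object-file trick with a GNU toolchain; the section name, the attribute placement and the linker script fragment are guesses at how it might look, not something lifted from a working implementation:

    #include <cstddef>

    struct ViagraCue { int score; };
    struct FallicSymbolsCue { int score; };

    // The "members" are ordinary globals dropped into a dedicated section;
    // they are never used as storage - only their addresses matter.

    // In the Medication module:
    ViagraCue viagra __attribute__((section(".dstruct.SpamCues")));

    // In the Penis Enlargement module:
    FallicSymbolsCue fallic __attribute__((section(".dstruct.SpamCues")));

    // A linker script fragment would pin the section's base address to 0 and
    // mark it NOLOAD (no real storage lives there), something like:
    //
    //   .dstruct.SpamCues 0 (NOLOAD) : { *(.dstruct.SpamCues) }
    //
    // After that, &viagra *is* the member's offset inside the distributed
    // struct, and access goes through the actual per-message buffer:
    ViagraCue* get_viagra(void* cues_base) {
        size_t offset = reinterpret_cast<size_t>(&viagra);
        return reinterpret_cast<ViagraCue*>(static_cast<char*>(cues_base) + offset);
    }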

And here's why I'll probably never see a struct linker:

  • If the language uses a native linker, a small change must be made to that linker so it can patch offsets into the encodings of load/store instructions, which it previously didn't have to do (currently it only has to resolve pointers, not offsets). Since it's platform-specific, the small change is actually quite large.
  • You could compromise and avoid that change by generating less efficient code which uses the already available linker ability to resolve the "address" of the viagra object in the zero-based .dstruct.SpamCues section – the code can add that "address" (offset, really) to &cues. But that could still force changes in the compiler back-end because now it has to generate assembly code adding what looks like 2 addresses, which makes no sense today and might be unsupported if the back-end preserves type information.
  • The previous items assume that the portable "front-end" work to support something like dstruct isn't a big deal. However, I'd guess that not enough people would benefit from it/realize they'd benefit from it to make it appear in a mainstream language and its front-ends.
  • I could roll my own compiler to a language similar to a mainstream one, with a bunch of additions like this struct linker thingie. Two problems with this. One – it's too hard to parse all the crud in a mainstream language (even if it isn't C++) to make it worth the trouble, unless your compiler does something really grand; a bunch of nice features probably aren't worth it. Two – most programmers take a losing approach towards their career where they want to put mainstream languages on their resume so that losers at the other end can scan their resumes for those languages; if your code is spelled in a dialect, you'll scare off the losers forming the backbone of our industry.

It still amazes me how what really matters isn't what can be done, but what's already done. It's easier to change goddamn hardware than it is to change "software infrastructure" like languages, software tools, APIs, protocols and all kinds of that shit. I mean, here we have something that's possible and even easy to do, and yet completely impractical.

Guess I'll have to roll my own yet-another-distributed-reflective-registration bullshit. Oh well.

I love globals, or Google Core Dump

The entire discussion only applies to unsafe languages, the ones that dump core. By which I mean, C. Or C++, if you're really out of luck.

If it can dump core, it will dump core, unless it decides to silently corrupt its data instead. Trust my experience of working in a multi-processor, multi-threaded, multi-programmer, multi-nightmare environment. Some C++ FQA Lite readers claimed that the fact that I deal with lots of crashes in C++ indicates that I'm a crappy programmer surrounded by similar people. Luckily, you don't really need to trust my experience, because you can trust Google's. Do this:

  1. Find a Google office near you.
  2. Visit a Google toilet.
  3. You'll find a page about software testing, with the subtitle "Debugging sucks. Testing rocks." Read it.
  4. Recover from the trauma.
  5. Realize that the chances of you being better at eliminating bugs than Google are low.
  6. Read about the AdWords multi-threaded billing server nightmare.
  7. The server was written in C++. The bug couldn't happen in a safe language. Meditate on it.
  8. Consider yourself enlightened.

This isn't the reason why this post has "Google core dump" in its title, but hopefully it's a reason for us to agree that your C/C++ program will crash, too.

I love globals

What happens when we face a core dump? Well, we need the same things you'd expect to look for in any investigation: names and addresses. Names of objects looking at which may explain what happened, their addresses to actually look at them, and type information to sensibly display them.

In C and C++, we have 3 kinds of addresses: stack, heap and global. Let's see who lives there.

Except the stack is overwritten, because it can be. Don't count on being able to see the function calls leading to the point of crash, nor the parameters and local variables of those functions. In fact, don't even count on being able to see the point of crash itself: the program counter, the link register, the frame pointer, all that stuff can contain garbage.

And the heap is overwritten, too, nearly as badly. The typical data structure used by C/C++ allocators (for example, dlmalloc) is a kind of linked list, where each memory block is prefixed with its size so you can jump to the next one. Overwrite one of these size values and you will have lost the boundaries of the chunks above that address. That's a loss of 50% of the heap objects on average, assuming uniform distribution of memory overwriting bugs across the address space of the heap.
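
To see why one stray write is that destructive, here's a deliberately simplified picture of a size-prefixed heap – not dlmalloc's actual layout, which also keeps flags, bins and footers – and the walk a core dump crawler would do over it:

    #include <cstddef>

    struct ChunkHeader {
        size_t size;   // size of this chunk, header included; user data follows
    };

    // Walk the heap by hopping from one size prefix to the next. If any size
    // field got overwritten, every "next" computed after it - that is, every
    // chunk above the corrupted address - is garbage.
    void walk_heap(char* heap_begin, char* heap_end) {
        char* p = heap_begin;
        while (p < heap_end) {
            ChunkHeader* chunk = reinterpret_cast<ChunkHeader*>(p);
            // ...record the chunk's boundaries for later crawling...
            p += chunk->size;   // one bad size value and we're off into the weeds
        }
    }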

So don't count on the stack or the heap. Your only hope is that someone has ignored the Best Practices and the finger-pointing by the more proficient colleagues, and allocated a global object. Possibly under the clever disguise of a "Singleton". Not a bad thing after all, that moronic "design pattern", because it ultimately made it possible to counter the cargo cult programmers' accusation of "globals are evil" with the equally powerful cargo cult argument of "it's a design pattern". So people could allocate globals again.

Which is good, because a global always has an accurate name-to-address mapping, no matter what atrocity was committed by the bulk of unsafe code running on the loose. Can't overwrite a symbol table. And it has accurate type information, too. As opposed to objects you find through void*, or a base class pointer where the base class lacks virtual functions or the object vptr was overwritten, etc.

Which is why I frequently start debugging by firing up an object view window on a global, or running debugger macros which read globals, etc. Of course you can fuck up a global variable to make debugging unpleasant. For example, if the variable is "static" in the C sense, you need to open the right file or function to display it, and you need the debugger front-end to understand the context, which will be especially challenging if it's a static variable in a template function (one of the best things in C++ is how neatly its new features interact with C's old ones).

Or you can stuff the global into a class or a namespace. I was never able to display globals by their qualified C++ name in, say, gdb 5. But no matter; nm <program> | grep <global> followed by p *(TypeOfGlobal*)addr always does the trick, and no attempts at obfuscating the symbol table will stop it. I still say make it a real, unashamed global to make debugging easier. If you're lucky, you'll get to piss off a couple of cargo cult followers as a nice side-effect.

Google Core Dump

A core dump is a web. Its sites are objects. Its hyperlinks are pointers. Its PageRank is a TypeRank: what's the type of this object according to the votes of the pointers stored in other objects? The spamdexing is done by pointer-like bit patterns stored in unused memory slots. The global variables are the major sites with high availability that you can use as roots for the crawling.

What utilities would we like to have for this web? The usual stuff.

  • Browsers. The debugger's object view window is the Firefox, and the memory view window is the Lynx. The core dump Lynx usually sucks in that it doesn't make it easy to follow pointers – you can't click on a word and have the browser follow the pointer (by jumping to the memory it points to). No back button, either. Oh well.
  • DNS. The ability to translate variable names to raw addresses. Works very reliably for globals and passably otherwise. Works reliably for all objects in safe languages.
  • Reverse DNS. Given an address, tell me the object name. Problematic for dynamically allocated objects, although you could list the names of pointer variables leading to it (Google bombing). Works reliably for global functions and variables. For some reason, the standard addr2line program only supports functions though. Which is why I have an addr2sym program. It so happened that I have several of them, in fact. You can download one here. "Reverse DNS" is particularly useful when you find pointers somewhere in registers or memory and wonder what they could point to. In safe languages, you don't have that problem because everything is typed and so you can simply display the pointed object.
  • Google Core Dump, similar to Google Desktop or Google for the WWW. Crawl a core dump, figure out the object boundaries and types by parsing the heap linked list and the stack and looking at pointers' "votes", create an index, and allow me to query that index. Lots of work, that, some of it heuristic (a toy sketch of the voting part follows this list). And in order to get type information in C or C++, you'll have to either parse the source code (good luck doing it with C++), or parse the non-portable debug information format. But it's doable; in fact, we have it, for our particular target/debugger/allocator combo. Of course it has its glitches. Quirky and obscure enough to make open sourcing it not worth the trouble.
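
Here's what I mean by a toy sketch of the voting: assume a previous pass has already identified pointer-sized slots in the dump and guessed, from debug information, which type each slot claims to point to. Everything here – the names, the structures – is made up for illustration; the real thing is target/debugger/allocator specific and far hairier.

    #include <cstddef>
    #include <stdint.h>
    #include <map>
    #include <string>

    typedef uint32_t addr_t;   // 32-bit target, as elsewhere in this post

    // One identified pointer slot: where it lives, what type the enclosing
    // object's debug info says it points to, and the value stored in it.
    struct PointerSlot {
        addr_t      slot_address;
        std::string pointee_type;   // e.g. "SpamCues"
        addr_t      stored_value;
    };

    // candidate object address -> (type name -> number of votes)
    typedef std::map<addr_t, std::map<std::string, int> > VoteTable;

    // Tally the votes; the winner for each address is the TypeRank guess.
    // Spamdexing - pointer-like garbage in unused slots - shows up as
    // scattered low-count votes, which is why we count votes instead of
    // trusting any single slot.
    void tally_votes(const PointerSlot* slots, size_t nslots, VoteTable& votes) {
        for (size_t i = 0; i < nslots; ++i)
            ++votes[slots[i].stored_value][slots[i].pointee_type];
    }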

I really wish there was a reasonably portable and reliable Google Core Dump kind of thing. But it doesn't look like that many people care about debugging crashes at all. Most core dumps at customer sites seem to go to /dev/null, and those that can't be easily deciphered are apparently given up on until the bug manifests itself in some other way or its cause is guessed by someone.

Am I coming from a particularly weird niche where the code size is large enough and the development rapid enough to make crashes almost unavoidable, but crashes in the final product version are almost intolerable? Or do most good projects allocate everything on the stack and the heap, so with those smashed they're doomed no matter what? Or is the problem simply stinky enough to make it unattractive for a hobby project while lacking revenue potential to make a good commercial project?

Would you like this sort of thing? If you would, drop me a line. In the meanwhile, I satisfy my wish for a Google Core Dump with my perfect implementation for an embedded co-processor, the one I've poked at with Tcl commands. With 128K of memory, no dynamic allocation, and local variables effectively implemented as globals, perfect decoding is easy. I'm telling ya, globals rule.

As to my "reverse DNS" implementation:

  • I could make it more portable by parsing the output of nm --print-size. But just running nm on a 20M symbol table takes about 2 seconds. I want instantaneous output, 'cause I'm very impatient when I debug.
  • Alternatively, I could make it more portable by using a library such as bfd. But that would drag in a library such as bfd, and I had trouble with what looked like library/compiler version mismatches with bfd, whereas my ELF parsing code never had any trouble. Also, an implementation parsing ELF is more interesting as sample code, because you get to see how easy these formats are to parse. So it's elfaddr2sym, not addr2sym. (It's really 32-bit-ELF-with-host-endianness-addr2sym, because I'm lazy and it covers all my targets.)
  • There's a ton of addr2sym code out there, and maybe a good addr2sym program. I just didn't find it. I have an acknowledged weakness in the wheel reinventing department.
  • Of course I don't demangle the ugly C++ names; piping to c++filt does.
  • The program is in D, because of the "instantaneous" bit, and because D is one of the best choices available today if you care about both speed and brevity. Look at this: lowerBound!("a.st_value <= b")(ssyms, addr) does a binary search for addr in the sorted ssyms array. As brief as it gets out of the box with any language and standard library, isn't it? The string is compiled statically into the instantiation of the lowerBound template; a & b are the arguments of the anonymous function represented by the string. Readable. Short. Fast. Easy to use – garbage-collected array outputs in functions like filter(), error messages to the point – that's why a decent grammar is a good thing even if you aren't the compiler writer. Looks a lot like C++, braces, static typing, everything. Thus easy to pimp in a 3GL environment, in particular, a C++ environment. You can download the Digital Mars D compiler for Linux, or wait for C++0x to solve 15% of the problems with <algorithm> by introducing worse problems. (For contrast, a rough C++ rendering of the same lookup follows this list.)
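
For contrast, here's roughly what the same "reverse DNS" lookup looks like in the C++ of the time – a sketch with a made-up Sym record standing in for the real ELF symbol structs, and std::upper_bound plus a hand-written comparator standing in for D's one-string lowerBound:

    #include <algorithm>
    #include <vector>
    #include <stdint.h>

    struct Sym {
        uint32_t    st_value;   // symbol address
        uint32_t    st_size;
        const char* name;
    };

    // The comparator has to be a named type spelled out away from the call
    // site - this is the boilerplate that D's "a.st_value <= b" string avoids.
    struct AddrBeforeSym {
        bool operator()(uint32_t addr, const Sym& s) const {
            return addr < s.st_value;
        }
    };

    // Find the last symbol whose address is <= addr in a sorted symbol array.
    const Sym* addr2sym(const std::vector<Sym>& ssyms, uint32_t addr) {
        std::vector<Sym>::const_iterator it =
            std::upper_bound(ssyms.begin(), ssyms.end(), addr, AddrBeforeSym());
        if (it == ssyms.begin())
            return 0;            // addr is below the first symbol
        return &*(it - 1);       // the symbol containing (or preceding) addr
    }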

By the way, the std.algorithm module, the one with the sort, filter, lowerBound and similar functions, is by Andrei Alexandrescu, of Modern C++ Design fame. How is it possible that his stuff in D is so yummy while his implementation of similar things in C++ is equally icky? Because C++ is to D what proper fixation is to anaesthesia. There, I bet you saw it coming.

What does "global" mean?

For the sake of completeness, I'd like to bore you with a discussion of the various aspects of globalhood, in the vanishingly small hope of this being useful in a battle against a cargo cult follower authoring a coding convention or such. In C++, "global" can mean at least 6 things:

  • Number of instances per process. A "global" is everything that's instantiated once.
  • Life cycle. A "global" is constructed before main and destroyed after main. A static variable inside a function is not "global" in this sense.
  • "Scope" in the "namespace" sense (as opposed to the life cycle sense). We have C-style file scope, class scope, function scope, and "the true global scope". And we have namespaces.
  • Storage. A "global" is assigned a link time address and stored there. In a singleton implementation calling new and assigning its output to a global pointer, the pointer is "global" in this sense but the object is not.
  • Access control. If it's in a class scope, it may be private or protected, which makes it less of a global in this fifth sense.
  • Responsibility. A global can be accessible from everywhere but only actually accessed from a couple of places. For example, you can allocate a large object, instantiating lots of members, near your main function, and then call those objects' methods, which aren't aware that the stuff is allocated globally.

So when I share my love of globals with you, the question is which aspect of globality I mean. What I mean is this:

  1. I like global storage – link-time addresses – for everything which can be handled that way. A global pointer is better than nothing, but it can be overwritten and you will have lost the object; better allocate the entire thing globally.
  2. I like global scope, no classes, namespaces and access control keywords attached, to make symbol table look-up easier, thus making use of the global allocation.
  3. I like global life cycle – no Meyers' singletons and lazy initialization. In fact, I like trivial constructors/destructors, leaving the actual work to init/close functions called by main(). This way, you can actually control the order in which things are done and know what the dependencies are. With Meyers' singletons, the order of destruction is uncontrollable (it's the reverse order of initialization, which doesn't necessarily work). Solutions were proposed to this problem, so dreadful that I'm not going to discuss them. Just grow up, design the damned init/close sequence and be in control again. Why do people think that all major operations should be explicit except for initialization, which should happen automagically when you least expect it? (A minimal sketch contrasting the two styles follows this list.)
  4. "Globals" in the sense of "touched by every piece of code" is the trademark style of a filthy swine. There are plenty of good reasons to use "globals"; none of them has anything to do with "globals" as in "variables nobody/everybody is responsible for".
  5. I think that everything that's instantiated once per process is a "global", and when you wrap it with scope, access control, and design patterns, you shouldn't stop calling it a global (and instead insist on "singleton", "static class member", etc.). It's still a global, and its wrapping should be evaluated by its practical virtues. Currently, I see no point in wrapping globals in anything – plain old global variables are the thing best supported by all software tools I know.
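
Here's the kind of contrast item 3 is talking about, with made-up subsystem names; the point is that with init/close functions the order is a decision written down in one place, not a side effect of whoever happened to call instance() first:

    // The Meyers-singleton way: constructed on first use, destroyed in the
    // reverse order of those first uses - an order nobody actually designed.
    struct Log {
        static Log& instance() { static Log log; return log; }
    };

    // The explicit way: trivially constructed globals plus init/close
    // functions whose call order is designed and visible in main().
    struct Network  { void init() {} void close() {} };
    struct Database { void init() {} void close() {} };

    Network  g_network;    // link-time addresses, trivial constructors
    Database g_database;

    int main() {
        g_network.init();      // the dependency order is a decision, not an
        g_database.init();     // accident (say the database needs the network)
        // ... actual work ...
        g_database.close();    // torn down in the designed reverse order
        g_network.close();
    }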

I think this can be used as "rationale" in a coding guideline, maybe in the part allowing the use of globals as an "exception". But I keep my hopes low.