The entire discussion only applies to unsafe languages, the ones that dump core. By which I mean, C. Or C++, if you're really out of luck.
If it can dump core, it will dump core, unless it decides to silently corrupt its data instead. Trust my experience of working in a multi-processor, multi-threaded, multi-programmer, multi-nightmare environment. Some C++ FQA Lite readers claimed that the fact that I deal with lots of crashes in C++ indicates that I'm a crappy programmer surrounded by similar people. Luckily, you don't really need to trust my experience, because you can trust Google's. Do this:
- Find a Google office near you.
- Visit a Google toilet.
- You'll find a page about software testing, with the subtitle "Debugging sucks. Testing rocks." Read it.
- Recover from the trauma.
- Realize that the chances of you being better at eliminating bugs than Google are low.
- Read about the AdWords multi-threaded billing server nightmare.
- The server was written in C++. The bug couldn't happen in a safe language. Meditate on it.
- Consider yourself enlightened.
This isn't the reason why this post has "Google core dump" in its title, but hopefully it's a reason for us to agree that your C/C++ program will crash, too.
I love globals
What happens when we face a core dump? Well, we need the same things you'd look for in any investigation: names and addresses. The names of objects whose contents might explain what happened, their addresses so we can actually look at them, and type information to display them sensibly.
In C and C++, we have 3 kinds of addresses: stack, heap and global. Let's see who lives there.
Except the stack is overwritten, because it can be. Don't count on being able to see the function calls leading to the point of crash, nor the parameters and local variables of those functions. In fact, don't even count on being able to see the point of crash itself: the program counter, the link register, the frame pointer, all that stuff can contain garbage.
And the heap is overwritten, too, nearly as badly. The typical data structure used by C/C++ allocators (for example, dlmalloc) is a kind of linked list, where each memory block is prefixed with its size so you can jump to the next one. Overwrite one of these size values and you will have lost the boundaries of the chunks above that address. That's a loss of 50% of the heap objects on average, assuming uniform distribution of memory overwriting bugs across the address space of the heap.
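To make the dependency concrete, here's a minimal sketch in C++ of why one overwritten size field takes out everything above it. The struct and field names are made up for illustration and are not dlmalloc's actual layout:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only -- not dlmalloc's real layout. The point is the dependency
// chain: each chunk's size tells you where the next chunk begins.
struct Chunk {
    std::size_t size;   // size of this chunk, header included
    // ...user data follows...
};

// Walk the heap from heap_begin to heap_end. One corrupted size field and every
// boundary above it is computed from garbage -- hence the "50% of the heap
// objects lost on average" estimate.
inline void walk_heap(const std::uint8_t* heap_begin, const std::uint8_t* heap_end) {
    const std::uint8_t* p = heap_begin;
    while (p < heap_end) {
        const Chunk* c = reinterpret_cast<const Chunk*>(p);
        if (c->size == 0)   // obviously corrupted; a real walker would bail here
            break;
        // ...inspect or print the chunk here...
        p += c->size;       // garbage in c->size => garbage boundaries from here on
    }
}
```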
So don't count on the stack or the heap. Your only hope is that someone ignored the Best Practices and the finger-pointing of more proficient colleagues, and allocated a global object – possibly under the clever disguise of a "Singleton". Not a bad thing after all, that moronic "design pattern", because it ultimately let people counter the cargo cult programmers' accusation that "globals are evil" with the equally powerful cargo cult argument that "it's a design pattern". So people could allocate globals again.
Which is good, because a global always has an accurate name-to-address mapping, no matter what atrocity was committed by the bulk of unsafe code running on the loose. You can't overwrite a symbol table. And it has accurate type information, too – as opposed to objects you find through a void*, or through a base class pointer where the true type can't be recovered because the base class lacks virtual functions or the object's vptr was overwritten, etc.
Which is why I frequently start debugging by firing an object view window on a global, or running debugger macros which read globals, etc. Of course you can fuck up a global variable to make debugging unpleasant. For example, if the variable is "static" in the C sense, you need to open the right file or function to display it, and you need the debugger front-end to understand the context, which will be especially challenging if it's a static variable in a template function (one of the best things in C++ is how neatly its new features interact with C's old ones).
Or you can stuff the global into a class or a namespace. I was never able to display globals by their qualified C++ name in, say, gdb 5. But no matter; nm <program> | grep <global> followed by p *(TypeOfGlobal*)addr always does the trick, and no attempts at obfuscating the symbol table will stop it. I still say make it a real, unashamed global to make debugging easier. If you're lucky, you'll get to piss off a couple of cargo cult followers as a nice side-effect.
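For illustration, here's roughly what that looks like. The variable names and the gdb address below are made up, and the mangled form is what a typical Itanium-ABI compiler (such as g++) would emit:

```cpp
// Hypothetical example: a plain global vs. a namespace-wrapped one.
struct Config { int log_level; int port; };

Config g_config;                    // nm shows: g_config          (grep-friendly)

namespace app { Config g_config; }  // nm shows: _ZN3app8g_configE (still findable)

// Either way, once nm gives you the raw address, the debugger doesn't need to
// understand C++ scoping at all:
//   $ nm my_program | grep g_config
//   0804a040 B g_config
//   (gdb) p *(Config*)0x804a040
// (the address above is made up for the example)
```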
Google Core Dump
A core dump is a web. Its sites are objects. Its hyperlinks are pointers. Its PageRank is a TypeRank: what's the type of this object, according to the votes of the pointers stored in other objects? The spamdexing is done by pointer-like bit patterns stored in unused memory slots. The global variables are the major sites with high availability you can use as roots for the crawling.
What utilities would we like to have for this web? The usual stuff.
- Browsers. The debugger's object view window is the Firefox, and the memory view window is the Lynx. The core dump Lynx usually sucks in that it doesn't make it easy to follow pointers – you can't click on a word and have the browser follow the pointer (by jumping to the memory it points to). No back button, either. Oh well.
- DNS. The ability to translate variable names to raw addresses. Works very reliably for globals and passably otherwise. Works reliably for all objects in safe languages.
- Reverse DNS. Given an address, tell me the object's name. Problematic for dynamically allocated objects, although you could list the names of the pointer variables leading to it (Google bombing). Works reliably for global functions and variables. For some reason, the standard addr2line program only supports functions, though. Which is why I have an addr2sym program; it so happens that I have several of them, in fact, and you can download one here. "Reverse DNS" is particularly useful when you find pointers somewhere in registers or memory and wonder what they could point to. In safe languages, you don't have that problem, because everything is typed and so you can simply display the pointed-to object. (The core of this kind of lookup is sketched right after this list.)
- Google Core Dump, similar to Google Desktop or Google for the WWW. Crawl a core dump, figure out the object boundaries and types by parsing the heap linked list and the stack and looking at the pointers' "votes", create an index, and let me query that index. Lots of work, that, some of it heuristic. And in order to get type information in C or C++, you'll have to either parse the source code (good luck doing that with C++) or parse the non-portable debug information format. But it's doable; in fact, we have it, for our particular target/debugger/allocator combo. Of course it has its glitches. It's quirky and obscure enough to make open sourcing it not worth the trouble.
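For concreteness, the core of the "reverse DNS" lookup is nothing more than a sorted symbol table and a binary search. Here's a minimal C++ sketch; the names and the toy table are made up, and loading real symbols from an ELF file is sketched further down:

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct Sym { uint32_t addr; uint32_t size; std::string name; };

// Assumes 'syms' is sorted by addr. Maps a raw address to "symbol+offset".
std::string addr2sym(const std::vector<Sym>& syms, uint32_t addr) {
    // Find the first symbol whose address is above addr, then step back one.
    auto it = std::upper_bound(syms.begin(), syms.end(), addr,
        [](uint32_t a, const Sym& s) { return a < s.addr; });
    if (it == syms.begin()) return "?";              // below the first symbol
    --it;
    if (it->size && addr >= it->addr + it->size)     // past the symbol's end
        return "?";
    char buf[32];
    std::snprintf(buf, sizeof buf, "+0x%x", addr - it->addr);
    return it->name + buf;
}

int main() {
    // Tiny fake table just to show the call; a real one would come from the
    // ELF symbol table, sorted by address once at startup.
    std::vector<Sym> syms = {
        {0x8048000, 0x40, "main"},
        {0x8048040, 0x20, "g_config"},
    };
    std::printf("%s\n", addr2sym(syms, 0x8048048).c_str());  // prints g_config+0x8
    return 0;
}
```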
I really wish there was a reasonably portable and reliable Google Core Dump kind of thing. But it doesn't look like that many people care about debugging crashes at all. Most core dumps at customer sites seem to go to /dev/null, and those that can't be easily deciphered are apparently given up on until the bug manifests itself in some other way or its cause is guessed by someone.
Am I coming from a particularly weird niche where the code size is large enough and the development rapid enough to make crashes almost unavoidable, but crashes in the final product version are almost intolerable? Or do most good projects allocate everything on the stack and the heap, so with those smashed they're doomed no matter what? Or is the problem simply stinky enough to make it unattractive for a hobby project while lacking revenue potential to make a good commercial project?
Would you like this sort of thing? If you would, drop me a line. In the meanwhile, I satisfy my wish for a Google Core Dump with my perfect implementation for an embedded co-processor, the one I've poked at with Tcl commands. With 128K of memory, no dynamic allocation, and local variables effectively implemented as globals, perfect decoding is easy. I'm telling ya, globals rule.
As to my "reverse DNS" implementation:
- I could make it more portable by parsing the output of nm --print-size. But just running nm on a 20M symbol table takes about 2 seconds, and I want instantaneous output, 'cause I'm very impatient when I debug.
- Alternatively, I could make it more portable by using a library such as bfd. But that would drag in a library such as bfd, and I had trouble with what looked like library/compiler version mismatches with bfd, whereas my ELF parsing code never had any trouble. Also, an implementation parsing ELF is more interesting as sample code, because you get to see how easy these formats are to parse (there's a C++ sketch of this part right after this list). So it's elfaddr2sym, not addr2sym. (It's really 32-bit-ELF-with-host-endianness-addr2sym, because I'm lazy and it covers all my targets.)
- There's a ton of addr2sym code out there, and maybe a good addr2sym program. I just didn't find it. I have an acknowledged weakness in the wheel reinventing department.
- Of course I don't demangle the ugly C++ names; piping to c++filt does.
- The program is in D, because of the "instantaneous" bit, and because D is one of the best choices available today if you care about both speed and brevity. Look at this:
lowerBound!("a.st_value <= b")(ssyms, addr)
does a binary search for addr in the sorted ssyms array. As brief as it gets out of the box with any language and standard library, isn't it? The string is compiled statically into the instantiation of the lowerBound template; a & b are the arguments of the anonymous function represented by the string. Readable. Short. Fast. Easy to use – garbage-collected array outputs in functions like filter(), error messages to the point – that's why a decent grammar is a good thing even if you aren't the compiler writer. Looks a lot like C++: braces, static typing, everything. Thus easy to pimp in a 3GL environment, in particular, a C++ environment. You can download the Digital Mars D compiler for Linux, or wait for C++0x to solve 15% of the problems with <algorithm> by introducing worse problems.
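For readers who'd rather see the ELF side in C++ than in D, here's a minimal sketch of what an elfaddr2sym-like tool has to do for a 32-bit, host-endian ELF: map the file, find .symtab, and walk the Elf32_Sym entries together with their string table. Error handling (ELF magic and class checks) is omitted; this mirrors the structure of the task, not the actual tool:

```cpp
#include <elf.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv) {
    if (argc != 2) { std::fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }
    struct stat st;
    fstat(fd, &st);
    auto* base = static_cast<unsigned char*>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

    auto* ehdr  = reinterpret_cast<Elf32_Ehdr*>(base);
    auto* shdrs = reinterpret_cast<Elf32_Shdr*>(base + ehdr->e_shoff);

    for (int i = 0; i < ehdr->e_shnum; ++i) {
        if (shdrs[i].sh_type != SHT_SYMTAB) continue;
        auto* syms   = reinterpret_cast<Elf32_Sym*>(base + shdrs[i].sh_offset);
        int   nsyms  = shdrs[i].sh_size / sizeof(Elf32_Sym);
        // sh_link of .symtab points at its string table section.
        auto* strtab = reinterpret_cast<char*>(base + shdrs[shdrs[i].sh_link].sh_offset);
        for (int s = 0; s < nsyms; ++s)
            if (syms[s].st_name)   // skip unnamed symbols
                std::printf("%08x %08x %s\n",
                            syms[s].st_value, syms[s].st_size,
                            strtab + syms[s].st_name);
    }
    return 0;
}
```

Sorting the (address, size, name) triples it prints gives you the table that the "reverse DNS" binary search above runs on; piping through c++filt handles the demangling, as noted earlier.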
By the way, the std.algorithm module, the one with the sort, filter, lowerBound and similar functions, is by Andrei Alexandrescu, of Modern C++ Design fame. How is it possible that his stuff in D is so yummy while his implementation of similar things in C++ is equally icky? Because C++ is to D what proper fixation is to anaesthesia. There, I bet you saw it coming.
What does "global" mean?
For the sake of completeness, I'd like to bore you with a discussion of the various aspects of globalhood, in the vanishingly small hope of this being useful in a battle against a cargo cult follower authoring a coding convention or such. In C++, "global" can mean at least 6 things:
- Number of instances per process. A "global" is everything that's instantiated once.
- Life cycle. A "global" is constructed before main and destroyed after main. A static variable inside a function is not "global" in this sense.
- "Scope" in the "namespace" sense (as opposed to the life cycle sense). We have C-style file scope, class scope, function scope, and "the true global scope". And we have namespaces.
- Storage. A "global" is assigned a link time address and stored there. In a singleton implementation calling new and assigning its output to a global pointer, the pointer is "global" in this sense but the object is not.
- Access control. If it's in a class scope, it may be private or protected, which makes it less of a global in this fifth sense.
- Responsibility. A global can be accessible from everywhere but only actually accessed from a couple of places. For example, you can allocate a large object instantiating lots of members near your main function and then call object methods which aren't aware that the stuff is allocated globally.
So when I share my love of globals with you, the question is which aspect of globality I mean. What I mean is this:
- I like global storage – link-time addresses – for everything that can be handled that way. A global pointer is better than nothing, but it can be overwritten, and then you will have lost the object; better to allocate the entire thing globally. (There's a sketch contrasting the two right after this list.)
- I like global scope, no classes, namespaces and access control keywords attached, to make symbol table look-up easier, thus making use of the global allocation.
- I like global life cycle – no Meyers' singletons and lazy initialization. In fact, I like trivial constructors/destructors, leaving the actual work to init/close functions called by main(). This way, you can actually control the order in which things are done and know what the dependencies are. With Meyers' singletons, the order of destruction is uncontrollable (it's the reverse order of initialization, which doesn't necessarily work). Solutions were proposed to this problem, so dreadful that I'm not going to discuss them. Just grow up, design the damned init/close sequence and be in control again. Why do people think that all major operations should be explicit except for initialization which should happen automagically when you least expect it?
- "Globals" in the sense of "touched by every piece of code" is the trademark style of a filthy swine. There are plenty of good reasons to use "globals"; none of them has anything to do with "globals" as in "variables nobody/everybody is responsible for".
- I think that everything that's instantiated once per process is a "global", and when you wrap it with scope, access control, and design patterns, you shouldn't stop calling it a global (and instead insist on "singleton", "static class member", etc.). It's still a global, and its wrapping should be evaluated by its practical virtues. Currently, I see no point in wrapping globals in anything – plain old global variables are the thing best supported by all software tools I know.
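A small sketch of the difference in practice, with all names made up: the first Logger gets a link-time address and a plain symbol, plus explicit init/close calls whose order main controls; the commented-out alternatives are the global-pointer and Meyers-singleton variants argued against above:

```cpp
// Illustrative only; names are made up. The globally-stored Logger has a
// link-time address and a plain symbol, so it's still there and still findable
// in a core dump even after the stack and heap are trashed.
struct Logger {
    int  level;
    char ring_buffer[4096];
    Logger() {}                           // trivial ctor: no real work here
    void init(int lvl) { level = lvl; }   // explicit init, called from main
    void close() {}                       // explicit shutdown, called from main
};

Logger g_logger;                          // global storage: link-time address

// The alternatives argued against above, shown for contrast:
//   Logger* g_logger_ptr = new Logger;                // only the pointer is global;
//                                                     // the object lives in the heap
//   Logger& logger() { static Logger l; return l; }   // Meyers singleton: lazy init,
//                                                     // destruction order out of your hands
int main() {
    g_logger.init(2);     // you decide the init order...
    // ...the program...
    g_logger.close();     // ...and the shutdown order
    return 0;
}
```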
I think this can be used as "rationale" in a coding guideline, maybe in the part allowing the use of globals as an "exception". But I keep my hopes low.