How to make a heap profiler

I have a new blog post at embeddedrelated.com describing heapprof, a 250-line heap profiler (C+Python) working out of the box on Linux, and easy to port/tweak.

Why bad scientific code beats code following "best practices"

I've just read "The Low Quality of Scientific Code", which claims that code written by scientists comes out worse than it would if "software engineers" were involved.

I've been working, for more than a decade, in an environment dominated by people with a background in math or physics who often have sparse knowledge of "software engineering".

Invariably, the biggest messes are made by the minority of people who do define themselves as programmers. I will confess to having made at least a couple of large messes myself that are still not cleaned up. There were also a couple of other big messes where the code luckily went down the drain, meaning that the damage to my employer was limited to the money wasted on my own salary, without negative impact on the productivity of others.

I claim to have repented, mostly. I try rather hard to keep things boringly simple and I don't think I've done, in the last 5-6 years, something that would make a lot of people look at me funny after spending the better part of a day dealing with the products of my misguided cleverness.

And I know a few programmers who have explicitly not repented. And people look at them funny and they think they're right and it's everyone else who is crazy.

In the meanwhile, people who "aren't" programmers but are more of the mathematician, physicist, algorithm developer or scientist variety – you name it – commit sins mostly of the following kinds:

  • Long functions
  • Bad names (m, k, longWindedNameThatYouCantReallyReadBTWProgrammersDoThatALotToo)
  • Access all over the place – globals/singletons, "god objects" etc.
  • Crashes (null pointers, bounds errors), largely mitigated by valgrind/massive testing
  • Complete lack of interest in parallelism bugs (almost fully mitigated by tools)
  • Insufficient reluctance to use libraries written by clever programmers, with overloaded operators and templates and stuff

This I can deal with, you see. If anyone wants me to help debug something, I somehow rarely have a problem figuring out what these guys were trying to do. I mean in the software sense. Algorithmically maybe I don't get them fully. But what variable they want to pass to what function I usually know.

Not so with software engineers, whose sins fall into entirely different categories:

  • Multiple/virtual/high-on-crack inheritance
  • 7 to 14 stack frames composed principally of thin wrappers, some of them function pointers/virtual functions, possibly inside interrupt handlers or what-not
  • Files spread in umpteen directories
  • Lookup using dynamic structures from hell – dictionaries of names where the names are concatenated from various pieces at runtime, etc.
  • Dynamic loading and other grep-defeating techniques
  • A forest of near-identical names along the lines of DriverController, ControllerManager, DriverManager, ManagerController, controlDriver ad infinitum – all calling each other
  • Templates calling overloaded functions with declarations hopefully visible where the template is defined, maybe not
  • Decorators, metaclasses, code generation, etc. etc.

The result is that you don't know who calls what or why, debuggers are of moderate use at best, IDEs & grep die a slow, horrible death, etc. You literally have to give up on ever figuring this thing out before tears start flowing freely from your eyes.

Of course this is a gross caricature, not everybody is a sinner at all times, and, like, I'm principally a "programmer" rather than "scientist" and I sincerely believe I have a net positive productivity after all – but you get the idea.

Can scientific code benefit from better "software engineering"? Perhaps, but I wouldn't trust software engineers to deliver those benefits!

Simple-minded, care-free near-incompetence can be better than industrial-strength good intentions paving a superhighway to hell. The "real world" outside the computer is full of such examples.

Oh, and one really mean observation that I'm afraid is too true to be omitted: idleness is the source of much trouble. A scientist has his science to worry about so he doesn't have time to complexify the code needlessly. Many programmers have no real substance in their work – the job is trivial – so they have too much time on their hands, which they use to dwell on "API design" and thus monstrosities are born.

(In fact, when the job is far from trivial technically and/or socially, programmers' horrible training shifts their focus away from their immediate duty – is the goddamn thing actually working, nice to use, efficient/cheap, etc.? – and instead they declare themselves as responsible for nothing but the sacred APIs which they proceed to complexify beyond belief. Meanwhile, functionally the thing barely works.)

Working simultaneously vs waiting simultaneously

"Multiprocessing", "multi-threading", "parallelism", "concurrency" etc. etc. can give you two kinds of benefits:

  • Doing many things at once – 1000 multiplications every cycle.
  • Waiting for many things at once – wait for 1000 HTTP requests just issued.

Some systems help with one of these but not the other, so you want to know which one – and if it's the one you need.

For instance, CPython has the infamous GIL – global interpreter lock. To what extent does the GIL render CPython "useless on multiple cores"?

  • Indeed you can hardly do many things at once – not in a single pure Python process. One thread doing something takes the GIL and the other thread waits.
  • You can however wait for many things at once just fine – for example, using the multiprocessing module (pool.map), or you could spawn your own thread pool to do the same. Many Python threads can concurrently issue system calls that wait for data – reading from TCP sockets, etc. Then instead of 1000 request-wait, request-wait steps, you issue 1000 requests and wait for them all simultaneously. Could be close to a 1000x speed-up for long waits (with a 1000-thread worker pool; more on that below). Works like a charm.
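
Here's a minimal sketch of what "waiting for many things at once" can look like from Python, GIL and all – a thread pool issuing a batch of HTTP requests. The URLs and the pool size are made up for illustration; the point is only that the threads spend their time blocked inside system calls, where the GIL is released.

from concurrent.futures import ThreadPoolExecutor
import urllib.request

urls = ["http://example.com/item/%d" % i for i in range(1000)]

def fetch(url):
    # blocks in the OS read with the GIL released, so many of
    # these can wait simultaneously
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read()

with ThreadPoolExecutor(max_workers=1000) as pool:
    pages = list(pool.map(fetch, urls))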

So GIL is not a problem for "simultaneous waiting" for I/O. Is GIL a problem for simultaneous processing? If you ask me – no, because:

  • If you want performance, it's kinda funny to use pure Python and then mourn the fact that you can't run, on 8 cores, Python code that's 30-50x slower than C to begin with.
  • On the other hand, if you use C bindings, then the C code could use multiple threads actually running on multiple cores just fine; numpy does it if properly configured, for instance. Numpy also uses SIMD/vector instructions (SSE etc.) – another kind of "doing many things at once" that pure Python can't do regardless of the GIL.

So IMO Python doesn't have as bad a story in this department as it's reputed to have – and if it does look bad to you, you probably can't tolerate Python's slowness doing one thing at a time in the first place.

So Python – or C, for that matter – is OK for simultaneous waiting, but is it great? Probably not as great as Go or Erlang – which let you wait in parallel for millions of things. How do they do it? Cheap context management.

Context management is a big challenge of waiting for many things at once. If you wait for a million things, you need a million sets of variables keeping track of what exactly you're waiting for (has the header arrived? then I'm waiting for the query. has it arrived? then I ask the database and wait for it etc. etc.)

If those variables are thread-local variables in a million threads, then you run into one of the problems with C – and hence OS-supported threads designed to run C. The problem is that C has no idea how much stack it's gonna need (because of the halting problem, so you can't blame C); and C has no mechanism to detect that it ran out of stack space at runtime and allocate some more (because that's how its ABIs have evolved; in theory C could do this, but it doesn't.)

So the best thing a Unixy OS could do is give C one page for the stack (say 4K), and make, say, the next 1-2M of the virtual address space inaccessible (with 64-bit pointers, address space is cheap). When C page-faults upon stack overflow, give it more physical memory – say another 4K. This method means at least 4K of allocated physical memory per thread, or 4G for a million threads – rather wasteful. (I think in practice it's usually way worse.) All regardless of us often needing a fraction of that memory for the actual state.

And that's before we got to the cost of context switching – which can be made smaller if we use setjmp/longjmp-based coroutines or something similar, but that wouldn't help much with stack space. C's lax approach to stack management – which is the way it is to shave a few cycles off the function call cost – can thus make C terribly inefficient in terms of memory footprint (speed vs space is generally a common trade-off – it's just a bad one in the specific use case of "massive waiting" in C).

So Go/Erlang don't rely on the C-ish OS threads but roll their own – based on their stack management, which doesn't require a contiguous block of addresses. And AFAIK you really can't get readable and efficient "massive waiting" code in any other way – your alternatives, apart from the readable but inefficient threads, are:

  • Manual state machine management – yuck
  • Layered state machines as in Twisted – better, but you still have callbacks looking at state variables
  • Continuation passing as in Node.js – perhaps nicer still, but still far from the smoothness of threads/processes/coroutines

The old Node.js slides say that "green threads/coroutines can improve the situation dramatically, but there is still machinery involved". I'm not sure how that machinery – the machinery in Go or Erlang – is any worse than the machinery involved in continuation passing and event loops (unless the argument is about compatibility more than efficiency – in which case machinery seems to me a surprising choice of words.)
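
To make the comparison concrete, here's a toy Python sketch – not from any of the systems mentioned above – contrasting a manual state machine with a generator-based handler for the same "header, then body" flow. The events are plain strings so the snippet runs as-is; the point is where the "what am I waiting for" state lives.

class ManualStateMachine:
    # the "where am I" state is an explicit field you maintain by hand
    def __init__(self):
        self.state = "WAIT_HEADER"
        self.header = None
    def feed(self, event):
        if self.state == "WAIT_HEADER":
            self.header = event
            self.state = "WAIT_BODY"
        elif self.state == "WAIT_BODY":
            print("got", self.header, event)
            self.state = "WAIT_HEADER"

def generator_handler():
    # the "where am I" state is just the point where the generator is
    # suspended; locals keep the context, much like a green thread's stack
    while True:
        header = yield
        body = yield
        print("got", header, body)

m = ManualStateMachine()
m.feed("HDR"); m.feed("BODY")

g = generator_handler()
next(g)                  # run to the first yield
g.send("HDR"); g.send("BODY")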

Millions of cheap threads or whatever you call them are exciting if you wait for many events. Are they exciting if you do many things at once? No; C threads are just fine – and C is faster to begin with. You likely don't want to use threads directly – it's ugly – but you can multiplex tasks onto threads easily enough.

A "task" doesn't need to have its own context – it's just a function bound to its arguments. When a worker thread is out of work, it grabs a task out of the queue and runs it to completion. Because the machine works – rather than waits – you don't have the problems with stack management created by waiting. You only wait when there's no more work, but never in the middle of things.

So a thread pool running millions of tasks doesn't need a million threads. It can be a thread per core, maybe more if you have some waiting – say, if you wait for stuff offloaded to a GPU/DSP.
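
Here's a minimal sketch of that structure in Python – using processes rather than threads because of the CPython GIL, but the shape is the same as a C thread pool. The task function is made up.

from concurrent.futures import ProcessPoolExecutor
import os

def task(i):
    # a "task" is just a function bound to its arguments; it runs to
    # completion on whichever worker grabs it, so it carries no saved context
    return i * i

if __name__ == "__main__":
    # a worker per core is enough, no matter how many tasks are queued
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        results = list(pool.map(task, range(100000)))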

I really don't understand how Joe Armstrong could say Erlang is faster than C on multiple cores, or things to that effect, with examples involving image processing – instead of event handling which is where Erlang can be said to be more efficient.

Finally, a hardware-level example – which kind of hardware is good at simultaneous working, and which is good at simultaneous waiting?

If your goal is parallelizing work, eventually you'll deteriorate to SIMD. SIMD is great because there's just one "manager" – instruction sequencer – for many "workers" – ALUs. CPUs, DSPs and GPUs all have SIMD. NVIDIA calls its ALUs "cores" and 16-32 ALUs running the same instruction "threads", but that's just shameless marketing. A "thread" implies, to everyone but a marketeer, independent control flow, while GPU "threads" march in lockstep.

In practice, SIMD is hard despite thousands of man-years having been invested into better languages and libraries – because telling a bunch of dumb soldiers marching in lockstep what to do is just harder than running a bunch of self-motivating threads each doing its own thing.

(Harder in one way, easier in another: marching in lockstep precludes races – non-deterministic, once-in-a-blue-moon, scary races. But races of the kind arising between worker threads can be almost completely remedied with tools. Managing the dumb ALUs cannot be made easier with tools and libraries to the same extent – not even close. Where I work, there's roughly an entire team responsible for SIMD programming, while threading is mostly automatic and bugs are weeded out by automated testing.)
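
One way to feel the lockstep difference, with numpy standing in for SIMD: a per-element computation with a branch has to be written as "compute both branches for everyone, then select with a mask". (The computation here is made up.)

import numpy as np

def per_element(xs):
    # "independent threads" style: each element follows its own control flow
    return [x * 2 if x > 0 else -x for x in xs]

def lockstep(xs):
    # "SIMD" style: both branches are computed for every element,
    # then a mask selects the result per element
    xs = np.asarray(xs)
    return np.where(xs > 0, xs * 2, -xs)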

If, however, you expect to be waiting much of the time – for memory or for high-latency floating point operations, for instance – then hordes of hardware threads lacking their own ALUs, as in barrel threading or hyper-threading, can be a great idea, while SIMD might do nothing for you. Similarly, a bunch of weaker cores can be better than a smaller number of stronger cores. The point being, what you really need here is a cheap way to keep context and switch between contexts, while actually doing a lot at once is unlikely to be possible in the first place.

Conclusions

  • Doing little or nothing while waiting for many things is both surprisingly useful and surprisingly hard (which took me way too long to internalize both in my hardware-related and software/server-related work). It motivates things looking rather strange, such as "green threads" and hardware threads without their own ALUs.
  • Actually doing many things in parallel – to me the more "obviously useful" thing – is difficult in an entirely different way. It tends to drag in ugly languages, intrinsics, libraries etc. about as much as having to do one single thing quickly. The "parallelism" part is actually the simplest (few threads so easy context management; races either non-existent [SIMD] or very easy to weed out [worker pool running tasks])
  • People doing servers (which wait a lot) and people doing number-crunching (work) think very differently about these things. Transplanting experience/advice from one area to the other can lead to nonsensical conclusions.

See also

Parallelism and concurrency need different tools – expands on the reasons for races being easy to find in computational code, but impossible to even uniformly define for most event handling code.

Can your static type system handle linear algebra?

It seems a good idea to use a static type system to enforce unit correctness – to make sure that we don't add meters to inches or some such. After all, spacecraft have been lost due to such bugs, costing hundreds of millions.

Almost any static type system lets us check units in sufficiently simple cases. I'm going to talk about one specific case which seems to me very hard and I'm not sure there's a type system that handles it.

Suppose we have a bunch of points – real-world measurements in meters – that should all lie on a straight line, modulo measurement noise. Our goal is to find that line – A and B in the equation:

A*x + B = y

If we had exactly 2 points, A and B would be unique and we could obtain them by solving the equation system:

A*x1 + B = y1
A*x2 + B = y2

2 equations, 2 unknowns – we're all set. With more than 2 measurements however, we have an overdetermined equation system – that is, the 3rd equation given by the measurement x3, y3 may "disagree" with the first two. So our A and B must be a compromise between all the equations.

The "disagreement" of x,y with A and B is |A*x + B – y|, since if A and B fit a given x,y pair perfectly, A*x+B is exactly y. Minimizing the sum of these distances from each y sounds like what we want.

Unfortunately, we can't get what we want because minimizing functions with absolute values in them cannot be done analytically. The analytical minimum is where the derivative is 0 but you can't take the derivative of abs.

So instead we'll minimize the sum of (A*x + B – y)^2 – not exactly what we want, but at least errors are non-negative (good), they grow when the distance grows (good), and this is what everybody else is doing in these cases (excellent). If the cheating makes us uneasy, we can try RANSAC later, or maybe sticking our head in the sand or something.

What's the minimum of our error function? Math says, let's first write our equations in matrix form:

(x1 1)*(A B) = y1
(x2 1)*(A B) = y2
...
(xN 1)*(A B) = yN

Let's call the matrix of all (xi 1) "X" and let's call the vector of all yi "Y". Then solving the equation system X'*X*(A B) = X'*Y gives us A, B minimizing our error function. Matlab's "\" operator – as in X\Y – even does this automatically for overdetermined systems; that's how you "solve" them.
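
For concreteness, here's what this looks like in Python with numpy – made-up noisy points roughly on y = 2x + 1, with numpy's lstsq playing the role of Matlab's \:

import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])   # meters
ys = np.array([1.1, 2.9, 5.2, 6.9])   # meters, noisy

X = np.column_stack([xs, np.ones_like(xs)])   # rows are (xi 1)
Y = ys

# solve the normal equations X'*X*(A B) = X'*Y...
A, B = np.linalg.solve(X.T @ X, X.T @ Y)

# ...or let the library handle the overdetermined system directly
(A2, B2), *rest = np.linalg.lstsq(X, Y, rcond=None)

print(A, B)   # close to 2 (a plain number) and 1 (meters)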

And now to static type systems. We start out with the following types:

  • x,y are measured in meters (m).
  • A is a number (n) – a scale factor – while B is in meters, so that A*x + B is in meters as y should be.

What about X'*X and X'*Y? X'*X looks like this:

sum(xi*xi) sum(xi) //m^2 m
sum(xi)    sum(1)  //m   n

…while X'*Y looks like this:

sum(xi*yi) sum(yi) //m^2 m

The good news is that when we solve for A and B using Cramer's rule, which involves computing determinants, things work out wonderfully – A indeed comes out as a plain number and B comes out in meters. So if we code this manually – compute X'*X and X'*Y by summing the terms for all our x,y points and then hand-code the determinants computation – that could rather easily be made to work in many static type systems.
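
Here's the hand-coded version as a Python sketch (same made-up points as above), with the units tracked in comments – the bookkeeping a type system would be doing for us:

xs = [0.0, 1.0, 2.0, 3.0]   # m
ys = [1.1, 2.9, 5.2, 6.9]   # m

Sxx = sum(x * x for x in xs)              # m^2
Sx  = sum(xs)                             # m
Sxy = sum(x * y for x, y in zip(xs, ys))  # m^2
Sy  = sum(ys)                             # m
N   = len(xs)                             # plain number

# Cramer's rule on X'*X*(A B) = X'*Y
det = Sxx * N - Sx * Sx          # m^2
A = (Sxy * N - Sx * Sy) / det    # m^2 / m^2 = plain number
B = (Sxx * Sy - Sxy * Sx) / det  # m^3 / m^2 = m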

The bad news is that thinking about the type of Matlab's \ operator or some other generic least-squares fitting function makes my head bleed.

Imagine that large equation systems will not be solved by Cramer's rule but by iterative methods and all the intermediate results will need to have a type.

Imagine that we may be looking, not for a line, but a parabola. Then our X'*X matrix would have sum(x^4), sum(x^3) etc. in it – and while spelling out Cramer's rule works nicely again, a generic function's input and output types would have to account for this possibility.

Perhaps the sanest approach is to type every column of X and Y separately, and every element of X'*X and X'*Y separately – not meters^something but T1, T2, T3… And then we see what comes out, and whether intermediate results have sensible types. But I just can't imagine the monstrous types of products of the various elements ever eventually cancelling out as nicely as they apparently should – not in a real flesh-and-blood programming language.

Maybe I'm missing something here? I will admit that I sort of gave up on elegance in programming in general and static typing in particular rather long ago. But we have many static type systems claiming to be very powerful and concise these days. Maybe if I did some catching-up I would regain the optimism of years long gone.

Perhaps in some of these systems, there's an elegant solution to the problem of \'s type – more elegant than just casting everything to "number" when it goes into \ and casting the result back into whatever types we declared for the result, forgoing the automatic type checking.

So – can your static type system do linear algebra while checking that measurement units are used consistently?

C++11 FQA anyone?

isocpp.org announced "the C++ FAQ" to which reportedly Bjarne Stroustrup and Marshall Cline will link to help boost its search rank. Good stuff; also, given that the language is >30 years old – it's about time.

Which reminds me. I'm failing to keep my promise to update the C++ FQA, an overly combative reply to Marshall Cline's fairly combative FAQ, to the new standard from hell – C++11.

Could you perhaps do it instead of me? Unfortunately, explaining how terrible C++ is cannot be made into a career the way explaining how wonderful C++ is can be. So I sort of moved on to other things; I managed to summon the mental energy just that one time to write the C++ FQA, and even as I wrote it I knew that I better hurry because that energy was waning. After all, at work I managed to reduce the amount of C++ I deal with to a minimum, so there was neither a profit motive nor continuous frustration to recharge me.

Not that I haven't made money off the C++ FQA – indirectly I did make money, specifically by getting a headhunter's attention and significantly improving my employment conditions. But I don't think a new FQA's maintainer/co-author could count on something along those lines.

Still, you'd become about as widely famous in narrow circles as I am, getting maybe 200K unique visitors per year and hundreds of "thank you" emails. Most importantly, you'd do the world a great service.

There's a curious double standard in the world of programming languages. Say, PHP is widely ridiculed because, for instance, its string comparison operator converts strings to numbers if they start with "0e" and this results in unexpected behavior. And it doesn't help PHP that === works just fine.

Imagine someone mocking C++ for two char pointers comparing "wrong" with ==. Immediately they'd be told that casting to std::string (more verbose than ===) would work just fine, and that "you just don't get it".

Why is PHP – a language full of quirks which at least gives you memory and type safety – universally ridiculed, with its developers defending the language but never ridiculing back, while C++, an absolutely insane language, is ridiculed often enough but C++ developers always counter-attack viciously? For that matter, why isn't Lisp ridiculed for EQ, EQL, EQUAL and EQUALP, if comparison operators are so funny?

The reason, IMO, is simple: PHP is not taught in academic CS courses. C++ developers are much more likely to have a CS degree, therefore both they and others treat their knowledge of crazy C++ arcana as something of intellectual value. And Lisp is the poster child of academic programming language development. Educated proponents deter attempts at ridiculing a language.

In a "rational" universe, PHP would be held in high esteem given the amount of output produced by PHP developers with little training. In our universe we have this double- or triple-standard. Pointing out the darker aspects of C++ – which are most of its aspects – thus increases the supply of a rather scarce commodity.

But why C++ and not, say, Lisp, Haskell or C#?

One reason is that C++ is arguably the craziest language in widespread use, combining the safety of C (and I can live with C just fine) with the clarity of Perl (and I can live with Perl peacefully enough). But it is of course subjective.

The thing that really sets C++ apart is its development culture and value system – a perverse amalgamation of down-to-earth shrewdness and idealistic perfectionism. The whole idea of making the best, richest, most efficient and most generic/versatile programming language of the planet – you can sense that Bjarne Stroustrup aims at nothing less – ON TOP OF C is the perfect illustration of this culture, and perhaps its origin.

Stroustrup always knew that the language would be better if it didn't have to be compatible with C and he publicly acknowledged it – the "smaller, much simpler language struggling to come out" remark. But he also knew that using an Embrace, Extend and Exterminate strategy is a much more likely way to succeed. So he did that. In fact he managed to more or less kill C or at least put it in a coma – as even C99, not to mention C11, will never be supported by the Microsoft compiler, and C89 is a really old and really restrictive standard.

A shrewd move – almost an evilly shrewd one – netting the people behind C++ fame and fortune at the expense of programmers dealing with a uniquely dangerous language full of sugar-coated death traps.

(Yes, sugar-coated death traps, you clueless cheerleaders. X& obj=a.b().c() – oops, b() returns a temporary object and c() returns a reference into it! Shouldn't have assigned that to a reference. Not many chances for a compiler warning, either.)

Who did things differently? Sun and Microsoft, for example, marketing Java and C#, respectively. They made much cleaner languages from scratch, and to solve the chicken-and-egg problem – we have no legacy projects hence no programmers hence no new projects hence no programmers – they used money, large marketing budgets and large budgets for creating large standard libraries. A much more honest approach yielding much better results, I find.

And Stroustrup says about himself that he "lacks marketing clout" and says Java and C# are bad for you because they're "platforms, not languages", whatever that means.

And I'm not claiming to be able to read minds, but if I had to bet – I'd say he really believes that. He probably thinks he's your altruistic benefactor and Java and C# are evil attempts to drag you into proprietary platforms.

And you can see the same shrewdness, the same "altruism", the same attention to detail, the same tunnel vision – "this shit I'm working on is so important, it deserves all of my mental energy AND the mental energy of my users at the expense of caring about anything else" – throughout the C++ culture. From the boost libraries to "Modern C++ Design" (the author has since repented and moved to D – or did he repent?..) to the justifications for duplicate and triplicate and still incomplete language features to your local C++ expert carrying his crazy libraries and syntactic wrappers and tangling your entire code base in his net.

And this approach to life – "altruism" plus perfectionism plus cleverness plus shrewdness – extends way beyond C++ and way beyond programming. And the alternative is taming your ambitions.

This was my larger purpose in writing about all this shit. I'm pretty sure I failed in the sense that it got drowned in C++ error messages and other shits and giggles.

So maybe you'll do better than me. If you do, you might contribute to the sanity of many a young idealistic programmer – these tend to get sucked into C++'s sphere of influence, either emerging old and embittered years later or lost to sanity forever, stuck in an internally consistent but absolutely crazy way of thinking.

BTW I never wanted it to be a personal attack, in the sense that (1) I don't know what's inside the heads of people who promote C++ and (2) I can tell you with certainty that if I made something 10 times as bad as C++ and it was 0.0001x as popular as C++, I'd be immensely proud of myself and I wouldn't give a damn about what anyone thought. Just like I don't give a damn about people with so much time on their hands to actually have thousands of karma points at StackOverflow explaining just how lame C++ FQA is.

In this sense, C++ is fine and a worthy achievement of a lifetime. Especially in a world where Putin is a candidate for a Nobel Peace prize and Obama already got one.

I basically just think that (1) C++'s horrible quirks are worth pointing out and (2) there's something to be learned from this story about the way we pave the road to hell with our good intentions. That's all.

***

A lot of people have spoken about "a C++ renaissance" when C++11 was ratified. I tend to agree – indeed the new standard is a fresh dose of the same thing that C++ always was: "zero-overhead" pretty-looking syntax with semantics quite horrendous once you think about what it actually means.

Fittingly for a new revision of the C++ standard, more things we could once safely count on have gone with the wind. For instance, anything you could pass to a function used to be an expression, and expressions had one and only one type. How ironic that this revision, the revision that finally capitalized on this fact by introducing auto, also made this fact no longer true: {0} could be an int or an std::initializer_list, depending on the context. Context-dependent types were one thing that Perl had (scalar/vector context) but C++ didn't have.

(I have observed someone do this: _myarr[5]={0}; – they had the definition int _myarr[5] in the .h file and they remembered that this thing could be initialized with {0} in other contexts. What they did wouldn't compile in C++98; in C++11 it promptly assigned the int 0 to the non-existent element at index 5 of _myarr – one past the end – and the usual hilarity ensued. Imagine how PHP would be ridiculed for this kind of little behavior – and PHP at least would never overwrite an unrelated variable with garbage. Imagine how with C++, the poor programmer will be ridiculed instead.)

This is nothing, BTW – it's not the kind of thing the FQA would normally poke fun at. We have bigger fish to fry. For instance, C++11's "closures" or "lambdas" – the preposterous thing where you say [](){…} and an anonymous struct gets generated. BTW that's one of the reasons I urged everyone where I work to upgrade to C++11 – it lets you implement a nice-looking parallel_for. In C I would have added a compiler extension for it AGES AGO but extending the C++ grammar? No sir. I waited patiently until C++11 lambdas.

So, C++11 lambdas. Someone needs to write everything about those hideous capture lists – are references captured by value or by reference?! – and how you can end up passing dangling references if this closure thingie outlives variables it references (we don't need no stinking garbage collection! but why doesn't C++11 let one capture variables by std::shared_ptr?.. It's so much better than gc!), and about the type of this shit and how it looks in debuggers, and about std::function etc. etc.

And this won't be me because frankly, I don't have time to even fully master this arcana anymore.

If you dislike C++ and have the time to write about it, I'll gladly pass the torch on to you; specifically I'll redirect C++ FQA to point to your site, following the example of Bjarne Stroustrup and Marshall Cline. Or we could run a wiki, or something. Email or comment if you're interested.

A simple way to "get more people to code"

Disclaimers

  • I'm not a citizen of a country where "getting more people to code" is a widely shared concern. While my proposal seems to me like a simple solution to a simple problem, this is merely an outsider's observation.
  • Recently the tide of calls to "code" seems to have subsided. If it's no longer a pressing social problem, I regret being late with my excellent solution.
  • Like all brilliant ideas, my solution is simple, and I was surprised not to see it proposed in the voluminous discussions of the topic. I apologize for not giving proper credit to those who've proposed the same thing earlier without my noticing.

The problem

For a long time now, people have been voicing a concern about a supposed shortage of computer programmers. Many also express narrower concerns, such as a shortage of children, women or people with a certain skin color in the ranks of computer programmers.

A high-profile example is US President Barack Obama, who went further than urging others to "code" and regretted not being able to do so himself: "I wanted to go in and fix healthcare.gov myself, but I don't write code."

(I understand President Obama. I once wanted to go in and fix international politics. Then I realized that I don't shoot bullets.)

People from technology companies have also urged others to learn programming – for example, Mark Zuckerberg and Bill Gates.

Finally, individuals such as NBA star Chris Bosh encouraged people to learn to code in their private capacity.

Even this humble blogger was asked to give an interview by someone interested in encouraging people in general and women in particular to code! And you know why? Surprisingly, because of a piece I wrote where I said that it was money that attracted me to programming, and it is money that keeps me in programming.

It was then that a solution to the problem at hand popped into my mind.

A possible solution

If there's a shortage of programmers, we could pay programmers more money.

How can this be arranged, and how would it solve the problem? There are several possibilities.

For example, companies such as Google, Apple and Intel could raise wages, causing some people to choose programming over other occupations. Once a sufficient number of people are attracted to programming, wages would fall.

Alternatively, governments such as that headed by Barack Obama could lower programmers' income tax, or introduce a negative tax for programming. Again, rising wages would attract more people to programming. A government convinced that the number of programmers had reached a high enough level could then cancel the subsidy.

Finally, concerned individuals such as the wealthy basketball player Chris Bosh could donate money to computer programmers until enough people are attracted to the profession.

Having outlined a solution to the general problem, we now direct our attention to the narrower concerns of representation of particular groups of people in computer programming.

Discrimination, er, affirmative action

Companies could increase the representation of younger people in programming by paying extra money to programmers below a certain age.

Similarly, governments could give tax breaks to younger programmers, or to parents of children who regularly submit their code for review by public officials.

Finally, a trust fund could be set up paying children to learn to program.

A similar approach should in principle be applicable to the problems of increasing the representation of women, people of particular races or other groups.

In some cases, questions of legality arise; for instance, while a trust fund for the benefit of people of one race but not others is probably legal, for-profit companies and governments might not be able to legally discriminate based on race.

However, this problem could be overcome, either by changing the laws or by setting aside a sum of money equal in size to the discriminatory subsidy paid to the members of the group in question. Once enough people from that group enter the programming profession, that sum of money can be divided between working programmers not belonging to the group, in effect undoing the discrimination.

Or something along these lines.

Conclusion

If this strikes you as absurd, does it strike you as even more absurd that people claim something to be a problem when its "solution" is as obvious as it is ridiculous? (Or is it really that ridiculous? Farm subsidies exist. Why not FarmVille subsidies?)

If people don't "learn to code", maybe the option is insufficiently attractive to those who can, given current wages, lifetime employment prospects, and the complete uselessness of the skill outside work.[1]

If companies don't raise wages, maybe they don't feel a very pressing need for more programmers. (Of course they'd still "encourage" people to program – as long as the cost of encouragement is likely to be offset by a fall of wages, once a surplus of programmers results from the encouragement.)

If a situation persists, maybe there's a good reason for it.

[1] It is not completely fair to deny programming its uses outside work. A more fair presentation is an analogy with today's hobbyist 3D printing. An owner of a 3D printer recently told me that "having one really exposes the impotence of… not having one. For instance, I needed this little thingie to hold a shelf. Took 30 minutes to design and print. And where would I get it otherwise?!" The answer, of course, is "at a nearby store" where they have a box full of these thingies at about 20 cents apiece. Of course, in a couple thousand years, his investment in the 3D printer will repay itself through the continuous printing of thingies.

Further reading

Women in Science by Philip Greenspun is a must-read for anyone interested in increasing the representation of group X in occupation Y:

Adjusted for IQ, quantitative skills, and working hours, jobs in science are the lowest paid in the United States.

This article explores this … possible explanation for the dearth of women in science: They found better jobs.

Very funny, gdb. Ve-ery funny.

Have you ever opened a core dump with gdb, tried to print a C++ std::vector element, and got the following?

(gdb) p v[0]
You can't do that without a process to debug.

So after seeing this for years, my thoughts traveled along the path of, we could make a process out of the core dump.

No really, there used to be Unices with a program called undump that did just that. All you need to do is take the (say) ELF core dump file and generate an ELF executable file which loads the memory image saved in the core (that's actually the easier, portable part) and initializes registers to the right values (the harder, less portable part). I even wrote a limited version of undump for PowerPC once.

So we expended some effort on it at work.

And then I thought I'd just check a live C++ process (which I normally don't do, for various reasons). Let's print a vector element:

(gdb) p v[0]
Could not find operator[].
(gdb) p v.at(0)
Cannot evaluate function -- may be inlined.

Very funny, gdb. "You can't do that without a process to debug". Well, I guess you never did say that I could do that with a process to debug, now did you. Because, sure enough, I can't. Rolling on the floor, laughing. Ahem.

I suggest that we all ditch our evil C arrays and switch to slow-compiling, still-not-boundary-checked, still-not-working-in-debuggers-after-all-these-YEARS std::vector, std::array and any of the other zillion "improvements".

And gdb has these pretty printers which, if installed correctly (not easy with several gcc/STL versions around), can display std::vector – as in all of its 10000 elements, if that's how many elements it has. But they still don't let you print vec[0].member.vec2[5].member2. Sheesh!

P.S. undump could be useful for other things, say a nice sort of obfuscating scripting language compiler – Perl used to use undump for that AFAIK. And undump would in fact let you call functions in core dumps – if said functions could be, um, found by gdb. Still, ouch.

P.P.S. What gdb prints and when depends on things I do not comprehend. I failed to reproduce the reported behavior in full at home. I've seen it for years at work though.

Delayed printf for real-time logging

The article is at embeddedrelated.com. The nice bit is, you save format string pointers to files, and you read the strings later directly from the executable, and then do the formatting. There are a few tricks related to scripting gdb in Python and other such things. The upshot is that you get to do free-form text logging from virtually any context – including things like interrupt handlers, etc.

Coroutines in one page of C

I've written a single page implementation of coroutines in C using setjmp/longjmp and just a little bit of inline assembly. The article is at embeddedrelated.com. Here's an example C program using coroutines – equivalent to the Python code:

def iterate(max_x, max_y):
  for x in range(max_x):
    for y in range(max_y):
      yield x,y

for x,y in iterate(2,2):
  print x,y

In C:

#include <stdio.h>
#include "coroutine.h"

typedef struct {
  coroutine* c;
  int max_x, max_y;
  int x, y;
} iter;

void iterate(void* p) {
  iter* it = (iter*)p;
  int x,y;
  for(x=0; x<it->max_x; x++) {
    for(y=0; y<it->max_y; y++) {
      it->x = x;
      it->y = y;
      yield(it->c);
    }
  }
}

#define N 1024

int main() {
  coroutine c;
  int stack[N];
  iter it = {&c, 2, 2};
  start(&c, &iterate, &it, stack+N);
  while(next(&c)) {
    printf("x=%d y=%d\n", it.x, it.y);
  }
}

Yeah it's a bit longer, or perhaps more than a bit longer. But it's still very useful at times. Check it out!

Do call yourself a programmer, and other career advice

This is a (very late) reply to Patrick McKenzie's "Don't Call Yourself A Programmer, And Other Career Advice". I find much of his advice very sensible, and it might be very helpful to someone in the beginning of their career – assuming they can act upon it (and I really don't know whether my 20-year-old self could actually use the advice to improve his negotiation skills, for example).

A few things in the article I disagree with, however. Here I'll mostly focus on those few things, recommending you to read the original article so that you don't miss the rest of it.

"Disagree" is not necessarily the right word – a more precise way to put it would be "it's different in my experience". Which is to be expected because both of us are speaking based on our own careers, which have been rather different. Patrick McKenzie is a small business owner running Bingo Card Creator and a successful consultant. I'm a lead chip architect at a billion-dollar company. Both of us have thus traveled some distance away from "purely programming" (whatever that means), but in rather different directions.

What company are you going to work for?

Patrick McKenzie says 90% of the jobs involve things like implementing an internal travel expense reporting form, rather than a product shipped to external customers. He advises you to get used to the idea, even though such software is "soul-crushingly boring" as he puts it.

How bad is it, and is it really 90% of the jobs? Spolsky thinks it's maybe 80% – and that it's bad enough to "drain the life out of you". He goes on to elaborate why it "sucks to be an in-house programmer":

  • There's rarely a business reason to improve in-house software past the point of "barely good enough". "Forget any pride of craftsmanship – you're going to churn out embarrassing junk".
  • At software companies, what you do is more directly related to the way the company makes money, so you're more likely to be respected. "A programmer is never going to rise to become CEO of Viacom, but you might well rise to become CEO of a tech company." "…no matter how critical it was for Viacom to get this internet thing right, when it came time to assign people to desks, the in-house programmers were stuck with 3 people per cubicle in a dark part of the office".

Note that McKenzie and Spolsky are in almost complete agreement over these points. But then Spolsky says you should be gunning for a position in a software company – the environment where creatures of your kind naturally thrive. Conversely, McKenzie explains how to prosper as a programmer outside software companies – moving in the opposite direction of where things go by default (being stuck in a dark part of the office while they're trying to outsource your job.)

So the question is which path you prefer. "Not so fast", you say: one of these jobs is way easier to land – 80-90% of the chances are you're not getting inside a software company – so it's not just a question of preference.

Here I disagree: even if only 10-20% of programmers work in software companies (where are the stats?..), and even if they're "the best" (according to what metric?), McKenzie himself says in that same article:

You radically overestimate the average skill of the competition because of the crowd you hang around with:  Many people already successfully employed as senior engineers cannot actually implement FizzBuzz.

But if competition is relatively unskilled on average, you probably can land a job in the 10-20% of the sector that you want – as did most people who graduated around the time I did. So I rather firmly believe that it's a matter of choice: do you want to work on in-house software or one-off businessy projects of that kind, or do you prefer a software company?

Let's proceed to McKenzie's advice to in-house programmers – which should in itself help one make that choice.

How to call yourself

One such piece of advice is:

Don't call yourself a programmer. “Programmer” sounds like “anomalously high-cost peon who types some mumbo-jumbo into some other mumbo-jumbo.” Instead, describe yourself by what you have accomplished for previous employers vis-a-vis increasing revenues or reducing costs.

Sure – an in-house programmer is likely doing some type of expensive mumbo-jumbo in the eyes of his non-technical MBA-wielding manager.

To me, however, a programmer is who I'm looking for, while a resume full of revenue increases and cost reductions sounds like an "anomalously high-cost parasite who types some mumbo-jumbo into Excel and PowerPoint, claiming credit for others' work".

McKenzie says a software company looks at this just like a company hiring internal programmers, essentially. His example is "the guy who wrote the backend billing code that 97% of Google’s revenue passes through – he’s now an angel investor". The guy apparently got rich by being near a "profit center" rather than through his unusual skills.

The thing is, in this case I believe he's talking about Ron Garret, the PhD from NASA's Jet Propulsion Laboratory. Do you think they hired him because he described his work at the JPL in terms of revenues and costs? (BTW he didn't like working on the billing code, bought his stock options and quit, instead of choosing a career at the company's biggest "profit center".)

Did any unusual skills go into the billing code? Ron Garret says:

I did end up writing the credit card billing and accounting system, which is a nontrivial thing to get right. Fortunately for me, just before coming to Google I had taken some time to study computer security and cryptography, so I was actually well prepared for that particular task. …I designed the billing system to be secure against even a dishonest employee with root access (which is not such an easy thing to do). I have no idea if they are still using my system, but if they are then I'd feel pretty confident that my credit card number was not going to get stolen.

Sounds to me that his technical knowledge and programming ability were the bulk of his contribution, whereas deep thoughts such as realizing that there will be some "cost reduction" due to not having credit card numbers stolen are not something an employer needs to hire anyone for.

So if I ever send out a resume as a chip architect, I will focus on my technical role in transitioning from fixed-function hardware accelerators to programmable processors, more than on the manpower this saved and the business we won as a result (which I think were real outcomes of our work, but which are rather hard to quantify – as these things often are, unless you're a business-friendly-sounding liar.)

Incidentally, I'm not sure when I'll send out that resume, which brings us to the next point.

On job hopping, backstabbing, and the lack thereof

Co-workers and bosses are not usually your friends: You will spend a lot of time with co-workers.  You may eventually become close friends with some of them, but in general, you will move on in three years…

<your boss will> attempt to do things that none of your actual friends would ever do, like try to talk you down several thousand dollars in salary or guilt-trip you into spending more time with the company when you could be spending time with your actual friends.  You will have other coworkers who — affably and ethically — will suggest things which go against your interests…

There is a certain internal consistency to a view that your coworkers are not your friends, because you will move on in 3 years. In fact, it's a bit circular. They aren't your friends – because you'll move on. And why will you move on? Well, I dunno, maybe for a 10% salary increase. What's there to lose? Relationships with coworkers? But coworkers aren't your friends!

Again, I don't disagree, but rather offer an alternative view, equally internally consistent. I have stayed at one job for more than a decade, in large part because I'm rather attached to the people I work with. To be sure, I got raises, and I was ready to quit over employment terms – but it'd take much more than 10%.

Isn't it just a quantitative difference in preferences – a 10% raise not being fundamentally different than, say, 100%? Well, sufficiently large quantitative changes add up to qualitative changes, as Marxian dialectics or some other Soviet philosophy thingie that my parents sometimes quote taught us. What's going on is that both approaches can lead to career advancement, but they do so very differently.

If you're willing to change jobs over a small raise, you'll be changing them frequently. You won't get attached to people, or to the work you're doing together. You will be very good at finding jobs and you will know what's generally going on in the industry and what's in demand. You will not know that many things specific to any of your employers. You and your employer will become very useful to each other fairly quickly, but you'll also be somewhat expendable for each other.

Alternatively, you can keep a job as long as it's a fun environment, requiring a significant raise once in a while. Your relationships with people combined with your long-term outlook can let you do things together that you otherwise couldn't plan or execute, and learn things you wouldn't have learned.

Much of my knowledge about chip design comes from ASIC hackers I worked with, and their willingness to develop their biggest ideas together with me came from trust that necessarily took time to build. It takes time to learn that none of you is in the habit of "suggesting things going against the other's interest", or pulling other unfriendly shenanigans.

Incidentally, if you stay at one place for a long while, then your worth to the employer grows to the point where you can get the significant raise that you'd quit over without actually quitting. Your worth can also grow well above what employers are willing to pay to experienced new hires, so there's no longer a point in switching jobs. This is somewhat analogous to becoming a consultant after having switched a whole lot of jobs and now making more than the next job hop could give you.

Both approaches work, though I don't have stats showing which tends to be more effective. I do believe that the long-term approach is more fun. I could never land the kind of gig that I have now through job hopping. More importantly, I wouldn't have the relationships that I have at work.

"More importantly", because all means to reach our ends often fail, and then all we're left with is our means. You can't count on any career strategy to give you either a dream job or a load of money; it'll work to some extent or other but who knows. What you can count on is your lifestyle being affected rather predictably by your career choices. The impact of these choices on relationships could thus be weighted as more important than the impact on career advancement because it's more predictable.

The part about bosses is the only one I very much agree with. (I had enough bosses to be able to plausibly deny that I'm thinking about any particular one here.) Yes, some of them will want you to work more time for less money (by itself a natural desire for an employer) while attempting to look like your friends (which is where it becomes a tad irksome). This just means that you should guard your own interests (as always) – and perhaps not judge people too harshly before spending time in their shoes.

How to value an equity grant

McKenzie says you shouldn't value equity very much, and he doesn't spend many words saying it. I'll talk about stock options, which are worse than an actual equity grant and which are the only thing I've ever been offered.

My basic outlook is again long-term. I work at a private company whose value rose almost tenfold over the decade I've been there. And it's still a private company, so there's never been an easy way to make money off most of the stock options.

From a long-term view, stock options look worse – and better.

Worse, because having stock options ties your hands behind your back. You usually can't afford to buy them when you quit, or at least buying them is a significant risk that you might be reluctant to take. If the company survives for a long while, then you may start to dislike the place but the hope of making money off your stock options now makes it harder to quit. If you generally like the place, options make it harder to negotiate a raise, since they know you can't quit.

So in the long term, options can effectively be a liability.

On the other hand, as the company matures, its stock options tend to get undervalued by employees, and for no good reason. People intuitively think along the lines of, "it's already expensive – how much can the price rise from now on?" It's a natural thought if the price has gone up threefold or tenfold already.

But what this misses is that you don't get paid in percentage points – you get paid in dollars. A $100 share going up 20% to $120 means you make $20 per share. A $5 share going up 100% to $10 means you only make $5 per share. Stock options of a mature company whose price is still rising can thus be even nicer than stock options of a young company which rises more quickly but which is still cheap – and is more likely to go bust overnight.

The upshot is that people overvalue stock options early on – but they also often undervalue them later on.

Note that if you don't intend to stay for more than 3 years, then stock options are most certainly a liability, because they make it harder to quit – while the chances that the company makes it big in that span of time are very low.

Working at a startup

McKenzie lists valid reasons not to. In terms of job satisfaction, he says you can work on many exciting things in large corporations, not just startups.

Here's one thing in favor of startups. A large corporation usually doesn't have huge gaping holes that it doesn't know how to deal with or doesn't even notice. A startup often does have many such gaping holes, because, well, nothing is established yet, they don't even understand what they're doing, and most importantly, they are severely understaffed.

This means that you can grab pretty much any responsibility that you want to. There will be areas that people are competing to work on everywhere, but in a startup doing something hard enough, there will be a ton of hard problems nobody is competing to solve because there's not enough time or people for everything. You can be the person pointing out that problem and grabbing that responsibility.

As companies mature, being able to just work on whatever you want gets harder. My metaphor for it is nomadic programmers moving from problem to problem vs settlers with states and national borders where even visiting your neighbor's code may involve a visa.

This isn't a recommendation to work for startups, just one thing worth pointing out. The counterpoint is that if you're an orderly person who wants an orderly process, then a larger company known for its development culture is probably a better idea.

Impact of career on life happiness

At the end of the day, your life happiness will not be dominated by your career.

In one way, I agree wholeheartedly; whatever the merits of a job, it's a job, and I actually noticed my productivity fall at times of treating it as more important than that. The healthy way of looking at it is "just a job, at the end of the day".

On the other hand, we do spend quite some time at work. The question is, to what extent does it make sense to separate "work" from "life" – and to what extent it's one part of life among many, to be treated similarly to those other parts of life?

I argue that the "work/life" separation shouldn't be strong enough to separate "coworkers" into a distinct category of human beings with whom relationships are formed fundamentally differently – nor is it necessarily great to be emotionally detached from the workplace to be always ready to abandon it and "move on".

(I'm not arguing that McKenzie's intent was to say the exact opposite of what I'm saying, BTW. I'm just commenting on some quotes and the general atmosphere of the text as I perceived it. A lot of things simply have different meanings when heard by different people; simple advice like "be wary of others' intentions" is great for someone overly trusting, but not for someone already verging on paranoia. Some people need to hear that coworkers aren't friends; today I'm writing for the other people.)

Summary

When I introduce myself, I usually call myself a programmer, regardless of my current work on chip architecture and management and stuff. I got into programming for the money, so it's not like I'm overflowing with pride when uttering "programmer". I just think programming is a great career and the right thing to call myself for me.

There's an alternative approach where you program, but you don't call it that, and you use programming as a starting point from which you transition to some form of being involved in business as directly as possible.

It sounds a bit roundabout to me – why not just get an MBA instead? – but maybe it's the right path for some (especially considering that some prestigious MBA programs want you to have industry experience before you can even enroll.)

The important thing is to choose the path that suits your preferences, follow it consistently, and realize where your approach is most likely to succeed. Because where I work, someone applying for a programming position and not calling himself a programmer will not make a good impression.

I agree emphatically with many of the points in McKenzie's article – my favorite point is the importance of communication skills – and I very much recommend it.